You don't. I can guarantee that 90% of the generated code will never receive a detailed review, simply because there's too much cognitive overhead and too little time; everything moves too fast.
I remember having to do a code review like that in a highly complex component, back before AI, and it took a full day of work. These days, most of the people I know take maybe half an hour and are mostly scanning for obvious mistakes, when the bigger problem is the sneaky non-obvious ones.
Exactly. It's the same as reviewing somebody else's code. How many companies did this properly before LLMs came along? I know mine didn't. But these days people who aren't senior enough review LLM output, do a quick mental pass through the code, see that it works, and approve it.
What could work: the LLM creating a very good test suite for its own code changes and the overall app (as much as feasible), with those tests getting the hardcore review. Then the actual code review doesn't have to be that deep. But if everybody is shipping like there's no tomorrow, edge cases will start biting hard and often.
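To sketch what that "hardcore reviewed" test suite might look like: the function and the cases below are hypothetical, purely to illustrate the idea that the tests, not the implementation, are where the reviewer's attention goes, and that each assertion should pin down an edge case rather than just the happy path.

```python
# Hypothetical LLM-written function under test: clamps a 1-based page
# index into a valid range for pagination. The names here are made up
# for illustration only.

def clamp_page(page: int, total_pages: int) -> int:
    """Clamp a 1-based page index into [1, total_pages]; 0 or fewer pages -> page 1."""
    if total_pages <= 0:
        return 1
    return max(1, min(page, total_pages))

# The tests are the part that deserves the deep review: each assert
# encodes one of the sneaky edge cases, not just the success path.
assert clamp_page(3, 10) == 3      # happy path
assert clamp_page(0, 10) == 1      # below range
assert clamp_page(-5, 10) == 1     # negative input
assert clamp_page(99, 10) == 10    # above range
assert clamp_page(1, 0) == 1       # empty result set
assert clamp_page(1, -1) == 1      # nonsense total, still safe
```

A reviewer who carefully signs off on the assertions can skim the implementation itself, because any regression in those edge cases fails loudly.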