My current stance on reviewing code is: it's not OK to make another human review code you made with AI. If you used AI, then you're the reviewer. So unless you come to me with a well-defined question or decision to make, just merge it and take responsibility.
Obviously that only works in a high-trust environment, which is why open source suffers so much from AI submissions.