> Would you trust a Cursor review of Claude-written code more, less, or the same as a Cursor review of Cursor-written code?
You're assuming models/prompts will defend a previous iteration of their own work as correct. They don't. Models try to follow instructions, so if you ask them to find issues, they will. 'Trust' is a human problem, not a model/harness problem.
> Our view is that code validation will be completely autonomous in the medium term.
If reviews are going to be autonomous, they'll be part of the coding agent. Nobody would see review as an independent activity, as you mentioned above.
> Our first step towards making this easier is a native Claude Code plugin.
Claude can review code based on a specific set of instructions/context in an MD file. An additional plugin is unnecessary.
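For instance, here's a minimal sketch of what I mean (the file path and checklist are illustrative, not an official format):

```markdown
<!-- .claude/commands/review.md — hypothetical review-instructions file -->
Review the diff between the current branch and main.

Focus on:
- Logic errors and unhandled edge cases
- Security issues (injection, leaked secrets, unsafe input handling)
- Deviations from the conventions in CONTRIBUTING.md

For each finding, report the file, line, severity, and a suggested fix.
```

That's the whole integration: a Markdown file in the repo, no plugin required.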
My view is that to operate in this space, you've got to build a coding agent or get acquired by one. The writing was on the wall a year ago.