I've been doing something similar to this in a personal claude code frontend, though not particularly "magical".
I'm mostly using my system to make comments on long AI-generated documents (especially design documents). I find it works well to have the AI generate something, and then I read through it, making comments along the way.
You can get pretty far just repeating the things you see... "I'm reading [heading] and [comments]". But I do find some use in selecting content and saying "I don't agree with this" or whatever else.
The result is just an augmented message. It looks like:
<transcript>
Let's see what we've got here.
<selection doc="proposal.md" location="paragraph 3">
The system already...
</selection>
No, I don't like how this is approaching the problem, ...
</transcript>
Then I just send this as a user message. Claude Code (and I'm guessing any of the agentic systems) picks up on the markup very easily. It also helps to label it as a transcript, so the model understands there may be errors, and that things like spelling and punctuation are inferred rather than deliberate. (Some additional instruction is necessary to help it understand, for example, that it should look for homophones that might make more sense in context.)
It makes reviewing feel pretty relaxed and natural. I've played around with similar note-taking systems, which I think could be great for studying in school, but haven't had the focus on that particular problem to take it very far.
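The assembly itself is trivial; a minimal sketch (helper names are made up, not from my actual frontend) looks something like:

```python
# Hypothetical helpers for building the augmented user message:
# dictated remarks interleaved with <selection> blocks for content
# the reviewer highlighted while reading.

def make_selection(doc: str, location: str, excerpt: str) -> str:
    """Wrap a highlighted excerpt in a <selection> tag."""
    return f'<selection doc="{doc}" location="{location}">\n{excerpt}\n</selection>'

def make_transcript_message(parts: list[str]) -> str:
    """Join spoken remarks and selections into one <transcript> block."""
    body = "\n".join(parts)
    return f"<transcript>\n{body}\n</transcript>"

message = make_transcript_message([
    "Let's see what we've got here.",
    make_selection("proposal.md", "paragraph 3", "The system already..."),
    "No, I don't like how this is approaching the problem, ...",
])
```

The result is just a string sent as a normal user message; no special API support is needed.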
But I think the best thing really is giving the agent a richer understanding of what the user is experiencing and doing and just creating a rich representation of that. The keywords can be useful, but almost only as checkpoints: a keyword can identify the moment to take the transcript and package it up and deliver it.
One difference perhaps in design motivation: I have really embraced long-latency interactions. I use ChatGPT with extended thinking by default, and just suck it up when the answer doesn't really require thinking. I deliver 10 points of feedback at once instead of little by little. (Often halfway through I explicitly contradict myself, because I'm thinking out loud and my ideas are developing.) I just don't stress out about latency, and so low-latency but lower-intelligence interactions don't do it for me (such as ChatGPT's advanced voice mode, or probably Thinking Machines' work). I think this focus is in part a value statement: I'm trying to do higher quality work, not faster work.