Hacker News

Reimagining the mouse pointer for the AI era

61 points by devhouse today at 5:40 PM | 52 comments

Comments

ianbicking today at 8:50 PM

I've been doing something similar to this in a personal claude code frontend, though not particularly "magical".

I'm mostly using my system to make comments on long AI-generated documents (especially design documents). I find it works well to have the AI generate something, and then I read through it, making comments along the way.

You can get pretty far just repeating the things you see... "I'm reading [heading] and [comments]". But I do find some use in selecting content and saying "I don't agree with this" or whatever else.

The result is just an augmented message. It looks like:

    <transcript>
      Let's see what we've got here.
      <selection doc="proposal.md" location="paragraph 3">
        The system already...
      </selection>
      No, I don't like how this is approaching the problem, ...
    </transcript>
Then I just send this as a user message. Claude Code (and I'm guessing any of the agentic systems) picks up on the markup very easily. It also helps to label it as a transcript, as it then understands there may be errors, and that things like spelling and punctuation are inferred, not deliberate. (Some additional instruction is necessary to help it understand, for example, that it should look for homophones that might make more sense in context.)
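To make the shape concrete, here's a minimal sketch of how such a transcript could be assembled before sending. The type and function names are purely illustrative, not my actual code:

    // Ordered mix of dictated speech and captured selections.
    type DocSelection = { kind: "selection"; doc: string; location: string; text: string };
    type Speech = { kind: "speech"; text: string };
    type TranscriptEvent = DocSelection | Speech;

    // Serialize the events into the <transcript>/<selection> markup shown above.
    function buildTranscript(events: TranscriptEvent[]): string {
      const lines: string[] = ["<transcript>"];
      for (const ev of events) {
        if (ev.kind === "selection") {
          lines.push(`  <selection doc="${ev.doc}" location="${ev.location}">`);
          lines.push(`    ${ev.text}`);
          lines.push("  </selection>");
        } else {
          lines.push(`  ${ev.text}`);
        }
      }
      lines.push("</transcript>");
      return lines.join("\n");
    }

The result just goes out as an ordinary user message, with a note in the instructions that it's a voice transcript and may contain mis-recognized words.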

It makes reviewing feel pretty relaxed and natural. I've played around with similar note-taking systems, which I think could be great for studying in school, but haven't had the focus on that particular problem to take it very far.

But I think the best thing really is giving the agent a richer understanding of what the user is experiencing and doing and just creating a rich representation of that. The keywords can be useful, but almost only as checkpoints: a keyword can identify the moment to take the transcript and package it up and deliver it.
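Continuing the sketch above, the keyword-as-checkpoint idea is just a trigger that packages up the running transcript and delivers it; sendUserMessage and the trigger phrases here are purely illustrative:

    // Hypothetical delivery hook; in practice this is whatever sends a user message.
    declare function sendUserMessage(message: string): void;

    function onSpeechChunk(chunk: string, pending: TranscriptEvent[]): void {
      pending.push({ kind: "speech", text: chunk });
      if (/\b(send that|go ahead)\b/i.test(chunk)) {  // keyword = checkpoint
        sendUserMessage(buildTranscript(pending));    // package and deliver
        pending.length = 0;                           // start a fresh transcript
      }
    }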

One difference perhaps in design motivation: I have really embraced long-latency interactions. I use ChatGPT with extended thinking by default, and just suck it up when the answer didn't really require thinking. I deliver 10 points of feedback at once instead of little by little. (Often halfway through I explicitly contradict myself, because I'm thinking out loud and my ideas are developing.) I just don't stress out about latency or feedback, and so low-latency but lower-intelligence interactions don't do it for me (such as ChatGPT's advanced voice mode, or probably Thinking Machines' work). I think this focus is in part a value statement: I'm trying to do higher quality work, not faster work.

1970-01-01 today at 8:52 PM

How about you give me my normal white cursor, and an "AI enhanced" orange cursor only when I'm doing AI things? To use their words, that would be "intuitive AI that meets users across all the tools they use, without interrupting their flow".

arjie today at 7:24 PM

Oh interesting, this is very cool. At first I thought it was just focus-follows-mouse, but it's more interesting than that. You have certain keywords trigger "add to prompt". Ignoring the voice functionality (which is admittedly crucial for now, because other inputs currently take over focus), I've often wanted to just have a continuous conversation with the LLM as I 'point and click' (or tab over and select) at various things. Might be neat to have text input focus continue to go to the LLM wherever I'm typing text, etc.

Sometimes I go to a different page to take a screenshot, other times I'm browsing for a file, and other times I'm highlighting some log lines. Cursor did this well: selecting text in the terminal auto-focused the Cursor agent textbox, so you could talk to the agent, then select some more text, without having to re-select the agent textbox each time. The agent is a top-level function in that system, not "just another app I have to switch to" to carry my context along.
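As a rough illustration of that pattern (a guess at the shape, not how Cursor actually implements it; the element id is made up):

    // Any text selection gets staged as agent context, and the agent input grabs focus,
    // so the selection travels with whatever you say next.
    const agentInput = document.querySelector<HTMLTextAreaElement>("#agent-input")!;
    const stagedContext: string[] = [];

    document.addEventListener("mouseup", () => {
      const selected = window.getSelection()?.toString().trim();
      if (!selected) return;
      stagedContext.push(selected);   // context rides along with the next message
      agentInput.focus();             // no need to re-click the agent textbox
      agentInput.placeholder = `${stagedContext.length} selection(s) attached`;
    });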

I have some small amount of bias because I've always felt input-constrained on computers. I have to move my hands to go places and that's exasperating. I've tried head tracking, had a vim pedal for a while, and used tiling WMs and things like this to help, but while my vim-fu is pretty good and I work inside individual apps very well with it, my cross-application interface isn't nearly as good.

In the end, perhaps we all have our home offices with our Apple Vision Pros and we talk to them like this to maneuver faster through our machines and get our ideas into them.

Cool research. I wonder what we'll end up with.

why_at today at 7:08 PM

My first impression coming away from this is skepticism.

Anything with voice controls for routine use is a pretty tough sell. Doing this when you're not completely alone would be annoying to everyone around you.

Most of their examples seem like they could have been done with a right-click drop-down menu, so they don't really need to "re-invent the mouse pointer".

So is this thing talking to Google's servers all the time for the AI integration? So it won't work if you're not connected to the internet? Privacy concerns are obvious; now Google wants to have an AI watching literally everything you do on your computer?

Does it cost the user anything for the LLM use? If it's free, will it stay free forever? That's quite a lot to give away if they're expecting people to use it to change a single word, like in one of their examples. I guess they're expecting to make the money back by gathering data about literally everything you do on your computer.

There might be a killer app for AI integration with personal computers that has yet to be invented, but this doesn't look like it.

chromacity today at 7:39 PM

My reaction to the first demo (recipe) is that it was slower than typing the same thing on your keyboard.

The second demo seems to be a wash: there's no time saved in saying "move this" versus "move crab". And an app-specific contextual menu would probably be faster.

The third demo doesn't seem to warrant the use of a pointer at all, since there is only one way to interpret the prompt.

None of this means that this approach will not be successful, but there's a reason why so many attempts to revolutionize user interfaces ended up going nowhere. Talking to your computer was always supposed to be the future, but in practice, it's slower and more finicky than typing.

In fact, the only new UI paradigm of the past 28+ years appears to have been touchscreens and swipe gestures on phones. But they are a matter of necessity. No one wants to finger-paint on a desktop screen.

kjellsbells today at 6:59 PM

I sense a privacy problem brewing.

It reminds me of Microsoft Recall in the sense that some portion of the screen is going to be continuously transmitted outside of the user's control.

What happens when someone browses something very private (planning a surprise engagement, looking at medical data, planning a protest)? All that data gets slurped to Google and becomes subject to a warrant or discovery, or goes into building your advertising fingerprint.

Maybe the idea is that the data is sent to the AI only when you right-click, but that seems like a very thin firewall that a product manager will breach in the interest of delivering "predictive AI" via some kind of precomputed results.

gobdovan today at 7:56 PM

This is how I always imagined FE development would work once ChatGPT 3 came out. Then Cursor appeared, and seeing how successful they were with just a chat and a few tool calls, I thought I was over-complicating things.

Anyway, I built a prototype on this idea, but instead of relying only on hover, I press Option to select a node in a custom AST-ish semantic layer I designed around a minimalist UI grammar, and Option + up/down arrows to move to the parent/child node. This way, I have an accurate pointer to the element I want to talk about, plus a minimal context window (parent component, state, a few navigation-related queries).

What I learned from using it, though, is that the killer use case isn't necessarily the flashy "talk to this UI element" interaction shown in the Google demos. I do use it that way too; I have `Option + Shift + click` to copy a selector to the clipboard, so I can give an LLM connected to the live medium a precise reference to the element I want to discuss.

But the place where it has been most useful day to day is much simpler: source navigation. Point at the thing in the UI, jump to the code that is responsible for it. The difficult part is jumping to the code you actually care about (the code for the UI or for the semantic element?), but in my system that distinction usually turned out to be obvious, which is what makes the interaction useful.
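For a sense of the interaction, here's a heavily simplified sketch in plain DOM terms; the real version goes through the semantic layer, so the handlers and selector logic below are just stand-ins:

    // Option/Alt + click selects the node under the pointer, Option + arrows walk
    // parent/child, and Option + Shift + click copies a selector to the clipboard.
    let selectedNode: Element | null = null;

    function selectorFor(el: Element): string {
      // Toy selector; a real one would need to be much more robust.
      return el.id ? `#${el.id}` : el.tagName.toLowerCase();
    }

    document.addEventListener("click", (e) => {
      if (!e.altKey) return;
      const el = e.target as Element;
      if (e.shiftKey) {
        navigator.clipboard.writeText(selectorFor(el)); // hand the reference to the LLM
      } else {
        selectedNode = el;                              // the "accurate pointer" to talk about
      }
    });

    document.addEventListener("keydown", (e) => {
      if (!e.altKey || !selectedNode) return;
      if (e.key === "ArrowUp" && selectedNode.parentElement) selectedNode = selectedNode.parentElement;
      if (e.key === "ArrowDown" && selectedNode.firstElementChild) selectedNode = selectedNode.firstElementChild;
    });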

jpatten today at 6:57 PM

Reminds me of Put That There: https://m.youtube.com/watch?v=RyBEUyEtxQo

juancn today at 7:27 PM

Please don't.

I like text selection exactly how it is. I want precise controls.

It's fine for a touch interface like a phone, but on a computer I expect precision. As much as I can get.

nolist_policy today at 6:55 PM

Wiggle at CAPTCHAs, wiggle at Termux, wiggle at Emacs, wiggle at the Godot Editor, wiggle at my remote desktop.

(Not going to happen)

loaderchips today at 6:20 PM

It's beautiful how the human mind can take something very obvious but overlooked and make it into this fantastic innovation. Fab stuff.

tintor today at 6:46 PM

Of course, it isn't a Google demo if you can't use it to book a table at a restaurant (shown at the bottom of the page).

dandaka today at 7:40 PM

The next generation of OSes should have constant video and audio recognition by an on-device LLM. This would provide valuable context for a lot of scenarios: instead of the frequent copy-pasting we're used to, we could let agents access the context of our whole workflow across different apps.

But Google is a very ill-positioned candidate for such an OS. I would rather trust Apple and local-first, on-device models.

maheenaslam today at 7:34 PM

The concept is good, but accuracy in a cluttered environment can be a concern, and misinterpreting context can be a problem.

AbuAssar today at 6:16 PM

So will Google be monitoring whatever is on the screen continuously, or only when the user says the magic words (this, that, here, there)?

jaccola today at 6:51 PM

This seems like one of those things that is used infrequently enough to be forgotten/poorly developed/never used (even before accounting for the actual failure rate of the LLM, which will be non-zero).

Perhaps a text box and file upload isn't the perfect interface for every use case, but it is versatile, and that versatility is a huge barrier to overcome.

hmokiguess today at 7:17 PM

Don't build these things; instead, build protocols and expose system-level APIs for application developers to build things with.

iridione today at 6:36 PM

Interesting! I wonder how UI will evolve in the long term. If there are browser-use/computer-use agents and clicky clones automating pointer actions, do we really need complex UI anymore? If yes, when?

strgrd today at 6:20 PM

No thanks

SirFatty today at 6:24 PM

It only took Google and their AI offering to come up with Graffiti.

mcookly today at 6:44 PM

I wonder what sort of monstrous power would be unleashed if Google used Plan9 as a foundation.

xiphias2 today at 7:33 PM

Google needs to beat OpenAI and Anthropic in coding models because that's where the big money is going. I love using the Gemini Pro model for quick questions, but that's not where I'm spending the real money.

They have so many great software engineers but seem unable to use them to speed up coding-AI research. Hopefully with Sergey's focus it will get better.

This cursor thing is just another experiment nobody cares about.

mvdtnz today at 6:37 PM

Both of the text-based demos would have been simpler and faster with traditional mouse and keyboard interactions. What is the AI adding?

Joker_vD today at 7:14 PM

Just seven hours ago there was a plea on HN [0] to please not do this. Seriously, what are they smoking at Google right now?

[0] https://news.ycombinator.com/item?id=48107027

jinkuan today at 6:52 PM

Being able to make precise edits would be huge for AI.

LocalH today at 6:51 PM

do not want

simondw today at 6:59 PM

Maybe I'm misunderstanding, but what is new about the pointer itself? Seems to be functionally the same as selecting + tooltips / context menus.

OtomotO today at 6:53 PM

Like a dream come true...

Nightmares are dreams as well, and this is a nightmare, like Windows Recall.

Technically wonderful though.

themafia today at 6:26 PM

> We’ve been exploring new AI-powered capabilities to help the pointer not only understand what it’s pointing at, but also why it matters to the user.

We couldn't quite track you well enough before. So we're fixing that under the guise of "AI-powered capabilities."

pmarreck today at 7:16 PM

There's already a product that does this lol

Aaaaand now I can't remember the name of it

SirMaster today at 7:00 PM

Thanks, I hate it

brgsk today at 7:12 PM

What the hell is going on at Google?