Right — it does seem cool but the voice is patching over a major gap. If I'm talking already, why wouldn't I just describe what I'm looking at and have the AI grab it for me?
I think they answer that question pretty convincingly: if what you're looking at is already on the screen, it's much easier to point to it and say "that" than to describe it.
(And if it's an abstract entity like a file, it might not even be possible to describe it, short of rattling off the entire file path)