I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.
I disagree that it’s fear mongering. Have we not had numerous articles on HN about data exfiltration in recent memory? Why would an LLM that is in the drivers seat of a browser (not talking about current feature status in Firefox wrt to sanitised data being interacted with) not have the same pitfalls?
Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser?
> [A] An agent can process untrustworthy inputs
> [B] An agent can have access to sensitive systems or private data
> [C] An agent can change state or communicate externally
https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...
Even if we weren’t talking about such malicious hypotheticals, hallucinations are a common occurrence as are CLI agents doing things it thinks best, sometimes to the detriment of the data it interacts with. I personally wouldn’t want my history being modified or deleted, same goes with passwords and the like.
It is a bit doomerist, I doubt it’ll have such broad permissions but it just doesn’t sit well which I suppose is the spirit of the article and the stance Waterfox takes.