Yes I agree with this, but the blog post makes a much more aggressive claim.
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.
Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.
The part that doesn't sit well with me is that Mozilla wants to egress data. The fact that it's an LLM doesn't really matter to me.
Exactly this. The black box in this case is a problem because it's not on my computer. It transfers the user's data to an external entity that can use that data to train its model or sell it.
Not everyone uses their browser just to surf social media; some people use it to create things, logging in to walled gardens to work creatively. They do not want that data sent to an AI company to train on and make them redundant.
Discussing the inner workings of an AI isn't helping; that is not what most people really worry about. Most people don't know how any of it works, but they do notice that people get fired because the AI can do their job.