> Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.
This really weakens the point of the post. It strikes me as: we just don't like those AIs. Bergamot's model is no more or less of a black box than an LLM, and its behavior is no more or less auditable. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
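For what it's worth, here is a minimal sketch of what "digging into" a locally run model can look like, assuming the Hugging Face transformers library and a Llama-style checkpoint already downloaded to disk (the path below is purely illustrative):

```python
# Sketch: inspecting a locally downloaded model's weights and activations.
# Assumes the Hugging Face `transformers` library and a local Llama-style
# checkpoint; the path is a placeholder, not a specific recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/local/llama-7b"  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

# Every parameter is right there to look at.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# You can also hook a layer and watch its activations for a given input.
def report(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    print("first decoder layer output shape:", tuple(hidden.shape))

model.model.layers[0].register_forward_hook(report)
inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
```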
The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.
> but I don't understand the hate for LLMs.
It's mostly a knee-jerk reaction to having AI forced upon us from every direction, not just the ones that make sense.
To me it sounds like a reasonable "AI-conservative" position.
(It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)
Your tone is kind of ridiculous.
It’s insane this has to be pointed out to you, but here we go.
Hammers are the best: they can drive nails, break down walls, and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers, because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
You can't really dig into a model you don't control. At least by running it locally, you could in theory, if enough of it is exposed.
The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.
The latter point is less a technical nuance and more an emotional one, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... if that were the limit of how they added AI to the browser.
The local part is the important part here. If we get consumer-level hardware that can run general LLMs, where we can actually monitor locally what goes in and what goes out, then it meets the privacy needs/wants of power users.
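As a rough sketch of what "monitor locally what goes in and what goes out" could look like, assuming the Hugging Face transformers library and a local model path (both just stand-ins for whatever setup you actually run):

```python
# Sketch: fully local inference where every input and output passes through
# code you control, so it can be logged and audited on your own machine.
# Assumes the Hugging Face `transformers` library; the model path is illustrative.
import json
import time
from transformers import pipeline

generator = pipeline("text-generation", model="path/to/local/model")

def local_generate(prompt: str) -> str:
    result = generator(prompt, max_new_tokens=64)[0]["generated_text"]
    # Everything that went in and came out, recorded locally; nothing
    # leaves the machine unless you send it somewhere yourself.
    with open("inference_log.jsonl", "a") as log:
        log.write(json.dumps({"ts": time.time(), "in": prompt, "out": result}) + "\n")
    return result

print(local_generate("Summarize this page for me:"))
```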
My take is that I'm ok with anything a company wants to do with their product EXCEPT when they make it opt-out or impossible to opt out of.
Firefox could have an entire section dedicated to torturing digital puppies built into the platform and... OK, well, that's too far, but they could have a Costco warehouse full of AI crap and I wouldn't mind at all, as long as it was off by default and preferably not even downloaded to the system unless I went in and chose to download it.
I know respecting user preference doesn't line their pockets, but neither does chasing users down and shoving services they never asked for and explicitly do not want into their faces.
Translation AI, though, has provable behavior cases: round-tripping.
An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.
No such example, or even test, exists as far as I know for any of the summary or search AIs, since they expressly lose data in processing (I suppose you could construct multiple texts with the same meaning and see if they summarize equivalently - but it's certainly far harder to prove anything).
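To make the round-trip idea concrete, here is a rough sketch (not a rigorous test), assuming some local engine such as Bergamot wrapped in a hypothetical translate(text, src, dst) function, with a crude word-overlap score standing in for a real similarity measure:

```python
# Sketch of a round-trip consistency check for a translation model.
# `translate` is a hypothetical wrapper around a local engine such as Bergamot;
# the similarity here is a crude token-overlap score, just to illustrate the idea.
def round_trip_score(text: str, src: str, dst: str, translate) -> float:
    forward = translate(text, src, dst)
    back = translate(forward, dst, src)
    original_tokens = set(text.lower().split())
    back_tokens = set(back.lower().split())
    if not original_tokens:
        return 1.0
    # Fraction of original words that survive the src -> dst -> src round trip.
    return len(original_tokens & back_tokens) / len(original_tokens)

# Usage, with any translate(text, src, dst) -> str callable:
# score = round_trip_score("The cat sat on the mat.", "en", "de", translate)
# assert score > 0.8, "round trip lost too much content"
```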
I think the author was close to something here but messed up the landing.
To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.
It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.
It's not necessarily close-minded to choose to abstain from interacting with generative text, and to choose not to use software that integrates it.
I could say it's equally close-minded not to sympathize with this position, or with the various reasons behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic). It's like that family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.