Why?
Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
Imagine the browser asks you at some point whether you want to hear about new features. The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN", "Please show me a summary once a month", and "Show timely, non-modal notifications at appropriate times".
Imagine you choose the second option, and at some point, it offers you a feature described as follows: "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title". Would you enable it?
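A rough sketch of that hypothetical pipeline, with the local-LLM calls and the incognito fetch stubbed out as callbacks. Every name here (`dearbait_headlines`, `classify`, `rewrite`, etc.) is made up for illustration; nothing like this exists in Firefox:

```python
# Hypothetical sketch of the described clickbait-rewriting feature.
# The LLM and network steps are passed in as callables so the control
# flow can be shown without inventing a real browser or model API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Overlay:
    original: str   # the clickbait headline as rendered on the page
    rewritten: str  # the proposed non-clickbait replacement title

def dearbait_headlines(
    headlines: list[str],
    classify: Callable[[str], bool],      # local LLM: clickbait or not?
    fetch_article: Callable[[str], str],  # incognito fetch of the article body
    rewrite: Callable[[str, str], str],   # local LLM: factual title from body
) -> list[Overlay]:
    """For each headline classified as clickbait, fetch the article and
    propose a non-clickbait title to show as a small overlay."""
    overlays = []
    for title in headlines:
        if classify(title):
            body = fetch_article(title)
            overlays.append(Overlay(title, rewrite(title, body)))
    return overlays

# Toy stand-ins for the LLM and the network, just to exercise the flow:
is_bait = lambda t: t.endswith("!") or "You Won't Believe" in t
fetch = lambda t: "article text"
rewrite = lambda t, body: t.rstrip("!").replace("You Won't Believe ", "")

print(dearbait_headlines(
    ["You Won't Believe What Happened!", "Fed holds rates steady"],
    is_bait, fetch, rewrite))
```

The point of the callback structure is that the expensive parts (classification, rewriting) could each run against a local model, while the fetch happens in an isolated session so the clickbait site sees no user state.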
> Imagine you have an AI button. When you click it, the locally running LLM
sure, you can imagine Firefox integrating a locally-running LLM if you want.
but meanwhile, in the real world [0]:
> In the next three years, that means investing in AI that reflects the Mozilla Manifesto. It means diversifying revenue beyond search.
if they were going to implement your imagination of a local LLM, there's no reason they'd be talking about "revenue" from LLMs.
but with ChatGPT integrating ads, they absolutely can get revenue by directing users there, the same way they get money from Google for putting Google's ads in front of Firefox users' eyeballs.
that's ultimately all this is. they're adding more ads to Firefox.
0: https://blog.mozilla.org/en/mozilla/leadership/mozillas-next...
>Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
but... why? I can read the website myself. That's why I'm on the website.
> When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
I'm also now imagining my GPU whirring into life and the accompanying sound of a jetplane getting ready for takeoff, as my battery suddenly starts draining visibly.
Local LLMs are a pipe dream: the technology fundamentally requires far too much computation for any true intelligence to ever make sense with current computing technologies.
That last one sounds like a lot of churn and resources for little result. You're not really making it sound compelling compared to just blocking clickbait sites with a normal extension. And it could also be an extension users install and configure, so why a pop-up offering it to me, and why build it into the browser that directly?
> The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN"
I've already hit that option before reading the other ones.
> "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title"
Why would you bother fetching the clickbait at all? It's spam.
The main transformation I want out of a browser, the absolutely critical one, is the removal of advertising. I concede that AI might be decent at removing ads and all the overlay clutter that makes news sites unreadable; does anyone have the demo of "AI readability mode"? Crucially I do not want it changing any non-ad text found on the page.
I like Firefox and, unlike many users here, don't think it's about to collapse, but I have already unchecked "Recommend features as you browse" and "Recommend extensions as you browse" along with setting the welcome page for updates to about:blank.
Ideally the user interface for any tool I use should never change unless I actively prompt it to change, and the only notifications I should get would be from my friends and family contacting me or calendars/alarms that I set myself.
> Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
They basically already have this feature: https://support.mozilla.org/en-US/kb/use-link-previews-firef...
Lots of imagining here.
For any mildly useful AI feature, there are hundreds of entirely dangerous ones. Either way I don't want the browser to have any AI features integrated, just like I don't want the OS to have them.
Especially since we know very well that they won't be locally running LLMs, everyone's plan is to siphon your data to their "cloud hybrid AI" to feed into the surveillance models (for ad personalization, and for selling to scammers, law enforcement and anyone else).
I'd prefer to have entirely separate, completely controlled, and firewalled solutions for any useful LLM scenarios.
> Imagine you have an AI button.
That pretty much sums up the problem: an "AI" button is about as useful to me as a "do stuff" button, or one of those red "that was easy" buttons they sell at Home Depot. Google translate has offered machine translation for 20+ years that is more or less adequate to understand text written in a language I don't read. Fine, add a button to do that. Mediocre page summaries? That can live in some submenu. "Agentic" things like booking flights for an upcoming trip? I would never trust an "AI" button to do that.
Machine learning can be useful for well-defined, low-consequence tasks. If you think an LLM is a robot butler, you're fundamentally misunderstanding what you're dealing with.
I have already clicked the all-caps button
>Why?
Do we have to re-tread three years of big-tech overreach, scams, user hostility in nearly every common program, questionable utility backed by hype more than results, and the way it's propping up the US economy's otherwise stagnant/weakening GDP?
I don't really have much new to add here. I've hated this "launch in alpha" mentality for nearly a decade. Calling 2022 "alpha" is already a huge stretch.
>When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
Why is this valuable? I spent my entire childhood reading, and my college years being able to research and navigate technical documents. I don't value auto-summarizations. Proper writing should be able to do this in its opening paragraphs.
>Imagine the browser asks you at some point, whether you want to hear about new features. The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN", "Please show me a summary once a month", "Show timely, non-modal notifications at appropriate times"
Yes, this is my "good enough" compromise, one that most applications are failing to offer. Let's hope for the best.
>Imagine you choose the second option, and at some point, it offers you a feature described as follows: "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title". Would you enable it?
No, probably not. I don't trust the powers behind such tools to identify what is "clickbait" for me. Grok shows that these are not impartial tools, and news is the last thing I want to outsource sentiment on without a lot of built-up trust.
meanwhile, trust has only corroded this decade.