The nice thing is that, unlike Cloudflare or AWS, you can actually host good LLMs locally. I see a future where a non-trivial percentage of devs have an expensive workstation that runs all of their AI locally.
I'd imagine at some point the companies will just... stop publishing any open models precisely to stop that and keep people paying the subscription.
I’m fairly sure you can also still run computers locally and connect them to the Internet.
What's the best you can do hosting an LLM locally for a given budget, say $5000? Is there a reference guide online for this? Is there a straight answer, or does it depend? I've looked at the Nvidia Spark and high-end professional GPUs, but they all seem to have serious drawbacks.
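There is no single straight answer, but the usual starting point is matching model size and quantization to VRAM. A rough sketch of that arithmetic (the constants here, 2 GB of KV cache and a 1.2x runtime overhead factor, are illustrative assumptions, not vendor specs):

```python
# Back-of-envelope VRAM estimate for running an LLM locally.
# All constants are illustrative assumptions, not measured values.

def vram_gb(params_b: float, bytes_per_param: float,
            kv_cache_gb: float = 2.0, overhead: float = 1.2) -> float:
    """Rough VRAM needed: weights at the chosen quantization width,
    plus KV cache, times a fudge factor for activations/runtime overhead."""
    weights_gb = params_b * bytes_per_param  # 1B params at 1 byte/param ~= 1 GB
    return (weights_gb + kv_cache_gb) * overhead

# A 70B-parameter model at 4-bit quantization (~0.5 bytes/param):
print(round(vram_gb(70, 0.5), 1))  # ~44 GB, i.e. two 24 GB consumer GPUs
```

So at a $5000 budget the trade-off is roughly: a pair of 24 GB consumer cards fits a quantized 70B model, while unified-memory machines fit larger models but with lower bandwidth, which is why it "depends" on whether you care more about model size or tokens per second.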
I think it's possible, but the current trend is that by the time you can run X-level models at home, the frontier models are 10-100x beyond that. So if you can run today's Claude.ai at home, software engineering as a career is already over.
That's the only future of open source that I can see.
I'm more and more convinced of the importance of this.
There is a very interesting thing happening right now: the "LLM over-promisers" are incentivized to over-promise for all the normal reasons, but ALSO to create the perception that the next/soon breakthrough will only be applicable when run on huge cloud infra, such that running locally is never going to be all that useful. I tend to think that will prove wildly wrong, and that we will very soon arrive at a world where state-of-the-art LLM workloads run massively more efficiently than they currently do, to the point of not even being the bottleneck of the workflows that use them. Additionally, these workloads will be viable to run locally on common current_year consumer-level hardware.
"LLMs are about to be general intelligence, and a sufficient LLM can never run locally" is a highly temporary state of affairs that should soon be falsifiable, imo. I don't think the LLM part of the "AI computation" will be the perf bottleneck for long.