Hacker News

borsch_not_soup · today at 1:36 AM · 0 replies

Interesting; I’ve always assumed neural network progress was primarily bottlenecked by compute.

If it turns out that LLM-like models can produce genuinely useful outputs on something as constrained as a Commodore 64—or even more convincingly, if someone manages to train a capable model within the limits of hardware from that era—it would suggest we may have left a lot of progress on the table. Not just in terms of efficiency, but in how we framed the problem space for decades.
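For scale, a quick back-of-envelope sketch (my own numbers, not the commenter's) of what "constrained as a Commodore 64" actually means for model size: with 8-bit quantized weights, one parameter costs one byte, so even a small multi-layer perceptron fits within the machine's 64 KB of RAM.

```python
# Back-of-envelope memory check for a tiny model on C64-class hardware.
# The layer sizes below are hypothetical, chosen only for illustration.
C64_RAM_BYTES = 64 * 1024  # total RAM; usable memory is less in practice

# A tiny MLP: 32 inputs -> 64 hidden units -> 32 outputs.
# With 8-bit quantization, each weight or bias takes 1 byte.
params = 32 * 64 + 64 + 64 * 32 + 32  # weights + biases per layer

print(params)                    # 4192 parameters, i.e. ~4 KB
print(params <= C64_RAM_BYTES)   # True: fits with room to spare
```

Inference for a network this size is cheap enough for a 1 MHz 8-bit CPU; it's the training loop (and anything LLM-scale) where the constraints bite.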