
giancarlostoro · today at 1:19 PM

I wonder how long until some breakthrough produces a new architecture that runs efficiently and cheaply on basic hardware. That'd be the real AI bubble: being able to train and run inference locally at lower cost. Microsoft had one that is supposed to run fine on regular CPUs, though I'm not sure how far that approach can reasonably be taken. They say our brains can store around 2.5 PB, but we use drastically less "RAM" to reason about things (I can't find a good ballpark figure), so it makes you wonder just how efficient these systems could get. Our bodies use drastically less power, too.

https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
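For a sense of what makes that model CPU-friendly, here is a minimal sketch of the 1.58-bit (ternary) weight idea, assuming the absmean quantization described in the BitNet papers. The helper names are made up for illustration; the real thing ships as optimized kernels (e.g. bitnet.cpp), not Python.

```python
# Rough sketch of the 1.58-bit (ternary) weight idea: weights become
# {-1, 0, +1} plus one per-tensor scale, so matmuls reduce to cheap
# additions/subtractions. Illustrative only, not Microsoft's implementation.
import numpy as np

def quantize_ternary(w: np.ndarray):
    """Map full-precision weights to {-1, 0, +1} with a per-tensor scale."""
    scale = np.abs(w).mean() + 1e-8          # absmean scale
    q = np.clip(np.round(w / scale), -1, 1)  # ternary codes
    return q.astype(np.int8), scale

def ternary_matmul(x: np.ndarray, q: np.ndarray, scale: float):
    """Dequantize-and-multiply; a real kernel would avoid the multiply entirely."""
    return x @ (q * scale)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
x = rng.normal(size=(1, 256)).astype(np.float32)

q, s = quantize_ternary(w)
print("max abs error vs full precision:",
      np.abs(x @ w - ternary_matmul(x, q, s)).max())
```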


Replies

segmondy · today at 2:45 PM

How long? We already have that. Qwen3.6 has 35B/27B models that beat ChatGPT-4o, and you can run them at home on one GPU. DeepSeekV4 just came up with a new way to handle super long context with a KV cache an order of magnitude smaller than before. It's already happening!
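For a sense of why a smaller KV cache matters at long context, here is a rough back-of-envelope calculation (the layer/head/dimension numbers are placeholders, not any particular model's config):

```python
# Back-of-envelope KV-cache memory: 2x (keys and values) per layer, per
# KV head, per token. Dimensions below are illustrative placeholders.
def kv_cache_bytes(seq_len, n_layers=60, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):  # fp16/bf16
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for tokens in (8_000, 128_000, 1_000_000):
    full = kv_cache_bytes(tokens)
    print(f"{tokens:>9,} tokens: full cache ~{full / 2**30:6.1f} GiB, "
          f"10x compressed ~{full / 10 / 2**30:6.1f} GiB")
```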
