Hacker News

gambiting — yesterday at 10:16 PM

>>Imagine 50 years from now.

That's the whole point though - I can't, and I don't think anyone can. Right now LLMs are just getting bigger and bigger; we're brute-forcing our way out of their stupidity by feeding them bigger and bigger datasets. Unless something fundamental changes soon, that tech is headed for an actual dead end. Hence my (joke-ish) prediction that you'll eventually need a 16PB GPU to run a basic Gemini model, and such a thing will always be very expensive no matter how much our tech advances (especially since we are already hitting some physical limits). Human brains won't get any more expensive with time - they already contain all the hardware they are ever going to get - but what might get cheaper is the plumbing to make them "run" and interact with other systems.


Replies

throw310822 — yesterday at 11:09 PM

Yeah, well, we have a very different view on this - and I know there are two diametrically opposed camps, and I am in the awe-struck one. LLMs are getting bigger and bigger, and they're also getting much smarter, all in the space of a few years. They went from making up erratic articles about unicorns to writing complex PRs in codebases of millions of lines, solving math-olympiad-level problems, speaking fluently in tens or hundreds of languages, and exhibiting a breadth of knowledge that no human being possesses. Considering their size, they are monstrously efficient compared to the human brain. But anyway, this is a matter for a different discussion.