Hacker News

crystal_revenge last Friday at 9:24 PM

This is where I do wish we had more people working on the theoretical CS side of things in this space.

Once you recognize that all ML techniques, including LLMs, are fundamentally compression techniques, you should be able to come up with some estimates of the minimum feasible size of an LLM based on: the amount of information that can be encoded in a given number of parameters, the relationship between information loss and model performance, and the amount of information contained in the original data set.

I simultaneously believe LLMs are bigger than they need to be, but suspect they need to be larger than most people think, given that you are trying to store a fantastically large amount of information. Even with lossy compression (which, ironically, is what makes LLMs "generalize"), we're still talking about an enormous corpus of data we're trying to represent.
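To make that concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption, not a measurement: bits_per_token stands in for the entropy rate of the training text after compression, bits_per_param for how much information a single parameter can usefully store, and retained_fraction for how much of the corpus's information a useful model actually has to keep.

    # Back-of-envelope sketch: rough lower bound on parameter count needed to
    # retain some fraction of the information in a training corpus.
    # All default values are illustrative assumptions, not measured quantities.

    def min_params(corpus_tokens: float,
                   bits_per_token: float = 1.0,      # assumed entropy rate of text (bits/token)
                   bits_per_param: float = 2.0,      # assumed usable storage per parameter (bits)
                   retained_fraction: float = 0.1):  # assumed share of corpus information retained
        """Estimate the minimum number of parameters needed to store
        retained_fraction of the information in a corpus of corpus_tokens tokens."""
        corpus_bits = corpus_tokens * bits_per_token
        return corpus_bits * retained_fraction / bits_per_param

    # Example: a hypothetical 10-trillion-token corpus under the assumptions above.
    n = min_params(10e12)
    print(f"~{n / 1e9:.0f}B parameters")  # ~500B under these made-up numbers

The point of the sketch isn't the specific output; it's that every term in it is something theoretical CS could in principle pin down more tightly.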


Replies

sfpotter last Friday at 9:34 PM

Getting theoretical results along these lines that can be operationalized meaningfully is... really hard.