Hacker News

volemo · today at 4:23 PM

> moving forward, as the information density and architectural efficiency of smaller models continue to increase

If they continue to increase.


Replies

vessenes · today at 5:00 PM

They will. Either new architectures will come out that give us greater efficiency, or we will hit a point where the main lever left is pouring more training time into the same weights to extract more capability per byte. Something similar is already happening organically with efficient token use; see for instance https://github.com/qlabs-eng/slowrun.

simopa · today at 5:50 PM

The "if" is fair. But when scaling hits diminishing returns, the field is forced to look at architectures with better capacity-per-parameter tradeoffs. It's happened before, maybe it'll happen again now.