Hacker News

wolttam — yesterday at 4:58 PM

In 2023, GPT-4 was allegedly 1.8T parameters. In 2026 we have ~100x smaller models (10-20B) that handily outperform it and can indeed run on a laptop.


Replies

WanderPanda — yesterday at 6:43 PM

It highly depends on the task. For math and coding, sure. But for knowledge tasks, GPT-4 is way better than even SOTA ~100B models. For my knowledge test cases, the lines only get blurry at >400B.

rectang — yesterday at 6:11 PM

How does "outperform" account for an LLM's propensity to hallucinate?
