Hacker News

Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)

38 points by ai_bot today at 9:07 AM | 27 comments

Interactive timeline of every major Large Language Model. Filterable by open/closed source, searchable, 54 organizations tracked.


Comments

NitpickLawyer today at 9:37 AM

Misses a few interesting early models: GPT-J (by EleutherAI, using the GPT-2 architecture) was the first-ish model runnable on consumer hardware. I actually had it running in prod with real users for a while. And GPT-NeoX was their attempt to scale to GPT-3 levels. It was 20B parameters, and maybe the first glimpse that local models might someday be usable (although "local" at the time was questionable, quantisation wasn't yet widely used, etc.).

Maro today at 12:11 PM

This would be interesting if each model had a high-level picture of its network, drawn to scale, perhaps with the components colour-coded somehow. On mouse scroll it would step through the models, and you could watch the networks become deeper and wider and the colours change, almost like an animation. That'd be cool.
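The core of that scroll-driven idea can be sketched as a small stepping function. This is only an illustration of the suggestion, not the site's actual code; `ModelSnapshot`, `nextModelIndex`, and the commented-out wiring are all hypothetical names.

```typescript
// Hypothetical shape for one entry in a timeline ordered by release date.
interface ModelSnapshot {
  name: string;
  depth: number; // layer count, for the "networks get deeper" effect
  width: number; // hidden size, for the "networks get wider" effect
}

// Pure stepping logic: positive wheel deltas advance the timeline,
// negative ones rewind it, and the index is clamped to the valid
// range so scrolling past either end is a no-op.
function nextModelIndex(current: number, deltaY: number, total: number): number {
  const step = deltaY > 0 ? 1 : deltaY < 0 ? -1 : 0;
  return Math.min(total - 1, Math.max(0, current + step));
}

// Browser wiring (shown as a comment since it needs a DOM):
// let i = 0;
// container.addEventListener("wheel", (e) => {
//   e.preventDefault();
//   i = nextModelIndex(i, e.deltaY, models.length);
//   render(models[i]); // redraw the scaled, colour-coded network
// });
```

Keeping the stepping logic pure like this makes the animation trivially testable, independent of how the network diagrams themselves get rendered.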

jvillasante today at 12:11 PM

Why is it so hard, in an era when AI itself could do it, to add a light mode to these all-black websites!? Some people simply can't read dark mode!

wobblywobbegong today at 2:02 PM

Calling this "the complete history of AI" seems wrong. LLMs are not all there is to AI, and AI has existed for far longer than people realize.

hmokiguess today at 12:41 PM

Would be nice to see some charts, and perhaps the average length of the release cycles, with a prediction of the next one based on it.

YetAnotherNick today at 1:19 PM

It misses almost every milestone, yet lists Llama 3.1 as one. T5 was a much bigger milestone than almost anything on the list.

varispeed today at 12:40 PM

The models behind apps like Codex: are they designed to mimic human behaviour, in that they deliberately introduce errors into code that you then have to spend time debugging and fixing? Or is it a natural flaw, and the fact that humans do the same is just a coincidence?

This keeps bothering me: why do they need several iterations to arrive at a correct solution instead of getting it right the first time? Prompts like "repeat solving it until it is correct" don't help.
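For what it's worth, the iteration isn't deliberate sabotage: model output is fallible sampling, so coding tools typically wrap generation in a propose-then-verify loop rather than trusting the first draft. A minimal sketch of that pattern, where `generate` and `passesTests` are hypothetical stand-ins for a model call and a test runner:

```typescript
type Attempt = { code: string; ok: boolean };

// Propose-then-verify loop: ask for code, check it, and feed the
// failure back as context for the next attempt, up to a fixed budget.
// Both callbacks are placeholders, not any real agent's API.
function repairLoop(
  generate: (feedback: string) => string,
  passesTests: (code: string) => boolean,
  maxAttempts: number
): Attempt {
  let feedback = "";
  let code = "";
  for (let i = 0; i < maxAttempts; i++) {
    code = generate(feedback);
    if (passesTests(code)) return { code, ok: true };
    feedback = `attempt ${i + 1} failed tests; fix and retry`;
  }
  return { code, ok: false }; // budget exhausted, best effort returned
}
```

The key point is that "repeat until correct" only works when there is an external check (a compiler, a test suite) in the loop; asking the model to merely re-read its own output gives it little new signal.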

EpicIvo today at 1:19 PM

Great site! I noticed a minor visual glitch: the tooltips seem to render below their container on the z-axis, so they can get clipped or hidden.
