
alphazard · last Friday at 8:44 PM

When explaining LLMs to people, it's often the high-level architecture they find most interesting: not the transformer itself, but the token-by-token prediction strategy (autoregression), and the fact that the model doesn't always choose the most likely token, but instead samples one with probability proportional to its predicted likelihood.
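A minimal sketch of that decoding loop, with a toy bigram model standing in for the real network (the model here is just an illustrative assumption; the point is the loop, which appends one sampled token at a time and feeds the result back in):

    import random
    from collections import Counter, defaultdict

    # Toy stand-in for a real LLM: bigram counts over a tiny corpus.
    corpus = "the cat sat on the mat the cat ran".split()
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def next_token_probs(token):
        c = counts[token]
        total = sum(c.values())
        return {t: n / total for t, n in c.items()}

    def generate(start, max_tokens=10):
        tokens = [start]
        for _ in range(max_tokens):
            probs = next_token_probs(tokens[-1])
            if not probs:
                break
            words, weights = zip(*probs.items())
            # Sample proportionally to probability instead of always
            # taking the argmax (which would be greedy decoding).
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the"))

Swap `next_token_probs` for a transformer's forward pass and this is, at a high level, what generation looks like.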

The minutiae of how next-token prediction works are rarely appreciated by lay people. They don't care about dot products, or embeddings, or any of it. There's basically no advantage to explaining how that part works, since most people won't understand, retain, or appreciate it.