Hacker News

andy12_ · yesterday at 10:35 AM · 1 reply

You do understand that the mechanism through which an auto-regressive transformer works (predicting one token at a time) is completely unrelated to how a model with that architecture behaves or how it's trained, right? You can have both:

- An LLM that works through completely different mechanisms, like predicting masked words, predicting the previous word, or predicting several words at a time.

- A normal traditional program, like a calculator, encoded as an autoregressive transformer that calculates its output one word at a time (compiled neural networks) [1][2]

So saying "it predicts the next word" is a nothing-burger. That a program calculates its output one token at a time tells you nothing about its behavior.
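To make the point concrete, here is a toy sketch (not from the linked papers; the function name and structure are mine): a perfectly deterministic calculator that nevertheless emits its answer "autoregressively", one character-token at a time, conditioning each step on the tokens produced so far. The decoding mechanism is identical in shape to an LLM's, yet the program's behavior is exact arithmetic.

```python
# Toy illustration (names are mine, not from [1] or [2]): a reliable
# calculator that emits its output one "token" (character) at a time.
def autoregressive_calculator(expression: str):
    """Evaluate an arithmetic expression exactly, then yield the result
    one character at a time, each step conditioned on the tokens so far,
    mimicking autoregressive decoding."""
    # Exact computation happens up front; builtins are disabled for safety.
    result = str(eval(expression, {"__builtins__": {}}))
    emitted = ""
    for ch in result:
        emitted += ch  # each new "token" is appended to the prior context
        yield ch

tokens = list(autoregressive_calculator("12 * (3 + 4)"))
print("".join(tokens))  # -> 84
```

The token-at-a-time output format tells you nothing about whether the underlying computation is reliable; that depends entirely on what the weights (or here, the code) actually implement.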

[1] https://arxiv.org/pdf/2106.06981

[2] https://wengsyx.github.io/NC/static/paper_iclr.pdf


Replies

hansmayer · yesterday at 2:23 PM

> So saying "it predicts the next word" is a nothing-burger. That a program calculates its output one token at a time tells you nothing about its behavior.

Well it does: it tells me it is utterly unreliable, because it does not understand anything. It merely goes on, shitting out a nice pile of tokens that, placed one after another, kind of look like coherent sentences but make no sense, like "you should absolutely go on foot to the car wash". A completely logical culmination of Bill Gates' idiotic "Content is King" proclamation of 20 years ago.