
da_chicken today at 1:53 AM

The issue is one that's been stated here before: LLMs are language models. They are not world models. They are not problem models. They do not actually understand the world, the underlying entities represented by language, or the problems being addressed. LLMs understand the shape of a correct answer, and how the components of language fit together to form a correct answer. They do that because they have seen enough language to know what correct answers look like.

In human terms, we would call that knowing how to bullshit. But just as a college student discovers by junior year, sooner or later you learn that bullshitting only gets you so far.

That's what we've really done. We've taught computers how to bullshit. We've also managed to finally invent something that lets us communicate relatively directly with a computer using human languages. The language processing capabilities of an LLM are an astonishing multi-generational leap. These types of models will absolutely be the foundation for computing interfaces in the future. But they're still language models.

To me it feels like we've invented a new keyboard, and people are fascinated by the stories the thing produces.


Replies

rbranson today at 6:04 AM

Is it bullshitting to perform nearly perfect language-to-language translation, or to generate photorealistic depictions from text quite reliably? Or to reliably perform named-entity extraction, or any of the other millions of real-world tasks LLMs already perform quite well?

zoom6628 today at 5:26 AM

THIS!