Hacker News

ninetyninenine, last Saturday at 2:49 AM

>It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

There are actually some ground-truth facts about AI that many people are not knowledgeable about.

Many people believe we understand in totality how LLMs work. The truth is that, overall, we do NOT understand how LLMs work at all.

The mistaken belief that we understand LLMs is the driver behind most of these arguments. People think we understand LLMs, and that their output is just stochastic parroting, when the truth is we do NOT understand why or how an LLM produces a specific response to a specific prompt.

Whether the process by which an LLM produces a response resembles anything close to sentience or consciousness, we genuinely do not know, because we aren't even sure of the definitions of those words, nor do we understand how an LLM works.

This erroneous belief is so pervasive amongst people that I'm positive I'll get extremely confident responses declaring me wrong.

These debates are not the result of people talking past each other; they happen because a large segment of people on HN is literally misinformed about LLMs.


Replies

whatevertrevor, last Saturday at 6:05 AM

I couldn't agree more, and not just on HN but in the world at large.

For the general populace, including many tech people who are not ML researchers, understanding how convolutional neural nets work is already tricky enough. For non-tech people, I'd hazard a guess that LLMs/generative AI are, in terms of complexity, indistinguishable from "the YouTube/TikTok algorithm".

And this lack of understanding, and in many cases the lack of conscious acknowledgement of that lack, has made many "debates" sound almost like theological arguments. Very little interest in grounding positions in facts, yet strongly held opinions.

Some are convinced we're going to get AGI in a couple of years; others think it's just a glorified text generator that cannot produce new content. And worse, there's seemingly little that changes their minds on it.

And there are self-contradictory positions held too. Just as an example: I've heard people claim that AI-produced work does not qualify as art (philosophically and in terms of output quality), but at the same time express deep concern about how tech companies will replace artists...

exceptione, last Saturday at 9:28 AM

  > we do NOT understand how LLMs work at all.
  > we do NOT understand why or how an LLM produces a specific response to a
  > specific prompt.

You mean the system is not deterministic? How the system works should be quite clear. I think the uncertainty is more about whether billions of tokens and their weights relative to each other are enough to reach intelligence. These debates are older than LLMs. In 'old' AI we were looking at (limited) autonomous agents that had the capability to participate in an environment and exchange knowledge about the world with each other. The next step for LLMs would be to update their own weights, but that is still too costly in terms of money and time.

What we do know is that for something to be seen as intelligent it cannot live in a jar. I consider the current crop to be shared 8-bit computers, while each of us needs one with terabytes of RAM.
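
To make the distinction concrete: the decoding mechanics are fully specified and, with greedy decoding or a fixed seed, deterministic; what we lack is an explanation of why the learned weights yield one answer rather than another. Here is a minimal toy sketch in Python (illustrative names only, with a small random matrix standing in for billions of trained weights, not a real LLM):

  # Toy sketch: the sampling loop is simple and reproducible; the opacity
  # lives entirely in the learned weights that produce the logits.
  import numpy as np

  rng = np.random.default_rng(seed=0)          # fixed seed -> reproducible runs

  VOCAB = ["the", "cat", "sat", "on", "mat", "."]
  W = rng.normal(size=(len(VOCAB), len(VOCAB)))  # stand-in for trained weights

  def next_token_logits(context_ids):
      # A real model runs a transformer over the whole context here;
      # this toy just looks at the last token.
      return W[context_ids[-1]]

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  def generate(prompt_ids, steps=5, temperature=0.0):
      ids = list(prompt_ids)
      for _ in range(steps):
          logits = next_token_logits(ids)
          if temperature == 0.0:
              ids.append(int(np.argmax(logits)))            # greedy: deterministic
          else:
              p = softmax(logits / temperature)
              ids.append(int(rng.choice(len(VOCAB), p=p)))  # sampled: stochastic
      return [VOCAB[i] for i in ids]

  print(generate([0]))                   # identical output every run
  print(generate([0], temperature=1.0))  # depends on the sampling seed

So "is it deterministic?" and "do we understand it?" are different questions: the loop above is transparent, while nothing in it tells you why a particular weight matrix maps a prompt to a particular continuation.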