logoalt Hacker News

Terr_yesterday at 5:15 PM1 replyview on HN

Prompt-injection in all its forms. If the hyper-mad-libs machine doesn't reliably "understand" and model the difference between internal and external words, how can we trust them to model fancier stuff?


Replies

bigstrat2003yesterday at 8:25 PM

We can't even trust LLMs to get basic logic right, or even the language syntax sometimes. They reliably generate code worse than a human would write, and have zero reasoning ability. Anyone who thinks they can model something complicated is either uncritically absorbing hype or has a financial stake in convincing people of the hype.