
nkrisc · today at 2:12 AM

Looking at the tweet he’s replying to, I still find it incredible that people talk to these LLMs as if they are rational beings who will listen to them. The fact that they sometimes do is more coincidence than anything.

It’s even more unbelievable that they seem to think their instructions are rules the model will follow.

To paraphrase Captain Barbossa: “They’re more guidelines than actual rules.”


Replies

slopinthebag · today at 2:18 AM

Lol. I tried doing some image generation with SOTA models. I explicitly asked one not to do something it kept doing, and it would literally do the thing anyway, then straight-up tell me it hadn't.

Unless someone has a cognitive impairment, that's simply not a failure mode of cooperative humans. Same with hallucinations. Both humans and AI can be wrong, but a human has the ability to admit when they don't understand or know something; AI will just make it up.

I don't understand why people would ever trust anything important to a system with failure modes like these. It's insane.
