
butlike · yesterday at 8:34 PM

An LLM gives AN answer. Ask for even a few more than that and it gets confused, but instead of admitting it the way a human would, it confidently proceeds with incorrect answers. You never quite know when the context got poisoned, but once it has, reliability drops to zero.

There are many things to say about this. Free is worthless. Speed is not necessarily a good thing. The image generation is drivel. But...

The main nail in the coffin is accountability: I can't trust my work if I can't trust the output of the machine. (And as a bonus, the machine can't build a house; it's single-purpose.)


Replies

Dylan16807 · yesterday at 9:23 PM

Okay, but this has vanishingly little to do with the comment chain you replied to, which was about energy efficiency.