Hacker News

me-vs-cat · last Friday at 4:44 AM

> ...using other models, never touching that product again.

> ...that the LLM even suggested (without special prompting) to do something that I should have realized was a stupid idea with a low chance of success...

Since you're using other models instead, do you believe they cannot give similarly stupid ideas?


Replies

Duanemclemore · last Friday at 6:17 AM

I'm under no illusion that they can't. But I have found ChatGPT to be at its most confident when it f's up, and to suggest the worst ideas most often.

Until you asked, I had forgotten to mention that the same day I was trying to work out a Linux display issue, and it very confidently suggested removing a package and all its dependencies, which would have removed all my video drivers. On reading the output of the autoremove command, I pointed out that it had done this, and the model spat out an "apology" and owned up** to the damage it would have wreaked.

** It can't "apologize" for or "own up" to anything; it can just output those words. So I hope you'll excuse the anthropomorphization.
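For anyone curious, that kind of removal can be previewed before anything is actually deleted. A minimal sketch, assuming a Debian/Ubuntu-style system with apt ("some-package" is just a placeholder):

    # preview what autoremove would delete, without deleting anything
    # (--simulate / -s doesn't need root)
    apt-get --simulate autoremove

    # likewise, preview the fallout of removing a specific package first
    apt-get --simulate remove some-package

Either would have flagged the video drivers in the removal list before anything was touched.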
