
6stringmerc · last Wednesday at 12:53 PM · 1 reply

Yes, it should refuse.

Humans have made progress by admitting when they don’t know something.

Believing an LLM should be exempt from this boundary of “responsible knowledge” is an untenable path.

As in, if you trust an ignorant LLM, then by the same logic you should trust a heart surgeon to perform your hip replacement.


Replies

ijk · last Wednesday at 4:07 PM

Just on a practical level, adding a way for the LLM to bail if it can detect that things are going wrong saves a lot of trouble, especially if you are constraining the inference. You still get some false negatives and false positives, of course, but giving it the option to answer "something else" and explain why can save you a lot of headaches when you accidentally send it down the wrong path entirely.
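
A minimal sketch of that escape-hatch idea, assuming constrained output parsed as JSON (the names `Label`, `Result`, and `parse_response` are illustrative, not from any particular library): the allowed choices include an explicit "something else" option with a free-text explanation, and the caller routes those cases to a fallback instead of forcing a guess.

```python
from dataclasses import dataclass
from enum import Enum
import json

# Hypothetical constrained-output setup: the model must answer with one of
# these labels, but "something_else" is deliberately included as an escape
# hatch so the model can bail instead of being forced into a wrong label.
class Label(str, Enum):
    BUG_REPORT = "bug_report"
    FEATURE_REQUEST = "feature_request"
    QUESTION = "question"
    SOMETHING_ELSE = "something_else"  # the bail-out option

@dataclass
class Result:
    label: Label
    explanation: str  # only meaningful when label == SOMETHING_ELSE

def parse_response(raw: str) -> Result:
    """Parse model output like {"label": "question", "explanation": ""}."""
    data = json.loads(raw)
    return Result(label=Label(data["label"]), explanation=data.get("explanation", ""))

def handle(raw: str) -> str:
    result = parse_response(raw)
    if result.label is Label.SOMETHING_ELSE:
        # Route to a human / fallback path instead of trusting a forced guess.
        return f"escalate: model bailed out ({result.explanation})"
    return f"classified as {result.label.value}"

# Example: the model was sent down the wrong path entirely and says so.
print(handle('{"label": "something_else", "explanation": "input is not a support ticket"}'))
```

The design choice is just that the escape option is part of the constrained vocabulary rather than an exception path, so the model can take it without breaking the output format.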