Hacker News

nathancahill · yesterday at 8:27 PM · 1 reply

It gets at an underlying problem with LLMs: by design, they box themselves into a premise -> logical conclusion pattern. So when a flaw in that chain is pointed out by their operator, they need a way to acknowledge it.


Replies

GrinningFool · today at 11:26 AM

Why do they need a way to acknowledge that? When it's pointed out that they're wrong, they can just take the new data and make the correction. They don't need human mannerisms.