Hacker News

n4r9 · today at 4:26 PM

The concern for me about LLMs confabulating is not that humans don't do it. It's that the massive scale at which LLMs will inevitably be deployed makes even the smallest confabulation extremely risky.


Replies

NiloCK · today at 4:54 PM

I don't understand this. Many small errors distributed across a large deployment sounds a lot like the normal failure mode of error-prone humans / cogs / whatevers distributed over a wide deployment.
