My concern about LLMs confabulating is not that humans don't do it too. It's that the massive scale at which LLMs will inevitably be deployed makes even the smallest confabulation extremely risky.
I don't understand this. Many small errors distributed across a large deployment sounds a lot like the normal failure mode of error-prone humans / cogs / whatever, distributed over a wide deployment.