Isn’t “bad output” already worst case? Pre-LLMs, correct output was table stakes.
You expect your calculator to always give correct answers, your bank to always transfer your money correctly, and so on.
I've seen plenty of decision makers act on bad output from human employees in the past. The company usually survives.
> Isn’t “bad output” already worst case?
Worst case in a modern agentic scenario is more like "drained your bank account to buy bitcoin and then deleted your hard drive along with the private key".
> Pre-LLMs, correct output was table stakes
We're only just getting to the point where we have languages and tooling that can reliably prevent segfaults. Correctness isn't even on the table outside of a few (mostly academic) contexts.
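To make that concrete, here's a minimal Rust sketch (my illustration, assuming the parent has in mind something like Rust's borrow checker) of the kind of compile-time check that rules out a whole class of segfaults:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];           // shared borrow into v's heap buffer
        println!("first = {first}"); // last use of `first`: borrow ends here
        v.push(4);                   // fine in this order; moving this push
                                     // above the println! is a compile error,
                                     // since push may reallocate the buffer
                                     // and leave `first` dangling, i.e. the
                                     // classic use-after-free segfault in C
    }

Note this only buys memory safety: the compiler refuses the dangling-pointer ordering outright, but says nothing about whether the program computes the right answer. That's "reliably prevents segfaults", not correctness.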