When organizational incentives penalize NOT using AI, and the bottom x% are fired regularly, are you really surprised LLM outputs aren't being scrutinized?
Yes, because trusting LLM output is a great way to be in the bottom x%.