Hacker News

alfalfasprout · yesterday at 7:31 PM

When organizational incentives penalize NOT using AI, and the bottom x% of performers are regularly fired, are you really surprised that LLM outputs aren't being scrutinized?


Replies

Uhhrrryesterday at 10:22 PM

Yes, because blindly trusting LLM output is a great way to end up in the bottom x%.