Hacker News

brey · today at 7:41 AM

The next sentence after your quoted section:

“Even then, AI output is never treated as an authoritative source. Everything must be verified.”


Replies

applfanboysbgon · today at 7:46 AM

Any verification process thorough enough to catch all LLM fabrications would take more work than simply not using the LLM in the first place. If anything, verifying what an LLM wrote is substantially more difficult than just reading the material it's "summarising": you need to fully read and comprehend the material, then hold what the LLM generated in mind to compare against it, and at that point what the fuck are you even doing?

I believe this policy can never produce a positive outcome. It implicitly suggests that "verification" means taking shortcuts and letting fabrications slip through in the name of "efficiency", with the follow-up sentence existing solely so that Ars can avoid accountability for enabling such a policy and instead place the blame entirely on the reporters it told to take those shortcuts.
