I’m not sure where our guidelines/norms are on this kind of thing, but I get the sense that most of us feel very capable of pasting articles into LLMs ourselves.
What we’re less capable of—and the reason we look to each other here instead—is distinguishing where the LLM’s errors or misinterpretations lie. The gross mistakes are often easy enough to spot, but the subtle misstatements get masked by its overconfidence.
Luckily for us, a lot of the same people actually doing the work on the stuff we care about tend to hang out around here. And often, they’re kind enough to duck in and share.
Thank you, in any case, for being upfront about it. It would just be a shame, and a real loss, if slop came to drown out the signal here.
> I asked AI to explain it to me,
We all know how to do that, but that's not why we're here.