"Now, with LLM-generated content, it’s hard to even build mental models for what might go wrong, because there’s such a long tail of possible errors. An LLM-generated literature review might cite the right people but hallucinate the paper titles. Or the titles and venues might look right, but the authors are wrong."
This is insidious and if humans were doing it they would be fired and/or cancelled on the spot. Yet we continue to rave about how amazing LLMs are!
It's actually a complete reversal of the situation with self-driving car AI. Humans crash cars and hurt people all the time, and AI cars are already much safer drivers than humans. Yet we all go nuts when a Waymo runs over a cat, while ignoring the fact that humans do that on a daily basis!
Something is really broken in our collective morals and reasoning.
I'm noticing this most in visual content rather than LLM text. The era when anyone young and perpetually online could spot AI via the uncanny valley was remarkably short-lived. [0]
>In the pre-LLM era, I could build mental models, rely on heuristics, or spot-check information strategically.
I wonder if this will be an enduring advantage of the current generation: building your formative world model in a pre-AI era. It seems plausible to me that anyone who built their foundations then has a much higher chance of having grounded instincts, even if post-AI experiences are layered on later.
[0] https://www.reddit.com/r/antiai/comments/1p8z6y6/nano_banana...
In my opinion, this is the biggest (current) problem with AI. It is really good at that thing you used to do when you had to hit a word count in a school essay. How long until the world's hard drive space is filled up with filler words and paragraphs of text that go nowhere, and how could you possibly search and find anything in such conditions?
Scroll through and read only the section headers. I would be shocked if this wasn't at the very least run through an LLM itself. The section headers certainly were; I'll skip the rest unless someone posts that it's worth a read for some reason.
It doesn't appear to be section headings glued together with bullet lists, so maybe the content really does retain the author's perspective. But at this point I'd rather skip stuff I know has been run through an LLM, and miss a few gems, than get slopped daily.
What's crazy is you're starting to see an overreaction to this fact as well.
The other day I posted a short showcasing some artwork I made for a TCG I'm in the process of creating.
Comments poured in saying it was "doomed to fail" because it was just "AI slop"
In the video itself I explained how I made them, in Adobe Illustrator (even showing some of the layers, elements, etc).
Next I'm actually posting a recording of me making a character from start to finish, a timelapse.
It will be interesting to see if I get any more "AI slop" comments, but it's becoming increasingly difficult to share anything drawn now because people immediately assume it's generated.
GPT is Eternal September for normies.
> There’s a frustration I can’t quite shake when consuming content now—
perhaps even a frustration you can't quite name
I'm pretty sure that the reason everything seems like AI is that AI produces stupid, pointless content at scale, and our "writers" have become people who generate stupid, pointless content at scale.
There's no reason for most things to have been written. Whatever point is being made is pointless. It's not really entertaining, it's meant to be identified with; it's not a call to any specific action; it doesn't create some new fertile interpretation of past events or ideas; it's not even a cry for help. It's just pointless fluff to surround advertising. From a high concept likely dictated by somebody's boss.
AI has no passion and no point. It is not trying to convince anyone of anything, because it does not care. If AI were trying to be convincing, it would try to conceal its own style. But it doesn't mean anything for an AI to try. It's just going through the motions of filling out an idea to a certain length. It's whatever the opposite of compression is.
A generation of writers raised on fanfiction and prestige TV, who grew up to write Buzzfeed articles at the rate of five a day, are indistinguishable from AI.
Why This Matters
> If something seems off, I can just regenerate and hope the next version is better. But that’s not the same as actually checking. It feels like a slot machine—pull the lever again, see if you get a better result—substitutes for the slower, harder work of understanding whether the output is correct.
What a great point. In some work loops I feel like I get addicted to seeing what pops in the next generation.
One of the things I learned from moderating my internet usage is to not fall prey to recommendation systems. That is, when I am on the web, I only consume what I explicitly looked for, not what the algorithm thinks I should consume next.

Sites like Reddit and HN make this tricky.
Yeah, everything sounds like AI, and why is that? It might be because everything is AI, but I think that writing style is more LinkedIn than LLM: the style of people who might get slapped down if they wrote something individual.
Much of the world has agreed to sound like machines.
Another thing I've noticed is that weird stuff, stuff that is perhaps off in some way, also gets accused of being LLM output because it doesn't feel right.

If you sound unique and weird, you get accused of being a bad LLM that can't falsify humanity well enough; if you sound boring, bland, and boosterist, you get accused of being a good LLM.
You can't write like no one else, but you also can't write like everybody else.
The best part is that this article is almost certainly AI-generated or heavily AI-assisted too.
Before people get angry with me... there are plenty of small tells, starting with the section headings, a lot of linguistic choices, and low information density. But more importantly, the author openly says she writes using LLMs: https://www.sh-reya.com/blog/ai-writing/#how-i-write-with-ll...