I know I'm walking into a den of wolves here and will probably get buried in downvotes, but I have to disagree with the idea that using LLMs for writing breaks some social contract.
If you hand me a financial report, I expect you used Excel or a calculator. I don't feel cheated that you didn't do long division by hand to prove your understanding. Writing is no different. The value isn't in how much you sweated while producing it. The value is in how clear the final output is.
Human communication is lossy. I think X, I write X' (because I'm imperfect), you understand Y. This is where so many misunderstandings and workplace conflicts come from. People overestimate how clear they are. LLMs help reduce that gap. They remove ambiguity, clean up grammar, and strip away the accidental noise that gets in the way of the actual point.
Ultimately, outside of fiction and poetry, writing is data transmission. I don't need to know that the writer struggled with the text. I need to understand the point clearly, quickly, and without friction. Using a tool that delivers that is the highest form of respect for the reader.
I think often, though, people use LLMs as a substitute for thinking through what they actually want to express. The result is often a large document that looks locally reasonable and well written, but overall doesn't communicate a coherent point, because there wasn't one given to the LLM to begin with, and even a good human writer can only mind-read so much.
> The value is in how clear the final output is.
Clarity is useless if it's inaccurate.
Excel is deterministic. ChatGPT isn't.
I’m with you, and further, I’d apply this (with some caveats) to images created by generative AI too.
I’ve come across a lot of people recently online expressing anger and revulsion at any images or artwork that have been created by genAI.
For relatively mundane purposes, like marketing materials, or diagrams, or the sort of images that would anyway be sourced from a low-cost image library, I don’t think there’s an inherent value to the “art”, and don’t see any problem with such things being created via genAI.
Possible consequences:
1) Yes, this will likely lead to losses and shifts in employment, but hasn’t progress always been like this? People have historically reacted strongly against such shifts when advancing technology threatens some sector, but somehow we always figure it out and move on.
2) For genuine art, I suspect this will in time lead to greater value being placed on demonstrably human-created originals. Relatedly, there’s probably money to be made by whoever can create a trusted system that captures proof of human work in a way that can’t be cheated or faked.
The point made in the article was about the social contract, not about efficacy. Basically, if you use an LLM in such a way that the reader detects the style, you lose the reader's trust that you, as the author, rigorously understand what has been written, and the reader loses the incentive to pay close attention.
I would extend the argument further and say it applies to a lot of human-generated content as well, especially sales and marketing material, which similarly elicits very low trust.
Something only a bad writer would write.
Totally agree. The output is what matters.
At this point, who really cares what the person who sees everything as "AI slop" thinks?
I would rather just interact with Gemini anyway. I don't need to read/listen to the "AI slop hunter" regurgitate their social media feed and NY Times headlines back to me like a bad language model.
I think the main problem is people using the tool badly and not producing concise material. If what they produced were really lean and correct it'd be great, but you grow a bit tired when you have to spend time reviewing and parsing long, winding, and outright wrong PRs and messages from _people_ who have not put in the time.