Hacker News

We're losing our voice to LLMs

311 points by TonyAlicea10 today at 2:51 PM | 336 comments

Comments

coffeecoders today at 4:40 PM

I actually think we're overestimating how much of "losing our voice" is caused by LLMs. Even before LLMs, we were doing the same tweet-sized takes, the same Medium-style blog posts, and the same corporate tone.

Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.

tptacek today at 6:23 PM

If you care about voice, you still can get a lot of value from LLMs. You just have to be careful not to use a single word they generate.

I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph asking whether it has a clear topic, flows, and then I'll say "I'm not sure this graf earns its keep" or something like that.

GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick to this is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.

LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere, (2) spotting repetitive word choices, (3) keeping things in the active voice and keeping subject/action clear, and (4) catching non sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).

Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
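
For anyone who wants to script this kind of paragraph-by-paragraph interrogation rather than paste into a chat window, here is a minimal sketch. It assumes the OpenAI Python SDK; the model name, audience description, and draft path are illustrative placeholders, not anything from the comment above:

  # Rough sketch of the interrogation loop described above. Assumes the
  # OpenAI Python SDK ("pip install openai"); the model name, audience
  # description, and draft path are placeholders.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  draft = open("draft.md").read()  # hypothetical draft file
  grafs = [p for p in draft.split("\n\n") if p.strip()]

  # First pass: is the piece worth the editing effort at all?
  overview = client.chat.completions.create(
      model="gpt-5",  # placeholder model name
      messages=[{
          "role": "user",
          "content": (
              "I'm an editor considering a submitted piece for a publication "
              "aimed at working software engineers. Is this piece worth the "
              "effort I'll need to put in, and how far will I need to cut it "
              "back?\n\n" + draft
          ),
      }],
  )
  print(overview.choices[0].message.content)

  # Second pass: ask for critique only, paragraph by paragraph. The point is
  # to never copy the model's suggested wording back into the draft.
  for i, graf in enumerate(grafs, 1):
      critique = client.chat.completions.create(
          model="gpt-5",
          messages=[{
              "role": "user",
              "content": (
                  "Here is one paragraph from a draft essay. Does it have a "
                  "clear topic and does it flow, or does it not earn its "
                  "keep? Critique only; do not rewrite it.\n\n" + graf
              ),
          }],
      )
      print(f"--- graf {i} ---\n{critique.choices[0].message.content}\n")

The deliberate omission is any request for rewritten text: per the comment above, the model's replacement wording is exactly the part to ignore.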

AstroBen today at 4:28 PM

There's something unique about art and writing where we just don't want to see computers do it

As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it

ricardo81 today at 4:01 PM

I deleted my Facebook account a couple of years ago and my Twitter one yesterday.

It's not just LLMs, it's how the algorithms promote engagement: rage bait, videos with obvious inaccuracies, etc. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.

There need to be algorithms that promote cohorts' and individuals' preferences.

Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.

moooo99 today at 6:29 PM

I generally agree with the sentiment, but I can't help but feel like we're attributing too much of this change to LLMs. While they're certainly driving the change even further, this is a trend that started well before LLMs became as widespread as they are today.

What personally disturbs me the most is the self-censorship that was initially brought forward by TikTok and quickly spread to other platforms, all in the name of being as advertiser-friendly as possible.

LinkedIn was the first platform where I really observed people losing their unique voice in favor of corporate-friendly, please-hire-me speak. Now this seems to be basically every platform. The only platform that seems somewhat protected from it is Reddit, where many mods seem to dislike LLMs as much as everybody else. But more likely, it's just less noticeable.

pksebben today at 8:44 PM

The potentially bitter pill to swallow here is that we all need to get better at critical thinking.

There's a lot of talk over whether LLMs make discourse 'better' or 'worse', with very little attention given to the crisis we were having with online discourse before they came around. Edelman was astroturfing long before GPT. Fox 'news' and the spectrum of BS between them and the NYT (arranged by how sophisticated they considered their respective pools of rubes to be) have always, always been propaganda machines and PR firms at heart, wearing the skin of journalism like Buffalo Bill.

We have needed to learn to think critically for a very long time.

Consider this: if you are capable of reading between the lines, and of dealing with what you read or hear on the merits of the thoughts contained therein, then how are you vulnerable to slop? If it was written by an AI (or a reporter, or some rando on the internet) but contains ideas that you can turn over and understand critically for yourself, is it still slop? If it's dumb and it works, it's not dumb.

I'm not even remotely suggesting that AI will usher in a flood of good ideas. No, it's going to be used to pump propaganda and disseminate bullshit at massive scale (and perhaps occasionally help develop good ideas).

We need to inoculate ourselves against bullshit, as a society and a culture. Be a skeptic. Ironman arguments against your beliefs. Be ready to bench test ideas when you hear them and make it difficult for nonsense to flourish. It is (and has been) high time to get loud about critical thinking.

mkzet today at 4:02 PM

The Internet will become truly dead with the rise of LLMs. The hacking culture of the 90s and 00s will always be the golden age. RIP

mentalgear today at 4:13 PM

I've been thinking about this as well, especially in the context of historical precedents in terms of civilization/globalization/industrialization.

LLMs standardize communication the same way standardization came with expanding empires (culture), book printing (language), and the industrial revolution (the power loom, factories, assembly procedures, etc.).

In that process, interesting but less "scale-able" (or simply not used by the people in power) cultures, dialects, languages, craftsmanship, and ideas were often lost, replaced by easier-to-produce but often lower-quality products, through the power of "affordable economics" rather than active conflict.

We already have the English 'business-concise, buzzword-heavy' formal register trained into ChatGPT (or, for informal messaging, the casual, overexcited American tone), which I'm afraid might take hold of global communication the same way as advanced LLM usage spreads.

leetrout today at 4:04 PM

Hits close to home after I've caught myself tweaking AI drafts just to make them "sound like me". That uniformity in feeds is real and it's like scrolling through a corporate newsletter disguised as personal takes.

What if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. Atrophy risk still huge?

Nudge to post more of my own mess this week...

truelson today at 4:27 PM

It's still an editor I can turn to in a pinch when my favorite humans aren't around. It makes better analogies sometimes. I like going back and forth with it, and if it doesn't sound like me, I rewrite it.

Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")

The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.

WD-42 today at 3:59 PM

Where are these places where everything is written by an LLM? I guess just don't go there. Most of the comments on HN still seem human.

ChrisMarshallNY today at 4:09 PM

Not sure if it's an endemic problem, just yet, but I expect it to be, soon.

For myself, I have been writing, all my life. I tend to write longform posts, from time to time[0], and enjoy it.

That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.

[0] https://littlegreenviper.com/miscellany/

hshdhdhj4444 today at 5:32 PM

The problem with the “your voice is unique and an asset” argument is what we’ve promoted for so long in the software industry.

Worse is better.

A unique, even significantly superior, voice will find it hard to compete against the sheer volume of terrible, non-unique, LLM-generated voices.

Worse is better.

whatever1 today at 4:06 PM

It’s ok. Most of our opinions suck and are unoriginal anyway.

The few who have something important to say will say it, and we will listen regardless of the medium.

morgengold today at 6:54 PM

First we need to think about why we consume content. I am happy to read LLM-created stuff when I need to know something and it delivers 100%. Other reasons, like "get perspectives of real humans" or "resonate"... not so much.

bachittle today at 5:04 PM

If you give an LLM enough context, it writes in your voice. But it requires using an intelligent model, and very thoughtful context development. Most people don't do this because it requires effort, and one could argue maybe even more effort than just writing the damn thing yourself. It's like trying to teach a human, or anyone, how to talk like you: very hard because it requires at worst your entire life story.
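
For illustration, the kind of "thoughtful context development" this describes can be as blunt as packing a pile of your own writing into the system prompt. A rough sketch, again assuming the OpenAI Python SDK, with the my_writing/ directory and the model name as made-up placeholders:

  # Rough sketch of "context development": give the model a large sample of
  # your own writing before it drafts anything. Assumes the OpenAI Python SDK;
  # the my_writing/ directory and the model name are made-up placeholders.
  from pathlib import Path
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Collect posts/emails you actually wrote; more (and more varied) is better.
  samples = [p.read_text() for p in Path("my_writing").glob("*.md")]

  system_prompt = (
      "You are ghostwriting for one specific person. Study the writing "
      "samples below and imitate their vocabulary, sentence rhythm, and "
      "quirks. Do not smooth the style out.\n\n"
      + "\n\n---\n\n".join(samples)
  )

  reply = client.chat.completions.create(
      model="gpt-5",  # placeholder model name
      messages=[
          {"role": "system", "content": system_prompt},
          {"role": "user", "content": "Draft a short post on why I left Twitter."},
      ],
  )
  print(reply.choices[0].message.content)

Which is roughly the commenter's point: assembling and curating those samples can easily take longer than writing the thing yourself.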

zavg today at 4:29 PM

In one of the WhatsApp communities I belong to, I noticed that some people use ChatGPT to express their thoughts (probably asking it to make their messages more eloquent or polite or whatever).

Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.

Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.

nusl today at 5:09 PM

Sometime within the next few years I imagine there will be a term along the lines of "re-humanise," where folks detox from AI use to get back in touch with humanity. At the rate we're going, humanity has become a luxury and will soon demand a premium.

ftrsprvln today at 8:32 PM

[Sometime in the near future] The world's starved for authenticity. The last original tweet crowned a God... then killed the kid chasing that same high. Trillionaires run continent-wide data centers, endlessly spinning up agents that hire cheap physical labor to scavenge the world for any spark of novelty. The major faith is an LLM cult forecasting the turning of the last stone. The rest of us choke on recycled ad slop.

A4ET8a8uTh0_v2 today at 3:59 PM

<< Write in your voice.

I don't disagree, but LLMs have happened to help standardize some interesting concepts that were previously more spread out (drift, scaffolding, and so on). It helps that ChatGPT has access to such a wide audience to allow that level of language penetration. I am not saying don't have a voice. I am saying: take what works.

motbus3 today at 4:30 PM

There's also the fact that these models are being used to promote fake news, create controversy, or interact with real humans with unknown purposes.

Talking to some friends, they feel the same. Depending on where you are participating in a discussion, you just might not feel it is worth it, because the other side might just be a bot.

CityOfThrowaway today at 4:03 PM

In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.

There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.

But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.

In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.

Of course, I love a good, unique voice. It's a pleasure to parse patio11's straussian technocratic musings. Or pg's as-simple-as-possible form.

And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!

logsr today at 5:37 PM

In my view LLMs are simply a different method of communication. Instead of relying on "your voice" to engage the reader and persuade them of your point of view, writing for analysis and exploration through LLMs is about creating an idea space that a reader can interact with, explore from their own perspective, and develop their own understanding of, which is much more powerful.

Glemkloksdjf today at 4:23 PM

The global alignment also happens through media like TV shows and movies, and the internet overall.

I agree I think we should try to do both.

In Germany, for example, we have very few typically German brands. Our brands became very global. If you go to Japan, for example, you will find the same products, like ramen or cookies or cakes, everywhere, but all of them are slightly different, from different small producers.

If you go to an autobahn motorway/highway rest area in Japan, you will find local products. If you do the same in Germany, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...

Even our German coke, Fritz cola, is a niche/hipster thing even today.

analog31 today at 4:20 PM

Soon, we'll be nostalgic for social media. The irony.

blenderob today at 4:18 PM

FWIW this prompt works very well for me:

  Improve grammar and typos in my draft but don't change my writing style.

Your mileage may vary.
BoredomIsFun today at 5:37 PM

The post sounds beige and AI-generated, ironically.

In any case, as someone who has experimented with AI for creative writing, LLMs _do not destroy_ your voice; they do flatten it, but with minimal effort you can make it sound the way you find best reflects your thought.

benterix today at 6:40 PM

The devil's advocate in me would say that this post was authored by one of the LLM model creators, realizing they really need more fresh meat to train on.

adamzwasserman today at 3:01 PM

It is not a zero sum game.

I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.

I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?

I promise to tell the truth.

officehero today at 7:42 PM

It's more that people who historically didn't have a voice now have one. It's often stupid but sometimes also interesting and innovative. Saw a channel where a university-professor narrator ("I") comes to the realization she's been left-leaning/biased for decades, and that her recent male students no longer dare engage in debate because of shaming/gaslighting, etc. Then I click the channel description and it turns out it's "100% original writing". Now if it hadn't said that, it would be strawman propaganda. But now it does... Not sure how to put a finger on it; there's some nervous excitement when reading these days, not knowing who the sender is, getting these 'reveal' moments when finding out the whole thing was made up by some high-school kid with AI or an insane person.

ruuda today at 5:27 PM

I wholeheartedly agree, I wrote about this at https://ruudvanasseldonk.com/2025/llm-interactions.

ukFxqnLa2sBSBf6 today at 6:13 PM

There has been an explosion of verbose status-update emails at my job recently, which have all clearly been written by ChatGPT. It's the fucking emojis, though, that drive me wild. It's so hard to read the actual content when there's an emoji for every single sentence.

And now when I see these emoji fests I instantly lose interest and trust in the content of the email. I have to spend time sifting through the fluff to find what’s actually important.

LLMs are creating an asymmetric imbalance between the effort to write and the effort to read. What takes my coworkers probably a couple of minutes to draft requires 2-3x as long for me to decipher. That imbalance used to be the opposite.

I've raised the issue before at work, and one response I got was to "use AI to summarize the email." Are we really spending all this money and energy on the world's worst compression algorithm?

meindnoch today at 4:51 PM

You're absolutely right.

Here's why:

simianwords today at 6:19 PM

1) People who use LLMs for generating

2) People who use LLMs for understanding

I think I'll stick to 2) for many reasons.

TeMPOraL today at 5:05 PM

> Social media has become a reminder of something precious we are losing in the age of LLMs: unique voices.

Social media already lost that nearly two decades ago - it died as content marketing rose to life.

Don't blame on LLMs what we long ago lost to the cancer that is advertising[0].

And don't confuse GenAI as a technology with what the cancer of advertising co-opts it for. The root of the problem isn't in the generative models, it's in what they're used for, and the problematic uses aren't anything new. We've been drowning in slop for decades; it's just that GenAI is now cheaper than cheap labor in content farms.

--

[0] - https://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html

nechalnikaSama today at 6:49 PM

it's not the voice. it's the lack of need to talk tough about the hard problems. if you accept what is and just babble, anything you write will sound like babbling.

there's enough potential and wiggle room but people align, even when they don't, just to align.

when Rome was flourishing, only a few saw what was lingering in the cracks but when in flourishing Rome ...

bookofjoe today at 4:31 PM

The LLM v human debate here reminds me of the now dormant "Are you living in a simulation?" discussions of previous decades.

rdtsc today at 4:25 PM

I call it the enshittification fix-point. Not only are we losing our voice, we'll soon enough start thinking and talking like LLMs. After a generation of kids grows up reading and talking to LLMs, that will be the only way they know how to communicate. You'll talk to a person and not be able to tell the difference between them and an LLM, not because LLMs became amazing, but because our writing and thinking styles became more LLM-like.

- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"

- "You're absolutely right, father — the jar seems to be empty. Here is bullet point list why consuming the cookies was the right thing to do..."

binary132 today at 4:45 PM

Ironically I find it hard to tell whether this writing is LLM or merely a bit hollow and vapid.

shevy-java today at 6:20 PM

Well, voice is ultimately coupled to a person. LLMs thus fake and pretend to be a person. There are, however, use cases for LLMs too. I saw them used for the creation of video games, and also for content generated by hobbyists. So, while I think AI should actually die, for hobbyists generating mods for old games, AI voiceovers may not be that bad. Just as AI-generated images for free-to-play browser games may not be solely bad either.

Of course there are also horrible uses of AI: liars, scummy cheaters, and fake videos on YouTube, which is owned by a greedy mega-corporation that sold its soul to AI. So the bad use cases may outnumber the good ones, but there are good use cases, and "losing our voice to LLMs" isn't the whole picture, sorry.

franciscator today at 6:18 PM

Deus ex machina is starting to take off...

satisfice today at 7:58 PM

Don’t write anything with LLMs, ever. Unless having no credibility is your goal.

tetraca today at 6:43 PM

Subsume your agency. Stop writing. Stop learning. Stop thinking for yourself. Become hylic. Just let the machine think everything for you and act as it acts. Those that own them are benevolent and there will never be consequences.

e-dant today at 5:14 PM

You never had a voice to lose

poolnoodle today at 5:29 PM

I'm just using the internet less and less recreationally. Except for pirating movies.

lutusp today at 6:58 PM

I'm in complete agreement with the idea that people should express themselves in their own words. But this collides with certain facts about U.S. adults (and students). This summary (https://www.nu.edu/blog/49-adult-literacy-statistics-and-fac...) reveals that:

* 28% of U.S. adults are at or below "level 1" literacy, essentially meaning people unable to function in an environment that requires written language skills.

* 54% of U.S. adults read below a sixth-grade level.

These statistics refer to an inability to interpret written material, much less create it. As to the latter, a much smaller percentage of U.S. adults can compose a coherent sentence.

We're moving toward a world where people will default to reliance on LLMs to generate coherent writing, including college students, who according to recent reports are sometimes encouraged to rely on LLMs to complete their assignments.

If we care to, we can distinguish LLM output from that of a typical student: An LLM won't make the embarrassing grammatical and spelling errors that pepper modern students' prose.

Yesterday I saw this headline in a major online media outlet: "LLMs now exceed the intelect [sic] of the average human." You don't say.

btbuildem today at 6:16 PM

I think that's for the best. It was human-made slop, now it's automated slop. Can't wait for people to stop paying it attention so that it withers. "It" being the whole attention economy scam.

ArcHound today at 4:16 PM

I think that this is imbalanced in favour of wannabe influencers, who want to be consistent and popular.

If you really have no metrics to hit (not even the internal craving for likes), then it doesn't make much sense to outsource writing to LLMs.

But yes, it's sad to see that your original stuff is lost in the sea of slop.

Sadly, as long as there is money in publishing, this will keep happening.

fpauser today at 5:32 PM

100% agree.

gchamonlive today at 4:08 PM

Social media is a reminder that we were losing our voice to mass media consumption way before LLMs were a thing.

Even before LLMs, if you wanted to be a big content creator on YouTube, Instagram, TikTok..., you had better fall in line and produce content with the target aesthetic. Otherwise, good luck.
