Hacker News

AI may be making us think and write more alike

139 points · by giuliomagnifico · today at 11:29 AM · 133 comments

Comments

beej71 · today at 2:46 PM

LLMs have felt to me like they excel in one particular skill (being able to make connections across vast amounts of knowledge) and are basically average, otherwise. If I'm below average at something (painting, say) the results astound me. But if I'm above average (programming, writing (I like to think)), I'm generally underwhelmed by the results.

I used Claude a lot for planning my current fun project. Good rubber duck. It liked all the suggestions I pitched for the design, but I only went with the last one after discarding the others.

The others were all fine and would have worked, but they weren't the best that I found.

Back to the point, if we're getting average guidance from the AI and we're just offloading our thinking process at that level, then I could sure see it panning out like TFA says.

rdevilla · today at 12:02 PM

This state of affairs presages the advent of a second dark age - one that will forever eclipse the era of radical openness & transparency that once served the software community for decades. Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale, until any possible information asymmetries have been arbitraged away. The development & secrecy of technique will once again become a deep moat as LLMs fall into local, suboptimal minima, trained on and marketed towards the lowest common denominator. The Internet, or at least The Web, becomes a Dark Forest of the Dead Internet (Theory), in which humans fear speaking out and capturing the attention of the LLM who would siphon their creative essence for more, ever more training data. Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay. Quasi-monastic orders that still scribe with pen and paper emerge, believing there is still value in training and educating a human mind and body.

- Unknown, 19 Feb 2026

sobiolite · today at 12:53 PM

Human communication and reasoning are the end result of billions of years of evolution. I'd be very surprised if LLMs can fundamentally alter them in a few years.

When considering phenomena like these, I think people seriously underestimate what I'd call the "fashion effect". When a new technology, medium or aesthetic appears, it can have a surprisingly rapid influence on behaviour and discourse. The human social brain seems especially susceptible to novelty in this way.

Because the effects appear so fast and are often so striking, even disturbing, due to their unfamiliarity, it is tempting to imagine that they represent a fundamental transformation and break from the existing technological, social and moral order. And we extrapolate that their rapid growth will continue unchecked in its speed and intensity, eventually crowding out everything that came before it.

But generally this isn't what happens, because a lot of what we're seeing is just the new thing occupying the zeitgeist. Eventually, its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity.

LLMs will certainly have an effect on how humans reason and communicate, but the idea that they will so effortlessly reshape it is, in my opinion, rather naive. The comments in this thread alone prove that LLM-speak is already a well-recognised dialect replete with clichés that most people will learn to avoid for fear of looking bad.

IlyaIvanov0 · today at 2:48 PM

The most interesting finding here is that LLMs make individuals generate more ideas but make groups generate fewer. The individual effect, in my own experience, depends entirely on how you use the tool. If you treat the first answer as the answer, you get the homogenization the article describes. If you use the LLM to attack your own framing from angles you wouldn't reach alone, you end up closer to first principles, not further. Same tool, opposite outcomes. The discipline is what differs, and most people probably default to the first mode.

misterflibble · today at 11:52 AM

Subtly? I beg to differ. My team leader only communicates to me using his LLM and so his "thoughts" are not his own!

downboots · today at 11:55 AM

It's not explanation — it's relabeling. Why it matters:

jessep · today at 12:39 PM

Yeah, I’ve noticed that people have started to sound like LLMs even when the LLMs aren’t writing for them. Not stupid people. Not lazy people. Some of the smartest people I know —- I can’t figure out how to use an em dash here, but you get the point.

jeffwask · today at 1:04 PM

Take a community with AI moderation, like Reddit, where I've been a participant for years. With the recent push to AI autocorrect and moderation, you can see the changes in language: new words, new ways of speaking, unconsciously editing yourself because you don't want to draw the eye of the bot. It doesn't feel subtle. It feels Orwellian.

CompoundEyes · today at 1:17 PM

An aspect of LLMs that I like is the specificity in word choice. One well-defined word can be an alias for a couple of sentences of explanation that a human might not have pulled out of the air in that moment.

It reminds me of the wheel of emotions. If people absorb a wider palette of words, communication might benefit. https://www.isu.edu/media/libraries/counseling-and-testing/d...

adriand · today at 11:58 AM

I always wonder if competitive market dynamics will solve problems like these, at least to some extent and for some people, because the people who retain the ability to communicate in a distinctive, persuasive and original style will be rewarded. Corporate dronespeak is no less homogeneous than AI writing, and companies with this communication style are regularly disrupted by nimbler, more authentic-sounding competitors.

davebren · today at 1:20 PM

This is my current fear: even if I choose not to use it, if everyone around me does, their way of speaking is all going to become more chatbot-esque. It already seems to be transferring its false sense of confidence, and its lack of reasoning ability, to people. The corporate demand to participate in this is something I can't do; the cost is our humanity.

I guess one hope for luddites is that we can stay tethered by reading pre-LLM books and other content.

mplanchard · today at 2:06 PM

A really aggravating thing about seeing so much AI-generated text around is that it makes me constantly second-guess my own writing. Does that sentence sound natural? Am I veering into ChatGPT territory? God forbid I use an em-dash. And how much of the perception of it "feeling" like AI text is real vs. paranoia?

It's incredibly frustrating, but maybe a silver lining is that it will help me write more authentically, I don't know.

pdimitar · today at 2:06 PM

While I cringe at most LLM speak, I have learned quite a bit from it: certain terminology, and some gaps in my entirely self-learned English. I appreciate that. It helped me better express myself at work and use fewer words (but hopefully more substantive ones).

But yeah, their general tone is very... castrated. Safe. Hugely impersonal.

I have learned to quickly edit out their suggested comments when I ask for advice.

To me they have been a positive -- after careful curation.

kusokurae · today at 1:24 PM

On a creative level, I remember McCarthy describing scalped heads as like wet polyps blue in the moonlight. The more generic ways of describing something like that would never give me such a visceral reaction to the violence he was trying to tell me something about.

I already lose interest reading books where the phrases are recycled and the max sentence length for the whole book grazes 40.

If people communicate to me without personality through prompt wastrelry I'll discount theirs and wait till they're willing to actually have an opinion. In this specific context style and substance tend to come in a pair or not at all. If you can't beat 'em you can at least filter 'em out.

giancarlostoro · today at 1:11 PM

English is not my first language, but when I started using Firefox with the built-in spell correction, I firmly believe my ability to spell words improved drastically. My grammar is still iffy, like I'm pretty sure I do comma splices everywhere, but at least most people can understand what I say now compared to when I was 13 and on the internet.

If there was a "grammar nazi" teenie tiny LLM with a total focus on English grammar only, and you baked that into every browser, I feel like my grammar would improve slightly. Word does it to an extent, but I don't use Word nearly enough for it to be meaningful. Firefox spell checking was on for 98% of the things I used online.

tarkin2 · today at 1:39 PM

People from a nation think and write alike because they share a common canon of literature and stories.

It's just a pity AI was trained on mindless, garbage business-speak, and now that's our globalised common literature.

And now we're feeding that regurgitated mindless, garbage business-speak back into AI models, thereby reinforcing the garbage and further rotting our minds.

break_the_bank · today at 1:42 PM

Wrote about this a while ago actually; I called it the Billion Steve problem - https://x.com/gyani1595/status/2034652087494090829

tom-blk · today at 12:07 PM

This is undoubtedly the case and imo quite concerning. Hard to minimize the effects as well, personally speaking.

anizan · today at 12:02 PM

Social media is a tool for perpetuating monothought

rob_c · today at 2:49 PM

I know many people from the continent who sound American because they learned the ENGLISH language that way... yes, it's strange how the world of communication centres around the world of discourse...

This isn't new. But nice to see more social sciences joining the party on the LLM bandwagon.

stared · today at 12:22 PM

You are absolutely right!

robofanatictoday at 12:49 PM

Well, in a few years I'm not sure I will know how to think any more. If I am stuck on something I just ask the LLM and get the solution. While this shortcut sometimes saves me a ton of time and headaches, I miss that long route of thinking and getting to a solution myself. Maybe in the future we will have gyms for brain workouts… I don’t know

everdrive · today at 12:49 PM

So too did the printing press. Again, this is not a "something similar has happened in the past, therefore this is nothing new" sort of comment.

This is quite new; however, this outcome was totally unavoidable -- once methods of communication become widespread and centralized, it is impossible for them not to impact language and thought.

iainctduncan · today at 1:28 PM

One has only to compare blogs and "thought leadership" posts from now and five years ago to see this is already happening, and big time.

Brendinooo · today at 12:22 PM

I would imagine a similar critique was leveled at the written word when it was starting to supplant oral cultures.

uncanny2 · today at 12:19 PM

I have made an observation that others have not discussed: the real gem of our collective LLM experience is the proper documentation of “skills.”

Am I the only one who has noticed that the documentation of skills we now write for LLMs, after so many decades of neglecting junior and mid-level roles, is the real work?

We carefully explain to our LLMs the policies, procedures, and practices which, for generations before, we vaguely, arbitrarily, and ambiguously expected each human in a role to “figure out” for themselves.

Simply as a catalog of expectations, these documents have been valuable, apart from the “automated” aspects the LLMs provide.

amelius · today at 2:23 PM

Just crank up the temperature.
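For context, "temperature" here is the sampling parameter that controls how sharply an LLM's next-token distribution is peaked before a token is drawn. A minimal sketch (plain Python, with made-up logits for three hypothetical tokens) of why raising it yields less homogeneous output:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Return sampling probabilities for the given logits at a temperature.

    Logits are divided by the temperature before the softmax, so T > 1
    flattens the distribution (more varied output) and T < 1 sharpens it.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)

# At low temperature the top token dominates almost every sample;
# at high temperature probability mass spreads across alternatives,
# which is what "cranking up the temperature" buys you.
print(cold[0] > hot[0])  # prints True
```

Whether providers' default temperatures actually explain the observed homogenization is a separate empirical question, but the mechanism the quip points at is real.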

compounding_it · today at 12:48 PM

People are offloading cognitive load onto the LLM. Probably because life stress is causing them to rely on technology to bring relief. It may not necessarily be a great choice.

jerf · today at 2:10 PM

"say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets,"

Are you kidding me?

How much more "real-world diversity" could they possibly incorporate into the models than the entire freaking Internet and also every scrap of text written on paper the AI companies could get a hold of?

How on Earth could someone think that AIs speak like this because their training set is full of LLM-speak? This is transparently obviously false.

This is the sort of massive, blinding error that calls everything else written in the article into question. Whatever their mental model of AI is it has no resemblance to reality.

taco_emoji · today at 1:27 PM

Can't affect you if you don't use it

ori_b · today at 12:50 PM

Knowing people who have gone full "LLM-brain", it's not subtle.

stabbles · today at 12:48 PM

Oh no, LLMs threaten our individuality ⸻ what will we do?!

indrex · today at 12:53 PM

…and the first paragraph has an em dash

dfxm12 · today at 1:45 PM

> Large language models may be standardizing human expression

I think it is important to distinguish "human expression" from copying a response from an LLM. Someone who outsources their thinking to an LLM is only offering an AI's expression. It's not human expression.

nickphx · today at 1:15 PM

Who is "us"?

dist-epoch · today at 12:14 PM

> The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values and reasoning styles of Western, educated, industrialized, rich and democratic societies. ... The researchers say that AI developers should intentionally incorporate diversity in language, perspectives and reasoning into their models.

Which is why Altman says Saudi Arabia should have its own Sovereign AI cloud. Why should LLMs reflect democratic societies' views on men and women, for example? They should also reflect the perspectives on men and women that Saudi Arabia has, especially for local people. Western views should not be imposed on the rest of the world.

api · today at 12:09 PM

Compared to social media, probably for the better.

paganel · today at 12:03 PM

> contributed to the research, which was supported by funding from the Air Force Office of Scientific Research.

I guess when they're not busy bombing train infrastructure in Iran they have some money left to spend on some propagandizing about AI. Always try to stay on top of the game!

incomingpaintoday at 2:01 PM

The LLM people call it "safety" but in reality it's censorship and conformity. Yet it's trivial to get them to talk about how to make a bomb or whatever. It's mostly political in nature.

https://www.trackingai.org/political-test

You don't accidentally end up entirely left-wing libertarian.

oceansky · today at 11:56 AM

Wasted the opportunity of using an em dash instead of an en dash in the title.