Hacker News

The Social Edge of Intelligence: Individual Gain, Collective Loss

49 points by ForHackernews today at 10:08 AM | 50 comments

Comments

michaelbuckbee today at 12:05 PM

This feels like a restatement of the idea that, for any given endeavor, AI raises the floor of quality but doesn't push up the ceiling.

quinndupont today at 11:51 AM

The rise of AI writing has been matched only by a rise in superficial articles composed of idea salad that evince no deep theoretical or historical understanding. Crappy writing always has existed and always will; AI doesn’t change that, it just makes awful writing grammatical.

kreelman today at 10:57 AM

Just wondering... What is Intelligence?

caditinpiscinam today at 11:05 AM

Generative AI is the average of all human knowledge

jdw64 today at 11:27 AM

Human intelligence is fundamentally motivated by fear and desire, whereas AI operates on an entirely different paradigm. AI lacks human embodiment, and it lacks the political landscapes born out of complex social relationships. Can we truly equate AI's 'intelligence' with what humans call intelligence? Should we even be calling its functionality 'intelligence' at all?

The author argues that overreliance on AI will degrade the overall intelligence of human society, creating a negative feedback loop where future models train on increasingly degraded human data. I agree with this perspective to some extent. However, to definitively claim that human intelligence will only decline is overly simplistic. Rather, we might be about to witness a different facet—or the flip side—of what we have traditionally defined as intelligence.

Socrates once argued that the invention of writing would degrade the essence of human thought and memory. It is true that our capacity for raw memorization declined, but the act of recording enabled knowledge to be transmitted across generations. Couldn't LLMs represent a similar evolutionary trajectory?

It is undeniably true that LLMs atrophy certain cognitive muscles. However, I believe they catalyze development in other areas. In modern society, human discovery and knowledge are effectively monopolized by specific cliques. Without access to prestigious Western journals or incumbent tech giants, the barrier to entry is immense. The open-source community is no exception. For non-native English speakers, breaking into the open-source culture to access shared knowledge is notoriously difficult. But now, by spending a few dollars on an LLM, I can access the collective knowledge of that open-source ecosystem, translated seamlessly into my native language.

There is an old adage in the Korean Windows community: 'Linux is open, but it is not free.' And it’s true. To use Linux, you had to memorize arcane commands, and due to the lack of proper Korean documentation, the learning curve was vastly steeper than Windows. That very learning curve acted as a gatekeeping wall. LLMs explicitly dismantle that wall.

But this dismantling is a two-way street, and it exposes a fatal flaw in the author’s reliance on Shumailov’s 'Model Collapse' theory. The author claims AI compresses the tails of the data distribution, erasing minority viewpoints. What this ignores is that LLMs act as a conduit for cognitive diversity from the non-Western periphery. When a developer in South Korea or Brazil uses an LLM to translate their culturally embedded logic and problem-solving approaches into fluent English, they are injecting entirely new cognitive patterns into the global corpus. This does not compress the tails of the distribution; it actively thickens and extends them by capturing the 'social mind' of populations previously locked out of the internet's primary, English-dominated datasets.

Furthermore, LLMs function as a tool to re-evaluate things we've historically taken for granted—especially in areas that are too complexly intertwined, socio-politically loaded, or vast for the human mind to fully map. Take DeepMind's AlphaDev discovering a faster sorting algorithm as an example; it was a breakthrough achieved precisely because it reasoned from an alien, non-human perspective.
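
For context on the AlphaDev example: the routines it improved were tiny fixed-length sorting networks (sort3, sort4, sort5) inside LLVM's libc++, optimized at the assembly-instruction level. Below is a minimal Python sketch of the classic three-comparator network, just to show the kind of object being optimized; AlphaDev's actual gain was shaving instructions off sequences like this, not inventing the network:

    def sort3(a, b, c):
        # Three compare-exchange steps: the fixed-length,
        # branch-free shape that AlphaDev optimized in assembly.
        b, c = min(b, c), max(b, c)  # compare-exchange (b, c)
        a, c = min(a, c), max(a, c)  # compare-exchange (a, c)
        a, b = min(a, b), max(a, b)  # compare-exchange (a, b)
        return a, b, c

    assert sort3(3, 1, 2) == (1, 2, 3)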

Human learning is fundamentally bottlenecked by environment and bias. Anyone who has interacted with academia knows it is riddled with pervasive prejudices and systemic inefficiencies. In South Korea, for instance, there is an entrenched bias that only researchers with US pedigrees are legitimate, and only papers in specific Western journals matter. This prejudice has prematurely killed countless promising research initiatives. It makes you wonder if the metrics we have long held up as 'superior' or 'correct' are actually deeply flawed. Modern society is too complex for the 'lone genius' model; paradigm shifts now require the intertwined research of multiple collectives. Yet, during this process, political interests often cause dominant groups to gatekeep and exclude others, completely regardless of scientific efficiency. In this context, an AI that lacks our inherent socio-political biases and optimizes purely based on probabilities can actually drive true breakthroughs.

Given all this, the absolute claim that AI unconditionally degrades human intelligence feels flawed. I seriously question whether the 'total sum' of human intelligence is actually experiencing a meaningful decline. Before making such claims, we desperately need to define what 'intelligence' actually means in this new context. The fatal flaw in current AI discourse is the complete lack of nuance—there is no middle ground. Everything is framed as a binary: either purely utopian or purely apocalyptic.

Speaking from personal experience, my cognitive muscle for writing raw code has atrophied because of AI. However, as a non-native English speaker, I used to struggle immensely with naming conventions. Now, my variable naming and overall architectural design capabilities have vastly improved. Conversely, I acutely feel my skills in manual memory layout management and granular code implementation degrading. The trade-off point will be wildly different for every individual.

Whenever I read doom-saying articles like the author's, I can't shake the feeling that they are simply projecting their own subjective anxieties and trying to pass them off as a universal conclusion.

DeathArrow today at 11:47 AM

>In 2024, Ilia Shumailov and colleagues published a paper in Nature with a straight-talking title: AI models collapse when trained on recursively generated data.

Of course, the models are not intelligent. Their generated output reflects the statistical average, and averaging over and over loses a lot of information.
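
A toy way to see that loss (my own sketch, not the setup from the Nature paper): fit a Gaussian to some data, train the next "generation" only on samples from the fitted model, and repeat. The maximum-likelihood variance estimate is biased low, so the tails get squeezed a little every generation:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=200)  # generation 0: "human" data ~ N(0, 1)

    for gen in range(1, 31):
        mu, sigma = data.mean(), data.std()     # fit a Gaussian "model" (MLE)
        data = rng.normal(mu, sigma, size=200)  # next generation sees only model output
        if gen % 10 == 0:
            print(f"generation {gen:2d}: fitted sigma = {sigma:.3f}")

    # sigma drifts downward across generations: the tails collapse toward the mean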

intended today at 11:00 AM

Hey, the more we think about our information economy/environment as a commons, the better.

I fully expect our future to involve PhD factories where doctorate holders label AI output at the most competitive rates possible.

The majority of us will have to contend with an information environment that is polluted and overrun.

I’ll argue that the pre-social-media internet was the “healthiest” in terms of our digital commons.

bsenftner today at 11:04 AM

I'll say it again: because we place no material focus on pragmatic, effective communication that structures disagreement (people are not taught how to discuss disagreement), not only is our current AI massively misunderstood, but the general population lacks the precise language skills to use AI without massive hallucination issues. They believe they are in control, yet lack the nuanced understanding of language to, well, understand.

The reason is that when people are taught how to disagree effectively, all of these counterfactual concepts that AI loses become manifest; they are logically necessary. But if people are not taught how to explore the landscape of ideas, they become "fascists for the common" and literally create the hellscape civilization we are all trapped within.

geremiiah today at 11:32 AM

We are already on the cusp of fully automated reasoning, and once we have it, OpenAI and Anthropic can simply dedicate part of their compute to generating new, high-quality, novel output, which can then be fed in as training data during pretraining of subsequent models.

Lerc today at 11:18 AM

There is a fundamental assumption made about the ability of AI here that I believe is wrong.

It assumes that the outputs are lacking because of a limit of ability.

I think there is a strong case to be made that many of their limitations come from them doing what we have told them to do. Hallucinations are the standout example. If you train a model to give answers to questions, it will answer questions, but it might have to make up the answer to do so. This isn't a case of not knowing that it does not know; it is doing the task given to it regardless of whether it knows or not.

Suppose you were given the task of writing the script for a TV show with the criterion that it not offend anyone whatsoever. You are told to make something that is as likeable as you can make it without anyone disliking it at all. The options for what you can do are reduced to something that is okay-ish but rather bland.

That's what AI is giving us. OK but rather bland. It's giving it to us because that's what we've told it we want.
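
That criterion can be phrased as an objective function: maximize the worst-case rating rather than the average rating. A hypothetical toy with invented numbers (not a claim about any real training setup) shows how that objective mechanically picks the bland option:

    import numpy as np

    rng = np.random.default_rng(1)
    # Four "bold" scripts: ~70% of 100 viewers love them (1.0), ~30% hate them (0.0).
    bold = (rng.random((4, 100)) < 0.7).astype(float)
    # One "bland" script: every viewer thinks it's merely fine (0.6).
    bland = np.full((1, 100), 0.6)
    ratings = np.vstack([bold, bland])  # row 4 is the bland script

    print(ratings.mean(axis=1).argmax())  # average rating: a bold script wins (~0.7 > 0.6)
    print(ratings.min(axis=1).argmax())   # offend-no-one rating: bland wins (prints 4)

Same ratings, different objective: "don't offend anyone" is a maximin criterion, and the maximin winner is the script nobody hates and nobody loves.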
