It's a quirky article, but instead of engaging with existing sources to understand what important thoughts people have already had about these topics, the author feels the best thing to do is introduce new terms for concepts that other terms already exist for. This is basically just inductive bias plus the AI-homogenization idea producing a distribution shift.
This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.
Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.
The thing about these fear pieces is that concepts like the "hollowed mind" are reductive, and that reductionism is itself based on a reductive view of (usually other) people.
But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
I think we're excluding from this analysis the question of whether these "AI" products will remain truly unbiased and free from external (corporate) influence.
When AI gains real market share in the "think-space," I have zero trust that the corporate overlords controlling these machines will use them in humanity's best interests.
>This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
I think for a lot of us the problem is that this is not a given. It's often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.
When I read pieces like this all I think is, resistance to change is a helluva drug.
I've been working on a project and using LLMs heavily to inform my design decisions. There's already a long list of cases where it has taught me things I wasn't familiar with, alerted me to possibilities I didn't consider, shown me how to do things that I was struggling with. In those cases I ask for references, and it delivers.
This is not "endangering human development". If anything, it's the exact opposite: it allows human knowledge to be transmitted to other humans in an accessible way, in cases where that transmission usually simply would not have happened otherwise.
Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.
I'm not saying there isn't a moral dimension to all this, and areas of serious concern. But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn. The former will be better for human development.
One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.
It's possible I'm missing something, but are you saying that the author should relax and leave this to smarter people?