We've had AI retrieval for two decades--this is the first time you can outsource your intelligence to a program. In the 2000s and 2010s, the debate was "why memorize when you can just search and synthesize." The debate now is "why even think?" (!)
I think it's obvious why it would be bad for people to stop thinking.
1. We need people to be able to interact with AI. What good is it if an AI develops some new cure but no one understands or knows how to implement it?
2. We need people to scrutinize an AI's actions.
3. We need thinking people to drive further advances in AI, too.
4. There are a lot of subjective ideas for which there are no canned answers. People need to think through these for themselves.
5. Also, a world of hollowed-out humans who can't muster the effort to write a letter to their own kids terrifies me[0]
I could think of more, but you could also easily ask ChatGPT.
[0]: https://www.forbes.com/sites/maryroeloffs/2024/08/02/google-...
I'd argue that most humans are terrible at thinking. It's actually one of our weakest and most fragile abilities. We're only rational because our intelligence is collective, not individual. Writing and publishing distribute and distill individual thinking, so good and useful ideas tend to linger while the noise gets ignored.
What's happening at the moment is an attack on that process, with a new anti-orthodoxy of "Get your ideas and beliefs from polluted, unreliable sources."
One of those polluted sources is the current version of AI. It's good at the structure of language without having a reliable grasp of the underlying content.
It's possible future versions of AI will overcome that. But at the moment it's like telling kids "Don't bother to learn arithmetic, you'll always have a calculator" when the calculator is actually a random number generator.