Yeah, realized this the first time I used an LLM to code. I've not used them since. No matter how good it gets, it's dangerous to lose touch with my own intelligence.
This is just being lazy. I like to use Claude and Gemini to have debates and test ideas. If you do it right you can learn new things with every chat.
The very next entry on the homepage, just below this one: "The danger of military AI isn't killer robots; it's worse human judgement"
This is exactly the same as people who drive their car into a river because Google Maps told them to.
Funny, the author of this piece was one of the two on the byline of the Ars article with the AI-fabricated quotes.
The cognitive surrender is the most predictable outcome. Many here will claim they'll rise above the path of least resistance and use AI responsibly, and even if that is true for many here, think about the most typical worker. Those who only want to go home at 5 after putting the least amount of effort into their job. Our society is about to be rewritten by them.
Don't know about that research but I certainly have read many HN comments made by those who drank the AI kool-aid (and I write this as someone using Claude Code CLI daily) where any semblance of logical thinking was gone.
How I imagine "wololo" would practically work
This sounds like FUD to get people to abandon one of our strongest cognitive-enhancing tools of all time.
I work in a creative field, and we've started to get a lot of clients using AI to generate initial concepts for us to build upon. The problem is, they're not actually thinking about these concepts; they're just generating until they see something they like.
Then, we have meetings where we will ask a basic but specific question about what they want us to make, and we're just met with blank stares. They have no answers, because they've never actually thought about it.
And then everyone else needs to do the thinking for them.