> AI killed curiosity.
Only if you let yours be killed.
There will always be a demand for high-value signal, even though it might not be as easy to find anymore. But then again, has it ever been?
> Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so.
I have sympathy for that argument when it comes to locked bootloaders, closed-source software etc., but with AI? How? Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?
I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.
> I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.
This is what gets me every single time. I genuinely don’t think this is a hard realization to come to, and yet, the vast majority of arguments from both sides of the aisle, both proponents and antis, always assume that EITHER you do everything yourself, OR you have the AI do everything for you. If you use AI, you’re DOOMED to never think critically about anything anyone ever tells you ever again. If you don’t, you’re an idiot, because everyone else is using it, and skills and experience no longer matter because everyone can now do everything.
And this is on HN, too; supposedly, a site where experienced engineers, developers, and builders converge; the exact kind of demographic you’d expect to understand such a thing as nuance. And yet, your comment is one of very few. There’s someone RIGHT HERE, a few comments down, saying, verbatim, “it’s a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.” Treating curiosity as the end rather than the means, as if I stop being a curious person once I find an answer to a question I’ve been asking myself, or as if curiosity is some sort of “temporary status effect” that an answer/solution “consumes.”
And it seems to be worse than just “no one’s thought it through properly.” I’ve literally had someone show a fundamental incapability to understand the concept. I spent a non-trivial amount of effort writing out three comments with several paragraphs about how knowing your knowns and unknowns, and the fact that you have unknown unknowns, is the most important thing in any project, not just when it comes to AI. That these tools aren’t just doers, but also searchers. That they’re pretty much the best rubber ducky that’s ever been created, and that I argue a rubber ducky is exactly how you should be using them in any context where you aren’t having them automate trivial, testable work. The guy refused to read any of it and, after three walls of text, continued claiming I’m “advocating for the LLM to guide me.” There is some sort of deeply instinctive and intrinsically defensive reflex that a lot of people seem to immediately collapse into when the topic comes up, and it seems to seriously impair the ability to acknowledge nuance or concede a single fraction of an inch. It’s baffling.
> Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?
Microsoft owns Copilot and controls GitHub, LinkedIn, etc.
Google owns Gemini and controls search results for most of the web
Meta owns whatever their model is named now and controls person-to-person relationships on the web
etc
Any of them can flip the switch and make AI the default entry point the moment they decide it isn't gaining enough traction. And then the source data can simply be hidden away as proprietary information. Is it cynical? Sure, but I don't think we can say it's unlikely.