I've learned a lot of shit while getting AI to give me the answers, because I wanted to understand why it did what it did. It saves me a lot of time trying to fix things that would never have worked, so I can spend my time analyzing success instead.
There might be value in learning from failure, but my guess is that there's more value in learning from success, and if the LLM doesn't need me to succeed, my time is better spent pushing into territory where it fails, so I can add real value.
Do you honestly think that’s how people learn?
Take, for example, this chapter from a book on Common Lisp:
https://gigamonkeys.com/book/practical-a-simple-database
What you usually do is follow the book's instructions and get some result, then do some exploration on your own. There's no walking in the dark trying to figure out your own path.
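For context, that chapter walks you through building a tiny in-memory CD database out of property lists. A rough sketch of the flavor, written from memory, so the names and details may well differ from the book's actual code:

    ;; A global list holding the records.
    (defvar *db* nil)

    ;; A record is just a property list.
    (defun make-cd (title artist rating)
      (list :title title :artist artist :rating rating))

    (defun add-record (cd)
      (push cd *db*))

    (defun dump-db ()
      ;; Print each record as "KEY:  value" lines.
      (dolist (cd *db*)
        (format t "~{~a:~10t~a~%~}~%" cd)))

    ;; Follow the chapter and see it work...
    (add-record (make-cd "Roses" "Kathy Mattea" 7))
    (dump-db)

    ;; ...then poke at it on your own, e.g. write a selector
    ;; the text hasn't gotten to yet:
    (defun select-by-artist (artist)
      (remove-if-not (lambda (cd) (equal (getf cd :artist) artist))
                     *db*))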
Once you learn what works and what doesn't, you'll have a solid foundation for tackling more complex subjects. That's the benefit of having a good book and/or a good teacher to guide you on the path to mastery. Using a slot machine is more tortuous than that.
>I've learned a lot of shit while getting AI to give me the answers
I would argue you're learning less than you might believe. Just as people don't learn math by watching others solve problems, you're not going to become a better engineer or problem solver by reading the output of ChatGPT.