> Being able to type quickly and accurately reduces […]
LLMs can generate code quickly, but there's no guarantee that it's syntactically, let alone semantically, correct.
> I feel that I'm learning faster because I'm not tripping over silly little things.
I'm curious: what have you actually learned from using LLMs to generate code for you? My experience is completely the opposite. I learn nothing from running generated code unless I dig in and try to understand it, which happens more often than not, since I'm forced to review and fix it anyway. So in practice, it rarely saves me time and energy.
I do use LLMs for learning and understanding code, i.e. as an interactive documentation server, but that's not the use case you're describing. And even then, I have to verify the information against the real API and usage documentation, since it's often hallucinated, outdated, or plain wrong.
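As a sketch of that verification step (the `check_claim` helper is mine, not any library's API): you can at least confirm that an attribute the model names actually exists in the installed library, e.g. with Python's `inspect`:

```python
import importlib
import inspect

def check_claim(module_name: str, attr: str) -> None:
    """Check whether an attribute the model claims exists is really
    there, and print its actual signature from the installed library."""
    mod = importlib.import_module(module_name)
    fn = getattr(mod, attr, None)
    if fn is None:
        print(f"{module_name}.{attr} does not exist -- likely hallucinated")
    else:
        print(f"{module_name}.{attr}{inspect.signature(fn)}")

# Suppose a model claims json has a `loads_file` helper:
check_claim("json", "loads_file")  # -> does not exist -- likely hallucinated
check_claim("json", "load")        # -> the real signature of json.load
```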
> LLMs can generate code quickly, but there's no guarantee that it's syntactically, let alone semantically, correct.
This has been a non-issue for so long, thanks to self-correcting models and in-context learning, that repeating it today suggests badly out-of-date priors.
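To make "self-correcting" concrete: a minimal sketch of the generate-check-retry loop that agentic coding tools run. `llm_complete` is a hypothetical stand-in for whatever completion API you use, and a real tool would run the code and its tests, not just a syntax check:

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in; plug in your provider's client here."""
    raise NotImplementedError

def generate_working_code(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write Python code for: {task}\nReturn only the code."
    for _ in range(max_attempts):
        code = llm_complete(prompt)
        try:
            # Cheap mechanical check; a real agent would also run tests.
            compile(code, "<generated>", "exec")
            return code
        except SyntaxError as err:
            # In-context learning: feed the error back so the model
            # can correct itself on the next attempt.
            prompt = (
                f"This code failed with: {err}\n\n{code}\n\n"
                "Fix it and return only the corrected code."
            )
    raise RuntimeError("no syntactically valid code after retries")
```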
> I'm curious: what have you actually learned from using LLMs to generate code for you?
I learn whether my design works. Some of the things I plan would take hours to type out and test. Now I can just ask the LLM; it throws out a working, compiling solution, and I can test that without spending my waking hours on silly things. I can glance at the code and see whether it's right or wrong.
If there are internal contradictions in the design, I find that out as well.
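For example (a toy sketch; `LRUCache` here is just a stand-in for model-generated code): I write a test that encodes the design, let the model fill in the implementation, and a passing run tells me the design holds together:

```python
from collections import OrderedDict

class LRUCache:
    """Stand-in for what the model would generate from the prompt
    'LRU cache with a fixed capacity'."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

# The test I write myself: it encodes the design, not the implementation.
def test_lru_evicts_oldest():
    cache = LRUCache(capacity=2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.put("c", 3)  # over capacity: "a" should be evicted
    assert cache.get("a") is None
    assert cache.get("b") == 2
    assert cache.get("c") == 3

test_lru_evicts_oldest()  # a passing run means the design works
```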