I am guessing: Maybe you are not used to or comfortable with delegating work?
You will certainly understand a program better when you write every line of code yourself, but that limits your output. It's a trade-off.
The part that makes it work quite well is that you can also use the LLM to better understand the code where required, simply by asking.
> I am guessing: Maybe you are not used to or comfortable with delegating work?
The difference between delegating to a human and delegating to an LLM is that a human remains liable for understanding the work, regardless of how it got done. Delegating to an LLM means you're just creating liabilities for yourself more rapidly, which can indeed be a worthwhile tradeoff, depending on the complexity of what you're losing intimate knowledge of.
I’m perfectly comfortable and used to delegating, but delegation requires trust that the result will be fit for purpose.
It doesn't have to be exactly how I would do it, but at a minimum it has to work correctly and have acceptable performance for the task at hand.
This doesn't mean it has to be super optimized, just that it shouldn't be doing stupid things like n+1 requests or database queries, etc.
See a sibling comment for one example on correctness. Another example, related to performance, was querying some information from a couple of database tables (the first with 50,000 rows, the second with 2.5 million).
After specifying things in enough detail to let the AI go, it got correct results, but performance was rather slow. A bit more back and forth and it got up to processing 4,000 rows a second.
It was so impressed with its new performance it started adding rocket ship emojis to the output summary.
There were still some obvious (to me) performance issues, so I pressed it to see if it could improve the performance. It started suggesting some database config tweaks, which provided some marginal improvements, but it was still missing some big wins elsewhere: namely, it was avoiding "expensive" joins and doing that work in the app instead, resulting in n+1 DB calls.
So I suggested getting the DB to do the join and just processing the fully joined data on the app side. This doubled throughput (8,000 rows/second) and led to claims from the AI that this was now enterprise-ready code.
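For concreteness, here's a minimal sketch of that shape. The actual stack and schema aren't shown here, so this assumes Python with the stdlib sqlite3 module and made-up accounts/transactions tables:

```python
import sqlite3

conn = sqlite3.connect("example.db")  # hypothetical database

def handle(account_id, amount):
    pass  # placeholder for the per-row processing

# n+1 pattern: one query for the parent rows, then one extra query
# per parent row, i.e. 50,001 round trips for 50,000 accounts.
def process_n_plus_one(conn):
    for (account_id,) in conn.execute("SELECT id FROM accounts"):
        for (amount,) in conn.execute(
            "SELECT amount FROM transactions WHERE account_id = ?",
            (account_id,),
        ):
            handle(account_id, amount)

# DB-side join: a single query; the database matches the rows and
# the app just processes the joined result.
def process_with_join(conn):
    query = """
        SELECT a.id, t.amount
        FROM accounts a
        JOIN transactions t ON t.account_id = a.id
    """
    for account_id, amount in conn.execute(query):
        handle(account_id, amount)
```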
There was still low-hanging fruit, though, because it was calling the DB and getting all results back before processing anything.
After suggesting switching to streaming results (good point!), we got up to 10,000 rows/second.
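Again a hypothetical sketch, since the actual database and driver aren't shown here. With Postgres and psycopg2, for instance, the before/after is roughly fetchall() versus a named server-side cursor that streams rows in batches:

```python
import psycopg2

QUERY = """
    SELECT a.id, t.amount
    FROM accounts a
    JOIN transactions t ON t.account_id = a.id
"""

def handle(row):
    pass  # placeholder for the per-row processing

conn = psycopg2.connect("dbname=example")  # hypothetical connection

# Before: fetchall() materializes every joined row in memory
# before any processing can start.
with conn.cursor() as cur:
    cur.execute(QUERY)
    for row in cur.fetchall():
        handle(row)

# After: a named (server-side) cursor fetches rows in batches of
# itersize, so fetching and processing overlap.
with conn.cursor(name="stream_rows") as cur:
    cur.itersize = 10_000  # rows per network round trip
    cur.execute(QUERY)
    for row in cur:
        handle(row)
```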
That was acceptable performance, but after a bit more wrangling we got things up to 11,000 rows/second, at which point it wasn't worth spending much extra time squeezing out more.
In the end the AI came to a good result, but at each step of the way it was me nudging it in the correct direction, and then the AI congratulating me on the incredible "world class performance" (an actual quote, though difficult to believe when you then double performance again).
If it had just been me, I would have finished it in half the time.
If I'd delegated to a less senior employee and we'd gone back and forth a bit, pairing to get it to this state, it might have taken the same amount of time and effort, but they would at least have learnt something.
Not so with the AI, however: it learns nothing, and the next time I have to make sure I re-explain things and concepts all over again, in sufficient detail that it will do a reasonable job (not expecting perfection, it just needs to be acceptable).
And so my experience so far (over much more than just these two examples) is that I can't trust the AI to the point where I can delegate enough that I don't spend more time supervising and correcting it than I would spend writing things myself.
Edit: using AI to explain existing code is something it does well. My experience is that it is much better at explaining code than producing it.
There is probably a case of both people being right here, having just gotten to, or found, different end results. For me, Claude has been a boon for prototyping stuff I always wanted to build but didn't want to do the repetitive plumbing slog to get started. But I have found you hit a level of complexity where AIs bog down and start telling you they have fixed the bug you have just asked about for the sixth time, without doing anything or bothering to check.
Maybe that's just the level I gave up at, and it's a matter of reworking the Claude.md file and other documentation into smaller pieces and focusing the agent on just little things to get past it.