> I am sure if I used LLMs "properly" "agentically", then they would have triggered the tooling around them to build and execute the code, gotten the same results as me much faster, then equally authoritatively and confidently stated that I don't need to call Dispose.
Yes, my agents usually read the source code of libraries directly when there isn't much good documentation or training-data coverage, and/or write minimal test programs, then compile and run them themselves to see what actually happens. It's quite useful.
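For example, on the Dispose question above, the kind of throwaway probe an agent writes looks roughly like this (a minimal C# sketch; `Resource` here is a made-up stand-in for whatever library type is in question, and a real probe would check the actual unmanaged resource, e.g. handle counts, rather than a counter):

```csharp
using System;
using System.Threading;

// Hypothetical stand-in for the library type under test: does skipping
// Dispose actually leak, or does a finalizer clean up after us?
class Resource : IDisposable
{
    static int _live;                        // instances not yet cleaned up
    public static int Live => _live;

    public Resource() { Interlocked.Increment(ref _live); }

    public void Dispose()
    {
        Interlocked.Decrement(ref _live);
        GC.SuppressFinalize(this);           // standard Dispose pattern
    }

    ~Resource() { Interlocked.Decrement(ref _live); }  // finalizer safety net
}

class Program
{
    static void Main()
    {
        for (int i = 0; i < 1000; i++)
            _ = new Resource();              // deliberately never disposed

        GC.Collect();                        // force a full collection...
        GC.WaitForPendingFinalizers();       // ...and let finalizers run
        GC.Collect();

        // 0 => the finalizers cleaned everything up; >0 => skipping Dispose
        // really does leave resources alive until process exit.
        Console.WriteLine($"Still live without Dispose: {Resource.Live}");
    }
}
```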
But you're right overall: an LLM inside an agent essentially provides a highly steerable, plausible prior for something like a genetic algorithm that solves problems and does automation tasks. It's not as brute-force as a classic genetic algorithm, but it can't always one-shot things; there's sometimes an element of guess-and-check. In my experience, though, that usually takes no more iterations than it would take me to figure something out (2-3 on average), although sometimes it needs more iterations than I would have on simple problems, and other times far fewer on harder ones.