There are two major reasons people don't show proof of the impact of agentic coding:
1) The prompts/pipelines pertain to proprietary IP that may or may not be allowed to be shown publicly.
2) The prompts/pipelines are boring and/or embarrassing, and showing them will dispel the myth that agentic coding is this mysterious magical process and open people up to dunking.
For example, in the case of #2, I recently published the prompts I used to create a terminal MIDI mixer (https://github.com/minimaxir/miditui/blob/main/agent_notes/P...) in the interest of transparency, but those prompts correctly indicate that I barely had an idea of how MIDI mixing works, and in hindsight I'm surprised I didn't get harassed for it. Given the contentious climate, I'm uncertain how often I will be open-sourcing my prompts going forward.
> The prompts/pipelines are boring and/or embarrassing and showing them will dispel the myth that agentic coding is this mysterious magical process
You nailed it. Prompting is dull and self-evident. Sure, you need basic skills to formulate a request. But it's not a science and has nothing to do with engineering.
No. The main reasons are that
1) the code AI produces is full of problems, and if you show it, people will make fun of you, or
2) if you actually run the code as a service people can use, you'll immediately get hacked by people eager to prove that the code is full of problems.
I'm fundamentally a hobbyist programmer, so I would have no problem sharing my process.
However, I'm not nearly organized enough to save all my prompts! I've tried to do it a few times for my own reference. The thing is, when I use Claude Code, I do a lot of:
- Going back and revising a part of the conversation and trying again—sometimes reverting the code changes, sometimes not.
- Stopping Claude partway through a change so I can make manual edits before I let Claude continue.
- Jumping between entirely different conversation histories with different context.
And so on. I could meticulously document every action, but it quickly gets in the way of experimentation. It's not entirely different from trying to write down every intermediate change you make in your code editor, between actual VCS commits.
I guess I could record my screen, but (A) I promise you don't actually want to watch me fiddle with Claude for hours and (B) it would make me too self-conscious.
It would be very cool to have a tool that goes through Claude's logs and exports some kind of timeline in a human-readable format, but I would need it to be automated.
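For what it's worth, here's a rough sketch of what I mean. It assumes (and this is purely my guess, not documented behavior) that Claude Code keeps session transcripts as JSONL files under ~/.claude/projects/, with each line holding a timestamp, a type, and a message object with a role and content. The field names are assumptions; treat this as a starting point, not a working tool.

```python
import json
from pathlib import Path

# Assumed location of Claude Code session logs -- adjust if yours differ.
LOG_DIR = Path.home() / ".claude" / "projects"

def iter_events(session_file: Path):
    """Yield (timestamp, role, text) for each entry in one JSONL session log."""
    for line in session_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip truncated or corrupt lines
        msg = entry.get("message") or {}
        content = msg.get("content", "")
        if isinstance(content, list):
            # Content may be a list of blocks; keep only the text pieces.
            content = " ".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict)
            )
        yield (
            entry.get("timestamp", "?"),
            msg.get("role", entry.get("type", "?")),
            str(content),
        )

def export_timelines() -> None:
    """Print a one-line-per-event timeline for every session found."""
    for session in sorted(LOG_DIR.rglob("*.jsonl")):
        print(f"\n=== {session.stem} ===")
        for ts, role, text in iter_events(session):
            snippet = " ".join(text.split())[:120]  # collapse to one line
            print(f"{ts}  [{role}] {snippet}")

if __name__ == "__main__":
    export_timelines()
```

Run something like that after each session (or on a schedule) and you'd get the automated export I'm wishing for, assuming the log format cooperates.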
---
Also, if you can't tell from the above, my use of Claude is very far from "type a prompt, get a finished program." I do a lot of work in order to get useful output. I happen to really enjoy coding this way, and I've gotten great results, but it's not like I'm entering a prompt and then taking a nap.
Could you clarify that last paragraph for me? I'm not sure what "contentious climate" means here. AI anti-hype? I also don't understand the connection to not being harassed for something; isn't that a good thing, rather than something that would make you uncertain about sharing prompts in the future?
Or 3) it's to my competitive advantage to keep my successes close to my chest.
You weren't harassed for it because (1) it is interesting and (2) you were not hiding the AI involvement and passing it off as your own.
The results (for me) are very much hit-and-miss, and I still see it as a means of last resort rather than a reliable tool whose upsides and downsides I know. There is a pretty good chance you'll be wasting your time, and every now and then it really moves the needle. It is examples like yours that actually help to properly place the tool amongst the other options.