I don't have the prompt, but I used codex. I probably wrote a medium-sized paragraph explaining the architecture. It scaffolded out the app, and I think I prompted it twice more with some very small bugfixes. That got me to an MVP, which I used to build LaTeX pipelines. Since then, I've added a few features as I've dogfooded it.
It's a bit challenging / frustrating to get LLMs to build a framework/library and the app that uses it at the same time. If it hits a bug in the framework, sometimes it will rewrite the app to match the bug rather than fixing the bug. It's kind of a context balancing act, and you have to have a pretty good idea of how you want to improve things as you dogfood. It can be done, but it takes some juggling.
I think LLMs are good at golang, and also good at that "lightweight utility function" class of software. If you keep things skeletal, I think you can avoid a lot of the slop feeling when you get stuck in a "MOVE THE BUTTON LEFT" loop.
I also think that dogfooding is another big key. I coded up a calculator app for a dentist's office, which 2-3 people use about 25 times a day. Not a lot of moving parts; it's literally just a calculator. It could basically be an Excel spreadsheet, except an app is much better UX. It isn't software I'd have written by hand, really, but in about 3 total hours of vibecoding I've shipped two revisions.
If you can get something to a minimal functional state without a lot of effort, and you can keep your dev/release loop extremely tight, and you use it every day, then over time you can iterate into something that's useful and good.
Overall, I'm definitely faster with LLMs. I don't know if I'm that much faster. I was probably most fluent building web apps in Django, and I was pretty dang fast with that. With LLMs, the skill shifts to questions like "How do you build tests to prevent function drift?" and "How can I scaffold a feedback loop so that the LLM can debug itself?"
I like your pragmatic attitude to all this.
I think your prompts are 'the source' in the traditional sense, and the result of those prompts is almost like 'object code'. It would be great to have a higher-level view of source code like the one you are sketching, but then to distribute the prompt and the AI (toolchain...) used to create the code, with the code itself as just one of many representations. This would also solve some of the copyright issues, as well as possibly some of the longer-term maintainability challenges: if you need to make changes to the running system in a while, the tool that got you there may no longer be suitable, unless there is a way to ingest all of the code it produced previously and then suggest surgical strikes instead of wholesale updates.
Thank you for taking the time to write this all out; it is most enlightening. It's a fine line between 'naysayer' and 'fanboi', and I think you've found the right balance.