the real takeaway is buried at the bottom: "the magic isn't in the input, it's in the system around it." random keystrokes producing playable games means the input barely matters anymore. we're basically at the point where the engineering is in the scaffolding, not the prompting.
> we're basically at the point where the engineering is in the scaffolding, not the prompting.
This still required prompting, and not from the dog. Engineering is still the holistic practice it has always been.
+ Also worth noting that the Memory.md file turned out to be a hindrance to output quality.
That also shows the delusion of some people who believe their vibe-coded projects have any value.
If generative AI improves at the rate that is promised, then all your "prompting skills" or whatever you believe you have will be obsolete. You might think you will be an "AI engineer" or whatever, and that it is other people who will lose their jobs, that you are safe because you have the magic skills to use the new tech. You believe the tech overlords will reward you for your faith.
Nope. You are just training your replacement.
No one will buy the game you vibe coded. If the tech were good enough to create games that are actually fun, then they would just generate their own games. Oh, your skill? Yeah, a dog can do it.
Yes, people will cope by saying that the initial prompt and all the setup was still hard, but that is only true right now. The tech will improve and it will get more accessible. So enjoy the few months you are still relevant.
Of course, there is reason to believe that you can't scale up LLMs endlessly and that bigger models hit diminishing returns; in fact, we might already be seeing this. So there is an upside, but then again, when the AI bubble pops and the economy crashes, you will be out of a job all the same.
> the engineering is in the scaffolding, not the prompting
Well, yes. Feeding random tokens as prompts until something good comes out is a valid strategy.
This matches what I've been finding building AI-integrated systems. The persistent memory, behavioral constraints, and feedback loops around the model do more for output quality than any prompt optimization ever did.
The dog experiment takes this to its logical conclusion: if random keystrokes produce playable games, the "intelligence" was never in the input. We spent two years obsessing over prompt engineering when the real discipline was always system architecture. The scaffolding isn't supporting the AI, it IS the AI's capability.
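To make that claim concrete, here's a minimal sketch of the idea, under loud assumptions: the "model" is deliberately just a random token generator (standing in for arbitrary input like the dog's keystrokes), and every name here (`run_with_scaffolding`, `is_valid`, the token list) is made up for illustration, not taken from any real system. The point is that the validator and retry loop around the generator, not the generator itself, determine output quality.

```python
import random

def random_model(memory):
    # Stand-in for arbitrary input (the dog's keystrokes):
    # it knows nothing and just emits random tokens.
    tokens = ["move", "jump", "score", "???", "reset"]
    return [random.choice(tokens) for _ in range(4)]

def is_valid(output):
    # Behavioral constraint: reject anything containing a garbage token.
    return "???" not in output

def run_with_scaffolding(model, max_attempts=100):
    memory = []  # persistent memory carried across attempts
    for _ in range(max_attempts):
        output = model(memory)
        if is_valid(output):
            # Feedback loop: only constraint-passing outputs escape.
            return output
        # Remember failures; a real system could use these to steer retries.
        memory.append(output)
    return None

if __name__ == "__main__":
    random.seed(0)
    result = run_with_scaffolding(random_model)
    print(result)
```

Even with a generator this dumb, the loop reliably returns a valid output; swap in a stronger model and the same scaffolding (memory, constraints, feedback) is still what shapes the result.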