
the_af · today at 8:42 PM

This is an extremely cute, cool and fun experiment. Kudos.

That said, I wonder: does the dog input matter? It seems this is simply surfacing Claude's own encoded assumptions of what a game is (yes, the feedback loop, controls, etc, are all interesting parts of the experiment).

How would this differ if instead of dog input, you simply plugged /dev/random into it? In other words, does the input to the system matter at all?

The article seems to acknowledge this:

> If there’s a takeaway beyond the spectacle, it’s this: the bottleneck in AI-assisted development isn’t the quality of your ideas - it’s the quality of your feedback loops. The games got dramatically better not when I improved the prompt, but when I gave Claude the ability to screenshot its own work, play-test its own levels, and lint its own scene files.

I'll go further: it's not only not "the bottleneck", it simply doesn't matter. The dog's ideas certainly didn't matter, and the dog didn't think of the feedback loop for Claude either.


Replies

alexhans · today at 9:22 PM

This fun exercise might actually be extremely insightful as an educational vehicle around AI and intent.

It can also help combat the excessive emphasis on "end to end" demos on Twitter that don't really correspond to a desired, quality outcome. Generating things is easy if you're willing to spend tokens. Proper product building and maintenance is a different exercise, and finding ways to differentiate between the two will be key in a high-entropy world.

> I'll go further: it's not only not "the bottleneck", it simply doesn't matter. The dog's ideas certainly didn't matter, and the dog didn't think of the feedback loop for Claude either

Absolutely. The scientific test would be to feed in any other signal and compare the outcomes. Brown noise, rain, a random number generator, whatever.
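A minimal sketch of what such control signals might look like, assuming the pipeline just consumes a stream of input samples (the function names and shapes here are hypothetical, not from the article):

```python
import os
import random


def urandom_signal(n_bytes: int = 16) -> bytes:
    """Stand-in for the dog: bytes from the OS entropy pool
    (morally equivalent to reading /dev/random on Linux)."""
    return os.urandom(n_bytes)


def brown_noise_signal(n_samples: int = 16, seed: int = 0) -> list[float]:
    """Another control: a random walk, i.e. simple brown noise."""
    rng = random.Random(seed)
    samples, level = [], 0.0
    for _ in range(n_samples):
        level += rng.uniform(-1.0, 1.0)
        samples.append(level)
    return samples
```

If the generated games look just as good when driven by either of these as by the dog, the input channel is doing no real work and the interesting machinery is entirely in the feedback loop.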