The point they seem to be making is that AI can "orchestrate" the real world even if it can't interact physically. I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
However, even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it, because unlike a Claude output I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
Right. This whole process still appears to have a human as the ultimate outer loop.
Still an interesting experiment to see how many of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
I think that’s the point, though. If they succeed in the experiment, they won’t need to give the same instructions again; the AI will handle everything based on what happened, and probably learn from its mistakes for the next round(s).
Then the "do everything necessary to grow …" prompt you described would be a matter of "when?", not "can?"
So Seth, as presumably a non-farmer, is doing professional farmer's work all on his own without prior experience? Is that what you're saying?
Yes. In other words, this is a nice exemplification of the issue that AI lacks world models. A case study to work through.
Would be crazy if it looked through satellite imagery and was like "buy land in Africa" or whatever and got a farm going there.
Another way to look at it is that Seth is a Tool that Claude can leverage.
These experiments always seem to end up requiring the hand-holding of a human at the top, seemingly undermining the idea behind the experiment in the first place. It seems better to spend the time and energy finding better ways for AI to work hand-in-hand with the user, empowering them, rather than hunting for the areas where we could replace humans with as little quality degradation as possible. That whole pursuit feels like a race to the bottom, instead of making it easier for the people involved to do what they do.