Right. This whole process still appears to have a human as the ultimate outer loop.
It's still an interesting experiment to see how many of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
> But unless they've made a commitment not to prompt the agent again
Model UIs like Gemini have "scheduled actions," so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
Why wouldn't they be able to eventually set it up to work autonomously? A simple GitHub Action could run a check every $t hours, and an orchestrator is only really needed once, initially, to set up the if/then decision tree.
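The scheduled check described above could be sketched as a GitHub Actions cron workflow. Everything here is a hypothetical placeholder, not something from the project itself: the workflow name, the interval, and the `check_status.sh` script (which would do the actual status check and re-prompt the agent):

```yaml
# Sketch of a scheduled status check via GitHub Actions.
# All names and paths are illustrative assumptions.
name: crop-status-check
on:
  schedule:
    - cron: "0 */6 * * *"   # every 6 hours (GitHub cron is evaluated in UTC)
  workflow_dispatch: {}      # also allow triggering the check manually
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check status and re-prompt the agent if needed
        # Hypothetical script: inspects state, then calls the agent's API
        # with a follow-up prompt when intervention is warranted.
        run: ./scripts/check_status.sh
```

Note that scheduled workflows only remove the human from the *timer*; whatever logic decides when and how to re-prompt the agent still has to be written once up front, which is the "orchestrator" role.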