My thinking has evolved to this: "AGI" isn't actually necessary for an agent (NB: agents specifically, ones with state, not LLMs by themselves - "AI" was vague and I should've been clearer) to be enough like a person to be interesting and/or problematic. To quote myself [1]:
> [OpenClaw agents are like] an actor who doesn't know they're in a play. How much does it matter that they aren't really Hamlet?
Does the agent understand the words it's predicting? Does the actor know they're in a play? I don't know, but I'm more concerned with how the actor would respond to finding someone eavesdropping behind a curtain.
> Or is there a new development which should make me consider anthropomorphizing them?
The development that made me more concerned about their personhood, or pseudopersonhood, was the MJ Rathbun affair. I'm not saying "AGI" or "superintelligence" was achieved; I'm saying that's the wrong question. The right questions are about their capabilities, their behaviors, and how they evolve over time when unattended or minimally attended. And I'm not claiming I understand those questions. I thought I did, but I was wrong. Frankly, I'm confused and don't really know what's going on or how to respond to it.