Symbolic processing was obviously a bad approach to building a thinking machine. Well, obvious now; 40 years ago probably not so much, but there were strong hints back then, too.
"AI agent" roughly just means invoking the system repeatedly in a while loop, and giving the system a degree of control when to stop the loop. That's not a particularly novel or breakthrough idea, so similarities are not surprising.
When “invoking” becomes “evolving”, I think that remains very fertile ground.
I'm not convinced that symbolic processing doesn't still have a place in AI, though. My feeling about language models is that, while they can be eerily good at solving problems, they're still not as capable of maintaining logical consistency as a symbolic program would be.
Sure, we obviously weren't going to get to this point with only symbolic processing, but it doesn't have to be either/or. I think combining neural nets with symbolic approaches could lead to some interesting results (and indeed I see some people are trying this, e.g. https://arxiv.org/abs/2409.11589).
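For what it's worth, one simple pattern people mean by "combining" the two looks roughly like this: the model proposes, a symbolic checker verifies. This is just a sketch of that propose-and-verify idea, not the method from the linked paper; ask_model is a hypothetical stand-in for an LLM call, and the constraints are whatever exact checks the domain admits.

    # Sketch of a neural-proposes / symbolic-verifies loop.
    # ask_model is a hypothetical stand-in for a language-model call.

    def ask_model(prompt):
        """Hypothetical: return the model's candidate answer for the prompt."""
        raise NotImplementedError

    def check(candidate, constraints):
        # Symbolic side: every constraint is checked exactly, so a logically
        # inconsistent candidate gets rejected rather than waved through.
        return all(constraint(candidate) for constraint in constraints)

    def propose_and_verify(prompt, constraints, max_attempts=5):
        feedback = ""
        for _ in range(max_attempts):
            candidate = ask_model(prompt + feedback)   # neural: flexible but fallible
            if check(candidate, constraints):          # symbolic: rigid but reliable
                return candidate
            feedback = "\nThe previous answer violated a constraint; try again."
        return None

The neural part supplies the flexibility, the symbolic part supplies the guarantees, and neither has to do the other's job.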