I have no personal experience with SRE agents, but I used Codex recently while trying to root-cause an incident after we'd put a stopgap in place. Once I had assembled a set of facts & log lines, it did the last-mile debugging of looking through the code for me, and it accurately pointed me to some code I had ignored in my mental model because it was so trivial I didn't think it could be an issue.
That experience made me think we're getting close to SRE agents being a thing.
And as the LLM makers like to reiterate, the underlying models will get better.
Which is to say: I think everyone should have some humility here, because how useful these systems end up being is very uncertain. That applies just as much to execs ingesting the AI hype.