Staff engineer (also at FAANG), so yes, I have at least comparable experience. I'm not trying to summarize every level of SWE in a few sentences. The point is that AI's fallibility is no different from human fallibility. You may fire a human for a mistake, but that won't solve the business problems they may have created, so I believe the accountability argument is bogus. You can hold the next layer up accountable. The new models are startlingly good at direction setting, technical-to-product translation, providing leadership guidance on technical matters, and offering multiple routes around roadblocks.
We're starting to see engineers who hit bugs and roadblocks feed them into AI, which not only root-causes the problem but suggests and implements the fix and takes it into review.
Surely at some point in your career as a SWE at FAANG you had to "dive deep," as they say, and learn something that wasn't part of your "training data" to solve a problem?