IME "Risk 2: Distributed Monolith" always comes back to bite. You have a nice separation of concerns at first, but then a quarter later there's some new feature request that cuts across those boundaries, and you're forced into distributed monolith territory.
Then the problem is you can't refactor it even if you wanted to, because other teams have taken dependencies on your existing APIs etc. Cleaning up becomes a risky quarter-long multi-team project instead of a straightforward sprint-long ticket.
I think AI is going to reverse the microservice trend too. The main problem microservices solve is letting teams work more independently: deployments, and especially rollbacks when there's a bug, can be quick for a microservice but take lots of coordination for a monolith. With AI (once/if it gets better), I imagine project work becoming a lot more serial, since agents work so much faster, and you can deploy one project at a time. That means a lot less chance of overlapping or incompatible changes that block monolith rollouts for weeks until they're resolved, and a lot less feature-flagging of every single little change, since you can just roll back the deployment if needed. Plus, a single codebase will be a lot easier for a single AI system to understand and work with e2e.
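To make the flag overhead concrete, here's a rough sketch of the per-change gating I mean, in Go. Every name in it (Flags, Order, Money, the "new-pricing" key) is invented for illustration, not taken from any real system:

    package pricing

    // Per-change feature flagging in a shared monolith: even a small
    // change gets gated so one team's bad rollout can be switched off
    // without rolling back everyone else's work in the same deploy.

    type Order struct{ Subtotal int64 }
    type Money = int64

    // Flags stands in for whatever flag service the team already runs.
    type Flags interface{ Enabled(key string) bool }

    func Price(f Flags, o Order) Money {
        if f.Enabled("new-pricing") {
            return newPricing(o) // new path, dark-launched behind the flag
        }
        return legacyPricing(o) // old path stays alive until the flag is retired
    }

    func newPricing(o Order) Money    { return o.Subtotal * 95 / 100 }
    func legacyPricing(o Order) Money { return o.Subtotal }

If deploys are serial and rollback is cheap, Price shrinks to a single call: the flag check, the zombie legacy path, and the eventual flag-cleanup ticket all go away.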
First two paragraphs are 100% correct. The third, I'm not so sure about. The jury is still out, I feel.
I have yet to see this mysterious phenomenon of AI working so much faster and better than high-performing teams. What I see so far is sloppy code, poor tests, and systems that don't scale in the long run.