My working theory, which I hold quite confidently, is that anything that doesn't test well with new users in usability-testing focus groups or A/B testing eventually gets the axe. But the people conducting that testing are, intentionally or not, optimizing for the wrong metric: "how quickly and easily can someone who has never seen this app before figure out how to do this action?" That's the wrong thing to optimize for at a macro scale. It might make your conversions go up for a while, but at a long-term cost to usability, capability, and discoverability that enrages the very users you want to convert into advanced, loyal evangelists who spread your app by word of mouth because they love it.
When people who aren't thinking at that bigger, zoomed-out, societal scale conduct A/B testing or usability testing in a lab or focus-group setting, they focus on the wrong metrics (the ones that make an immediate, short-term KPI go up) and then promote the resulting objectively worse UX designs as evidence-based and data-driven.
This practice has been destroying software usability for the last 20 years, and it does a deep disservice to the generations growing up now, who are only rarely exposed to TRULY thoughtful UX.
I will die on this hill.