That's a great answer that offers concrete insight into what design thinkers are trying to achieve. And it seems like they have a chance to succeed if they also employ iterative experimental methods to learn whether their mental model of user experience is incorrect or incomplete. Do they?
Traditionally you iterate on a lot of paper and experiential prototypes, which doesn't cover everything but helps refine assumptions. I sometimes like to start by mocking up downstream output, such as reports and report data; it's a quick way to test specific assumptions about the client's operations and strategic goals, which in turn can shape the detailed project. When I can, I also iterate using scenario-based wargaming, especially for complex processes with a lot of handoffs and edge cases; it lets us "chaos monkey" situations and stress-test our assumptions.
More than once, early iterations have led me to call off a project and tell the client they'd be wasting their money with us. These were problems that could be solved more effectively internally (through process, education, or cultural changes), that weren't going to be effectively addressed by the proposed project, or, quite often, that revealed what the client wanted was not what they actually needed.
Increasingly, AI technical/functional prototyping is making its way into the early design process where we'd traditionally build clickable prototypes, letting us put cheap working prototypes in front of users to test-drive and give feedback on. I like to iterate aggressively on the data schema up front, so this fits well with my bias toward getting the database and query models largely built during the design effort, based on domain research and collaboration.
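As a hypothetical sketch of what schema-first iteration can look like (the tables, columns, and sample rows are invented for illustration, not from any real engagement), you can stand up a candidate schema in an in-memory database and exercise the report queries against mock data before any UI or application code exists:

```python
import sqlite3

# Hypothetical schema draft for a client-engagement domain;
# all names here are illustrative assumptions.
SCHEMA = """
CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE engagement (
    id INTEGER PRIMARY KEY,
    client_id INTEGER NOT NULL REFERENCES client(id),
    started TEXT NOT NULL,               -- ISO date
    status TEXT NOT NULL DEFAULT 'active'
);
"""

# A query the eventual reports would need; running it against mock rows
# stress-tests the schema design long before real data arrives.
REPORT_QUERY = """
SELECT c.name, COUNT(e.id) AS active_engagements
FROM client c
LEFT JOIN engagement e
    ON e.client_id = c.id AND e.status = 'active'
GROUP BY c.id
ORDER BY c.name;
"""

def prototype_report():
    con = sqlite3.connect(":memory:")
    con.executescript(SCHEMA)
    con.executemany("INSERT INTO client (id, name) VALUES (?, ?)",
                    [(1, "Acme"), (2, "Globex")])
    con.executemany(
        "INSERT INTO engagement (client_id, started, status) VALUES (?, ?, ?)",
        [(1, "2024-01-15", "active"),
         (1, "2024-03-02", "closed"),
         (2, "2024-02-10", "active")])
    return con.execute(REPORT_QUERY).fetchall()

print(prototype_report())  # [('Acme', 1), ('Globex', 1)]
```

When the mocked report comes back and the client says "that's not the number I care about," you've just invalidated an assumption for the cost of a few minutes, not a build cycle.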