This is so true. I have been working on a project built around exactly this principle:
https://www.decisional.com/blog/workflow-automation-should-b...
I think there is a fundamental incentive problem: code + LLM + harness is bound to be more efficient, but the labs want you to burn tokens, so they are not going to tell you to use code; they will tell you to burn more tokens. They are asking us to forget about token cost and reliability for now, because the model will supposedly get better.
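To make the "code + LLM + harness" point concrete, here is a minimal sketch of what I mean (all names and the ticket-routing scenario are hypothetical, and the model call is stubbed out): deterministic code handles validation and the unambiguous cases, and the model is only invoked for the genuinely fuzzy remainder, instead of sending everything through the agent.

```python
import re

def call_llm(prompt: str) -> str:
    # Placeholder for a real model API call; stubbed so the sketch is
    # self-contained. In practice this is the only token-burning step.
    return "refund"

def handle_ticket(ticket: dict) -> str:
    # Deterministic validation: malformed input never reaches the model.
    if not ticket.get("body"):
        return "rejected:empty"
    # Deterministic routing: a regex catches the unambiguous cases for free.
    if re.search(r"\binvoice\b", ticket["body"], re.IGNORECASE):
        return "routed:billing"
    # Only the ambiguous remainder is classified by the model.
    label = call_llm(f"Classify this support ticket: {ticket['body']}")
    return f"routed:{label}"

print(handle_ticket({"body": "Please send my invoice"}))  # routed:billing
print(handle_ticket({"body": "I want my money back"}))    # routed:refund
```

The point is not the specifics: it is that every branch the code resolves deterministically is a branch that is cheaper and more reliable than a model call.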
As a result, most people believe their agent should be able to do anything with prompts + skills and a sprinkle of model fairy dust.
Unfortunately, people need to watch their agents fail in production before they come to the right conclusion.