Not GP, but I agree with him and I will expand.
It's not that we don't know how to use AI. We have, and the results can sometimes be very good (mostly because we know what's good and what's not). What pushes us away from it is its unreliability. Our job is to automate workflows (the business's and some of our own) so that people can focus on the important matters and have the relevant information to make decisions.
The defect of LLMs is that you have to monitor their whole output. It's like driving a car where the steering wheel is loosely connected to the front wheels and the position for straight ahead varies all the time. Or, in the case of agents, it's like sleeping on a plane and finding yourself in Russia instead of Chile. If you care about quality, the cognitive load is heavy. If you only care about moving forward (even if the path is a circle or the direction is wrong), then I guess it's OK.
So we go for standard solutions where fixed problems stay fixed and the number of issues is a downward slope (in a well-managed codebase), not an oscillating wave centered around some positive value.
I understand that, but I'm not sure how it's a response to my original statement.