One thing that I'm sure of is that the agentic future is test-driven. Tests are basically executable specs the agent can follow and verify against.
When we have solid tests, the agent's output is useful and we can trust it. When tests are thin or missing, the agents still ship a lot of code, but we spend way more time debugging and fixing subtle bugs.
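To make that concrete, here's a minimal sketch of what I mean by tests as executable specs, in Haskell with HUnit. The `applyDiscount` function and the discount rule are made up for illustration; the point is that the test file is the contract the agent has to satisfy, regardless of how it implements the logic.

    import Test.HUnit

    -- Hypothetical business rule: orders of 10000 cents or more get 10% off.
    applyDiscount :: Int -> Int
    applyDiscount cents
      | cents >= 10000 = cents - cents `div` 10
      | otherwise      = cents

    -- The tests double as the spec the agent must make pass.
    tests :: Test
    tests = TestList
      [ TestCase (assertEqual "discount applies at threshold" 9000 (applyDiscount 10000))
      , TestCase (assertEqual "no discount below threshold"   9999 (applyDiscount 9999))
      ]

    main :: IO ()
    main = runTestTT tests >>= print

Run it, hand any failures back to the agent, repeat until green.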
Great, so now I have to design the API for the AI, think of all the edge cases without actually going through the logic, and then I'm invariably going to end up with tests tightly coupled to the implementation.
This is why I think agents work so well with strongly typed languages like Haskell and OCaml. You say "do this until it compiles and passes a set of unit tests for the business logic." I also find myself reaching for even more verification tools, like JSON schema validators. The more guardrails and hard checks you give an agent, the better it performs.
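For the "hard checks" part, here's a rough sketch of the kind of guardrail I mean, in Haskell with aeson rather than a standalone JSON Schema validator: the agent's JSON output has to decode into a typed record or it gets bounced straight back. The `Invoice` type and its fields are invented for the example.

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Aeson (FromJSON, eitherDecode)
    import GHC.Generics (Generic)
    import qualified Data.ByteString.Lazy.Char8 as BL

    -- The shape the agent's output must conform to; decoding is the guardrail.
    data Invoice = Invoice
      { customerId :: Int
      , totalCents :: Int
      , paid       :: Bool
      } deriving (Show, Generic)

    instance FromJSON Invoice

    main :: IO ()
    main = do
      let agentOutput = BL.pack "{\"customerId\": 42, \"totalCents\": 9000, \"paid\": false}"
      case eitherDecode agentOutput :: Either String Invoice of
        Left err  -> putStrLn ("rejected: " ++ err)  -- malformed output goes back to the agent
        Right inv -> print inv                       -- well-formed, safe to pass downstream

Same idea as the compile check: a mechanical yes/no the agent can iterate against without me in the loop.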