As Goodhart's law states, "When a measure becomes a target, it ceases to be a good measure." From an organizational management perspective, one way to partially work around that problem is simply to add more measures, making it harder for a bad actor to game the system. The Balanced Scorecard is one approach built on that idea.
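A toy sketch of that point (all names, numbers, and weights here are made up for illustration): a strategy that games a single target metric can still lose once the same behavior is scored against a weighted composite of several measures.

```python
# Toy illustration: single-metric target vs. a Balanced-Scorecard-style
# weighted composite. All metrics are hypothetical, normalized to [0, 1].

def scorecard(metrics, weights):
    """Weighted average across several measures."""
    return sum(weights[k] * metrics[k] for k in weights)

honest = {"sales": 0.70, "quality": 0.80, "retention": 0.75}
gamed  = {"sales": 1.00, "quality": 0.20, "retention": 0.30}  # maxes "sales" only

weights = {"sales": 1/3, "quality": 1/3, "retention": 1/3}

# Against the single target, gaming wins:
print(gamed["sales"] > honest["sales"])                        # True
# Against the composite, the neglected measures drag it down:
print(scorecard(gamed, weights) < scorecard(honest, weights))  # True
```

The more independent measures in the composite, the more dimensions a bad actor has to fake at once, which is the partial mitigation described above.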
Agreed. Goodhart's Law captures a failure mode that even well-intentioned KPIs and OKRs can fall into, let alone agentic automation.
This extends beyond AI agents. I'm seeing it in real time at work — we're rolling out AI tools across a biofuel brokerage and the first thing people ask is "what KPIs should we optimize with this?"
The uncomfortable answer is that the most valuable use cases resist single-metric optimization. The best results come from people who use AI as a thinking partner with judgment, not as an execution engine pointed at a number.
Goodhart's Law + AI agents is basically automating the failure mode at machine speed.