> We're very close to just being able to set an AI coding agent to brute-force a driver for anything.
This is false. To "brute force" a driver, you'd need a feedback loop between the hardware's output and the driver's input.
While this is possible in theory for some analog-to-digital transducers (e.g., a Wi-Fi radio), if the hardware is a human-interface device (joystick, monitor, mouse, speaker, etc.) you literally need a "human in the loop" to provide feedback.
Additionally, many edge cases in driving hardware can irrevocably destroy it, and even a domain-specific agent wouldn't have any physics context for the underlying risks.
Strictly speaking, I don't think we need a human to run repetitive tests. We just need the human to help with the physical parts of the testing jig.
For instance: A microphone (optionally: a calibrated microphone; extra-optionally: in an isolated anechoic chamber) is a simple way to get feedback back into the machine about the performance of a speaker. (Or, you know: Just use a 50-cent audio transformer and electrically feed the output of the amplifier portion of the [presumably active] speaker back into the machine in acoustic silence.)
And I don't have to stray too far into the world of imagination to notice that the hairy, custom Cartesian printer in the corner of the workshop quite clearly resembles a machine that can move a mouse over a surface in rather precise ways. (The worst that can happen is probably not as bad as many of us have seen when using printers in their intended way, since at least there are no heaters or molten plastic required. So what if it disassembles itself? That's nothing new.)
Whatever the testing jig consists of, the bot can write the software part of the tests, and the tests can run as many times as they need to.
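The software part of "run as many times as they need to" is the easy bit, which is rather the point. A sketch of the harness the agent would sit behind, with a trivial stand-in for the actual jig test:

```python
def run_repeatedly(test_fn, iterations=100):
    """Run one jig test over and over and tally failures; the agent reads
    this summary as its feedback signal for the next driver revision."""
    failures = [i for i in range(iterations) if not test_fn()]
    return {"runs": iterations,
            "failures": len(failures),
            "first_failures": failures[:5]}

# Hypothetical placeholder: a real test_fn would drive the jig hardware.
summary = run_repeatedly(lambda: True)
print(summary)  # {'runs': 100, 'failures': 0, 'first_failures': []}
```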