That’s most of my work (embedded sensor and control networks) and I’m sure that informs my methodology. I honestly don’t know much about how AI can inform standard SaaS work, but I have seen what happens when you just turn it loose, and in my experience it hasn’t been pretty; it works, but then when you need to change something it all crumbles like a badly engineered sandcastle.
OTOH, for single-purpose pipelines and simple tools, Claude can one-shot small miracles in 5 minutes that would take me 2 hours to build.
An example is the local agent framework that I had Claude spin up to do JTAG testing. I used to spend hours running tests over JTAG; now a local model runs the tests that Claude and I specify in the test catalog, and it reruns them every time we add features. With minimal guidance, Claude took about 3 hours to build that tool, along with a complete RAG system for document ingestion and datasheet comprehension, running locally on my laptop (I know it has a fan now lol) that only reaches out to the cloud when it hits difficult problems. I don’t care if that is a bit of a mess, as long as it works, and it seems to. So far so good.
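To make the shape of that concrete: a minimal sketch of a catalog-driven runner with local-first execution and cloud escalation. Everything here is hypothetical (the catalog entries, `run_local`, and `escalate_to_cloud` are invented stand-ins, not the actual tool), but it shows the control flow: iterate the test catalog, let the local model handle each test, and only hand off when it can’t.

```python
import json

# Hypothetical test catalog; real entries would come from the specs
# Claude and I write, with actual JTAG register values and expectations.
TEST_CATALOG = [
    {"name": "idcode_read", "expect": "0x4BA00477"},
    {"name": "flash_checksum", "expect": "0x1A2B3C4D"},
]

def run_local(test):
    """Stand-in for the local model driving the JTAG probe.
    Returns (passed, detail); here it just echoes the expectation."""
    return True, test["expect"]

def escalate_to_cloud(test, detail):
    """Stub for the reach-out-to-the-cloud-on-hard-problems path."""
    return f"cloud analysis requested for {test['name']}: {detail}"

def run_catalog(catalog):
    """Run every cataloged test locally; escalate only on failure."""
    results = {}
    for test in catalog:
        passed, detail = run_local(test)
        results[test["name"]] = detail if passed else escalate_to_cloud(test, detail)
    return results

print(json.dumps(run_catalog(TEST_CATALOG), indent=2))
```

The point of the shape is that the cloud call sits behind a failure branch, so the common case never leaves the laptop.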
The testing is where Claude is basically magic, but it works because we specify everything before we build and update the specs whenever we change the IRL code. English is code, too, if you build structured documentation… and the docs keep the code accountable if you track the coherence.
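“Tracking the coherence” can be as simple as stamping each code file with a digest of the spec section it implements, then failing the build when the two drift apart. This is a toy sketch under my own assumptions (the `UART-01` spec text, the header comment format, and the 8-char SHA-1 prefix are all invented for illustration):

```python
import hashlib
import re

# Hypothetical spec section; in practice this lives in the structured docs.
SPEC = """## UART-01
Baud 115200, 8N1, no flow control.
"""

def spec_digest(spec_text):
    """Short fingerprint of the spec text the code claims to implement."""
    return hashlib.sha1(spec_text.encode()).hexdigest()[:8]

# Hypothetical code file carrying the digest of the spec it was written against.
CODE = f"""# spec: UART-01 sha1:{spec_digest(SPEC)}
def uart_init():
    pass
"""

def coherent(code_text, spec_text):
    """True only if the digest stamped in the code matches the current spec."""
    m = re.search(r"sha1:([0-9a-f]{8})", code_text)
    return bool(m) and m.group(1) == spec_digest(spec_text)

print(coherent(CODE, SPEC))        # matches: spec unchanged since stamping
print(coherent(CODE, SPEC + "x"))  # spec edited, code not revisited
```

If someone edits the spec without revisiting the code (or vice versa), the check fails and forces the two back into agreement.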