This is a ridiculous analogy. Test the app. Read its source code. Developers have always been able to write toxic instructions into your tools. AI may write inefficient or messy code, but it’s far from nefarious. “Asbestos” code is written intentionally by humans, not unintentionally by AI.
That's a good way to guarantee nobody will use it. Who is going to test the app in a sandbox, with god-knows-what tooling needed to detect malicious behavior, and read through the code? For a tool that's convenient maybe once a decade?