> instead of working to make their product safe
Make a nondeterministic product safe how?
What exactly are you implying? It sounds like you're saying that if it's impossible to make a product safe, then there shouldn't be any safety requirements. A more sensible position is that if it's impossible to make a product safe, it should be illegal to build.
Is this the first time you have heard of AI safety?
There are lots of articles on the subject you could read to answer your own question.
(Unless your angle is: akshually, you can never make anything 100% safe)
I'm creating a new startup called QuantumFlop Electricity. There's a 10% chance it will cause a black hole to open up in the Atlantic Ocean that may eventually consume us all, but a 50% chance we'll have unlimited clean energy. We'll never know for sure whether that black hole might open at any point, since it's borrowing energy from the 81st dimension, but the upside seems pretty good.
Should I be able to get on with it?