Hacker News

pyrale (12/09/2024)

> Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?

Would you want DuPont to check the toxicity of the Teflon effluents they're releasing in your neighbourhood? That's insane. It's people's responsibility to make sure the water they drink is harmless. New tech is always amazing.


Replies

nightski (12/09/2024)

Yes, because we know (a) that the toxicity exists and (b) how to test for it.

There is no definition of a "safe" model that isn't significantly controversial, nor is there any standardized test for one. There are other reasons why that is a terrible analogy, but this is probably the most important.

saurik (12/10/2024)

I don't see how that analogy works, especially since, in your attempt to make a point, you have DuPont as the explicit actor in the direct harm, and the people drinking the water aren't even involved... like, I do not think anyone disagrees that DuPont is responsible in that one.

I also, to draw a loose parallel, think that Microsoft should be responsible for the security and correctness of their products, with potentially even criminal liability for egregiously negligent bugs that lead to harm for their users: it isn't ever OK to "move fast and break things" with my personal data or bank account. But that isn't what the constant talk of limiting the use cases of these AI products is about.

I mean, do I think OpenAI should be responsible if their AI causes me to poison myself by confidently giving me bad cooking instructions? Yes. Do I think OpenAI should be responsible if their website leaks my information to third parties? Of course. Depending on the magnitude of the issue, I could even see these as criminal offenses for not only the officers of the company but also the engineers who built it.

But I do not at all believe that, if DuPont sells me something known to be toxic, it is DuPont's responsibility to go out of their way to technologically prevent me from using it in a way which harms other people: down that road lies dystopian madness. If I buy a baseball bat and choose to go out clubbing for the night, that one's on me. And like, if I become DuPont and build a factory to produce Teflon, and poison the local water with the effluent, the responsibility is with me, not with the people who sold me the equipment or the raw materials.

And, likewise, if OpenAI builds an AI which empowers me to knowingly choose to do something bad for the world, that is not their problem: that's mine. They have no responsibility to somehow prevent me from egregiously misusing their product in such a way; and, in fact, I will claim it would be immoral of them to try to do so, as the result requires (conveniently for their bottom line) a centralized dystopian surveillance state.