We also need laws. Releasing an AI product that can (and does) do this should be like selling a car that blows your finger off when you start it up.
This is an archetypal case where a law wouldn't help. The other side of the coin is that this is exactly the kind of data-loss bug in a product that is perfectly capable of being modified to make it harder for a user to screw up this way. Have people forgotten how comically easy it was to do this without any AI involved? Then shells and their tools got just a wee bit smarter and it got harder to do this to yourself.
LLM makers that make this kind of thing possible share the blame. It wouldn't take a lot of manual functional testing to find this bug. And it is a bug. It's unsafe for users. But it's unsafe in a way that doesn't call for a law. Just like rm -rf * did not need a law.
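To be concrete, this is the kind of guardrail that grew around the old footgun over the years (a minimal sketch; the flags are real GNU coreutils behavior, the alias is just a common convention, not anything mandated):

    # GNU rm has shipped with --preserve-root on by default for years,
    # so a bare recursive delete of / is refused unless you opt in:
    rm -rf /                       # refused by default
    rm -rf --no-preserve-root /    # you have to ask for the footgun explicitly

    # The -I flag prompts once before removing more than three files
    # or before any recursive removal; many people alias it:
    alias rm='rm -I'

Nothing about that needed legislation; the tool just stopped making the catastrophic path the default one.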
There are laws about waiving liability for experimental products.
Sure, and it would be amazing if everyone had to take a 100-hour course on how LLMs work before interacting with one.
Responsibility is shared.
Google (and others) are, in my opinion, flirting with false advertising in how they pitch the capabilities of these "AI"s to mainstream audiences.
At the same time, users are responsible for their devices and for the code and programs they choose to run on them, and they own the outcomes of those choices.
Hopefully they've learned that you can't trust everything a big corporation tells you about its products.
Google will fix the issue, just like auto makers fix their issues. Your comparison is ridiculous.
This is more akin to selling a car to an adult who can't drive, who then rams it through their garage door.
It's perfectly within the capabilities of the car to do so.
The burden of proof is much lower here, though, since the worst that can happen is that you lose some money, or in this case some hard drive contents.
With the car, the seller would be investigated because there was a possible threat to life; with an AI, it's buyer beware.