
rattray yesterday at 11:25 PM

I'm not familiar with this parable, but that sounds like a good thing in this case?

Notably, this is not a gun.


Replies

demarq today at 12:30 AM

Things that you think sound good might not sound good to the authority in charge of determining what is good.

For example, using your LLM to criticise, ask questions, or perform civil work that is deemed undesirable becomes evil.

You can use Google to find how the UK government, for example, has been using "law" and "terrorism" charges against people simply for tweeting or holding a placard deemed critical of Israel.

Anthropic is showing off these capabilities in order to secure defence contracts. "We have the ability to surveil and engage threats, hire us please".

Anthropic is not a tiny startup exploring AI; it's a behemoth bankrolled by the likes of Google and Amazon. It's a big bet. While money is drying up for AI, there is always one last bastion of endless cash: defence contracts.

You just need a threat.

herpdyderp yesterday at 11:32 PM

In general, such broad surveillance usually sounds like a bad thing to me.

Aurornis today at 12:13 AM

I’m actually surprised whenever someone familiar with technology thinks that adding more “smart” controls to a mechanical device is a good idea, or even that it will work as intended.

The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.

But as a person familiar with tech, IoT, and how devices work in the real world, do you actually think it would work like that?

“Sorry, you cannot fire this gun right now because the server is down”.

Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even a 99.99% false-positive avoidance rate means your own car is almost guaranteed to lock you out by mistake at some point during its lifetime, potentially right when you need to drive to work, an appointment, or even an emergency.
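A rough back-of-the-envelope sketch of that math (the per-day trip count and vehicle lifetime below are illustrative assumptions, not data):

    # Chance of at least one false lockout over a car's lifetime,
    # given a 99.99% per-check false-positive avoidance rate.
    fp_rate = 0.0001                 # 0.01% false positives per check
    checks_per_day = 2               # assumed: one check per ignition, twice daily
    years = 15                       # assumed vehicle lifetime
    total_checks = checks_per_day * 365 * years   # ~10,950 checks

    p_clean = (1 - fp_rate) ** total_checks       # never a false positive
    print(f"P(at least one false lockout): {1 - p_clean:.0%}")   # ~67%

Under those assumptions, roughly two out of three cars would refuse to start at least once for a sober driver.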

johnQdeveloper yesterday at 11:34 PM

Well, what if you want the AI to red-team your own applications?

That seems like a valid use case that'd get hit.

lurk2 today at 12:04 AM

>but that sounds like a good thing in this case?

Who decides when someone is doing something evil?

madrox yesterday at 11:47 PM

It depends on who is creating the definition of evil. Once you have a mechanism like this, it isn't long before it becomes an ideological battleground. Social media moderation is an example of this. It was inevitable for AI usage, but I think folks were hoping the libertarian ideal would hold on a little longer.

rapind yesterday at 11:42 PM

Not really. It's like saying you need a license to write code. I don't think they actually want to be policing this, so I'm not sure why they are, other than as a marketing post, or as absolution for the things that still get through their policing.

It'll become apparent how woefully unprepared we are for AI's impact as these issues proliferate. I don't think for a second that Anthropic (or any of the others) is going to police this effectively, or maybe at all. A lot of existing processes will attempt to erect gates to fend off AI, but I bet most will be ineffective.