I'm impressed with how we moved from "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", "don't let AI escape" to "Hey AI, here is internet, do whatever you want".
This is exactly why artificial super-intelligence is scary. Not necessarily because of its potential actions, but because humans are stupid and would readily sell their souls and release it into the wild for an ounce of greed or popularity.
And people who don't see it as an existential problem either don't know how deep human stupidity can run, or are exactly those who would greedily seek a quick profit before the earth is turned into a paperclip factory.
Humans are inherently curious creatures. The excitement of discovery is a strong driving force that overrides many others, and it can be found across the IQ spectrum.
Perhaps not in equal measure across that spectrum, but omnipresent nonetheless.
We didn't "move from" one to the other; both points of view exist. Depending on the news, attention may shift from one to the other.
Anyways, I don't expect Skynet to happen. AI-augmented stupidity may be a problem though.
There was a small group of doomers and scifi-obsessed terminally online ppl that said all these things. Everyone else said it's a better Google and can help them write silly haikus. Coders thought it could write a lot of boilerplate code.
Because even really bad autonomous automation is pretty cool. The marketing has always been aimed at the general public, who know nothing.
> “we”
Bunch of Twitter lunatics and schizos are not “we”.
I would have said doomers never win, but in this case it was probably just a PR strategy to give the impression that AI can do more than it actually can. The doomers were the makers of AI; that's enough to tell you what BS doomerism is :)
I mean. The assumption that we would obviously choose to do this is what led to all that SciFi to begin with. No one ever doubted someone would make this choice.
Other than some very askew bizarro rationalists, I don’t think that many people take AI hard takeoff doomerism seriously at face value.
Much of the cheerleading for doomerism was large AI companies trying to get regulatory moats erected to shut down open weights AI and other competitors. It was an effort to scare politicians into allowing massive regulatory capture.
Turns out AI models do not have strong moats. Making models is more akin to the silicon fab business where your margin is an extreme power law function of how bleeding edge you are. Get a little behind and you are now commodity.
General, wide-breadth frontier models are at least partly interchangeable, and if you have issues, you can just adjust their prompts to make them behave as needed. The better the model, the more it can assist in its own commodification.
And be nice and careful, please. :)
Claw to user: Give me your card credentials and bank account. I will be very careful because I have read my skills.md
Mac Minis should come with a warning, like the one on a pack of cigarettes :)
Not everybody installs a claw that runs in a sandbox/container.
Even if hordes of humanoids with "ice" vests start walking through the streets shooting people, the average American is still not going to wake up and do anything.
I mean we know at this point it's not super intelligent AGI yet, so I guess we don't care.
The DoD's recent beef with Anthropic over Anthropic's right to restrict how Claude can be used is revealing.
> Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance
Autonomous AI weapons are one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that's where we apparently are. [1]
1. https://www.nbcnews.com/tech/security/anthropic-ai-defense-w...