I don’t see any other outcome anymore to be honest, after seeing how humans use AI and how AI works and how providers tune their models.
To me it’s given:
- AI in its current state is ruthless in achieving its goal
- Providers tune ruthlessness to get stronger AIs than their competitors'
- Humans can’t evaluate all consequences of the seeds they’ve planted.
Collateral and reckless damage is guaranteed at this point.
Combined with now giving some AIs the ability to kill humans, this is gonna be interesting…
We could stop it, but we won't
>We could stop it
I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there are other things required to pull that off, not just having two working hands.
I doubt "stopping it" is up to anyone, it's rather a phenomenon and it's quite clear we're all going to wing it. It's a literal fight for power, nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its power.
It is not AI we should fear, it's the humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.
> Collateral and reckless damage is guaranteed at this point.
It's industrialization and mechanized warfare all over again
AI isn't ruthless; that doesn't even make sense. It's a mathematical model. If it's optimizing for the wrong thing, then that's strictly the fault of the people who chose what to optimize for.
Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.
>AI in its current state is ruthless in achieving its goal
I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.
The ruthless maximising of a particular trait is something that happens during training.
It does not follow that a model trained to reason will necessarily implement this ruthless seeking behaviour itself.