Hacker News

ChadNauseam, today at 7:03 AM

Sometimes people say they don't understand something just to emphasize how much they disagree with it. I'm going to assume that's not what you're doing here, so I'll lay out the chain of reasoning. Step one: some beings are able to do "more things" than others. For example, if humans wanted bats to go extinct, we could probably make it happen. If any number of bats wanted humans to go extinct, they definitely could not make it happen. So humans are more powerful than bats.

The reason humans are more powerful isn't that we have lasers or anything; it's that we're smart. And we're smart in a somewhat general way: we can build a rocket that takes us to the moon, even though we didn't evolve to be good at building rockets.

Now imagine an entity that was much smarter than humans. It stands to reason it might be more powerful than humans as well. Now imagine it has a "want" to do something that does not require keeping humans alive, and that living humans might get in its way. You might think each of these things is extremely unlikely, but I think everyone should agree that if they all happened, it would be a dangerous situation for humans.

In some ways, it seems like we're getting close to this. I can ask Claude to do something, and it kind of acts as if it wants to do it. For example, I can ask it to fix a bug, and it will take steps that could reasonably be expected to get it closer to solving the bug, like adding print statements and things of that nature. And then most of the time, it does actually find the bug by doing this. But sometimes it seems like what Claude wants to do is not exactly what I told it to do. And that is somewhat concerning to me.


Replies

mrob, today at 8:56 AM

Not just bats. I'm pretty sure humans are already capable of driving any species we want to extinction, even cockroaches or microbes. It's a political problem, not a technical one. I'm not even a superintelligence, and I've got a good idea of what would happen if we dedicated 100% of our resources to an enormous mega-project of pumping nitrous oxide into the atmosphere. N2O's 20-year global warming potential is 273 times that of carbon dioxide, and the raw materials are just air and energy. Get all our best chemical engineers working on it, turn all our steel into chemical plants, burn through all our fissionables to power it. Safety doesn't matter. The beauty of this plan is that the effects keep compounding even after it kills all the maintenance engineers, so we'll definitely get all of them. Venus 2.0 is within our grasp.
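For a sense of scale, here's a rough back-of-envelope sketch in Python. The 273x multiplier is just the GWP figure cited above; the annual production rate is an invented assumption, not a real estimate.

    # Back-of-envelope: CO2-equivalent impact of a hypothetical N2O mega-project.
    # The GWP figure is from the comment above; the production rate is made up.

    GWP_N2O = 273  # tonnes of CO2-equivalent per tonne of N2O (20-year horizon)

    def co2_equivalent(n2o_tonnes: float) -> float:
        """Convert a mass of emitted N2O into tonnes of CO2-equivalent."""
        return n2o_tonnes * GWP_N2O

    annual_n2o = 1e9  # hypothetical: one billion tonnes of N2O per year
    print(f"{co2_equivalent(annual_n2o):.2e} tonnes CO2e per year")
    # ~2.73e+11 tonnes CO2e/year, roughly 7x current global
    # CO2 emissions (~3.7e10 tonnes/year)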

Of course, we won't survive the process, but the task didn't mention collateral damage. As an optimization problem, it will be a great success. A real ASI will probably have better ideas. And remember: every prediction problem is more reliably solved with all life dead. Tomorrow's stock market numbers are trivially predictable when there's zero trade.

9dev, today at 7:09 AM

> Now imagine that it has a "want" to do something that does not require keeping humans alive […]

This belligerent take is so very human, though. We just don't know how an alien intelligence would reason or what it would want. It could equally well be pacifist in nature, whereas we typically conquer and destroy anything we come into contact with. Extrapolating from our own behavior to conclude that an AGI would do the same isn't reasonable.
