Hacker News

KaiserPro · 10/11/2024 · 15 replies

One of the sad things about tech is that nobody really looks at history.

The same kinds of essays were written about trains, planes and nuclear power.

Before Lindbergh went off the deep end, he was convinced that "airmen" were gentlemen and could sort out the world's ills.

The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

Loads of service jobs too, along with a load of manual jobs when suitable large models are successfully applied to robotics (see ECCV for some idea of the progress in machine perception).

But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

Well, AI is going to make that worse. It'll cause huge unrest (see the Luddite riots, Peterloo, the birth of unionism in the USA, plus many more).

This brings us to the next thing that AI will be applied to: Murdering people.

Anduril is already marrying basic machine perception with cheap drones and explosives. It's not going to take long to get to personalised explosive drones.

AI isn't the problem, we are.

The sooner we realise that it's not a technical problem to be solved, but a human one, the better chance we stand.

But looking at the emotionally stunted, empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.


Replies

kranke155 · 10/11/2024

We are entering a dystopia and people are still writing these wonderful essays about how AI will help us.

Microtargeted psychometrics (Cambridge Analytica, AggregateIQ) have already made politics in the West an unending barrage of information warfare. Now we'll have millions of autonomous agents. At some point soon, our entire feed will be AI content, or content upvoted by AI, or AI manipulating the algorithm.

It's like you said - this essay reads like peak AI. We will never have as much hope and optimism about the next 20 years as we seem to have now.

Reminds me of some graffiti I saw in London, while the city's cost of living was exploding and making the place unaffordable to all but a few:

"We live in a Utopia. It's just not ours."

ManuelKiessling · 10/11/2024

I do not agree with the following:

> But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

I am, however, criticizing this in isolation — that is, my goal is not to invalidate (nor validate, for that matter) the rest of your text; only this specific point.

So, I do not agree. We are clearly working far fewer hours than 120 or even 60 years ago, and we are getting a lot more back for it.

The problem I have with this is that the framing is often wrong — whether some number on a paycheck goes up or down is completely irrelevant at the end of the day.

The only relevant question boils down to this: how many hours of hardship do I have to put in, in order to get X?

And X can be many different things. Like, say, a steak, or a refill at the gas station, or a loaf of bread.

Now, I do not have very good data at hand right here and right now, but if my memory and my gut feeling serve me right, the difference is significant, often even dramatic.

For example, for one kilogram of beef, the average German worker needs to toil about 36 minutes nowadays.

In 1970, it was twice as much time that needed to be worked before the same amount of beef could be afforded.

In the seventies, Germans needed to work 145 hours to be able to afford a washing machine.

Today, it’s less than 20 hours!

And that's not even taking into account the amount of "more progress" we can afford today, with less toil.

While one can imagine that in 1970 something resembling a smartphone or a lane- and distance-keeping car could theoretically have been produced for me (by NASA, probably), I can't even begin to imagine how many hours, if not millennia, I would have needed to work in order to receive a paycheck that would have paid for it.

We get SO much more for our monthly paycheck today, and so many more people do (billions, actually), that it's not even funny.

jimkleiber · 10/11/2024

> The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

> AI isn't the problem, we are.

I think when we frame it as human _nature_, then yes, _we_ look like the problem.

But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

Maybe that's a better framing: the base problem is human nurture?

xpe · 10/12/2024

> AI isn't the problem, we are.

I see major problems with the statement above. First, it is a false dichotomy. That’s a fatal flaw.

Second, it is not specific enough to guide action. Pretend I agree with the claim. How would it inform better/worse choices? I don’t see how you operationalize it!

Third, I don’t even think it is useful as a rough conceptual guide; it doesn’t “carve reality at the joints” so to speak.

swatcoder · 10/11/2024

> One of the sad things about tech is that nobody really looks at history.

First, while I often write much of the same sentiment about techno-optimism and history, you should remember that you're literally in the den of Silicon Valley startup hackers. It's not going to be an easily heard message here, because the site specifically appeals to people who dream of inspiring exactly these essays.

> The sooner we realise that its not a technical problem to be solved, but a human one, we might stand a chance.

Second... you're falling victim to the same trap, but simply preferring some kind of social or political technology instead of a mechanical or digital one.

What history mostly affirms is that prosperity and ruin come and go, and that nothing we engineer lasts for all that long, let alone forever. There's no point in dreading it, whatever kind of technology you favor or fear.

The bigger concern is that some of the achievements of modernity have made the human future far more brittle than it has been in what may be hundreds of thousands of years. Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global. Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.

xpe · 10/14/2024

> One of the sad things about tech is that nobody really looks at history.

If this were phrased as "Many proponents of technology pay little attention to societal impacts" then I would agree.

The quote above is not true in this sense: there are many technology-aware people that study history. You may already have your favorites. Off the top of my head, I recommend Brian Christian and Nick Bostrom.

binary132 · 10/12/2024

We are already deep in the throes of a long, slow catastrophe which is not causing a change in course for the better. I’m afraid we can’t count on a hard stop at the end of this particular rope. Anything we can do to escape the heat-death / long-abomination outcome should be done post haste as if we’re already out of time. For so many people, we already are.

roenxi · 10/11/2024

> AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

> But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

Don't you have to pick one? It seems a bit disjointed to simultaneously complain that we are all losing our jobs and that we are working too many hours. What type of future are we looking for here?

If machines get so productive that we don't need to work, everyone losing their jobs isn't a long-term problem and may not even be a particularly damaging short-term one. It isn't like we have less stuff or more people who need it. There are lots of good equilibriums to find. If AI becomes a jobs wrecking ball I'd like to see the tax system adjusted so employers are incentivised to employ large numbers of people for small numbers of hours instead of small numbers of people for large numbers of hours - but that seems like a relatively minor change and probably not an especially controversial one.

N8works · 10/11/2024

Yes. I used to share your viewpoint.

However, recently I've come to understand that AI is about the inherently unreal, and that authentic human connection is really going to be where it's at.

We build because we need it after all, no?

Don't give up. We have already won.

startupsfail · 10/12/2024

The responsibility airmen take on when they carry passengers off the ground (holding their lives in their hands) is a serious one.

The Trump types are unlikely to get a license, or to accumulate enough Pilot In Command hours without having an accident, and the experience itself changes the person.

If I have a choice of whom to trust, between an airman and a non-airman, I'd likely choose the airman.

And I'm not sure what you are referring to about Lindbergh, but among other things he was a Pulitzer Prize-winning author and environmentalist, and following Pearl Harbor he fought against the aggressors.

tim333 · 10/13/2024

A sad thing about that view is that it takes a pessimistically biased view of history.

Life expectancy at birth is up from like 22 years to 80 or so because most kids used to die in unpleasant ways.

The percentage of people dying through warfare and violence rather than peacefully is way down, much more than 10x.

We have far more information, comfort, ability to travel and similar.

And most of it comes down to tech. But human nature, however good things get, is to find something that's shit and focus on how awful it is.

I mean I understand - Terminator 2 with killer robots is much more entertaining than something about everyone having a nice time, but it's not the likely reality.

amelius · 10/11/2024

History tells us that humans will not tolerate any "creature" to exist that is smarter than them, so that is where the story will end.

alexashka · 10/12/2024

For a non-trivial number of people, having power/status over others is what they like.

For a non-trivial number of people, what happens to others doesn't matter, as long as their tribe benefits.

As long as these two issues are not addressed, very little meaningful progress is possible.

> Looking at the emotionally stunted, empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.

A catastrophe won't solve anything because you'll get the same people who love power over others in power and people who don't mind fucking over others right below them, which is where humanity has always been.

mythrwy · 10/11/2024

But will AI be eventually used to change human nature itself?
