Hacker News

shaka-bear-tree · last Tuesday at 4:03 AM

Funny the original post doesn’t mention AI replacing the coding part of his job.

There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

I want to be optimistic. But it’s hard to ignore what I’m doing and seeing. As far as I can tell, we haven’t hit serious unemployment yet because of momentum and slow adoption.

I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.


Replies

kace91 · last Tuesday at 10:23 AM

>There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).

The most interesting questions are the ones that assume human equivalency.

Suppose an AI can produce like a human.

Are you ok with merging that code without human review?

Are you ok with having a codebase that is effectively a black box?

Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?

Are you ok with being dependent on the company providing this code generation?

Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?

Will we be ok if the well of public technical discussion LLMs are feeding from dries up?

Those are the interesting debates I think.

torginus · last Tuesday at 8:55 AM

I predict by March 2026, AI will be better at writing doomer articles about humans being replaced than top human experts.

twodave · last Tuesday at 8:40 PM

Well, I would just say to take into account the fact that we're starting to see LLMs be responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would be price gouging if everyone weren't doing it.

Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be.

Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it.

jakewins · last Tuesday at 8:58 AM

I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.

If anything the quality has gotten worse, because the models are now so good at lying when they don’t know that it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying; it never says “I don’t know”.

Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.

The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield “write some code that does x”, much faster without LLMs.

botanrice · last Tuesday at 2:56 PM

idk man, I work at a big consultant company and all I'm hearing is dozens of people coming out of their project teams like, "yea im dying to work with AI, all we're doing is talking about it with clients"

It's like everyone knows it is super cool but nobody has really cracked the code for what its economic value truly, truly is yet

zwnow · last Tuesday at 8:03 AM

> There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

Any sources on that? Except for some big tech companies I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...
