Hacker News

Reflections on AI at the End of 2025

57 points by danielfalbo, today at 9:38 AM, 70 comments

Comments

dhpe, today at 10:53 AM

I have programmed 30K+ hours. Do LLMs write bad code? Yes, all the time (at the moment they have zero clue about good architecture). Are they still useful? Yes, extremely so. The secret sauce is that you have to know exactly what you'd do without them.

abricq, today at 11:16 AM

> * Programmers resistance to AI assisted programming has lowered considerably. Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway: now the return on the investment is acceptable for many more folks.

Could not agree more. I myself started 2025 being very skeptical, and finished it very convinced about the usefulness of LLMs for programming. I have also seen multiple colleagues and friends go through the same change of appreciation.

I noticed that for certain tasks, our productivity can be multiplied by 2 to 4. Hence my doubts: are there going to be too many developers / software engineers? What will happen to the rest of us?

I assume that other fields (beyond software) should also benefit from the same productivity boosts. I wonder if our society is ready to accept that people should work less. I think the more likely outcome is that companies will either hire less or fire more, instead of accepting to pay the same for fewer hours of human work.

piker, today at 10:54 AM

> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

Super skeptical of this claim. Yes, if I have some toy poorly optimized Python example or maybe a sorting algorithm in ASM, but this won't work in any non-trivial case. My intuition is that the LLM will spin its wheels at a local minimum whose performance is overdetermined by millions of black-box optimizations in the interpreter or compiler, the signal from which is not fed back to the LLM.

torlok, today at 10:20 AM

This is a bunch of "I believe" and "I think" with no sources by a random internet person.

danielfalbo, today at 9:49 AM

> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

This makes me think: I wonder if Goodhart's law[1] may apply here. I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend. Should we care or would it be ok for AI to produce code that passes all tests and is faster? Would the AI become good at creating explanations for humans as a side effect?

And if Goodhart's law doesn't apply, why not? Is it because we're only doing RLVR fine-tuning on the last layers of the network, so the generality of the pre-training is not lost? And if that is the case, could it be a limitation, in not being creative enough to come up with move 37?

[1] https://wikipedia.org/wiki/Goodhart's_law
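
To make the Goodhart's-law worry concrete, here is a minimal sketch of a speed-plus-correctness reward of the kind being discussed. All function names and the toy data are hypothetical, not from the article; the point is simply that nothing in such a score measures readability or extensibility, so those qualities are invisible to the optimizer.

    import timeit

    def speed_reward(candidate_fn, reference_fn, test_inputs, baseline_seconds):
        """Hypothetical 'verifiable reward': pass the tests, then score speedup.
        Readability never enters the signal, which is the Goodhart's-law concern."""
        # Correctness gate: any mismatch with the reference means zero reward.
        for x in test_inputs:
            if candidate_fn(x) != reference_fn(x):
                return 0.0
        # Speed term: how much faster the candidate is than the baseline.
        elapsed = timeit.timeit(lambda: [candidate_fn(x) for x in test_inputs], number=100)
        return baseline_seconds / elapsed

    # Two behaviorally identical implementations: the closed-form one scores higher,
    # and an equally fast but obfuscated version would score exactly the same.
    reference = lambda n: sum(range(n))       # O(n) loop
    candidate = lambda n: n * (n - 1) // 2    # closed form
    inputs = list(range(1000))
    baseline = timeit.timeit(lambda: [reference(x) for x in inputs], number=100)
    print(speed_reward(candidate, reference, inputs, baseline))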

a_bonobo, today at 10:42 AM

>* For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.

Man, Antirez and I walk in very different circles! I still feel like LLMs fall over backwards once you give them an 'unusual' or 'rare' task that isn't likely to be presented in the training data.

rckt, today at 10:45 AM

> Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway

Here we go again. Statements whose only source is the head of the speaker. And it's also not true. LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.

I'm tired of this overestimation of LLMs.

agumonkey, today at 10:06 AM

There are videos about diffusion LLMs too, which apparently get rid of linear token generation. But I'm no ML engineer.

Fraterkes, today at 10:38 AM

It’s interesting that half the comments here are talking about the extinction line when, now that we’re nearly entering 2026, I feel the 2027 predictions have been shown to be pretty wrong so far.

ctoth, today at 10:28 AM

> The fundamental challenge in AI for the next 20 years is avoiding extinction.

So nice to see people who think about this seriously converge on this. Yes. Creating something smarter than you was always going to be a sketchy prospect.

All of the folks insisting it just couldn't happen or ... well, there have just been so many objections. The goalposts have walked from one side of the field to the other, and then left the stadium, went on a trip to Europe, got lost in a beautiful little village in Norway, and decided to move there.

All this time, though, the prospect of instantiating something smarter than you (and yes, it will be smarter than you even if it's at human level, because of electronic speeds)... this whole idea is just cursed and we should not do the thing.

fleebee, today at 10:07 AM

> The fundamental challenge in AI for the next 20 years is avoiding extinction.

That's a weird thing to end on. Surely it's worth more than one sentence if you're serious about it? As it stands, it feels a bit like the fearmongering Big Tech CEOs use to drive up the AI stocks.

If AI is really that powerful and I should care about it, I'd rather hear about it without the scare tactics.

alexgotoi, today at 10:18 AM

> * The fundamental challenge in AI for the next 20 years is avoiding extinction.

This reminded me of the movie Don't Look Up, where they basically gambled with humanity's extinction.

ur-whale, today at 9:56 AM

Not sure I understand the last sentence:

> The fundamental challenge in AI for the next 20 years is avoiding extinction.

HellDunkel, today at 11:11 AM

Tldr: AI bro wrote pro-AI piece revealing nothing new under the sun.

Aiisnotabubble, today at 10:29 AM

What also happens, and it's independent of AGI: global RL.

Around the world people ask an LLM and get a response.

Just grouping and analysing these questions, solving them once centrally, and then making the solution available again is huge.

Solving the most-asked questions one by one, then the next and the next, will make whatever system is behind it smarter every day.
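
A minimal sketch of the idea being described, assuming nothing beyond the comment itself (the question stream, normalize, and solve_once are made-up placeholders): group recurring questions, solve each group once centrally, and serve the cached answer to every later asker.

    from collections import Counter

    def normalize(question: str) -> str:
        # Crude grouping key so near-duplicate questions collapse into one.
        return " ".join(question.lower().split()).rstrip("?")

    # Hypothetical stream of questions asked around the world.
    stream = [
        "How do I reverse a list in Python?",
        "how do i reverse a list in python",
        "What is a segfault?",
        "How do I reverse a list in Python",
    ]

    counts = Counter(normalize(q) for q in stream)

    def solve_once(question: str) -> str:
        # Placeholder for the expensive central step (an LLM call, expert review, etc.).
        return f"<canonical answer for: {question}>"

    # Work through questions in order of popularity; each answer is computed once
    # and then reused, so the system gets "smarter" as the cache grows.
    answer_cache = {}
    for question, _ in counts.most_common():
        if question not in answer_cache:
            answer_cache[question] = solve_once(question)

    print(answer_cache)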

feverzsj, today at 10:26 AM

Seems they also want some AI money[0]. Guess I'll keep using Valkey.

[0] https://redis.io/redis-for-ai/

seu, today at 10:34 AM

> And I've vibe coded entire ephemeral apps just to find a single bug because why not - code is suddenly free, ephemeral, malleable, discardable after single use. Vibe coding will terraform software and alter job descriptions.

I'm not super up-to-date on all that's happening in AI-land, but in this quote I can find something that most techno-enthusiasts seem to have decided to ignore: no, code is not free. There are immense resources (energy, water, materials) that go into these data centers in order to produce this "free" code. And the material consequences are terribly damaging to thousands of people. With the further construction of data centers to feed this free vibe-coding style, we're further destroying parts of the world. Well done, AGI loverboys.
