Hacker News

zaken · 05/14/2025 · 4 replies

Honestly it's this line that did it for me:

> AlphaEvolve enhanced the efficiency of Google's data centers, chip design and AI training processes — *including training the large language models underlying AlphaEvolve itself*.

Singularity people have been talking for decades about AI improving itself better than humans could, and how that results in runaway compounding growth of superintelligence, and now it's here.


Replies

dgacmu · 05/14/2025

Most code optimizations end up looking somewhat asymptotic towards a non-zero minimum.

If it takes you a week to find a 1% speedup, and the next 0.7% speedup takes you two weeks to find ... well, with the 1% speedup applied, that two-week search only takes 14 / 1.01 ≈ 13.86 days. This kind of small optimization doesn't lead to exponential gains.
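To make the arithmetic concrete, here's a quick back-of-the-envelope sketch (Python; the gain and decay numbers are assumptions for illustration, not measurements):

```python
import math

# Back-of-the-envelope model: each optimization round yields a speedup
# that shrinks geometrically (1%, 0.7%, 0.49%, ...). All numbers here
# are illustrative, not measurements.

def cumulative_speedup(first_gain=0.01, decay=0.7, rounds=50):
    """Multiply successive (1 + gain) factors with geometrically shrinking gains."""
    total = 1.0
    gain = first_gain
    for _ in range(rounds):
        total *= 1.0 + gain
        gain *= decay
    return total

# The faster code shortens the next search: a 14-day search on code that
# is 1% faster takes 14 / 1.01 ≈ 13.86 days.
print(f"next search: {14 / 1.01:.2f} days")

# The compounded speedup converges to a finite ceiling (roughly
# exp(first_gain / (1 - decay)) for small gains), not a runaway curve.
print(f"after 50 rounds: {cumulative_speedup():.4f}x")
print(f"approx ceiling:  {math.exp(0.01 / 0.3):.4f}x")
```

The product of the (1 + gain) factors converges because the gains form a geometric series, which is the whole point: self-applied micro-optimizations compound toward an asymptote, not toward a singularity.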

That doesn't mean it's not worthwhile - it's great to save power & money and to reduce iteration time, even by a small amount. And it compounds with other optimizations over time. But this is in no way an example of the kind of thing the singularity folks envisioned, whether or not their vision was realistic.

ActorNightly · 05/15/2025

It's a long way to the Singularity. We don't even know if it's possible.

Basically, the singularity assumes that you can take information about the real world's "state", compress it into some form, and predict state changes faster than reality happens. For a subset of the world, this is definitely possible. But for reality as a whole, there seem to be a whole bunch of processes that are computationally irreducible, so an AI would never be able to "stay ahead", so to speak. There is also computational irreversibility: observing human behavior is like seeing the output of a one-way hash function of the neural processes in our brains, which hides a lot of detail and doesn't let you predict it accurately in all cases.
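Wolfram's Rule 30 cellular automaton is the standard toy example of computational irreducibility: as far as anyone knows, there is no shortcut to predicting its cells other than running every step. A minimal sketch (Python; grid width and step count are arbitrary):

```python
# Rule 30 cellular automaton: a standard example of computational
# irreducibility -- as far as anyone knows, there is no shortcut to
# predicting cell values short of simulating every generation.

RULE = 30  # bit i of RULE gives the next state for the 3-cell
           # neighborhood whose bits (left, center, right) encode i

def step(cells):
    """Advance one generation; cells beyond the edges are treated as 0."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Start from a single live cell and print a few generations.
cells = [0] * 15 + [1] + [0] * 15
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```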

Also, optimization algorithms are nothing new. Even before AI, you could run a genetic algorithm or PSO on code, and given enough compute it would optimize the algorithm, including itself. The hard part, which nobody has solved, is abstracting this to a low enough level that it's applicable across all the layers that correspond to any given task.
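For reference, a toy genetic algorithm fits in a few dozen lines. Here is a minimal sketch (Python; the bit-string target, population size, and rates are arbitrary illustration, not anything AlphaEvolve-specific):

```python
import random

# Toy genetic algorithm: evolve a bit string toward an arbitrary target.
# Every parameter here (population size, rates, fitness) is illustrative.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Count matching bits against the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for gen in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"solved at generation {gen}")
        break
    elite = population[:10]  # keep the fittest, breed the rest from them
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(20)
    ]
print("best fitness:", fitness(max(population, key=fitness)), "of", len(TARGET))
```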

For example, let's say you have a model (or rather an algorithm) that has only a single interface, the ability to send Ethernet packets, and it hasn't been trained on any real-world data at all. If you task it with building you a website that makes money, the same algorithm that iterates over figuring out how to send IP packets, then TCP packets, then HTTP requests should also be able to figure out what the modern world wide web looks like and what concepts like websites and money are, building up its knowledge graph, searching over it, and interpolating on it to figure out how to solve the problem.
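For a sense of the layering such an agent would have to rediscover, here is the stack one level up from raw Ethernet: a hand-written HTTP request over a TCP socket (Python stdlib; example.com is just a placeholder host, and the OS is supplying the IP/TCP layers that the hypothetical agent would also have to learn from scratch):

```python
import socket

# Hand-rolled HTTP over TCP. The OS handles Ethernet/IP/TCP below this;
# the hypothetical agent in the comment would have to discover every one
# of those layers itself. example.com is a placeholder host.

with socket.create_connection(("example.com", 80)) as sock:
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. HTTP/1.1 200 OK
```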

factibicongue · 05/14/2025

[flagged]