For the people awaiting the singularity, lines like this read as if they were lifted straight from science fiction:
> By suggesting modifications in the standard language of chip designers, AlphaEvolve promotes a collaborative approach between AI and hardware engineers to accelerate the design of future specialized chips.
This just means that it operates on the (debug text form of the) intermediate representation of a compiler.
Sure, but remember that this approach only works for exploring optimizations of a function that has a well-defined evaluation metric.
You can't write an evaluation function for general "intelligence"...
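To make that point concrete, here is a toy sketch (hypothetical, not AlphaEvolve's actual loop) of the kind of evaluation-driven search this approach requires. Everything hinges on `evaluate()` returning a comparable scalar score, which is exactly what you can't write for "intelligence":

```python
import random

random.seed(0)

# Toy "program": a bit string. The metric is how many target bits it
# matches -- trivially well defined, unlike a metric for intelligence.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def evaluate(candidate: list[int]) -> int:
    """Well-defined metric: higher is better, comparable across candidates."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: list[int]) -> list[int]:
    """Flip one random bit -- the stand-in for an LLM proposing a code edit."""
    child = candidate.copy()
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

best = [0] * len(TARGET)
for _ in range(200):
    child = mutate(best)
    if evaluate(child) >= evaluate(best):  # keep the fitter variant
        best = child

print(evaluate(best), best)  # converges quickly because the metric is exact
```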
We are getting ever closer to the point where no one on the planet understands how any of this stuff really works. That will last us until a collapse. Then we are done for.
Honestly it's this line that did it for me:
> AlphaEvolve enhanced the efficiency of Google's data centers, chip design and AI training processes — *including training the large language models underlying AlphaEvolve itself*.
Singularity people have been talking for decades about AI improving itself better than humans could, and how that results in runaway compounding growth of superintelligence, and now it's here.
The singularity has always existed. It is located at the summit of Mount Stupid, where the Darwin Awards are kept. AI is really just pseudo-intelligence: an automated chairlift to peak overconfidence.
Here is the relevant bit from their whitepaper (https://storage.googleapis.com/deepmind-media/DeepMind.com/B...):
> AlphaEvolve was able to find a simple code rewrite (within an arithmetic unit within the matmul unit) that removed unnecessary bits, a change validated by TPU designers for correctness.
I speculate this could refer to the upper bits in the output of a MAC circuit being unused in a downstream connection (perhaps to an accumulation register). It could also involve unused bits in a specialized MAC circuit for a non-standard datatype.
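To illustrate the first guess with a toy model (Python standing in for RTL; the actual TPU circuit is not public): if the downstream accumulator keeps only N bits, the upper bits of the product are dead, because truncation commutes with addition modulo 2**N:

```python
# Minimal sketch of why upper multiplier bits can be "unnecessary":
# (a*b + acc) mod 2**N == ((a*b) mod 2**N + acc) mod 2**N,
# so truncating the product before the add is observationally equivalent.

ACC_WIDTH = 8                    # hypothetical accumulator width
MASK = (1 << ACC_WIDTH) - 1

def mac_full(a: int, b: int, acc: int) -> int:
    """Full-width multiply, truncated only at the accumulation register."""
    return (a * b + acc) & MASK

def mac_truncated(a: int, b: int, acc: int) -> int:
    """Multiplier output truncated first -- the 'removed bits' version."""
    return (((a * b) & MASK) + acc) & MASK

# Exhaustive check over 4-bit operands and sampled accumulator states.
for a in range(16):
    for b in range(16):
        for acc in range(0, 256, 17):
            assert mac_full(a, b, acc) == mac_truncated(a, b, acc)
print("truncated MAC matches full MAC on all checked inputs")
```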
> While this specific improvement was also independently caught by downstream synthesis tools, AlphaEvolve’s contribution at the RTL stage demonstrates its capability to refine source RTL and provide optimizations early in the design flow.
As the authors admit, this bit-level optimization was already performed automatically by the synthesis tool (the software-world equivalent is a compiler performing dead code elimination). They seem to claim it is better to perform this bit truncation explicitly in the source RTL rather than letting synthesis handle it. I find this dubious: synthesis guarantees that the optimizations it performs do not change the semantics of the circuit, while a change in the source RTL could change the semantics (versus the original source RTL) and requires human intervention to check semantic equivalence. The exception is when an optimization relies on assumptions about the values actually seen within the circuit at runtime; synthesis must assume the most conservative situation, where all circuit inputs are arbitrary.
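A hedged sketch of that caveat (toy 4-bit model; a real flow would use a formal equivalence checker rather than exhaustive simulation): a source-level truncation can be valid only under a runtime assumption that synthesis, treating inputs as arbitrary, would never make.

```python
from itertools import product

WIDTH = 4
MASK = (1 << WIDTH) - 1

def original(a: int, b: int) -> int:
    return (a + b) & MASK          # full 4-bit adder output

def rewritten(a: int, b: int) -> int:
    # Hand-truncated version, valid only if inputs fit in 3 bits.
    return ((a & 0b0111) + (b & 0b0111)) & MASK

# Under the runtime assumption (inputs < 8) the rewrite is equivalent:
assert all(original(a, b) == rewritten(a, b)
           for a, b in product(range(8), repeat=2))

# Over arbitrary 4-bit inputs -- the conservative case synthesis must
# handle -- it is not:
counterexamples = [(a, b) for a, b in product(range(16), repeat=2)
                   if original(a, b) != rewritten(a, b)]
print(f"{len(counterexamples)} counterexamples, e.g. {counterexamples[0]}")
```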
I do agree that this reveals a deficiency in existing synthesis flows: they are unable to back-annotate the source RTL with the specific lines/bits that were stripped out of the final netlist, so humans can't easily check whether synthesis did indeed perform an expected optimization.
> This early exploration demonstrates a novel approach where LLM-powered code evolution assists in hardware design, potentially reducing time to market.
I think they are vastly overselling what AlphaEvolve was able to achieve here. That isn't a knock on the potential utility of LLMs for RTL design or optimization, though.