
NitpickLawyer · today at 8:31 AM

... and so it begins.

For a bit of context, goog already did something like this two generations of models ago, as announced in this blog post[1] from May '25:

> AlphaEvolve is accelerating AI performance and research velocity. By finding smarter ways to divide a large matrix multiplication operation into more manageable subproblems, it sped up this vital kernel in Gemini’s architecture by 23%, leading to a 1% reduction in Gemini's training time.
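The "dividing a large matrix multiplication into more manageable subproblems" bit is the classic tiling/blocking idea. AlphaEvolve's actual kernel decomposition isn't public, but a minimal sketch of the general family of techniques (block size and shapes here are arbitrary, purely for illustration) looks like:

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Tiled matrix multiply: split C = A @ B into block x block
    subproblems. Illustrative only -- not AlphaEvolve's actual split."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=np.result_type(A, B))
    for i in range(0, n, block):          # rows of C
        for j in range(0, m, block):      # cols of C
            for p in range(0, k, block):  # accumulate over inner dim
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                )
    return C
```

Same arithmetic as a naive matmul, but the block structure controls which submatrices are in fast memory at once, which is where kernel-level speedups like the quoted 23% come from.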

We are now seeing the same thing "at home", for any model. And with how RL-heavy the new training runs have become, inference speedups will translate directly into faster training as well.

[1] - https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...