Hacker News

hiddencost 05/14/2025

100%. LLMs are extremely useful for doing obvious but repetitive optimizations that a human might miss.


Replies

jerjerjer 05/14/2025

What it essentially does is run a debugging/optimization loop: change one thing, evaluate, repeat, and compare the results.

Previously we needed a human in the loop to make each change. We do have automated hyperparameter tuning (and similar techniques), but that works only in a rigidly defined search space.
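The loop described above can be sketched as a simple hill climber: propose a change, evaluate it, and keep it only if it improves on the best result so far. This is a minimal illustration, not how AlphaEvolve works internally; the `mutate` and `evaluate` callbacks are hypothetical names standing in for "change one thing" and "run the eval".

```python
import random

def optimize(initial, mutate, evaluate, iterations=100, seed=0):
    """Generic change-one-thing / evaluate / compare loop.

    mutate proposes a modified candidate; evaluate scores it
    (higher is better). Only improvements are kept.
    """
    rng = random.Random(seed)
    best, best_score = initial, evaluate(initial)
    for _ in range(iterations):
        candidate = mutate(best, rng)
        score = evaluate(candidate)
        if score > best_score:  # keep the change only if it helps
            best, best_score = candidate, score
    return best, best_score

# Toy usage: nudge a scalar toward the maximum of -(x - 3)^2.
best, score = optimize(
    initial=0.0,
    mutate=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    evaluate=lambda x: -(x - 3.0) ** 2,
    iterations=2000,
)
print(best, score)
```

The point of the comment is that an LLM can play the role of `mutate` over an open-ended space (arbitrary code edits), whereas classical hyperparameter tuning can only search within the parameters you enumerated up front.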

Will we see LLMs generating new improved LLM architectures, now fully incomprehensible to humans?

thesz 05/15/2025

One can have obvious but repetitive optimizations with symbolic programming [1].

[1] https://arxiv.org/abs/1012.1802

Strange that the AlphaEvolve authors do not compare their work to what is achievable with equality saturation. An implementation of equality saturation can take interesting integrals using very simple rules [2].

[2] https://github.com/alt-romes/hegg/blob/master/test/Sym.hs#L3...
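To make the contrast concrete, here is a toy rule-driven simplifier, a much-simplified sketch of the idea behind rewrite-rule systems like the one cited. Real equality saturation (as in hegg or egg) keeps *all* equivalent forms of a term in an e-graph and extracts the best one afterward, rather than rewriting greedily as this sketch does; the term encoding and rule set here are my own illustrative assumptions.

```python
# Terms are numbers, strings (variables), or (op, left, right) tuples.
# Each rule is a (match, build) pair: a predicate and a replacement.
RULES = [
    (lambda t: isinstance(t, tuple) and t[0] == "*" and t[2] == 1,
     lambda t: t[1]),   # x * 1 -> x
    (lambda t: isinstance(t, tuple) and t[0] == "+" and t[2] == 0,
     lambda t: t[1]),   # x + 0 -> x
    (lambda t: isinstance(t, tuple) and t[0] == "*" and t[2] == 0,
     lambda t: 0),      # x * 0 -> 0
]

def rewrite(term):
    """Bottom-up rewriting to a fixpoint (a greedy strategy,
    unlike true saturation, which explores all rewrites)."""
    if isinstance(term, tuple):
        op, left, right = term
        term = (op, rewrite(left), rewrite(right))
    for matches, build in RULES:
        if matches(term):
            return rewrite(build(term))
    return term

# (x * 1) + 0  simplifies to  x
result = rewrite(("+", ("*", "x", 1), 0))
print(result)  # -> x
```

Greedy rewriting like this can get stuck when rules must be applied in a non-obvious order; equality saturation avoids that by never discarding an equivalent form, which is what makes it strong enough to handle symbolic integration with a small rule set.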