Hacker News

verdverm today at 7:09 AM

I would put that under the umbrella of algo/math, i.e. the structure of the LLM is part of the algo, which is itself governed by math

For example, DeepSeek has done some interesting things with attention via changes to the structures/algos, but all of this is still optimized by gradient descent, which is why models do not learn facts and such from a single pass. It takes many passes to refine the weights that go into the math formulas.
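A toy illustration of that point (nothing to do with DeepSeek's actual code, just plain SGD on one weight): each pass only nudges the weight a little toward the target, so a single exposure to a "fact" barely registers, and it takes many passes to converge.

    # Minimal sketch: a one-weight model "memorizing" one fact (x=1 -> y=3)
    # with squared-error loss and SGD. Names and values are illustrative.
    w = 0.0          # single weight
    lr = 0.01        # learning rate
    x, y = 1.0, 3.0  # the "fact" we want the model to absorb

    for step in range(1, 201):
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of (pred - y)^2
        w -= lr * grad              # one small nudge per pass
        if step in (1, 10, 100, 200):
            print(f"pass {step:3d}: w = {w:.3f}  (target 3.0)")

After one pass w is only ~0.06; it takes on the order of a couple hundred passes before it sits near 3.0, which is the single-pass limitation in miniature.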


Replies

catlifeonmars today at 7:26 AM

> I would put that under the umbrella of algo/math, i.e. the structure of the LLM is part of the algo, which is itself governed by math

Yes, you’re right. I misspoke.

I’m curious if there are ways to get around the monolithic nature of today’s models. There have to be architectures where a generalized model can coordinate specialized models that are cheaper to train, e.g. calling into a tool which is actually another model. Pre-LLM this was called boosting or “ensemble of experts” (I’m sure I’m butchering some nuance there).
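A rough sketch of what that coordination could look like, purely illustrative (the classes and the keyword routing rule are made up, not any real library's API): the generalist treats each specialized model like a tool it can call into.

    # Hypothetical sketch of a generalist delegating to specialized models.
    class SpecializedModel:
        def __init__(self, domain):
            self.domain = domain
        def answer(self, prompt):
            return f"[{self.domain} model's answer to: {prompt}]"

    class GeneralistRouter:
        def __init__(self, experts):
            self.experts = experts  # domain -> SpecializedModel
        def route(self, prompt):
            # A real system would let the generalist itself decide when to
            # delegate; a crude keyword match just shows the control flow.
            for domain, expert in self.experts.items():
                if domain in prompt.lower():
                    return expert.answer(prompt)
            return f"[generalist's own answer to: {prompt}]"

    router = GeneralistRouter({
        "sql": SpecializedModel("sql"),
        "chemistry": SpecializedModel("chemistry"),
    })
    print(router.route("Write a SQL query for monthly revenue"))
    print(router.route("What rhymes with orange?"))

The appeal is that each SpecializedModel could be trained (or fine-tuned) independently and cheaply, with only the coordinating model needing to be general.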