Hacker News

Embarrassingly simple self-distillation improves code generation

465 points by Anon84 | today at 10:26 AM | 141 comments

Comments

bensyverson today at 12:02 PM

Really fascinating how this works; it's basically context-aware decoding. From the paper:

> Code interleaves fork positions, where several continuations are genuinely plausible and may correspond to different solution approaches, with lock positions, where syntax and semantics leave little ambiguity but a low-probability distractor tail still remains… The best global decoding setting is therefore necessarily a compromise; we call this tension the precision-exploration conflict.

In other words, just like us, the model needs to shift from "exploration" in "fork" mode (divergent thinking to produce a creative solution) to "precision" in "lock" mode (producing syntactically correct code).

What this paper shows is that their simple technique (SSD) can improve the ranking of optimal tokens in both lock and fork positions, meaning the model is more likely to explore when it should be exploring, and more likely to be precise when it needs to be.
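The fork/lock tension can be illustrated with a toy softmax over hypothetical next-token logits (the numbers below are made up for illustration, not from the paper): no single global temperature serves both position types well.

```python
import math

def sample_probs(logits, temperature):
    """Softmax over logits at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical logits: a "fork" position has several plausible
# continuations; a "lock" position has one clearly correct token
# plus a low-probability distractor tail.
fork_logits = [2.0, 1.9, 1.8, -1.0]
lock_logits = [5.0, 0.0, -0.5, -1.0]

for t in (0.3, 1.0):
    fork = sample_probs(fork_logits, t)
    lock = sample_probs(lock_logits, t)
    # Low t: lock is safe, but fork collapses toward one branch.
    # High t: fork explores, but lock leaks mass to distractors.
    print(f"T={t}: fork={[round(p, 2) for p in fork]} "
          f"lock={[round(p, 2) for p in lock]}")
```

Any global setting trades fork diversity against lock precision; the paper's claim is that SSD reshapes the distributions so one setting hurts less.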

I love that we're still learning the emergent properties of LLMs!

show 9 replies
wg0 today at 12:10 PM

After TurboQuant and Gemma 4, I came across the following video[0] of Gemma running on a local machine at 50 tokens/second.

That already looks like Sonnet 3.x and 4 level capability to me: the model in question (Gemma 4) sets up a whole Python project with a UI and installs Python libraries using uv, etc.

Add this simple self-distillation to the picture, and by 2028 I expect much cheaper coding model providers with far more generous usage limits, with power users mostly running their own models anyway.

Anyone using these models as "non-deterministic transpilers" from natural language to code (experienced engineers who can write the code themselves) would probably not be paying any AI provider at all.

[0] https://www.youtube.com/watch?v=-_hC-C_Drcw

show 2 replies
zyklu5 today at 7:41 PM

Their explanation for why their idea (SSD) might work, the precision-exploration conflict hypothesis, points at a problem that adaptive decoding also tries to solve.

https://ai.meta.com/research/publications/adaptive-decoding-...

khalic today at 11:51 AM

Incredible; this will translate to better coding models in the near future.

We really need to develop better tools to understand what's happening inside these NNs. Working with high-dimensional spaces is not something we're good at, and we're basically throwing stuff at the wall and seeing what sticks.

uduni today at 5:20 PM

It's crazy how much better you can make LLM output just by asking "is this the most elegant solution?" in a loop.

(Not fine-tuning, but interesting nonetheless. If a model can so easily find a more elegant solution, why didn't it pick that in the first place?)
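The critique-and-refine loop described above can be sketched as follows; `ask_llm` is a stand-in for any chat-completion call, and the prompts and names here are hypothetical, not from the paper.

```python
def refine(task: str, ask_llm, rounds: int = 3) -> str:
    """Draft a solution, then repeatedly ask the model to improve it."""
    solution = ask_llm(f"Write code for: {task}")
    for _ in range(rounds):
        answer = ask_llm(
            f"Task: {task}\nSolution:\n{solution}\n"
            "Is this the most elegant solution? "
            "If not, reply with an improved version only."
        )
        if answer.strip() == solution.strip():
            break  # the model stands by its answer; stop early
        solution = answer
    return solution
```

The early-exit check matters in practice: without it, the loop keeps burning tokens even after the model has converged on an answer.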

show 1 reply
drdrek today at 8:48 PM

This is the "factors" bonanza in finance all over again: you get a generally useful model, then you overfit it to some criteria and announce an advancement in the field, then it performs worse in real life. New infinite academic article glitch just dropped, boys!

0x3f today at 11:49 AM

Haven't read the paper yet, but it is interesting how seemingly simple many breakthroughs in ML are. Even transformers are like that. Maybe it's hindsight bias.

I suppose we just don't have a deeper underlying theory to lean on and help us 'design' anything.

show 2 replies
OxfordOutlander today at 8:02 PM

So... it's like a golfer who hits thousands of balls into an open field without ever once aiming for a hole. The relentless repetition flawlessly locks in their foundational muscle memory and basic swing mechanics, so when they finally step up to a real course, they don't have to waste a single thought on how to hold the club. Their basic swing is completely automatic - they can confidently take the creative, high-risk shot required to actually sink a hole-in-one.

ultramann today at 1:55 PM

Maybe not the thing I should be focusing on, but I was surprised this paper came from Apple. I was under the impression that Apple's AI/LLM research was far behind the curve. I get that research is a rising-tide-lifts-all-boats situation; I've just seen lots of negative news about Apple's progress on this front, and heuristically haven't seen many (any?) Apple research papers make it to the front page of Hacker News. Could anyone more familiar with Apple AI research comment on this?

show 1 reply
p1esk today at 3:32 PM

It’s so ironic that Apple still publishes AI research and OpenAI does not.

show 2 replies
gavinray today at 7:22 PM

Why have we been fed the narrative that training models on their own output progressively degrades quality?

It's the first thing anyone would think of (like a self-hosted compiler) but everything I've read said "it doesn't work."

EDIT: For context:

  > Shumailov et al. (2024) — "AI models collapse when trained on recursively generated data" (Nature, 2024)
mickdarling today at 7:02 PM

I'm working on a tool to determine which portions of an LLM process can be optimized, how to measure that optimization, and whether a given portion is optimizable at all. The shaping pattern they talk about here is directly relevant: it makes many more processes potentially optimizable by looking at the pattern itself rather than just whether the metrics go up or down.

l5870uoo9y today at 12:18 PM

> Our method, simple self-distillation (SSD), is embarrassingly simple: sample solutions from the base model with specified temperature and truncation, then fine-tune on those raw, unverified samples via standard cross-entropy loss.

So you prompt the base model for an answer and then rerun the prompt with the answer from the first run?
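As quoted, the recipe has two phases: (1) sample unverified completions from the base model at a chosen temperature and truncation, then (2) fine-tune the same model on them with standard cross-entropy loss (ordinary SFT, not a second prompting pass). Phase 1 might look like this sketch, where `generate` is a hypothetical sampling callable and the default settings are assumptions, not the paper's:

```python
def ssd_dataset(prompts, generate, temperature=1.0, top_p=0.95, k_samples=4):
    """Phase 1: collect raw, unverified samples from the base model.
    No tests are run and no filtering happens; samples are used as-is."""
    data = []
    for prompt in prompts:
        for _ in range(k_samples):
            completion = generate(prompt, temperature=temperature, top_p=top_p)
            data.append((prompt, completion))
    return data

# Phase 2 would then be ordinary supervised fine-tuning (cross-entropy
# loss) of the same base model on `data`.
```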

show 2 replies
an0malous today at 1:35 PM

I’d like to understand AI research better and I recall some posts a while back where someone collected all the key papers that one should read, but I don’t remember enough to be able to find it. Does anyone know what I’m talking about and could link me to that post?

show 1 reply
roger_ today at 11:58 AM

Skimmed this but don't have an intuitive understanding of why this works and how temperature and truncation factor in.

itmitica today at 2:47 PM

It’s an interesting claim, and the reported benchmark gains are large, but it is still an April 1, 2026 arXiv preprint, so I’d treat it as promising rather than settled.

dwa3592 today at 3:35 PM

Can anyone help clarify these doubts? I didn't see any information about how different the test/benchmark set is from the training set. That feels like an important gap not to fill in an ML paper. What if there is overlap between the problems in the test set and the training set? What is the decontamination strategy going from LCBv5 to LCBv6?

crustycoder today at 4:33 PM

"SSD improves Qwen3-30B-Instruct from 42.4% to 55.3% pass@1 on LiveCodeBench v6"

I know virtually nothing about this area but my naive take is that something that means it still only passes tests around half the time doesn't seem like a particularly big jump forwards.

What am I missing?

show 1 reply
xbmcuser today at 1:33 PM

So the chances of the Singularity went up.

show 1 reply
vishnugupta today at 12:47 PM

Can someone please ELI5 this to a fellow web developer? I read the abstract but couldn't understand much.

show 3 replies
fooker today at 2:27 PM

I'm excited for the long tail of techniques like this that are going to be discovered over the next several decades that's going to make this technology eventually run on a toaster!

drooby today at 12:24 PM

Fascinating...

This feels eerily similar to sleep consolidation or synaptic pruning

show 1 reply
augment_me today at 2:42 PM

Isn't this what DeepSeek + Kimi did to Claude?

smallerize today at 12:12 PM

I don't suppose they published the improved models?

4b11b4 today at 1:43 PM

Self-consistency meets fine-tuning?

robwwilliams today at 3:07 PM

Very cool. An evolutionary biologist would say: welcome to the party!

Mutation rate modulation is the AI engineers' temperature, and selection does the trimming of the outliers.

Some more serious biomorphic thinking and we may get to the next big insight courtesy of 3+ billion years of evolution: evolution that enabled a great ape species to write a paper like this and build LLMs like Gemma 4 that totally rock on a 3.5-pound MacBook Pro M5 Max with 128 GB of RAM.

antirez today at 1:59 PM

Another potentially useful trick, based on the observation that a longer token budget improves model performance: generate solutions using a lot of thinking budget, ask the LLM to turn the trace into a more compact one, then SFT on that. That said, I have the feeling the result of the paper will likely be hard to apply in practice without affecting other capabilities, and/or not be superior to other techniques that provide a similar improvement in sampling.

porridgeraisin today at 4:18 PM

There's an obvious baseline that seems to be missing.

If you sample from the base model with T=1.6, top_k=20, top_p=0.8, i.e., the decode settings used for the distillation's ground truth, does it match the SSD'd model plus some decoding setting, performance-wise?

Their sweep is missing this and only covers "standard" decoding settings.
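For reference, one step of that decode configuration (T=1.6, top_k=20, top_p=0.8) combines temperature scaling with top-k and nucleus truncation. A generic sketch, not the paper's code:

```python
import math
import random

def sample_token(logits, temperature=1.6, top_k=20, top_p=0.8, rng=random):
    """Sample one token index using top-k + nucleus (top-p) truncation."""
    # Temperature-scaled softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    ranked = sorted(((e / z, i) for i, e in enumerate(exps)), reverse=True)
    # Top-k truncation, then keep the smallest prefix whose mass >= top_p.
    ranked = ranked[:top_k]
    kept, cum = [], 0.0
    for p, i in ranked:
        kept.append((p, i))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the kept tokens and sample.
    total = sum(p for p, _ in kept)
    r = rng.random() * total
    for p, i in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][1]
```

With a sharply peaked distribution the nucleus cutoff keeps only the top token even at T=1.6, which is why the baseline comparison the comment asks for is meaningful: truncation, not just temperature, shapes what the model ever sees.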

jofzar today at 11:44 AM

> simple self-distillation (SSD):

Sorry Apple, SSD is already taken; you can't use that acronym.

show 3 replies
politelemon today at 11:55 AM

It's cringeworthy to see that the original paper itself is editorialised.

Title should be: Simple Self-Distillation Improves Code Generation

show 2 replies
ape4 today at 11:57 AM

Shouldn't a scientific paper be using metric units (like 30T) rather than 30B?

There are two distinct billions. https://en.wikipedia.org/wiki/Billion

show 1 reply