Hacker News

a2128 · yesterday at 7:49 PM · 11 replies

> You're not just using a tool — you're co-authoring the science.
This README is an absolute headache: it's filled with AI writing, terminology that doesn't exist or is being used improperly, and unsound ideas. For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model to find the source of the refusals(?), which is a fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer. I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" while bouncing the worst ideas back.

Replies

Retr0id · yesterday at 11:15 PM

I don't know if this particular tool/approach is legit, but LLM ablation is definitely a thing: https://arxiv.org/abs/2512.13655

userbinator · today at 2:54 AM

"Getting high on your own supply" is exactly what I'd expect from those immersed in this new AI stuff.

paradox460 · yesterday at 10:22 PM

It's not just a headache, it's bad

creatonez · yesterday at 8:08 PM

> For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer.

That doesn't mean there couldn't be a "concept neuron" that is doing the vast majority of heavy lifting for content refusal, though.

dinunnob · yesterday at 8:13 PM

Hmm, Pliny is amazing - if you kept up with him on social media you'd maybe like him: https://x.com/elder_plinius

D-Machine · today at 12:58 AM

> "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?)

This is not what an ablation study is. An ablation study removes or swaps out ("ablates") different components of an architecture (a layer or set of layers, all activation functions, the backbone, some fixed processing step, or any other component or set of components) and/or, in some cases, other aspects of training (a unique or different loss function, a specialized pre-training or fine-tuning step, etc.) in order to better understand which component(s) of some novel approach are actually responsible for any observed improvements. It is a very broad research term of art.
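To make the broad sense concrete, here is a minimal toy sketch (not from the repo under discussion; all names and shapes are illustrative) of a component-level ablation: run a tiny stack of layers with and without one component and measure how much the output shifts.

```python
# Toy illustration of an ablation in the broad sense: replace one
# component of a (synthetic, hypothetical) model with a pass-through
# and compare the output to the un-ablated baseline.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) * 0.5 for _ in range(3)]  # toy 3-layer "model"

def forward(x, ablate_idx=None):
    """Run the toy model, optionally skipping (ablating) one layer."""
    for i, w in enumerate(layers):
        if i == ablate_idx:
            continue  # ablated: this component contributes nothing
        x = np.tanh(w @ x)
    return x

x = rng.standard_normal(4)
baseline = forward(x)
for i in range(len(layers)):
    delta = np.linalg.norm(forward(x, ablate_idx=i) - baseline)
    print(f"ablating layer {i}: output shift = {delta:.3f}")
```

In a real ablation study the comparison would be a task metric over a benchmark (often with retraining), not a raw output norm; the point is only the with/without structure of the experiment.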

That being said, the "Ablation Strategies" [1] the repo uses do not fill me with confidence that the kind of ablation being done here really achieves what the author claims, and neither does a Ctrl+F for "ablation" in the README. All the "ablation" techniques are marked "Novel" in his table [2], i.e. they are unpublished, possibly not publicly or carefully tested, and could easily not work at all.

From later tables, I am not convinced I would want to use these ablations: they ablate rather huge portions of the models, and so probably do result in massively broken models (as other commenters in this thread have noted). EDIT: Also, in other cases [1], they ablate (zero out) architecture components in a way that seems incredibly braindead if you have even a basic understanding of the linear algebra and the dependencies between components of a transformer LLM. There is clearly nothing sound about this, in contrast to e.g. abliteration [3].

[1] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-file#ablation-strategies

[2] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...

EDIT: As another user mentions, "ablation" also has a narrower meaning in some refusal analyses, or when building guardrails / steering response vectors and the like. That is just one specific kind of ablation, and it really should be called "abliteration", not "ablation" [3].

[3] https://huggingface.co/blog/mlabonne/abliteration, https://arxiv.org/abs/2512.13655
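The abliteration recipe in [3] can be sketched in a few lines (synthetic data and hypothetical shapes here, not the repo's code): estimate a single "refusal direction" as the difference of mean activations on refused vs. complied prompts, then orthogonalize a weight matrix against it so the layer can no longer write along that direction.

```python
# Hedged sketch of abliteration: find a refusal direction r, then
# orthogonalize a weight matrix so its output has no component along r.
import numpy as np

rng = np.random.default_rng(1)
d = 8
true_dir = rng.standard_normal(d)  # synthetic "refusal direction"

# Synthetic activations: "harmful" prompts carry the refusal direction.
acts_harmless = rng.standard_normal((32, d))
acts_harmful = rng.standard_normal((32, d)) + 3.0 * true_dir

# 1. Estimate r as the normalized difference of mean activations.
r = acts_harmful.mean(axis=0) - acts_harmless.mean(axis=0)
r /= np.linalg.norm(r)

# 2. Orthogonalize a weight matrix: W' = W - r r^T W, so for any
#    input x, the output W' x has zero component along r.
W = rng.standard_normal((d, d))
W_abl = W - np.outer(r, r @ W)

x = rng.standard_normal(d)
print(f"output component along r after ablation: {abs(r @ (W_abl @ x)):.2e}")  # effectively zero
```

Note this removes exactly one direction from the layer's output space, which is why abliteration tends to leave the rest of the model's behavior intact, in contrast to zeroing out whole layers or weight blocks.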

fragmede · yesterday at 11:35 PM

Alternately, it's intentional. It very effectively filters out people with your mindset. You can decide whether that's a good thing or not.

jeffbee · today at 1:21 AM

"Ablation studies" are a real thing in LLM development, but in this context it serves as a shibboleth by which members of the group of people who believe that models are "woke" can identify each other. In their discourse it serves a similar purpose to the phrase "gain of function" among COVID-19 cranks. It is borrowed from relevant technical jargon, but is used as a signal.

robertk · yesterday at 8:37 PM

You don't know what you are talking about. Obviously refusal circuitry does not live in one layer, but the repo is built on a paper with sound foundations from an Anthropic scholar working with a DeepMind interpretability mentor: https://scholar.google.com/citations?view_op=view_citation&h...

lazzlazzlazz · today at 12:34 AM

Ironic to see this comment when Pliny, the author of this codebase, is one of the most sophisticated LLM jailbreakers/red-teamers today. So presumptive and arrogant!
