
D-Machine · today at 1:40 AM

Doesn't look legit to me. You're talking about abliteration, which is real. But the tool the OP linked is doing novel and very dumb ablation: zeroing out huge components of the network, or zeroing out isolated components in a way that indicates extreme ignorance of the basic math involved.

Compared to abliteration, none of this tool's ablation approaches make even half a whit of sense if you understand even the most basic aspects of, e.g., a Transformer LLM architecture, so my guess is this is BS.


Replies

hexaga · today at 6:43 AM

The terminology comes from the post[0] which kicked off interest in orthogonalizing weights w.r.t. a refusal direction in the first place. That is, abliteration was not originally called abliteration, but refusal ablation.
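For anyone unfamiliar with what "orthogonalizing weights w.r.t. a refusal direction" means, here is a minimal toy sketch in numpy. It assumes (per the post in [0]) that refusal is mediated by a single direction in the residual stream, estimated as a difference of mean activations on harmful vs. harmless prompts; the activation data here is synthetic and the weight matrix is random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Pretend these are residual-stream activations collected on
# harmful vs. harmless prompts (synthetic data for illustration).
acts_harmful = rng.normal(size=(100, d)) + np.array([3.0] + [0.0] * (d - 1))
acts_harmless = rng.normal(size=(100, d))

# Refusal direction: difference of means, normalized.
# This is the rank-1 assumption the reply below criticizes.
r = acts_harmful.mean(0) - acts_harmless.mean(0)
r /= np.linalg.norm(r)

# A weight matrix whose output (W @ x) writes into the residual stream.
W = rng.normal(size=(d, d))

# Orthogonalize: project the component along r out of W's output,
# i.e. W_abl = (I - r r^T) W, so the layer can no longer write along r.
W_abl = W - np.outer(r, r) @ W

print(np.allclose(r @ W_abl, 0.0))  # True: no output component along r
```

Doing this to every matrix that writes into the residual stream (attention output and MLP down-projections) is the whole trick; nothing is zeroed out wholesale, which is what distinguishes it from the linked tool's approach.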

Ultimately though, OP is just what you get if you take the idea of abliteration and tell an LLM to fix the core problems: that refusal isn't actually always exactly a rank-1 subspace, nor the same throughout the net, nor nicely isolated to one layer/module, that it damages capabilities, and so on.

The model looks at that list and applies typical AI one-off 'workarounds' to each problem in turn while hyping up the prompter, and you get this slop pile.

[0]: https://www.lesswrong.com/posts/refusal-in-llms-is-mediated-...
