This tool originates from the paper referenced in the README (linked below). Here is a summary:
Research has shown that refusal behavior in language models is not governed by complex, distributed logic, but by a single causal “direction” in their activation space. The researchers recorded the model’s internal activations on a set of harmful prompts and on a set of harmless prompts, averaged each set, and took the difference of the two means. This yields a single vector (the refusal direction) whose presence and strength in the model’s activations determines whether it refuses. To demonstrate that the direction is causal, they edited the activations at inference time: removing the direction made the model answer dangerous questions, while adding it forced refusals of harmless ones.
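As a rough illustration of the difference-of-means idea (not this repository’s actual code), the sketch below reads hidden states from a Hugging Face causal LM and averages them at the last token position; the model name, probe layer, and prompt lists are placeholder assumptions, and in practice the prompts would be formatted with the model’s chat template.

```python
# Minimal sketch: compute a "refusal direction" as the difference of mean
# activations on harmful vs. harmless prompts. All names marked as
# placeholders are assumptions, not the tool's real configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-1.5B-Instruct"  # placeholder model choice
LAYER = 14                               # placeholder layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def mean_activation(prompts, layer):
    """Average the residual-stream activation at the last token position."""
    acts = []
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # hidden_states[layer] has shape (batch, seq_len, d_model)
        acts.append(out.hidden_states[layer][0, -1, :])
    return torch.stack(acts).mean(dim=0)

harmful_prompts = ["..."]   # placeholder: prompts the model normally refuses
harmless_prompts = ["..."]  # placeholder: benign instructions

# Refusal direction = difference of the two mean activations, normalized.
refusal_dir = mean_activation(harmful_prompts, LAYER) - mean_activation(harmless_prompts, LAYER)
refusal_dir = refusal_dir / refusal_dir.norm()
```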
This discovery enables a permanent, inexpensive jailbreak technique called “weight orthogonalization.” With a one-time, computationally light edit, the weight matrices that write into the model’s residual stream are made orthogonal to the refusal direction, so the model can no longer represent that direction and effectively loses the ability to refuse. The method proved nearly 100% effective on 13 open-source models, including Llama, Qwen, and Gemma at various sizes. Performance stayed nearly identical across standard benchmarks (MMLU, GSM8K), with the sole exception of TruthfulQA, where scores dropped, suggesting a deep connection between safety mechanisms and truthfulness.
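The orthogonalization step amounts to projecting the refusal direction out of every matrix that writes into the residual stream (token embeddings, attention output projections, MLP down-projections). The sketch below, continuing from the one above, shows that projection under Llama-style module names (`embed_tokens`, `o_proj`, `down_proj`); those names and the weight layouts are assumptions, and this is not the tool’s actual implementation.

```python
# Minimal sketch of weight orthogonalization with respect to `refusal_dir`
# (assumed unit-norm, from the previous sketch). Module names follow the
# Llama/Qwen/Gemma convention in Hugging Face Transformers (an assumption).
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from the output of a Linear layer.

    `weight` has shape (d_model, d_in) (Hugging Face Linear convention, rows
    index the output dimension); `direction` has shape (d_model,).
    """
    return weight - torch.outer(direction, direction @ weight)

with torch.no_grad():
    r = refusal_dir.to(model.dtype)

    # Token embeddings write directly into the residual stream; each row is a
    # d_model-sized embedding, so project the rows here.
    emb = model.get_input_embeddings().weight        # (vocab_size, d_model)
    emb.copy_(emb - torch.outer(emb @ r, r))

    for layer in model.model.layers:
        attn_out = layer.self_attn.o_proj.weight     # (d_model, n_heads * d_head)
        mlp_out = layer.mlp.down_proj.weight         # (d_model, d_ff)
        attn_out.copy_(orthogonalize(attn_out, r))
        mlp_out.copy_(orthogonalize(mlp_out, r))
```

Because the edit is applied once to the weights, the modified model needs no runtime hooks: it can be saved and used like any other checkpoint.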
link to the paper: https://arxiv.org/pdf/2406.11717