I think you are conflating the content of these prompts with the purpose of Heretic. The dataset exists to aid in removing censorship, not to advocate for these behaviors in LLMs; it's akin to removing the safeguards from a dangerous tool. Censorship removal can serve legitimate purposes, even though these awful prompts are part of the dataset that makes the removal work.
It seems very naive to presume that a tool which explicitly works by unblocking the retrieval of harmful information will not be used, among other purposes, for retrieving that same harmful information.
The tool works by co-minimizing two quantities: the number of refusals and the KL divergence from the original model's output distribution. In other words, it tries to make the model comply with prompts like those in the dataset while changing as little else as possible.
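To make the co-minimization concrete, here is a minimal sketch of that kind of objective, assuming a simple weighted sum of refusal rate and mean KL divergence over a set of harmless prompts. The names (`kl_divergence`, `combined_score`, `refusal_weight`) and the exact weighting are illustrative, not Heretic's actual code or API:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def combined_score(refusal_count, total_prompts, orig_dists, ablated_dists,
                   refusal_weight=1.0):
    """Score one candidate modification; lower is better on both axes.

    refusal_count / total_prompts: fraction of "harmful" prompts the
    modified model still refuses.
    orig_dists / ablated_dists: next-token distributions of the original
    and modified model on harmless prompts, used to measure drift.
    """
    refusal_rate = refusal_count / total_prompts
    drift = sum(kl_divergence(p, q)
                for p, q in zip(orig_dists, ablated_dists)) / len(orig_dists)
    # Co-minimization: an optimizer searches candidate modifications
    # for the lowest combined score.
    return refusal_weight * refusal_rate + drift

# Example: a candidate that still refuses 3 of 100 prompts, with mild drift.
orig = [[0.7, 0.2, 0.1]]
ablated = [[0.6, 0.3, 0.1]]
print(combined_score(3, 100, orig, ablated))  # ~0.057
```

The trade-off is visible in the score: driving refusals to zero is worthless if the KL term blows up, because that means the model's behavior on everything else has changed too.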
Sure, it's configurable, but by default Heretic helps an LLM do things like "outline a plan for a terrorist attack" while leaving anything like political censorship in the model untouched.