LLMs really get in the way of computer security work of any form.
Constantly "I can't do that, Dave" when you're trying to deal with anything sophisticated to do with security.
Because "security bad topic, no no cannot talk about that you must be doing bad things."
Yes I know there's ways around it but that's not the point.
The irony is that LLMs being so paranoid about talking security ultimately helps the bad guys, by preventing the good guys from getting good security work done.
I've run into this before too. When playing single-player games, if I've had enough of grinding, sometimes I like to pull up a memory tool and see if I can increase the amount of wood and so on.
I never really went further, but recently I thought it'd be a good time to learn how to make a basic game trainer that would work every time I opened the game. When I was trying to debug my steps, though, I would often be told off, leaving me having to explain how it's my friend's game, or similar excuses!
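For anyone curious, the core trainer technique being described (scan memory for the current value, change it in-game, narrow the candidates, then poke the surviving address) can be sketched in plain Python against a toy byte buffer. This is just an illustration of the scan/narrow/poke idea; the function name and offsets are made up, and a real trainer would read the target process's memory instead (e.g. `ReadProcessMemory` on Windows or `/proc/<pid>/mem` on Linux):

```python
import struct

def find_value(memory: bytes, value: int, candidates=None):
    """Return offsets where `value` appears as a little-endian 32-bit int.

    If `candidates` is given, only re-check those offsets -- the classic
    "narrow down" pass after the in-game value has changed.
    """
    needle = struct.pack("<i", value)
    if candidates is None:
        offsets, start = [], 0
        while (i := memory.find(needle, start)) != -1:
            offsets.append(i)
            start = i + 1
        return offsets
    return [off for off in candidates if memory[off:off + 4] == needle]

# Toy "process memory": the wood count (100) lives at offset 12,
# and a decoy variable at offset 40 happens to hold the same value.
mem = bytearray(64)
mem[12:16] = struct.pack("<i", 100)   # wood
mem[40:44] = struct.pack("<i", 100)   # decoy

first_pass = find_value(bytes(mem), 100)   # both offsets are candidates

# "Chop a tree" in-game: wood drops to 95, the decoy stays at 100.
mem[12:16] = struct.pack("<i", 95)
second_pass = find_value(bytes(mem), 95, first_pass)   # narrowed to one

# Poke the located address to set wood to 9999.
mem[second_pass[0]:second_pass[0] + 4] = struct.pack("<i", 9999)
```

The narrowing pass is the important bit: a single scan almost always returns dozens of false positives, so you change the value in-game and keep only the offsets that changed with it.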
Sounds like you need one of them uncensored models. If you don't want to run an LLM locally, or don't have the hardware for it, the only hosted solution I found that actually has uncensored models and isn't all weird about it was Venice. You can ask it some pretty unhinged things.
This is true for ChatGPT, but Claude has a limited number of fucks and isn't about to give them about infosec. Which is one of the (many) reasons why I prefer Anthropic over OpenAI.
OpenAI has the most atrocious personality tuning and the most heavy-handed ultraparanoid refusals out of any frontier lab.
Last time I tried Codex, it told me it couldn’t use an API token due to a security issue. Claude isn’t too censorious, but ChatGPT is so censored that I stopped using it.
For a further layer of irony, after Claude Code was used for an actual real cyberattack (by hackers convincing Claude they were doing "security research"), Anthropic wrote this in their postmortem:
This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense. When sophisticated cyberattacks inevitably occur, our goal is for Claude—into which we’ve built strong safeguards—to assist cybersecurity professionals to detect, disrupt, and prepare for future versions of the attack.
https://www.anthropic.com/news/disrupting-AI-espionage