That's odd, because I'm using plain old GPT-5 as the backend model for a bunch of offensive work and I haven't hit any hangups at all. But I'm running a multi-agent setup where each component has a constrained view of the big picture (e.g., a fuzzer agent with tool calls to drive a web fuzzer, hunting for one particular class of vulnerability); the high-level orchestration is still mostly human-mediated.
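
For what it's worth, the constrained-view pattern is roughly this shape — a narrow system prompt plus a single tool the agent can call, with the human deciding what scope each agent gets. All the names below are illustrative stubs, not my actual harness:

```python
import json

# Narrow system prompt: the agent knows about one vuln class and one tool,
# nothing about the wider engagement. (Illustrative text, not a real prompt.)
FUZZER_AGENT_PROMPT = (
    "You drive a web fuzzer via the run_fuzzer tool. "
    "Look only for SQL injection on the endpoint you are given. "
    "You have no other context about the target."
)

def run_fuzzer(endpoint: str, payload_set: str) -> dict:
    """Stub for the fuzzer harness; a real one would shell out to ffuf/wfuzz."""
    return {"endpoint": endpoint, "payload_set": payload_set, "hits": []}

# The only capability this agent is handed.
TOOLS = {"run_fuzzer": run_fuzzer}

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching local function."""
    fn = TOOLS[tool_call["name"]]
    result = fn(**tool_call["arguments"])
    return json.dumps(result)

# Simulating one tool call the model might emit; in practice the human
# orchestrator chooses which agent runs and against what.
print(dispatch({"name": "run_fuzzer",
                "arguments": {"endpoint": "/login",
                              "payload_set": "sqli-basic"}}))
```

The point is just that the model never sees the full operation; it sees one endpoint, one vuln class, one tool.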