> All these findings were reported to both openai and anthropic and they were not interested in responding
Let’s dive into why. When we run normal bounty and responsible disclosure programs there’s usually some level of disregard for issues that can’t / won’t be fixed. They just accept the risk. Perhaps because LLMs don’t have a clean divide between control and input that’s makes the problem unsolvable. Yes. You can add more guardrails and context but that all takes more tokens and in some cases makes results worse for regular usages.
The why might be valid, but it's not excusable. If you author a product that can so easily help people cause harm, you probably should own some responsibility of the outcomes. OAI does not like this, hence the bill.
The US already messed this up with guns. Do they want to go the same path again? Answer: "probably, yes".
LLM providers are not obliged to only use LLMs to guard against hazardous output. They could use other automated and non-automated techniques. And they ought to do so if they are given good evidence that existing safeguards are inadequate. Loss of product quality or additional cost should be secondary.