Even assuming a security program that clears the 101 quality bar, there are a number of reasons why this can still happen at companies.
Summarized: security is about risk acceptance, not risk removal, and there's massive business pressure to risk-accept AI. Risk acceptance usually means some supplemental control that isn't the ideal but manages the exposure. With AI tools, though, there are very few of these: the vendors are small; the tools aren't really service accounts, though IMO treating them as such is probably the best way to monitor them; integrations are trivially easy to stand up; and eng companies hate stripping devs of admin rights of any kind, but if devs keep them, random AI on endpoints becomes very likely.
I'm ignoring a lot of nuance, but a solid security program blown open by LLM vendors is going to be common, let alone what happens to bad security programs. I think many sec teams are just waiting for the other shoe to drop, hoping for some evidentiary support, while managing heavy pressure to go full bore on AI integration until then.