One of the skills needed to use AI effectively for code is knowing that telling the AI "don't commit secrets" is not a reliable strategy.
Design your secrets to include a common prefix, then use deterministic scanning tools such as git hooks to prevent them from being checked in.
Or have a git hook that knows which environment variables hold secrets and checks for their values.
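Here's a minimal sketch of what such a hook could look like as a Python pre-commit script. The prefix `acme_sk_` and the environment variable names are hypothetical placeholders; the point is that the check is deterministic, not dependent on a model remembering an instruction.

```python
#!/usr/bin/env python3
"""Pre-commit hook: block commits that appear to contain secrets.

A minimal sketch. Assumes secrets are minted with a shared prefix
(here the hypothetical "acme_sk_") and that sensitive values also
live in the environment variables listed below.
"""
import os
import subprocess
import sys

SECRET_PREFIX = "acme_sk_"  # hypothetical prefix baked into every secret
SENSITIVE_ENV_VARS = ["DATABASE_PASSWORD", "STRIPE_API_KEY"]  # example names


def staged_additions() -> list[str]:
    """Return the added lines from the staged diff."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]


def main() -> int:
    findings = []
    for line in staged_additions():
        # Deterministic check 1: any value carrying the shared prefix.
        if SECRET_PREFIX in line:
            findings.append(f"contains secret prefix {SECRET_PREFIX!r}: {line.strip()}")
        # Deterministic check 2: the literal value of a known sensitive env var.
        for var in SENSITIVE_ENV_VARS:
            value = os.environ.get(var)
            if value and value in line:
                findings.append(f"contains the value of ${var}: {line.strip()}")
    if findings:
        print("Commit blocked, possible secrets staged:", file=sys.stderr)
        for finding in findings:
            print(f"  - {finding}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Saved as `.git/hooks/pre-commit` and made executable, this runs on every commit and fails fast when either check trips, regardless of what the AI (or a human) tried to stage.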
That's such an incredibly basic concept; surely AIs have evolved to the point where you don't need to explicitly state those requirements anywhere?