That it is very likely not going to work as advertised, and might even backfire.
The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
The sad reality is that nobody ever cares about the security/ethics of their product unless they are pressured. Model evaluation against some well defined ethics framework or something like HarmBench are not without costs, nobody wants to do that. It is similar to pentesting. It is good that such suggestions are being pushed forward to make sure model owners are responsible here. It also protects authors and reduces the risk of their works being copied verbatim. I think this is what morel owners are afraid of the most.