Unless they released a model named "Tom Cruise-inator 3000," I don't see any way to legislate that intent in a way that would give a developer any assurance that misuse of their model couldn't land them in significant legal peril. So anything in this ballpark has a huge chilling effect, in my view. I think it's far too early in the AI game to even be putting pen to paper on new laws (the first AI bubble hasn't even popped, after all), but I understand that view is not universal.
I would say text-based models carry a different risk profile compared to video-based ones. At some point (now?) we'd probably need to have the difficult conversation about what level of media impersonation we are comfortable with.