If I understand correctly, when you train a model to perform a particular task, what ultimately matters is the training data: given enough compute, different models will largely converge on the best representation of that data for the task.
So the models themselves aren't really IP; they are the near-inevitable output of optimising over the input data for a given task.
I think this means pretty much everyone, apart from the AI companies themselves, will see these models as pre-competitive.
Why spend huge amounts of money training the same model multiple times when you can collaborate?
Note that it only takes one person, company, or country releasing an open source model for a particular task to nuke the business model of the companies whose strategy is to hoard them.