There are quite a few comments here on code obfuscation.
The strongest form of code obfuscation is based on homomorphic encryption: code is transformed to operate on encrypted data, producing encrypted results that correspond exactly to what the original code would produce on plaintext data. The transformed code is hard-obfuscated as a side effect of that transformation.
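As a minimal sketch of what "operating on encrypted data" means (this is just the homomorphic property, not full FHE, and the parameters are toy textbook values with no security whatsoever): unpadded RSA is multiplicatively homomorphic, so you can multiply two ciphertexts and the result decrypts to the product of the plaintexts.

```python
# Toy demo of the homomorphic property using textbook RSA, which is
# multiplicatively homomorphic: E(a) * E(b) mod n decrypts to a * b.
# Tiny textbook parameters -- NOT secure, purely illustrative.

p, q = 61, 53
n = p * q        # 3233
e = 17           # public exponent
d = 2753         # private exponent (e * d = 1 mod lcm(p-1, q-1))

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Multiply two values without ever decrypting them:
c = (encrypt(4) * encrypt(6)) % n
assert decrypt(c) == 24   # the product of the plaintexts
```

Real FHE schemes support both addition and multiplication on ciphertexts, which is what makes arbitrary computation possible; this sketch only shows the single-operation version of the idea.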
Now build a homomorphic virtual machine that interprets encrypted code over encrypted data. Very hard to understand.
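A toy sketch of the VM half of that idea (the data is encrypted, though the program here is still plaintext, and the parameters are insecure): using the Paillier cryptosystem, which is additively homomorphic, an interpreter's ADD instruction can operate on ciphertexts it can never read.

```python
# A toy "homomorphic VM": the interpreter only ever sees ciphertexts.
# Paillier is additively homomorphic: multiplying ciphertexts mod n^2
# adds the underlying plaintexts. Tiny insecure parameters, sketch only.
import math, random

p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard Paillier generator
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def run(program, stack):
    # The VM executes ADD by multiplying ciphertexts mod n^2; it never
    # learns the plaintext values it is adding.
    for op in program:
        if op == "ADD":
            a, b = stack.pop(), stack.pop()
            stack.append((a * b) % n2)
    return stack

result = run(["ADD", "ADD"], [encrypt(5), encrypt(7), encrypt(30)])
assert decrypt(result[0]) == 42
```

Encrypting the program itself as well, so the VM also cannot tell which instruction it is executing, is the harder step the comment describes, and is where the real cost of this approach lives.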
Now add data encryption/decryption routines, themselves homomorphically encrypted and run by the virtual machine, to prepare and recover the inputs, outputs, or effects of any data or event information for the homomorphic application code. At that point all data in the system is encrypted by means that are hard-obfuscated, running on code that is hard-obfuscated, and the entire system becomes hard^2 (not a formal measure) opaque.
This isn't realistic in practice yet: homomorphic implementations of even simple functions are extremely inefficient for the time being. But it is possible, and improvements in efficiency have not been exhausted.
Equivalent but different implementations of homomorphic code can obviously be made. However, since the only credible explanation for the design decisions in the new code is that they were made to exactly match the original, this precludes any "clean room" defense.
--
Implementing software with neural network models wouldn't stop replication, but the result would decompile into source that was clearly not developed independently of the original implementation.
Even distilling (training a new model on the "decompiled" model) would be a dead giveaway that it was derived directly from the source rather than being a clean-room implementation.
--
I have wondered whether quantum computing might enable an efficient version of homomorphic computing over classical data.
Just some wild thoughts.