Can you elaborate on how any sort of backdoor could be hidden in the model weights?
Hiding something in the code is technically possible, but that would be a bit silly since there isn't much code here to hide it in. A backdoor can't be hidden in a set of numbers that are used solely as operands to simple mathematical operations, so I'm very curious what sort of hidden backdoor you think is here.
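To make that concrete, here is a toy sketch of what "using the weights" amounts to at inference time; the tensors are made up for illustration, not taken from the actual model:

```python
import torch

# Hypothetical toy "model": the weights are just tensors of numbers.
W = torch.randn(4, 3)  # weight matrix
b = torch.randn(3)     # bias vector

x = torch.randn(4)     # some input
y = x @ W + b          # inference is plain arithmetic on those numbers;
                       # the weights are data and are never executed as code
print(y)
```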
When you run their demo locally, two places in the code trigger a warning that the weights are being loaded unsafely. To learn more about this issue, search "pytorch model load safety issues" on Google.
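That is presumably the standard PyTorch warning: torch.load goes through Python's pickle by default, and a malicious checkpoint file can execute arbitrary code at load time. A minimal sketch of the safer pattern, assuming a recent PyTorch version (the filename below is a placeholder):

```python
import torch

# torch.load defaults to Python's pickle, which can run arbitrary code
# embedded in a malicious checkpoint. weights_only=True restricts
# unpickling to plain tensors and primitive containers.
state_dict = torch.load("model.pt", map_location="cpu", weights_only=True)  # "model.pt" is a placeholder path
```

Note the distinction this draws: the risk sits in the loading code path, not in the numbers themselves, which is consistent with the point above about weights being pure data.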