This was apparently a compromise of the library owners' GitHub accounts, so it's not the same scenario as dangerous code in the training data.
I assume most labs don't do anything specific to deal with this, and just hope it gets trained out, since better code should be better rewarded in theory?