
sebastiennight · 12/09/2024

My understanding is that this (interesting) project has been abandoned, and that since then, face recognition models have been trained to defend against it.


Replies

derefr · 12/09/2024

Very likely correct in the literal sense (you shouldn't rely on the published software), but I believe the approach it uses is still relevant and generalizable. That is, you can take whatever the current state-of-the-art facial recognition model is and follow the steps in their paper to produce an adversarial image cloaker that fools that model while being minimally perceptually obvious to a human.
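To make the idea concrete, here is a minimal sketch of that kind of feature-space cloaking attack, assuming a differentiable PyTorch face-embedding model `embed` and a precomputed `target_embedding` from a dissimilar identity (all names and hyperparameters here are illustrative, not taken from the paper). It nudges the photo so its embedding drifts toward the target while a small L-infinity budget keeps the pixel changes subtle:

```python
import torch
import torch.nn.functional as F

def cloak(image, embed, target_embedding, eps=0.03, steps=100, lr=0.01):
    """Perturb `image` (a [0,1] tensor) so `embed` maps it near
    `target_embedding`, keeping the perturbation inside an L-inf
    ball of radius `eps` so it stays hard for a human to notice."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Feature-space loss: pull the cloaked photo's embedding
        # toward the (different-identity) target embedding.
        loss = F.mse_loss(embed(image + delta), target_embedding)
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Project back into the perceptual budget and valid pixels.
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0.0, 1.0) - image)
    return (image + delta).detach()
```

Note the plain box constraint (`eps`) is a simplification on my part; if I recall correctly, the published work bounds a perceptual similarity metric instead, and that perceptual budget is exactly what gets harder to keep small as the recognition models improve.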

(As the models get better, the produced cloaker retains its ability to fool them; what gets sacrificed is the "minimally perceptually obvious to a human" property. Even their 2022 version of the software started doing slightly evident things, like visibly increasing the contour of a person's nose.)