If what you’re trying to do is publish prepared images of yourself that won’t be facially recognized as you, then the answer is “not very much at all, actually”; see https://sandlab.cs.uchicago.edu/fawkes/. Adversarially prepared images can still look entirely like you, with all the facial-recognition-busting data encoded at an almost steganographic level relative to ordinary human perception.
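Roughly, this kind of cloaking adds a tiny, norm-bounded perturbation that drags a face-embedding model's output away from your real identity while leaving the pixels essentially unchanged to the eye. Here's a minimal PyTorch sketch of that general idea, not Fawkes' actual method (which, as I understand it, steers the embedding toward a decoy identity); `embed` is a hypothetical stand-in for any differentiable face-embedding network:

    import torch

    def cloak(image, embed, eps=0.03, steps=50, lr=0.005):
        # image: float tensor in [0, 1], shape (1, 3, H, W)
        # embed: placeholder for a differentiable face-embedding model
        original = embed(image).detach()
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            # push the embedding away from the original identity
            loss = -torch.norm(embed(image + delta) - original)
            loss.backward()
            with torch.no_grad():
                delta -= lr * delta.grad.sign()                    # signed gradient step (PGD-style)
                delta.clamp_(-eps, eps)                            # keep the change imperceptible
                delta.copy_((image + delta).clamp(0, 1) - image)   # keep pixels in valid range
            delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()

The small eps bound is what keeps the change below what humans notice, while the embedding can still move a long way in the model's feature space.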
Do you know if this is still being worked on? The last "News" post on the linked site is from 2022. Looks interesting.
My understanding is that this (interesting) project has been abandoned, and that face recognition models have since been trained to defend against it.