It feels like a Markov chain isn't adversarial enough.
Maybe you could use an open-weights model, on the assumption that all LLMs converge on similar representations: run beam search with inverted probability and a repetition penalty, use plain GPT-2/LLaMA output with amplified activations to try to bork the projection matrices, write pages and pages of phonetically faux English text to affect how the BPE tokenizer gets fitted, or do anything else more sophisticated and deliberate than random noise.
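For concreteness, here's a minimal sketch of the inverted-probability idea, assuming GPT-2 via Hugging Face transformers as the open-weights model. Inverted top-k sampling stands in for a full inverted beam search; the model name, `top_k`, and penalty values are illustrative assumptions, not anything the approach requires:

```python
# Each step keeps the tokens the model still considers remotely valid (top_k)
# and then prefers the *least* likely among them, so the output stays
# token-valid but is statistically anti-coherent.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def inverted_sample(prompt: str, max_new_tokens: int = 200,
                    top_k: int = 500, repetition_penalty: float = 1.3) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        # Keep the top_k candidates, then invert their scores so the least
        # likely of the plausible tokens become the most likely picks.
        topk = torch.topk(logits, top_k)
        scores = -topk.values
        # Repetition penalty applied in the inverted space, so we don't just
        # loop on the same handful of improbable tokens.
        seen = torch.isin(topk.indices, ids[0])
        scores[seen] -= math.log(repetition_penalty)
        next_id = topk.indices[torch.multinomial(torch.softmax(scores, -1), 1)]
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)

print(inverted_sample("The curious thing about garden paths is"))
```

The idea is that every emitted token is individually plausible English, but the sequence-level statistics sit about as far from the model's learned distribution as the top-k constraint allows.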
All of these would take more resources than a Markov chain, but if the scraper is smart about ignoring such link traps, a periodically rotated selection of adversarial examples might be even better.
Nightshade had comparatively great success, even setting aside that its perturbations aren't that robust to rescaling. LLM training corpora are filtered very coarsely and take all they can get, unlike the more motivated attacker in Nightshade's threat model who is fine-tuning on someone's particular style. Text is also quite hard to alter without a human noticing, apart from annoying zero-width Unicode that is easily stripped, so there's no pretense of preserving legibility; I think it could work very well if seriously attempted.
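To illustrate how cheaply the zero-width trick falls, here's a sketch of the stripping pass a scraper's cleaning stage could run; the code-point list is just the common zero-width characters, not exhaustive:

```python
# One regex pass removes the usual zero-width suspects; a real scraper
# pipeline would likely also apply NFKC normalisation on top of this.
import re

ZERO_WIDTH = re.compile("[\u200b\u200c\u200d\u2060\ufeff]")

def strip_zero_width(text: str) -> str:
    return ZERO_WIDTH.sub("", text)

print(strip_zero_width("po\u200bison\u200ced te\u200dxt"))  # -> "poisoned text"
```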