I think media outlets think way too highly of their contribution to AI.
Had they never existed, it would likely not have made a dent in AI development, much like believing that had they been twice as productive, it would likely not have made a dent in the quality of LLMs either.
Isn't non-LLM-generated text becoming more valuable for training as the web at large is flooded with slop?
Preventing new human-generated text from being used by AI firms (without consent) seems like a valid strategy.
How do you think those models get trained? You can only get so far with Wikipedia, Reddit, and non-fiction works like books and academic papers.