If I've copied someone else's copyrighted work on my Xerox machine and then give you the machine, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I hand it over, that's another story. The issue here isn't whether an LLM can produce the work when I provide it as an input; it's whether the work is baked in at the time of distribution, giving the model the ability to keep producing it even for someone who never had access to the work to supply it in the first place.
To be clear, I don't have any particular insight into whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.
You can train an LLM on completely clean data (creative commons and legally licensed text), and at inference time someone can just paste a whole article or chapter into the model and have full access to regenerate it however they like.
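To make the distinction concrete, here's a minimal sketch of the two cases. Everything here is hypothetical: complete() stands in for any LLM inference call and isn't a real API.

    # Hypothetical stand-in for running an LLM on a prompt.
    def complete(prompt: str) -> str:
        # A real implementation would run model inference here.
        return f"<model output for: {prompt[:40]}...>"

    article = "Full text of some copyrighted article..."

    # Case 1: the user supplies the work at inference time. Any model can
    # "reproduce" it, even one trained only on clean, licensed data,
    # because the content arrives in the input, not the weights.
    print(complete(f"Rewrite the following however you like:\n{article}"))

    # Case 2: the user does NOT have the work. Output like this is only
    # possible if the text was memorized into the weights during training,
    # i.e. baked in at the time the model was distributed.
    print(complete("Reproduce the full text of <some article> from memory"))

The legal question in the comment above turns on case 2; case 1 works with any model and says nothing about what was distributed.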