For clarification: poisoning and slop are different concepts. Slop is the output of AI. Poisoning is crafting your content (which may otherwise be perfectly good content) so that it fucks up the internals of an LLM trained on it. The classic example is the Nightshade attack on image generators.
One could imagine an open source project that doesn't want to be ingested by an LLM. They could try to put that in the license, but of course the license won't be obeyed. Alternately, suppose they could alter the code such that the OSS project itself remains high quality, but any coding LLM trained on it starts emitting code full of SQL injection vulnerabilities (for instance), or just bogus uncompilable stuff. Then the LLM authors would suddenly have a reason to start respecting the license and excluding the code from their training set.