> If I am unable to convince you to stop meticulously training the tools of the oppressor (for a fee!) then I just ask you do so quietly.
I'm kind of fascinated by how AI has become such a culture war topic, with hyperbole like "tools of the oppressor."
It's equally fascinating how little these commenters understand about how LLMs work. Using an LLM for inference (what you do when you use Claude Code) does not train it. The model does not learn from your code or fold it into its weights while you're running inference. I know that breaks the "training the tools of the oppressor" narrative, which is probably why it's always ignored. And when it isn't ignored, the next step is to decry that the LLM companies are lying and stealing everyone's code despite saying they don't.
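To make the distinction concrete, here's a toy PyTorch sketch (a stand-in linear layer, not any vendor's actual model or serving stack) showing that a forward pass leaves the weights untouched; only an explicit optimizer step changes them:

    import torch

    # A toy "model": one linear layer standing in for an LLM.
    model = torch.nn.Linear(8, 8)
    before = model.weight.detach().clone()

    # Inference: a forward pass with gradients disabled.
    with torch.no_grad():
        _ = model(torch.randn(1, 8))
    assert torch.equal(model.weight, before)  # weights unchanged

    # Training: loss -> backward -> optimizer step mutates the weights.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = model(torch.randn(1, 8)).pow(2).mean()
    loss.backward()
    opt.step()
    assert not torch.equal(model.weight, before)  # weights changed

Whether a provider later trains on logged conversations is a separate data-retention policy question; inference itself does not update the model.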
I understand how these LLMs work.
I find it hard to believe there are people who know these companies stole the entire creative output of humanity and egregiously, continually scrape the internet, yet think those same companies are, for some reason, ignoring the data you voluntarily give them.
> I know that breaks the "training the tools of the oppressor" narrative
"Narrative"? This is just reality. In their own words:
> The awards to Anthropic, Google, OpenAI, and xAI – each with a $200M ceiling – will enable the Department to leverage the technology and talent of U.S. frontier AI companies to develop agentic AI workflows across a variety of mission areas. Establishing these partnerships will broaden DoD use of and experience in frontier AI capabilities and increase the ability of these companies to understand and address critical national security needs with the most advanced AI capabilities U.S. industry has to offer. The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries [0]
Is 'warfighting adversaries' some convoluted code for allowing Aurornis to 'see a 1337x in productivity'?
Or perhaps you are a wealthy Westerner of a racial and sexual majority, and as such have felt little by way of oppression from this tech?
In such a case I would encourage you to develop empathy, or at least sympathy.
> Using an LLM for inference ... does not train the LLM.
In their own words:
> One of the most useful and promising features of AI models is that they can improve over time. We continuously improve our models through research breakthroughs as well as exposure to real-world problems and data. When you share your content with us, it helps our models become more accurate and better at solving your specific problems and it also helps improve their general capabilities and safety. We do not use your content to market our services or create advertising profiles of you—we use it to make our models more helpful. ChatGPT, for instance, improves by further training on the conversations people have with it, unless you opt out.
[0] https://www.ai.mil/latest/news-press/pr-view/article/4242822...
[1] https://help.openai.com/en/articles/5722486-how-your-data-is...
We are not talking about inference.
The prompts and responses are used as training data. Even if your provider allows you to opt out, they are still tracking your usage telemetry and using it to gauge performance. If you don't own the storage and compute, then you are training the tools that will be used to oppress you.
Incredibly naive comment.