another "ai is inherently evil" take coming from the "ai is inherently evil" blog.
i agree that specific implementations of a technology (claude, gemini, qwen) are never neutral, but the tech itself (llms as a concept) is neutral: you can implement it any way you want. you could train an llm on diverse data, tune it for anti-fascist opinions, and run it on solar power and recycled hardware to keep it close to carbon neutral. the reason nobody is really doing that is good old wealth inequality. as long as only big corporations can afford to develop and deploy llms (or any other tech), it will be biased to benefit them. that's why it's so important to democratize it.
and for the open source part: the fact that it started as a libertarian movement doesn't mean it can't also be socialist. it goes against the capitalist norms of exclusive property rights (including ip) and profit at all costs. sharing the product of your labor with everyone for free is one of the biggest things you can do to help. it's like the online equivalent of putting food in the community fridge.
open llms let you fine-tune them to add the missing, under-represented perspectives. you can run them locally, so the climate impact is only whatever powers your own machine. you can analyze them in depth to reveal biases the devs never noticed or don't want you to see. none of that is possible with closed source. the right thing to do is not to avoid ai at all costs but to do everything you can to make it good. your skills and hardware access are a privilege. use it.