>Historical texts contain racism, antisemitism, misogyny, imperialist views. The models will reproduce these views because they're in the training data. This isn't a flaw, but a crucial feature—understanding how such views were articulated and normalized is crucial to understanding how they took hold.
Yes!
>We're developing a responsible access framework that makes models available to researchers for scholarly purposes while preventing misuse.
Noooooo!
So is the model going to be publicly available, just like those dangerous pre-1913 texts, or not?
It’s as if every researcher in this field is getting high on the little power they wield by denying others access to their results. I’ve never been as unimpressed by scientists as I have been in the past five years or so.
“We’ve created something so dangerous that we couldn’t possibly live with the moral burden of knowing that the wrong people (which are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it.”
Meanwhile, anyone can hop on an online journal and for a nominal fee read articles describing how to genetically engineer deadly viruses, how to synthesize poisons, and all kinds of other stuff that is far more dangerous than what these LARPers have cooked up.
> So is the model going to be publicly available, just like those dangerous pre-1913 texts, or not?
1. This implies a false equivalence. Releasing a new interactive AI model differs in significant, practical ways from the status quo. Yes, the historical texts are already out there; the rational thing to do is weigh the impact of introducing a new artifact on its own terms.
2. Some people have a tendency to say "release everything" as if open-source software were equivalent to open-weights models. It isn't; the two are different enough for the distinction to matter.
3. Rhetorically, the quote above comes across as a pressure tactic. When I hear "are you going to do this or not?" I cringe.
4. The quote above feels presumptive to me, as if the commenter is owed something from the history-llms project.
5. People are rightfully bothered that Big Tech has vacuumed up public domain and even private information and turned it into a profit center. But we're talking about a university project with (let's be charitable) legitimate concerns about misuse.
6. There seems to be a lack of curiosity at play. I'd much rather see people asking, e.g., "What factors are influencing your decision about publishing your underlying models?"
7. There are people who have locked in a view that AI-safety perspectives are categorically invalid. Accordingly, they have an almost knee-jerk reaction against even talk of "let's think about the implications before we release this."
8. This one might explain and underlie most of the points above. I see signs of a deeper problem at work here. Hiding behind convenient oversimplifications to justify what one wants does not make a sound moral argument; it is motivated reasoning, a.k.a. psychological justification.
fully understand you. we'd like to provide access but also guard against misrepresentation of our project's goals by people pointing to, e.g., racist generations. if you have thoughts on how we should do that, perhaps you could reach out at [email protected]? thanks in advance!