> So is the model going to be publicly available, just like those dangerous pre-1913 texts, or not?
1. This implies a false equivalence. Releasing a new interactive AI model differs in significant, practical ways from the status quo of already-released historical texts. The rational thing to do is weigh the marginal impact of introducing it.
2. Some people have a tendency to say "release everything" as if open-source software were equivalent to open-weights models. They aren't; the differences are large enough to matter.
3. Rhetorically, the quote comes across as a pressure tactic. When I hear "are you going to do this or not?" I cringe.
4. The quote above feels presumptive to me, as if the commenter is owed something from the history-llms project.
5. People are rightfully bothered that Big Tech has vacuumed up public domain and even private information and turned it into a profit center. But we're talking about a university project with (let's be charitable) legitimate concerns about misuse.
6. There seems to be a lack of curiosity in play. I'd much rather see people asking e.g. "What factors are influencing your decision about publishing your underlying models?"
7. There are people who have locked in a view that AI-safety perspectives are categorically invalid. Accordingly, they have an almost knee-jerk reaction against even the suggestion of "let's think about the implications before we release this."
8. This last point may explain and underlie most of the others above. I see signs of a deeper problem at work here. Hiding behind convenient oversimplifications to justify what one already wants is not a sound moral argument; it is motivated reasoning, a.k.a. psychological justification.