It’s as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results. I’ve never been as unimpressed by scientists as I have been in the past five years or so.
“We’ve created something so dangerous that we couldn’t possibly live with the moral burden of knowing that the wrong people (which are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it.”
Meanwhile, anyone can hop on an online journal and for a nominal fee read articles describing how to genetically engineer deadly viruses, how to synthesize poisons, and all kinds of other stuff that is far more dangerous than what these LARPers have cooked up.
> It’s as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results. I’ve never been as unimpressed by scientists as I have been in the past five years or so.
This is absolutely nothing new. With experimental work, it's not uncommon for a lab to develop a new technique and omit small but important details to give themselves a competitive advantage. Similarly, in the simulation/modelling space it's been common for years for researchers not to publish their research software. There's been a lot of lobbying against that by groups such as the Software Sustainability Institute and Research Software Engineer organisations like RSE UK and RSE US, but there are a lot of researchers who simply think they shouldn't have to do it, even when publicly funded.
I think it's more likely they are terrified of someone making a prompt that gets the model to say something racist or problematic (which shouldn't be too hard), and the backlash they could receive as a result of that.
Wow, this is needlessly antagonistic. Given the emergence of online communities that bond over conspiracy theories and racist philosophies in the 20th century, it's not hard to imagine the consequences of widely disseminating an LLM that people in those communities could use to propagate and further discredited (for example, racial) scientific theories for bad ends.
We can debate whether that's good or not, but ultimately they're the ones publishing it, and in some very small way they're responsible for some of its ends. At least that's how I see their interest in disseminating the use of the LLM through a responsible framework.
> It’s as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results.
Even if I give the comment a lot of wiggle room (such as changing "every" to "many"), I don't think even a watered-down version of this hypothesis passes Occam's razor. There are more plausible explanations, including (1) genuine concern by the authors; (2) academic pressures and constraints; (3) reputational concerns; (4) self-interest in embargoing the underlying data so they have time to be the first to write it up. To my eye, none of these fit the category of "getting high on power".
Also, patience is warranted. We haven't seen what these researchers are planning to release -- and from what I can tell, they haven't said yet. At the moment I see "Repositories (coming soon)" on their GitHub page.
Scientists have always been generally self-interested, amoral cowards, just like every other person. They aren't a unique or higher form of human.
> “We’ve created something so dangerous that we couldn’t possibly live with the moral burden of knowing that the wrong people (which are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it.”
Or, how about: "If we release this as is, then some people will intentionally misuse it and create a lot of bad press for us. Then our project will get shut down and we'll lose our jobs."
Be careful assuming it is a power trip when it might be a fear trip.
I've never been as unimpressed by society as I have been in the last 5 years or so.