I think citations are an insufficient metric to judge these things on. My experience in writing a paper is that I have formed a well defined model of the world, such that when I write the introduction, I have a series of clear concepts that I use to ground the work. When it comes to the citations to back these ideas, I often associate a person rather than a particular paper, then search for an appropriate paper by that person to cite. That suggests that other means for creating that association - talks, posters, even just conversations - may have significant influence. That in turn suggests a variety of personality/community influences that might drive “scientific progress” as measured by citation.
Very cool to see Ortega on the frontpage. He was a fine thinker - phenomenally erudite and connected to his contemporary philosophers, but also eminently readable. He is not technical, rarely uses neologisms, and writes in an easy to digest "stream of thought" style which resembles a lecture (I believe he repackaged his writings into lectures, and vice versa).
I can recommend two of his works:
- The Revolt of the Masses (mentioned in the article), where he analyzes the problems of industrial mass societies, the loss of self, and the ensuing threats to liberal democracies. He posits the concept of the "mass individual" (hombre masa), a man who is born into industrial society but takes for granted the progress - technical and political - that he enjoys, does not enquire about the origins of said progress or his relationship to it, and therefore becomes malleable to illiberal rhetoric. It was written around 1930, and in many ways the book foresees the forces that would lead to WWII. The book was an international success in its day, and it remains eerily current.
- His Meditations on Technics lays out a rather simple, albeit accurate, philosophy of technology. He traces the history of technology development, from the accidental (e.g., fire), to the artisanal, to the age of machines (where the technologist is effectively building technology that builds technology). He also describes the dual-stage cycle in which humans alternate between self-absorption (ensimismamiento), in which they reflect on their discomforts, and alteration, in which they set out to transform the world as best they can. The ideas may not be life-changing, but it's one of those books that neatly models and settles things you already intuited. Some of Ortega's reflections often come to mind when I'm looking for meaning in my projects. It might be of interest to other HNers!
Some other hypotheses:
- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis
- Matthew effect - extends the "successful people are successful" observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their own labs and get their name on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...
If I were allowed to speculate, I would make a couple of observations. The first is that resources play a huge role in research, so the overall direction of progress is influenced more by economics than by any particular group. For example, every component of a modern smartphone got hyper-optimized via massive capital injections. The second is that this is the real world, and thus some kind of power law likely applies. I don't know the exact numbers, but my expectation is that the top 1% of researchers produce far more output than the bottom 25%.
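To make that power-law intuition concrete, here's a minimal sketch assuming a Pareto-style (heavy-tailed) distribution of per-researcher output; the shape parameter and sample size are made-up assumptions for illustration, not empirical estimates:

```python
# Illustrative only: sample per-researcher "output" from a heavy-tailed
# Pareto distribution and compare the share produced by the top 1% of
# researchers against the bottom 25%. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
output = np.sort(rng.pareto(1.5, size=100_000))  # heavy-tailed output per researcher

total = output.sum()
top_1_share = output[-len(output) // 100:].sum() / total   # top 1% of researchers
bottom_25_share = output[:len(output) // 4].sum() / total  # bottom 25% of researchers
print(f"top 1% share: {top_1_share:.1%}, bottom 25% share: {bottom_25_share:.1%}")
```

Under a distribution like this, the top 1% end up with a disproportionately large slice of total output, which is the kind of skew I'd expect, even if the exact numbers are anyone's guess.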
Now that the internet exists, it's harder to reason about how hard a breakthrough was to make. Before information was everywhere instantly, it was common for discoveries to be made concurrently, separated by years, but genuinely without either scientist knowing of the other's work.
That distance between when the two (or more) similar discoveries happened gives insight into how difficult it was. Separated by years, and it must have been very difficult. Separated by months or days, and it is likely an obvious conclusion from a previous discovery. Just a race to publish at that point.
The way modern science works is:
1. You let 3 PhDs chew on the problem, make sure you are corresponding author.
2. You get your best friend and the lab's "golden boy" and publish a breakthrough paper.
3. Cite “your” previous work.
Plausible. General Relativity was conceived by an extraordinary genius, but the perihelion precession of Mercury was measured by dozens of painstaking but otherwise unexceptional people. Without that fantastically accurate measurement, GR would never have been accepted as a valid theory.
Yeah, in the same way that CEOs and founders are given all the credit for their company's breakthroughs, scientists who effectively package a collection of small breakthroughs are given all the credit for each individual advancement that led to it. It makes sense though: humans prioritize the whole over the pieces, the label over the contents.
> Even minor papers by the most eminent scientists are cited much more than papers by relatively unknown scientists
I wonder if this is because a paper citing an eminent name is likely to be taken more seriously than one citing a less famous work that might actually be more relevant.
Interesting, I didn't know there was such a thing (despite having read quite a lot of Ortega y Gasset).
Compare this with paradigm shifts in T. S. Kuhn's The Structure of Scientific Revolutions:
https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...
Related:
Ortega hypothesis - https://news.ycombinator.com/item?id=20247092 - June 2019 (1 comment)
> Ortega most likely would have disagreed with the hypothesis that has been named after him, as he held not that scientific progress is driven mainly by the accumulation of small works by mediocrities, but that scientific geniuses create a framework within which intellectually commonplace people can work successfully
This is hilarious
I'm very curious if anyone has tried to control for the natural hierarchies which form in Academia. e.g. A researcher who rises to the top of a funding collaboration will have a disproportionate number of citations due to their influence on funding flows. Likewise, those who influence the acceptances/reviewers at major conferences will naturally attract more citations of their work either by featuring it over other work or correctly predicting where the field was heading based on the paper flows.
Citations are the wrong metric. The correct metric to care about is human comfort.
Groundbreaking advances are usually giant leaps, and it takes time for researchers to get comfortable with them. It is in precisely this sense that the numerous contributions of the masses are useful, because their joint combination allows future geniuses to more readily accept these advances, hence giving them more "brain space" to pursue new advances.
One influential paper does not constitute an accepted theory. You need redundancy in your system. Each paper of the masses produces yet another brick for the metaphorical building.
This sounds like the concept of ‘normal science’ in paradigm theory.
I wonder: where has this hypothesis been operationalized and turned into a testable prediction (forward-looking or retroactive)?
There are plenty of examples on both sides. There's no need for one to be true and the other false. Geniuses get recognition, so it makes sense for the smurfing contributors to also get a nod.
AlexNet, for example, was only possible because of the algorithms that had been developed, but also because of the availability of GPUs for highly parallel processing and, importantly, the labelled ImageNet data.
It's probably like venture capital. There are many scientists who test many hypotheses. Many are bad at generating hypotheses or running tests. Some are good at one or the other. Some are good at both and just happen to pick the ones that don't work. Some are good at all.
But you can't tell ahead of time which one is which. Maybe you can shift the distribution, but often the pathological cases you exclude are precisely the ones you wanted not to exclude (your Karikos get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.
Like searching an unsorted list of n items for a number. You kind of need to test all the numbers until you find yours. The search cost is just the cost. You can't uncost it by just picking the right index; that's not a meaningful statement.
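As a toy illustration of that analogy (a sketch, not anything from the article): with an unsorted list you simply have to pay the linear-scan cost, because there is no way to "pick the right index" in advance.

```python
def linear_search(items, target):
    """Scan an unsorted list until the target is found; worst case touches every element."""
    for i, value in enumerate(items):
        if value == target:
            return i  # found it, but only after paying for every comparison so far
    return -1  # not present: we had to look at all n items to learn that

# The cost is the scan itself; without sorting or an index there is no shortcut.
print(linear_search([42, 7, 19, 3, 88], 3))  # -> 3
```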
This is interesting but how could we really determine the answer? It seems very difficult not to get pulled into my own opinions about how it "must work".
> the opposing "Newton hypothesis", which says that scientific progress is mostly the work of a relatively small number of great scientists (after Isaac Newton's statement that he "stood on the shoulders of giants")
I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"
...Not quite as majestic, but hey, if it gets the job done...
I am instantly skeptical of hypotheses that sound nice and egalitarian.
Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.
Chronologies point toward a working theory of advancing science, which is the subject of Ortega's contention about mediocre scholars working on accumulating citations, footnotes, etc. For a proper understanding of technical pieces, Cal Newport's concept of deep work is essential.
Smart people know how to aggregate and apply relevant data that others worked to bring to fruition.
Naturally, the science-of-science study supporting the contravening "Newton hypothesis" is pseudoscientific piffle.
My scientific study of the science of science study can prove this. Arxiv preprint forthcoming.
> The most important papers mostly cite other important papers by a small number of outstanding scientists
The question here is: yes, most major accomplishments cite other "giants," but how many papers have the authors read, and have they cited everything that influenced them?
Or do people tend to cite the most pivotal nodes on the knowledge graph which are themselves pivotal nodes on the knowledge graph while ignoring the minor nodes that contributed to making the insight possible?
Lastly -- minor inputs can be hard to cite. What if you read a paper a year ago that planted an interesting idea in your head but it wasn't conclusive, or gave you a little tidbit of information that nudged your thinking in a certain direction? You might not even remember, or the information might be background enough that it's only alluded to or indirectly contributes to the final product. Thus it doesn't get a citation. But could the final product have happened without a large number of these inputs?
I believe that this hypothesis is wrong.
More specifically, I believe that scientific research winds up dominated by groups who are all chasing the same circle of popular ideas. These groups start because some initial success produced results. This made a small number of scientists achieve prominence. Which makes their opinion important for the advancement of other scientists. Their goodwill and recommendations will help you get grants, tenure, and so on.
But once the initial ideas are played out, there is little prospect of further real progress. Indeed, that progress usually doesn't come until someone outside of the group pursues a new idea. At which point the work of those in the existing group will turn out to have had essentially no value.
As evidence for my belief, I point to https://www.chemistryworld.com/news/science-really-does-adva.... It documents that Planck's principle is real. Fairly regularly, people who become star researchers wind up holding back further progress until they die. After they die, new people can come into the field, pursuing new ideas, and progress resumes. And so it is that progress advances one funeral at a time.
As a practical example, look at the discovery of blue LEDs. There was a lot of work on this in the 70s and 80s. Everyone knew how important it would be. A lot of money went into the field. Armies of researchers were studying compounds like zinc selenide. The received wisdom was that gallium nitride was a dead end. What was the sum contribution of these armies of researchers to the invention of blue LEDs? To convince Shuji Nakamura that if zinc selenide was the right approach, he had no hope of competing. So he went into gallium nitride instead. The rest is history, and the work of the existing field is lost.
Let's take an example that is still going on. Physicists invented string theory around 50 years ago. The problems with the approach are summed up in the quote often attributed to Feynman: "String theorists don't make predictions, they make excuses." To date, string theory has yet to produce a single prediction that was verified by experiment. And yet there are thousands of physicists working in the field. As interesting as they find their research, it is unlikely that any of their work will wind up contributing anything to whatever improved foundation is eventually discovered for physics.
Here is a tragic example. Alzheimer's is a terrible disease. Very large amounts of money have gone into research for a treatment. The NIH by itself spends around $4 billion per year on this, on top of large investments from the pharmaceutical industry. Several decades ago, the amyloid beta hypothesis rose to prominence. There is indeed a strong correlation between amyloid beta plaques and Alzheimer's, and there are plausible mechanisms by which amyloid beta could cause brain damage.
Several decades of research and many failed drug trials support the following conclusion. There are many ways to prevent the buildup of amyloid beta plaques. These cure Alzheimer's in the mouse model that is widely used in research. But these drugs produce no clinical improvement in human symptoms. (Yes, even Aduhelm, which was controversially approved by the FDA in 2021, produces no improvement in human symptoms.) The widespread desire for results has created fertile ground for fraudsters, like Marc Tessier-Lavigne, whose fraud propelled him to becoming President of Stanford in 2016.
After widespread criticism from outside of the field, there is now some research into alternate hypotheses about the root causes of Alzheimer's. I personally think that there is promise in research suggesting that it is caused by damage done by viruses that get into the brain, and the amyloid beta plaques are left by our immune response to those viruses. But regardless of what hypothesis eventually proves to be correct, it seems extremely unlikely to me that the amyloid beta hypothesis will prove correct in the long run. (Cognitive dissonance keeps those currently in the field from drawing that conclusion though...)
We have spent tens of billions of dollars over several decades on Alzheimer's research. What is the future scientific value of this research? My bet is that it is destined for the garbage, except as a cautionary tale about how much damage can be done when a scientific field becomes unwilling to question its unproven opinions.
> According to Ortega, science is mostly the work of geniuses, and geniuses mostly build on each other's work, but in some fields there is a real need for systematic laboratory work that could be done by almost anyone.
That seems correct to me. Imagine having a hypothesis named after you that a) you disagree with, and b) seems fairly doubtful at best!
I was disappointed to read he didn't name it after himself in an ironic display of humility.
("Ortega most likely would have disagreed with the hypothesis that has been named after him...")
People in the humanities still haven't understood that pretty much everything in their fields is never all black or all white.
It’s a bizarre debate when it’s glaringly obvious that small contributions matter and big contributions matter as well.
But which contributes more, they ask? Who gives a shit, really?
Are Ortega and Newton mutually exclusive? Isn't the case much more likely that both:
- Significant advances by individuals or small groups (the Newtons, Einsteins, or Gausses of the world) enable narrowly specialized, incremental work by "average" scientists, which elaborates upon the Great Advancement...
- ... And then those small achievements form the body of work upon which the next Great Advancement can be built?
Our potential to contribute -- even for a Gauss or a Feynman or whomever -- is limited by our time on Earth. We have tools to cheat death a bit when it comes to knowledge, chief among them writing systems, libraries of knowledge, and the compounding effects of decades or centuries of study.
A good example here might be Fermat's last theorem. Everyone who's dipped their toes in math even at an undergraduate level will have at least heard about it, and about Fermat. People interested in the problem might well know that it was proven by Andrew Wiles, who -- almost no matter what else he does in life -- will probably be remembered mainly as "that guy who proved Fermat's last theorem." He'll go down in history (though likely not as well-known as Fermat himself).
But who's going to remember all the people along the way who failed to prove Fermat? There have been hundreds of serious attempts over the three-and-a-half centuries that the theorem has been around, and I'm certain Wiles referred to their work while developing his own proof, if only to figure out what doesn't work.
---
There's another part to this, and that's that as our understanding of the world grows, Great Advancements will be ever more specialized, and likely further and further removed from common knowledge.
We've gone from a great advancement being something as fundamental as positing a definition of pi, or the Pythagorean theorem in Classical Greece; to identifying the slightly more abstract, but still intuitive idea that white light is a combination of all other colours on the visible spectrum and that the right piece of glass can refract it back into its "components" during the Renaissance; to the fundamentally less intuitive but no less groundbreaking idea of atomic orbitals in the early 20th century.
The Great Advancements we're making now, I struggle to understand the implications of even as a technical person. What would a memristor really do? What do we do with the knowledge that gravity travels in waves? It's great to have solved sphere packing in some two-digit number of dimensions... but I'll have to take you at your word that it helps optimize cellular data network topology.
The amount of context it takes to understand these things requires a lifetime of dedicated, focused research, and that's to say nothing of what it takes to find applications for this knowledge. And when those discoveries are made and their applications are found, they're just so abstract, so far removed from the day-to-day life of most people outside of that specialization, that it's difficult to even explain why they matter, no matter how much of a quantum leap they represent in a given field.
Thomas Kuhn has entered the chat
I would say modern science is more of a team effort than it was before, since nowadays the body of knowledge is too large and overwhelming for one person. It's almost impossible to be a solo genius nowadays. But both "Newton" and "mediocre" scientists are still needed. As an analogy, in software we see a similar pattern: programs used to be small, so one person could write an entire game or OS, but today that's almost impossible, so today's programs are usually written by large numbers of average developers. But there are still a few exceptional people who work on key algorithms or architecture. So they are still needed.
To me, the greatest contribution of mediocre scientists is that they teach their field to the next generation. To keep science going forward, you need enough people who understand the field to generate a sufficient probability of anyone putting together the pieces of the next major discovery. That's the sense in which the numbers game is more important than the genius factor.
Conversely, entire branches of knowledge can be lost if not enough people are working in the area to maintain a common ground of understanding.