What's more alarming isn't that AI is limited to existing domain data; it's that when people push it beyond those known data points, it confidently hallucinates nonsense.
I wasn’t aware of the map empire, thank you!
Taking away some complexity comes at a price, and for some people it's hard to see how the practicality gained outweighs that cost.
As another commenter mentioned, the point of Borges' story is that a perfectly detailed map is rather useless, because you need abstraction (it's a recurring theme in some of his other stories, like The Library of Babel and Funes the Memorious). LLMs are likely already able to exhaust the conceptual space of any given field, but some judgement is still going to be required about what to pursue. In biology and other fields this problem is even bigger, because experimentation is so difficult and expensive.
The process of judgement and resource allocation will still be human for quite a while, but it's quite likely some humans will outsource their responsibility to AI to cut corners.
I got stuck for a minute on the caption "Harry Beck’s 1933 map of the London Underground" to: https://substackcdn.com/image/fetch/$s_!VsWm!,f_auto,q_auto:...
which contains Heathrow Terminals 1, 2, 3, 4 & 5 on the Piccadilly line. For about 15 seconds I imagined a world where Heathrow has had 5 terminals since 1933, then I read the map itself: "Recreated by Arthurs D". Phew.
Awesome example of improving information conveyance through abstractions though!
> AI could repeat this pattern at a larger scale — generating faster results within the existing paradigm, while the structural conditions for disruptive science remain unchanged or worsen.
Worsen. LLMs discard, lose, and mix data in the statistical "compression" that builds their vector-space model. Over time, successive feedback will be like creating a JPEG from a JPEG that was itself created from another JPEG, round after round through this lossy loop (see the sketch below).

Those faster (but worse) results will degrade genuinely valuable data and science at a speed/rate that systematically, regularly discards well-done science.
IMHO.
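To make the analogy concrete, here's a minimal sketch of that generational-loss loop (my own illustration, not from the article), using Pillow and NumPy. The synthetic gradient image, quality=75, and 20 generations are all arbitrary choices:

    import io
    import numpy as np
    from PIL import Image

    # Synthetic test image: a horizontal gradient plus a little noise.
    rng = np.random.default_rng(0)
    arr = np.clip(np.linspace(0, 255, 256)[None, :]
                  + rng.normal(0, 20, (256, 256)), 0, 255)
    original = Image.fromarray(arr.astype(np.uint8))

    img = original
    for generation in range(1, 21):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=75)  # lossy re-encode
        buf.seek(0)
        img = Image.open(buf).convert("L")
        # Mean absolute drift from the very first image.
        drift = np.abs(np.asarray(img, float)
                       - np.asarray(original, float)).mean()
        if generation % 5 == 0:
            print(f"generation {generation:2d}: mean abs drift = {drift:.2f}")

One caveat that arguably cuts both ways: JPEG at a fixed quality setting roughly converges after a few generations, whereas the worry with training on model-generated output is that nothing guarantees a stable fixed point.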
The piano accompaniment and human narration are a nice touch.
My hot take is that mathematical and scientific 'soundness' is ultimately more of an aesthetic preference than an objective quality of reality. Good science makes sense to humans, and 'what makes sense' is ultimately what fits satisfyingly in your brain. There's nothing inherently wrong with an enormous epicycle model of reality from the perspective of the God of Math; so long as your formal system is consistent and expressive enough to represent everything then meh, it's a model. But the model that humans want to elevate to canonical status has far stricter requirements, and ultimately it's the one which the majority of sufficiently credentialed tastemakers decide is 'best'. Parsimony works well in physics where you have closed form expressions for all your stuff, but the biology cases are so much messier because it turns out that sometimes reality isn't parsimonious. All this to say that good science is a matter of taste, and while AI can gist the broad strokes of taste I've yet to see it take on the role of genuine tastemaker.
meh. I would be happier with this article if it demonstrated familiarity with the source material. "Del Rigor en la Ciencia" (On Exactitude in Science) was Borges' (hilarious) investigation into Korzybski's General Semantics, a fact that was surprisingly absent from the text. Borges implied "the map is not the territory," but Korzybski actually came out and said it a decade or so before Borges wrote the story in question. Understanding the themes of Borges' story is greatly informed by a passing familiarity with Korzybski's work. WorldCat tells me the 4th edition of "Science and Sanity" is widely held by libraries, and if you're interested in the assertions made in this article, you might enjoy reading (at least parts of) it.
https://search.worldcat.org/title/369632
The author completely missed the point Borges (and Korzybski) made about the utility of maps. Maps (according to both) are abstractions which allow the user to ignore irrelevant aspects of reality so that other, more interesting facets come into sharper focus. This might be why Beck's London Tube map is so well regarded: it allows the user to easily ignore aspects that are not germane to the task of deciding where and when to get on and off the tube.
But is a scientific paradigm like a map? Certainly it is an abstraction, if we take Kuhn's definition. If you're interested, I can recommend both "The Structure of Scientific Revolutions" and "The Essential Tension : Selected Studies in Scientific Tradition and Change" by Kuhn.
https://search.worldcat.org/title/4660423077
https://search.worldcat.org/title/3034084
Calling scientific paradigms maps isn't wrong, per se, but it does create more of a meta-metaphor, and a weak one at that.
Also. No. Maxwell did not replace a patchwork of equations with four short ones. That was Heaviside.
https://en.wikipedia.org/wiki/Oliver_Heaviside
Something we don't mention in polite society these days is that Maxwell proposed electromagnetic waves as propagating through an aether:
https://en.wikisource.org/wiki/A_Treatise_on_Electricity_and...
If you're going to talk about new paradigms, Maxwell is a great example, but his story is not complete without mentioning Heaviside, Michelson and Morley.
Also... I bristle at the phrase "Hypernormal Science." It's also introduced without definition or reference. Collins et al. describe it as distinct from (though seemingly related to) the word "hypernormal" as coined by Yurchak in "Everything Was Forever, Until It Was No More."
https://direct.mit.edu/posc/article-abstract/31/2/262/112751...
https://search.worldcat.org/title/1572419463
Or if you're short on time, you can get an entertaining (though not as enlightening) description from Adam Curtis' 2016 documentary HyperNormalization. You won't come away from it with a better understanding of AI, General Semantics or Popperian falsifiability, but it has a striking visual style and a very good soundtrack. And may lead to a better understanding of "hypernormal science."
And getting back to the Michelson-Morley experiment. The author talks about how their results did not cause the scientific establishment to abandon the concept of luminiferous aether. Certainly there is conservatism in science. Gigging science-monkeys tend to want to see interesting results replicated.
And this was one of the issues with the MM experiment. It took a while to replicate. We're MUCH better at replicating it these days and I would guess that thousands (maybe hundreds) of physics undergrads did this very task last year. But we've had over a century of pedagogical experience w/ this experiment. We know how to structure it to get the results we want. This was not the case in the late 1800s and in fact, several early attempts to replicate the experiment suggested the existence of an aether which was drifting slowly towards Cleveland.
And what does it say that heat flow, fluid flow, diffusion and electrostatics share equations? Does it say there's something fundamental in reality? Or does it say there's something fundamental in the way we model reality?
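For concreteness, the shared structure those fields inherit is the Laplacian; in their standard textbook forms (steady state and source-free except for electrostatics; signs and constants vary by convention):

    \nabla^2 T = 0                        % steady-state heat conduction
    \nabla^2 c = 0                        % steady-state diffusion (Fick)
    \nabla^2 \Phi = 0                     % incompressible, irrotational flow
    \nabla^2 \phi = -\rho/\varepsilon_0   % electrostatics (Poisson)

Same operator, four theories; whether that reflects something about reality or something about our modeling habits is exactly the question.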
That being said... I think the author has hit upon something here... people are often wary of evidence which contradicts experience, even when that evidence (and not experience) is more correct.
But each of the examples he provides glosses over the process by which new paradigms overrode the old.
I deeply appreciate the author avoiding slavish fealty to fashionable AI trends. He probably could have gone further to describe more of the representational weaknesses of ESM3 and GNoME.
I fear, however, he has missed the point. It's less interesting to describe the messy ways in which AI fails than to describe the messy ways in which humans succeed. The process by which paradigms shift is messy, social and fundamentally human. It often has more to do with qualitative explanations than quantitative science. Science, as a human endeavor, is very much a story-telling exercise.
Please don't editorialize titles unless they're clearly clickbait.
"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading "Current AI Training Risks Hypernormal Science".
I find it funny how people are so concerned that AI cannot innovate, that AI coding agents only give the most bland solutions to any problem, etc., when the next step in OpenAI's 5 stages to AGI is literally called "Innovators".
The article presumes that the models we have today, which describe nearly everything, could still be subject to a major paradigm shift.
Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability. The edges are essentially models of things impossible to test. In fact, relativity was only recently fully backed up with experimental data.