It's certainly the case that I don't always know how the layer below works, i.e., how the compiled code executes in detail. But I have a mental model that's good enough that I can use the compiler, and I trust that the compiler authors know what they are doing and that the result is well-tested. Over forty years and a slew of different languages I've found that to be an excellent bet.
But I understand how my code works. There's a huge difference between not understanding the layer below and not understanding the layer that I am responsible for.
This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.
But someone designed the abstraction (e.g. the Wifi driver, the processor, the transistor), and they made sure it works and provides an interface to the layers above.
Now you could say a piece of software completely written by a coding agent is just another abstraction, but the article does not really make that point, so I don't see what message it tries to convey. "I don't understand my wifi driver, so I don't need to understand my code" does not sound like a valid argument.
The dependency tree is where this bites hardest in practice. A typical Node.js project pulls in 800+ transitive dependencies, each with their own release cadence and breaking change policies. Nobody on your team understands how most of them work internally, and that's fine - until one of them ships a breaking change, deprecates an API, or hits end-of-life.
The anon291 comment about interface stability is exactly right. The reason you don't need to understand CPU microarchitecture is that x86 instructions from 1990 still work. Your React component library from 2023 might not survive the next major version. The "nobody knows how the whole system works" problem is manageable when the interfaces are stable and well-documented. It becomes genuinely dangerous when the interfaces themselves are churning.
What I've noticed is that teams don't even track which of their dependencies are approaching EOL or have known vulnerabilities at the version they're pinned to. The knowledge gap isn't just "how does this work" - it's "is this thing I depend on still actively maintained, and what changed in the last 3 releases that I skipped?" That's the operational version of this problem that bites people every week.
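As a back-of-the-envelope check on that, here's a rough sketch for counting the distinct packages a project actually pulls in. It assumes npm 7+ and the nested JSON shape that `npm ls --all --json` emits; treat it as a starting point, not a tool.

```python
# Sketch: count distinct transitive dependencies of an npm project.
# Assumes npm >= 7 and that this is run from the project root.
import json
import subprocess

def collect(tree: dict, seen: set) -> None:
    """Walk the nested 'dependencies' objects, recording name@version pairs."""
    for name, info in (tree.get("dependencies") or {}).items():
        seen.add(f"{name}@{info.get('version', '?')}")
        collect(info, seen)

# npm ls exits non-zero on peer-dependency warnings, so don't use check=True.
raw = subprocess.run(
    ["npm", "ls", "--all", "--json"],
    capture_output=True, text=True, check=False,
).stdout

packages: set = set()
collect(json.loads(raw), packages)
print(f"{len(packages)} distinct dependencies in the tree")
```

The more useful follow-up is joining that list against advisory and end-of-life data, which is exactly the tracking the comment above says most teams never do.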
That's not how things work in practice.
I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop or from delivery - that's a whole different level of ignorance, and a much more dangerous one.
Yes, it may be fine and completely non-concerning if agricultural corporations produce your wheat and your meat; but if those corporations start producing standardized cooked food for everyone, is it really the same? Is it a good evolution, or not? That's the debate here.
The claimed connections here fall apart for me pretty quickly.
CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky, nor nearly as likely to need refactoring tomorrow.
> “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? [Paraphrasing]: interrupts, 802.11ax modulation scheme, QAM, memory models, garbage collection, field effect transistors...
To a reasonable degree, yes, I can. I am also probably an outlier, and the product of various careers, with a small dose of autism sprinkled in. My first career was as a Submarine Nuclear Electronics Technician / Reactor Operator in the U.S. Navy. As part of that training curriculum, I was taught electronics theory, troubleshooting, and repair, which begins with "these are electrons" and ends with "you can now troubleshoot a VMEbus [0] Motorola 68000-based system down to the component level." I also later went back to teach at that school, and rewrote the 68000 training curriculum to use the Intel 386 (progress, eh?).
Additionally, all submariners are required to undergo an oral board before being qualified, and analogous questions like that are extremely common, e.g. "I am a drop of seawater. How do I turn the light on in your rack?" To answer that question, you end up drawing (from memory) an enormous amount of systems and connecting them together, replete with the correct valve numbers and electrical buses, as well as explaining how all of them work, and going down various rabbit holes as the board members see fit, like the throttling characteristics of a gate valve (sub-optimal). If it's written down somewhere, or can be derived, it's fair game. And like TFA's discussion about Brendan Gregg's practice of finding someone's knowledge limit, the board members will not stop until they find something you don't know - at which point you are required to find it out, and get back to them.
When I got into tech, I applied this same mindset. If I don't know something, I find out. I read docs, I read man pages, I test assumptions, I tinker, I experiment. This has served me well over the years, with seemingly random knowledge surfacing during an incident, or when troubleshooting. I usually don't remember all of it, but I remember enough to find the source docs again and refresh my memory.
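To make that concrete with the URL question quoted above, here's a minimal sketch of poking at just the top few layers by hand - name resolution, a TCP connection, a raw HTTP request - using only the Python standard library (example.com is a placeholder host, and IPv4 is forced just to keep the address tuple simple):

```python
# Peeling back the first few layers of "what happens when you hit enter on a URL".
import socket

host, port = "example.com", 80

# DNS layer: ask the resolver for an IPv4 address for the host.
addr = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)[0][4]
print("resolved to", addr)

# TCP layer: the three-way handshake happens inside connect().
with socket.create_connection(addr, timeout=5) as sock:
    # HTTP layer: a hand-written GET request over the raw socket.
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode())
    print(sock.recv(4096).split(b"\r\n", 1)[0].decode())  # status line
```

Everything below those three calls - congestion control, the NIC driver, the 802.11 modulation - stays invisible, which is rather the point of the question.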
Granted, I'm not a software developer, so the things I work on tend to be simpler. But the people I know who are recognized for "knowing how the whole thing works" are likely to have earned that distinction not necessarily by actually knowing how it works, but through:
1. The ability and interest to investigate things and find out how they work, when needed or desired. They are genuinely interested in how things work, and they are probably competent in the things that are "glue" in their disciplines, such as math and physics in my case.
2. The ability to improvise an answer when needed, by interpolating across gaps in knowledge, well enough to get past whatever problem is being solved. And to decide when something doesn't need to be understood.
> This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.
That doesn’t make it OK. This is like being stuck in a room whose pillars are starting to deteriorate; then someone comes along with a sledgehammer and starts hitting them, and your reaction is to shrug and say “ah, well, the situation is bad and will only get worse, but the roof hasn’t fallen on our heads yet, so let’s do nothing”.
If the situation is untenable, the right course of action is to try to correct it, not shrug it off.
> AI will make this situation worse.
Being an AI skeptic more than not, I don't think the article's conclusion is true.
What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask one how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, that's already better than a human who has no clue how the telephone works or where to even begin if they wanted to understand it.
Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of being, well, literally quite expensive and power-hungry. But those are technical details.
LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.
LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.
Huh?
The whole point of society is that you don’t need to know how the whole thing works. You just use it.
How does the water system maintain pressure so water actually comes out when you turn on the tap? That’s entirely the wrong question. You should be asking why you never needed to think about that until now, because that answer is way more mind-expanding and fascinating. Humans invented entire economic systems just so you don’t need to know everything, so you can wash your hands and go back to your work doing your thing in the giant machine. Maybe your job is to make software that tap-water engineers use everyday. Is it a crisis if they don’t understand everything about what you do? Not bloody likely - their heads are full of water engineering knowledge already.
It is not the end of the world to not know everything - it’s actually a miracle of modern society!
This also applies to other things. No one person knows how to make a pencil.
Three minute video by Milton Friedman: https://youtu.be/67tHtpac5ws?si=nFOLok7o87b8UXxY
Strange article. The problem isn’t that everyone doesn’t know how everything works, it’s that AI coding could mean there is no one who knows how a system works.
There will always be many gaps in people's knowledge. You start with what you need to understand, and typically dive deeper only when it is necessary. Where it starts to be a problem, in my mind, is when people have no curiosity about what's going on underneath, or even worse, start to get superstitious about avoiding holes in the abstraction without the willingness to dig a little and find out why.
Perhaps a dose of pragmatism is needed here?
I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."
I'd love to understand it better, and I hope that through my journey of working with computers I'll learn more about the underlying concepts: registers, buses, memory, assembly, etc.
Practically, however, I write scripts that solve real-world problems, from automating the coffee machine to managing infrastructure at scale.
I'm not going to wait and read a book on x86 assembly before I write some Python, however. (I wish it were that easy.)
To the greybeards that do have a grasp of these concepts though? It's your responsibility to share that wealth of knowledge. It's a bitter ask, I know.
I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.
I take a fairly optimistic view of the adoption of AI assistants in our line of work. We begin to work and reason at a higher level and let the agents worry about the lower-level details. Know where else this happens? Any human organization that has existed, exists, or ever will. Hierarchies form because no one person can do everything and hold all the details in their mind, especially as the complexity of what they intend to accomplish goes up.
One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.
Get enough people in the room and they can describe "the system". Everything OP lists (QAM, QPSK, WPA whatever) can be read about and learned. Literally no one understands generative models, and there isn't a way for us to learn about their workings. These things are entirely new beasts.
> Nobody knows how the whole system works
True.
But in all systems up to now, for each part of the system, somebody knew how it worked.
That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.
"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead
Let me figure out how exactly the human body works before using it.
It is not about having infinite width and depth of knowledge. It's about abstracting at the right level, so that the relevant components are in focus and you can assume correctness for everything outside the scope of what you are solving.
Systems include people, who make their own decisions that affect how those systems work, and we don't go down to biology and chemistry to understand how they make choices. But that doesn't mean people's decisions should be ignored entirely in our analysis, just that there is a right abstraction level for them.
And sometimes a peripheral or abstracted component deserves to be examined in more detail, because some of its subcomponents or its fine-grained behavior make a difference for what we are solving. Can we do that?
There’s plenty of people that know the fundamentals of the system. It’s a mistake to think that understanding specific technical details about an implementation is necessary to understand the system. It would make more sense to ask questions about whether someone could conceivably build the system from scratch if they have to. There’s plenty of people that have worked in academic fabs that have also written verilog and operating systems and messed with radios.
But people are expected to understand the part of the system they are responsible for at the level of abstraction they are being paid to operate.
This new arrangement would be perfectly fine if they aren't responsible when/if it breaks.
I think a lot of people have a fear of AI coding because they're worried that we will move from a world where nobody understands how the whole system works, to a world where nobody knows how any of it works.
Not just tech.
Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand, or the Sentinelese, but no one in any Western society.
Adam Jacob: "It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."
This post just doubled down without presenting any kind of argument.
Bruce Perens: "Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware."
Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.
Oh, so many times over the decades I've had to explain to a dev why iterating over many things and performing a heavy task like a DB query inside the loop will result in bad things happening... all because they don't really comprehend how things work.
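For what it's worth, that conversation usually looks something like the sketch below - sqlite3 and made-up table names (users, orders), not anyone's real schema:

```python
# The classic N+1 shape: a query inside a loop vs. one query with a join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(1, 1, 9.5), (2, 1, 3.0)])

def totals_slow(conn):
    """One query per user: N+1 round trips to the database."""
    users = conn.execute("SELECT id, name FROM users").fetchall()
    return {
        name: conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?", (uid,)
        ).fetchone()[0]
        for uid, name in users
    }

def totals_fast(conn):
    """Same answer in a single round trip: the database does the join and grouping."""
    rows = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """).fetchall()
    return dict(rows)

print(totals_slow(conn))  # {'ada': 12.5, 'bob': 0}
print(totals_fast(conn))  # same result, one query
```

You don't need to know how the query planner works to know which of those to write - but you do need to know that each round trip costs something.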
There’s a difference between abstracting away the network layer and not understanding the business logic. What we are talking about with AI slop is not understanding the business logic. That gets really close to just throwing stuff at the wall and seeing what works instead of a systematic, reliable way to develop things that have predictable results.
It’s like if you are building a production line. You need to use a certain type of steel because it has certain heat properties. You don’t need to know exactly how they make that type of steel. But you need to know to use that steel. AI slop is basically just using whatever steel.
At every layer of abstraction in complexity, the experts at that layer need to have a deep understanding of their layer of complexity. The whole point is that you can rely on certain contracts made by lower layers to build yours.
So no, just slopping your way through the application layer isn’t just on theme with “we have never known how the whole system works”. It’s ignoring that you still have a responsibility to understand the current layer where you’re at, which is the business logic layer. If you don’t understand that, you can’t build reliable software because you aren’t using the system we have in place to predictably and deterministically specify outputs. Which is code.
Reminds me of a short essay, "I, Pencil".
The problem is education, and maybe ironically, AI can assist in improving that.
I've read a lot about programming and it all feels pretty disorganized; the post about programmers being ignorant of how compilers work doesn't sound surprising (go to a bunch of educational programming resources and see if they cover any of that).
It sounds like we need more comprehensive and detailed lists.
For example, with objections to "vibe coding", couldn't we just make a list of people's concerns and then work at improving AI's outputs to reflect the concerns people raise? (Things like security, designs that minimize tech debt, outputting for readability if someone does need to manually review the code in the future, etc.?)
Incidentally, this also reminds me of political or religious stances against technology, like the Amish take, for example. The kind of ignorance of, and dependence on, processes outside our control discussed here seems to be an inherent quality of technological systems as they grow and become more complex.
Yes, but the person who understands a lot of the system is invaluable.
It's called specialization. Not knowing everything is how we got this far.
Let me make it worse. Much worse. :)
https://youtu.be/36myc8wQhLo (USENIX ATC '21/OSDI '21 Joint Keynote Address-It's Time for Operating Systems to Rediscover Hardware)
Adam Jacob's quote is this:
"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."
It actually really is slop. He may wish to ignore it but that does not change anything. AI comes with slop - that is undeniable. You only need to look at the content generated via AI.
He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how the AI reaches any decision, so they also lose the ability to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better". It seems everyone is on the AI hype train - eventually it'll either crash or slow down massively.
For each part of a complex system, somebody knows how it works. We can't say this for complex systems created with AI. This is a road into the abyss. The article makes it worse by downplaying the issue.
I do.
To be fair, I don't know how a living human individual works, let alone how they actually work in society. I suspect I'm not alone in this.
So, nothing new under the sun: often the practice comes first, and only then can some theory emerge, which can then be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled, created together on the go, obviously.
Script kiddies have always existed and always will.
“Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.”
It's strange to believe that Twitter/X has fallen. Virtually every major character in software, AI and tech is active on X. The people who are actually building the tools that we discuss everyday post on X.
LinkedIn is weeks/months behind topics that originate from X. It suggests you might be living in a bubble if you believe X has fallen.
What a well-written article. That's actually a problem. The time will come when it hits us the same way it did the aqueducts: lost technology that no one knows how it worked in detail. Maybe that's just how engineering evolution works?
The pre-2023 abstractions that power the Internet and have made many people rich are the sweet spot.
You have to understand some of the system, and saying that if no one understands the whole system anyway we can give up all understanding is a fallacy.
Even for a programming language criticized for a permissive spec, like C, you can write a formally verified compiler - CompCert. Good luck doing that for your agentic workflow with natural language input.
Citing a few manic posts from influencers does not change that.
Wikipedia knows how it all works, and that's good enough in case we need to reboot civilization.
I would say that I understand all the levels down to (but not including) what it means for an electron to repel another particle of negative charge.
But what is not possible is to understand all these levels at the same time. And that has many implications.
We humans have limits on working memory, and if I need to swap in L1 cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring - not because it's necessarily smarter, but because we can increase the working memory across abstraction layers.
Isn't ceding all power to AIs run by tech companies kinda the opposite - if we have to have AI everywhere? Now no one knows how anything works (instead of everyone knowing a tiny bit and all working together), and also everyone is just dependent on the people with all the compute.
Engineers pay for abstractions with more powerful hardware, but can optimize at will (hopefully). Will AI be able to afford more human hours to churn through piles of unfamiliar code?
Why does the author imply not knowing everything is a bad thing? If you have clear protocols and interfaces, not knowing everything enables you to make bigger innovations. If everything is a complex mess, then no.
We keep trading knowledge of the natural, physical world for temporary, rapidly changing knowledge of abstractions and software tools that we do not control (now LLM cloud tools).
The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.
Understand one layer above (“why”) and one layer below (“how”).
Then you know “what” to build.
I think there's a difference between "No one understands all levels of the system all the way down, at some point we all draw a line and treat it as a black-box abstraction" vs. "At the level of abstraction I'm working with, I choose not to engage with this AI-generated complexity."
Consider the distinction between I don't know how the automatic transmission in my car works, vs. I never bothered to learn the meanings of the street signs in my jurisdiction.
There are many layers to this. But there is one style of programming that concerns me: where you neither understand the layer above you (why the product exists and what the goal of the system is) nor the layer below (how to actually implement the behavior). In the past, many developers barely understood the business case, but at least they understood how to translate it into code, and could put backpressure on the business. Now, however, it's apparently not even necessary to know how the code works!
The argument seems to be, we should float on a thin lubricant of "that's someone else's concern" (either the AI or the PMs) gliding blissfully from one ticket to another. Neither grasping our goal nor our outcome. If the tests are green and the buttons submit, mission accomplished!
Using Claude I can feel my situational awareness slipping from my grasp. It's increasingly clear that this style of development pushes you to stop looking at any of the code at all. My English instructions do not leave any residual growth. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?