I would say that I understand all the levels down to (but not including) what it means for an electron to repel another negatively charged particle.
But what is not possible is to understand all these levels at the same time. And that has many implications.
We humans have limits on working memory, and if I need to swap in L1 cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring, not because it's necessarily smarter, but because it can effectively increase working memory across abstraction layers.
Understand one layer above (“why”) and one layer below (“how”).
Then you know “what” to build.
I think there's a difference between "No one understands all levels of the system all the way down, at some point we all draw a line and treat it as a black-box abstraction" vs. "At the level of abstraction I'm working with, I choose not to engage with this AI-generated complexity."
Consider the distinction between "I don't know how the automatic transmission in my car works" and "I never bothered to learn the meanings of the street signs in my jurisdiction."
This is a non-discussion.
You have to know enough about underlying and higher level systems to do YOUR job well. And AI cannot fully replace human review.
Yeah, it's not a problem that a particular person does not know it all, but if no one knows any of it except as a black box kind of thing, that is a rather large risk unless the system is a toy.
Edit: In a sense, "AI" software development is postmodern; it is a move away from reasoned software development, in which known axioms and rules are applied, toward software being arbitrary and 'given'.
The future 'code ninja' might be a deconstructionist, a spectre of Derrida.
9front's manuals will teach you the basics, the actual basics of CS (the Plan 9 intro works too, if you know how to adapt it). These are at /sys/doc. Begin with rc(1) and keep moving up the levels. You can safely try 9front in a virtual machine; there are instructions to download and set it up at https://9front.org .
Write servers/clients with rc(1) and the tools at /bin/aux, such as aux/listen. There already are IRC clients and some other tools. Then work through Nemo's 9front C book.
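For instance, a throwaway echo server can be just cat behind the one-shot listener (a sketch from memory; the port is made up, and you should check listen(8) on your system for the exact flags):

    aux/listen1 -v 'tcp!*!12345' /bin/cat

Connecting with con tcp!localhost!12345 (see con(1)) should echo back whatever you type, since listen1 attaches the connection to the command's standard input and output.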
On floats, try them at a 'low level' with Forth. Get Muxleq from https://github.com/howerj/mux and compile it:
cc -O2 -ffast-math -o muxleq muxleq.c
Edit muxleq.fth and set the constants in the file like this:

1 constant opt.multi ( Add in large "pause" primitive )
1 constant opt.editor ( Add in Text Editor )
1 constant opt.info ( Add info printing function )
0 constant opt.generate-c ( Generate C code )
1 constant opt.better-see ( Replace 'see' with better version )
1 constant opt.control ( Add in more control structures )
0 constant opt.allocate ( Add in "allocate"/"free" )
1 constant opt.float ( Add in floating point code )
0 constant opt.glossary ( Add in "glossary" word )
1 constant opt.optimize ( Enable extra optimization )
1 constant opt.divmod ( Use "opDivMod" primitive )
0 constant opt.self ( self-interpreter [NOT WORKING] )
Recompile your image: ./muxleq muxleq.dec < muxleq.fth > new.dec
new.dec will be your main Forth image. Run it: ./muxleq new.dec
Get the book from the author and look at the code to see how the floating point support is implemented in software. Learn Forth with the Starting Forth book (adapting it for ANS Forth), and read Thinking Forth after finishing Starting Forth.
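As a quick sanity check of the float wordset, in any ANS Forth with floating point support (GForth, for instance; the Muxleq build may name things slightly differently, so treat that as an assumption):

    2.5e 1.5e f+ f.  \ prints 4.
    10e flog f.      \ prints 1. (base-10 logarithm)

Then stepping through how words like f+ are built out of integer operations in the Muxleq image is exactly the 'low level' view of floats mentioned above.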
Finally, back to 9front: there's 'cpsbook.pdf' too, from Hoare, on concurrent programming and threads. That will be incredibly useful in the near future. If you are a Go programmer, well, you are at home with CSP. Also, compare CSP to how the concurrent Forth switches tasks. It's great to compare/debug code in a tiny Forth on Subleq/Muxleq, because if your code gets relatively fast there, it will fly under GForth, and the constraints will force you to become a much better programmer.
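Since Go comes up here, a minimal CSP-style sketch (my own toy example, not from Hoare's book): two goroutines that share nothing and communicate only over channels, which is the model the book formalizes and roughly what a cooperative Forth task does when it yields at pause:

    package main

    import "fmt"

    // square reads numbers from in, sends their squares to out,
    // and closes out when in is exhausted. No shared state at all.
    func square(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)
        go square(in, out)
        go func() {
            for i := 1; i <= 5; i++ {
                in <- i // unbuffered: blocks until square receives
            }
            close(in)
        }()
        for sq := range out {
            fmt.Println(sq) // 1 4 9 16 25
        }
    }

Unbuffered channels are a rendezvous, which is close to the synchronous communication in CSP proper.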
CPUs? Caches? RAM latency? Muxleq/Subleq behaves nearly the same everywhere, modulo your simulation speed; for learning, it's all there. On real-world systems, glibc, the Go runtime, etc., will take care of that, producing a similar outcome everywhere. Failing that, most people only need to be aware of instruction sets from SSE2 up to NEON on ARM.
Hint: there already are transpilers from dedicated Intel instructions to their ARM counterparts and vice versa.
>How garbage collection works inside of the JVM?
No, but I can figure it out a little, using the Zenlisp one as a rough approximation. Or... you know, Forth, by hand. And Go, which seems easier and doesn't need a dog-slow VM trying to replicate what Inferno did in the 90s with far fewer resources.
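Go at least lets you poke at its collector from user code, which helps demystify it. A small sketch using only the standard library (the allocation loop is arbitrary, just there to give the GC some work):

    package main

    import (
        "fmt"
        "runtime"
    )

    var sink []byte // global, so the allocations below escape to the heap

    func main() {
        // Churn out short-lived garbage for the collector to chase.
        for i := 0; i < 1_000_000; i++ {
            sink = make([]byte, 128)
        }

        runtime.GC() // force a collection
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("GC cycles: %d, heap in use: %d bytes\n", m.NumGC, m.HeapInuse)
    }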
Sure, we have complex systems that we don't know how everything works (car, computer, cellphone, etc.). However, we do expect those systems to behave deterministically at their interface to us. And when they don't, we consider them broken.
For example, why is the HP-12C still the dominant business calculator? Because other calculators were non-deterministically wrong for certain financial calculations. The HP-12C may not even have been strictly "correct", but it was deterministic in the ways it wasn't.
Financial people didn't know or care about guard digits or numerical instability. They very much did care that their financial calculations were consistent and predictable.
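To make 'numerical instability' concrete, a tiny Go example of my own (nothing to do with the HP-12C's internals): floating-point addition is not associative, so the same three numbers summed in two orders give two different answers. A calculator that silently reorders operations between models or firmware revisions hands financial people exactly the inconsistency they hated:

    package main

    import "fmt"

    func main() {
        a, b, c := 1e16, -1e16, 1.0

        left := (a + b) + c  // the big terms cancel first: 1
        right := a + (b + c) // c is absorbed into b and lost: 0

        fmt.Println(left, right, left == right) // 1 0 false
    }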
The question is: Who will build the HP-12C of AI?
I mean the quotes in this article aren't even disagreeing except on vague value judgements with no practical consequences.
Yes, you can make better and more polished solutions with a deep understanding of every consequence of every design decision. You can also make some real-world situation thousands of times better without a deep understanding of things. These two statements don't disagree at all.
The MIPS image rendering example is perfect here. Notice he didn't say "there was some obscure attempt to load images on MIPS and nobody used it because it was so slow so they used the readily available fast one instead". There was some apparently widely used routine to load images that was popular enough it got the attention of one of the few people who deeply understands how the system worked, and they fixed it up.
PHP is an awful trash language and like half the internet was built on it and lots of people had a lot of fun and got a lot more work done because people wrote a lot of websites in PHP. Sure, PHP is still trash, but it's better to have trash than wait around for someone to 'do it right', and maybe nobody ever gets around to it.
Worse is better. https://en.wikipedia.org/wiki/Worse_is_better
Isn't ceding all power to AIs run by tech companies kind of the opposite, if we have to have AI everywhere? Now no one knows how anything works (instead of everyone knowing a tiny bit and all working together), and everyone is just dependent on the people with all the compute.