It occurred to me on my walk today that a program is not the only output of programming.
The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf
>>[2019] Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
During the Q&A, he asks, "do we really want software written that humans cannot understand?!" His steadfast doubts about the singularity are called into question, at least by his own 2019 responses.
Certainly the speaker is correct that modern hardware allows software to be crappily written -- I fondly recall the "olden times" he recounted, with the full-access operating systems of yesteryear. Those days are over...
The fact that a modern computer "needs" to be online to install an update is frustrating/concerning (e.g. on macOS, without a USB installer you must be online to update, even with a stand-alone updater already downloaded). Just use my local hardware (that I own) and install this software (that I have provided).
The phone number muscle memory example is perfect. There is a whole category of knowledge you only have if your hands did the work.
> The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
Perhaps on a related note, I've noticed that a lot of the positive talks about AI are about quantity. On the other hand, there is disproportionately little deep discussion about quality. And I mean not just short-term, local quality, but long-term, holistic quality (e.g. managing complexity under evolving requirements in a complex system with multiple connected parts) at real production scale, where there is much less tolerance for failure.
In all the places I've worked throughout my career, I've felt that there has always been a tension between those who cared more about things like the mental model and holistic quality, and those who seemed to care less or were even oblivious to it. I think one contribution of the current AI hype is that it has given a more concrete shape to this split...