“If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual. Human potential manifests in individuals.” https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....
“Experts and beginners both want to use the same tools. Children want to learn saxophone, not kazoo.” - Rich Hickey
I like this framing! Personally, I'd like to see the Unix shell evolve into a system with a GUI. Where text and graphics are integrated, and the shell isn't "trapped" in the terminal.
There is a historical, path-dependent conflation in this area -- a text-based UI doesn't necessarily imply a terminal-based UI
Some links to work in that direction: https://www.oilshell.org/blog/2023/12/screencasts.html#headl...
Excel is pretty good at combining the two paradigms -- it has the textbox where you can type (almost like a URL bar), and it has the GUI part. But for all its power, I don't want to use Excel ... I like spreadsheets, and use LibreCalc a bit, though it's limited.
And I'd like if there were multiple GUI/graphical paradigms combined with text, not just the grid of cells (as useful as it is)
The ginormous computer companies don't target programmers, they want ever more users aka consumers. Apple is a good example, their products are increasingly difficult to program yet increasingly polished to appeal to the under-informed buying public.
We only need to look at how BASIC (MS) and HyperCard (Apple) were eliminated in order to remove easily accessible end-user programming. wrt spreadsheets, only a tiny minority of operators use the macro or language binding features.
I shudder at new motor vehicles being ever more software based. By any recognized bugs-per-lines-of-code metric, hundreds of faults are lurking therein.
I’ve been working on my own to build in this same direction, and recently started writing about what I’ve built so far (HN link below). If you agreed with this article, give it a read.
https://news.ycombinator.com/item?id=44449848
(thanks to the poster for helping me discover this article)
I love this so much, and am eager to see what comes next in the "multi-part series".
This approach is rare, but when it works, it works really well.
I built this for myself. I call it imtropy and it is awesome.
Getting it prod ready for others takes too much time and money I don't have.
This has come up a lot at work for me the last few years.
I've ended up (unintentionally, but satellite control systems is a small world) working on two systems that both use the same framework (a COTS satellite C&C system), though with differing attitudes on how it should be used. The framework has a lot of issues; among them, the creators don't understand that if adding two 64-bit values gets you a 32-bit value, it's probably not working correctly (it has its own bespoke scripting system).
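For illustration only (the framework and its scripting language aren't public, so this is a generic sketch using shell arithmetic): silently storing a 64-bit sum in 32 bits keeps only the low word, which is the class of bug described above.

```shell
# Two values that each fit comfortably in 64 bits
a=3000000000
b=3000000000

# A correct 64-bit addition
echo $(( a + b ))                  # 6000000000

# What a runtime that keeps only 32 bits of the result effectively does:
# mask off everything above the low 32 bits
echo $(( (a + b) & 0xFFFFFFFF ))   # 1705032704, silently wrong
```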
However, one of the things I think they got right, at least in principle if not in the precise mechanism, is that the end users can develop their own automation scripts, displays for telemetry and other data, etc. very easily. What's been interesting is the perspective the two projects take on this.
In one, the operators are given a write-protected baseline. This means they cannot mess with the known working (or working but with known issues) versions willy-nilly. However, the operators make modifications to scripts as, for instance, it turns out the documentation on a satellite's command language is wrong (happens more than you'd expect for 9- to 10-figure systems). They make their own modified versions of the displays. They attach to the automation hooks so that when some telemetry values pop up, specific scripts are triggered, and so on. They're programming even if they don't recognize it as such, and actively engaged with improving their system.
In the other, the operators are actively discouraged from this. They are encouraged to provide feedback, but not to make their own scripts, to attach to automation hooks, or to make their own displays (whether new displays or versions of existing ones improved for their mission needs).
It's been interesting because, while the former had some issues (like operators writing scripts that failed because of subtle pitfalls in the scripting language that people on my side, more actively involved in maintaining the systems, understood and they didn't), it worked pretty well. They'd send us how they wanted things to look and flow, with working scripts and custom displays, not just descriptions. It wasn't speedy or always the most congenial, but it was collaborative. In the current one, though, it's more like there's a lack of trust in the operators: an assumption that they will fuck things up, and a fear of how to restore the system to a known good state that overrides any desire to let them improve it.
There are true idiots out there, and malicious people, so you have to take some precautions. But it's been interesting to me to see two organizations dealing with the same kinds of critical systems (screwing up a billion-dollar satellite is a big deal) taking almost diametrically opposed positions on operator customization and modification of their systems. I like the more open one better: add some reasonable constraints given the criticality of the systems in question (they have sims to test against; use version control, peer reviews, and engineering reviews), but let them make the modifications. A perfect system can't be engineered without input from users, and users making modifications to suit their needs shortens the loop instead of it taking weeks, months, or years.
There are cases when the entire point of automation is to remove the flexible but unreliable and sometimes untrustworthy human component.
Malleable software is hard not just for technology reasons; probably the most difficult part of designing software is thinking through and cleanly handling the implications of every decision. It's easy to imagine that one could 'just make it work like this instead of that' without understanding the implications of the change or the reasons why the system is like that in the first place. Making software has never been easier than it is today, but it's still hard because designing coherent systems that work correctly in all scenarios of usage is hard.
Yes, configurability is good, and scripting is a great way to safely add functionality to a system, but there will always be a distinction between people who use software and just want it to work, perhaps with minor tweaking, and those who build systems. It makes no more sense to throw everyone in the same bucket than to say that being able to change the oil in a car makes everyone a mechanical engineer.
Also:
> Many, many technologists have taken one look at an existing workflow of spreadsheets, reacted with performative disgust, and proposed the trifecta of microservices, Kubernetes and something called a "service mesh".
Yes, over-engineering is a thing, but piles of interlinked spreadsheets are usually thrown out precisely because the people who created them didn't have the skills necessary to build a system, and these ad-hoc systems eventually outgrow their usefulness and become unwieldy horror shows. Maybe Beryl in accounts knows the eighteen cells across three files that need to be updated when the interest rate changes, but if she leaves then we're all screwed.
> i would go one step further: the dream of malleable software is to unify users and programmers, such that there are just “operators” of a computer, and “writing a program” doesn’t sound any harder than “writing a resume”
People are on average very stupid, very lazy, and don't want to think about anything deeply; they may in fact not only not know how to get from origin to destination, but may not even know what their destination ought to be.
Making something more accessible means enabling people who will still be called developers to do more while knowing less; it won't ever make everyone a developer.
A computer operator is someone who maintains the operation of a computer system and schedules programs submitted by users.
It’s mostly a dead profession that has been automated by operating systems.
Honestly, neither programmer nor user has negative connotations to me. They both imply an active role when interacting with the computer. The term I _really_ hate that's getting thrown around way too commonly lately is consumer.
Amazing that there's no mention of AI in this post. People have been trying and failing to blur this line since the beginning of computing, and the only real success story has been Excel. And that's because rigid computing systems have to draw a line somewhere between user and developer, and if that line is in the wrong place, people will either get hampered or lost. And the correct threshold is different for every user and use case.
AI is going to finally be the realization of this dream. I don't think it could have happened any other way
The divergence between users and programmers became more pronounced over time. When command line interfaces were dominant they naturally made programmers out of users, even if they didn't realize it. CLIs made “using the computer” and “programming the computer” effectively the same activity in a lot of cases. A command someone entered to run a program was itself a program. Entering the previous command again and modifying it, for instance to pipe the output of the first program into another program, was also a program. Once the desired result was achieved, that final command could be saved and used again later. Or shared with someone to be used as-is, or to be tweaked a little bit for their own use case.
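As a concrete sketch of that incremental path (the filenames and log data here are hypothetical):

```shell
# Make a throwaway log to explore (hypothetical data)
printf '2024-01-01 ERROR disk full\n2024-01-01 INFO ok\n2024-01-01 ERROR again\n2024-01-02 ERROR net down\n' > app.log

# First interaction: just look at the errors
grep ERROR app.log

# Press up, recall the command, and extend it with a pipe: errors per day
grep ERROR app.log | cut -d' ' -f1 | sort | uniq -c

# The final command line is itself a program: save it to reuse and share
echo "grep ERROR app.log | cut -d' ' -f1 | sort | uniq -c" > count-errors.sh
```

Each intermediate command is already a complete, working program in its own right.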
Each interaction with a CLI results in a valid program that can be saved, studied, shared, and remixed. That's a powerful model for the same reasons the spreadsheet model is powerful: it's immediate, not modal, and successful interactions can be saved as an artifact and resumed later. Can we do the same things for GUIs? What is the GUI equivalent of pressing the up arrow key in a shell, where I can recall my previous interaction with the system and then modify it? Can I generate artifacts as byproducts from my interactions with a GUI system that I can save for later and share with others?