Languages should be small, not large. I find that every language I've ever used that tries to throw everything and the kitchen sink at you eventually deteriorates into a mess, and that mess spills over into the projects built on the language as long-term instability. You should be able to take a 10-year-old codebase, compile it, and run it. Backwards compatibility is an absolute non-negotiable for programming languages, and if you disagree with that you are building toys, not production-grade systems.
Egad, no. This is how you get C++, whose core tenet seems to be “someone used this once in 1994 so we can never change it”.
Even adding a new keyword will break some code out there that used it as a variable name or something. Perfect backward compatibility means you can never improve anything, ever, lest it cause someone a nonzero amount of porting effort.
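This isn't hypothetical; C went through it. C99 promoted `restrict` from an ordinary identifier to a keyword, so a previously-legal declaration stopped compiling:

```c
/* Legal C89: `restrict` was just an identifier back then. */
int restrict = 0;   /* fine with -std=c89, syntax error with -std=c99 */
```

It's part of why later additions hide in the reserved namespace of underscore-plus-capital names (`_Bool`, `_Generic`): those can't legally collide with user code.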
> Languages should be small, not large.
Yes. At the very least, features should carry a lot of weight and be orthogonal to other features. When I was young I used to pride myself on knowing all the ins and outs of modern C++, but over time I realized that needing to be a “language lawyer” was a design shortcoming.
All that being said, I’ve never seen the functionality of Rust’s borrow checker reduced to a simpler set of orthogonal features, and it’s not clear that’s even possible.
If you want or have to build a large program, something must be large, be it the language, its standard library, third-party code, or the code you write.
I think it’s best if it is one of the first two, as that makes it easier to add third-party code to yours, and it takes less effort to bring newcomers up to speed on the codebase. As an example, take strings. C doesn’t really have them as a basic type, so third-party libraries all invent their own, requiring those using them to write glue code.
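To make that concrete, here is a sketch of the glue this forces, with two made-up libraries (`lib1` and `lib2`, both hypothetical) standing in for the real ones:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical third-party string types; every sizable C codebase
   accumulates a few of these. */
typedef struct { char *data; size_t len; } lib1_str;  /* length-prefixed */
typedef struct { char *cstr; } lib2_str;              /* NUL-terminated  */

/* The glue the application author gets to write and maintain
   (error handling omitted for brevity). */
static lib2_str lib1_to_lib2(lib1_str s) {
    lib2_str out;
    out.cstr = malloc(s.len + 1);
    memcpy(out.cstr, s.data, s.len);
    out.cstr[s.len] = '\0';  /* lib1 strings carry no terminator */
    return out;
}
```

Multiply that by every pair of libraries that need to talk to each other, and a standard string type starts to look cheap.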
That’s why standard libraries and, to a lesser extent, languages tend to grow.
Ideally that growth preserves backwards compatibility, but there’s a tension between moving fast and not making mistakes, so sometimes errors are made and APIs ‘have’ to be deprecated or removed.
I suspect the problem is that, time and again, a small extension to the language would let an entire class of algorithms be implemented much more efficiently and/or clearly.
Many people encounter these algorithms only after many others have written large libraries and codebases. It’s much easier to slightly extend the language than to start over or (if it’s even possible) implement the algorithm in an ugly way using existing features. But pile up enough extensions (and glue to handle where they overlap), and even a language that was initially designed to be simple no longer is.
E.g., Go used to be much simpler, but the lack of generics in particular kept coming up as a pain point in many projects. Now Go has generics, but arguably it isn’t simple anymore.
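For anyone who didn't live through the pre-1.18 era, the workarounds were `interface{}` (giving up static typing) or code generation. A rough before/after sketch:

```go
package main

import "fmt"

// Pre-1.18 style: interface{} erases the static types, and this
// body only actually works when both arguments are ints.
func maxAny(a, b interface{}) interface{} {
    if a.(int) > b.(int) {
        return a
    }
    return b
}

// Go 1.18+: one definition, statically typed, usable with every
// type listed in the constraint.
func Max[T int | int64 | float64 | string](a, b T) T {
    if a > b {
        return a
    }
    return b
}

func main() {
    fmt.Println(maxAny(3, 7).(int)) // 7, after a runtime assertion
    fmt.Println(Max(3, 7))          // 7, checked at compile time
    fmt.Println(Max("a", "b"))      // b
}
```

Whether the added machinery (type parameters, constraints, inference rules) was worth it is exactly the trade-off being described.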
Haskell's user-facing language gets compiled down to Haskell "Core", which is what the language can actually do. So any new language feature gets a sanity check when that first transformation is written.
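Strictly, GHC's Core is a small typed lambda calculus and the example below shows desugaring, which happens a step earlier, but it gives the flavor: convenience syntax has to be expressible in plainer constructs.

```haskell
-- Surface syntax: do-notation, a pure convenience feature.
greet :: IO ()
greet = do
  name <- getLine
  putStrLn ("hello, " ++ name)

-- What the compiler lowers it to: just (>>=) and a lambda.
-- A feature that can't be lowered to something this plain is
-- a warning sign.
greet' :: IO ()
greet' = getLine >>= \name -> putStrLn ("hello, " ++ name)
```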
> Backwards compatibility is an absolute non-negotiable for programming languages
What programming language(s) satisfy this criterion, if any?
George Orwell showed us, with Newspeak, that small languages constrain our thinking.
A small language with the ability to extend it (like Lisp) is probably the sweet spot, but lol, look at what you have actually achieved: your own dialect, which you have to reinvent for each project, and which other people have had to reinvent time after time.
Let languages and thought be large, but use only what is needed.
I'm not sure what this is arguing against here. Anyone who follows Rust knows that it's relatively modest when it comes to adding new features; most of the "features" that get added to Rust are either new stdlib APIs or just streamlining existing features so that they're less restrictive/easier to use. And Rust has a fantastic backwards compatibility story.
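The concrete mechanism behind that story is editions, and it answers the keyword objection upthread: `async` became a keyword in the 2018 edition, while 2015-edition code that used it as an identifier keeps compiling, and crates of different editions link together in the same build.

```rust
// Compiles under --edition 2015, where `async` was still an
// ordinary identifier; under --edition 2018 it is a keyword and
// this is a syntax error. Old crates keep building either way.
fn main() {
    let async = 5;
    println!("{}", async);
}
```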