You are conflating two completely separate things.
The Liskov substitution principle basically defines when you want inheritance and when you don't: if your classes can't respect it, they shouldn't use inheritance.
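To make that concrete, here's the textbook rectangle/square violation, sketched in Rust with a trait standing in for the base-class contract (the names are purely illustrative):

```rust
// Textbook Liskov violation, with a trait standing in for the base-class
// contract (Rust has no class inheritance; names are illustrative).
trait Resizable {
    fn set_width(&mut self, w: u32);
    fn set_height(&mut self, h: u32);
    fn area(&self) -> u32;
}

struct Rectangle { w: u32, h: u32 }

impl Resizable for Rectangle {
    fn set_width(&mut self, w: u32) { self.w = w; }
    fn set_height(&mut self, h: u32) { self.h = h; }
    fn area(&self) -> u32 { self.w * self.h }
}

// A square "is a" rectangle geometrically, but it can't honor the implied
// contract that set_width leaves the height alone.
struct Square { side: u32 }

impl Resizable for Square {
    fn set_width(&mut self, w: u32) { self.side = w; }
    fn set_height(&mut self, h: u32) { self.side = h; }
    fn area(&self) -> u32 { self.side * self.side }
}

// Code written against the Rectangle contract silently misbehaves for Square.
fn stretch(shape: &mut dyn Resizable) -> u32 {
    shape.set_width(4);
    shape.set_height(5);
    shape.area()
}

fn main() {
    assert_eq!(stretch(&mut Rectangle { w: 1, h: 1 }), 20); // as expected
    assert_eq!(stretch(&mut Square { side: 1 }), 25);       // surprise
}
```

When the substitution breaks like this, the relationship shouldn't be inheritance in the first place.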
The problems from your story relate to the implementation of some classes that really did need inheritance. The spaghetti you allude to was not caused by inheritance itself; it was caused by people creating spaghetti. Child classes calling parent class methods, which then call back into child class methods, and so on, is symptomatic of code that does too much, and of people taking code reuse to the extreme.
I've seen the same things happen with procedural code - a bunch of functions calling each other with all sorts of control parameters, and people adding just one more bool and one more if, because "an existing function already does most of what I want, it just needs to do it slightly differently at the seventh step". And after a few rounds of this, you end up with a function that can technically do 32 different things to the same data, depending on the combination of flags you pass in. And everyone deeply suspects that only 7 of those things are really used, but they're still afraid to break any of the other 25 possible cases with a change.
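Here's a hypothetical sketch of what that function looks like by the time it's done (Rust syntax, but the language doesn't matter; every name is invented):

```rust
// Hypothetical sketch of flag creep (all names invented). Each bool was
// bolted on by someone who needed the same thing, just slightly different
// at one step -- five flags means 2^5 = 32 possible paths through the data.
struct Order { total: f64, shipped: bool, notified: bool }

fn validate(_order: &Order) { /* pretend there are checks here */ }
fn legacy_tax(order: &Order) -> f64 { order.total * 0.05 }
fn current_tax(order: &Order) -> f64 { order.total * 0.08 }

fn process_order(
    order: &mut Order,
    skip_validation: bool,
    legacy_tax_rules: bool,
    defer_shipping: bool,
    notify_customer: bool,
    dry_run: bool,
) {
    if !skip_validation {
        validate(order);
    }
    let tax = if legacy_tax_rules { legacy_tax(order) } else { current_tax(order) };
    order.total += tax;
    if !defer_shipping && !dry_run {
        order.shipped = true; // stand-in for scheduling real shipping
    }
    if notify_customer && !dry_run {
        order.notified = true; // stand-in for sending a real email
    }
}

fn main() {
    let mut order = Order { total: 100.0, shipped: false, notified: false };
    // Quick: which of the 32 combinations is this, and who still relies on it?
    process_order(&mut order, false, true, false, true, false);
    println!("total = {}", order.total);
}
```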
People piggybacking on existing code and changing it just a little bit is the root of most spaghetti. Whether that happens through flags or unprincipled inheritance or a mass of callbacks, it will always happen if enough people are trying to do this and enough time passes.
I don't believe in free will. People (including me) are going to do the easiest thing possible given the constraints. I don't think that's bad in any way -- it's best to embrace it. In this view, the tools should optimize for the right thing to do also being the easy thing to do.
So if you have two options, one better than the other, then the better way to do things should be easier than the worse way. With inheritance as traditionally implemented, the worse way is easier than the better way.
And this can be fixed! You just need to make sure all the calls go one way (making the worse outcome harder), and then very carefully consider and work through the consequences of that. Maybe this fix ends up being unworkable due to the downstream consequences, but has anyone tried?
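One way to picture "all the calls go one way" -- just a sketch of the idea, not any particular framework -- is a driver that calls into the customizable pieces, where those pieces never get a reference back to the driver, so the parent-calls-child-calls-parent ping-pong can't happen by construction:

```rust
// Minimal sketch of one-way calls (illustration only): the Pipeline calls
// into each Step, but a Step never sees the Pipeline, so it has no way to
// call back up.
trait Step {
    fn run(&self, input: i64) -> i64;
}

struct Double;
impl Step for Double {
    fn run(&self, input: i64) -> i64 { input * 2 }
}

struct AddOne;
impl Step for AddOne {
    fn run(&self, input: i64) -> i64 { input + 1 }
}

struct Pipeline {
    steps: Vec<Box<dyn Step>>,
}

impl Pipeline {
    // Only the pipeline decides the order; the steps only see their input.
    fn run(&self, mut value: i64) -> i64 {
        for step in &self.steps {
            value = step.run(value);
        }
        value
    }
}

fn main() {
    let steps: Vec<Box<dyn Step>> = vec![Box::new(Double), Box::new(AddOne)];
    let pipeline = Pipeline { steps };
    println!("{}", pipeline.run(20)); // 41
}
```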
> I've seen the same things happen with procedural code - a bunch of functions calling each other with all sorts of control parameters, and people adding just one more bool and one more if, because "an existing function already does most of what I want, it just needs to do it slightly differently at the seventh step". And after a few rounds of this, you end up with a function that can technically do 32 different things to the same data, depending on the combination of flags you pass in. And everyone deeply suspects that only 7 of those things are really used, but they're still afraid to break any of the other 25 possible cases with a change.
Completely agree, that's a really bad way to do polymorphism too. A great way to do this kind of closed-universe polymorphism is through full-fledged sum types.
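For instance (a sketch with made-up names): instead of five independent bools, an enum names the handful of modes that actually exist, and the compiler forces every match to say what it does for each of them.

```rust
// Sketch of closed-universe polymorphism via a sum type (names invented):
// name the modes that actually exist instead of encoding them in bool flags.
enum ProcessingMode {
    Normal,
    LegacyTax,
    DryRun,
}

fn tax_rate(mode: &ProcessingMode) -> f64 {
    // The match must be exhaustive: add a variant later and every match
    // site that forgot about it becomes a compile error, not a silent
    // 33rd runtime path.
    match mode {
        ProcessingMode::Normal => 0.08,
        ProcessingMode::LegacyTax => 0.05,
        ProcessingMode::DryRun => 0.0,
    }
}

fn main() {
    println!("{}", tax_rate(&ProcessingMode::LegacyTax));
}
```

The 7 real cases become 7 named variants, and the other 25 flag combinations simply stop being expressible.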
> People piggybacking on existing code and changing it just a little bit is the root of most spaghetti. Whether that happens through flags or unprincipled inheritance or a mass of callbacks, it will always happen if enough people are trying to do this and enough time passes.
You're absolutely right. Where I go further is that I think it's possible to avoid this, via the carrot of making the better thing easy to do, and the stick of rigorous discipline against the worse things, enforced by automation. I think Rust, for all its many imperfections, does a really good job at this.
edit: or, if not avoid it, at least stem the decline. This aligns perfectly with not believing in free will -- if it's all genetic and environmental luck all the way down, then the easiest point of leverage is to change the environment such that people are lucky more often than before.