It’s Postel’s law at the end of the day: “be conservative in what you do, be liberal in what you accept from others”. As a site owner I want my site to fail loudly and quickly before a user sees a problem; as a user I never want to see a problem.
ePub is in a nice place: the number of documents to check for errors is reasonable, and the resulting artefact is designed to be shipped and never (or rarely) amended. That means we can shift the balance towards strict parsing. But for a web site of thousands (or millions) of documents that are being amended regularly, the balance shifts back to loose parsing as the best way of meeting user needs.
Isn't the developer always the first user? With strict parsing, testing a site before launch would surface the problem right there and allow you to fix it, so you could launch a bug-free site.
Postel's law has turned out to be good for prototypes and experiments, but a horrible way to make anything with any degree of reliability.
Postel's Law sounds nice, but it can cause major problems. It produces a de facto spec that differs from the written spec, and disagreements about what a piece of data actually means can lead to bugs and even security vulnerabilities.
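A concrete instance of this "disagreement about what data means": JSON's written spec (RFC 8259) only says object keys SHOULD be unique, so liberal parsers each pick their own behaviour for duplicates. Python's `json` module silently keeps the last occurrence; a parser that keeps the first would read the same bytes differently, and that mismatch between two components is exactly where security bugs hide. A minimal illustration:

```python
import json

# The same document, read by a last-key-wins parser (Python's json).
doc = '{"role": "user", "role": "admin"}'
parsed = json.loads(doc)

# A validator that takes the FIRST value would see role == "user",
# while the downstream consumer below sees role == "admin".
print(parsed["role"])  # prints: admin
```

Both behaviours are "liberal in what they accept"; neither is wrong by the letter of the spec, which is precisely the problem.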
Having strictly parsed HTML from the start would be fine. You'd check it before you ship it and you'd make sure it's valid.
Requiring it now would be a disaster, of course. There's so much malformed HTML out there. But making HTML parsers accept garbage at the beginning was the wrong choice.