Good idea; we absolutely should replace the Web, but I have some issues with this proposal:
- We don't want multiple versions (1.1.1, 1.2.1), but we also don't want constant churn (the current dev/product fad). What we want is one thing that works well indefinitely, is backwards compatible, changes infrequently, and can be expanded when necessary. To achieve that, we have to abandon the idea of monolithic web browsers.
"The Web" is not a hypertext document system, as much as some people (myself, and the Dillo folks, probably) would like it to be. It is an application platform, so you must consider the needs of an application platform if you want a "new Web". The browser interfaces with the entire OS plus a slew of protocols and libraries; it's Android in userland. It will change as constantly as OSes and the rest of the stack change, which is to say constantly.

So to get away from churn, we need to break the application platform into layers, and those layers need simple, well-defined, backwards-compatible interfaces with extension points. The model for that has been around for decades: core network protocols have lasted 50 years without being replaced, adding features over time without feature creep while remaining backwards compatible. There aren't many versions of the common internet protocols. And importantly, you aren't tied to a single implementation, the way people get stuck on one browser.
The standard should follow this extremely well-established pattern: layers of independent components that aren't built into a monolith. It can still carry a version initially, but we shouldn't need to bump it; instead we add feature flags and handshakes, the way network protocols do. The end result should be a combination of a "web POSIX" plus layered protocols/specs.
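To make the feature-flag idea concrete, here's a minimal sketch of version-free negotiation in the spirit of SMTP's EHLO or TLS extensions. The wire format and the feature names (`TEXT-TREE`, etc.) are invented for illustration, not part of any real spec:

```python
# Hypothetical capability negotiation: no version numbers, only feature sets.
SERVER_FEATURES = {"TEXT-TREE", "STREAMING", "COMPRESS-ZSTD"}

def server_hello():
    # The server advertises what it supports; it never advertises a version.
    return "HELLO " + " ".join(sorted(SERVER_FEATURES))

def negotiate(client_features, hello_line):
    # The client intersects its own feature set with the advertisement.
    advertised = set(hello_line.split()[1:])
    return client_features & advertised

agreed = negotiate({"TEXT-TREE", "SCRIPTING", "COMPRESS-ZSTD"}, server_hello())
# Both sides proceed using only the agreed features. Unknown flags are simply
# ignored, so old clients and new servers interoperate without a version bump.
print(sorted(agreed))
```

The key property is that neither side ever has to recognize the other's full feature list, which is exactly how SMTP extensions and TLS extensions have stayed backwards compatible for decades.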
- "Pages that don't conform with the specification won't be rendered" - this is simply never going to happen. The history of software development is littered with workarounds for other people's implementations of specifications. Your client can try to render strictly, but it will inevitably break on someone's implementation, and you will be forced to deal with it or lose your customers/users.
"Having a strict grammar will likely cause humans to migrate to a language that is easy to write and is more forgiving ... The objective is that parsers can be simplified and the cost of creating tools that can manipulate the content is lowered" - This sounds like you're saying: parsing is hard, so let's make users work around our inability to solve hard problems. Easy is not always better.
- "Resistance to standard capture" - I think this goes back to the layers. Remember, you are building an entire Application Platform. Think about Linux and open source: how do they resist capture? Independent organizations and authors, loose associations, cobbled-together components. Nobody is in control, so you can't capture it. The same is true of network protocols (other than HTTP; we all know Google controls that spec). We can take ideas from many places. As one example, MCP is a simple yet powerful way for independent entities to add functionality to an application, both locally and remotely, while remaining independent of both the client and the server. Another is Plan 9, where you can support anything in the world and use it as a file (both locally and remotely), as long as you write and run a driver for it.
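To show the shape of that pattern, here's a sketch of a JSON-RPC-style capability handshake in the spirit of MCP's initialize exchange. The method and field names are invented for illustration; this is not MCP's actual wire format:

```python
import json

def make_request(req_id, client_caps):
    # The client declares what it can do, nothing about how it is implemented.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "initialize",
        "params": {"capabilities": client_caps},
    })

def handle_request(raw, server_caps):
    # The server answers with its own capabilities. Each side depends only on
    # the declared capability sets, never on the other's implementation.
    req = json.loads(raw)
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"capabilities": server_caps},
    })

resp = json.loads(handle_request(make_request(1, ["render.text"]),
                                 ["render.text", "files.read"]))
print(resp["result"]["capabilities"])
```

Because the contract is a declared capability set over a dumb transport, any independent party can write a client or a server, which is exactly the capture resistance we want.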
- "Text first" - You just lost the room. If you want text only, stick to Gopher. An application platform requires multimedia. You would do well to craft the spec so that it can convert application presentation into a text structure. Sell it as accessibility.
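One way that conversion could work: model presentation as a role-labeled tree that any client can walk, whether it draws widgets or emits text. A hypothetical sketch, with invented node types and role names:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str                      # e.g. "button", "heading", "text"
    label: str = ""
    children: list = field(default_factory=list)

def to_text(node, depth=0):
    # Render the presentation tree as an indented outline. A GUI client would
    # walk the same tree and draw widgets; a screen reader would speak it.
    line = "  " * depth + f"[{node.role}] {node.label}".rstrip()
    return "\n".join([line] + [to_text(c, depth + 1) for c in node.children])

app = Node("window", "Checkout", [
    Node("heading", "Your cart"),
    Node("button", "Pay now"),
])
print(to_text(app))
```

If the spec mandates that every application exposes such a tree, "text first" falls out for free as an accessibility guarantee rather than a restriction on multimedia.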
- "No scripting" - Now your proposal is dead. Again, Application Platform!! People want a way to cheaply deliver and run application code in real time. I think this needs a lot of careful attention, because you don't want to continue the status quo of requiring a single monolith to interpret and execute logic for the entire application platform.