Some sort of hashing plus incremental serial-versioning standard for HTTP servers would let clients hit a server for incremental updates, allow clean access to content with rate limits, and even keep up with live feeds, chats, and so on.
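A minimal sketch of what that could look like, assuming a hypothetical append-only resource with a serial counter and a content digest (the class and field names here are made up for illustration, not any existing protocol): the client reports the last serial it saw, and the server returns only newer entries plus a hash the client can verify against.

```python
import hashlib

class VersionedResource:
    """Hypothetical server-side resource: an append-only log with a serial
    version number and a verifiable digest over the whole log."""

    def __init__(self):
        self.serial = 0
        self.entries = []  # append-only list of (serial, chunk)

    def publish(self, chunk: bytes):
        self.serial += 1
        self.entries.append((self.serial, chunk))

    def digest(self) -> str:
        # Hash over the full log so any client can check it converged
        # to the same state the origin has.
        h = hashlib.sha256()
        for serial, chunk in self.entries:
            h.update(serial.to_bytes(8, "big"))
            h.update(chunk)
        return h.hexdigest()

    def updates_since(self, client_serial: int) -> dict:
        # Incremental response: only the entries the client is missing,
        # plus the current serial and digest as a source of truth.
        new = [(s, c) for s, c in self.entries if s > client_serial]
        return {"serial": self.serial, "digest": self.digest(), "entries": new}

# Usage: a client stuck at serial 1 fetches only entries 2 and 3.
res = VersionedResource()
for post in [b"first post", b"second post", b"third post"]:
    res.publish(post)

resp = res.updates_since(1)
print(resp["serial"])        # 3
print(len(resp["entries"]))  # 2
```

This is roughly the same shape as HTTP conditional requests (ETag / If-None-Match), except the versioning is explicit and incremental rather than all-or-nothing.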
Something like this in practice breaks a lot of adtech surveillance and telemetry, makes real use of local storage, and incidentally empowers p2p sharing with a verifiable source of truth, which in turn incentivizes things like IPFS and large p2p networks.
The biggest reason we don't already have this is the exploitation of user data for monetization and intrusive modeling.
It's easy to build proof-of-concept instances of something like this, and other technologies already make use of the same pieces, but we'd need widespread adoption and implementation across the web. A common standard solves the coordination problem and allows useful throttling to shut out bad traffic while still enabling public, open access to content.
The technical side of this is already done. Merkle trees, hashing, and cryptographic verification are solid, proven tech with standard implementations and documentation; adding the features to most web servers would be straightforward, and it would reduce load on infrastructure by a huge amount. It would also enable IPFS-style offsite sharing and distributed content: blazing fast, efficient, locally focused browsers.
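To illustrate how proven this machinery is, here's the core of a Merkle tree in plain Python (a simplified sketch, not any particular library's API): the origin publishes only a short root hash, and a peer can verify a chunk fetched from an untrusted mirror using a logarithmic-size proof path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Hash pairs of nodes upward until a single root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], index % 2 == 0))  # (hash, our node is left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof, root: bytes) -> bool:
    """Recompute the root from the leaf and proof; match means untampered."""
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

# Usage: verify chunk 2 of a 4-chunk file against the published root.
chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
root = merkle_root(chunks)
proof = merkle_proof(chunks, 2)
print(verify(b"chunk-2", proof, root))   # True
print(verify(b"tampered", proof, root))  # False
```

This is the same basic construction IPFS, git, and BitTorrent v2 rely on, which is why "distributed content with a verifiable source of truth" needs no new cryptography.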
It would force telemetry and adtech surveillance to be opt-in, but it would also sharpen the distinction between human user traffic and automated bots and scrapers.
We can't have nice things because the powers that be decided that adtech money was worth far more than efficiency, interoperability, and things like user privacy and autonomy.