It makes a lot of sense. Most large organizations are collections of independent teams, many of which don't communicate with each other beyond sending quarterly OKRs and status updates up to their VP. The E2E principle is what allows each of them to do their own thing, agnostic to what the other servers handling the request are doing, and lets higher levels of the organization reconfigure and provision the system based on the needs of the moment.
Large organizations have a well-known pattern for handling this tension between the E2E principle and the PoLP: a firewall. As per the E2E principle, this is a node in the system, usually placed near the edge, which is responsible for inspecting and sanitizing every request that enters the system. The input is untrusted external requests that may contain arbitrary binary data. The output is the particular subset of HTTP that forms valid requests for the server, sanitized to a minimal grammar and now trusted, because you reject every packet that wasn't a well-formed request for your particular service. As an added bonus, you can now collect stats on who is sending the malformed requests, which lets you do things like DDoS protection, calling their ISP, or contacting the FBI.
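The gateway pattern can be sketched in a few lines. This is a hypothetical illustration, not any real firewall's API: the grammar, endpoint names, and `admit` function are all made up for the example. The point is the shape of the logic: match against a minimal grammar, reject everything else, and keep counts on who sent the rejects.

```python
import re
from collections import Counter

# A minimal grammar for "valid requests for this particular service".
# Everything that doesn't match is rejected outright. (The endpoint
# path "api/v1/..." is a made-up example, not a real service.)
VALID_REQUEST = re.compile(rb"^(GET|POST) /(api/v1/[a-z]+)? HTTP/1\.1\r\n")

# Stats on offenders: the raw material for DDoS protection,
# ISP complaints, or an FBI report.
rejections: Counter = Counter()

def admit(source_ip: str, raw_request: bytes) -> bool:
    """Return True only for requests matching the minimal grammar."""
    if VALID_REQUEST.match(raw_request):
        return True
    rejections[source_ip] += 1  # track who sends malformed traffic
    return False
```

The key design choice is that the output side is *trusted by construction*: nothing downstream ever sees a byte sequence outside the grammar, so the independent teams behind the firewall don't each have to re-validate.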
The article even admits this: the right solution to untrusted headers is to strip out, at the reverse proxy, everything you aren't explicitly expecting. If you didn't know True-Client-IP exists, don't pass it on. Allowlist and block everything by default; don't blocklist and allow everything by default.
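Concretely, the allowlist approach is a one-liner. The sketch below assumes a hypothetical proxy hook that hands you the request headers as a dict before forwarding; the specific header names in the allowlist are illustrative, not a recommendation.

```python
# Headers this service explicitly expects; everything else is dropped.
ALLOWED_HEADERS = {"host", "user-agent", "accept", "content-type", "content-length"}

def filter_headers(headers: dict) -> dict:
    """Forward only allowlisted headers to the backend.

    A header you've never heard of, like True-Client-IP, never
    reaches the backend, so nothing downstream can be tricked
    into trusting it.
    """
    return {k: v for k, v in headers.items() if k.lower() in ALLOWED_HEADERS}
```

Compare the blocklist version: it would need to enumerate True-Client-IP, X-Forwarded-For, X-Real-IP, and every trust-carrying header anyone ever invents, and fails open on the ones you missed.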