> I like the networking perspective, but the ML perspective is such a loose analogy that it's hard to even judge.
Right. ML doesn't have to work well, because it tends to be deployed in situations where the cost of errors falls on someone other than the service provider. Hallucinations require a business model where their cost is an externality, like pollution.
With an objective goal to check the results against, such as tests, a spec, or driving without hitting anything, it's possible to do better, of course.
The Internet only works because fiber-optic bandwidth is cheap. As someone who was working on congestion in the early days, I could see that congestion in the middle of the network had no known solution; if congestion could be pushed out to the edges, there were workable strategies. And, in fact, the whole Internet would sometimes go into congestion collapse in the early 1990s, with the big peering points at MAE-EAST and MAE-WEST losing well over half their packets. What saved the Internet was cheap long-haul bandwidth and big hardware-supported switches, which kept congestion at the fringes.
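The edge strategies in question were along the lines of the AIMD congestion control that ended up in TCP: endpoints probe for bandwidth additively and back off multiplicatively when they see loss. Here is a minimal sketch of that behavior; the window limits and constants are illustrative, not taken from any particular TCP variant.

```python
# Minimal AIMD sketch: edge hosts grow their sending window slowly and cut it
# sharply on congestion signals, which is how congestion got pushed to the edges.
# Constants here are illustrative, not from any specific TCP implementation.

def aimd_window(loss_events, initial_window=1.0, increase=1.0, decrease=0.5,
                max_window=64.0):
    """Return the congestion window after each round trip.

    loss_events: iterable of booleans, one per RTT, True if loss was observed.
    """
    window = initial_window
    history = []
    for lost in loss_events:
        if lost:
            # Multiplicative decrease: cut the window on a congestion signal.
            window = max(1.0, window * decrease)
        else:
            # Additive increase: probe for more bandwidth each RTT.
            window = min(max_window, window + increase)
        history.append(window)
    return history


if __name__ == "__main__":
    # Ten clean round trips, one loss, then ten more clean round trips.
    print(aimd_window([False] * 10 + [True] + [False] * 10))
```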
As a corollary, will we see a recurrence of congestion in the middle as FttH sees increased adoption? It's easy to believe that 10 Gbps ought to be enough for everyone, but history tells us that people will find a way to saturate any unused bandwidth (8K video at extravagant bitrates, 1 TB video game installs, etc.).
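For a rough sense of how quickly 10 Gbps disappears, here is a back-of-the-envelope sketch; the 8K bitrate and install size are assumptions for illustration, not measured figures.

```python
# Back-of-the-envelope numbers for saturating a 10 Gbps access link.
# The stream bitrate and install size below are assumed, illustrative values.

link_gbps = 10
stream_mbps = 100          # assumed bitrate for a high-quality 8K stream
install_tb = 1             # assumed size of a large game install

streams = (link_gbps * 1000) / stream_mbps
minutes = (install_tb * 8e12) / (link_gbps * 1e9) / 60

print(f"{streams:.0f} concurrent 8K streams fill the link")
print(f"a {install_tb} TB install takes about {minutes:.1f} minutes at line rate")
```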