One interesting point is that the original PageRank algorithm greatly benefited from the fact that we basically only had "text matching" search before Google (my memory is that AltaVista was the main option at the time).
Because text matching was such a clumsy way to search, whenever you went to a site, it would often have a "web of trust" at the bottom, where an actual human being had curated a list of other sites you might like if you liked this one.
So you would typically search with keywords (often literal strings), find a first site, then recursively explore its web-of-trust links to find the best site.
My suspicion has always been that Google (PageRank) benefited greatly from that human-curated "web of trust" at the bottom of pages. But once Google came out, search was much better, and so human beings stopped creating "web of trust" sections on their sites.
I am making the point that Google effectively benefited from the enormous amount of human labor put into connecting sites via WOT links, while simultaneously (inadvertently) destroying the incentive to curate a WOT at all. By succeeding at what they did, they made it much harder for a Google #2 to come along and run the same game plan, even with the exact same algorithm.
tl;dr: Google harvested the links that were originally curated by human labor; the incentive to create those links is gone now, so the only remaining "links" between things live in the Google index.
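To make the mechanism concrete, here is a toy power-iteration sketch of PageRank. The four-page graph and iteration count are invented for illustration, and the real algorithm also handles dangling pages and runs on sparse matrices at web scale; 0.85 is the damping factor from the original paper. The point is that rank flows along exactly the links humans curated into those footers:

```python
# Toy PageRank via power iteration (illustrative graph; simplifications:
# dense dicts, no dangling-page handling).
DAMPING = 0.85  # damping factor from the original PageRank paper
ITERATIONS = 50

# links[page] = pages that `page` links to, e.g. a human-curated
# "web of trust" footer pointing at related sites.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(ITERATIONS):
    new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# "c" ends up on top: it has the most human-curated inbound links.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

Remove those curated footer links from the graph and the algorithm has nothing to iterate over, which is the whole point above.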
Addendum: I asked Claude to help me think of a metaphor, and I really liked this one because it maps so closely:
``` "The railroad and the wagon trails"
Before railroads, collective human use created and maintained wagon trails through difficult terrain. The railroad company could survey these trails to find optimal routes. Once the railroad exists, the wagon trails fall into disuse and the pathfinding knowledge atrophies. A second railroad can't follow trails that are now overgrown. ```
> I am making the point that Google effectively benefited from the large amount of human labor...
This is exactly right, but the thing most people miss is that, to this day, Google has been using human intelligence at massive scale to improve its search results.
Basically, as people search and navigate the results, Google harvests their clicks, hovers, dwell time, and other browsing behavior to extract critical signals that help it "learn" which pages users actually found useful for a given query. (Oversimplified: click a result but bounce back within a minute to try the next link -> downrank; spend a long time on that page -> uprank.)
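As a rough illustration of that parenthetical, here is a minimal sketch of such a per-click signal. The field names, the 60-second threshold, and the +/-1 weights are all invented for illustration; Google's actual interaction models are proprietary and far more sophisticated:

```python
from dataclasses import dataclass

SHORT_CLICK_SECONDS = 60  # "bounce back within a minute"

@dataclass
class Interaction:
    query: str
    url: str
    dwell_seconds: float       # time spent on the result page
    returned_to_results: bool  # did the user come back and try another link?

def rank_nudge(event: Interaction) -> float:
    """Per-click adjustment for a (query, url) pair: positive = uprank."""
    if event.returned_to_results and event.dwell_seconds < SHORT_CLICK_SECONDS:
        # "Pogo-sticking": a quick bounce suggests the page didn't satisfy.
        return -1.0
    # A long dwell, or the last click of the session, suggests it did.
    return 1.0

# Aggregated over billions of events per (query, url) pair, these tiny
# nudges become a "clicks of trust" ranking signal.
print(rank_nudge(Interaction("best ssd", "https://example.com/a", 12.0, True)))    # -1.0
print(rank_nudge(Interaction("best ssd", "https://example.com/b", 240.0, False)))  # 1.0
```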
This helps it rank results better and improve search overall, which keeps people coming back and keeps competitors out. It's the web of trust all over again, except now it's clicks of trust, it's visible only to Google, and it's a never-ending, self-reinforcing flywheel!
And if you look at the infrastructure Google has built to harvest this data, it is so much bigger than the massive index! They harvest data through Chrome, ad tracking, Android, Google Analytics, cookies (for which they built Gmail!), YouTube, Maps and so much more.
So to compete with Google Search, you don't just need a massive index; you also need an extensive web infrastructure footprint to harvest user interactions at massive scale, which means the most popular and most widely deployed browser, mobile OS, ad tracking, analytics script, email provider, maps, etc.
This also explains why Google spent so many billions in "traffic acquisition costs" (i.e. payments for being the default search engine) every year: those defaults were a direct driver of both 1) ad revenue and 2) maintaining its search quality.
This wasn't really a secret, but it (rightfully) turned out to be a major point in the recent antitrust trial, which is why the proposed remedies (which TFA mentions) include sharing the search index and "interaction data."