The challenge with scaling a URL _shortener_ is that multiple URLs might end up with the same short URL. Avoiding that means deliberately designing coordination across your set of machines, which introduces a coordination framework, a DB, ID prefixes, and all your favorite answers to the interview question du jour of the late 2010s.
With a URL _lengthener_, though, you don't need any of that. The sheer number of possible outputs means the odds of ever generating the same one twice are infinitesimally tiny.
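To put rough numbers on "infinitesimally tiny" (my arithmetic, not anything from the thread): with an output space of N possibilities and n URLs generated, the standard birthday approximation gives a collision probability of roughly n²/(2N). A quick sketch:

```python
# Birthday-problem estimate for a random "lengthener".
# The 40-character base62 output length is an illustrative assumption.

n = 10**12   # one trillion generated URLs
N = 62**40   # 40 random characters from [0-9a-zA-Z]

p_collision = n * n / (2 * N)  # birthday approximation: n^2 / (2N)
print(f"{p_collision:.3e}")    # ~ 1e-48 -- infinitesimally tiny indeed
```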
That's only a challenge for a URL shortener if it's going to need to scale to an enormous number of users. I think that's a good example of one of those "leave it for when you can't just make your one machine more powerful or optimise your code more" situations.
There are some challenges in having multiple machines storing the URL lookups. But those all apply to both shorteners and lengtheners.
The only issue unique to shorteners is avoiding collisions, but giving each machine a different ID range can be done super easily by hand. No frameworks.
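A minimal sketch of that hand-assigned-range idea (the range sizes and base62 alphabet here are my own illustrative choices, not anything from the thread):

```python
import string

ALPHABET = string.digits + string.ascii_letters  # 62 characters

def encode_base62(n: int) -> str:
    """Encode a non-negative integer as a base62 short code."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

class ShortCodeAllocator:
    """Each machine hands out IDs from its own pre-assigned range,
    so no two machines can ever mint the same short code."""

    def __init__(self, range_start: int, range_end: int):
        self.next_id = range_start
        self.range_end = range_end

    def allocate(self) -> str:
        if self.next_id >= self.range_end:
            raise RuntimeError("range exhausted; assign this machine a new one")
        code = encode_base62(self.next_id)
        self.next_id += 1
        return code

# e.g. machine 0 gets [0, 10**9), machine 1 gets [10**9, 2 * 10**9), ...
machine_0 = ShortCodeAllocator(0, 10**9)
machine_1 = ShortCodeAllocator(10**9, 2 * 10**9)
print(machine_0.allocate(), machine_1.allocate())  # disjoint by construction
```

The ranges are assigned once, by hand, when a machine is provisioned; no runtime coordination is needed until a machine exhausts its range.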
That's a good point. If I ever get that interview question, I will push back and say we should build a URL lengthener instead.
But how are they going to handle the scale of the URLs, like, emotionally?
With a lengthener, you can make it completely deterministic with zero collisions, so you don't even have to store any state.
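One way to read "deterministic with zero state" (my interpretation, not necessarily what the commenter meant): make the long URL a reversible encoding of the original, so resolving it back is pure decoding with no lookup table. A sketch using base32 and a hypothetical domain:

```python
import base64

PREFIX = "https://example-lengthener.invalid/"  # hypothetical domain

def lengthen(url: str) -> str:
    # Base32 gives a long, URL-safe, deterministic encoding of the input;
    # the same URL always lengthens to the same output, so no collisions.
    encoded = base64.b32encode(url.encode("utf-8")).decode("ascii")
    return PREFIX + encoded.rstrip("=")  # padding is recoverable from length

def unlengthen(long_url: str) -> str:
    encoded = long_url[len(PREFIX):]
    encoded += "=" * (-len(encoded) % 8)  # restore base32 padding
    return base64.b32decode(encoded).decode("utf-8")

original = "https://news.ycombinator.com"
long_url = lengthen(original)
assert unlengthen(long_url) == original  # round-trips with no DB at all
```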