It's almost like scientists are doing something more than a random search over language.
This search is random in the same way that AlphaGo's move selection was random.
In the Monte Carlo Tree Search part, the outcome distribution at the leaves is informed by a neural network trained on data, instead of by a so-called random playout. Sure, part of the algorithm does invoke a random() function, but by no means is the result akin to the flip of a coin.
There is indeed randomness in the process, but making it sound like a random walk is doing a disservice to nuance.
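For anyone who hasn't looked at the details, here is a rough toy sketch of the distinction (this is not AlphaGo's actual code; the state representation and fake_value_net are made-up stand-ins for a real game and a trained network):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state            # toy "position": just an integer
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value_sum = 0.0

    def ucb(self, c=1.4):
        # Unvisited nodes are explored first.
        if self.visits == 0:
            return float("inf")
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def fake_value_net(state):
    # Stand-in for a learned value function: deterministic estimate of the state.
    return math.tanh(state / 10.0)

def random_playout(state, depth=20):
    # "Classic" MCTS leaf evaluation: play random moves and score the result.
    for _ in range(depth):
        state += random.choice([-1, 1])
    return math.tanh(state / 10.0)

def expand(node):
    for move in (-1, 1):
        node.children.append(Node(node.state + move, parent=node))

def search(root_state, simulations=200, use_value_net=True):
    root = Node(root_state)
    expand(root)
    for _ in range(simulations):
        # Selection: follow UCB scores down to a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: grow the tree once a leaf has already been visited.
        if node.visits > 0:
            expand(node)
            node = node.children[0]
        # Evaluation: learned value estimate vs. random playout.
        value = fake_value_net(node.state) if use_value_net else random_playout(node.state)
        # Backpropagation up to the root.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
    # The chosen move is the most-visited child -- hardly a coin flip.
    best = max(root.children, key=lambda n: n.visits)
    return best.state - root_state

print("chosen move:", search(0))
```

In this single-agent toy, the random() calls only influence which parts of the tree get explored (and the playout baseline, if you use it); the final move is the most-visited child, which is dominated by the value estimates.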
I feel many people are too ready to dismiss the results of LLMs as "random", and I'm afraid there is some element of seeing what one wants to see (i.e. believing LLMs are toys, because if they are not, we will lose our jobs).
I do hallucinate a better future as well.