This search is random in the same way that AlphaGo's move selection was random.
In the Monte Carlo Tree Search part, the value distribution at the leaves is informed by a neural network trained on data rather than by random playouts. Sure, part of the algorithm does invoke a random() function, but by no means is the result akin to the flip of a coin.
There is indeed randomness in the process, but making it sound like a random walk is doing a disservice to nuance.
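To make the contrast concrete, here is a toy sketch (not AlphaGo's actual code) of the difference between a random-playout leaf estimate and a learned-value leaf estimate, plus the PUCT-style selection rule that makes the tree search directed rather than a random walk. The `value_net` function is a hypothetical stand-in for a trained network.

```python
import math
import random

def random_playout_value(state, rng):
    # "Coin flip" style estimate: classic MCTS plays random moves to the
    # end of the game and returns the outcome. Toy version shown here.
    return rng.choice([-1.0, 1.0])

def value_net(state):
    # Hypothetical stand-in for a trained evaluator: deterministic and
    # informed by the state's contents, not by chance.
    return math.tanh(sum(state) / (len(state) or 1))

def puct_select(children, c_puct=1.5):
    # AlphaGo-style selection: exploit high average value Q, but explore
    # children with high network prior P and low visit count N.
    total_n = sum(ch["N"] for ch in children) + 1
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0   # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])
        return q + u
    return max(children, key=score)

# The only randomness left is in tie-breaking / playouts; the search
# itself is steered by learned priors and values.
children = [{"N": 0, "W": 0.0, "P": 0.7}, {"N": 0, "W": 0.0, "P": 0.3}]
chosen = puct_select(children)
```

With unvisited children, the selection is driven entirely by the network's prior `P`, which is exactly why calling the whole process "random" misses the point.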
I feel many people are too ready to dismiss the results of LLMs as "random", and I'm afraid there is some element of seeing what one wants to see (i.e. believing LLMs are toys, because if they are not, we will lose our jobs).
Are you a scientist?
You're right about the random search, but the domain in which the model is searching is quite different. In AlphaGo, you do MCTS over all possible Go moves, so it is a domain-specific search. Here, you're doing the search in language, whereas you would like to do the search on, say, genetic or molecular data (RNA-seq, ATAC-seq, etc.). For instance, yesterday the Arc Institute published Evo 2, with which you can actually check whether a given mutation would be pathogenic. So, starting from genetic data (among thousands of variants), you might be able to say that a particular variant is likely pathogenic for the patient, given its high variant allele frequency.
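The kind of domain-specific filtering described above could be sketched like this. Everything here is illustrative: `score_variant` is a placeholder for a sequence model such as Evo 2 (this is not its real API), and the field names and thresholds are hypothetical.

```python
def score_variant(variant):
    # Placeholder for a model score; a real tool would evaluate the
    # variant's sequence context. Here we just read a precomputed field.
    return variant.get("model_score", 0.0)

def likely_pathogenic(variants, vaf_threshold=0.3, score_threshold=0.8):
    # Keep variants that the model flags as pathogenic AND that are well
    # supported in the sample by a high variant allele frequency (VAF).
    hits = [v for v in variants
            if score_variant(v) >= score_threshold
            and v["vaf"] >= vaf_threshold]
    # Rank the survivors by model score, strongest call first.
    return sorted(hits, key=score_variant, reverse=True)

variants = [
    {"id": "A", "vaf": 0.45, "model_score": 0.90},  # flagged + supported
    {"id": "B", "vaf": 0.05, "model_score": 0.95},  # flagged, low VAF
    {"id": "C", "vaf": 0.50, "model_score": 0.20},  # supported, not flagged
]
candidates = likely_pathogenic(variants)
```

The point is that both signals are domain knowledge (pathogenicity models, allele frequencies), not anything a language-level search would supply on its own.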
On top of that, you are looking at results in cell lines, which might not reflect what would actually happen in vivo (in a mouse model or a human).
So there is domain-specific knowledge that one would like to take into account for decision-making. For me, I would trust a Molecular Tumor Board with hematologists, clinicians - and possibly computational biologists :) - over a random tree search over language for treating my acute myeloid leukemia, but this is a personal choice.