How would an LLM “know” when it isn’t sure? Its baseline for truth is competent text; it doesn’t have a baseline for truth grounded in observed reality. That’s why it can be “tricked” into things like “Mr Bean is the president of the USA”.
The answer is the same as how the messy bag of chemistry that is the human brain "knows" when it isn't sure:
Badly, and with great difficulty; it can just about be done, but even then only kinda.
Humans can just as easily be tricked. Something like 25% of the American electorate believed Obama was the antichrist.
So saying LLMs have no "baseline for truth" doesn't really mean much one way or the other; they are much smarter and more accurate than 99% of humans.
It would "know" the same way it "knows" anything else: The probability of the sequence "I don't know" would be higher than the probability of any other sequence.
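To make that concrete, here's a minimal sketch of what "the probability of the sequence" means in practice. It assumes the Hugging Face `transformers` library and GPT-2 (neither is mentioned above, they're just a convenient stand-in), and the prompt and candidate answers are made up for illustration: you score each candidate continuation by summing the log-probabilities the model assigns to its tokens, and the model "knows" it doesn't know if "I don't know" scores highest.

```python
# Sketch: score candidate continuations by their sequence log-probability.
# Assumes the Hugging Face `transformers` library and the GPT-2 checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Each continuation token is predicted from the position just before it.
    for i in range(prompt_ids.shape[1], input_ids.shape[1]):
        total += log_probs[0, i - 1, input_ids[0, i]].item()
    return total

# Hypothetical example: compare the model's confidence in two answers.
prompt = "Q: Who is the current president of Atlantis?\nA:"
for candidate in [" I don't know.", " It is Mr Bean."]:
    print(candidate, sequence_logprob(prompt, candidate))
```

Whether the model actually does assign the higher score to "I don't know" in cases like this is exactly the open question being argued about here; the sketch only shows what the claim would mean mechanically.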