Hacker News

drob518 · yesterday at 7:47 PM · 3 replies

How do you know what the probability is?


Replies

pama · yesterday at 8:28 PM

LLM inference is built upon a probability function over every possible next token, given a stream of input tokens. If you serve the model yourself you can get the log prob for each next token, so you just add up a bunch of numbers to get the log probability of a sequence. Many APIs also provide these probabilities as additional outputs.
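A minimal sketch of the arithmetic described above, assuming you have the model's raw logits (or per-token log probs from an API's logprobs output); the numbers here are made up for illustration:

```python
import math

def log_softmax(logits):
    """Convert raw logits over the vocabulary into log probabilities
    (numerically stable: subtract the max before exponentiating)."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def sequence_log_prob(token_log_probs):
    """Sum per-token log probs: a product of probabilities
    becomes a sum in log space."""
    return sum(token_log_probs)

# Hypothetical per-token log probs for a 4-token sequence:
log_probs = [-0.12, -1.35, -0.04, -2.10]
seq_lp = sequence_log_prob(log_probs)
seq_p = math.exp(seq_lp)  # back to an actual probability
print(seq_lp, seq_p)
```

Summing in log space avoids underflow: multiplying many small probabilities directly would quickly round to zero in floating point.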

Lionga · yesterday at 7:50 PM

just ask Claude, Claude will never lie (add "make no mistakes" and it's 100%)
