Hacker News

tartoran · yesterday at 10:09 PM · 1 reply

Honestly, I don't think it's optimized for that (yet), though it's tempting to keep churning out lots and lots of new features. The issue with LLMs is that they can't act deterministically and are hard to tame; the tendency to burn tokens isn't an optimization done on purpose but a side effect of how LLMs behave given the data they've been trained on.


Replies

ysleepy · yesterday at 11:27 PM

Set temperature=0 and it is (pretty much) deterministic.

But I assume you mean predictable in the sense of reacting similarly to similar inputs.
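To illustrate the temperature=0 point: here's a toy sketch (not any real inference stack; `sample_token` and the logits are made up) of why greedy decoding at temperature 0 always returns the same token for the same logits, while any temperature above 0 samples from a distribution.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    # Temperature 0 means greedy decoding: always pick the argmax,
    # so identical logits always yield the identical token.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise scale the logits by the temperature and sample
    # from the resulting softmax distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [1.2, 3.4, 0.5]
# At temperature 0, a hundred draws all land on the same index.
greedy_picks = {sample_token(logits, temperature=0) for _ in range(100)}
print(greedy_picks)  # → {1}
```

Of course, this only shows the sampling step; in practice a real serving stack can still be slightly non-deterministic for other reasons (e.g. floating-point reduction order on GPUs), which is presumably why the parent says "pretty much".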
