Hacker News

hansmayer today at 9:53 AM

You know that when A. Karpathy released NanoLLM (or whatever it was called), he said it was mainly coded by hand, as the LLMs were not helpful because "the training dataset was way off". So yeah, your argument actually "reinforces" my point.


Replies

andy12_ today at 10:03 AM

No, your opinion is wrong: the reason some models don't seem to have a "strong opinion" on anything is not that they predict words based on how similar they are to other sentences in the training data. It's most likely related to how the model was trained with reinforcement learning, and more specifically to recent efforts by OpenAI to reduce hallucination rates by penalizing guessing under uncertainty[1].

[1] https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4a...
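To make "penalizing guessing under uncertainty" concrete, here is a minimal sketch in Python (illustrative numbers, not OpenAI's actual reward scheme): once a wrong answer costs more than an abstention, a model that maximizes expected score should decline to answer whenever its confidence falls below a threshold, which can read as the model having no "strong opinion".

    # Illustrative sketch, not OpenAI's actual setup: expected score of
    # "guess" vs. "abstain" under two grading schemes, as a function of the
    # model's probability p of being correct.
    #
    # Binary grading:    correct = 1, wrong =  0, abstain = 0 -> guessing never hurts
    # Penalized grading: correct = 1, wrong = -2, abstain = 0 -> guessing only pays
    #                    off when p > 2/3

    def expected_score(p: float, wrong_penalty: float) -> float:
        """Expected score of answering when the answer is correct with probability p."""
        return p * 1.0 - (1.0 - p) * wrong_penalty

    for p in (0.2, 0.5, 0.7, 0.9):
        binary = expected_score(p, wrong_penalty=0.0)     # accuracy-style reward
        penalized = expected_score(p, wrong_penalty=2.0)  # wrong answers cost more than abstaining
        best = "guess" if penalized > 0 else "abstain"    # abstaining always scores 0 here
        print(f"p={p:.1f}  binary={binary:+.2f}  penalized={penalized:+.2f}  -> {best}")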
