Hacker News

binarymax · 01/18/2025 · 2 replies

LLMs don’t learn. They’re static. You could try to fine-tune, or keep adding longer and longer context, but in the end you hit a wall.


Replies

ErikBjare · 01/19/2025

You can provide them a significant amount of guidance through prompting. The model itself won't "learn", but if you give it lessons in the prompt, accumulated from its past mistakes, it can follow them. You will always hit a wall "in the end", but you can get pretty far!
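
A minimal sketch of what "accumulating lessons in the prompt" could look like. The lessons file and the call_llm client are hypothetical placeholders for whatever storage and model API you actually use:

    # Keep a running list of "lessons" learned from past mistakes
    # and prepend them to the system prompt on every call.
    import json
    from pathlib import Path

    LESSONS_FILE = Path("lessons.json")  # hypothetical storage location

    def load_lessons() -> list[str]:
        if LESSONS_FILE.exists():
            return json.loads(LESSONS_FILE.read_text())
        return []

    def add_lesson(lesson: str) -> None:
        # Record a new lesson after the model makes a mistake.
        lessons = load_lessons()
        lessons.append(lesson)
        LESSONS_FILE.write_text(json.dumps(lessons, indent=2))

    def build_prompt(task: str) -> list[dict]:
        # Fold all accumulated lessons into the system message.
        lessons = load_lessons()
        system = "You are a coding assistant."
        if lessons:
            system += "\n\nLessons from previous mistakes:\n" + "\n".join(
                f"- {lesson}" for lesson in lessons
            )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": task},
        ]

    # messages = build_prompt("Refactor this function...")
    # reply = call_llm(messages)  # call_llm stands in for your model client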

mmooss · 01/18/2025

But you can learn how to work with one.