Hacker News

ErikBjare · 01/19/2025 · 0 replies · view on HN

You can give them a significant amount of guidance through prompting. The model itself won't "learn", but it can follow lessons given in the prompt, and you can accumulate those lessons from its past mistakes. You will always hit a wall "in the end", but you can get pretty far! A rough sketch of that pattern follows.
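A minimal sketch of the idea, under some assumptions: `call_llm` is a hypothetical stand-in for whatever chat-completion client you actually use, and the lessons are kept in a plain list that gets folded into the system prompt on every request.

```python
# Sketch: accumulate "lessons" from observed mistakes and feed them back
# as guidance in the prompt. The model never changes; only the prompt does.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical wrapper around your LLM provider's chat API."""
    raise NotImplementedError

lessons: list[str] = []  # corrections gathered from earlier mistakes


def build_system_prompt() -> str:
    prompt = "You are a careful coding assistant."
    if lessons:
        prompt += "\n\nLessons from previous mistakes; follow them:\n"
        prompt += "\n".join(f"- {lesson}" for lesson in lessons)
    return prompt


def ask(task: str) -> str:
    return call_llm(build_system_prompt(), task)


# When the model gets something wrong, record the correction so every
# later request carries it forward.
lessons.append("Always pin dependency versions in requirements.txt.")
```

Whether this scales is another question (the wall mentioned above is the context window and the model's ability to actually honor a long list of rules), but for a bounded set of recurring mistakes it works reasonably well.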