Hacker News

tootyskooty | yesterday at 12:54 AM

See the pretraining section of prerelease_notes.md:

https://github.com/DGoettlich/history-llms/blob/main/ranke-4...


Replies

pests | yesterday at 1:41 AM

I was curious: they train a base model on data up to 1900, then fine-tune it to the exact cutoff year:

"To keep training expenses down, we train one checkpoint on data up to 1900, then continuously pretrain further checkpoints on 20B tokens of data 1900-${cutoff}$. "