The issue is that there is very little text from before the internet, so there aren't enough historical tokens to train a really big model.
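To put rough numbers on that, here is a minimal back-of-envelope sketch. It assumes the Chinchilla-style heuristic of roughly 20 training tokens per parameter (Hoffmann et al., 2022); the pre-internet corpus size is an illustrative placeholder, not a measured count.

```python
# Back-of-envelope: compute-optimal token budget vs. a pre-internet corpus.
# Assumes the Chinchilla ~20 tokens-per-parameter heuristic (Hoffmann et al. 2022).
params = 4e9                   # the 4B-parameter model discussed in this thread
tokens_wanted = 20 * params    # ~80B tokens for a compute-optimal training run

# Hypothetical placeholder, NOT a measured figure: suppose the usable
# digitized pre-internet text comes to a few billion tokens.
assumed_corpus_tokens = 5e9

print(f"tokens wanted:    {tokens_wanted:.0e}")
print(f"assumed corpus:   {assumed_corpus_tokens:.0e}")
print(f"shortfall factor: {tokens_wanted / assumed_corpus_tokens:.0f}x")
```

Under those assumptions the corpus falls short by an order of magnitude, which is the sense in which "not enough historical tokens" constrains model size.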
I think not everyone in this thread understands that. Someone wrote "It's a time machine," and followed up with "Imagine having a conversation with Aristotle."
> there is very little text from before the internet,
Hm, there is a lot of text from before the internet, but most of it is not on the internet. There is a weird gap in some circles because of that: people are rediscovering work from pre-1980s researchers that only exists in books that have never been reprinted and that virtually no one knows about.
And it's a 4B model. I worry that nontechnical users will dramatically overestimate its accuracy and underestimate its hallucinations, which makes me wonder how useful it could really be for academic research.