> according to these benchmarks
Come onnnnnn, when someone releases something and claims it’s an “infinite speed up” or “better than the best despite being 1/10th the size!”, do your skepticism alarm bells not ring at all?
You can’t wave a magic wand and make an 8b model that good.
I’ll eat my hat if it turns out the 8b model is anything more than slightly better than the current crop of 8b models.
You cannot, no matter hoowwwwww much people want it to. be. true, take more data and the same architecture and suddenly have a Sonnet-class 8b model.
> like an insane transfer of capabilities to a relatively tiny model
It certainly does.
…but it probably reflects the meaninglessness of the benchmarks, not how good the model is.
It’s somewhere in between, really. This is a rapidly advancing space, so to some degree it’s expected that new bars get set every few months.
There’s also a lot of work right now showing that small models can significantly improve their outputs by running inference multiple times[1], which is effectively what this model is doing. Even a small model can produce better outputs if you push more compute through it at inference time.
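The simplest version of that idea is just best-of-N with majority voting (sometimes called self-consistency): sample the model several times at nonzero temperature and keep the most common answer. Rough sketch below, not the linked post’s exact method; `sample_answer` is obviously a stand-in for a real model call:

```python
import random
from collections import Counter

def sample_answer(prompt: str) -> str:
    """Placeholder for one stochastic model call (temperature > 0).
    Simulates a model that answers correctly ~60% of the time."""
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def self_consistency(prompt: str, n: int = 16) -> str:
    """Sample n candidate answers and return the most common one.
    More inference passes -> more compute -> a better final answer,
    without changing the model at all."""
    votes = Counter(sample_answer(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))
```

Going from 1 to 16 samples costs 16x the inference compute, but on tasks with checkable answers the majority vote is right far more often than a single sample; that’s the whole “more compute through a small model” trade.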
I get the benchmark fatigue, and it’s merited to some degree. But in spite of that, models have gotten significantly better over the last year, and continue to do so. In some sense, really good models should be hard to evaluate; that difficulty is itself an indicator of progress.
[1] https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling...