Regarding the specific use case, I was thinking this: I had Gemma 4 (a small but highly capable offline model released by Google) make a public-domain, CC0-licensed encyclopedia of some core science and technology concepts[1]. I thought it was pretty good.
Separately, I've fine-tuned the Gemma 4 model[2]; it was very quick (just 90 seconds), so I think it could be interesting to train it to talk like the 1911 Encyclopedia Britannica.
I would use the entries as training data and fine-tune it to write in the same style. There isn't a specific use case; I just think it would be interesting. For example, I could see how it writes about modern concepts in the style of the 1911 Britannica.
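For what it's worth, one way to do the training step would be a LoRA pass with Hugging Face's TRL and PEFT libraries. This is just a sketch, not what the video in [2] actually uses: the model id is a placeholder for whichever Gemma checkpoint you run, the hyperparameters are illustrative, and it assumes the entries have already been scraped into plain-text strings.

    from datasets import Dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Hypothetical: encyclopedia entries scraped into one plain string each.
    entries = [
        "ABACUS, a calculating instrument consisting of ...",
        # ... one string per entry
    ]
    dataset = Dataset.from_dict({"text": entries})

    trainer = SFTTrainer(
        model="google/gemma-3-1b-it",  # placeholder id; swap in the checkpoint you used
        train_dataset=dataset,          # SFTTrainer reads the "text" column by default
        args=SFTConfig(output_dir="gemma-1911", max_steps=100),
        peft_config=LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
    )
    trainer.train()
    trainer.save_model("gemma-1911")

A LoRA adapter keeps the trainable parameter count tiny, which is presumably what makes fine-tunes this quick feasible on modest hardware.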
[1] https://stateofutopia.com/encyclopedia/
[2] To talk like a pirate! https://www.youtube.com/live/WuCxWJhrkIM
That’s a fun idea — I can see the appeal of that style.
The underlying text is public domain, but the structured version here is something I put together for the site. I haven’t released a bulk dataset yet.
If you end up experimenting with it, I’d love to hear how it turns out — and I’m still figuring out what structured access might look like.