dlcarrier · yesterday at 5:22 PM

It's interesting that it's trained only on historical text.

Back in the pre-LLM days, someone trained a Markov chain off the King James Bible and a programming book: https://www.tumblr.com/kingjamesprogramming

I'd love to see an LLM equivalent, but I don't think that's enough data to train from scratch. Could a LoRA or similar adapter be used to make a model's output style strictly follow a few megabytes of training data?
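The reason a few megabytes might plausibly suffice is that LoRA doesn't retrain the model: it freezes the pretrained weight matrix W and learns only a low-rank update BA, so the trainable parameter count shrinks by orders of magnitude. A plain-NumPy sketch of just that idea (the dimensions and rank here are made up for illustration; in practice you'd use something like Hugging Face's PEFT library on a real model):

```python
import numpy as np

d, r = 1024, 8  # hidden size and LoRA rank (hypothetical values)

rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))          # frozen pretrained weight, never updated
A = rng.normal(size=(r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # zero-initialized so W' == W before training

def adapted_forward(x):
    # Adapted layer computes W'x = Wx + B(Ax); only A and B get gradients
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d,))
assert np.allclose(adapted_forward(x), W @ x)  # a no-op until fine-tuning starts

full_params = d * d        # 1,048,576 weights in the frozen matrix
lora_params = 2 * d * r    # 16,384 trainable weights: a 64x reduction
```

Whether that's enough to make the style *strictly* follow a small corpus is a separate question; low-rank adapters nudge style well, but hard stylistic guarantees aren't something the method promises.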