Hacker News

I fed 24 years of my blog posts to a Markov model

95 points | by zdw | yesterday at 8:19 PM | 30 comments

Comments

vunderba | yesterday at 9:41 PM

I did something similar many years ago. I fed about half a million words (two decades of mostly fantasy and science fiction writing) into a Markov model that could generate text using a “gram slider” ranging from 2-grams to 5-grams.
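An order-n Markov generator with an adjustable "gram slider" can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual code; the function names and the toy corpus are my own.

```python
import random
from collections import defaultdict

def build_model(words, n):
    """Map each (n-1)-word context to the list of words that follow it."""
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        context = tuple(words[i:i + n - 1])
        model[context].append(words[i + n - 1])
    return model

def generate(model, length, seed=None):
    """Random-walk the chain; jump to a fresh context on dead ends."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    out = list(context)
    for _ in range(length - len(out)):
        followers = model.get(tuple(out[-len(context):]))
        if not followers:  # dead end: restart from a random context
            context = rng.choice(list(model))
            out.extend(context)
            continue
        out.append(rng.choice(followers))
    return " ".join(out)

# The "gram slider" is just n: 2-grams use 1 word of context, 5-grams use 4.
corpus = "the quick brown fox jumps over the lazy dog".split()
model = build_model(corpus, n=3)
print(generate(model, length=8, seed=1))
```

Sliding n up makes output more coherent but more likely to quote the corpus verbatim; sliding it down makes it stranger, which is arguably the point of a "dream well".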

I used it as a kind of “dream well” whenever I wanted to draw some muse from the same deep spring. It felt like a spiritual successor to what I used to do as a kid: flipping to a random page in an old 1950s Funk & Wagnalls dictionary and using whatever I found there as a writing seed.

hilti | yesterday at 11:30 PM

First of all: thank you for giving.

Giving us 24 years of your experience, your thoughts, and your lifetime.

That is special in these times of baiting and consuming only.

lacunary | yesterday at 9:40 PM

I recall a Markov chain bot on IRC in the mid-2000s. I didn't see anything better until GPT came along!

hexnuts | yesterday at 11:37 PM

I just realized one of the things people might start doing is making a gamma model of their personality. It won't even approach who they were as a person, but it would give their descendants (or bored researchers) a 60% approximation of who they were and their views. (The 60% is pulled from nowhere to justify my "gamma" designation, since as far as I'm aware there isn't a good scale for the quality of an LLM personality mirror.)

swyx | yesterday at 9:10 PM

now i wonder if you can compare vs feeding into a GPT-style transformer of a similar order of magnitude in param count..

anthk | yesterday at 11:10 PM

Megahal/Hailo (cpanm -n hailo for Perl users) can still be fun too.

Usage:

      hailo -t corpus.txt -b brain.brn

Where "corpus.txt" should be a file with one sentence per line; easy to do with sed/awk/perl.

      hailo -b brain.brn

This spawns the chatbot with your trained brain.

By default Hailo chooses the simple engine. If you want something more "realistic", pick the advanced one listed in 'perldoc hailo' and select it with the -e flag.

atum47 | yesterday at 9:32 PM

I usually have these hypothetical technical discussions with ChatGPT (I can share if you like), asking it things like: aren't LLMs just huge Markov chains?! And now I see your project... funny.
