Hacker News

spudlyo · yesterday at 6:56 PM

I'm curious how the information is structured under the hood. I just recently learned about how folks in the digital humanities use the XML-TEI format for semantic markup of works like this. I've recently been exploring the Latin-English Lewis & Short dictionary encoded in XML-TEI.

I've had a ton of fun playing with BaseX and learning XQuery to ask questions like "Which classical authors are responsible for words that appear only once in the entire corpus (hapax legomena)?" or "What are the longest hapax words?" (usually the funniest ones), and that kind of thing. Shout out to Tufts University for making this available!
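[The hapax-legomena query described above would normally be an XQuery over the TEI corpus in BaseX; as a rough illustration of the same idea, here is a toy Python sketch where the tokenized corpus is a made-up stand-in, not the actual Lewis & Short data.]

```python
from collections import Counter

def hapax_legomena(corpus):
    """Given {author: [tokens]}, return {author: sorted words that
    occur exactly once across the whole corpus}."""
    # Count every token across all authors combined.
    counts = Counter(tok for toks in corpus.values() for tok in toks)
    hapaxes = {word for word, n in counts.items() if n == 1}
    return {
        author: sorted(w for w in toks if w in hapaxes)
        for author, toks in corpus.items()
    }

# Hypothetical mini-corpus for demonstration only.
corpus = {
    "Cicero": ["res", "publica", "senatus", "res"],
    "Vergilius": ["arma", "virumque", "publica"],
}
print(hapax_legomena(corpus))
# → {'Cicero': ['senatus'], 'Vergilius': ['arma', 'virumque']}
```

[The real workflow would extract tokens from the TEI XML first, e.g. with an XPath query over the citation elements, before counting.]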

I would love to be able to load the 1911 Britannica into BaseX and see what interesting things I could learn about it via XQuery!


Replies

ahaspel · yesterday at 7:05 PM

Under the hood it’s not XML-TEI — it’s a relational/data-pipeline approach, with article boundaries, sections, contributors, cross-references, and source-page provenance all reconstructed into structured records. The text itself is public domain, but I haven’t released a bulk structured export yet.

Requests for dataset access have definitely been one of the themes of this thread, and I'm taking that seriously. If I do expose it, I'd want to do so in a form that preserves the structure and doesn't just dump plain text.