A few things come to mind about AI-led projects like this. First, it's cool to see all this pulled together. I'm sure the design will read as "Claude 2026" soon enough, but that's fine: it's clean and generally has reasonable UX.
There are some real rough spots, though. For instance, the Latin texts are OCR'd directly from scanned documents (the linked sources appear to be archive.org scans) rather than taken from a vetted scholarly corpus. I only looked at a few, but all of them had significant transcription problems. AI will happily produce a fluent-sounding translation from a somewhat shitty transcription; getting it to tell you where it's gone off the rails is harder.
That isn't the main thing, though. What strikes me is that projects like this are super useful scaffolding, and I hope this one is built as such. Transcription will get better; actually, I'm pretty sure it could be better now, given the quality of the current output. Translations of better transcriptions will be better, and we'll likely have higher-quality translation tech available too.
So I'd like to see a project like this lean into the iterative side of this kind of scholarship/hobby/historical work and make versioning and logging of updates part of the interface. Starting in the late 1990s, many academic projects did this with large corpora of documents (I'm familiar with the Yale Jonathan Edwards project, at least) and used crowdsourced support; there's no reason not to include facilities here that interleave the AI with interested Latin/Roman scholars.
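To make that concrete, here's a minimal sketch of what a versioned transcription record might look like. All the names here are hypothetical (this is not how the project actually stores its texts): the idea is just that every correction, whether from the OCR pipeline, the AI, or a human scholar, is appended and attributable rather than overwritten.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    text: str
    author: str       # e.g. "ocr-pipeline", an AI model name, or a scholar's handle
    note: str         # why the change was made
    timestamp: str


@dataclass
class Passage:
    source_url: str   # e.g. the archive.org scan the transcription came from
    revisions: list[Revision] = field(default_factory=list)

    def current(self) -> str:
        # Latest revision is the accepted text.
        return self.revisions[-1].text

    def revise(self, text: str, author: str, note: str) -> None:
        # Append, never overwrite: the full history stays visible.
        self.revisions.append(Revision(
            text=text, author=author, note=note,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))


p = Passage(source_url="https://archive.org/details/example-scan")
p.revise("Gallia est omnis diuisa in partes tres", "ocr-pipeline", "initial OCR")
p.revise("Gallia est omnis divisa in partes tres", "scholar:jdoe",
         "corrected u/v transcription")
print(p.current())        # latest accepted text
print(len(p.revisions))   # full history retained
```

The point of the append-only list is that the interface can then show a diff-style changelog per passage, which is exactly the kind of facility that lets AI output and crowdsourced scholarly corrections coexist.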
With that done, this could turn into a genuinely useful tool. Which would be cool!