It's trivial to normalise the various formats, and there are a few libraries and ML models to help parse PDFs. I was tinkering around with something like this for academic papers in Zotero, and the main issues I ran into were words spilling over to the next page, and footnotes. I totally gave up on that endeavour several years ago, but the tooling has probably matured considerably since then.
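To make the page-spill problem concrete, here's a rough sketch of the kind of stitching heuristic involved. It uses PyMuPDF; the hyphen-joining rule is purely illustrative, my own guess rather than what Zotero or any particular tool actually does:

```python
# Rough sketch, assuming PyMuPDF (`pip install pymupdf`). The
# dehyphenation rule below is a hypothetical illustration: if a page's
# extracted text ends in a hyphen, glue that fragment to the first
# word of the next page.
import fitz  # PyMuPDF

def extract_stitched_text(path: str) -> str:
    doc = fitz.open(path)
    pages = [page.get_text().strip() for page in doc]
    stitched: list[str] = []
    for text in pages:
        if stitched and stitched[-1].endswith("-"):
            # A word was split across the page break: drop the hyphen
            # and merge it with the first fragment of this page.
            head, _, rest = text.partition(" ")
            stitched[-1] = stitched[-1][:-1] + head
            text = rest
        stitched.append(text)
    return "\n".join(stitched)

print(extract_stitched_text("paper.pdf"))
```

Even this much only works on clean single-column PDFs: the moment a footnote or running footer sits between the split word and the page break, the heuristic misses, which is exactly where it fell apart for me.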
As an example, all the academic paper hubs have been doing this kind of extraction for decades.
I'd wager that all of the big Gen AI companies have planned to use this exact dataset, and many of them probably have already.
> It's trivial to normalise the various formats,
Ha. Ha. ha ha ha.
As someone who has pretty broadly tried to normalize a pile of books and documents I have legitimate access to: no, it is not.
You can get good results 80% of the time, usable but messy results 18% of the time, and complete garbage the remaining 2%. More effort seems to yield only marginal improvements.