
bhaney · last Wednesday at 2:46 AM · 5 replies

Honestly I don't think it would be that costly, but it would take a pretty long time to put together. I have a (few years old) copy of Library Genesis converted to plaintext and it's around 1TB. I think libgen proper was 50-100TB at the time, so we can probably assume that AA (~1PB) would be around 10-20TB when converted to plaintext. You'd probably spend several weeks torrenting a chunk of the archive, converting everything in it to plaintext, deleting the originals, then repeating with a new chunk until you have plaintext versions of everything in the archive. Then indexing all that for full text search would take even more storage and even more time, but still perfectly doable on commodity hardware.
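For the convert-then-delete loop, a minimal sketch of one chunk's worth of work might look like this, assuming Calibre's ebook-convert CLI is installed and with made-up paths:

```python
import subprocess
from pathlib import Path

# Hypothetical layout: one torrented chunk at a time, plaintext accumulated elsewhere.
CHUNK_DIR = Path("/data/aa_chunk")
OUT_DIR = Path("/data/aa_plaintext")
OUT_DIR.mkdir(parents=True, exist_ok=True)

# Formats Calibre's ebook-convert accepts as input.
CONVERTIBLE = {".epub", ".mobi", ".azw3", ".fb2", ".pdf"}

for src in CHUNK_DIR.rglob("*"):
    if src.suffix.lower() not in CONVERTIBLE:
        continue
    dst = OUT_DIR / (src.stem + ".txt")
    try:
        # ebook-convert <input> <output> infers both formats from the file extensions.
        subprocess.run(["ebook-convert", str(src), str(dst)],
                       check=True, capture_output=True)
    except subprocess.CalledProcessError:
        continue  # keep the original around to inspect failures later
    src.unlink()  # delete the original to free space before the next chunk
```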

The main barriers are going to be reliably extracting plaintext from the myriad of formats in the archive, cleaning up the data, and selecting a decent full text search database (god help you if you pick wrong and decide you want to switch and re-index everything later).
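For that last part, SQLite's FTS5 module is one low-friction candidate (just one option among many; the schema below is only a sketch and assumes your Python build ships with FTS5 enabled):

```python
import sqlite3
from pathlib import Path

PLAINTEXT_DIR = Path("/data/aa_plaintext")  # output of the conversion step above

con = sqlite3.connect("aa_fulltext.db")
# One row per book; both columns are searchable, body holds the full text.
con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS books USING fts5(title, body)")

with con:
    for txt in PLAINTEXT_DIR.glob("*.txt"):
        con.execute("INSERT INTO books (title, body) VALUES (?, ?)",
                    (txt.stem, txt.read_text(encoding="utf-8", errors="ignore")))

# Full-text query, with snippet() showing matching context around each hit.
for title, snip in con.execute(
        "SELECT title, snippet(books, 1, '[', ']', '…', 12) "
        "FROM books WHERE books MATCH ? LIMIT 10", ("moby dick",)):
    print(title, snip)
```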


Replies

serial_dev · last Wednesday at 7:06 AM

The main barriers for me would be:

1. Why? Who would use that? What’s the problem with the other search engines? How will it be paid for?

2. Potential legal issues.

The technical barriers are at least challenging and interesting.

Providing a service that needs significant upfront investment, with no product or service vision, and that I'd likely get sued over a couple of times a year, probably losing with who knows what kind of punishment… I'll have to pass, unfortunately.

notpushkin · last Wednesday at 5:19 AM

I think there are a couple of ways to improve it:

1. There are a lot of variants of the same book, and we only need one per title for the index. Perhaps for each ISBN, select the format that's easiest to parse (see the sketch after this list).

2. We can download, convert, and index the top 100K books first, launch with those, and then keep indexing and adding the rest.
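A sketch of point 1, picking one file per ISBN by an assumed parse-friendliness ranking (the metadata records and field names here are hypothetical):

```python
from collections import defaultdict

# Assumed: a metadata dump with one record per file, carrying an ISBN,
# a file extension, and a path. These example records are made up.
records = [
    {"isbn": "9780131103627", "ext": "pdf",  "path": "kr_scan.pdf"},
    {"isbn": "9780131103627", "ext": "epub", "path": "kr_retail.epub"},
]

# Cheapest-to-parse formats first; anything unlisted sorts last.
PREFERENCE = {"txt": 0, "epub": 1, "mobi": 2, "azw3": 3, "pdf": 4, "djvu": 5}

by_isbn = defaultdict(list)
for rec in records:
    by_isbn[rec["isbn"]].append(rec)

# One file per ISBN: the variant whose format is easiest to turn into text.
chosen = {
    isbn: min(variants, key=lambda r: PREFERENCE.get(r["ext"], 99))
    for isbn, variants in by_isbn.items()
}
print(chosen["9780131103627"]["path"])  # kr_retail.epub
```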

tomthe · last Wednesday at 7:14 AM

I wonder if you could implement it with only static hosting?

We would need to split the index into a lot of smaller files that can be practically downloaded by browsers, maybe 20 MB each. The user types in a search query, the browser hashes the query and downloads the corresponding index file which contains only results for that hashed query. Then the browser sifts quickly through that file and gives you the result.
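On the build side, the sharding could be as simple as hashing each term into a fixed number of static JSON files; the browser recomputes the same hash at query time and fetches just that one file. Everything below (shard count, file layout, the toy postings) is a made-up sketch:

```python
import hashlib
import json
from collections import defaultdict
from pathlib import Path

N_SHARDS = 4096                 # tune so each shard lands near the ~20 MB target
OUT = Path("shards")
OUT.mkdir(exist_ok=True)

def shard_of(term: str) -> int:
    # The browser computes the same hash to know which shard file to fetch.
    digest = hashlib.sha1(term.lower().encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % N_SHARDS

# postings: term -> ids of books containing it (built during indexing)
postings = {"whale": [42, 1851], "harpoon": [1851]}

shards = defaultdict(dict)
for term, book_ids in postings.items():
    shards[shard_of(term)][term] = book_ids

for shard_id, terms in shards.items():
    # e.g. shards/0f3a.json, served as plain static files
    (OUT / f"{shard_id:04x}.json").write_text(json.dumps(terms))
```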

Hosting this would be cheap, but the main barriers remain.

greggsy · last Wednesday at 7:16 AM

It's trivial to normalise the various formats, and there are a few libraries and ML models to help parse PDFs. I was tinkering with something like this for academic papers in Zotero, and the main issues I ran into were words spilling over to the next page, and footnotes. I totally gave up on that endeavour several years ago, but the tooling has probably matured considerably since then.
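A minimal sketch of that kind of cleanup, assuming PyMuPDF for extraction and a naive dehyphenation heuristic (footnote stripping is deliberately left out, since that was the genuinely hard part):

```python
import re
import fitz  # PyMuPDF

def pdf_to_text(path: str) -> str:
    doc = fitz.open(path)
    text = "\n".join(page.get_text("text") for page in doc)
    # Rejoin words hyphenated across line (and hence page) breaks:
    # "continu-\nation" -> "continuation".
    text = re.sub(r"(\w)-\n+(\w)", r"\1\2", text)
    # Collapse single hard line breaks inside paragraphs, keep blank lines.
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    return text

print(pdf_to_text("paper.pdf")[:500])
```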

As an example, all the academic paper hubs have been using this technology for decades.

I'd wager that all of the big Gen AI companies have planned to use this exact dataset, and many of them probably have already.

trollbridge · last Wednesday at 1:50 PM

Decent storage is $10/TB, so for $10,000 you could just keep the entire 1PB of data.
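For comparison with the plaintext-only approach upthread, the back-of-the-envelope math (assumed prices, not quotes):

```python
PRICE_PER_TB_USD = 10     # rough commodity-storage figure from the comment
full_archive_tb = 1_000   # ~1 PB for the whole archive
plaintext_tb = 20         # upper end of the plaintext estimate upthread

print(f"Full archive:   ${full_archive_tb * PRICE_PER_TB_USD:,}")  # $10,000
print(f"Plaintext only: ${plaintext_tb * PRICE_PER_TB_USD:,}")     # $200
```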

A rather obvious question is whether someone has trained an LLM on this archive yet.
