
stanfordkid yesterday at 6:58 PM

I don't see how you get around LLMs scraping data without also stopping humans from retrieving valid data.

If you're the NYTimes and publish poisoned data to scrapers, the only thing the scraper needs is one valid human subscription: run a VM + automated Chrome, OCR and tokenize the valid data, then compare that to the scraped results. It's pretty much trivial to do. At Anthropic/Google/OpenAI scale they can easily buy VMs in data centers spread all over the world with IP shuffling. There is no way to tell who is accessing the data.
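
A minimal sketch of that check, assuming Playwright for the automated Chrome and pytesseract for the OCR step; the URL, file names, and similarity measure are placeholders, not a real pipeline:

    # Hypothetical sketch of the comparison described above.
    import difflib

    import pytesseract
    from PIL import Image
    from playwright.sync_api import sync_playwright

    scraped_text = open("scraped_article.txt").read()  # what the scraper got

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()  # in practice, a logged-in subscriber session
        page.goto("https://example.com/article")
        page.screenshot(path="rendered.png", full_page=True)
        browser.close()

    # OCR the pixels a human subscriber actually sees.
    human_text = pytesseract.image_to_string(Image.open("rendered.png"))

    # Token-level similarity; a low ratio suggests the scraped copy was poisoned.
    ratio = difflib.SequenceMatcher(
        None, scraped_text.split(), human_text.split()
    ).ratio()
    print(f"token overlap: {ratio:.2f}")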


Replies

8bitsrule today at 1:37 AM

>I don't see how you get around LLMs scraping data without also stopping humans from retrieving valid data.

I do a lot of online research. I find that many information sources have a prominent copyright notice on their pages. Since LLMs can read, that ought to be a stopper.

I'm getting tired of running into all of these "verifying if you're human" checks ... which often fail miserably and keep me from reading (not copying) the pages they're paid to 'protect'.

(As if using the web hadn't already gotten much harder in recent years.)

conartist6 yesterday at 7:39 PM

I don't see how you can stop the LLMs from ingesting poison either, because they're filling up the internet with low-value crap as fast as they possibly can. All that junk is poisonous to training new models. The wellspring of value once provided by sites like Stack Overflow is now all but dried up. AI culture is devaluing at an incredible rate as it churns out copies and copies and more copies of the same worthless junk.

ciaranmca yesterday at 7:52 PM

And most of the big players now have some kind of browser or browser agent that they could just leverage to gather training data from locked-down sources.
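
For illustration only, a sketch of what that could look like with Playwright, assuming a previously saved subscriber login; auth.json and the URL are made up:

    # Hypothetical sketch: a headless browser reusing a paid, logged-in
    # session to read a locked-down page.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # storage_state restores cookies from a real subscriber login.
        context = browser.new_context(storage_state="auth.json")
        page = context.new_page()
        page.goto("https://example.com/locked-article")
        print(page.inner_text("article"))  # the text a human subscriber sees
        browser.close()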
