
tomalaci 10/01/2024

This is pretty much progress on dead internet theory. The only thing I think can stop this and ensure genuine interaction is strong, trusted identity that carries consequences if abused or misused.

This trusted identity should be something governments need to implement. So far big tech companies haven't fixed it, and I question whether it is in their interest to fix it. For example, what happens if Google cracks down hard on this and suddenly 60-80% of YouTube traffic (or even ad-traffic) evaporates because it was done by bots? It would wipe out their revenue.


Replies

brookst 10/01/2024

> It would wipe out their revenue.

Disagree. YouTube's revenue comes from large advertisers who can measure the real impact of ads. If you wiped out all of the bots, the actual user actions ("sign up" / "buy") would remain about the same. Advertisers will happily pay the same amount of money to get 20% of the traffic and 100% of the sales. In fact, they'd likely pay more, because then they could reduce their investment in detecting bots.

Bots don't generate revenue, and the marketplace is somewhat efficient.
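
A back-of-the-envelope sketch of that claim in Python, with all numbers invented: strip out 80% bot impressions, hold conversions constant, and the CPM advertisers can justify rises 5x while total spend stays flat.

  # All figures hypothetical, for illustration only.
  impressions = 1_000_000      # total ad impressions, 80% of them bots
  cpm = 5.00                   # dollars per 1,000 impressions
  sales = 500                  # real conversions (bots never buy)

  spend = impressions / 1000 * cpm                              # $5,000 total ad spend
  print(f"cost per sale, bots included: ${spend / sales:.2f}")  # $10.00

  # Purge the bots: impressions drop 80%, sales are unchanged.
  human_impressions = impressions * 0.2
  # At the same cost per sale, the market-clearing CPM is 5x higher.
  new_cpm = spend / (human_impressions / 1000)
  print(f"equivalent CPM on human-only traffic: ${new_cpm:.2f}")  # $25.00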

cryptonector 10/01/2024

> This trusted identity should be something governments need to implement.

Granting the premise for argument's sake, why should governments do this? Why can't private companies do it?

That said, I've long thought that the U.S. Postal Service (and its counterparts outside the U.S.) is the perfect entity for providing useful user certificates and attribute certificates (to get some anonymity, at least relative to peers, if not relative to the government).

The USPS has:

  - lots of brick-and-mortar locations
  - staffed with human beings
  - who are trained and able to validate various forms of identity documents for passport applications

UPS and FedEx are similarly situated. So are grocery stores (which used to, and maybe still do, have bill-payment services).
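
As a purely hypothetical sketch of what such an entity could issue: an X.509 certificate binding a key to a vetted pseudonym rather than a legal name, here using Python's cryptography library (the CA name and pseudonym are invented):

  import datetime

  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import ec

  # Hypothetical postal-service CA key (in practice, kept in an HSM).
  ca_key = ec.generate_private_key(ec.SECP256R1())
  ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Postal Identity CA")])

  # The user proves who they are at the counter, then gets a certificate
  # bound to a pseudonym instead of their legal name.
  user_key = ec.generate_private_key(ec.SECP256R1())
  subject = x509.Name([x509.NameAttribute(NameOID.PSEUDONYM, "user-7f3a9c")])

  cert = (
      x509.CertificateBuilder()
      .subject_name(subject)
      .issuer_name(ca_name)
      .public_key(user_key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(datetime.datetime.utcnow())
      .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
      .sign(ca_key, hashes.SHA256())
  )
  # Relying parties can verify "vetted human" without learning who --
  # though the issuer itself can still link the pseudonym back.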

Now back to the premise. I want anonymity to be possible to some degree. Perhaps AI bots make that impossible, or perhaps anonymous commenters will have to be segregated / marked as anonymous, so as to help everyone who wants to filter out bots.

joseda-hg 10/01/2024

This still breaks some parts of the internet where you wouldn't want to associate your identity with your thoughts or image.

pilgrim0 10/01/2024

I think along the same lines. Digital identity is the hardest problem we've been procrastinating on solving since forever, because it has the most controversial trade-offs, on which no two people can agree. Despite the well-known risks, it's something only a State can do.

mike_hearn 10/01/2024

Well, you're about to find out, because YouTube is running a massive crackdown on bots and unofficial clients right now. YTDL, Invidious, etc. are all being banned. Perhaps Google got tired of AI competitors scraping YouTube.

In reality, as others have pointed out, Google has always fought bots on its ad networks. I did a bit of it when I worked there. Advertisers aren't stupid; if they pay money for no results, they stop spending.

romanovcode 10/01/2024

> This trusted identity should be something governments need to implement.

I'd rather live with a dead internet than with this oppressive trash.

datadrivenangel 10/01/2024

You would assume that advertising companies with quality ad space would be able to show higher click-through rates and higher impression-to-purchase rates -- a lower overall cost per conversion -- by removing bots that will never produce a business outcome from the top of the funnel.

But attribution is hard, so showing larger numbers of impressions looks more impressive.

jenny91 10/01/2024

> This trusted identity should be something governments need to implement.

I have been thinking about this as well. It's exactly the kind of infrastructure governments should invest in to enable new opportunities for commerce. Imagine all the things you could build if you could verify, with good accuracy, that someone is a real human (without necessarily verifying their identity).

nxobject 10/01/2024

I think that's also part of Facebook's strategy in being as open with Llama as possible – they can carve out the niche of "okay, if we're going to dive head-first into the dead-internet timeline, advertisers will at least be comforted by the fact that we're a big contributor to the conversation on the harms of AI – by openly providing models for study."

solumunus 10/01/2024

> For example, what happens if Google cracks down hard on this and suddenly 60-80% of YouTube traffic (or even ad-traffic) evaporates because it was done by bots? It would wipe out their revenue.

Nonsense. Advertisers measure results. CPM rates would simply increase to match the increased value of a click.

bityard 10/01/2024

I've been thinking about how AI will affect ad-supported "content" platforms like YouTube, Facebook, Twitter, porn sites, etc. My prediction is that as AI-generated content improves in quality, or at least believability, these platforms will not prohibit it; they will embrace it whole-heartedly. Maybe not at first, but definitely gradually, and definitely eventually.

We know that these sites' growth and stability depend on attracting human eyeballs to their properties and KEEPING them there. Today, that manifests as algorithms that analyze each person's individual behavior and level of engagement, and use that data to tweak the user's experience to keep them latched (some might say addicted, via dopamine) to the app on their device for as long as possible.

Dating sites have had this down to a science for a long time. There, bots are just part of the business model, and have been for two decades. It's really easy: you promise users that you will match them with real people, but instead show them only bots and ads. The bots are programmed to interact with users realistically across the site and to say/do everything short of actually letting two real people meet up, because whenever a dating site successfully matches up real people, it loses customers.

I hope I'm wrong, but I feel that social content sites will head down the same path. A site that determines a user enjoys watching Reels of women in swimsuits jumping on trampolines can simply generate as many as it needs, tweaking the parameters of the generated video to the user's (perceived) preferences: age, size, swimsuit color, height of bounce, etc., while still providing JUST enough variety to keep the user from getting bored enough to go somewhere else.

It won't just be passive content that is generated. All those political flamewars and outrage threads (the meat and potatoes of social media) could VERY well ALREADY be LLM-generated for the sole purpose of inciting people to reply. Imagine happily scrolling along and then reading the most ill-informed, brain-dead comment you've ever seen. You know well enough that the author is just an idiot and you'll never change their mind, but you feel driven to reply anyway, so that you can at LEAST point out to OTHERS that this line of thinking is dangerous; then maybe you can save a soul. Or whatever. So you click Reply, but before you can type your comment, you first have to watch a 13-second ad for a European car.

Of course, the comment was never real. But you, the car, and your money definitely are.

zackmorris 10/01/2024

The real problem is how to prove identity while also guaranteeing anonymity.

Because Neo couldn't have done what he did by revealing his real name, and if we aren't delivering tech that can break out of the Matrix, what's the point?

The solution will probably involve stuff like Zero-Knowledge Proofs (ZKPs), which are hard to reason about. We can imagine a future where all user data is end-to-end encrypted, circles of trust are encrypted, everything runs through onion routers, etc. Our code will cross-compile to some kind of ZKP VM running at some high multiple of computing power needed to process math transactions, like cryptocurrency.

One bonus of that is that it will likely be parallelized and distributed as well. Then we'll reimplement unencrypted algorithms on top of it. So ZKP will be a choice, kind of like HTTPS.
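
For a concrete taste of the "prove without revealing" primitive underneath all this, here is a toy Schnorr identification round in Python -- the classic protocol that modern ZKP systems generalize. The parameters are deliberately tiny and illustrative, nowhere near production-grade:

  import secrets

  # Toy Schnorr identification: prove knowledge of a secret x, where
  # y = g^x mod p is public, without ever revealing x.
  p = 101    # small prime; the multiplicative group mod p is cyclic
  g = 2      # a generator of that group (order n = p - 1 = 100)
  n = p - 1

  x = secrets.randbelow(n)     # prover's secret "identity" key
  y = pow(g, x, p)             # public key, published once

  # One round of the interactive protocol:
  r = secrets.randbelow(n)     # prover: fresh random nonce
  t = pow(g, r, p)             # prover -> verifier: commitment
  c = secrets.randbelow(n)     # verifier -> prover: random challenge
  s = (r + c * x) % n          # prover -> verifier: response

  # Verifier checks g^s == t * y^c (mod p); x itself never crossed the wire.
  assert pow(g, s, p) == (t * pow(y, c, p)) % p
  print("proof accepted without revealing x")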

But when AI reaches AGI in the 2040s, it will be able to spoof any personality. Loosely that means it will have an IQ of 1000 and beat all un-augmented humans in any intellectual contest. So then most humans will want to be augmented, and the arms race will quickly escalate, with humanity living in a continuous AR simulation by 2100.

If that's all true, then it's basically a proof of what you're saying, that neither identity nor anonymity can be guaranteed (at least not simultaneously) and the internet is dead or dying.

So this is the golden age of the free and open web, like the wild west. I read a sci-fi book where nobody wore clothes because, with housefly-sized webcams everywhere, there was no point. I think we're rapidly headed toward realtime doxxing and all of the socioeconomic eventualities of that, where we'll have to choose to forgive amoral behavior and embrace a culture of love, or else everyone gets cancelled.

paulnpace 10/02/2024

Governments can solve technical problems no one else can?

gregw134 10/01/2024

What's best practice for preventing bot abuse, for mere mortal developers? Would requiring a non-VoIP phone number at registration be effective?
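
There's no single best practice, but layered friction is the usual answer: rate limits, proof of phone-number possession, email confirmation, behavioral signals. A minimal sketch of the rate-limit layer, with the VoIP check left as a hypothetical placeholder (commercial carrier-lookup APIs exist, but none is modeled here):

  import time
  from collections import defaultdict

  RATE = 1 / 60    # refill: one registration attempt per minute per IP
  BURST = 3        # allow short bursts of up to 3 attempts

  _buckets = defaultdict(lambda: {"tokens": BURST, "t": time.monotonic()})

  def allow_registration(ip: str) -> bool:
      """Token-bucket rate limit on the registration endpoint."""
      b = _buckets[ip]
      now = time.monotonic()
      b["tokens"] = min(BURST, b["tokens"] + (now - b["t"]) * RATE)
      b["t"] = now
      if b["tokens"] < 1:
          return False
      b["tokens"] -= 1
      return True

  def is_voip_number(phone: str) -> bool:
      """Hypothetical hook: in practice, call a carrier-lookup service."""
      raise NotImplementedError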

dom96 10/01/2024

What do governments need to implement? They already give you a passport which can be used as a digital ID.

kjkjadksj 10/01/2024

On the other hand, I think the best social media out there today is 4chan. Entirely anonymous. Also, the crass humor and NSFW boards act as a great filter that keeps advertising bot networks from polluting the site the way they polluted Reddit. No one wants to advertise on 4chan or have their brand associated with it, which is great for quality discussion of technical topics and niche interests.
