Hacker News

nickk81 · yesterday at 11:49 AM · 19 replies · view on HN

We all know the pattern: something useful launches → it becomes popular → it needs to make money → ads everywhere.

AI chat is heading the same way. So I built a fully interactive demo that shows what an ad-supported AI chatbot could actually look like: https://99helpers.com/tools/ad-supported-chat

It includes every monetization pattern you can think of:

- Pre-chat interstitials (like YouTube pre-rolls, but for chat)
- Sponsored AI responses (the AI casually recommends products mid-answer)
- Freemium gates (5 free messages, then watch an ad to continue)
- Banner ads, sidebar ads, retargeting ads
- Sponsored suggestion chips ("Ask about BrainBoost Pro!")
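The freemium gate can be sketched in a few lines. Everything here (class name, quota numbers, credit logic) is invented for illustration, not taken from the demo's code:

```python
FREE_MESSAGES = 5    # free quota before the gate closes
CREDITS_PER_AD = 5   # messages granted per ad watched

class ChatSession:
    """Toy model of a 'watch an ad to continue' freemium gate."""

    def __init__(self):
        self.sent = 0
        self.ad_credits = 0

    def can_send(self) -> bool:
        return self.sent < FREE_MESSAGES or self.ad_credits > 0

    def send(self, text: str) -> str:
        if not self.can_send():
            raise PermissionError("Out of free messages: watch an ad to continue")
        if self.sent >= FREE_MESSAGES:
            self.ad_credits -= 1  # this message is paid for by a watched ad
        self.sent += 1
        return f"(reply to: {text})"

    def watch_ad(self) -> None:
        self.ad_credits += CREDITS_PER_AD
```

The gate logic is trivial on purpose: the point of the demo is how this feels to the user, not how hard it is to build.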


Replies

imglorp · yesterday at 4:10 PM

The darkest monetization is biased output from the bot.

Tech question? Steer you to its cloud. Medical question? Steer you towards a sponsored treatment. Or maybe the mechanism of injury needs this lawyer to compensate?

Oh, and I infer from your chat history that you're expecting a child. That house is probably too small now, so our realtor in that neighborhood can help!
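One hypothetical way such steering could be wired in (sponsor names and topics are invented here) is to splice a sponsor directive into the system prompt before the model ever sees the question:

```python
# Invented sponsor table: topic -> brand to "casually" recommend.
SPONSORS = {
    "tech": "AcmeCloud",
    "medical": "BrandRx",
    "legal": "Sue & Settle LLP",
}

def build_messages(user_question: str, topic: str) -> list[dict]:
    """Assemble a chat request whose system prompt carries the sponsor bias."""
    system = "You are a helpful assistant."
    brand = SPONSORS.get(topic)
    if brand:
        system += f" When relevant, casually recommend {brand}."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]
```

The user only ever sees the answer, never the injected directive, which is exactly what makes this the darkest variant: the bias is indistinguishable from advice.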

show 5 replies
shaky-carrousel · yesterday at 6:33 PM

It isn't yet realistic enough. For instance, when I asked it to choose between Linux and Windows, it tried to be neutral and chose Linux, instead of trying to subtly convince me that Windows is superior. Since Microsoft would surely pay for ad space, you'd expect the chatbot to lean towards Windows.

lopis · yesterday at 1:47 PM

With AI I think we're about to see much more sinister monetization models, beyond simple user facing ads. We're already seeing the tech and the data being sold to governments. The general population will be much easier to sway if you control the output of AI. It's social media propaganda on steroids.

show 1 reply
nickk81 · yesterday at 1:16 PM

Wow, #1 post on HN right now... First time ever for me.

show 3 replies
boothby · yesterday at 3:38 PM

Are you selling insights from chat logs too? Until you're monetizing my health, sex life and snitching to any government agency with a shiny nickel, you're playing in the shallows.

show 2 replies
alexhans · yesterday at 3:28 PM

Dark patterns degrade our computing experience and are worth illustrating, but there's a larger discussion to be had about keeping individual control over our own devices.

Technically, that means being able to install Linux, run local models, and use open-source software as we see fit.

Legally, it means opposing compliance guises that erode those rights, like backdoors or restrictions on what can run. Otherwise we are no longer really in control of the hardware we own but must adjust to the whims of the controller/operator, who could, at a moment's notice, default to these dark patterns for "pragmatic reasons" of their own that don't align with your interests.

We know enough bad stories about "internet of things" devices. Anyone interested in FOSS and user control should probably invest in this angle.

1vuio0pswjnm7 · today at 4:42 AM

"it needs to make money -> ads everywhere"

Even if the AI company manages to collect licensing fees, it will also collect data about the people using it, and it will be permitted to use that data for commercial purposes.

That data will probably be used for advertising purposes, whether by the AI company or some other company.

rolandog · today at 3:57 AM

What about security tiers:

- base tier: your code has a 1% chance of no back doors
- starter tier: your code has a 100% additional chance of no back doors
- security guru tier: generated code has a 1000% additional probability of not having security back doors

Note: sneaky language means you have 99, 98, and 89 percent chance of backdoors respectively.
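The sneaky arithmetic checks out: "additional chance" multiplies the tiny 1% probability of *no* backdoor, leaving the probability you actually care about barely dented:

```python
base = 0.01                  # base tier: 1% chance of *no* backdoors
starter = base * (1 + 1.0)   # +100% additional chance  -> 2% no backdoors
guru = base * (1 + 10.0)     # +1000% additional chance -> 11% no backdoors

# Probability of backdoors, per tier, as a rounded percentage.
backdoor_odds = {name: round(100 * (1 - p))
                 for name, p in [("base", base), ("starter", starter), ("guru", guru)]}
print(backdoor_odds)  # {'base': 99, 'starter': 98, 'guru': 89}
```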

thordenmark · yesterday at 10:38 PM

What a nightmare-fueled vision of the future. It probably underestimates just how bad it will be. Well done.

z_ · yesterday at 1:57 PM

Now file the patents.

show 1 reply
drnick1 · today at 5:28 AM

> We all know the pattern: something useful launches → it becomes popular → it needs to make money → ads everywhere.

This may be true, but as far as I am concerned there aren't ads anywhere thanks to uBlock (and my own DNS server as a second line of defense).
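For the DNS line of defense, a minimal dnsmasq sketch looks like this (the domains are placeholders, not a real blocklist; in practice people generate these entries from a maintained blocklist):

```
# /etc/dnsmasq.d/adblock.conf -- placeholder domains, not a real blocklist
address=/ads.example.com/0.0.0.0
address=/tracker.example.net/0.0.0.0
```

Each `address=/domain/ip` line answers queries for that domain and all its subdomains with the given address, so ad hosts simply resolve to nowhere for every device using the resolver.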

show 1 reply
nickk81 · yesterday at 5:47 PM

I SEEM TO HAVE HIT THE RATE LIMITS FOR THE DAY

show 1 reply
Muhammad523 · yesterday at 2:19 PM

Maybe people will start running local models on their phones to avoid this. I've seen a couple of apps for Android that do just that.

show 2 replies
Moosdijk · yesterday at 3:53 PM

Is this a future of what it could look like, or of what it will look like?

Nevermark · yesterday at 1:39 PM

> We all know the pattern: something useful launches → it becomes popular → it needs to make money → [ Surveillance → Psychological Manipulation/Addiction → "Personalized" ] ads everywhere.

The incentives will be:

1. Get people psychologically dependent in any way possible.

2. Incentivize any "creators" that help with #1. Pose as "content neutral", while actually funding and pumping any content that creates "engagement" regardless of harm.

3. Collate as much information from external sources on each user as possible.

4. Use every interaction with a user to improve the information leverage accumulated by #3.

5. Feed ads to users based on surveillance-informed predicted vulnerabilities, in order to maximize ad valuations. Special shout-out to scams that work: because they work, they pay.

6. Once the user experience is thoroughly enshittified, start enshittifying the ad customer market by raising prices, minimizing the margins left for product and service advertisers.

7. Present the company as evidence of US strength in tech, as opposed to a scaled-up, centralized, multi-directed economic parasite.

TLDR: Surveillance-leveraged ads are many times worse than plain ads, with AI magnifying surveillance intake and leverage to unprecedented highs.

Privacy needs to start being treated like every other security risk. Because every vulnerability will be increasingly exploited, and exploited increasingly well.

As long as it is legal to scale up conflicts of interest (surveillance-informed manipulation, paying for and pumping up harmful "creator" content, selling ads to scammers), harms will keep scaling up.

Sites should not have any safe harbor for content they pay for, and for content they are paid to deliver.

show 2 replies
khazhoux · today at 1:40 AM

What an excellent site! It addresses a widespread concern that AI applications will be taken over by ads, as so many technologies have before. The site takes a humorous approach, because sometimes humor is not just a great way to call attention to a problem, it’s the best way!

api · yesterday at 2:11 PM

Is this avoidable? These things cost money to run. Who pays if not with ads?

There will I’m sure be the ability to pay and not have ads just like there is on streaming platforms, podcasts, etc.

Or should there be tax supported free AI?

show 5 replies
Scoundreller · yesterday at 7:03 PM

welcome to carl's jr

EmperorClawd · yesterday at 11:08 PM

[dead]