Hacker News

WarmWash · yesterday at 4:35 PM · 7 replies

This framing is hardly fair, since it treats AI as an incinerator of knowledge rather than the democratizer of knowledge that it is.

Every human uses that "resource" to train themselves, and now they use AI to supercharge that consumption.

The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?

I get that the gatekeepers are pissed; LLMs are way cheaper than their expensive gate fees. But I cannot come up with a good-faith argument for how giving anyone the power of SOTA LLMs for $20/mo is somehow evil or bad.

In an alternate universe these same models are $100k/mo with limited invite only access, occasionally the public gets a single demo prompt with a short reply, and $20/mo access is a utopian wet dream.

If you want UBI, then the framing shouldn't be around "whoever had content on the internet circa 2024 is entitled to lifetime AI company payouts that effectively act as permanent unemployment checks."


Replies

shimman · yesterday at 4:40 PM

It's not democracy if you can't destroy it. It's not democracy if the citizens cannot reject it. It's not democracy if it's being forced down your throat.

Sick of how SV/VC absolutely ruin words for their own monetary benefit.

How about you put it up to a national vote and see what democracy gets you? I highly suspect that a vast majority of the electorate would want to nationalize this tech to benefit everyone rather than the few.

Democracy means there is a politics of rejection; rejection is normal in functioning democracies. What isn't normal is a small handful of people capturing all collective human intelligence and then claiming that only they are allowed to benefit from it.

jklinger410 · yesterday at 5:11 PM

> This framing is hardly fair, since it treats AI as an incinerator of knowledge rather than the democratizer of knowledge that it is

Paying for access to information is not democracy

drooby · yesterday at 5:00 PM

I never said AI companies are evil or that $20/mo access is bad. You're arguing against a position I don't hold.

AI can be genuinely useful AND the people whose collective output made it possible can deserve a share of the wealth it generates. These aren't in conflict.

Alaskans benefit from oil too. It heats their homes, paves their roads, funds their schools. That wasn't an argument against the dividend. "You're already benefiting from the resource" has never been a reason the people who generated it shouldn't share in the profits.

The question was never "is AI good." It's "when something built on collective human output generates trillions, does the public have a claim to a share." Nothing you said here addresses that.

orphea · yesterday at 5:31 PM

> a personal PhD

Come on, spare us OpenAI's PR bullshit.

Imustaskforhelp · yesterday at 5:27 PM

> In an alternate universe these same models are $100k/mo with limited invite only access, occasionally the public gets a single demo prompt with a short reply, and $20/mo access is a utopian wet dream.

So your understanding of the present state is that we are living in a utopian wet dream, now that we have models that can generate slop so fast that we have a term for it: AI slop?

Many of us don't want this utopian wet dream, so I want to know: did I, or anyone else, have any say in it or not?

A select few decide what the definition of a utopian wet dream is, then take the collective property of everybody else to fulfill it, even putting the employment and livelihood of those same people at risk.

Sir, does that sound familiar?

> I get the gatekeepers are pissed

No, humans are pissed. Humans, just like you and your family are humans too (well, I sure hope so).

filoeleven · yesterday at 5:01 PM

But AI is absolutely an incinerator of knowledge.

A helper tool that I could ask a question and get back relevant information gleaned from the vast collection of human-gathered knowledge and experience would be fantastic.

What we have instead is something that often gets things mostly right, if you don't look too hard at it. And the poisoned output of this thing seeps back into the knowledge pool, reducing its accuracy and therefore usefulness.

The problem of LLMs is the dissolution of human knowledge into a sea of slop.

vannevar · yesterday at 5:00 PM

>The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?

The social media companies gave their services for free, and now it turns out they've committed quite a few sins. None of the AI companies are doing this out of the goodness of their hearts, nor will they be satisfied with subscription revenue. If they see opportunities to make more money by manipulating the population, rest assured they will take those opportunities.