Hacker News

dingnuts yesterday at 8:24 PM (9 replies)

It's not like gambling, it is gambling. You exchange dollars for chips (tokens -- some casinos even call the chips tokens) and insert them into the machine in exchange for the chance of a prize.

If it doesn't work the first time you pull the lever, it might the second time, and it might not. Either way, the house wins.

It should be regulated as gambling, because it is. There's no metaphor, the only difference from a slot machine is that AI will never output cash directly, only the possibility of an output that could make money. So if you're lucky with your first gamble, it'll give you a second one to try.

Gambling all the way down.


Replies

NathanKP yesterday at 8:52 PM

This only makes sense if you have an all-or-nothing concept of the value of output from AI.

Every prompt and answer contributes value toward the final solution, even if that value is just narrowing the space of potential outputs: keeping failed paths in the context window lets the model avoid them in a future answer once you provide follow-up feedback.

The vast majority of slot machine pulls produce no value to the player. Every single prompt into an LLM tool produces some form of value. I have never once had an entirely wasted prompt unless you count the AI service literally crashing and returning a "Service Unavailable" type error.

One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.
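
To make that concrete, here's a minimal sketch of the loop being described, where chat() is a hypothetical stand-in for whatever chat-completion API you actually use and review() stands for however you judge an answer (run the tests, read the diff):

    # Minimal sketch, not a real client: chat() and review() are hypothetical.
    def chat(messages: list[dict]) -> str:
        """Send the conversation so far, return the model's next answer."""
        raise NotImplementedError  # swap in a real chat-completion call

    def iterate_until_accepted(task: str, review, max_rounds: int = 5) -> str:
        messages = [{"role": "user", "content": task}]
        answer = ""
        for _ in range(max_rounds):
            answer = chat(messages)
            ok, feedback = review(answer)
            if ok:
                break
            # The failed attempt and the critique both stay in context,
            # narrowing what the model is likely to try next round.
            messages.append({"role": "assistant", "content": answer})
            messages.append({"role": "user", "content": f"That didn't work: {feedback}. Try again."})
        return answer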

show 3 replies
princealiiiii yesterday at 8:28 PM

> It should be regulated as gambling, because it is.

That's wild. Anything with non-deterministic output would count as gambling by that logic.

show 3 replies
rapind yesterday at 9:24 PM

By this logic:

- I buy stock that doesn't perform how I expected.

- I hire someone to produce art.

- I pay a lawyer to represent me in court.

- I pay a registration fee to play a sport expecting to win.

- I buy a gift for someone expecting friendship.

...are all gambles.

You aren't paying for the result (the win); you're paying for a service that may produce the desired result, or in some cases one of many possibly desirable results.

show 2 replies
squeaky-clean yesterday at 8:46 PM

So how exactly does that work for the $25/mo flat fee that I pay OpenAI for ChatGPT? They want me to keep getting the wrong output and burning money on their backend without any additional payment from me?

show 2 replies
csallen yesterday at 10:16 PM

Books are not like gambling, they are gambling. You exchange dollars for chips (money — some libraries even give you digital credits for "tokens") and spend them on a book in exchange for the chance of getting something good out of it.

If you don't get something good the first time you buy a book, you might with the next book, or you might not. Either way, the house wins.

It should be regulated as gambling, because it is. There's no metaphor — the only difference from a slot machine is that books will never output cash directly, only the possibility of an insight or idea that could make money. So if you're lucky with your first gamble, you'll want to try another.

Gambling all the way down.

abletonlive yesterday at 9:34 PM

Yikes. The reactionary reach for more regulation from a certain group is just so tiresome. This is the real mind virus that I wish would be contained in Europe.

I almost can't believe this idea is being seriously considered by anybody. By that logic, buying any CPU is gambling, because it's not deterministic how far you can overclock it.

Just so you know, not every LLM use case requires paying for tokens. You can even run a local LLM and use Cline with it for all your coding needs. Pull that slot machine lever as many times as you like without spending a dollar.
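
For what it's worth, here's a minimal sketch of what that looks like, assuming Ollama is running locally and a model (say, llama3) has already been pulled; tools like Cline can be pointed at the same local server, and no per-token billing is involved:

    # Minimal sketch, assuming a local Ollama server on its default port.
    import requests

    def ask_local_llm(prompt: str, model: str = "llama3") -> str:
        # stream=False returns a single JSON object instead of a token stream.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(ask_local_llm("Write a regex that matches an ISO 8601 date."))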

show 1 reply
mystified5016 yesterday at 8:47 PM

I run genAI models on my own hardware for free. How does that fit into your argument?

show 1 reply