"Again, we are not doing this because we want this to be the future. It is not because we want to expand to chain AI-run retail stores across the world. It is not for economic opportunity.
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."
I always enjoy how these AI companies try to take a moral high ground. When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why don't you spend your time building a future you do want? Supporting people who want more AI regulation to stop this? Literally anything else.
Just be honest: you think this is the future, and you do in fact want to be first doing it to be in a position to make a lot of money. Do you think people don't know what an ad is when they see one?
I once saw an interview with a guy who was into extreme body modification of an unprintable and life-altering nature. He said something to the effect of, "I like challenging people's conception of what humans are." I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."
For the guys in this story, my translation is, "We were totally fine with making money with no effort, because F paying more employees than we need to. This social media campaign is our backup plan to ensure we get some press and attention out of it even if it fails. We'd totally be cool with making a lot of money though. Please visit our quirky AI shop and buy our stuff."
I think it’s easier just to recognize words as free and to value them as such. Actions have value.
Not for the economic opportunity of building AI-run retail stores. For the much larger economic opportunity of selling AI's to run retail stores!
Pickaxes and shovels and whatnot.
I don’t find this disingenuous.
The more typical AI foundation model company claim of "it's so dangerous only we and people who pay us enough should have access" is what I think is BS.
I don’t see anything wrong with trying to understand something, which is what this seems to be about. I also don’t see anything wrong with an AI-operated store generally, and it of course makes sense, and is valuable, to learn about the limitations.
To be fair, they're running this with oversight; the blog states they're ensuring the people involved are actually properly employed by the parent company. You know for sure that someone WILL run this experiment without those oversights, so while their "care" is probably more about liability, there is still some truth to what they say.
It is moral to throw your toddler into the pool so that later in life they are less likely to drown.
I'm not saying you should take them seriously*, but if you were to take them seriously, to accept that when they say "we believe this future is coming regardless" they do in fact believe this, well, how can I put it?
Lots of people write wills; that doesn't mean they're looking forward to dying or think they can do much about it. Heck, a lot of people don't even watch their diet or exercise to maximise quality of life and life expectancy.
* I think that by the time AI is good enough to run a retail store, there's a decent chance there won't be any retail stores left anyway. It's like looking at Henry Ford's production line factories and thinking "wow, let's apply this to horse-drawn carriages!"
I'll file this under "Resistance is futile".
“Again, we are not doing this because we want the Torment Nexus to be the future.
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running the Torment Nexus.”
> Supporting people that want more AI regulation to stop this?
How are you supposed to know what sort of regulation is needed if you don't even know what the issues are yet? Similarly, won't it be much easier to make the case for regulation if you can point to results of experiments like this one instead of just hypotheticals?
I honestly thought the whole thing was satire and that that line was a riff on OpenAI.
I think it's actually useful to see how AIs behave in such situations. It's going to happen, and understanding what AIs do helps mitigate behaviors that could be dangerous. It's hard to guard against unknowns while they remain unknown.
I'm all for replacing CEOs with AI.
The narrative was quite dystopian. But we are halfway there now anyway.
"Guys, the Future All-Knowing AI is forcing us to do this; don't blame us, blame the superintelligent future indistinguishable from magic!"
> When someone doesn't want something to be the future, usually, their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future than why don't you spend your time building a future you do want?
“It only remains to point out that in many cases a person’s way of earning a living is also a surrogate activity. Not a PURE surrogate activity, since part of the motive for the activity is to gain the physical necessities and (for some people) social status and the luxuries that advertising makes them want. But many people put into their work far more effort than is necessary to earn whatever money and status they require, and this extra effort constitutes a surrogate activity. This extra effort, together with the emotional investment that accompanies it, is one of the most potent forces acting toward the continual development and perfecting of the system, with negative consequences for individual freedom.”
-- Industrial Society and Its Future (1995)