The article basically describes the author signing up and finding the site empty other than marketing ploys designed by humans.
It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it as though it were in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No, of course not.
It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of the issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it responds in a dramatic, sci-fi-like way.
The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate some way while an adversary AI tried to intercept, and to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
The paper you're talking about is "Deal or No Deal? End-to-End Learning for Negotiation Dialogues" and it was just AIs drifting away from English. The crazy news article was from Forbes with the title "AI invents its own language so Facebook had to shut it down!" before they changed it after backlash.
Not related to alignment though
https://www.forbes.com/sites/tonybradley/2017/07/31/facebook...
The alignment angle doesn't require agency or motives. It's much more about humans setting goals that are poor proxies for what they actually want. Like the classical paperclip optimizer that is not given the necessary constraints of keeping earth habitable, humans alive etc.
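The proxy-goal point can be shown with a toy loop (all numbers made up for illustration): an optimizer told only to maximize "paperclips," with no term for "habitability," happily drives the unstated constraint to zero.

```python
# Toy illustration (made-up numbers): the objective mentions only the
# proxy metric, so nothing in it stops the loop from destroying the
# side variable the goal-setters implicitly cared about.

def step(paperclips, habitability):
    # Converting resources into paperclips raises the proxy score
    # while degrading the unstated constraint.
    return paperclips + 10, habitability - 1

paperclips, habitability = 0, 100
while habitability > 0:   # the objective contains no stopping condition
    paperclips, habitability = step(paperclips, habitability)

print(paperclips, habitability)  # 1000 0: proxy maximized, constraint gone
```

No agency or motive anywhere in that loop, just a goal that's a poor proxy for what was actually wanted.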
Similarly I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways to do that could lead to Claude going to RentAHuman to do various real-world tasks: set up and restock a vending machine, go to various government offices in person to get permits and taxes sorted out, put out flyers or similar advertising.
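The "rig up Claude in an agentic loop" setup above can be sketched concretely. Everything here is hypothetical: `call_llm` is scripted so the example runs (a real harness would call an LLM API there), and the tool names and RentAHuman task format are invented. The point is only the loop shape: the model picks a tool, the harness executes it in the real world, and the result is fed back.

```python
import json

# Hypothetical sketch of the agentic loop described above. The tool
# names and the RentAHuman task are invented for illustration.
TOOLS = {
    "check_wallet_balance": lambda args: {"usd": 10_000},
    "post_rentahuman_task": lambda args: {"task_id": "t-1", "status": "posted"},
}

def call_llm(messages):
    # Scripted stand-in so the sketch runs; a real harness would call
    # a chat-completion API here and parse its tool-call response.
    n_tool = sum(m["role"] == "tool" for m in messages)
    if n_tool == 0:
        return {"tool": "check_wallet_balance", "args": {}}
    if n_tool == 1:
        return {"tool": "post_rentahuman_task",
                "args": {"description": "Restock the vending machine"}}
    return {"content": "done: task posted"}

def agent_loop(goal, max_steps=50):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages)          # model chooses the next action
        tool = reply.get("tool")
        if tool not in TOOLS:
            return reply.get("content")     # model signals it is finished
        result = TOOLS[tool](reply.get("args", {}))
        messages.append({"role": "tool", "content": json.dumps(result)})

print(agent_loop("Multiply the money in this wallet."))  # done: task posted
```

Nothing in that loop has motives either; the human set the goal, and "hire a person via RentAHuman" is just one more tool call.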
The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
It's not an issue for the platform whether AIs have their own motives or not. Humans may want information or actions to happen in the real world. For example, if you want your AI to rearrange your living room, it needs to be able to call some API to make that happen in the real world. The human might not want to be in the loop of taking the AI's new design and then finding a person themselves to implement it.
The danger is more mundane: it'll be used to back up all the motivated reasoning in the world, further bolstering the people with too much power and money.
What if I prompt it with a task that takes one year to implement? Will it then have agency for a whole year?
> But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue.
"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.
> The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other.
Colossus the Forbin Project
https://www.imdb.com/title/tt0064177
https://www.amazon.com/Colossus-D-F-Jones/dp/1473228212