Hacker News

stingraycharles today at 11:14 AM · 11 replies

Why, though? What, really, does anyone envision the next decade of government + AI is going to be like?

Obviously mass surveillance is already happening. Obviously the line between “human kills other human” has been blurring for a long time already, e.g. remotely operated drones. Missiles are already remotely controlled, and they navigate, detect, and follow moving targets autonomously.

What’s the goal of people who think deleting their OpenAI account will make an impact?


Replies

mentalgear today at 11:26 AM

Because OpenAI is the least trustworthy of the big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings, where:

* he warned against engagement-optimisation strategies, like those of social media, being used for chatbots / LLMs.

* he also warned that "ads would be the last resort" for LLM companies.

He casually ignored both of his own warnings, as ChatGPT / OpenAI has now fully converted to Facebook's tactics of "move fast and break things" - even if what breaks is society itself. It's a complete turn away from the AI-for-science lab it was originally founded as, which explains why every real (founding) ML scientist left the company years ago.

While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists, not marketing guys.

maxbond today at 11:31 AM

Recently I left an HN comment pointing out that there was a typo on Ars Technica's staff page. One copy editor had the title "Copy Editor" and the other "Copyeditor." Several days later the typo was fixed. I'm confident that it was because someone at Ars saw my comment.

I left a comment describing how I am deleting my OpenAI account. I think there's a good chance someone at OpenAI sees it, even if only aggregated into a figure in a spreadsheet. Maybe a pull quote in a report.

You do your best at the margin, have faith it will count for something in aggregate, and accept that sometimes you're tilting at windmills. I know most of my breath is wasted, but I can't reliably tell which.

podgorniy today at 11:39 AM

We are obviously dying. What's the point of doing anything between now and the last moment? What's the goal of people who think that doing anything will make any impact?

--

Some people do that as a symbolic action. Some to keep their own terms as much as they can. Some hope their actions will join others' actions and turn into a signal for decision makers. For others this action reduces their area of exposure. Others believe in something and simply follow their beliefs.

BTW, following your own set of beliefs is what you (what we all) are doing here. You believe that surveillance is already happening and nothing can be done about it, that a single action does not matter, that there is no reason for action other than direct visible impact, etc. It seems you analyze others through your own set of beliefs, and that model cannot explain their actions. This inability to explain others suggests that the model is flawed in some way. So what is the nature of your beliefs? Did you choose them, or were they presented to you without alternatives? What are the alternatives, then? Do these beliefs serve your interests, or someone else's?

designerarvid today at 11:19 AM

Maybe people believe that the US is better off not having a government that coerces private companies? This is a way of showing that.

/non-US and just guessing

coredev_ today at 12:08 PM

When did the US population stop believing in a better society and world? A bad trajectory is something that can be fixed. We do not need AI in weapons; we need a law that automatically conscripts the children of presidents who start wars to the front line of said wars.

duskdozer today at 12:01 PM

Any one individual's vote is probably not going to change the result of an election. So, why do people vote? Individual actions in aggregate have effects. And even if you think it's ultimately futile, sometimes it's about saying "I don't think this is acceptable."

kledru today at 11:21 AM

A kind of signal that we do not want to pay for our own surveillance ourselves. I did not write "funeral", though.

ozgung today at 11:32 AM

“Predictive programming” in action. Predicting something beforehand and getting used to it shouldn't make a wrong thing acceptable.

Ethics is about knowing right from wrong and acting on it, not about how we feel about it.

throwaway20261 today at 11:21 AM

It's all about money in the end. If people keep spending money with these companies, it reinforces their belief that the money will keep flowing no matter what they do. Cancelling slows that revenue stream, giving less misanthropic entities time to catch up and counterbalance these companies' negative side effects.

syllogism today at 12:42 PM

The actions of the US government here are openly corrupt.

The point of the supply chain risk provisions is to denote, you know, supply chain risks. The intention is not to give the Pentagon a lever it can pull to force any company to agree to any contract it wants.

Hegseth doesn't even pretend that Anthropic is actually a supply chain risk. The argument for designating them so is that _they won't do exactly what the government wants_.

People use the term "fascism" a lot and people have kind of tuned it out, but what do you call a government that deals itself the power to compel any company to accept any contract, and declare it a pariah on thin pretext if it objects?

By taking the deal under these conditions, OpenAI is accepting this. They're saying, "Well, sucks to be them, life goes on." They're consenting to the corruption and agreeing to profit from it. But they'll be next, and if the next company in line takes the same stance, then yeah, the government can force any company to do anything. There's nothing normal about this.

vee-kay today at 11:17 AM

AI will get access to missiles, fighter jets, attack drones, and even nuclear launch codes - that's the fear.

Even when the bombs drop from the sky, at least those humans who had deleted their OpenAI accounts can rest easy, knowing that they weren't the ones supporting the AI that will delete humanity.
