Scrolled thru.
> A lot of companies say they are going to change the world; we actually did.
Just couldn’t resist. So much of it reads like a marketing message.
Sam - when you say all society will benefit and that’s what you’re working towards, you can’t just say that. Nobody believes you and more importantly nobody has any reason to believe you. When you lead with that, and say nothing about what you are actually doing towards it, you make people work against you. When you put yourself up as a dictator for the collective needs of humanity, you have to put up or shut up.
So many put huge faith in you, but it’s turned out to be in the end entirely about you.
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
Unserious answer about a very serious event.
I don't believe a word of Sam's "I believe" section.
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
The molotov cocktail was thrown at the metal gate, not at the house, and they arrested some kind of disturbed person:
https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...
It was a performative action.
I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
In his interview with Theo Von when asked what he wants his legacy to be and how he wants to be remembered, Sam said something to the effect of: “I don’t think about how I will be remembered I just want to have impact.” I think that’s naive and leads to having, uh, negative impact.
I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.
Sam: someone firebombed my house... anyways, enough of that, let me sell you my product.
Violence like this is not the answer. However, this post feels like a thinly veiled attempt at using this alarming attack to reclaim public goodwill after the New Yorker article the other day.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
Altman wants to seem relatable and personable even though he’s one of the wealthiest and most powerful people in the world. You don’t get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.
The implication of Altman’s blog seems to be “stop writing critical articles about me because it will cause more violence.” However, the rich and powerful cannot use this excuse to escape objective scrutiny.
Sounds like this was just a crazy guy upset at OpenAI. Not great but an isolated incident.
That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.
What article is he referencing in the fourth paragraph? The New Yorker one? I got the impression that it was careful in its reporting and by no means one-sided.
Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
I don’t think this will do much to help his image.
They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
An interesting thing about one facet of how society has developed over the past decade and a half, I think, is that a byproduct of more people being conscious of the quest to monetise almost anything is that it has also raised the level of general scepticism about whether something is marketing or real. So you have increasingly more scenarios where an objectively bad thing can happen to someone, but any public response is scrutinised and questioned within an inch of its life, sometimes rightly, sometimes not. I don’t particularly like it, but that’s where we’re at, I guess.
Historically, was it always so common for powerful or famous people to seem to purposefully garner hatred like he, and others, have been for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
Just take a second to consider this: if HN, probably one of the less reactionary places on the internet, and one of the most capitalist-friendly, is this angry at this point, before the mass job losses even start, what in the name of God do you think the general public is going to be like when they’ve been going on for years?
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
"AI has to be democratized" - pretty weak coming from ClosedAI
Is the underground bunker in New Zealand ready yet? Better check on it.
Not that I excuse this behavior, but it's expected is it not? He's claimed to have built the replacement for human labor while participating in the regulatory capture that ensures that process screws the affected parties out of any effective recourse.
He's stood atop a soapbox, in earshot of everybody, and shouted to the corporations that because of him, they can now fire hundreds of thousands — millions — of people with impunity. It doesn't matter that it's not true and that the firings are probably not actually due to AI. But he's standing in front of them and providing the cover.
He's a marketing guy. He made himself the face of AI. His message out of the gate was that it was going to replace human workers. What did he think was going to happen?
It's like all of these people think that humanity has evolved out of the collective rage spirals that powered political revolutions in the 1500s, 1600s, 1700s, every hundreds. Nope. It's always still there. We've had a middle class for a while to mask it, but it's being hollowed out, and when it collapses completely, that ugly and ever-present human urge to eat the rich will rage right back to the surface. Yet they all seem apt to fight to be first in line to be the face of injustice during a volatile period, for some reason.
It's kind of baffling but also interesting to witness.
This is an odd choice of a thread for a laundry list of complaints about AI and about a person that, say what you will, is nowhere near the list of planetary "really bad guys". Even if we limit it to tech, the list starts with someone way richer, then goes through four or five way-shadier people.
If you're OK with victim-shaming here, doesn't it say more about you than Altman? What does it say about your viewpoint?
It's never OK to physically attack someone like this. Full stop.
Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
Altman really needs some better coaching on how to sound like a real human; he's not pulling it off here. Who witnesses someone firebombing their home (which is terrible, btw), thinks for a second about their family, and then writes a diatribe full of AI marketing BS? He doesn't even attempt to make it sound personal. He could have incorporated his feelings about his child growing up in an AI-dominated world or something to that effect; as trite as that sounds, it would ring more believably human than what was written here.
'Discourse is getting too hot' says Man selling Large Language Microwaves
1) It's terrible that this has happened. People who do this are evil.
2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.
3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has run for the last N years in order to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.
We still haven't made AGI, so I don't understand what he's saying they did.
It makes me sick to see this sort of faux-high-minded, self-important horse manure. Slowly squeezing the blood and goodness out of the entire world is no biggie, but heaven forbid any acts of direct physical violence. Bad things are only okay if you do them sneakily, gradually, over time, for the most self-serving reasons, and cloak them in misty-eyed pabulum; then they magically become good things.
If Sam Altman believed a billionth of his own twaddle about "prosperity for everyone" he would be systematically dismantling everything he has built and working to dismantle everything like it that anyone else has built.
There's a great bit from "War and Peace":
> "Who are they? Why are they running? Can it be they're running to me? Can it be? And why? To kill me? ME, whom everybody loves so?" He remembered his mother's love for him, his family's, his friends', and the enemy's intention to kill him seemed impossible.
Sam Altman and his ilk live in a similar bubble where it's inconceivable that they might be 100% wrong about the biggest-picture conception of what they are doing. They will expend countless hours responding to criticisms and actions large and small, but they will never accept the possibility that they are the bad guys, that everything they've worked for has been a net harm to humanity. Instead they will keep Molotov-cocktailing the world in their own insane way until they burn it all to ashes.
>“Once you see AGI you can’t unsee it.” It has a real "ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”. The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.
The analogy has 2 simple rules and you can't even follow them:
#1 It MUST be destroyed.
#2 SOMEONE has to have the ring until then.
Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.
Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
I have many disagreements with Sam Altman. But physical attacks are never the answer. Especially attacking one's family.
I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process, so we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. Day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on Day 3 he came down in the middle of the day and handed me eviction notice papers.
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
It is fair to be critical of Sam and other tech leaders regarding AI, but he has done nothing to begin to justify violence or even the threat of violence against him or his family.
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
"Prosperity for everyone" ... you lying weasel! You literally took the contract Anthropic turned down because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure the democratic system stays in control.*
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
> AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
What a bullshit thing for someone who is not actually democratizing access to AI to say.
My theory is that a lot of the anti-AI sentiment is specifically US geopolitical adversaries (pick one or more: China, Russia, Iran, ...) who want a bad outcome for the US (AI as potential AGI; AI as one of the few successful economic sectors of the US; a general desire to cause societal disruption or collapse, with AI as a convenient target). Probably >95% of the really bad stuff (the micron fab disruption, attacks on AI datacenters, ...) has that as its root cause, possibly executed by useful idiots, people paid by organizations, etc. 5% is normal NIMBY stuff. Approximately measure zero is Zizian death cultists.
I don't think any of these will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers can pay for top-tier corporate security (although tech people generally have been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.
(And destructive attacks come on top of the normal corporate espionage, infiltration, subversion, etc.)
> The world deserves huge amounts of AI and we must figure out how to make it happen.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting out statements like these is only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
It’s funny how this happens the very same moment we get to read about Claude’s Mythos and a New Yorker article. I really doubt the attacker is up to date with either…
The only thing surprising here is how naive you guys are. He is, first and foremost, a marketing and sales guy.
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
I’m probably going to get flames for this, but it would not surprise me in the least if Altman staged this. Given his history, it’s exactly the kind of thing he would do. Think about it - Elon has launched a smear campaign against him prior to the trial and Altman is getting crushed by negative press. Despite his efforts, he has been having trouble getting the media to pay attention to what he has to say about it. Solution? Rise above the noise with something even more newsworthy, and use it to push his personal PR, even mentioning and retorting Musk.
Think about something else: your house gets firebombed at 3:45am. How long until the cops wrap up and are done interviewing you? Two hours? How long until your family calms down and you can have alone time to write? He states it’s still night when he’s writing it. Yet he finds enough time alone to write a well-thought-out essay?
Yeah…seems likely.
To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
There are people actively insinuating in this thread that Sam should be...killed, and they are still up. Very odd moderation, surely there is a better way to flag these things.
People are not able to afford food, housing, energy, healthcare, or anything else right now because of Sam and the other scum bags.
Because of him people are suffering immensely.
My heart goes out to everyone in this situation.
> My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: “Once you see AGI you can’t unsee it.”
Except nobody has seen AGI. Not even close.
This is both horrible and not at all surprising.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people that represents AI replacement of average people's livelihood (not agreeing with this stance but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought) which makes this unsurprising.
Still horrible and not right.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
I am glad you feel my pain, Mr. Altman.
This is a predictable outcome of what people like Altman are doing, and probably will happen more and more.
Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.
Is there no vein of fear and loathing you won't tap?
Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models, possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And there's no good reason for that 6-month gap to stay that way.
Am I missing something, or is this just their usual marketing? I'm not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic are so important.