The writing was on the wall as soon as it went all-in on commercializing the tech.
This will never happen. LLMs are already being used very unsafely, and if this HN headline stays where it is, OpenAI will quietly remove the charter from its website.
The reality is that current models are simply nowhere near AGI. Next-token prediction has been pushed very far and has proven applicable far beyond the domain it was designed for (reasoning models are an application I would not have predicted), but it is fundamentally not AGI. It has no real world model and no ability to learn in anything but superficial ways, and without extensive scaffolding this is all very obvious when you use these models.
Everyone in this thread is debating definitions. The only question that actually matters is economic: when does AI flip from "powerful automation with humans propping it up" to autonomous output?
Go look at any production AI deployment today. Humans still review, correct, supervise. AI handles volume, humans handle judgment. Judgment is the bottleneck. You haven't replaced labor. You've moved it.
Global labor comp is ~$50T/year. The entire capex cycle is a bet that AI captures a real fraction of that. Whether you call that threshold AGI or not is irrelevant. Capital markets don't care about your definition. They care about whether labor decouples from output.
> The impotence of naive idealism in the face of economic incentives.
I don't think it was so much the naivety of idealism as the adoption of idealism, and its language, to help market what was actually being built: a profit-first organization that is taking its true form little by little.
AGI isn't going to happen within the next 30 years so this is moot. The actual researchers have said so many times. It's only the business people and laypeople whooping about AGI always being imminent.
You cannot get real, actual AGI (the same ability to perform tasks as a human) without a continuous cycle of learning and deep memory, which LLMs cannot do. The best LLM "memory" is a search engine and document summarizer stuffed into a context window. It's like having someone take an entire physics course and write down everything they learn on post-it notes; then you ask a different person a physics question, and that person has to skim all the post-it notes and write a new post-it note to answer you. To actually learn, the model would need RL (which requires specific novel inputs) and retraining (so that it can retain the learned input and compute answers with it). All of that would take too much time and careful input engineering, along with novel techniques. So AGI is too expensive, time-consuming, and difficult for us to achieve without radically different designs and a whole lot more effort.
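To make the post-it-note analogy concrete, here's a minimal sketch of that pattern. Everything here is illustrative: the word-overlap scoring is a toy stand-in for embeddings, and call_llm is a hypothetical stub, not any real API.

    # Nothing is learned here; snippets are retrieved and stuffed into the prompt.
    def score(query: str, note: str) -> int:
        # Toy relevance score: shared-word count (real systems use embeddings).
        return len(set(query.lower().split()) & set(note.lower().split()))

    def call_llm(prompt: str) -> str:
        # Stand-in for any chat-completion API; replace with a real client.
        return "(answer conditioned only on the retrieved context above)"

    def answer_with_notes(query: str, notes: list[str], top_k: int = 3) -> str:
        # 1. "Skim the post-it notes": rank stored snippets against the query.
        relevant = sorted(notes, key=lambda n: score(query, n), reverse=True)[:top_k]
        # 2. Stuff the winners into a stateless model's context window.
        prompt = "Context:\n" + "\n".join(relevant) + "\n\nQuestion: " + query
        # 3. The model writes a "new post-it note"; its weights never change.
        return call_llm(prompt)

Every call starts from scratch; the "memory" is entirely in what gets retrieved.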
Not only are LLMs not AGI, they're still not even that great at being LLMs. Sure, they can do a lot of cool things, like write working code and tests. But tell one "don't delete files in X/", and after a while it will delete all the files in X/, whereas a human would likely remember they're not supposed to delete some files and check first. It also does fun things like follow arbitrary instructions from an attacker found in random documents, which most humans wouldn't do. With real memory and real-time RL, they wouldn't have these problems. But we're a long way away from that.
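For what it's worth, the usual workaround is exactly the scaffolding mentioned earlier: the "don't delete files in X/" rule ends up hard-coded in a guard around the tool call, because the model can't be trusted to remember it. A minimal sketch, with paths and names purely illustrative:

    from pathlib import Path

    PROTECTED_DIRS = [Path("X")]  # directories the model was told to leave alone

    def guarded_delete(path_str: str) -> str:
        # Tool handler the agent calls; enforces the rule the model forgets.
        path = Path(path_str).resolve()
        for protected in PROTECTED_DIRS:
            if path.is_relative_to(protected.resolve()):  # Python 3.9+
                return "refused: " + str(path) + " is inside " + str(protected)
        path.unlink(missing_ok=True)  # delete the file if it exists
        return "deleted " + str(path)

The point being: the constraint lives in the harness, not in the model.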
LLMs are fine. They aren't AGI.
Purely anecdotal, but GPT 5.4 has been better than Opus 4.6 in the week or so since it came out. It's interesting to see it rank fairly low on that table. Opus "talks" better and produces nicer output than 5.4 (or at least renders better Markdown in OpenCode).
> “Automated AI research intern by Sep 2026, full AI researcher by Mar 2028”
Funny how timely this is, with Karpathy's Autoresearch hitting the top of HN yesterday (which suggests frontier labs probably have much larger-scale versions of the same thing).
It's clever and funny, but nobody is legitimately near AGI, and their own AML Corp link proves Altman believes as much:
> Achieving AGI, he conceded, will require “a lot of medium-sized breakthroughs. I don’t think we need a big one.”
> At the Snowflake Summit in June 2025, Altman predicted that 2026 would mark a breakthrough when AI systems begin generating “novel insights” rather than simply recombining existing information. This represents a threshold he considers critical on the path to AGI.
I'm sure they'll try to change the charter before we get to that point, though.
> "…if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
Which such project is that, though? And would it accept OpenAI's assistance?
AGI with access to our world is precarious, since alignment with humans is never guaranteed. A buffering medium, i.e. a simulation environment in which the AI operates, might be a better in-between solution.
“The impotence of naive idealism in the face of economic incentives.”
“The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.”
Amen
> The impotence of naive idealism in the face of economic incentives
A great point. I saw blinding idealism during the early days of the GPT era.
These charters are as useful as new year resolutions.
I disagree with the headline:
"Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
I claim that currently no "value-aligned, safety-conscious project comes close to building AGI", failing on both counts:
- "value-aligned, safety-conscious" and
- "close to building AGI".
So, based on this charter, OpenAI has no reason to surrender the race.
Sidenote: those Grok rankings on arena.ai don't make sense. The average rank for grok 420 seems to be ~10, but the overall ranking puts it at 4, right behind Opus and Gemini.
This is taking Sam Altman's PR statements as proof of AGI?
Even the quote they used questions the premise of the article:
> “We basically have built AGI” (later: “a spiritual statement, not a literal one”)
I think the brunt of the AI disruption is already behind us, for LLMs at least. It's possible we'll see improvements over the following months/years, but government will inevitably start to catch up to the level of disinformation and confusion that AI has brought to this world.
The laws and regulations that need to be created to rein in AI will undoubtedly increase the opportunity cost of training LLMs.
For some, it might feel similar to the early 2000s, but I think it's just a healthy rebalancing of what AI is and of how society needs to implement this new, hardly controllable paradigm. From this perspective, OpenAI has a lot to lose, as it hasn't been able to create a moat for itself compared to, say, Anthropic.
Hah, can you imagine a world where OpenAI says to all the people who have dumped billions in: "well, we lost, guys, sorry about that, we're just gonna help Google now"?
I'll eat my hat after I sell you a bridge.
Two days from now, ClosedAI will remove their charter...
Why the title change?
previous title: Based on its own charter, OpenAI should surrender the race
AI will be used wherever computers, silicon, RAM, software, GPUs and robots are today.
And that's it.
Everything beyond that is nuance.
Nuance matters, but it's not the real story; it's the sideshow.
Why was the submission headline changed?
OpenAI:
- we are building Open AI - only if you have more than $10B net worth
- we are against using AI for military purposes - except when that case is allowed by government
- we are on a mission to help humanity - again, we define humanity as the set of people with more than $10B net worth
- surrender? - sure, sure, we will, only to people with more than $10B net worth, they can do whatever they want to our models, we will surrender to them
Mission statements and blog posts are meaningless. Cap tables steer behavior and simultaneously protect interests. Stop forming unions or opining on Hacker News. We need to find a way to get citizens on the cap table in a meaningful way (and not at the very, very, very, very end of the waterfall underneath debt holders, hedge funds, governments, preferred investors). We are building this world for us. As it stands, don't fret about a robot taking your job: just make sure you own one of the robots.
When AI can kill at scale, it means no person is too small or too insignificant to be worth hunting down and killing; it will be cheap and easy. It used to be that only high-value targets were worth killing. But now even you, a nobody, can be killed. You say or do something the government doesn't like, and it's over for you.
Words are meaningless in the real world. It's amazing that no one here gets that.
" if a value-aligned, safety-conscious project " and which project is that?
Are you sure Anthropic isn't aware of this and angling for this? And are you sure what Anthropic say is really value-aligned and safety concious? The PR bit surely is working right?
Time to get rid of the charter and be a normal member of this capitalism :)
Words on a piece of paper mean absolutely nothing. What matters most is the real intent of the company's leaders, something that changes over time: what matters to them and who they are. Sam Altman clearly isn't a man of deep principles regarding humanity and ethics. He seems to put his legacy, OAI's impact, and money above everything else. Some of the rest of the leadership do seem to think differently, but I also believe they no longer have the social and political capital to stop Sam.
> Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do
> It can be debated whether arena.ai is a suitable metric for AGI, a strong case can probably be made for why it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
No, the spirit is clearly meant for near-AGI, and we aren't near AGI.
""" I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together. """
- Caitlin Kalinowski, previously head of robotics at OpenAI
https://www.linkedin.com/posts/ckalinowski_i-resigned-from-o...
We should be starting these discussions pointing out that Sam Altman is a serial liar.
The way Sam Altman bungled the Pentagon deal by swooping in a few hours after Anthropic was fired should be grounds for OpenAI finding another CEO.
>uses arena ranking only
>claims to be some topshot data scientist
okay
"Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks." [Wikipedia]
One can argue that they have already achieved this, at least for short-term tasks. Humans are still better at organization, collaboration, and carrying out very long tasks like managing a project or a company.
Anytime I see "Artificial General Intelligence," "AGI," "ASI," etc., I mentally replace it with "something no one has defined meaningfully."
Or the long version: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."
Or the short versions: "Skippetyboop," "plipnikop," and "zingybang."