We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.
All evidence today suggests Anthropic is passing OpenAI in both relative and absolute growth. So where's the critical reporting? The DoD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.
There may be a reason Altman gets talked about so much. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before, on some pretty significant topics that will impact you, me, and pretty much everyone we know, not only today but well into the future.
You have a point that Anthropic deserves coverage too, and that there are interesting perspectives on that front we haven't heard either.
But just because that's true doesn't mean this article isn't very much relevant and needed.
Because it is.
For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and erosion of even some of Anthropic’s commitments.
After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war
OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.
We should stop talking about potential problems or perpetrators, when we have talked about them “enough”?
That would be irrational.
We should give air time to other problems?
I think everyone agrees with that.
You have managed to distill a surprisingly pure vintage of false dichotomy, from a near Platonic varietal of whataboutism.
Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.
Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.
So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.
> whether the company that branded itself as the ethical AI lab actually is one
FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about four months.
Both of them tell me that this is not just marketing: the company actually is ethical and safety-conscious throughout, and that this was the most surprising part of joining Anthropic for them. They insist the culture is genuine, which is practically unicorn rarity in corporate America.
We've all worked at FAANG, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.