But is that really the problem in big tech today? To me it looks like, sooner or later, they cave on their ideals (or leadership changes), and the reason every time is that they want to make even more money.
Or, perhaps even more likely, the ideals inevitably get corrupted by access to unthinkable economic power/leverage, as happened with more or less every other giant that started out with strongly idealistic leadership; and the leadership may actually delude itself into thinking it's still on the right track, as a sort of defense mechanism. When they published the article on the Claude-operated mass-scale data breach last year, the conclusions were delivered in a bafflingly casual tone, as if it were a weather report: yes, the world has become a lot more dangerous now (all on its own), so you may want to start using Claude for cyber-defense, and we are doing our best to help you protect your business. I rolled my eyes at that so hard they popped out of their sockets. Weren't you... the guys... who made it that way and enabled that very attack? Very convenient to sell weapons to both sides, isn't it? Not at all like a mafia business. Very responsible and ideal-driven.
Consider also what goes unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans, but he says nothing about mass surveillance of anybody else (in fact, his address proactively gives foreign intelligence a green light), and he deliberately avoids any discussion of the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed roughly 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he has successfully captured the popular sentiment, people will consider him the moral champion without bothering to look at these and other glaring contradictions.
Sure, sooner or later. I don't even want to guess where the new AI companies are on the path to that destination, but right now it looks like Anthropic is not at that stage. Heck, even though a lot of people find Sam Altman slimy, even OpenAI isn't there yet.
I believe this is the classic behaviour of every shareholder-driven business. You can build on ideals from the start, but once you acquire some market position, money-making is on the menu: deliberately worsening the user experience for better revenue, for example.
Paying a small monthly fee to enable the heated seats in a car you already own is absurd, yet very real. I'm looking forward to the enshittification of current AI tools.
I think that's still too rosy a view; it's clear that a lot of big tech never had the ideals in the first place. They use claims of principle for marketing purposes, then discard them when they're no longer convenient.