A concern:
More and more plainly, OpenAI and Anthropic are making plays to own (and lease) the "means of production" in software. OK - I'm a pretty happy renter right now.
As they gobble up previously open software stacks, how viable is it that these stacks remain open? It seems perfectly sensible to me that these providers and their users alike have an interest in further centralizing the dev lifecycle - e.g., if Claude Code or Codex are interfaces to cloud dev environments, then the models can get faster feedback cycles against build / test / etc. tooling.
But when the tooling authors are employees of one provider or another, you can bet that those providers will be at least a few versions ahead of the public releases of those build tools, and will enjoy local economies of scale in their pipelines that may not be public at all.
It's no different from the situation that prompted the launch of the FSF. There's a simple solution: if you don't want your lunch eaten by a private equity firm, make sure whatever tool you use is GPL licensed.
Somebody looked at Claude Code's binaries and found that Anthropic is testing its own app platform called antspace. Not sure why people are shocked; they've been cloning features of their API customers and adding them to their core products since day 1. It makes sense that they'll take user data and do the same for Claude Code, copying features or buying up what developers are using so they can lock people into a stack. These are the same people who trained on every scrap of data they could get their hands on and now complain about others distilling models from their output.
https://x.com/AprilNEA/status/2034209430158619084
Ironically, this type of stuff really makes me doubt their AGI claims. Why would they bother with any of it if they were confident of having AGI within the next few years? They would be focused on replacing entire industries and wouldn't even make their models available at any price. Why bother with a PaaS if you think you are going to replace the entire software industry with AGI?
You know, until today I dismissed all those concerns about uv being a commercial product, but now I am very concerned.
Microsoft has been a reasonable steward of GitHub and npm, all things considered, but I don't feel so good about OpenAI. This makes me reconsider my use of uv, and of Python as a whole, because uv did a lot to stop the insanity. Not least, Microsoft has been around since 1975, whereas I could picture OpenAI vanishing instantly in a fit of FOMO.
In the many darker timelines that one can extrapolate, capturing essential tech stacks is just a precursor to capturing hiring.
Once we start seeing OpenAI and Anthropic getting into certifications and testing, they'll quickly become the gold standard. They won't even need to actually test anyone. People will simply consent to having their chat interactions analyzed.
The models collect more information about us than we could ever imagine because, definitionally, those features are unknown unknowns for humans. For ML, the gaps in our thinking carry far richer information about us than our actual vocabularies, topics of interest, or stylometric idiosyncrasies.
If it ever goes bad, well, I hope that's an impetus for new open source projects to be started, with improvements over, and lessons learned from, incumbent technologies baked in right at v1.
It's even worse than that: Astral took over python-build-standalone, and uv uses its Python builds on all platforms.
That means OpenAI will be able to do whatever they want to your Python binaries, including every Python binary in your deployments, with whatever telemetry they want to instrument into the builds.
Also notable: after Anthropic's acquisition of Bun, the vast majority of Jared's communication and apparent effort on Twitter shifted to fixing issues with Claude Code.
I imagine many of these efforts benefitted the community as a whole, but it does make sense that the owners will have these orgs at least prioritize their own internal needs.
Hmm, from my perspective, an essential step to legitimize "vibecoding" in an enterprise setting is to have a clearly communicated best practice - and to have the LLM be hyper-optimized for that setting.
Like having a system prompt which takes care of the project structure, languages, libraries, etc.
It's pretty much the first step to replacing devs, which is their current "North Star" (to be changed to the next profession after)
Once they've nailed that, the devs become even more of a tool than they already are (from the perspective of the enterprise).
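A minimal sketch of what such an enterprise system prompt might contain (everything here is hypothetical - the repo name, tools, and conventions would vary per org):

```
You are working in the acme-billing repo. Follow these conventions:
- Language: Python 3.12, dependencies managed with uv; never invoke pip directly.
- Layout: application code in src/, tests in tests/, pytest as the only test runner.
- Libraries: use httpx for HTTP and pydantic for data models; do not add new
  dependencies without approval.
- Quality gates: lint and type checks must pass before proposing a commit.
```

The point is that once a prompt like this is standardized, the model's output becomes predictable enough for enterprise review processes.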
Honestly, for now they seem to be buying companies built around Open Source projects which otherwise didn't really have a good story to pay for their development long-term anyway. And it seems like the primary reason is just expertise and tooling for building their CLI tools.
As long as they keep the original projects maintained and those aren't just acqui-hires, I think this is almost as good as we can hope for.
(thinking mainly about Bun here as the other one)
I think the good news here is that since OpenAI is a zombie company at this point, this particular acquisition shouldn't be too concerning - and from what I've seen, Anthropic has been building out in a direction of increased specialization. That said, vertical integration is as much of a problem as it always was, and it'd be excellent to see some sane merger oversight from the government.
Stop using MIT-licensed software run by small VC-backed operations if you value stability. They are risky and often costly Trojan horses.
But how does this work out in the long run, in the case of AGI?
If AGI becomes available, especially at the local and open-source level, shouldn't all of this be democratized - meaning the AGI can simply roll out the tooling you need?
After all, AGI is what all these companies are chasing.
Vercel did this as well. I called it "ecosystem capture" but I like "means of production" too.
These companies are telling us software development is over. They are positioning themselves as the means of production. You want to build anything, you do it through them. And since "software is solved," this is not a software acquisition but a user acquisition.
It never made sense to have devs all over the world doing the same task with tiny variations. Centralization was inevitable. LLMs might have been a step change, but the trajectory was already set.
If it becomes too antagonistic, people will change. The desire to build things is larger than any given iron fist du jour. Just ask Oracle or IBM.
"Bearded German philosopher" once again being uncannily applicable to 21st century happenings...
The risk is one of them buys it and pulls an Oracle / Java (Sun) stunt.
It's open source. Forking it is an option. And with AI, one-shotting a replacement is an option as well. Or having it make changes to your fork. Just because you can, doesn't mean you should do that of course.
The point is that the value of the accumulated know-how and skill that led to things like uv isn't lost even if the worst were to happen to the company or the people behind it. I don't think there are many signs of that.

I don't think they had much of a revenue model around providing OSS tools; that's a problem for a lot of VC-funded companies. An exit like this is as good as it gets: OpenAI now pays them to do their thing, investors are probably pretty happy, and we maybe get to skip the enshittification that seems inevitable with the whole IPO/hedge-fund circus that many VC-funded OSS companies end up subjected to.

Problem solved. Congratulations to the team. They can continue doing what they love in a company that clearly loves all things Python. And who knows what they can do next, freed from having to worry about making investors happy?
Big companies and OSS have always had symbiotic relationships. Some of the largest contributors to open source are people working at big companies, and OpenAI fits this tradition beautifully. Most big software companies actively contribute to OSS projects that are relevant or important to them - even very secretive companies like Apple, or profit-focused sharks like Oracle. Google, Meta, IBM: there are very few large software companies that aren't doing it. Without this very large-scale corporate sponsorship, OSS would just be a niche thing. Yes, there are a lot of small projects - I have a few of my own - but most of the big ones have for-profit businesses behind them.
The real meta question is of course if we still need a lot of the people centered development tooling when AIs are starting to do essentially all of the heavy lifting in terms of coding. I think we might need very different tools soon.
This is the logical conclusion for most open source tools in a capitalist economy; it's been this way for decades.
Equivalent or better tools will pop up eventually. Heck, if AI is so fantastic, then you could just make one of your own - be the change you want to see in the world, right?
These are MIT/Apache-2.0 licensed. Sure, they can buy them and influence the direction, but they can't prevent forks if they stray from what users want.
Of course they're trying to capture existing tech stacks. The models themselves are plateauing (most advancement is coming from the non-LLM parts of the software), and they took too much VC money, so they need to make some of it back. Gobbling up wafers, software, etc. is the new plan for spending the money and trying to prevent catastrophic losses.
>As they gobble up previously open software stacks, how viable is it that these stacks remain open?
My question is if them gobbling up the alternatives will make room for other alternatives to grow.
>> "means of production" in software
Ah yes, it was impossible to write software before these companies existed, and the only way to write software is via the products from these companies. They sure do control the "means of production".
Not a concern: LLMs can fork all these products while they're still open licensed.
As the cost of building software trends towards $0, I don't see how one can realistically own "the" means of production rather than "a" means. Any competitor can generate a similar product cheaply.
Explain to me how this is any different than Microsoft, Blackrock, Google, Oracle, Berkshire or any other giant company acquiring their way to market share?
If our corporate overlords are gonna buy up all that is good, I'd rather it have been Anthropic and not that weirdo humans-need-food-and-care-for-inference-so-LLMs-aren't-that-power-hungry Sam Altman. Man, that guy is weird.
Oh well. They’ll hopefully get options and make millions when the IPO happens. Everyone eventually sells out. Not everyone can be funded by MIT to live the GNU maximalist lifestyle.
They’re not ahead on code though, and they recently announced giving up on rich media AI entirely lmao
The fuck does OpenAI have to offer?
Nothing I need.
The only reason Gemini is the best is UX; really, running my own Mistral 7B is more than fine.
Because slow ass Gemini is still a slightly more convenient experience I use that.
Nobody OWNS, nor will own, the means of what is essentially thought. It's such a silly idea I wonder if it's propaganda.
It’s a small tool shop building a tiny part of the Python ecosystem; let’s not overstate their importance. They burned through their VC money and needed an exit, and CLI toolchains are hyped now for LLMs, but this mostly sounds like an acquihire to me. Dev tools are among the hardest things to monetize, with very few real winners, so good for them to get a good exit.