In my opinion, LLMs are intellectual property theft, just as if I started distributing copies of books. This substantially reduces the incentive to create new IP.
All written text, artwork, etc. needs to come imbued with a GPL-style license: if you train your model on this, your weights and training code must be published.
Tailwind Labs relied on a weird monetization scheme: revenue was proportional to the pain of using the framework. The sudden improvement in getting the desired UI without relying on pre-built templates killed that business.
There are many initiatives in a similar spot: improving the experience of using Next.js would hurt Vercel. Making GitHub Actions runners more reliable, stable, and economical would hurt Microsoft. Improving access to compute power would hurt Amazon, Microsoft, and Google. Improving control and freedom over your device would hurt Apple and Google.
Why should we be sympathetic to the middleman again?
If suddenly CSS became pleasant to use, Tailwind would be in a rough spot. See the irony?
"Give everything away for free and this people will leave technology", geohot said something like this and I truly appreciate. Technology will heal finally
>"Value is shifting to operations: deployment, testing, rollbacks, observability. You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running"
I've been doing exactly that since AI came out :-D
You absolutely can prompt your way to 3.5 nines of uptime (even more), but you need to know what you're doing and correct the model as you go.
Even very well-aligned models like Opus will set traps in your infrastructure. For example, you tell it to write a FluxCD implementation of some application in your k8s cluster, following your conventions and best practices described in some Markdown files. And it does this, very nicely. But unless you tell it every detail in advance, it will do something extremely stupid midway through.
For example, let's say you have a database and the app needs a new instance (with GitOps). It adds the new DB and it gets created, but instead of using a tool that already exists (or proposing one if it doesn't) to sync the DB access credentials from the DB namespace to the app namespace, it will read the credential, encrypt it, and store a copy in the app namespace.
What's the problem with that? Well, none, unless you rotate those credentials. In which case your app will stop working, possibly after you tested it and decided it's good enough to use seriously, despite having an HA DB (see the sketch below).
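To make the trap concrete, here is a minimal sketch of the one-time copy a model tends to write, assuming the pre-1.0 @kubernetes/client-node API; the secret and namespace names are hypothetical placeholders:

```ts
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const core = kc.makeApiClient(k8s.CoreV1Api);

async function copyDbSecret() {
  // Read the operator-managed credential from the DB namespace...
  const { body: src } = await core.readNamespacedSecret('app-db-creds', 'databases');

  // ...and write a detached copy into the app namespace. This is the trap:
  // when the operator rotates the source secret, this copy silently goes
  // stale and the app starts failing auth. A proper sync tool would watch
  // the source secret and propagate every rotation instead.
  await core.createNamespacedSecret('app', {
    metadata: { name: 'app-db-creds', namespace: 'app' },
    data: src.data,
  });
}

copyDbSecret().catch(console.error);
```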
There are a dozen things like this in each "AI project", but even so, with the time needed to review everything, it still saves a lot of time.
I wish I could upvote this more than once. The author gets it: you have to sell outcomes, not features. Seems like every open source company that doesn't market an outcome to buyers will face a similar threat. And this particular go-to-market strategy was "brittle" before AI.
To everything in this article that states what AI cannot do... I would like to add a big fat "YET!" and remind everyone to buckle up...
Right now it's convenient to look at Tailwind and discuss what they're doing wrong.
But eventually most other business models will be stress tested in the same way - sooner or later.
"The value got extracted, but compensation isn't flowing back. That bothers me, and it deserves a broader policy conversation.
What I keep coming back to is this: AI commoditizes anything you can fully specify. [...]
So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again."
This seems like a useful framing to be aware of, generally.
The internet has always kinda run on the ambiguity of "does the value flow back". A quote liberated from this article itself; all the content that reporters produce that's laundered back out through Twitter; 12ft.io; torrents; early YouTube; late YouTube; Google News; Apache/MIT vs GNU licenses; et cetera.
> AI didn't kill Tailwind's business. It stress tested it.
The earthquake didn’t destroy the building — it stress tested it.
I see "hackers" in these comments are now advocating to make "criminal contempt of business model" a serious thing, instead of a mere meme used to describe draconian copyright and patent laws.
Many FOSS business models relied, explicitly or implicitly, not on direct obfuscation or overcomplication, but on not making things easy. So you sell not the product but services around the product.
This is not exclusive to FOSS. It is also the basic model of most non-SaaS B2B, often referred to, for good reason, by the derogatory term 'consulting-ware'.
AI eats into these services, as it commoditizes them. 80%+ of what used to take a specialist for that product can now be handled by a good generalist + AI.
Leaving aside the business model impact for a second, getting rid of obfuscation incentives is an intrinsically good thing for a user community.
Dries' solution, offering operations as a SaaS or managed service, is meeting a need AI can't as easily match, but not exactly for the stated reason. What the client is actually buying is making something someone else's problem. And CIOs will always love this more than anything, if they can credibly justify it.
Where AI does impact this is in that latter part. If AI does significantly commoditize operational expertise, then the cost of operating in-house is (sometimes dramatically) lowered, and thus the justification gap for spending outside widens on the CIO side. How much this will drive a decision will vary widely between businesses and projects.
2-5 years from now, after the AI bubble bursts and they are trying to rent us $300 PCs because every component is 5x the price, we will look back at all the damage. We will remember the copyright law that was completely bypassed and ignored when it was convenient, after all those years of claiming evil China "stole" from companies (only to then pass laws where they can virtually steal anything they want, even utilizing private repositories on GitHub that they acquired by buying the site, completely ignoring the licenses)...
Or how Meta downloaded 70 TB+ of books and then got law enforcement to nuke Libgen and Z-Library to create a "moat". And when all our tools start dying or disappearing because the developers were laid off, since an AI "search engine" just regurgitates their work, THEN and only then will most people understand the mistake this was.
Let's not even begin with what Grok just recently did to women on X; completely unacceptable. I really, really wish for the EU to grow some and take a stand. It is clear that China is just as predatory as America, and both are willing to burn it all in order to get a non-existent lead in non-existent "technology" that snake oil salesmen have convinced 80-year-olds in government is the next "revolution".
In my opinion, governments are going to have to tax the big tech companies hard and distribute the money to funding bodies like the Arts Council. I can see a future "Tech Council" that open source software organisations can apply to for funding. It'll get to the point where every OSS developer has their own Community Interest Company, or joins with a few other devs to create a CIO (charitable incorporated organisation), in order to acquire funding.
Of course, now you're opening a whole other can of worms. In the UK, only 1 in 9 Arts Council funding applications is successful.
To call it a stress "test" is dismissive.
A stress test on a bank doesn't actually erase the revenue and financially jeopardize the bank.
Implementing layoffs is not a stress test.
Companies providing AI services should offer ads for the things the AI is using. And I don't mean "Tailwind could pay Google to advertise in Gemini", I mean "Google should be clearly and obviously linking back to Tailwind when it uses the library in its output"
They already do this sort of thing inside outputs from Deep Research, and possibly outside it. But the attribution should be less muted, inline, and recessed, and more "I have used Tailwind! Check out how to make it better [HERE](link)".
They should be working with the owners of these projects, especially ones with businesses, and directing traffic to them. Not everyone will go, and that's fine, but at least make the effort. The infrastructure is in place already.
And yes, right now this would not be across-the-board for every single library, but maybe it could be one day?
It's the same problem news sites have been facing for years because of Google News and Facebook. Their solution so far has been country-level bans on it (Canada).
This feels like the OpenSSL problem, where we probably do need some kind of industry organization to maintain these things. There's a chicken-and-egg problem: these AI companies need someone to keep maintaining Tailwind if they want it to keep working in their prompts.
Maybe that limits the ability of the head of Tailwind to run their own business and make more income, but something's gotta give.
> So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again.
Sounds like even more incentive for "managing" problems and creating business models around them instead of solving them.
Killed by AI or competitors offering Tailwind templates and UI kits at a much lower price or for free?
Tailwind Plus is available for a one-time payment that provides lifetime access to current and future components. With AI cutting off the flow of new buyers, revenue shrivels up much more quickly than it would have under a recurring subscription.
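A toy model (with made-up numbers) shows why: one-time revenue tracks the new-buyer flow directly, while subscription revenue from an existing base only erodes at the churn rate:

```ts
// All figures are invented for illustration; only the shape matters.
const price = 299;        // hypothetical one-time price
const monthlyFee = 29;    // hypothetical subscription price
const churn = 0.03;       // 3% of subscribers cancel each month
const newBuyers = 100;    // new buyers per month after AI cuts off the flow

let subscribers = 10_000; // hypothetical pre-existing subscriber base
for (let month = 1; month <= 12; month++) {
  const oneTime = newBuyers * price; // collapses instantly with the buyer flow
  subscribers = subscribers * (1 - churn) + newBuyers;
  const recurring = subscribers * monthlyFee; // declines only gradually
  console.log(`month ${month}: one-time $${oneTime}, recurring $${Math.round(recurring)}`);
}
```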
Framing it as a "conduit" disruption might make a lot of assumptions about the fundamental economic value of software in the future. In a world (whether near term or long term) where you can just ask the computer to make whatever software you want, what are the economics of retailing/licensing any software at all? Open source or otherwise?
So AI is attempting to replace SAP as the traditional way of testing whether your company is strong enough?
> Value is shifting to operations: deployment, testing, rollbacks, observability. You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.
I agree somewhat but eventually these can be automated with AI as well.
The root of the issue is that Tailwind was selling something that people can now recreate a bespoke version of in mere minutes using a coding agent. The other day I vibe coded a bespoke dependabot/renovate replacement in an hour. That was way easier than learning any of these tools and fighting their idiosyncrasies that don’t work for me. We no longer need Framer because you can prompt a corporate website faster than you can learn Framer. It is, fortunately or unfortunately, what it is and we all have to adapt.
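For a sense of scale, a bespoke freshness check in the spirit of that dependabot/renovate replacement can be a few dozen lines. This sketch only reads package.json and asks the public npm registry for the latest version of each dependency; everything beyond that (PRs, semver policy) is the part you would tailor to your own needs:

```ts
import { readFileSync } from 'node:fs';

type PackageJson = { dependencies?: Record<string, string> };

async function checkOutdated(path = './package.json') {
  const pkg: PackageJson = JSON.parse(readFileSync(path, 'utf8'));
  for (const [name, wanted] of Object.entries(pkg.dependencies ?? {})) {
    // The npm registry exposes the latest published version at this URL.
    const res = await fetch(`https://registry.npmjs.org/${name}/latest`);
    const { version } = (await res.json()) as { version: string };
    // Crude comparison: flag anything whose range doesn't mention the latest.
    if (!wanted.includes(version)) {
      console.log(`${name}: wanted ${wanted}, latest ${version}`);
    }
  }
}

checkOutdated().catch(console.error);
```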
I want to be clear, it sucks for Tailwind for sure and the LLM providers essentially found a new loophole (training) where you can smash and grab public goods and capture the value without giving anything back. A lot of capitalists would say it’s a genius move.
Calling it a stress test seems a bit off. Would we say that invention of lightbulbs was a "stress test" for candle related business models? Or would we just say that business models had to change in response to current events.
I wonder how much impact shadcn had on their business.
I'd note a couple of things:
Not to nitpick, but if we are going to discuss the impact of AI, then I'd argue "AI commoditizes anything you can specify" is not broad enough. My intuition is "AI commoditizes anything you can _evaluate/assess_." For software automation we need reasonably accurate specifications as input, and we can more or less predict the output. We spend a lot of time managing the ambiguity on the input side. With AI, that is flipped.
In AI engineering you can move the ambiguity from the input to the output. For problems where there is a clear and cheaper way of evaluating the output, the trade-off of moving the ambiguity is worth it. Sometimes we have to reframe the problem as an optimization problem to make it work, but it's the same trade-off.
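A minimal sketch of that flipped shape, where `generateCandidate` is a hypothetical stand-in for any LLM call and the evaluator is the cheap, deterministic part:

```ts
// Hypothetical model call: the ambiguous, nondeterministic step.
async function generateCandidate(spec: string): Promise<string> {
  // ... call your model of choice with `spec` here ...
  return '/* model output */';
}

// Cheap, deterministic check: does it parse, type-check, pass tests, etc.
function evaluate(candidate: string): boolean {
  return candidate.length > 0;
}

// Ambiguity lives in the output; the evaluator resolves it by rejection.
async function solve(spec: string, maxAttempts = 5): Promise<string | null> {
  for (let i = 0; i < maxAttempts; i++) {
    const candidate = await generateCandidate(spec);
    if (evaluate(candidate)) return candidate;
  }
  return null; // no candidate survived evaluation
}
```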
On the business model front (I am not talking specifically about Tailwind here): AI is simply amplifying systemic problems most businesses just didn't acknowledge for a while. SEO died the day Google decided to show answer snippets a decade ago. Google as a reliable channel died the day Google started Local Services Ads. Businesses that relied on those channels were already bleeding slowly; AI just made it sudden.
On the efficiency front, most enterprises could have been so much more efficient if they could actually build internal products to manage their own organizational complexity. They just couldn't, because money was cheap, so the ROI wasn't quite there, and even if the ROI was there, most of them didn't know how to build a product for themselves. Just saying "AI first" is making the ROI work, for now, so everyone is claiming AI efficiency. My litmus test is fairly naive: if you are growing and you found AI efficiency, that's great (e.g. FB), but if you're not growing and the only thing AI could do for you is "efficiency", then there is a fundamental problem no AI can fix.
Business and time are business-model stress tests.
> You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.
Uh, yeah you can. There's a whole DevOps ecosystem of software and cloud services (accessible via infrastructure-as-code) that your agents can use to do this. I don't think businesses that specialize in ops are safe from downsizing.
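As one concrete flavor of "accessible via infrastructure-as-code": monitoring that alarms on error spikes is a few lines of AWS CDK that an agent can plausibly generate and iterate on. The metric choice and thresholds below are illustrative assumptions, not a recommendation:

```ts
import { App, Stack, Duration } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';

const app = new App();
const stack = new Stack(app, 'UptimeStack');

// 5xx responses from a load balancer, summed per minute (placeholder metric).
const serverErrors = new cloudwatch.Metric({
  namespace: 'AWS/ApplicationELB',
  metricName: 'HTTPCode_Target_5XX_Count',
  statistic: 'Sum',
  period: Duration.minutes(1),
});

// Alarm on three consecutive bad minutes; wiring this to paging is separate.
new cloudwatch.Alarm(stack, 'FiveXxAlarm', {
  metric: serverErrors,
  threshold: 10,
  evaluationPeriods: 3,
  alarmDescription: 'Sustained 5xx errors during peak traffic',
});

app.synth();
```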
They could build something like Lovable but with better design/frontend defaults.
Maybe they just over-hired for their business model.
One of the biggest shortcomings of Open Source was that it implicitly defaulted to a volunteer model and so financing the work was always left as an exercise for the reader.
Hence (as TFA points out) open source code from commercial entities was just a marketing channel and a source of free labor... err, community contributions... for auxiliary offerings that actually made money. This basic economic drive is totally natural but has created dynamics that led to suboptimal behavior and controversy multiple times.
For instance, a favorite business model is charging for support. Another one was charging for a convenient packaging or hosting of an “open core” project. In either case, the incentives just didn’t align towards making the software bug-free and easily usable, because that would actively hamper monetization. This led to instances of pathological behavior, like Red Hat futzing with its patches or pay-walling its source code to hamper other Linux vendors.
Then there were cases where the "open source" branding was used to gain market share, but licenses restricted usage in lucrative applications, like Sun with Java. But worse, often a bigger fish swooped in to take the code, as they were legally allowed to, and repackage it in their own products, undercutting the original owners. E.g. Google worked around Sun's licensing restrictions to use Java completely for free in Android. And then, ironically, Android itself was marketed as "open source" while its licensing came with its own extremely onerous restrictions to prevent true competition.
Or all those cases when hyperscalers undercut the original owners’ offerings by providing open source projects as proprietary Software as a Service.
All this in turn led to all sorts of controversies, like lawsuits or companies rug-pulling their communities with a license change.
And aside from all that, the same pressures regularly led to the “enshittification” of software.
Open Source is largely a socialist (or even communist) movement, but businesses exist in a fundamentally capitalistic society. The tensions between those philosophies were inevitable. Socialists gonna socialize, but capitalists gonna capitalize.
With AI, current OSS business models may soon be dead. And personally I would think, to the extent they were based on misaligned incentives or unhealthy dynamics, good riddance!
Open Source itself will not go away, but it will enter a new era. The cost of code has dropped so much that monetizing will be hard. But by the same token, having invested far fewer resources creating it, people will be encouraged to release their code for free. A lot of it will be slop, but the quantity will be overwhelming.
It’s not clear how this era will pan out, but interesting times ahead.
> You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.
This is completely wrong. Agents will not just be able to write code, like they do now, but will also be able to handle operations and security, continuously checking and improving the systems, tirelessly.
I know for a fact that all SOTA models have Linux source code in them, intentionally or not, which means they should follow the GPL license terms and open-source the parts of the models that are derivative works of it.
Yes, this is indirectly hinting that during training the GPL-tainted code touches every single floating point value in the model, making it a derivative work; even the tokenizer isn't immune to this.
>Open Source was never the commercial product. It's the conduit to something else.
This is correct. If you open source your software, then why are you mad when companies like AWS, OpenAI, etc. make tons of money off it?
Open Source software is always a bridge that leads to something else to commercialize. If you want to sell software, then pick Microsoft's model and sell your software as closed source. If you get mad and cry about making money to sustain your open source project, then pick the right license for your business.
This goes way deeper than open source businesses.
Imagine I’m a company just big enough to entertain adopting Salesforce for CRM. It’s a big chunk of money, but our sales can absorb the pain.
With GenAI, as an enterprise architect, one of the options I'm now recommending is to build a custom CRM for our business and skip the bloated enterprise SaaS platform.
We can gather CRM requirements fast, build fast, and deliver integrations to our other systems incredibly fast using tools like Claude Code. Our sales people can make feature requests and we can dogfood them in a few days, maybe hours.
GenAI development tools are rapidly changing how I think about enterprise software development.
If your core business is software-based offerings, your moat has been wiped out.
A handful of senior engineers can replicate any SaaS, save licensing costs, and build custom applications that were too risky in the past.
The companies that recognize what is happening and adapt will win.