The copyright angle is the most underrated part of this story. Anthropic built their models on other people's code under the fair use argument, but the moment their own code leaks they reach for DMCA takedowns. You can't have it both ways. The clean room reimplementations are the natural consequence of the legal framework they themselves advocated for.
I wonder what happened to the person that wrote "Coding as Creative Expression" (https://build.ms/2022/5/21/coding-as-creative-expression/)?
I'm not (just) being glib. That earlier article displays some introspection and thoughtful consideration of an old debate. The writing style is clearly personal, human.
Today's post is not so much. It has LLM fingerprints on it. It's longer, there are more words. But it doesn't strike me as having the same thoughtful consideration in it. I would venture to guess that the author tried to come up with some new angles on the news of the Claude Code leak, because it's a hot topic, and jotted some notes, and then let an LLM flesh it out.
Writing styles of course change over time, but looking at these two posts side by side, the difference is stark.
I personally found it really amusing how they weaponized the legal system to DMCA all the Claude Code source repositories. Code ingested into the model supposedly isn't protected, but the code they produce apparently is, even though by the very legal definition they invoke in court, computer-generated code cannot be copyrighted. That's one of their primary arguments in ongoing cases.
> It should serve as a warning to developers that the code doesn’t seem to matter, even in a product built for developers.
Code doesn't matter IN THE EARLY DAYS.
This is similar to what I've observed over 25 years in the industry. In a startup, the code doesn't really matter; the market fit does.
But as time goes on your codebase has to mature, or else you end up using more and more resources on maintenance rather than innovation.
> Many software developers have argued that working like a pack of hyenas and shipping hundreds of commits a day without reading your code is an unsustainable way to build valuable software, but this leak suggests that maybe this isn’t true — bad code can build well-regarded products.
The product hasn't been around long enough to decide whether such an approach is "sustainable". It is currently in a hype state and needs more time for that hype to die down and the true value to show up, as well as to see whether it becomes the 9th circle of hell to keep in working order.
I created Hyperlambda (https://hyperlambda.dev), so I spend a lot of time thinking about accidental complexity and implementation surface area. One thing leaks like this keep reminding me is that a surprising amount of software risk comes from packaging and delivery details rather than the main logic people spend all their time reviewing.
> But then the clean room implementations started showing up. People had taken Anthropic’s source code and rewritten Claude Code from scratch in other languages like Python and Rust.
Seems like the phrase "clean room" is the new "nonplussed"... how does this make any sense?
Seems equally valid to come out of this with the takeaway that code quality _does_ matter, because poor coding practices are what led to the leak.
Sure, the weights are where the real value lives, but if the quality is so lax they leak their whole codebase, maybe they are just lucky they didn’t leak customer data or the model weights? If that did happen, the entire business might evaporate overnight.
I don't understand why so many people feel the need to frame this as a battle between code and product. There is no battle. Coding/developing/software engineering is a skill, and like any other skill it has requirements and best practices that have to be followed in order to build a quality, maintainable, adaptable application that can stand the test of time. Product, features, marketing, all of that is the entrepreneurship side, and it relates to software only by shaping the requirements: not because product people think in requirements, but because requirements fall out naturally from the features they envision.

Just because programmers can write code doesn't mean they can ship good products. Just because a plumber can lay pipes doesn't mean he can run his own company or invent a new way of laying pipes. But I will tell you that a bad plumber who lays pipes without knowing how to connect, bend, or shield them will surely deliver an inferior product or service in the long term.

And by the way, the success of a company is measured over time. We'll see where Claude Code is in ten years, when the hype dips a little; then we can say "yeah, the code was bad, but everyone loved it and still uses it." They leaked their entire codebase online, and this guy says the code was bad but who cares. What world are we living in? The fact that anything got leaked at all is a serious breach of best practices and security, something a company that works with the DoD shouldn't be doing; at this point it could even be considered a national-security threat. I know mistakes happen, I make them all the time, but the 'best' companies should be nearly immune to mistakes, because the stakes are high. But of course, move fast and break things is more important. Am I wrong?
In my opinion, the “code is garbage” argument is a moot point. Anthropic is in the business of removing humans from the SDLC. As long as their models can understand and update the code they generate it can remain garbage. They’re not optimizing for human comprehension of the output. They don’t even want you looking at it. And eventually the models will get good enough that you won’t have to.
Code quality matters when it becomes the code you have to maintain after six months. I'm honestly surprised by how some features of Claude Code seem to be held together with gum and duct tape.
That being said, if you're just beginning and looking for your market fit, or pitching to investors with a flashy demo, it doesn't need to be an architectural miracle, in fact it will waste your time.
I feel the conclusions here are a bit thin.
Code quality tends to have an impact on more than just aesthetics - and Claude Code certainly feels like a buggy mess from an end user's perspective.
Of course people still use Claude Code, but that is certainly because of the underlying models first and foremost. Most products don't have such a moat and would not nearly see as much tolerance from end users. If the Max subscriptions could be used with other harnesses, I am sure Anthropic would have to compete harder on the quality of the harness (to be fair, most AI based tooling seems pretty alpha these days, but eventually things will stabilize).
Polish is not everything, clearly, but it is a factor, and I feel Claude Code is maybe the worst example to use here, as it doesn't at all generalize to most other products.
And this is why so much software today runs extremely hot.
Its creators clearly care not for the efficiency of how it is built, which translates directly into how it runs.
This blog post is effectively being apologetic about the fact that this is alright, since at least they got product market fit. Except Anthropic is never going to go back and clean up the mess once (if) they become profitable.
I doubt anyone will like how things will be in 5 years time if this trend of releasing badly engineered spaghetti continues.
I don't think it's vibe-coded garbage. Sure, the 3000-line print.ts is terrible, but there are some good patterns in there that were definitely prompted in by some experienced engineers: the feature flag setup, the `..I_VERIFIED_THIS_IS_NOT_PATH_OR_CODE` funny type hints, the overall structure. Just the usual signs this started as a PoC but quickly evolved into something much bigger. The codebase is a really interesting read.
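For anyone who hasn't seen the pattern, a type hint like `..I_VERIFIED_THIS_IS_NOT_PATH_OR_CODE` looks like a "branded type" guardrail. Here's a toy sketch of how such a guardrail works in TypeScript; the names and the check inside `markVerified` are my own illustration, not the leaked code:

```typescript
// Branded-type sketch: the string type carries a phantom marker, so a raw,
// unchecked string cannot be passed where a verified value is required.
type VerifiedDisplayText = string & { __I_VERIFIED_THIS_IS_NOT_PATH_OR_CODE: true };

function markVerified(s: string): VerifiedDisplayText {
  // Hypothetical verification logic; a real check would be more thorough.
  if (s.includes("/") || s.includes(";")) {
    throw new Error("looks like a path or code, refusing to brand");
  }
  return s as VerifiedDisplayText; // the cast is the single deliberate escape hatch
}

function render(text: VerifiedDisplayText): string {
  // Only branded values type-check here; render("raw") is a compile error.
  return `> ${text}`;
}

console.log(render(markVerified("hello world"))); // prints "> hello world"
```

The point of the shouty property name is social, not technical: anyone writing `as VerifiedDisplayText` by hand has to stare at the claim they're making, which is a cheap way to keep a fast-moving codebase honest.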
> The success of Claude Code and Cursor at the higher end of the market shows that even the people pickiest about their software (developers) will use your software regardless of how good the code is.
Seems wrong. Devs will whine, moan, and nitpick about even free software, but they can understand failure modes, navigate around bugs, and file issues on GitHub. The quality bar is 10-100x higher amongst non-techno-savvy folks and enterprise users that are paying for your software. They're far more "picky".
> So of course the first thing people did was point and laugh.
This just validates my theory that open-sourcing old code that people have sentimental attachments to, and that you won't ever make any money off of again, is actually a terrible idea.
Everything about this leak is a long list of arguments why you shouldn't ever open source anything.
We, the developer community, have really dropped the ball here.
Reminds me of a question that I asked a couple of years ago - https://news.ycombinator.com/item?id=37176689
Points from the article.
1. The code is garbage and this means the end of software.
Now try maintaining it.
2. Code doesn’t matter (the same point restated).
No, we shouldn’t accept garbage code that breaks e.g. login as an acceptable cost of business.
3. It’s about product market fit.
OK, but what happens after product market fit when your code is hot garbage that nobody understands?
4. Anthropic can’t defend the copyright of their leaked code.
This I agree with and they are hoist by their own petard. Would anyone want the garbage though?
5. This leak doesn’t matter.
I agree with the author but for different reasons - the value is in the models, which are incredibly expensive to train, not the badly written scaffold surrounding them.
We also should not mistake current market value for use value.
Unlike the author, who seems to have fully signed up for the LLM hype train, I don't see this as meaning code is dead. It's an illustration of where fully relying on generative AI will take you: to a garbage, unmaintainable mess that must be a nightmare to work with, for humans and LLMs alike.
AFAIK you can run Claude Code locally, but every single demo I see uses it exclusively with APIs. So are local models already good enough to be worth it, or is cloud models the only reasonable way to use Claude Code?
OP should expand on #1 and why he thinks it's garbage. Claude Code is the REPL harness Anthropic built; it can read, write, edit, and run bash. Pi, Gemini, and Codex do the same, but nobody calls them garbage. Where's the beef?
" The real value in the AI ecosystem isn’t the model or the harness — it’s the integration of both working seamlessly together. "
Wut? The value in the ecosystem is the model. Harnesses are simple. Great models work nearly identically in every harness
How likely is it that the broken release workflow that produced the leak was Claude's own work?
Claude Code proves you don't need quality code — you just need hundreds of billions of dollars to produce a best-in-class LLM and then use your legal team to force the extremely subsidised usage of it through your own agent harness. Or in other words, shitty software + massive moat = users.
Seriously, if Anthropic were like OpenAI and let you use their subscription plans with any agent harness, how many users would CC instantly start bleeding? They're #39 on Terminal-Bench and get beaten by a harness that provides a single tool: tmux. You can literally get better results by giving Opus 4.6 only a tmux session and having it do everything with bash commands.
It seems premature to make sweeping claims about code quality, especially since the main reason to desire a well architected codebase is for development over the long haul.
Who cares that the code is garbage? As the models get bigger and more powerful it will be trivial to fully refactor the whole codebase. It’s coming sooner than you think.
I think this misses the target.
First, the Twitter quote is standard toxic-clapback nonsense. Gambling makes billions and does not add any value. Even Facebook can argue it adds more value than gambling, so this one is a dud.
People use Claude Code because of Claude the model, not Claude the harness. Cursor, or a hacked-up agent loop using Opus or whatever, is about as good. The magic is in the model, not the harness. This isn't to say the harness doesn't do anything.
The other bit this misses is that yes, the product matters more than the code, but if the product burns battery/RAM/etc. doing nothing because the AI wrote crappy code, or something leaks, or there's a security issue, then that impacts the product.
99.9999% of consumers never give code a single thought.
Most corporations never give code a single thought.
In the race to market, quality always suffers, and with such high stakes, it should surprise no one that AI companies are vibe-coding their own slop.
> bad code can build well-regarded products.
Yes, exactly. Products.
It seems like all the engineers I've known, myself included, settle into this established dichotomy: engineers, who want to write good code and think a lot about user needs, and project managers/executives/salespeople, who want to make the non-negative numbers on accounting documents larger.
The truth is that to write "good software," you do need to take care, review code, not single-shot vibe code, and not let LLMs run rampant. The other truth is that good software is not necessarily a good product; the converse also holds: a bad product doesn't necessarily mean bad software. There's not really a correlation, as this article points out: terrible software can be a great product! In fact, if writing terrible software lets you shit out more features more quickly, you'll probably come out ahead in the business world over someone carefully writing good software but releasing more slowly. That's because the priorities and incentives of the business world are often in contradiction with the priorities and incentives of the human world.
I think this is hard to grasp for those of us who have been taught our whole lives that money is a good scorekeeper for quality and efficacy. In reality it's absolutely not. Money is Disney bucks recording who's doing Disney World in the most optimal way. Outside of Disney World, your optimal in-park behavior is often suboptimal for out-of-park needs. The problem is we've mistaken Disney World for all of reality, or, let Walt Disney enclose our globe within the boundaries of his park.
> The object which labor produces confronts it as something alien, as a power independent of the producer.
From a moral perspective, I would argue that this is still theft of IP, even if it's a "clean room reimplementation". The code carries valuable information about what works and what doesn't — knowledge that Anthropic had to discover through real work and iteration. It's the same as a Chinese factory duplicating a product: they skipped the entire R&D phase and saved time and money.