> In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them.
What a wild and speculative claim. Is there any source for this information?
The line right after this is much worse:
> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.
Wow, finance people certainly don't understand programming.
I completely agree. This guy is way outside his area of expertise. For those unaware, Howard Marks is a legendary investment manager with an impressive decades-long track record, and these "insights" letters are themselves legendary in the money management business. Personally, I would say his wisdom is one notch below Warren Buffett's. I am sure he is regularly asked (badgered?) by investors what he thinks about the current state and future of AI (LLMs) and how it will impact his investment portfolio. The audience of this letter is investors (real and potential), as well as other investment managers.
It's not. And if your team is doing this you're not "advanced."
Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.
Which is great! But it's not a +1 for AI, it's a -1 for them.
I have heard many software developers confidently tell me "pilots don't really fly the planes anymore," and, well, that's patently false. But jetliner autopilots do handle much of the busywork during cruise, and sometimes during climb-out and approach. And they can sometimes land themselves, just not efficiently enough for a busy airport.
Is it not sort of implied by the stats later: "Revenues from Claude Code, a program for coding that Anthropic introduced earlier this year, already are said to be running at an annual rate of $1 billion. Revenues for the other leader, Cursor, were $1 million in 2023 and $100 million in 2024, and they, too, are expected to reach $1 billion this year."
Surely that revenue is coming from people using the services to generate code? Right?
I'm on a team like that, and I see it happening at more and more companies around me. Maybe "many" is doing a lot of heavy lifting in the quoted text, but it is definitely happening.
Probably their googly-eyed vibe coder friend told them this and they just parroted it.
If true I’d like to know who is doing this so I can have exactly nothing to do with them.
I've had Claude Code compose complex AWS infrastructure (using Pulumi IaC) that mostly works from a one-shot prompt.
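For a sense of what that looks like, here is a deliberately tiny, illustrative sketch, nowhere near the "complex" setups I mean; the resource name is made up, and Pulumi supports several languages, Go being just one of them:

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Provision a bucket and export its name (placeholder resource,
		// purely for illustration).
		bucket, err := s3.NewBucketV2(ctx, "data-bucket", nil)
		if err != nil {
			return err
		}
		ctx.Export("bucketName", bucket.Bucket)
		return nil
	})
}
```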
Yes and no. There is the infamous quote from Microsoft about roughly 30%(?) of their code now being written by AI. And technically, it's probably not such a wild claim in certain areas. AI is very good at barfing up common, popular patterns, and companies have a huge amount of patternized software: UIs, tests, documentation, marketing fluff. So it's quite easy to "outsource" that kind of grunt work if the AI is good enough at it.
But to say that they don't write any code at all is really a stretch. Maybe I'm not good enough at AI-assisted and vibe coding, but code quality always seems to drop off hard the moment you step a bit outside the common patterns.
Here's the lede they buried:
>The key is to not be one of the investors whose wealth is destroyed in the process of bringing on progress.
They are a VC group. Financial folks. They are working largely with other people's money. They simply need not hold the bag to be successful.
Of course they don't care if it's a bubble or not; at the end of the day, they only have to make sure they aren't holding the bag when it all implodes.
Wow, reading these comments, I feel like I've entered a parallel reality. My job involves implementing research ML and I use it literally all the time; it's very fascinating to see how many people have such strong negative reactions. As long as you are good at reviewing code, spec-ing carefully, and making atomic changes, why would you not be using this basically all the time?
Seen it first-hand: scan your codebase, plan an extension or a rewrite or both, iterate with some hand-holding, and off you go. And it was not even an advanced developer driving the feature (which is concerning).
I think he might be misrepresenting it a bit, but from what I've seen every software company I know of heavily uses agentic AI to create code (except some highly regulated industries).
It has become a standard tool, in the same way that most developers code with an IDE, most developers use agentic AI to start a task (if not to finish it).
It's often true. But not when it's easier to code than to explain.
No, but there are huuuuuge incentives for the people publishing such statements.
Everyone is doing this extreme pearl-clutching around the specific wording. Yeah, it's not 100% accurate for many reasons, but the broader point was about employment effects: it doesn't need to completely replace every single developer to have a sizable impact. Sure, it's not there yet and it's not particularly close, but can you be certain that it will never get there?
Error bars, folks, use them.
I just did a review, and 16% of our committed production code was generated by an LLM. Almost 80% of our code comments are LLM-generated.
This is mission-critical robotics software.
I only write around 5% of the code I ship, maybe less. For some reason when I make this statement a lot of people sweep in to tell me I am an idiot or lying, but I really have no reason to lie (and I don't think I'm an idiot!). I have 10+ years of experience as an SWE, I work at a Series C startup in SF, and we do XXMM ARR. I do thoroughly audit all the code that AI writes, and often go through multiple iterations, so it's a bit of a more complex picture, but if you were to simply say "a developer is not writing the code", it would be an accurate statement.
Though I do think "advanced software team" is kind of an absurd phrase, and I don't think there is any correlation with how "advanced" the software you build is and how much you need AI. In fact, there's probably an anti-correlation: I think that I get such great use out of AI primarily because we don't need to write particularly difficult code, but we do need to write a lot of it. I spend a lot of time in React, which AI is very well-suited to.
EDIT: I'd love to hear from people who disagree with me or think I am off-base somehow about which particular part of my comment (or follow-up comment https://news.ycombinator.com/item?id=46222640) seems wrong. I'm particularly curious why when I say I use Rust and code faster everyone is fine with that, but saying that I use AI and code faster is an extremely contentious statement.
AI writes most of the code for most new YC companies, as of this year.
I'm on a team like this currently. It's great when everyone knows how to use the tools and spot/kill slop and bad context. Generally speaking, good code gets merged and MUCH more quickly than in the past.
source: me
I wrote 4000 lines of Rust code with Codex - a high-throughput WebSocket data collector.
Spoiler: I do not know Rust at all. I discussed possible architectures with GPT/Gemini/Grok (sync/async, data flow, storage options, ...), refined a design and then it was all implemented with agents.
Works perfectly, no bugs.
> What a wild and speculative claim. Is there any source for this information?
Not sure it's a wild speculative claim. Claiming someone had achieved FTL travel would fall into that category. I'd call it more along the lines of exaggerated.
I'll make the assumption that what I do is "advanced" (not React todo apps: Rust, Golang, distributed systems, network protocols...), and if so, then I think it's pretty much accurate.
That said, this is only over the past few months. For the first few years of LLM-dom I spent my time learning how they worked and thinking about the implications for our understanding of how human thinking works. I didn't use them except to experiment. I thought my colleagues who were talking in 2022 about how they had ChatGPT write their tests were out of their tiny minds. I heard stories about how the LLM hallucinated API calls that didn't exist. Then I spent a couple of years in a place with no easy code and nobody in my sphere using LLMs. But then around six months ago I began working with people who were using LLMs (mostly Claude) to write quite advanced code, so I did a "wait what??..." about-face and began trying to use it myself. What I found so far is that it's quite a bit better than I am at various unexpected kinds of tasks (finding bugs, analyzing large bodies of code then writing documentation on how it works, looking for security vulnerabilities in code), or at least it's much faster. I also found that there's a whole art to "LLM Whispering" -- how to talk to it to get it to do what you want. Much like with humans, but it doesn't try to cut corners or use oddball tech that it wants on its resume.
Anyway, YMMV, but I'd say the statement is not entirely false, and surely will be entirely true within a few years.
It's not exactly wrong. Not since the advent of AI systems (a.k.a. compilers) have developers had to worry about code. Instead they type in what they want and the compiler generates the code for them.
Well, except developers have never had to worry about code: even in pre-compiler days, coders (a different job, done by a different person) were responsible for producing the code. Development has always been about writing down what you want and letting someone or something else generate the code for you.
But the transition from human coders to AI coders happened like, what, 60-70 years ago? Not sure why this is considered newsworthy now.
At $WORK, we have a Slack-integrated bot that sets up minor PRs: adjusting Terraform, updating endpoints, adding simple handlers. It does pretty well.
Also, in a pure prose-to-code case, Claude wrote up a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess that could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now, and the calculus may be shifting on my AI usage. However, the following day, my colleague needed a nearly identical temporary tool. A 45-minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
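For anyone wondering what "managing goroutines and waitgroups well" and "gracefully killed" mean concretely, the shape I expected was roughly this (an illustrative sketch only, not the actual utility; the worker count and job type are made up):

```go
package main

import (
	"context"
	"fmt"
	"os/signal"
	"sync"
	"syscall"
)

func main() {
	// Cancel the context on SIGINT/SIGTERM so workers can exit cleanly.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	jobs := make(chan int)
	var wg sync.WaitGroup

	// Small worker pool: each worker drains jobs until the channel
	// closes or the context is cancelled.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case job, ok := <-jobs:
					if !ok {
						return
					}
					fmt.Printf("worker %d migrating record %d\n", id, job)
				}
			}
		}(i)
	}

	// Feed work, stopping early if shutdown was requested.
producer:
	for j := 0; j < 100; j++ {
		select {
		case <-ctx.Done():
			break producer
		case jobs <- j:
		}
	}
	close(jobs)

	wg.Wait() // wait for in-flight work before exiting
}
```

The point is just that the producer and the workers share a cancellable context and a WaitGroup, so a Ctrl-C lets in-flight work finish instead of dropping records mid-migration.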
I am doing a hybrid approach, really. I write much of my scaffolding, I write example code, I modify quick things the AI made to be more like I want, and I set up guardrails and some tests, then have the AI go to town. Results are mixed but still trending up.
FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.