A lot of people in this thread are down on AI, but I'm watching the industry slip over the line of trust with these latest frontier models. GPT 5.5 is the first model good enough for me to just let rip.
Every jira ticket I see now has acceptance criteria, reproduction steps, and detailed information about why the ticket exists.
Every commit message now matches the repo style, and has detailed information about what's contained in the commit.
Every MR now has detailed information about what's being merged.
Every code base in the teams around me now has 70 to 90%+ code coverage.
Every line of code now comes with best practices baked in, helpful comments, and optimized hot paths.
I regularly ship four features at a time now across multiple projects.
The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.
People keep screaming that tech debt is going to pile up, but I think it's going to be exactly the opposite. Software is going to pile up because developing it is now cheap.
Most code before LLMs sucked. Most projects I onboarded onto were massive balls of undocumented spaghetti, written by humans. The floor has been raised significantly as to what bad code can even look like, and fixing issues is now basically free if your company is willing to shell out for tokens.
I agree with most of this, I just have sort of turned a blind eye to what the code actually probably looks like. Reviews are rapid, and I’ll admit I do feel like I’m betraying my inner programmer by just optimizing directly against the claims of token bot. But the way I see it, as long as the numbers don’t lie I’m okay with the process.
> I regularly ship four features at a time now across multiple projects.
Many people are missing the fact that LLMs allow ICs to start operating like managers.
You can manage 4 streams now. Within a couple years, you may be able to manage 10 streams like a typical manager does today.
IME, LLMs don't speed you up that much if 1) you're already an expert at what you're doing (inherently not scalable), 2) you're only working on one thing (doesn't make sense when you can manage multiple streams), or 3) you're doing something LLMs are particularly bad at (not many remaining coding tasks, but definitely still some).
Everyone talks about productivity as if that is the only metric that matters in the business.
> The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.
I wonder about the hallucination. Reading someone's writing doesn't take all that long.
> I regularly ship four features at a time now across multiple projects.
Can that happen without you? I would assume this is the next step. I don't find it either good or bad, but I'm genuinely curious where this all goes.
> GPT 5.5 is the first model good enough for me to just let rip.
You know this is the exact same thing people said when Opus 4.6 came out, right?
That makes it hard to believe because it's the same "last week's model was so much behind you can't even comprehend" meme that's been going on throughout last year.
More info dumped into tickets and projects is great for understanding, for both people and LLMs. But hopefully it's not LLM generated.
I think numerically this is the exception - and it's a fantastic exception! But in practice what I've seen is things getting worse because people still just aren't very good at thinking, so the great-looking Jira ticket actually turns out to be nonsensical in some subtle way, whereas before it was just lacking in some obvious way that could immediately be called out and had an obvious solution.
I.e. it's making good output better, but it's making mediocre output (which is most output) worse by adding volume and the appearance of quality, creating a new layer of FUD, stress, tedium, and unhappiness on top of the previously more-manageable problems that come with mediocre output.
I'm still seeing this even with the newest models, because the problem is the user, not the model - the model just empowers them to be even worse, in a new and different way.
> Software is going to pile up because developing it is now cheap.
https://somehowmanage.com/2020/10/17/code-is-a-liability-not...
For people who like to tick boxes, which is essentially most of the above, AI is welcome. That includes managers.
It still has nothing to do with software engineering. All good code was written by humans. AI took it, plagiarized it, laundered it, and repackaged it in a bloated form.
Whenever I look deeply at an AI plagiarized mess, it looks like it is 90% there but in reality it is only 50%. Fixing the mess takes longer than writing it oneself.