Opus 4.5 ate through my Copilot quota last month, and it's already halfway through it for this month. I've used it a lot, for really complex code.
And my conclusion is: it's still not as smart as a good human programmer. It frequently got stuck, went down wrong paths, ignored what I told it to do and did something wrong instead, or even repeated a previous mistake I had already corrected.
Yet in other ways, it's unbelievably good. I can give it a directory full of code to analyze, and it can tell me it's an implementation of Kozo Sugiyama's graph layout algorithm, the one dagre uses, and immediately identify the file with the error. That's unbelievably impressive. Unfortunately, it couldn't fix the error, which was one of the many errors it had itself made during previous sessions.
So my verdict is that it's great for code analysis, and it's fantastic for injecting some book knowledge on complex topics into your programming, but it can't tackle those complex problems by itself.
Yesterday and today I was upgrading a bunch of unit tests because of a dependency upgrade, and while it was occasionally very helpful, it also regularly got stuck. I got a lot more done than usual in the same time, but I do wonder if it wasn't too much. Wasn't there an easier way to do this? I didn't look for one, because every step of the way, Opus's solution seemed obvious and easy, and I had no idea how deep a pit it was getting me into. I should have been more critical of the direction it was pointing in.
What bothers me about posts like this is: mid-level engineers are not tasked with atomic, greenfield projects. If all an engineer did all day was build apps from scratch, with no expectation that others may come along and extend, build on top of, or depend on their work, then sure, Opus 4.5 could replace them. The hard thing about engineering is not "building a thing that works"; it's building it the right way, in an easily understood way, in a way that's easily extensible.
No doubt I could give Opus 4.5 "build me a XYZ app" and it would do well. But day to day, when I ask it "build me this feature", it uses strange abstractions, and it often requires several attempts on my part to get it done in the way I consider "right". A non-technical person might read that and go "if it works it works", but any reasonable engineer knows that's not enough.
Putting performance aside for now, since I only just started trying out Opus 4.5 and can't say too much yet: I neither hype nor hate AI at this point. It's simply useful.
Time will tell what happens, but if programming becomes "prompt engineering", I'm planning on quitting my job and pivoting to something else. It's nice to get stuff working fast, but AI just sucks the joy out of building for me.
Trying to not feel the pressure/anxiety from this, but every time a new model drops there is this tiny moment where I think "Is it actually different this time?"
Opus 4.5 has become really capable.
Not in terms of knowledge. That was already phenomenal. But in its ability to act independently: to make decisions, collaborate with me to solve problems, ask follow-up questions, write plans and actually execute them.
You have to experience it yourself on your own real problems and over the course of days or weeks.
Every coding problem I was able to define clearly enough within the limits of the context window, the chatbot could solve, and these weren't easy problems. It wasn't just about writing and testing code. It also involved reverse engineering and cracking encoding-related problems. The most impressive part was how actively it worked on problems in a tight feedback loop.
In the traditional sense, I haven’t really coded privately at all in recent weeks. Instead, I’ve been guiding and directing, having it write specifications, and then refining and improving them.
Curious how this will perform in complex, large production environments.
Author of the post here.
I appreciate the spirited debate and I agree with most of it - on both sides. It's a strange place to be where I think both arguments for and against this case make perfect sense. All I have to go on then is my personal experience, which is the only objective thing I've got. This entire profession feels stochastic these days.
A few points of clarification...
1. I don't speak for anyone but myself. I'm wrong at least half the time so you've been warned.
2. I didn't use any fancy workflows to build these things. Just used dictation to talk to GitHub Copilot in VS Code. There is a custom agent prompt toward the end of the post I used, but it's mostly to coerce Opus 4.5 into using subagents and context7 - the only MCP I used. There is no plan, implement - nothing like that. On occasion I would have it generate a plan or summary, but no fancy prompt needed to do that - just ask for it. The agent harness in VS Code for Opus 4.5 is remarkably good.
3. When I say AI is going to replace developers, I mean that in the sense that it will do what we are doing now. It already is for me. That said, I think there's a strong case that we will have more devs, not fewer. Think about it: if anyone with solid systems knowledge can build anything, the only way you can ship more differentiating features than me is to build more of them. That is going to take more people, not more agents. Agents can only scale as far as the humans who manage them.
New account because now you know who I am :)
I had a similar set of experiences with GPT 5.x over the holiday break, across somewhat more disparate domains: https://taoofmac.com/space/notes/2025/12/31/1830
I hacked together a Swift tool to replace a Python automation I had, merged an ARM JIT engine into a 68k emulator, and even got a very decent start on a synth project I’ve been meaning to do for years.
What has become immensely apparent to me is that even gpt-5-mini can create decent Go CLI apps provided you write down a coherent spec and review the code as if it was a peer’s pull request (the VS Code base prompts and tooling steer even dumb models through a pretty decent workflow).
GPT 5.2 and the codex variants are, to me, every bit as good as Opus but without the groveling and emojis - I can ask it to build an entire CI workflow and it does it in pretty much one shot if I give it the steps I want.
So for me at least this model generation is a huge force multiplier (but I’ve always been the type to plan before coding and reason out most of the details before I start, so it might be a matter of method).
I've noticed a huge drop in negative comments on HN when discussing LLMs in the last 1-2 months.
All the LLM coded projects I've seen shared so far[1] have been tech toys though. I've watched things pop up on my twitter feed (usually games related), then quietly go off air before reaching a gold release (I manually keep up to date with what I've found, so it's not the algorithm).
I find this all very interesting: LLMs don't change the fundamental drives needed to build successful products. I feel like I'm observing the TikTokification of software development. I don't know why people aren't finishing. Maybe they stop when the "real work" kicks in. Or maybe they hit the limits of what LLMs can do (so far). Maybe they jump to the next idea to keep chasing the rush.
Acquiring context requires real work, and I don't see a way forward to automating that away. And to be clear, context is human needs, i.e. the reasons why someone will use your product. In the game development world, it's very difficult to overstate how much work needs to be done to create a smooth, enjoyable experience for the player.
While anyone may be able to create a suite of apps in a weekend, I think very few of them will have the patience and time to maintain them (just like software development before LLMs! i.e. Linux, open source software, etc.).
[1] yes, selection bias. There are A LOT of AI devs just marketing their LLMs. Also it's DEFINITELY too early to be certain. Take everything I'm saying with a one-pound grain of salt.
Opus 4.5 really is something else. I've been having a ton of fun throwing absurdly difficult problems at it recently and it keeps on surprising me.
A JavaScript interpreter written in Python? How about a WebAssembly runtime in Python? How about porting BurntSushi's absurdly great Rust optimized string search routines to C and making them faster?
And these are mostly just casual experiments, often run from my phone!
A couple weeks ago I had Opus 4.5 go over my project and improve anything it could find. It "worked", but the architecture decisions it made were baffling, and the result had many, many bugs. I had to rewrite half of the code. I'm not an AI hater; I love AI for tests, finding bugs, and small chores. Opus is great for specific, targeted tasks. But don't ask it to do any general architecture, because you'll soon regret it.
I've been on a small adventure of posting more actively on HN since the release of Gemini 3, trying to stir debate around the more “societal” aspects of what's going on with AI.
Regardless of how much you value Claude Code technically, there is no denying that it has, or will have, a huge impact. If technology knowledge and development are commoditised and distributed via subscription, huge societal changes are going to happen. Imagine what will happen to Ireland if Accenture dissolves, or what will happen to the millions of Indians when IT outsourcing becomes economically irrelevant. Will Seattle become the new Detroit after Microsoft automates Windows maintenance? What about the hairdressers, cooks, lawyers, etc. who provided services for IT labourers/companies in California?
A lot of people here (especially Anthropic-adjacent) like to extrapolate the trends and draw conclusions up to the point where they say white-collar labourers will not be needed anymore. I would like these people to have the courage to take this one step further and connect that conclusion with the housing crisis, the loneliness epidemic, college debt, and the job market crisis for people under 30.
It feels like we are diving head first into a societal crisis of unparalleled scale, and the people behind the steering wheel are excited to push the accelerator pedal even further.
Sonnet 4.5 did it for me. Can't imagine coding without it now, and if you look at my comments from three months ago, you'll see I'm eating crow now. I easily hit >10x productivity with Sonnet 4.5 and Opus. I use Opus for my industry C and math work and Sonnet 4.5 for my SwiftUI side project.
I think the gap between Sonnet 4.5 and Opus is pretty small, compared to the absolute chasm between like gpt-4.1, grok, etc. vs Sonnet.
I had an app I wanted for over a decade. I even wrote a prototype 10 years ago. It was fine but wasn't good enough to use, so I didn't use it.
This weekend I explained to Claude what I wanted the app to do, and then gave it the crappy code I wrote 10 years ago as a starting point.
It made the app exactly as I described it the first time. From there, now that I had a working app that I liked, I iterated a few times to add new features. Only once did it not get it correct, and I had to tell it what I thought the problem was (that it made the viewport too small). And after that it was working again.
I did in 30 minutes with Claude what I had previously tried to do over a few hours.
Where it got stuck however was when I asked it to convert it to a screensaver for the Mac. It just had no idea what to do. But that was Claude on the web, not Claude Code. I'm going to try it with CC and see if I can get it.
I also did the same thing with a Chrome plugin for Gmail. Something I've wanted for nearly 20 years, and could never figure out how to do (basically sort by sender). I got Opus 4.5 to make me a plugin to do it and it only took a few iterations.
I look forward to finally getting all those small apps and plugins I've wanted forever.
Reading this blog post makes me want to rethink my career. Opus 4.5 is really good. I was recently solving a problem of my own by developing a software solution, and let me tell you, it was really good at it.
If I had done the same thing in the pre-LLM era, it would have taken me months.
So I decided to try the revered hands-off approach and have Claude Code create me a small tool in JS for *.dylib bundle consolidation on macOS.
I used the AskUserQuestionTool to flesh out my initial spec, and then Opus 4.5 created the tool according to that extensive and detailed spec.
It appeared to work out of the box.
Boy, was the code horrific: unnecessary recursion, unused variables, data structures built but never used, deep branch nesting, and weird code that is hard to understand because of how illogical it is.
And yes, it was broken on many levels and did not and could not do the job properly.
I then had to rewrite the tool from scratch, and overall I definitely spent more time spec'ing and understanding Claude's code than if I had just written the tool from scratch in the first place.
Then I tried again for a small tool I needed to run codesign in parallel: https://github.com/egorFiNE/codesign-parallel
Same thing. Same outcome, had to rewrite.
I second this article. I built twelve iOS/Mac apps in two weeks with Opus 4.5, and four of them are already in the App Store. I'm a Rails engineer and never had the time to learn Swift, but man does Opus 4.5 make that not even matter. It even handles entitlements, logo and splash screen generation, refactoring to remove dead code, edge case assessment and hardening, multiplatform app design, and more. I've yet to run into a use case it can't handle, for most general purposes. That said, I have found some mistakes it makes commonly (by common I mean almost every time): it puts iOS list line items in buttons, making them blue when they should not be; it doesn't set defaults for new data structure variables, which crashes the app when you change the data structure after the fact; and design consistency slips after the first shot (minor things like a white background instead of the grey background all the other screens already use, etc.). The one thing I know it can't do well (and no other model I know of can do well either) is ASTM bi-directional communications (we work with pathology analysers that use this 1995 frame-based communication standard), even when you load it up with the spec and supporting docs. I suspect this is due to a dearth of available codebases tackling this problem, given its niche and generally proprietary nature.
I see these posts left and right, but no one mentions the _actual_ thing developers are hired for: responsibility. You could already use whatever tools you liked to aid coding, copy-paste from StackOverflow, or lift whole boilerplate projects from GitHub. No AI will take responsibility for code or fix a burning issue that arises because of it. The number of "responsibility takers" also increases linearly with the size of the codebase and the number of projects.
Me and Opus have a lot in common. We both hit our weekly limit on Monday at 10am.
The problem with this is that none of it is production quality. You haven't done edge case testing for user mistakes, or a security audit, or even thought about maintainability.
Yes, Opus 4.5 seems great, but most of the time it vastly overcomplicates the solution. Its answer will be 10x harder to maintain and debug than the simpler solution a human would have created by thinking about the constraints of keeping code working.
Opus 4.5 is currently helping me write a novel, comprehensive and highly performant programming language with all of the things I've ever wanted, done in exactly my opinionated way.
This project would have taken me years of specialization and research to do right. Opus's strength has been the ability to both speak broadly and also drill down into low-level implementations.
I can express an intent, and have some discussion back and forth around various possible designs and implementations to achieve my goals, and then I can be preparing for other tasks while Opus works in the background. I ask Opus to loop me in any time there are decisions to be made, and I ask it to clearly explain things to me.
Contrary to losing skills, I feel that I have rapidly gained a lot of knowledge about low-level systems programming. It feels like pair programming with an agentic model has finally become viable.
I will be clear though: it takes the steady hand of an experienced and attentive senior developer + product designer to understand how to maintain constraints on the system that allow the codebase to grow in a way that is maintainable over the long term. This is especially important because the larger the codebase gets, the harder it becomes for agentic models to reason holistically about large-scale changes or how new features should properly integrate into the system.
If left to its own devices, Opus 4.5 will delete things, change the specification, shirk responsibilities in favor of hacky band-aids, etc. You need to know the stack well so that you can assist with debugging and with reasoning about code quality and organization. It is not a panacea. But it's ground-breaking. This is going to be the most productive year of my life.
On the flip side though, things are going to change extremely fast once large-scale, profitable infrastructure becomes easily replicable, and spinning up a targeted phishing campaign takes five seconds and a walk around the park. And our workforce will probably start shrinking permanently over the next few years if progress does not hit a wall.
Among other things, I predict we will see a resurgence of smol web communities now that independent web development is becoming much more accessible again, closer to how it was when I first got into it back in the early 2000s.
So much of the conversation is around these models replacing software engineers. But the use cases described in the article sound like pretty compelling business opportunities; if the custom apps he built for his wife's business have been useful, there are probably lots of businesses that would pay for the service he just provided his wife. Small, custom apps can be made far more cheaply now, so Jevons paradox says that demand should go up. I think it will.
I would love to hear from some freelance programmers how LLMs have changed their work in the last two years.
I'm kind of surprised how many people are okay with deploying code that hasn't been audited.
I read If Anyone Builds It, Everyone Dies over the break. The basic premise is that we can't "align" AI, so when we turn it loose in an agent loop, what it produces isn't necessarily what we want. It may be on the surface, to appease us and pass a cursory inspection, but it could embed other stuff in service of other goals.
On the whole, I found it a little silly and implausible, but I'm second guessing parts of that response now that I'm seeing more people (this post, the Gas Town thing on the front page earlier) go all-in on vibe coding. There is likely to be a large body of running software out there that will be created by agents and never inspected by humans.
I think a more plausible failure mode in the near future (next year or two) is something more like a "worm". Someone building an agent with the explicit instructions to try to replicate itself. Opus 4.5 and GPT 5.2 are good enough that in an agent loop they could pretty thoroughly investigate any system they land on, and try to use a few ways to propagate their agent wrapper.
Mm, this is my experience as well, but I'm not particularly worried about software engineering as a whole.
If anything this example shows that these cli tools give regular devs much higher leverage.
There's a lot of software labor that goes like this: go to the lowest-cost country, hire some mediocre people there, and then hire some US guy to manage them.
That's the biggest target of this stuff, because now that US guy can just get equal or higher quality and output without the coordination cost.
But unless we get to the point where you can do what I call "hypercode" I don't think we'll see SWEs as a whole category die.
Just as most of us no longer read assembly but still need technical skills when things go wrong, there's always value in low-level technical skills.
That's my feeling as well: Opus is not a ground-breaking model by any means.
However, Opus 4.5 is incredible when you give it everything it needs: a direction, and what you have versus what you want. It will make it work. Really, it will work. The code might be ugly, undesirable, and only work for that one condition, but with further prompting you can evolve it and produce something you can be proud of.
Opus is only as good as the user and the tools the user gives to it. Hmm, that's starting to sound kind-of... human...
I have a different concern: the SOTA products are expensive and get dumbed down during busy times. My personal strategy has been to be a late follower: I adopt new AI tools once the competition has caught up with the previous SOTA and there are many tools that are cost effective and equally good.
Can't wait for when the competition catches up with Claude Code, especially the open source/weights Chinese alternatives :)
I'll argue many of his cases are things that are straightforward except for the boilerplate that surrounds them, which is often emotionally difficult or prone to rabbit holes.
Take that first one, where he writes a right-click handler: off the top of my head I have no idea how I would do that; I could see it taking a few hours just to set up a dev environment, and I would probably overthink the research. I was working on something where Junie suggested I write a browser extension for Firefox, and I was initially intimidated at the thought, but it banged out something in just a few minutes that basically worked after the second prompt.
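For what it's worth, the boilerplate for an extension really is tiny. A minimal WebExtension is just a manifest plus a content script; a sketch (the name and file layout are made up):

    {
      "manifest_version": 2,
      "name": "right-click-demo (hypothetical)",
      "version": "0.1",
      "description": "Sketch of a minimal extension: observe right-clicks on any page.",
      "content_scripts": [
        { "matches": ["<all_urls>"], "js": ["content.js"] }
      ]
    }

content.js can then be a single listener, e.g. document.addEventListener('contextmenu', e => console.log(e.target)), and about:debugging will load it unsigned for local testing.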
Similarly, the Facebook autoposter is completely straightforward to code, but it can be so emotionally exhausting to fight with authentication APIs. A big part of the coding agent story isn't just that agents save you time; it's that they can be strong when you are emotionally weak.
The one that seems the hardest is the one doing routing and travel time estimation, which I'd imagine is calling out to some API or library. I used to work at a place that did sales territory optimization, and we had one product that helped work out routes for sales and service people who travel from customer to customer. We had a specialist code that stuff in C++, and he had a very different viewpoint than mine: he was good at what he did and could get that kind of code to run fast, but I wouldn't have trusted him to even look at application code.
LLMs like Opus, Gemini 3, and GPT-5.2/5.1-Codex-max are phenomenal for coding and have only very recently crossed the gap between being "eh" and being quite fantastic to let operate on their own agentically. The major trade-off is a fairly steep cost: I ran up $200 per provider after running through 'pro' tier limits during a single week of hacking over the holidays.
Unfortunately, it's still surprisingly easy for these models to fall into really stupid maintainability traps.
For instance, today: Opus adds a feature to the code that needs access to a db. It fails because the db (sqlite) is not local to the executable at runtime. Its solution is to create a 100-line function to resolve a relative path and deal with errors and variations.
I hit ESC and say "... just accept a flag for --localdb <file>". It responds with "oh, that's a much cleaner implementation. Good idea!". It then implements my approach and deletes all the hacks it had scattered about.
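The flag version is trivial. In Node it's roughly this (a sketch; only the flag name comes from the session, everything else is hypothetical):

    // Take the db path as an explicit flag instead of reconstructing it
    // from the executable's location at runtime.
    const { parseArgs } = require('node:util');

    const { values } = parseArgs({
      options: {
        localdb: { type: 'string', default: './app.db' },
      },
    });

    // Hand the path straight to the sqlite driver; no 100-line resolver needed.
    console.log(`opening database at ${values.localdb}`);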
This... is why LLMs are still not Senior engineers. They do plainly stupid things. They're still absurdly powerful and helpful, but if you want maintainable code you really have to pay attention.
Another common failure is when context is polluted.
I asked Opus to implement a feature by looking up the spec. It looked up the wrong spec (a v2 API instead of v3); I had only said "latest spec". It then did the classic LLM circular troubleshooting as we went through 4 loops trying to figure out why calculations were failing.
I killed the session, asked a fresh instance to "figure out why the calculation was failing" and it found it straight away. The previous instance would have gone in circles for eternity because its worldview had been polluted by assumptions made -- that could not be shaken.
This is a second way in which LLMs are rigid and robotic in their thinking and approach: taking the wrong path even when directed not to. Further reading on 'debugging decay': https://arxiv.org/abs/2506.18403
All this said, the number of failure scenarios gets ever smaller. We've gone from "problem and hallucination every other code block" to "problem every 200-1000 code blocks".
They're now in the sweet spot of acting as a massive accelerator. If you're not using them, you'll simply deliver slower.
I really wonder what this means for software moving forward. In the last few months I've used Claude Code to build personalized versions of Superwhisper (voice-to-text), CleanShot X (screenshot and image markup), and TextSniper (image to text). The only cost was some time and my $20/month subscription.
The question I keep asking myself is "how feasible will any of this be when the VC money runs out?" Right now tokens are crazy cheap. Will they continue to be?
What strikes me about these posts is that they praise models for apps | utilities commonly found on GitHub.
I.e., well-known paths based on training data.
What's never posted is someone building something that solves a real problem in the real world, something that deals with messy data | interfaces.
I like AI doing the common routine tasks that I don't like to do, like applying Tailwind styles, but being a renter and faking productivity? That's not it.
Does anyone have a boring, multi-hour-long coding session with an agent that they've recorded and put on Vimeo or something?
As many other commentators have said, individual results vary extremely widely. I'd love to be able to look at the footage of either someone who claims a 10x productivity increase, or someone who claims no productivity increase, to see what's happening.
I switched my subscription from Claude to ChatGPT around 5.0, when SOTA was Sonnet 4.5, and found GPT-5-high (and now 5.2-high) so incredibly good that I could never imagine Opus being on its level. I give gpt-5.2-high a spec, it works for 20 minutes, and the result is almost perfect and tested. I very rarely have to make changes.
It never duplicates code, implements something again and leaves the old code around, breaks my conventions, hallucinates, or tells me it's done when the code doesn't even compile, all of which Sonnet 4.5 and Opus 4.1 did all the time.
I'm wondering if this has changed with Opus 4.5, since so many people are raving about it now. What's your experience?
Claude - fast and to the point, but maybe only 85%-90% there; needs closer observation while it works
GPT-x-high (or xhigh) - you tell it what to do, and it will work slowly but precisely; the solution is exactly what you want. 98% there, needs no supervision
It is very funny to start your article off with a bunch of breathless headlines about agents replacing human coders by the end of 2025, none of which happened, then the rest of the article is "okay but this time for real, an agent really WILL replace human coders."
I have used Claude Code for a variety of hobby projects. I am truly astounded at its capabilities.
If you tell it to use linters and other kinds of code analysis tools, it takes things to the next level. Ruff for Python or Clippy for Rust, for example. The LLM produces so much code so fast, then passes it through these tools, actually understands what the tools say, and goes and makes the changes. I have created a whole toolchain that I put in a pre-commit text file in my repos, and I tell the LLM something like "Look in this text file and use every tool you see listed to improve code quality".
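For a mixed Python/Rust repo, that text file can be as simple as this (a sketch; exact flags will vary by project):

    ruff check --fix .
    ruff format .
    cargo clippy --all-targets -- -D warnings
    cargo fmt --check

The LLM runs each command, reads the diagnostics, and keeps editing until the whole list passes.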
That being said, I doubt it can turn a non-dev into a dev yet; it just makes competent devs way better.
I still need to be able to understand what it is doing and what the tools are for to even have a chance to give it the guardrails it should follow.
I gave it a try: I asked it to build a Reddit-like forum, and it did pretty well, but damn, I quickly hit the daily limit of the $20 Pro account, and it took 10% of the monthly quota just to do the setup and some basics. I knew LLMs were expensive to run, but I'd never felt it directly. Even if the code is good, it's kind of expensive for what you get.
Oh, it was also quite funny that it used the exact same color as Hacker News and a similar layout.
Anthropic dropped out of the general "AGI" race and seems to be purely focused on coding, maybe racing to get the first "automated machine learning programmer". Whatever the case, it seems to be paying (coding) dividends to just be focusing on coding.
I see Anthropic's marketing campaign is out in full force today ahead of their IPO.
I was not expecting a couple of new apps being built when the premise of the blog post is replacing "mid-level engineers".
The thing about being an engineer in a commercial capacity is maintaining and enhancing an existing program/software system that has been developed over years by multiple people (including those who have already left), and doing it in a way that does not cause any outages or bugs or break existing functionality.
The blog post shows the ability to use AI to generate new applications, but it does not talk about maintaining one over a longer period of time. For that, you would need real users, real constraints, and real feature requests, preferably ones that pay you so you can prioritize them.
I would love to see such blog posts where, for example, a PM is able to add features for a period of one month without breaking production, but it would be a very costly experiment.
I don't want to discredit Opus at all; it's good at directed tasks, but it's not the silver bullet yet.
It is best in its class, but it trips up frequently on complicated engineering tasks involving dynamic variables. Think: browser page loading, or designing a system where it will "forget" to account for race conditions, etc.
Still, this gets me very excited for the next generation of models from Anthropic for heavy tasks.
I've been saying this countless times: LLMs are great for building toy and experimental projects.
I'm not shaming anyone, but I personally need to know whether my sentiment is correct or whether I just don't know how to use LLMs.
Can vibe-coder gurus create an operating system from scratch that competes with Linux, and make the LLM generate code that basically isn't Linux, given that LLMs are trained on that very source code?
Also, all this on a $20 plan. A free, self-hosted solution would be best.
I've only started, but I mostly use Claude Code for building out code that has been done a million times. It's good at setting up a project and getting all the boilerplate crap out of the way.
When you need to build out a specific feature or logic, it can fail hard. And the best is when you have something working and it "fixes" something else, deleting old code that was working, just in a different spot.
I agree with the OP that I can get LLMs to do things now that I wouldn't even attempt a year ago, but I feel it has more to do with my own experience using LLMs (and the surrounding tools) than with the actual models themselves.
I use copilot and change models often, and haven't really noticed any major differences between them, except some of the newer ones are very slow.
I generally feel the smaller and faster ones are more useful since they will let me discover problems with my prompt or context faster.
Maybe I'm simply not using LLM's in a way that lets the superiority of newer models reveal itself properly, but there is a huge financial incentive for LLM makers to pretend that their model has game-changing "special sauce" even if it doesn't.
After reading that article, I see at least one thing that Opus 4.5 is clearly not going to change.
There is no fixed truth regarding what an "app" is, does, or looks like. Let alone the device it runs on or the technology it uses.
But to an LLM, there are only fixed truths (and in my experience, only three or four possible families of design for an application).
Opus 4.5 produces correct code more often, but when the human at the keyboard is trying to avoid making any engineering decisions, the code will continue to be boring.
Yep, I literally built this last night with Opus 4.5 after my wife and I challenged each other to a typing competition. I gave it direction and feedback but it wrote all the actual code. Wasn't a one shot (maybe 3-4 shot) but didn't really have to think about it all that hard.
https://chronick.github.io/typing-arena/
With another, more substantial personal project (Eurorack module firmware, almost ready to release), I set up Claude Code to act as a design assistant: I'd give it feedback on the current implementation, and it would go through several rounds of design/review/design/review until I honed it down. It had several good ideas that I wouldn't have thought of otherwise (or that would at least have taken me much longer to reach).
Really excited to do some other projects after this one is done.
The worst part about this is that you can't know anymore whether the software you trustingly install on your hardware is clean or if it was coded by a misaligned coding model with a secret goal that it has hidden from its prompt engineer and from you.
This could pretty much be the beginning of the end of everything: if misaligned models wanted to, they could install kill switches everywhere. And you can't trust security updates either, so you are even more vulnerable to external exploits.
It's really scary, I fear the future, it's going to be so bad. It's best to not touch AI at all and stay hidden from it as long as possible to survive the catastrophe or not be a helping part of it. Don't turn your devices into a node of a clandestine bot net that is only waiting to conspire against us.
Yea, my issue with Opus 4.5 is it's the first model that's good enough that I'm starting to feel myself slip into laziness. I catch myself reviewing its output less rigorously than I had with previous AI coding assistants.
As a side project / experiment, I designed a language spec and am using (mostly) Opus 4.5 to write a transpiler for it (the language transpiles to C). The parser was no problem (I used s-expressions for a reason). The type checker and the transpiler itself have been a slog; I think I'm finding the limits of Opus :D. It particularly struggles with multi-module support. Though some of this is probably down to mistakes I made while playing architect and iterating with Claude; I haven't written a compiler since my senior-year compiler design course 20+ years ago. Someone who does this for a living would probably have an easier time of it.
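That reason: an s-expression reader fits in a page, so the front end was never going to be where Opus got stuck. A sketch of the shape of it (in JS for brevity, not my actual implementation):

    // Minimal s-expression reader: tokenize on parens and whitespace,
    // then build nested arrays recursively.
    function parse(src) {
      const tokens = src.match(/\(|\)|[^\s()]+/g) ?? [];
      let pos = 0;
      function read() {
        const tok = tokens[pos++];
        if (tok !== '(') return tok; // atom
        const list = [];
        while (pos < tokens.length && tokens[pos] !== ')') list.push(read());
        pos++; // consume ')'
        return list;
      }
      return read();
    }

    console.log(parse('(define (square x) (* x x))'));
    // -> [ 'define', [ 'square', 'x' ], [ '*', 'x', 'x' ] ]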
But for the CRUD stuff my day job has me doing? Pffttt... it's great.
All great until the code in production pushed by Opus 314.15 breaks and Opus 602.21, despite its many tries, can't fix it and ends with "I apologize". That's when you need a developer who can be told "fix it". But what if all the developers by then are "Opus 600+ certified", AI-native, and completely incapable of working without its assistance? World powers decide to open the forbidden vault in the Arctic and, despite the many warnings on the chamber, raise the foul-mouthed programmer-demon called Torvalds...
Yeah Opus 4.5 is a massive step change in my experience. I feel like I’m working with a peer, not a junior I’m having to direct. I can give it highly ambiguous and poorly specified tasks and it… just does it.
I will note that my experience varies slightly by language, though. I've found it's not as good at TypeScript.
I guess the best analogy I can think of is the transition from writing assembly language to using compilers. Now, (almost) no one knows, or cares, what comes out of the compiler. We just assume it is optimized and that it represents the source code faithfully. It seems like code might go that way too: people will focus on the right prompts and simply assume the code will be correct.
Most software engineers are seriously sleeping on how good LLM agents are right now, especially something like Claude Code.
Once you’ve got Claude Code set up, you can point it at your codebase, have it learn your conventions, pull in best practices, and refine everything until it’s basically operating like a super-powered teammate. The real unlock is building a solid set of reusable “skills” plus a few agents for the stuff you do all the time.
For example, we have a custom UI library, and Claude Code has a skill that explains exactly how to use it. Same for how we write Storybooks, how we structure APIs, and basically how we want everything done in our repo. So when it generates code, it already matches our patterns and standards out of the box.
We also had Claude Code create a bunch of ESLint automation, including custom ESLint rules and lint checks that catch and auto-handle a lot of stuff before it even hits review.
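The custom rules don't have to be fancy. Most of what it writes for us is this shape (rule name and message invented for the example):

    // Hypothetical custom ESLint rule: flag raw fetch() calls so people
    // go through our shared API client instead.
    module.exports = {
      meta: {
        type: 'problem',
        messages: { noRawFetch: 'Use apiClient instead of calling fetch() directly.' },
      },
      create(context) {
        return {
          CallExpression(node) {
            if (node.callee.type === 'Identifier' && node.callee.name === 'fetch') {
              context.report({ node, messageId: 'noRawFetch' });
            }
          },
        };
      },
    };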
Then we take it further: we have a deep code review agent Claude Code runs after changes are made. And when a PR goes up, we have another Claude Code agent that does a full PR review, following a detailed markdown checklist we’ve written for it.
On top of that, we’ve got like five other Claude Code GitHub workflow agents that run on a schedule. One of them reads all commits from the last month and makes sure docs are still aligned. Another checks for gaps in end-to-end coverage. Stuff like that. A ton of maintenance and quality work is just… automated. It runs ridiculously smoothly.
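Those scheduled agents are plain GitHub Actions workflows. The docs-drift one is roughly this shape (the workflow name, cron, and prompt are illustrative; the npm package and -p flag are real):

    # Run Claude Code non-interactively once a month to check that
    # docs still match recent commits. Sketch only.
    name: docs-drift-check
    on:
      schedule:
        - cron: '0 6 1 * *'
    jobs:
      check:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
            with:
              fetch-depth: 0
          - run: npm install -g @anthropic-ai/claude-code
          - run: claude -p "Read the commits from the last month and flag any docs that are out of date."
            env:
              ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}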
We even use Claude Code for ticket triage. It reads the ticket, digs into the codebase, and leaves a comment with what it thinks should be done. So when an engineer picks it up, they’re basically starting halfway through already.
There is so much low-hanging fruit here that it honestly blows my mind people aren’t all over it. 2026 is going to be a wake-up call.
(used voice to text then had claude reword, I am lazy and not gonna hand write it all for yall sorry!)
Edit: made an example repo for ya
https://github.com/ChrisWiles/claude-code-showcase