Hacker News

Ask HN: How is AI-assisted coding going for you professionally?

191 points by svara, today at 3:58 PM | 314 comments

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.


Comments

viccis, today at 10:32 PM

Haven't seen this mentioned yet, but the worst part for me is that a lot of management LOVES to use Claude to generate 50-page design documents, PRDs, etc., and send them to us to "please review as soon as you can". Nobody reads them, not even the people making them. I'm watching some employees just generate endless slide decks of nonsense and then waffle when asked any specific questions. If any of that is read, it is by other people's Claude.

It has also enabled a few people who haven't written code or planned out implementation details in a long time (sometimes a decade or more) to do so again, and so I'm getting some bizarre suggestions.

Otherwise, it really does depend on what kind of code. I hand write prod code, and the only thing that AI can do is review it and point out bugs to me. But for other things, like a throwaway script to generate a bunch of data for load testing? Sure, why not.

fastasucan, today at 8:55 PM

It makes my work suck, sadly. Team dynamics also contribute to that, admittedly.

Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.

When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks earlier (I have no idea why), feeding it to Claude or whatever, and then coming back with a solution. So instead of finishing my code I had to go through their versions of my code.

Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.

I would have appreciated any contribution to my code, but the assumption that it would be so easy to just take my code and finish it by asking Claude was rather insulting.

humbleharbinger, today at 10:01 PM

I'm an engineer at Amazon - we use Kiro (our own harness) with Opus 4.6 underneath.

Most of my gripes are with the harness; Claude Code is way better.

In terms of productivity I'm definitely 2-4x more productive at work, and >10x more productive on my side business. I used to work overtime to deliver my features. Now I work 9-5 and am job hunting on the side while delivering comparatively more features.

I think a lot of people are missing that AI is not just good for writing code. It's good for data analysis and all sorts of other tasks like debugging and deploying. I regularly use it to manage deployment loops (e.g. make a code change, deploy it to gamma, and verify it works by making a sample request and checking the output in CloudWatch logs). I have built features in 2 weeks that would have taken me a month, just because I'd otherwise have to learn some nitty-gritty technical details that I'd never use again in my life.

For data analysis, I have an internal Glue catalog; I can just tell it to query the data and write a script that analyzes X for me.

AI and agents in particular have been a huge boon for me. I'm really scared about automation, but it also doesn't make sense to me that SWE would be automated first, before other careers, since SWE itself is necessary to automate the others. I think there are some fundamental limitations on LLMs (without claiming to understand the details), but whatever level of intelligence we've currently unlocked is fundamentally going to change the world, and it's already changing what SWE looks like.

hdhdhsjsbdh, today at 6:35 PM

It has made my job an awful slog, and my personal projects move faster.

At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time-consuming, and the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI-coded, so it did not obey the API design of the main project. It also included a ton of stuff you don't need in a first pass (error checking everywhere, hand-rolled parsing, etc.) that I had to spend over a week unwinding so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design-deconflicting task, so while it's easy to get the initial concept out the gate almost instantly, you can't just magically fit it into the bigger codebase without facing the technical debt you've generated.

In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.

I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.

Izkata, today at 9:13 PM

I don't use it.

I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.

One of my co-workers already admitted as much to me around six months ago: he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. It sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I make no secret of the fact that I don't use it.

Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.

A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (it looked good enough, so the problems just slipped by) or seeing hints in the merge request in how AI names or structures code.

onlyrealcuzzo, today at 6:29 PM

I work at a FAANG.

Professionally, I have had almost no luck with it, outside of summarizing design docs or literally just finding something in the code that a simple search might not find, such as "where is this team's code that does X?"

I have yet to successfully prompt it and get a working commit.

Further, I will add that I don't personally know any ICs who have successfully used it. There are endless posts from people talking about how they're now 10x more productive and how everyone needs to do X, Y, and Z now; I just don't know any of these people.

Non-professionally, it's amazing how well it does on a small greenfield task, and I have seen that 10x improvement in velocity. But, at work, close to 0 so far.

The posts I've seen at work typically tend to be from teams doing something new / greenfield-ish, or a refactor. So I'm not surprised by their results.

wk_end, today at 8:58 PM

Around a year ago I started a new position at a very large tech company that I won't name, working on a pre-existing web project there. The code base isn't terrible - though not very good either, by-and-large - but it's absolutely massive, often over-engineered, pretty unorthodox, and definitely has some questionable design decisions; even after more than a year of working with it I still feel like a beginner much of the time.

This year I grudgingly bit the bullet and began using AI tools, and to my dismay they've been a pretty big boon for me in this case. Not just for code generation - they're really good at probing the monolith and answering questions I have about how it works. Before, I'd spend days poring over code before starting work, trying to figure out the right way to build something or where to break in, pinging people over in India or eastern Europe with questions and hoping they'd reply overnight. AI has totally replaced that, and it works shockingly well.

When I do fall back on it for code generation, it's mostly just to mitigate the tedium of writing boilerplate. The code it produces tends to be pretty poor - both in terms of style and robustness - and I'll usually need to take at least a couple of passes over it to get it up to snuff. I do find this faster than writing everything out by hand in the end, but not by a lot.

For my personal projects I don't find it adds much, but I do enjoy rubber ducking with ChatGPT.

kreyenborgi, today at 10:44 PM

Net negative. I do find it genuinely useful for code review, as a "better search engine" or for snippets, and sometimes for rubber ducking, but for agent mode and actual longer coding tasks I always end up rewriting the code it makes. Whatever it produces always looks like the work of one of those students who constantly slightly misunderstand and only care about minor test objectives, never seeing the big picture. And I waste so much time on the hope that this time it will make me more productive, if only I can nudge it in the right direction - maybe I'm not holding it right, not using the right tools/processes/skills, etc. It feels like JavaScript frameworks all over again.

VoidWhisperer, today at 11:13 PM

It has definitely made me more productive. That said, the productivity isn't coming from using it to write business logic. (I prefer to have an in-depth understanding of the logical parts of the codebases I'm working on, and I've also seen cases in my work codebases where code was obviously AI-generated and ended up with gaping security or compliance issues that no one seemed to see at the time.)

The productivity comes from three main areas for me:

- Having the AI coding assistant write unit tests for my changes. This used to be by far my least favorite part of writing software, mostly because instead of solving problems, it was the monotonous process of gathering mock data to exercise specific pathways, trying to make sure I'm covering all the cases, and then debugging the tests. An AI coding assistant lets me just review the tests to make sure they cover all the cases I can think of and don't make any overtly wrong assumptions.

- Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the experience to architect them myself. I know enough to judge whether the system will correctly accomplish the requirements, but not necessarily enough to have come up with the architecture as a whole.

- Quick test scripts. It has been extremely useful for generating quick SQL data for testing things, along with quick one-off scripts to test things like external provider APIs
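As an illustration of that last point, a throwaway generator for SQL test rows is exactly the kind of one-off script these tools handle well. This is a minimal hand-written sketch of the pattern; the `test_orders` table and its columns are invented for the example:

```python
# Hypothetical sketch: generate quick, disposable SQL test data.
# Table and column names are made up for illustration.
import datetime
import random

def make_inserts(n, table="test_orders"):
    """Emit n INSERT statements with randomized but plausible values."""
    rows = []
    for i in range(1, n + 1):
        created = datetime.date(2026, 1, 1) + datetime.timedelta(days=random.randint(0, 60))
        amount = round(random.uniform(5.0, 500.0), 2)
        rows.append(
            f"INSERT INTO {table} (id, customer_id, amount, created_at) "
            f"VALUES ({i}, {random.randint(1, 100)}, {amount}, '{created.isoformat()}');"
        )
    return "\n".join(rows)

print(make_inserts(3))
```

The value of handing this kind of script to an assistant is that the shape is boring and the correctness is trivially reviewable by eye.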

greenpizza13, today at 10:00 PM

I work at a very prominent AI company. We have access to every tool under the sun. There are various levels of success for all levels — managers, PMs, engineers.

We have Cursor with essentially unlimited Opus 4.6, and it's fundamentally changed my workflow as a senior engineer. I find I spend much more time designing and testing my software, and development time is almost entirely prompting and reviewing AI changes.

I'm afraid my coding skills are atrophying (in fact I know they are), but I'm not sure coding was the part of my job I truly enjoyed. I enjoy thinking at a higher level: architecture, connecting components, focusing on the user experience. But I think using these AI tools is a form of golden handcuffs. If I went to work at a startup without the budget I currently have for these models, I think for the first time in my career I would be less able to successfully code a feature than I was last year.

So professionally there are pros and cons. My design and architecture skills have greatly improved as I am spending more time doing this.

Personally it’s so much fun. I’ve made several side projects I would have never done otherwise. Working with Claude code on greenfield projects is a blast.

notatoad, today at 11:09 PM

It's completely inconsistent for me, and any time I start to think it's amazing, I am quickly proven wrong. It has definitely done some useful things for me, but as it stands, any sort of "one-shot" or vibe-coding workflow where I expect the AI to complete a whole task autonomously is still a long way off.

Copilot completions are amazingly useful. Chatting with the chatbot is a super useful debugging tool. Giving it a function or database query and asking the AI to optimize it works great. But true vibe coding is still, IMHO, more of a party trick than an actual productivity multiplier. It can do things that look useful, and it can solve immediate self-contained problems, but it can't create launchable products that serve the needs of multiple users.

0x6d61646f, today at 11:11 PM

I had automation set up for anything I needed for work; gen AI made me feel like I had to babysit a dumb junior developer, so I lost interest.

Management uses it to make mock websites, then doesn't listen when we point out flaws, so nothing new there.

Some in digital marketing are using it for data collection/analysis, but it reaches wrong conclusions 50% of the time (their words), so they are slowly dropping it and using it only for menial tasks and simple automations.

In design we had a trial period, but it has the same issue as coding: either it makes something a senior designer could have made in 2 minutes, or it introduces errors that take a long time to fix, only to do it again on the next prompt.

We are a senior dev team, although relatively small, and to me it seems like it only really works as a substitute for junior devs... but the point of junior devs is to grow someone into a senior with the knowledge you need in the company, so I don't really get the use case overall.

QuadrupleA, today at 7:00 PM

As a veteran freelance developer - aside from some occasional big wins, I'd say it's been net neutral or even net negative to my productivity. When I review AI-generated code carefully (and if I'm delivering it to clients I feel that's my responsibility) I always find unnecessary complexity, conceptual errors, performance issues, looming maintainability problems, etc. If I were to let it run free, these would just compound.

A couple of "win" examples: add in-text links for every term in this paragraph that appears elsewhere on the page, plus corresponding anchors in the relevant parts of the page. Or: replace any static text on this page with the corresponding dynamic elements from this reference URL.

"Lose" examples: constant edit-format glitches (edits not matching the searched text; even the venerable Opus 4.6 constantly screws this up), unnecessary intermediate variables, ridiculously over-cautious exception handling, failing to see opportunities to extract repeated code into a function, or to use an existing function that exactly implements said N lines of code, etc.

outime, today at 11:11 PM

I'm enjoying it. At this stage though, I just don't see much value if you don't have any prior knowledge of what you're doing. Of course you can use LLMs to get better at it but we're not yet at the point where I'd trust them to build something complex without supervision... nor is anyone suggesting that, except AI CEOs :)

I do wonder what will happen when real costs are billed. It might end up being a net positive since that will make you think more about what you prompt, and perhaps the results will be much better than lazily prompting and seeing what comes out (which seems to be a very typical case).

simonw, today at 7:02 PM

The majority of code I've written since November 2025 has been created using agents, as opposed to me typing code into a text editor. More than half of that has been done from my iPhone via Claude Code for web (bad name, great software.)

I'm enjoying myself so much. Projects I've been thinking about for years are now a couple of hours of hacking around. I'm readjusting my mental model of what's possible as a single developer. And I'm finally learning Go!

The biggest challenge right now is keeping up with the review workload. For low stakes projects (small single-purpose HTML+JS tools for example) I'm comfortable not reviewing the code, but if it's software I plan to have other people use I'm not willing to take that risk. I have a stack of neat prototypes and maybe-production-quality features that I can't ship yet because I've not done that review work.

I mainly work as an individual or with one other person - I'm not working as part of a larger team.

turlockmike, today at 8:38 PM

I stopped writing code a year ago. Claude code is a multiplier when you know how to use it.

Treat it like an intern: give it feedback, have it build skills, review every session, make it write unit tests. Red-green-refactor. Spend time up front reviewing the plan. Clearly communicate your intent and the outcomes you want. If you say "do X", it has to guess what you want. If you say "I want this behavior and this behavior, 100% branch unit tested, adhering to the contributing guidelines and best practices, etc.", it will take a few minutes longer, but the quality increases significantly.

I uninstalled VS Code and instead built my own dashboard that organizes my work. I get instant notifications, and a PR kicks off a highly opinionated review utilizing the Claude Code team features.

If you aren't doing this level of work by now, you will be automated soon. Software engineering is a mostly solved problem at this point; you need to embed your best practices in your agent, keep an eye on it, and refine it over time.

shmel, today at 10:00 PM

I got insanely more productive with Claude Code since Opus 4.5. Perhaps it helps that I work in AI research and keep all my projects in small prototype repos. I imagine all models are more polished for AI research workflows, because that's what frontier labs do, but yeah, I don't write code anymore. I don't even read most of it; I just ask Claude questions about the implementation, and sometimes ask it to show me the important bits verbatim. Obviously it makes mistakes sometimes, but so do I and everyone I have ever worked with. What scares me is that it makes fewer mistakes overall than I do. Plan mode helps tremendously; I skip it only for small things. Insisting on a strict verification suite is also important (kind of like an autoresearch project).

piker, today at 10:01 PM

I am working on a sub 100KLOC Rust application and can't productively use the agentic workflows to improve that application.

On the other hand, I have tried them a number of times in greenfield situations with Python and the web stack, and experienced the simultaneous joy and existential dread others describe. They can really stand new projects up quickly.

As a founder, this leaves me with what I describe as the "generation ship" problem. Is it possible that the architecture we have chosen for my project is so far out of the training data that it would be faster to ditch the project and reimplement it from scratch in a Claude-yolo style? So far, I'm convinced not because the code I've seen in somewhat novel circumstances is fairly mid, but it's hard to shake the thought.

I do find chatting with the models incredibly helpful in all contexts. They are also excellent at configuring services.

show 1 reply
causalzap, today at 11:02 PM

I’ve been a web dev for 10+ years, and my professional pivot in 2026 has been moving away from "content-first" sites to "tool-led" content products. My current stack is Astro/Next.js + Tailwind + TypeScript, with heavy Python usage for data enrichment.

What’s working:

Boilerplate & Layout Shifting: AI (specifically Claude 4.x/5) is excellent for generating Astro components and complex Tailwind layouts. What used to take 2 hours of tweaking CSS now takes 15 minutes of prompt-driven iteration.

Programmatic SEO (pSEO) Analysis: I use Python scripts to feed raw data into LLMs to generate high-volume, structured analysis (300+ words per page). For zero-weight niche sites, this has been a massive leverage point for driving organic traffic.

Logic "Vibe Checks": When building strategy engines (like simulators for complex games), I use AI to stress-test my decision-making logic. It’s not about writing the core engine—which it still struggles with for deep strategy—but about finding edge cases in my "Win Condition" algorithms.

The Challenges:

The "Fragment" Syntax Trap: In Astro specifically, I’ve hit issues where AI misidentifies <> shorthand or hallucinates attribute assignments on fragments. You still need to know the spec inside out to catch these.

Context Rot: As a project grows, the "context window" isn't the problem; it's the "logic drift." If you let the AI handle too many small refactors without manual oversight, the codebase becomes a graveyard of "almost-working" abstractions.

The Solution: I treat AI as a junior dev who is incredibly fast but lacks a "mental model" of the project's soul. I handle the architecture and the "strategy logic," while the AI handles the implementation of UI components and repetitive data transformations.

Stack: Astro, TypeScript, Python scripts for data. Experience: 10 years, independent/solo.

wg0, today at 9:14 PM

I foresee that AI blindness at the CEO/CFO level, and the general hype in our society (from the technical and non-technical press and media) that software engineering is over, will result in a severe talent shortage in 5-7 years, with bidding wars for talent driving salaries 3x upwards or more.

robbbbbbbbbbbb, today at 10:28 PM

Context: micro (5 person) software company with a mature SaaS product codebase.

We use a mix of agentic and conversational tools, just pick your own and go with it.

For Unity development (our main codebase and source of value) I give current-gen tools a C- for effectiveness. For solving confined, well-modularisable problems (e.g. refactor this texture loader; implement support for this material extension) it's good. For most real day-to-day problems it's hopelessly confused by the large codebase full of state, external dependencies on chunks of Unity, implicit hardware-dependent behaviours, etc. It has no idea how to work meaningfully with Unity's scene graph or component model. I tried using MCP to empower it here: on a trivial test project it was fine. In a real project it got completely lost and broke everything after eating 30k tokens and 40 minutes of my time, mostly because it couldn't understand the various (documented) patterns that straddle code files and scene structure.

For web and API development I give it an A, with just a little room for improvement. In this domain it's really effective all the way down the stack, from architectural and deployment decisions to implementation details and debugging, including digging really deep into package version incompatibilities and figuring out in seconds problems that would take me hours. My one criticism would be the now-familiar "junior developer" effect, where it'll often run ahead with an over-engineered lump of machinery without spotting a simpler, more coherent pattern. As long as you keep an eye on it, it's fine.

So in summary: if what you’re doing is all in text, nothing in binary, doesn’t involve geometric or numerical reasoning, and has billions of lines of stack overflow solutions: you’ll be golden. Otherwise it’s still very hit and miss.

INTPenis, today at 10:15 PM

I'm always skeptical of new tech. I don't like how AI companies have reserved all the memory-chip capacity for X years (that is definitely going to cause problems in society when regular health-care-sector businesses can't scale or repair their infra), and the environmental impact is also a discussion that I am not qualified to get into.

All I can say for sure is that it is absolutely useful, it has improved my quality of life without a doubt. I stick to the principle that it's here to improve my work life balance, not increase output for our owners.

And that it has done, so far. I can accomplish things that would have taken me weeks of stressful and hyperfocused work in just hours.

I use it very carefully, and sparingly, as a helpful tool in my toolbox. I do not let it run every command and look into every system, just focused efforts to generate large amounts of boilerplate code that would require me to have a lot of docs open if I were to do it myself.

I definitely don't let it read or write my e-mails, or write any text. Because I always loved writing, and will never stop loving it.

It's here to stay, because I'm not alone in feeling this way about it. So the staunch AI-deniers are just wasting their time. Just like any other tech, it's going to be used against humans, against the already oppressed.

I definitely recognize that the tech has made some people lose their minds. Managers and product owners are now vibe coding thinking they can replace all their developers. But their code base will rot faster than they think.

mikelevins, today at 10:59 PM

It's going pretty well, though it took at least six months to get there. I'm helped by knowing the domain reasonably well, and working with a principal investigator who knows it well and who uses LLMs with caution. At this stage I use Claude for coding and research that does not involve sensitive matters, and local-only LLMs for coding and research that does. I've gradually developed some regular practices around careful specification, boundaries, testing, and review, and have definitely seen things go south a few times. Used cautiously, though, I can see it accelerating progress in carefully-chosen and -bounded work.

sornaensis, today at 8:59 PM

I have good success using Copilot to analyze problems for me, and I have used it in some narrow professional projects to do implementation. It's still a bit scary how off track the models can go without vigilance.

I have a lot of worry that I will end up having to eventually trudge through AI generated nightmares since the major projects at work are implemented in Java and Typescript.

I have very little confidence in the models' abilities to generate good code in these or most languages without a lot of oversight, and even less confidence in many people I see who are happy to hand over all control to them.

In my personal projects, however, I have been able to get what feels like a huge amount of work done very quickly. I just treat the model as an abstracted keyboard-- telling it what to write, or more importantly, what to rewrite and build out, for me, while I revise the design plans or test things myself. It feels like a proper force multiplier.

The main benefit is actually parallelizing the process of creating the code, NOT coming up with any ideas about how the code should be made or really any ideas at all. I instruct them like a real micro-manager giving very specific and narrow tasks all the time.

michelb, today at 8:51 PM

I'm not a professional developer, but I can find my way around several languages and deployment systems. I used Claude to migrate a medium-sized Laravel 5 app to Laravel 11 in about 2-3 days. I would not have dared to touch it otherwise.

In my day job I’m currently a PM/operations director at a small company. We don’t have programmers. I have used AI to build about 12 internal tools in the past year. They’re not very big, but provide huge productivity gains. And although I do not fully understand the codebase, I know what is where. Three of these tools I’m now recreating based on our usage and learnings.

I have learned a ton about all kinds of development concepts in a ridiculously short timeframe.

er453r, today at 10:47 PM

For my specific niche (medical imaging), all current models still suck. The amount of expert knowledge required to understand the data and display it the right way probably never made it into the training set.

We have one performance-critical part of a 3D reconstruction engine that just has to go FAST through billions of voxels. From time to time we try to improve it, even by just a bit. I have probably wasted at least 2 full days with various models, trying out their suggested optimizations and benchmarking them on real-world data. NONE produced an improvement. The suggested changes looked promising programming-wise, but all failed with real-world data.
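The practice described here, timing every suggested optimization against the current implementation on representative data before accepting it, can be sketched generically. The two voxel-summing variants below are made-up stand-ins, not the commenter's actual engine code:

```python
# Sketch of benchmark-before-accept: check correctness first, then
# compare timings. Both functions are illustrative placeholders.
import timeit

def baseline(voxels):
    # current implementation: explicit loop over voxel values
    total = 0
    for v in voxels:
        if v > 0:
            total += v
    return total

def candidate(voxels):
    # model-suggested "optimization" to be measured, not trusted
    return sum(v for v in voxels if v > 0)

# representative data matters: toy inputs can hide real-world regressions
voxels = list(range(-500, 500)) * 100

# correctness first, speed second
assert baseline(voxels) == candidate(voxels)

t_base = timeit.timeit(lambda: baseline(voxels), number=20)
t_cand = timeit.timeit(lambda: candidate(voxels), number=20)
print(f"baseline  {t_base:.4f}s")
print(f"candidate {t_cand:.4f}s ({t_base / t_cand:.2f}x)")
```

Only a measured speedup with identical results on representative data justifies taking the change, which is exactly the bar the suggested optimizations failed to clear.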

These models just always want to help. Even if there is no way forward, they will suggest something, just for the sake of it. I would like the model to just say "I do not know", or "This is the best thing I can come up with"... Niche/expert positions are still safe, IMHO.

On the other hand - for writing REST with some simple business logic - it's a real time saver.

mindwok, today at 10:39 PM

I quit my job and went out on my own freelancing.

So far, it's been fantastic. I can do more things for clients, much faster, than I ever dreamed would be possible when I've attempted work like this before.

I think the biggest problem with AI coding is that it simply doesn't fit well into existing enterprise structures. I couldn't imagine being able to do anything productive when I'm stuck having to rely on other teams or request access to stuff from the internet like I did in previous jobs.

tryauuum, today at 10:16 PM

I've had a couple of nice moments, like Claude helping me with Rust (which I don't understand) and Claude finding a bug in a Python library I was using.

Also some not-so-nice moments (small Rust changes were OK, but Claude fumbled a big one, and I couldn't really verify that it worked, so I didn't merge the code to master even though it seemingly worked).

I think it really helps to break the ice, so to say. You no longer feel the tension, the pain of an empty page. You ask Claude to write something, and improving something is so much easier mentally.

I also mostly use Claude as a spell checker / linter for the projects I'm too lazy to install proper tools for. vim + Claude, what else would you need?

Luckily my company pays for the subscription; spending personal money on LLMs (especially on US LLMs) would feel strange for some reason. Ideally I'd want to own an LLM and have it at home, but I am too lazy.

abcde666777, today at 10:48 PM

Two contexts:

1. Workplace, where I work on a lot of legacy code for a crusty old CRM package (Saleslogix/Infor), and a lot of SQL integration code between legacy systems (System21).

So far I've avoided using AI-generated code here, simply because the AI tools won't know the rules and internal functions of these pieces of software, so the time spent wrangling them into an understanding would negate any benefits.

In theory where available I could probably feed a chunk of the documentation into an agent and get some kind of sensible output, but that's a lot of context to have to provide, and in some cases such documentation doesn't exist at all, so I'd have to write it all up myself - and would probably get quasi hallucinatory output as a reward for my efforts.

2. Personally where I've been working on an indie game in Unity for four years. Fairly heavy code base - uses ECS, burst, job system, etc. From what I've seen AI agents will hallucinate too much with those newer packages - they get confused about how to apply them correctly.

A lot of the code's pretty carefully tuned for performance (thousands of active NPCs in game), which is also an area I don't trust AI coding at all, given it's a conglomeration of 'average code in the wild that ended up in the training set'.

At most I sometimes use it for rubber ducking or performance. For example at one point I needed a function to calculate the point in time at which two circles would collide (for npc steering and avoidance), and it can be helpful to give you some grasp of the necessary math. But I'll generally still re-write the output by hand to tune it and make sure I fully grok it.
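The circle-collision timing mentioned above reduces to solving a quadratic. A minimal Python sketch of the underlying math (not the commenter's actual Unity code): work in circle 1's frame, where the circles touch when the relative position `d + v*t` has length `r1 + r2`.

```python
import math

def time_to_collision(p1, v1, r1, p2, v2, r2):
    """Earliest t >= 0 at which two moving circles touch, or None."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]      # relative position d
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity v
    rr = r1 + r2
    # |d + v*t| = rr  =>  (v.v) t^2 + 2 (d.v) t + (d.d - rr^2) = 0
    a = vx * vx + vy * vy
    b = 2.0 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy - rr * rr
    if c <= 0.0:
        return 0.0                              # already overlapping
    if a == 0.0:
        return None                             # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                             # paths never touch
    t = (-b - math.sqrt(disc)) / (2.0 * a)      # earlier root
    return t if t >= 0.0 else None              # else collision is in the past

# Two unit circles 10 apart, approaching head-on at a closing speed of 2:
# the gap of 10 - 2 = 8 closes in 4 time units.
print(time_to_collision((0, 0), (1, 0), 1, (10, 0), (-1, 0), 1))  # 4.0
```

Taking the smaller root matters: the larger one is the time the circles would separate again after passing through each other.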

Also tried to use it recently to generate additional pixel art in a consistent style with the large amount of art I already have. Results fell pretty far short unfortunately - there's only a couple of pixel art based models/services out there and they're not up to snuff.

ecopoesis today at 10:19 PM

I'm a manager at a large consumer website. My team and I have built a harness that uses headless Claudes (running Opus) to do ticket work, respond to and fix PR comments, and fix CI test failures. Our only interaction with code is writing specs in Jira tickets (which we primarily do via local Claudes) and adding PR comments to GitHub PRs.

The speed we can move at is astounding. We're going to finish our backlog next quarter. We're conservatively planning on launching 3x as many features next quarter.

Claude is far from perfect: it's made us reassess our coding standards since code is primarily for Claude now, not for humans. So much of what we did was to make code easier for the next dev, and that just doesn't matter anymore.

show 1 reply
Ger_Onimo today at 10:00 PM

I'm mostly really enjoying it! While it's not my main job, I've always been a tool builder for teams I work on, so if I see a place where a little UI or utility would make people's life easier, I'd usually hack something together in a few hours and evolve it over time if people find it useful. That process is easily 10x faster than before.

My main work is training Text-to-Speech models, and the friction of experimenting with model features or ideas has dropped massively. If I want to add a new CFG implementation, or conditioning vector, 90% of the time Opus can one-shot it. It generally does a good job of making the model, inference and training changes simultaneously so everything plays nicely. Haven't had any major regressions or missed bugs yet, but we'll see!

The downside is reviewing shitty PRs where it's clear the engineer doesn't fully understand what they're doing, and just a general attitude of "I dunno, Claude suggested it" that's getting pretty exhausting.

molave today at 10:52 PM

1. Generate unit tests beyond the best-case scenario. Analogy: Netflix's Chaos Monkey

2. Incremental cleanup: I also use it as a fancier upgrade of Visual Studio's Code Analysis feature, letting it aid me in finding areas to refactor.

3. Treating the model as a corpus of prior knowledge and discussions, I can form a 'committee of agents' (Security, Reliability, UX engineer POVs) to help me view my work at a more strategic level.

My additional twist on this is to check against my organization's mission statement. That way, I hope I can help reduce the mission drift that I've observed to be a big issue behind dysfunctional companies.
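The first point above, in practice, means tests past the happy path. A hedged sketch of what such a suite looks like, using a hypothetical `parse_price` helper (illustrative, not from the comment):

```python
def parse_price(s):
    """Parse a price string like '$1,234.56' into cents, or raise ValueError."""
    s = s.strip().lstrip("$").replace(",", "")
    if not s:
        raise ValueError("empty price")
    value = float(s)
    if value < 0:
        raise ValueError("negative price")
    return round(value * 100)

# The happy path is one assertion; a generated suite earns its keep on the rest.
assert parse_price("$1,234.56") == 123456   # best case
assert parse_price("0") == 0                # boundary
assert parse_price(" $5 ") == 500           # stray whitespace
for bad in ["", "$", "-1", "abc"]:          # failure modes
    try:
        parse_price(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass
```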

ChrisMarshallNY today at 9:35 PM

Define "professional."

I write stuff for free. It's definitely "professional grade," and lots of people use the stuff I ship, but I don't earn anything for it.

I use AI every day, but I don't think that it is in the way that people here use it.

I use it as a "coding partner" (chat interface).

It has accelerated my work 100X. The quality seems OK. I have had to learn to step back, and let the LLM write stuff the way that it wants, but if I do that, and perform minimal changes to what it gives me, the results are great.

Kon5ole today at 9:40 PM

Like many others I started feeling it had legs during the past few months. Tools and models reached some level where it suddenly started living up to some of the hype.

I'm still learning how to make the most of it but my current state is one of total amazement. I can't believe how well this works now.

One game-changer has been custom agents and agent orchestration, where you let agents kick off other agents and each one is customized and keeps a memory log. This lets me make several 1000 loc features in large existing codebases without reaching context limits, and with documentation that lets me review the work with some confidence.

I have delivered several features in large legacy codebases that were implemented while I attended meetings. During daily standups, agents have created from scratch greenfield dashboards, admin consoles, and the like that would have taken me days to build myself. If something turned out badly, I tweaked the request and made another attempt over lunch. Several useful tools have been made that save me hours per week but that I never took the time to make myself.

For now, I love it. I do feel a bit of "mourning the craft" but love seeing things be realized in hours instead of days or weeks.

cadamsdotcom today at 10:30 PM

Models aren’t reliable, and it’s a bottleneck.

My solution was to write code to force the model down a deterministic path.

It’s open source here: https://codeleash.dev

It’s working! ~200k LOC python/typescript codebase built from scratch as I’ve grown out the framework. I probably wrote 500-1000 lines of that, so ~99.5% written by Claude Code. I commit 10k-30k loc per week, code-reviewed and industrial strength quality (mainly thanks to rigid TDD)

I review every line of code but the TDD enforcement and self-reflection have now put both the process and continual improvement to said process more or less on autopilot.

It’s a software factory - I don’t build software any more, I walk around the machine with a clipboard optimizing and fixing constraints. My job is to input the specs and prompts and give the factory its best chance of producing a high quality result, then QA that for release.

I keep my operational burden minimal by using managed platforms - more info in the framework.

One caveat: I am a solo dev; my cofounder isn't writing code. So I can't speak to how it is to be in a team of engineers with this stuff.

show 1 reply
theshrike79 today at 10:56 PM

I've shipped full features and bug fixes without touching an IDE for anything significant.

When I need to type stuff myself it's mostly just minor flavour changes like Claude adding docstrings in a silly way or naming test functions the wrong way - stuff that I fixed in the prompt for the next time.

And yes, I read and understand the code produced before I tag anyone to review the PR. I'm not a monster =)

bgdkbtv today at 9:45 PM

For professional work, I like to offload some annoying bug fixes to Claude and let it figure it out. Then, perusing the changes to make sure nothing silly is being added to the codebase. Sometimes it works pretty well. Other times, for complicated things I need to step in and manually patch. Overall, I'm a lot less stressed about meeting deadlines and being productive at work. On the other hand, I'm more stressed about losing my employment due to AI hype and its effectiveness.

For my side projects, I do like to offload the tedious steps like setup, scaffolding or updating tasks to Claude. Things like weird build or compile errors that I usually would have to spend hours Googling to figure out I can get sorted in a matter of minutes. Other than that, I still like to write my own code as I enjoy doing it.

Overall, I like it as a tool to assist in my work. What I dislike is how much peddling is being done to shove AI into everything.

jellyfishbeaver today at 10:28 PM

Same attitudes as others in this thread.

For personal projects and side company, I get to join in on some of the fun and really multiply the amount of work I can get through. I tend to like to iterate on a project or code base for a while, thinking about it and then tearing things down and rebuilding until I arrive at what I think is a good implementation. Claude Code has been a really great companion for this. I'd wager that we're going to see a new cohort of successful small or solo-founder companies that come around because of tools like this.

For work, I would say 60% of my company's AI usage is probably useless. Lots of churning out code and documents that generate no real value or are never used a second time. I get the sense that the often claimed "10x more productive" is not actually that, and we are creating a whole flood of problems and technical debt that we won't be able to prompt ourselves out of. The benefit I have mostly seen myself so far is freeing up time and automating tedious tasks and grunt work.

jebarker today at 9:44 PM

I work in an R&D team as research scientist/engineer.

Cursor and Claude Code have undoubtedly accelerated certain aspects of my technical execution. In particular, root causing difficult bugs in a complicated codebase has been accelerated through the ability to generate throwaway targeted logging code and just generally having an assistant that can help me navigate and understand complex code.

However, overall I would say that AI coding tools have made my job harder in two other ways:

1. There’s an increased volume of code that requires more thorough review and/or testing or is just generally not in keeping with the overall repo design.

2. The cost is lowered for prototyping ideas so the competitive aspect of deciding what to build or which experiment to run has ramped up. I basically need to think faster and with more clarity to perform the same as I did before because the friction of implementation time has been drastically reduced.

ramoz today at 10:25 PM

Right now I enjoy the labs' CLI harnesses, Claude Code, and Codex (especially for review). I do a bunch of niche stuff with Pi and OpenCode. My productivity is up. There are some nuances to working with others using the same AI tools: we all end up trying to boil the ocean at first, creating a ton of verbose docs and massive PRs, but we eventually pull back from throwing every sort of raw LLM output at each other. Instead, we continuously refine the outputs into a consumable, trusted form.

My workday is fairly simple. I spend all day planning and reviewing.

1. For most features, unless it's small things, I will enter plan mode.

2. We will iterate on planning. I built a tool for this, and it seems that this is a fairly desired workflow, given the popularity through organic growth. https://github.com/backnotprop/plannotator

  - This is a very simple tool that captures the plan through a hook (ExitPlanMode) and creates a UI for me to actually read the plan and annotate, with qol things like viewing plan diffs so I can see what the agent changed.
3. After the plan's approved, we eventually get to review of the implementation. I'll use AI reviewers, but I will also manually review using the same tool so that I can create annotations and iterate through a feedback loop with the agents.

4. Do a lot of this / multitasking with worktrees now.

Worktrees weren't something I truly understood the value of for a while, until a couple weeks ago, embarrassingly enough: https://backnotprop.com/blog/simplifying-git-worktrees/

show 1 reply
drrob today at 6:03 PM

I've only recently begun using copilot auto-complete in Visual Studio using Claude (doing C# development/maintenance of three SaaS products). I've been a coder since 1999.

The suggestions are correct about 40% of the time, so I'm actually surprised when they're right, rather than becoming reliant on them. It saves me maybe 10 minutes a day.

show 1 reply
tdiff today at 10:40 PM

I just wonder if there are comments in this thread from Anthropic bots, marketing itself.

tintor today at 11:01 PM

Using Claude Code professionally for the last 2 months (Max plan) at Rhoda AI and love it!

Software Engineering has never been more enjoyable.

Python, C++, Docker, ML infra, frontend, robotics software

I have 5 concurrent Claude Code sessions on the same mono repo.

Thank you Anthropic!

zmj today at 8:51 PM

It's great. I'd guess 80-90% of my code is produced in Copilot CLI sessions since the beginning of the year. Copilot CLI is worse than Claude Code, but not by a huge amount. This is mostly working in established 100k+ LOC codebases in C# and TypeScript, with a couple greenfield new projects. I have to write more code by hand in the greenfield projects at their formative stage; LLMs do better following conventions in an existing codebase than being consistent in a new one.

Important things I've figured out along the way:

1. Enable the agent to debug and iterate. Whatever you'd do to test and verify after you write your first pass at an implementation, figure out a way for an agent to do it too. For example: every API call is instrumented with OpenTelemetry, and the agent has a local collector to query.

2. Make scripts or skills to increase the reliability of fallible multi-step processes that need to be repeated often. For example: getting an oauth token to call some api with the appropriate user scopes for the task.

3. Continually revise your AGENTS.md. I'll often end a coding session by asking the agent whether there's anything from this session that should be captured there. That adds more than it removes, so every few days I'll compact it by having an agent reword the important stuff for conciseness and get rid of anything obvious from the implementation.
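Point 2 above can be as small as a wrapper that caches a token and refreshes it shortly before expiry. A hypothetical Python sketch; the `fetch` callable stands in for whatever OAuth flow (e.g. client credentials) the API actually uses:

```python
import time

class TokenCache:
    """Cache an OAuth-style token, refreshing it shortly before expiry.

    `fetch` is any callable returning (token, lifetime_seconds), e.g. a
    client-credentials request to an identity provider.
    """
    def __init__(self, fetch, skew=60.0):
        self._fetch = fetch
        self._skew = skew          # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if time.monotonic() >= self._expires_at:
            self._token, lifetime = self._fetch()
            self._expires_at = time.monotonic() + lifetime - self._skew
        return self._token

# Demo with a stub fetcher; a real one would POST to the token endpoint.
calls = []
cache = TokenCache(lambda: (calls.append(1) or f"tok-{len(calls)}", 3600))
print(cache.get(), cache.get())  # fetched once, then reused: tok-1 tok-1
```

Giving the agent a single `get-token` entry point like this removes a whole class of "it re-authenticated wrong on step 3" failures from multi-step runs.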

parasti today at 9:55 PM

We're using Augment Code heavily on a "full rewrite of legacy CRM with 30 years of business rules/data" Laravel project with a team size of 4. Augment kind of became impossible to avoid once we realized the new guy is outpacing the rest of us while possessing almost no knowledge of code and working fully in the business requirements domain, extracting requirements from the customer and passing them to AI, which was encoding them in tests and implementing them in code.

I'm using `auggie` which is their CLI-based agentic tool. (They also have a VS Code integration - that became too slow and hung often the more I used it.) I don't use any prompting tricks, I just kind of steer the agent to the desired outcome by chatting to it, and switch models as needed (Sonnet 4.6 for speed and execution, GPT 5.1 for comprehension and planning).

My favorite recent interaction with Augment was to have one session write a small API and its specification within the old codebase, then have another session implement the API client entirely from the specification. As I discovered edge cases I had the first agent document them in the spec and the second agent read the updated spec and adjust the implementation. That worked much, much better than the usual ad hoc back and forth directly between me and one agent and also created a concise specification that can be tracked in the repo as documentation for humans and context for future agentic work.

adamwong246 today at 11:11 PM

I've never been more productive. If only I had a job...

jantb today at 6:07 PM

Using claude-code for fixing bugs in a rather huge codebase. I review the fixes, and if I think it wrote something I would make a PR of myself, I use it. Understanding is key, I think, as is giving it the right context. I have about 20 years of programming experience, and I'm letting it code in a domain and language I know very well. It saves me a lot of time when the bug requires finding a needle in a haystack.

HorizonXP today at 8:45 PM

I am having the greatest time professionally with AI coding. I now have the engineering team I’ve always dreamed of. In the last 2 months I have created:

- a web-based app for a F500 client for a workflow they’ve been trying to build for 2 years; won the contract

- built an iPad app for same client for their sales teams to use

- built the engineering agent platform that I’m going to raise funding

- a side project to do rough cuts of family travel videos (https://usefirstcut.com, soft launch video: https://x.com/xitijpatel/status/2026025051573686429)

I see a lot of people in this thread struggling with AI coding at work. I think my platform is going to save you. The existing tools don’t work anymore, we need to think differently. That said, the old engineering principles still work; heck, they work even better now.

nzoschke today at 9:08 PM

It’s going very well.

Experience level: very senior, programming for 25 years, have managed platform teams at Heroku and Segment.

Project type: new startup started Jan ‘26 at https://housecat.com. Pitch is “dev tools for non developers”

Team size: currently 2.

Stack: Go, vanilla HTML/CSS/JS, Postgres, SQLite, GCP and exe.dev.

Claude code and other coding harnesses fully replaced typing code in an IDE over the past year for me.

I’ve tried so many tools. Cursor, Claude and Codex, open source coding agents, Conductor, building my own CLIs and online dev environments. Tool churn is a challenge but it pays dividends to keep trying things as there have been major step functions in productivity and multi tasking. I value the HN community for helping me discover and cut through the space.

Multiple VMs available over SSH, each with an LLM pre-configured, has been the latest level-up.

Coding is still hard work: designing tests, steering agents, reviewing code, and splitting up PRs. I still use every bit of my experience every day and feel tired at the end of the day.

My non-programmer co-founder, more of a product manager and biz ops person, has challenges all the time. He generally can only write functional prototypes. We solve this by embracing the functional prototype and doing a lot of pair programming. It is much more productive than design docs or Figma wireframes.

In general the game changer is how much a couple of people can get done. We’re able to prototype ideas, build the real app, manage SOC2 infra, marketing and go to market better than ever thanks to the “willing interns” we have. I’ve done all this before and the AI helps with so much of the boilerplate.

I’m looking for beta testers and security researcher of the product, and a full time engineer if anyone is interested in seeing what a “greenfield” product, engineering culture and business looks like in 2026. Contact info in my profile.

stephbook today at 6:35 PM

I develop prototypes using Claude Code. The dead boring stuff.

"Implement JWT token verification and role checking in Spring Boot. Secure some endpoints with Oauth2, some with API key, some public."

C# and Java are so old, whatever solutions you find are 5 years out of date. Having an agent implement and verify the foundation is the perfect fit. There's no design, just ever-changing framework magic. I'd do the same "Google and debug" cycle, but 10 times slower.
