>I built a genetic algorithm simulator with interactive visualizations showing evolution in real-time including complex fitness functions, selection pressure, mutation rates, and more in 1 day. I didn't write a single line of the code.
I'm really trying to understand this. From a learner's point of view this is useless because you aren't learning anything. From an entrepreneur's point of view it's useless too, I suppose? I wouldn't ship something when I'm not 100% sure how it works.
- AI-generated article
- Overconfident claims (based on solo dev work)
- Spending an absurd amount on an LLM subscription
- No actual details, just buzzwords and generic claims

If AI hype were a person.
Meanwhile here’s me still just using the ChatGPT web chat asking it for code snippets.
> You no longer need to review the code. Or instruct the model at the level of files or functions. You can test behaviors instead.
Maybe for a personal project but this doesn't work in a multi-dev environment with paying customers. In my experience, paying attention to architecture and the code itself results in a much more pliable application that can be evolved.
> Use Claude Code if you
> a) never plan on learning and just care about outputs, or
> b) are an abstraction maximilist.
As a Claude Code user for about 6 months, I don't identify with either of these categories. Personally, I switched to Claude Code because I don't particularly enjoy VS Code (or forks thereof). I got used to a two-window workflow: Claude Code for AI-driven development, and GoLand for making manual edits to the codebase. As of a few months ago, Claude Code can show diffs in GoLand, making my workflow even smoother.
Use both: Claude Code as the main driver, hooked up to Cursor with /ide in Claude Code to review or make other manual adjustments.
Have OpenAI Codex do code reviews; it's the best one so far at code reviews. Yes, it's ironic (or not) that the code writer is not the best reviewer.
I really love AI for lots of things, but, when I'm reading a post, the AI aesthetic has started to grate. I read articles and they all have the same "LLM" aesthetic, and I feel like I'm reading posts written by the same person.
Sure, the information is all there, but the style just puts me off reading it. I really don't like how few authors have a voice any more, even if that voice is full of typos and grammatical errors.
"my experience from 5 years of coding with AI" immediately disregarded the rest of TFA.
>> You no longer need to review the code. Or instruct the model at the level of files or functions. You can test behaviors instead.
I think this is where things will ultimately head. You generate random code, purely random in raw machine-readable binary, and simply evaluate a behavior. Most randomly generated code will not work. Some, however, will work, and within that working code some will be far faster, and that is the code that gets used.
No different than what a geneticist might do when evaluating generated mutants for favorable traits. Knowledge of the exact genes or pathways involved is not even required; one can still select for desired traits, and thereby for the best-fit mechanism, without even knowing it exists.
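The idea fits in a few lines. As a toy illustration only (the miniature instruction set, the accumulator VM, and the behavior spec below are invented for this example, not anything a real tool does), you can generate random programs and select on observed behavior alone:

```python
# Toy illustration: generate random programs for a tiny accumulator VM and
# keep only those whose behavior matches a spec. Nobody reads the code.
import random

OPS = ["inc", "dec", "double", "add_x"]  # instruction set of the toy VM

def run(program, x):
    acc = 0
    for op in program:
        if op == "inc":      acc += 1
        elif op == "dec":    acc -= 1
        elif op == "double": acc *= 2
        elif op == "add_x":  acc += x
    return acc

def behaves(program, spec):
    # "Test behaviors instead": the program is a black box; only I/O matters.
    return all(run(program, x) == y for x, y in spec)

spec = [(0, 3), (1, 5), (4, 11)]  # target behavior: f(x) = 2x + 3
survivors = []
for _ in range(100_000):
    candidate = [random.choice(OPS) for _ in range(random.randint(1, 8))]
    if behaves(candidate, spec):
        survivors.append(candidate)

# Among the working candidates, prefer the cheapest (shortest) one.
if survivors:
    print(min(survivors, key=len))
```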
If you like Claude Code, then Gas Town (recent discussion [1]) will probably blow your mind. I'm just trying to get a grip on it myself. But it sounds incredible.
>> You no longer need to review the code.
You also no longer need to work, earn money, have a life, read, study, or know anything about the world. This is pure fantasy; my brain farts hard when I read sentences like that.
This is a good guide on how to use Claude Code. My perspective (as an early adopter of LLMs for coding) is similar. That said, OpenCode has a lot of potential as well, so I'm happy that Claude Code is not the only option. A key aspect, imo, is that using these tools is itself a skill, and there's a lot of knowledge involved in making something good with the assistance of Claude Code versus producing slop, especially as soon as you deviate from a very basic application or work in a larger repo with multiple people. There's a layer of context that these tools don't quite have, and it's very difficult to consistently provide it to them. I can see this being less the case as context windows grow and reliable retrieval over larger contexts is solved.
The "Council of models" is a good first step, but ultimately I found myself settling on an automated talent acquisition pipeline.
I have a BIRTHING_POOL.md that combines the best AGENTS.md and introduces random AI-generated mutations and deletions. The candidates are tested using take-home PRs which are reviewed by HR.md and TECH_MANAGER.md. TECH_MANAGER.md measures completion rate per tokens (effectiveness) and then sends the stack ranking of AGENT.mds to HR to manage the talent pool. If agent effectiveness drops low enough, we pull from the birthing pool and interview more candidates.
The end result is that it effectively manages a wider range of agent talents and you don't get into these agent hive mind spirals you get if every worker has the same system prompt.
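Taken half-seriously, the selection loop being described looks roughly like this. A playful, purely illustrative sketch: the mutation scheme, the pool, and the "completions per token" score are all simulated here, not real Claude Code machinery:

```python
# Playful sketch of the "talent pipeline": mutate agent prompts, score them on
# (simulated) completions per token, and replace the weakest from the pool.
import random

BASE_PROMPT = "You are a diligent coding agent.\nAlways write tests.\n"

def mutate(prompt: str) -> str:
    # Crude stand-in for "AI-generated mutations and deletions".
    lines = prompt.splitlines()
    if len(lines) > 1 and random.random() < 0.5:
        lines.pop(random.randrange(len(lines)))
    else:
        lines.append(f"Extra directive #{random.randint(0, 999)}.")
    return "\n".join(lines)

def effectiveness(prompt: str) -> float:
    # Stand-in for TECH_MANAGER.md: completed take-home PRs per token spent.
    # Here it's just noise with mild pressure toward shorter prompts.
    return random.random() / (1 + len(prompt) / 1000)

pool = [mutate(BASE_PROMPT) for _ in range(8)]            # the birthing pool
for generation in range(5):
    ranked = sorted(pool, key=effectiveness, reverse=True)
    pool = ranked[:4] + [mutate(p) for p in ranked[:4]]   # cull and refill
print(ranked[0])
```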
I am still a WindSurf user. It has the quirk of deciding for itself on any given day whether to use ChatGPT 5.2 or Claude Opus 4.5 in Cascade (its agentic side panel). I've never noticed much of a difference, they are both amazing.
I thought the difference must be in how Claude Code does the agentic stuff - reasoning with itself, looping until it finds an answer, etc. - but I have spent a fair amount of time with Claude Code now and found that agentic experience to be about the same between Cascade and Claude Code.
What am I missing? (serious question, I do have Claude Code FOMO like the OP)
Has anyone tried Claude Code with a z.ai subscription?
It's a fraction of the price, with three times the limits.
I currently use a GitHub subscription for hobby projects.
Top 0.01% user of a code LLM demonstrates extreme unwillingness to learn anything.
> my experience from 5 years of coding with AI
What AI have you been using for 5 years of coding?
I'm sympathetic to the notion that we're not at enterprise-codebase level (yet), but everyone who still thinks agentic coding stops at React CRUD apps needs to update their priors.
I needed a PoC RAG pipeline to demo concepts to other teams. I built and tested it over the weekend, exclusively with Claude Code and a little OpenCode: a mix of working from the mobile app and breaking out an Android terminal so Sonnet 4.5 could run the dotnet build chain on tricky compilation issues.
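For readers unfamiliar with the term, a PoC RAG pipeline can be sketched in a few dozen lines. This is a generic illustration of the pattern, not the commenter's actual code: TF-IDF stands in for an embedding model, and the documents and the final LLM call are placeholders.

```python
# Generic PoC retrieval-augmented generation skeleton: index documents,
# retrieve the most relevant chunks for a question, assemble the LLM prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 5 business days.",
    "The API rate limit is 100 requests per minute.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

# Index step: a TF-IDF bag-of-words stands in for an embedding model here.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

question = "How fast do refunds arrive?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # in a real pipeline, this prompt is sent to an LLM of your choice
```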
This reads like a useful guide, but not like an answer to the question "why use Claude Code over Cursor" that the author poses at the beginning.
Claude Code all the way! If anybody wants to help me beta test my own web-based setup for managing multiple Claude Code instances on Hetzner VPSes: clodhost.com!
I have recently started using GitHub Copilot for some small personal projects and am blown away by how fast it is possible to create a solution that does the job. Of course it's not optimized for security or scaling, but it doesn't have to be. The most mind-blowing moment was when Copilot searched the API documentation and implemented everything, just asking me to add my API key to the .env. Wild times.
Regarding not reviewing the output: AI works great when you’re trying to sell it.
That being said, after seeing inside a couple of YC-backed SaaS companies, I believe you can get by without reading the code. There are bugs _everywhere_, yet one of these companies made it years and sold. We're currently going through the onerous process of fixing this, as the new company has a lot of interest in reducing the defect count. It is painful and difficult, and it feels like the company bought a lemon.
I think the reality is there’s a lot of money to be made with buggy software. But, there’s still plenty of money in making reliable software as well (I think?).
Just completely hilarious how 6 months ago about 50% of Hacker News comments were AI denialists telling everybody they were full of shit and that LLMs were not useful. That group is awfully quiet nowadays. The bar has clearly moved to "eventually we won't even need to do code reviews".
LLM denialists were always wrong and they should be embarrassed to share their bad model of reality and how the world works.
Do we really need to qualify our power user level down to 100ppm percentiles...?
Wouldn't this kind of setup eat tokens at a very fast rate, so that even the Max plan is quickly overrun? Isn't it a more viable workflow to use Claude Code to create just one pull request at a time, lightly review the code, and allow it to be merged?
What the fuck is this article supposed to be? Interesting read at first, but further down it becomes really disorganized, almost like AI slop.
Is it just me, or has Claude Code gotten really stupid over the last several days? I've been using it almost since it was publicly released, and the last several days it feels like it has reverted back 6 months. I was almost ready to start yolo-ing everything, and now it's doing weird hallucinations again and forgetting how to edit files. It used to go into plan mode automatically; now it won't unless I make it.
> I was a top 0.01%
wow
> This is a guide that combines:
> 1. my experience from 5 years of coding with AI
It is a testament to the power of this technology that the author has managed to fit five years of coding with AI in between 2023 and now
I'd love to have you as a top user on autohand [.] ai/cli and am interested in your experience with us. We're a bootstrapped AI lab.
For most of 2025, I ignored popular agents because I wanted to stay within my preferred text editor (Emacs). Thanks to ACP (https://agentclientprotocol.com), I no longer live under a rock ;) I built https://github.com/xenodium/agent-shell and now get a native experience. Claude Code works great. If curious what that looks like, I made a video recently https://xenodium.com/bending-emacs-episode-10-agent-shell