Hacker News

saulpw · yesterday at 6:58 PM

> That wouldn’t even be a big violation of the vibe coding concept. You’re reading the innards a little but you’re only giving high-level, conceptual, abstract ideas about how problems should be solved. The machine is doing the vast majority, if not literally all, of the actual writing.

Claude Code is being produced at AI Level 7 (Human specced, bots coded), whereas the author is arguing that AI Level 6 (Bots coded, human understands somewhat) yields substantially better results. I happen to agree, but I'd like to call out that people have wildly different opinions on this; some people say that the max AI Level should be 5 (Bots coded, human understands completely), and of course some people think that you lose touch with the ground if you go above AI Level 2 (Human coded with minor assists).

[0] https://visidata.org/ai


Replies

jaccola · yesterday at 7:33 PM

It's also a context-specific scale. I work in computer vision. Building the surrounding app, UI, checkout flow, etc. is easily Level 6/7 (sorry...) on this scale.

For the rendering pipeline, algorithms, and maths, I've turned off even Level 2. It's just more of a distraction than it's worth for that deep state of focus.

So I imagine at least some of the disconnect comes from the area people work in and its novelty or complexity.

lukev · yesterday at 8:32 PM

I like this framing, but it does seem to imply that a whole dev shop, or a whole product, can or should be built at the same level.

The fact is, I think the art of building well with AI (and I'm not saying it's easy) is to have a heterogeneously vibe-coded app.

For example, in the app I'm working on now, certain algorithmically novel parts are level 0 (I started at level 1, but this was a tremendously difficult problem and the AI actually introduced more confusion than it provided ideas.)

And other parts of the app (mostly the UI in this case) are level 7. And most of the middleware (state management, data model) is somewhere in between.

Identifying the appropriate level for a given part of the codebase is IMO the whole game.

rapind · yesterday at 7:09 PM

I'm at a 5, and only because I've implemented a lot of guardrails: a typed functional language with no nulls, TDD red/green, and a good amount of time spent spec'ing. No way I'd be comfortable this high with a dynamic language.

I could probably get to a 7 with some additional tooling and a second max 20 account, but I care too much about the product I'm building right now. Maybe for something I cared less about.

IMO if you're going 7+, you might as well pick a statically typed and very safe (small surface area) language anyway, since you won't be writing the code yourself.

freediddy · yesterday at 7:25 PM

That's an interesting list. I think the humans who will make the most progress in the next few years are the ones who push themselves up to the highest level of that list. Right now is a period of intense disruption, and many coders don't like the idea that their way of life is dead. There are still blacksmiths around today, but for the most part metalwork is done by factories and cheap third-world labor. I think the same is currently happening with coding, except it will allow single builders and designers to do what an entire team could five years ago.

francisofascii · yesterday at 7:29 PM

At work I am at Level 4, but my side projects have embarrassingly crept into Level 6. It is very tempting to accept the features as-is, without taking the time to understand how they work.

sbysb · yesterday at 7:04 PM

> some people say that the max AI Level should be 5

> of course some people think that you lose touch with the ground if you go above AI Level 2

I really think that this framing sometimes causes a loss of granularity. As with most things in life, there is nuance in these approaches.

I find that nowadays, for my main project, where I am really leaning into the 'autonomous engineering' concept, AI Level 7 is perfect, as long as the output is qualified through rigorous QA processes (i.e., it doesn't matter what the code does if the output is verifiably correct). But even in this project, where I am leaning hard into the 'hands-off' methodology, there are a few areas that dip into Level 5 or 4, depending on how well AI does them (frontend design especially) or on the criticality of the feature (in my case, E2EE).

The most important thing is recognizing when you need to move 'up' or 'down' the scale, and having an understanding of the system you are building.

rectang · yesterday at 7:14 PM

> https://visidata.org/ai

Thanks for that list of levels; it's helpful for understanding how these things are playing out and where I stand relative to other engineers using LLM agents.

I can say that I feel comfortable at approximately AI level 5, with occasional forays to AI level 6 when I completely understand the interface and can test it but don't fully understand the implementation. It's not really that different from working on a team, with the agent as a team member.

kfarr · yesterday at 8:30 PM

To clarify, does this mean Anthropic employees don't understand Claude Code's code, since it's Level 7? I have to believe they have staff capable of understanding the output, and that they spend at least some time reviewing code for a product like this.

forrestthewoods · yesterday at 7:21 PM

Interesting breakdown of levels. I like it.

I’m not sure I believe that Level 7 exists for most projects. It is utterly *impossible* for most non-trivial programs to have a spec that doesn't have deep, carnal knowledge of the implementation. It cannot be done.

For most interesting problems the spec HAS to include implementation details, architecture, and critical data structures. At some point you're still writing code, just in a different language, and it might actually have been better to write the damn struct declarations by hand and then let AI run with it.
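A sketch of that last point in TypeScript (the TTL-cache example and all names are my own illustration, not the commenter's): the human hand-writes the critical type declarations as the spec, and the agent implements against them.

```typescript
// Hand-written spec: the human pins down the critical data structures
// and the contract, then lets the agent fill in behavior against them.
interface CacheEntry<V> {
  value: V;
  expiresAt: number; // epoch milliseconds
}

interface TtlCache<K, V> {
  set(key: K, value: V, ttlMs: number, now: number): void;
  get(key: K, now: number): V | undefined; // undefined once expired
}

// What an agent might produce against that spec.
class MapTtlCache<K, V> implements TtlCache<K, V> {
  private store = new Map<K, CacheEntry<V>>();

  set(key: K, value: V, ttlMs: number, now: number): void {
    this.store.set(key, { value, expiresAt: now + ttlMs });
  }

  get(key: K, now: number): V | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= now) return undefined;
    return entry.value;
  }
}
```

Even this tiny spec had to make implementation-flavored decisions (time passed explicitly for determinism, expiry semantics at the boundary), which is exactly the "writing code in a different language" problem.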

h14h · yesterday at 9:00 PM

[dead]