Why is it important that a dev can’t do fizzbuzz without ai?
If they can ship code that matches a spec, why does it matter if they’re using ai or not?
Genuinely curious.
FizzBuzz is such an incredibly simple problem that if you can’t do it, I struggle to see how you’d be able to complete any task that requires very basic reasoning and very basic coding knowledge. And if an AI system can do those parts, what am I getting for spending tens of thousands of pounds per year hiring a person who can’t? Wouldn’t I just tag Codex on the tickets?
I’m not talking about gotcha-level stuff here, where the code didn’t compile the first time because of a missing bracket, or even getting it wrong on the first attempt. They couldn’t do FizzBuzz in a language of their choice, at all.
Those who could do it were always annoyed at having to, because how could someone coming in for a contract position not be able to? They never saw what an effective filter it really was.
> If they can ship code that matches a spec, why does it matter if they’re using ai or not?
The inability to write fizzbuzz strongly implies their inability to understand what they've shipped. Review is some significant portion of the job. Understanding of the product is also part of the job.
Specs are also, in a sense, scaled-down, fuzzy, natural-language descriptions of a feature. The fuzziness is a source of bugs, or at least of a mismatch between the actual desired feature and what was written down at spec-writing time. As such, just matching a spec is the bare minimum a good dev should be doing. They should understand what the spec is _not_ saying, the holes in their implementation, and how their implementation enables or hinders the next feature, and the one after that. I don't think any of that is possible without understanding what was actually implemented.
For the same reason it's important your mechanic can identify which parts of a car are the wheel.
Who cares as long as the car is fixed, right? As long as the mechanic can Chinese-room his way to a working car, why does it matter how much of it he actually understands?
And why hire the mechanic instead of hiring the Chinese room?
Why hire them at all then, just ask them what their favorite AI is and use that
To understand the code they are shipping requires some level of proficiency. Their inability to do fizzbuzz without AI calls that into question.
If you can’t even write a for loop, how can you verify the ai code you generated isn’t going to wipe the prod database?
It’s about deeply understanding what you’re doing. Like as a kid before you knew how to ride a bike: you could sit on a bike and pedal, but until it “clicked” you couldn’t balance and keep moving forward. FizzBuzz tests your ability to reason through a problem that seems simple on its face but is easy to get wrong and/or overthink.
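For reference, here is a minimal sketch of the classic FizzBuzz variant being discussed (multiples of 3 and 5, counting 1 to 100; Python chosen arbitrarily since the thread says "a language of their choice"):

```python
def fizzbuzz(n: int) -> str:
    # Check 15 first: a multiple of both 3 and 5 must not fall
    # through to the single-word branches.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```

The usual way to "get it wrong" is exactly that ordering: testing `n % 3` before `n % 15` prints "Fizz" for 15 and never reaches the combined case.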
How will you know that it produced correct code if you don’t know how to write it yourself?
If they’re not a value add over the base AI, they aren’t worth hiring over just using the base AI.
First: FizzBuzz is a test to know if you understand the most basic constructs of programming. The kind of thing you learn in the first week of CS101. I forgot what it was, and when I looked at the problem I knew the answer.
More broadly: In the short/medium term, we still need humans who have the skills to understand software largely on their own. We will always need those who understand software engineering and architecture. Perhaps in 25 years LLMs will be so good that learning Python by hand will be like learning assembly today. But not yet.
The field is not ready for new practitioners to be know-nothing prompt engineers. If we do that, we cut the legs out from under the education pipeline for programming.
If you can’t do fizzbuzz without AI you have no business being in this career.
It doesn't. It's just a low-end skill filter that got really popular. It could just as easily have been replaced by another test, like "is this word a palindrome?"
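That alternative test is about the same size as FizzBuzz; a minimal sketch of one common form (exact character match, no case or punctuation normalization assumed):

```python
def is_palindrome(word: str) -> bool:
    # A palindrome reads the same forwards and backwards.
    return word == word[::-1]
```

The point stands either way: any problem that exercises a loop or a comparison and one conditional works as the same low-end filter.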
> If they can ship code that matches a spec, why does it matter if they’re using ai or not?
I am perfectly capable of writing specs, and feeding them to 3 separate copies of Claude Code all by myself. Then I task switch between the tmux windows based on voice messages from the pack of Claudes. This workflow is fine for some things, and deeply awful for others.
Basically, if a developer is just going to take my spec and hand it to Claude Code, then they're providing zero value. I could do that myself, and frequently do.
The actual bottleneck is people who can notice, "The god object is crumbling under the weight of managing 6 separate concerns with insufficient abstraction," or "Claude has created 5 duplicate frameworks for deploying the app on Docker. We need to simplify this down to 1 or we're in hell." I will happily fight to hire people who can do the latter work. But those people can all solve FizzBuzz in their sleep.
People who just "ship code that matches a spec" without understanding the technical details are providing close to zero value right now.
There is an interesting niche for people with deep knowledge of customer workflows who can prompt Claude Code. These people can't build finished products using Claude. But they can iterate rapidly on designs until they find a hit. Which we can then fix using people with deeper engineering knowledge and taste.
But if you're not bringing either deep customer knowledge or actual engineering knowledge, you're not adding much these days.