The interesting thing about Rule 1 is that it makes Rules 3-5 follow almost mechanically. If you genuinely accept that you cannot predict where the bottleneck is, then writing straightforward code and measuring becomes the only rational strategy. The problem is most people treat these rules as independent guidelines rather than as consequences of a single premise.
In practice what I see fail most often is not premature optimization but premature abstraction. People build elaborate indirection layers for flexibility they never need, and those layers impose real costs on every future reader of the code. The irony is that abstraction is supposed to manage complexity, but applied prematurely it just creates a different kind.
Can't agree more on 5. I've repeatedly found that any really tricky programming problem is (eventually) solved by iterative refinement of the data structures (and the APIs they expose / are associated with). When you get it right the control flow of a program becomes straightforward to reason about.
To address our favorite topic: while I use LLMs to assist on coding tasks a lot, I think they're very weak at this. Claude is much more likely to suggest or expand complex control flow logic on small data types than it is to recognize and implement an opportunity to encapsulate ideas in composable chunks. And I don't buy the idea that this doesn't matter since most code will be produced and consumed by LLMs. The LLMs of today are much more effective on code bases that have already been thoughtfully designed. So are humans. Why would that change?
Once upon a time in the 90's I was at work at 2am and I needed to implement a search over a data set. This function was going to be eventually called for every item, thus if I implemented it as a linear search, it would be n^2 behavior. Since it was so late and I was so tired, I marked it as something to fix later, and just did linear search.
Later that week, now that things were working, I profiled the n^2 search. The software controlled a piece of industrial test equipment, and the actual test process took around 4 hours to complete. Even with the very worst case, far-beyond-reasonable data set, leaving the n^2 behavior in would have added something like 6 seconds to that 4-hour runtime.
(Ultimately I fixed it anyways, but because it was easy, not because it mattered.)
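The shape of that tradeoff is easy to reconstruct. A made-up Python sketch (not the original code, and the data is invented) of the "linear search called once per item" pattern versus the fix-it-later indexed version:

```python
import time

def find_linear(items, key):
    # O(n) scan; called once per item, this is the O(n^2) version
    for k, v in items:
        if k == key:
            return v
    return None

items = [(i, i * 2) for i in range(2000)]

# n^2 version: a linear search for every item
start = time.perf_counter()
for key, _ in items:
    find_linear(items, key)
quadratic_secs = time.perf_counter() - start

# "fix it later" version: build an index once, then O(1) lookups
start = time.perf_counter()
index = dict(items)
for key, _ in items:
    index[key]
indexed_secs = time.perf_counter() - start
```

The indexed version wins by orders of magnitude, but as the story shows, only a measurement against the real 4-hour run tells you whether that matters.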
Rule 3 gets me into trouble with CS majors a lot. I'm an EE by education and entered SW via the bottom floor (embedded C/ASM), so it was late in my career before I knew the formal definition of big-O and complexity.
For most of my career, sticking to rule 3 made the most sense. When the CS major would be annoying and talk about big-O they usually forgot n was tiny. But then my job changed. I started working on different things. Suddenly my job started sounding more like a leetcode interview people complain about. Now n really is big and now it really does matter.
Keep in mind that Rob Pike comes from a different era when programming for 'big iron' looked a lot more like programming for an embedded microcontroller now.
I think it's fine and generous that he credited these rules to the better-known aphorisms that inspired them, but I think his versions are better; they deserve to be presented by themselves, instead of alongside the mental clickbait of the classic aphorisms. They preserve important context that was lost when the better-known versions were ripped out of their original texts.
For example, I've often heard "premature optimization is the root of all evil" invoked to support opposite sides of the same argument. Pike's rules are much clearer and harder to interpret creatively.
Also, it's amusing that you don't hear this anymore:
> Rule 5 is often shortened to "write stupid code that uses smart objects".
In context, this clearly means that if you invest enough mental work in designing your data structures, it's easy to write simple code to solve your problem. But interpreted through an OO mindset, this could be seen as encouraging one of the classic noob mistakes of the heyday of OO: believing that your code could be as complex as you wanted, without cost, as long as you hid the complicated bits inside member methods on your objects. I'm guessing that "write stupid code that uses smart objects" was a snappy bit of wisdom in the pre-OO days and was discarded as dangerous when the context of OO created a new and harmful way of interpreting it.
Running the same codebase for 10+ years with a small team is what finally made me fully internalize these rules.
I've always been a KISS/DRY person but over a decade there are plenty of moments where you're tempted to reach for a fancier database or rewrite something in a trendier stack. What's actually kept things running well at scale is boring, known technologies and only optimizing in the places where it actually matters.
We wrote our principles down recently and it basically just reads like Pike's rules in different words: https://www.geocod.io/code-and-coordinates/2025-09-30-develo...
I feel like 1 and 2 are only applicable in cases of novelty.
The thing is, if you build enough of the same kinds of systems in the same kinds of domains, you can kinda tell where you should optimize ahead of time.
Most of us tend to build the same kinds of systems and usually spend a career or a good chunk of our careers in a given domain. I feel like you can't really be considered a staff/principal if you can't already tell ahead of time where the perf bottleneck will be just on experience and intuition.
Heh, in the early days of C++ (1990ish) I had a notable application of 3+4 involving a doubly linked list with cache pointers (time-sequence data browser so references were likely "nearby" as the user zoomed in; spec was to handle streaming data eventually.) Had problems with it crashing in pointer-related ways (in 1990, nobody had a lot of C++ experience) so I cooked up a really dumb "just realloc an array" version so I could figure out if the problem was above or below data structure... and not only didn't the "dumb" version crash, it was also much faster (and of faster order!) due to amortized realloc - doing a more expensive operation much less often turns out to be a really good trick :-)
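The amortized-realloc trick is worth spelling out. A toy Python sketch of a doubling buffer (names invented), where the occasional copy of n elements is paid for by the n cheap appends that preceded it:

```python
# A toy "just realloc an array" buffer with doubling growth.
class GrowArray:
    def __init__(self):
        self.cap = 1
        self.n = 0
        self.buf = [None]
        self.copies = 0  # total elements moved by "reallocs"

    def append(self, x):
        if self.n == self.cap:
            # "realloc": double the capacity and copy everything over
            self.cap *= 2
            new = [None] * self.cap
            new[:self.n] = self.buf
            self.buf = new
            self.copies += self.n
        self.buf[self.n] = x
        self.n += 1

a = GrowArray()
for i in range(10_000):
    a.append(i)
# Total copy work is bounded: fewer than 2 copied elements per append,
# so each append is O(1) amortized despite the occasional O(n) copy.
assert a.copies < 2 * a.n
```

Doing a more expensive operation much less often really is the whole trick, and it's also why the array stays contiguous and cache-friendly where the linked list never was.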
It's interesting to contrast "Measure. Don't tune for speed until you've measured" with Jeff Dean's "Latency Numbers Every Programmer Should Know" [0].
Dean is saying (implicitly) that you can estimate performance, and therefore you can design for speed a priori - without measuring, and, indeed, before there is anything to measure.
I suspect that both authors would agree that there's a happy medium: you absolutely can and should use your knowledge to design for speed, but given an implementation of a reasonable design, you need measurement to "tune" or improve incrementally.
There are very few phrases in all of history that have done more damage to the project of software development than:
"Premature optimization is the root of all evil."
First, let's not besmirch the good name of Tony Hoare. The quote is from Donald Knuth, and the missing context is essential.
From his 1974 paper, "Structured Programming with go to Statements":
"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
He was talking about structured programming and the use of goto statements. He was talking about making software much harder to reason about in the name of micro-optimizations. He assumed (incorrectly) that we would respect the machines our software runs on.
Multiple generations of programmers have now been raised to believe that brutally inefficient, bloated, and slow software is just fine. There is no limit to the amount of boilerplate and indirection a computer can be forced to execute. There is no ceiling to the crystalline abstractions emerging from these geniuses. There is no amount of time too long for a JVM to spend starting.
I worked at Google many years ago. I have lived the absolute nightmares that evolve from the willful misunderstanding of this quote.
No thank you. Never again.
I have committed these sins more than any other, and I'm mad as hell about it.
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
It's so true. When speccing things I always try to focus on the DDL first, because then even the UI falls into place. It's also a place where I see Claude Opus fail when building things.
Worth noting these were not written as rules of programming generally but rules specifically targeted at complexity. They are lifted from the "Complexity" section of Rob's "Notes on Programming in C".
https://www.lysator.liu.se/c/pikestyle.html http://www.literateprogramming.com/pikestyle.pdf
> Rule 5 is often shortened to "write stupid code that uses smart objects".
This is probably the worst use of the word "shortened" ever, and it should be more like "mutilated"?
The attribution to Hoare is a common error — "Premature optimization is the root of all evil" first appeared in Knuth's 1974 paper "Structured Programming with go to Statements."
Knuth later attributed it to Hoare, but Hoare said he had no recollection of it and suggested it might have been Dijkstra.
Rule 5 aged the best. "Data dominates" is the lesson every senior engineer eventually learns the hard way.
"Later never comes" is true, but the fix isn't to optimize early, it's to write code simple enough that optimization is easy when later finally does come. That's what Rule 5 is really about. Get the data structures right and the rest is tractable.
The first four are kind of related. For me the fifth is the important – and oft overlooked – one:
> Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
> Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants.
I get where he's coming from, but I've seen people get this very wrong in practice. They use an algorithm that's indeed faster for small n, which doesn't matter because anything was going to be fast enough for small n, meanwhile their algorithm is so slow for large n that it ends up becoming a production crisis just a year later. They prematurely optimized after all, but for an n that did not need optimization, while prematurely pessimizing for an n that ultimately did need optimization.
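A common shape of this failure, sketched in Python with a made-up dedup task: the "simple" version is fine at n = 10 and melts down at n = 10^6, because the membership test hides a linear scan:

```python
def dedup_list(xs):
    # Fine for small n, but `x not in out` is a linear scan,
    # so the whole function is O(n^2): the production crisis a year later.
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return out

def dedup_set(xs):
    # O(n) on average: set membership is O(1), order preserved via the list
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both are equally simple to write, which is the point: the simplicity argument doesn't force you into the quadratic one.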
I don't disagree with these principles, but if I wanted to compress all my programming wisdom into 5 rules, I wouldn't spend 3 out of the 5 slots on performance. Performance is just a component of correctness: if you have a good methodology to achieve correctness, you will get performance along the way.
My #1 programming principle would be phrased using a concept from John Boyd: make your OODA loops fast. In software this can often mean simple things like "make compile time fast" or "make sure you can detect errors quickly".
"Epigrams in Programming" by Alan J. Perlis has a lot more, if you like short snippets of wisdom :) https://www.cs.yale.edu/homes/perlis-alan/quotes.html
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
Always preferred Perlis' version, that might be slightly over-used in functional programming to justify all kinds of hijinks, but with some nuance works out really well in practice:
> 9. It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
Rule 5 is definitely king. Code acts on data, if the data is crap, you're already lost.
edit: s/data/data structure/
Rule 5 is the one that took me longest to internalize. Coming from frontend development into building a full product with a real database, I kept reaching for complex query logic when the real fix was just restructuring the data. Once the schema was right the queries became obvious. Brooks was right 50 years ago and it's still true.
Given that Rob Pike seems to think having a shell script that uses tar is better than having a `-r` flag for cp [1], I wouldn't give much weight to his philosophy of programming :P.
These rules aged well overall. The only change I would make these days is to invert the order.
Number 5 is timeless and relevant at all scales, especially as code iterations have gotten faster and faster, data is all the more relevant. Numbers 4 and 3 have shifted a bit since data sizes and performance have ballooned, algorithm overhead isn't quite as big a concern, but the simplicity argument is relevant as ever. Numbers 2 and 1 while still true (Amdahl's law is a mathematical truth after all), are also clearly a product of their time and the hard constraints programmers had to deal with at the time as well as the shallowness of the stack. Still good wisdom, though I think on the whole the majority of programmers are less concerned about performance than they should be, especially compared to 50 years ago.
Rule 2 is the one that keeps biting me. You can spend days micro-optimizing functions only to realize the real bottleneck was storing data in a map when you needed a sorted list. The structure of the data almost always determines the structure of the code.
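A hypothetical illustration of that map-vs-sorted-list mismatch: if the hot query is a range scan, a sorted list plus the standard `bisect` module beats a dict no matter how fast the dict's point lookups are (data here is invented):

```python
import bisect

keys = list(range(0, 1000, 3))        # kept sorted
as_dict = {k: True for k in keys}     # same keys, hash-ordered access only

def range_query_dict(lo, hi):
    # a dict can't exploit key order: O(n) scan on every call
    return [k for k in as_dict if lo <= k < hi]

def range_query_sorted(lo, hi):
    # two O(log n) bisects, then a slice of exactly the answer
    i = bisect.bisect_left(keys, lo)
    j = bisect.bisect_left(keys, hi)
    return keys[i:j]
```

Same data, same answers; the structure you store it in decides which queries are cheap.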
The opposite conclusion can be taken from the premise of rule #1 "You can't tell where a program is going to spend its time"
If you can't tell in advance what is performance critical, then consider everything to be performance critical.
I would then go against rule #3 "Fancy algorithms are slow when n is small, and n is usually small". n is usually small, except when it isn't, and as per rule #1, you may not know that ahead of time. Assuming n is going to be small is how you get accidentally quadratic behavior, such as the infamous GTA bug. So, assume n is going to be big unless you are sure it won't be. Understand that your users may use your software in ways you don't expect.
Note that if you really want high performance, you should properly characterize your "n" so that you can use the appropriate technique, it is hard because you need to know all your use cases and their implications in advance. Assuming n will be big is the easy way!
About rule #4, fancy algorithms are often not harder to implement, most of the times, it means using the right library.
About rule #2 (measure), yes, you absolutely should, but it doesn't mean you shouldn't consider performance before you measure. It would be like saying that you shouldn't worry about introducing bugs before testing. You should do your best to make your code fast and correct before you start measuring and testing.
What I agree with is that you shouldn't introduce speed hacks unless you know what you are doing. Most performance comes from giving it consideration at every step. Avoiding a copy here, using a hash map instead of a linear search there, etc... If you have to resort to a hack, it may be because you didn't consider performance early enough. For example, if you took care of making a function fast enough, you may not have to cache its results later on.
As for #5, I agree completely. Data is the most important. It applies to performance too, especially on modern hardware. To give you a very simplified idea, RAM access is about 100x slower than running a CPU instruction, which means you can get massive speed improvements by making your memory footprint smaller and using cache-friendly data structures.
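Python can't show cache lines directly, but the footprint difference that drives this is easy to see with the standard `array` module (exact byte counts are interpreter-dependent):

```python
import array
import sys

n = 100_000
as_list = list(range(n))               # list of pointers to boxed int objects
as_array = array.array('i', range(n))  # packed machine ints, contiguous

# list cost = the pointer array plus every boxed int it points at
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)

# The packed layout is several times smaller, so far more of it
# fits in each cache line and in the cache as a whole.
assert array_bytes < list_bytes
```

The same idea is what struct packing, SoA layouts, and columnar formats are all buying you in lower-level languages.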
When I was young I took time to "optimize" my code where it obviously had no impact, like in simple Python scripts. It was just ego, to look smart. I guess the "early optimization" takes are aimed at young developers who want to show their skills in completely irrelevant places.
Of course, with experience you start to feel when the straightforward suboptimal code will cause massive performance issues. In that case it's critical to take action up front to avoid the mess. It's called software architecture, I guess.
I can’t emphasize the importance of rule-5 enough.
I learnt about rule-5 through experience before I had heard it was a rule.
I used to do tech due diligence for acquisition of companies. I had a very short time, about a day. I hit upon a great time-saving idea of asking them to show their DB schema and explain it. It turned out to be surprisingly effective. Once I understood the schema, most of the architecture explained itself.
Now I apply the same principle while designing a system.
Previous discussion: https://news.ycombinator.com/item?id=15776124 (8 years ago, 18 comments)
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
I'm a big fan of Data Oriented Design. Once you conceptualize how data is stored and transformed in your program, it just has to be reflected in data structures that make it possible.
Modern design approaches tend to focus on choosing the right abstraction, like columnar/row layout, caching, etc. They mostly fail to work with the data optimally. Optimal in this case means getting the most out of the underlying hardware's capabilities: for example, reading large and preferably contiguous blocks of data from magnetic storage, parallel data processing, keeping intermediate results in CPU caches, and utilizing all physical SSD queues.
The "bottleneck" model of performance has limitations.
There are a lot of systems where useless work and other inefficiencies are spread all over the place. Even though I think garbage collection is underrated (e.g. Rustifarians will agree with me in 15 years) it's a good example because of the nonlocality that profilers miss or misunderstand.
You can make great prop bets around "I'll rewrite your Array-of-Structures code to Structure-of-Arrays code and it will get much faster"
https://en.wikipedia.org/wiki/AoS_and_SoA
because SoA usually is much more cache friendly and AoS makes the memory hierarchy perform poorly in a way profilers can't see. The more time somebody spends looking at profilers, and the more they quote Rule 1, the more they get blindsided by it.
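The transformation itself is mechanical. In Python (which won't show the cache effect, but does show the layout change; the particle fields here are made up) it looks roughly like:

```python
# Array-of-Structures: one record per particle; a pass over just `x`
# still drags `y` and `mass` through the memory hierarchy.
aos = [{"x": i, "y": 2 * i, "mass": 1.0} for i in range(1000)]

def sum_x_aos(particles):
    return sum(p["x"] for p in particles)

# Structure-of-Arrays: one contiguous array per field; a pass that
# touches only `x` reads only the `x` array.
soa = {
    "x": [i for i in range(1000)],
    "y": [2 * i for i in range(1000)],
    "mass": [1.0] * 1000,
}

def sum_x_soa(particles):
    return sum(particles["x"])
```

In C, C++, or NumPy the SoA pass touches a fraction of the memory per field, which is where the "much faster with no algorithmic change" prop bet comes from.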
Pretty much live by these in practice... I've had a lot of arguments over #3 though... yes nested loops can cause problems... but when you're dealing with < 100 or so items in each nested loop and outer loop, it's not a big deal in practice. It's simpler and easier to reason with... don't optimize unless you really need to for practical reasons.
On #5, I think most people tend to just lean on an RDBMS for a lot of data access patterns. It helps to have some fundamental understanding of where/how/why you can optimize databases, as well as where it makes sense to consider non-relational (NoSQL) databases too. A poorly structured database can crawl under a relatively small number of users.
Rule 5 doesn't seem to get a lot of attention but I've refactored many complicated nested branchy functions into a table over the years, and it almost always improves speed, size, and ease of future modification.
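The refactor is usually this shape (a made-up shipping-rate example, not anyone's real code): the branchy version encodes policy as control flow, the table version encodes it as data:

```python
# Before: policy buried in control flow
def shipping_rate_branchy(region):
    if region == "us":
        return 5.0
    elif region == "eu":
        return 7.5
    elif region == "apac":
        return 9.0
    else:
        return 12.0

# After: policy is a table; trivially extended, inspected, or loaded from config
RATES = {"us": 5.0, "eu": 7.5, "apac": 9.0}
DEFAULT_RATE = 12.0

def shipping_rate(region):
    return RATES.get(region, DEFAULT_RATE)
```

The table version is smaller, has one code path instead of four, and adding a region no longer means editing a function.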
These rules apply equally well to system architecture. I've been trying to talk our team out of premature optimization (redis cluster) and fancy algorithms (bloom filters) to compensate for poor data structures (database schema) before we know if performance is going to be a problem.
Even knowing with 100% certainty that performance will be subpar, requirements change often enough that it's often not worth the cost of adding architectural complexity too early.
Any software developer who hasn’t read _The Practice of Programming_ by Kernighan and Pike should. It’s not that long and much of it is timeless.
I believe the "premature evil" quote is by Knuth, not Hoare?!
In a world of AI coding I think rule 5 is as important as ever. I don't validate everything Claude does, but I do pay attention to data structure design since it's so important.
Rule 5 about data dominating resonates most in modern systems. The trend is to just throw more code at the problem, when most performance and correctness issues come down to how data flows through the system. Most junior engineers optimize the wrong layer because they start with the code instead of the data model.
I used this with Claude to cross compare against my current project and it found 11 pretty significant improvements. Very awesome set of prompts for the ai to then work on.
> Data structures, not algorithms, are central to programming
I'm a big time leetcode interview hater and reading that felt validating. Why the f*ck am I always asked about algorithms.
In programming, the only rule to follow is that there are no rules: only taste and design efforts. There are too many different conditions and tradeoffs: sometimes what is going to be the bottleneck is actually very clear and one could decide to design with that already in mind, for instance.
"N is usually small" might need to be revisited.
Wonder how many premature optimizations choose C/C++ over python?
Optimization usually trades complexity for speed. Complexity hinders debugging and maintenance. Don't optimize unless you have to and not before you know where the bottleneck is. Straightforward common sense advice as long as hardware is not persistently constraining.
There's an important property that emerges from rules 3 and 4 — because the simple algorithm is easier to implement correctly, you can test the fancy algorithm for correctness by comparing its output to the simple one.
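Concretely, that's differential testing. A made-up example: an obviously-correct linear search as the oracle for a binary search, checked against random inputs:

```python
import bisect
import random

def search_simple(xs, t):
    # the obviously-correct version: first index of t, or -1
    for i, x in enumerate(xs):
        if x == t:
            return i
    return -1

def search_fancy(xs, t):
    # the version you'd actually ship (requires xs sorted)
    i = bisect.bisect_left(xs, t)
    return i if i < len(xs) and xs[i] == t else -1

# Differential test: on random sorted inputs the two must always agree
random.seed(0)
for _ in range(200):
    xs = sorted(random.sample(range(100), random.randint(0, 20)))
    t = random.randrange(100)
    assert search_fancy(xs, t) == search_simple(xs, t)
```

The simple version never ships, but it earns its keep as the spec the fancy version is tested against.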
Very performance focused. Could more accurately be 5 rules of perf. Good list, though.
Here’s the modern version: https://grugbrain.dev/
Idk about rule 1, in my experience it's usually pretty clear which part of code is slow. Maybe it depends on projects, programming language, etc
This reminds me of a portion of a talk Jonathan Blow gave[1], where he justifies this from a productivity angle. He explains how his initial implementation for virtually everything in Braid used arrays of records, and only after finding bottlenecks did he make changes, because if he had approached every technical challenge by trying to find the optimal data structure and algorithm he would never have shipped.
"There's a third thing [beyond speed and memory] that you might want to optimize for which is much more important than either of these, which is years of your life required per program implementation." This is of course from the perspective of a solo indie game developer, but it's a good and interesting perspective to consider.
[1] https://www.youtube.com/watch?v=JjDsP5n2kSM