[Author here]
A friend told me my post was gaining some momentum on HN and I've read through the comments and found a bunch of good insights.
I especially liked the one from @arach: "this feels like language from another era", which I hope means it's evident my post was written by someone who loves the craft and refrains from prompting an LLM for "a blog post on topic X".
I'm also trying to decide whether I side with the numerous people who want to sort development tasks into either trivial or novel. Someone wrote that if we break each issue down into its components, the above holds, but I don't know whether the developers who don't want to estimate are keen on doing such a rigorous breakdown of each feature.
Regardless, it's been super fun to read all the comments. You have already found my blog, so feel free to connect in any way you see fit (or not at all).
From both the developer and manager side of things, I've found that the most important attribute of estimates is frequently the least paid attention to: that they be kept up to date.
When you discover more work hidden under that "simple" pile of code, you absolutely HAVE to update your estimate. Add more points, add more tickets, whatever. But then your various managers have the ammunition to decide what to do next - allocate more resources to the project, descope the project, push back the release date, etc.
Far too frequently, the estimate is set in stone at the start of the project and used as a deadline that is blown past, with everyone going into crisis mode at that point. The earlier the estimate is updated, the calmer and more comprehensive the action everyone responsible can take.
The biggest problem I've seen isn't the estimate itself but the telephone game that happens after. You say "probably 2-3 weeks" to your manager, who tells the PM "about 2 weeks", who tells sales "mid-month", who tells the customer "the 15th".
By the time it reaches the customer, your rough guess with explicit uncertainty has become a hard commitment with legal implications. And when you miss it, the blame flows backward.
What's worked for me: always giving estimates in writing with explicit confidence levels, and insisting that any external date includes at least a week of buffer that I don't know about. That way when the inevitable scope creep or surprise dependency shows up, there's room to absorb it without the fire drill.
This is why I push for Kanban whenever I am a PO. If we can ballpark an estimate, I can prioritize it. If we cannot ballpark an estimate, I can prioritize the research to clear out some of the unknowns. But most importantly, we set an expectation of rolling feature rollouts, not inflexible release dates. We communicate both internally and externally the next few things we are working on, but no hard dates. The article correctly identifies that hard release dates communicated to customers are the root cause of problems, so I simply don't give such things out.
I think the main issue with time estimates is that they don't follow a normal distribution (like many things we are taught in school) but a log-normal distribution, which is heavily skewed and has a long right tail. This misconception is the reason people fail to understand why time estimates are inherently hard.
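As a rough illustration of that skew (the task parameters below are made up), a quick Monte Carlo sketch shows how the median, mean, and P90 of a log-normal task time pull apart:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: median estimate of 5 days, with multiplicative
# uncertainty (sigma ~0.6 on the underlying normal), i.e. surprises
# tend to multiply the effort rather than add to it.
median_days, sigma = 5.0, 0.6
samples = rng.lognormal(mean=np.log(median_days), sigma=sigma, size=100_000)

print(f"median: {np.median(samples):.1f} days")          # ~5 days, the gut-feel answer
print(f"mean:   {samples.mean():.1f} days")               # noticeably higher
print(f"P90:    {np.percentile(samples, 90):.1f} days")   # the long right tail
```

A project built from many such tasks runs "late" relative to the sum of gut-feel medians even when every individual guess was honest, because the mean and upper percentiles sit well above the median.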
What the article suggests is basically Kanban. It's the most effective SW development method, and a similar scheduling system (a dispatch queue) is used by operating systems. However, management doesn't want Kanban, because they want to promise things to customers.
You can make good estimates, but it takes extra time researching and planning. So you spend cycles estimating instead of maximizing throughput, and to reduce risk the plan is usually padded, so you lose extra time there thanks to Parkinson's law. IME a (big) SW company prefers to spend all these cycles, even though technically it is irrational (which is why we don't do it in operating systems).
The best hack for improving estimation is to never give a single number. Anyone asking for a single number, without context, doesn't know what they are doing; it's unlikely that their planning process is going to add any value. I think they call this being "not even wrong".
Instead you should be thinking in probability distributions. When someone asks for your P90 or P50 of project completion, you know they are a serious estimator, worth your time to give a good thoughtful answer. What is the date at which you would bet 90:10 that the project is finished? What about 99:1? And 1:99? Just that frameshift alone solves a lot of problems. The numbers actually have agreed-upon meaning, there is a straightforward way to see how bad an estimate really was, etc.
At the start of a project have people give estimates for a few different percentiles, and record them. I usually do it in bits, since there is some research that humans can't handle more than about 3 bits +/- for probabilistic reasoning. That would be 1:1, 2:1, 4:1, 8:1, and their reciprocals. Revisit the recorded estimates during the project retrospective.
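For illustration, here is a minimal sketch of what recording those odds-based estimates could look like (the dates and odds are invented); the point is simply that each estimate becomes something you can score at the retrospective:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OddsEstimate:
    odds_for: int       # e.g. 8 in "8:1 the project is done by this date"
    odds_against: int
    done_by: date

    @property
    def probability(self) -> float:
        return self.odds_for / (self.odds_for + self.odds_against)

# Hypothetical project: the same estimator commits to several percentiles.
estimates = [
    OddsEstimate(1, 8, date(2024, 3, 1)),   # ~11%: almost certainly not this early
    OddsEstimate(1, 1, date(2024, 4, 15)),  # 50/50
    OddsEstimate(8, 1, date(2024, 6, 1)),   # ~89%: would bet heavily on this
]

actual_finish = date(2024, 5, 10)  # recorded at the retrospective

for e in estimates:
    hit = actual_finish <= e.done_by
    print(f"{e.probability:.0%} confidence date {e.done_by}: {'hit' if hit else 'miss'}")
```

Over several projects this lets you check calibration: the dates you gave 8:1 odds on should be hit roughly nine times out of ten.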
You can make this as much of a game as you want. If you have play-money at your company or discretionary bonuses, it can turn into a market. But most of the benefit comes from playing against yourself, and getting out of the cognitive trap imposed by single number/date estimates.
The trouble with estimation is that few places record the estimates and the actuals for future reference.
As I've pointed out before, the business of film completion bonds has this worked out. For about 3% to 5% of the cost of making a movie, you can buy an insurance policy that guarantees to the investors that they get a movie out or their money back.
What makes this work is that completion bond companies have the data to do good estimations. They have detailed spending data from previous movie productions. So they look at a script, see "car chase in city, 2 minutes screen time", and go to their database for the last thousand car chase scenes and the bell curve of how much they cost. Their estimates are imperfect, but their error is centered around zero. So completion bond companies make money on average.
The software industry buries their actual costs. That's why estimation doesn't work.
When teams don't need strong estimates, then Kanban works well.
When teams do need strong estimates, then the best way I know is doing a project management ROPE estimate, which uses multiple perspectives to improve the planning.
https://github.com/SixArm/project-management-rope-estimate
R = Realistic estimate. This is based on work being typical, reasonable, plausible, and usual.
O = Optimistic estimate. This is based on work turning out to be notably easy, or fast, or lucky.
P = Pessimistic estimate. This is based on work turning out to be notably hard, or slow, or unlucky.
E = Equilibristic estimate. This is based on success as 50% likely such as for critical chains and simulations.
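For illustration only, a tiny sketch of carrying those four numbers around as data; how to blend them is a team decision, and the summary convention below is my own assumption rather than anything prescribed by the ROPE write-up:

```python
from dataclasses import dataclass

@dataclass
class RopeEstimate:
    realistic: float      # R: typical, reasonable, plausible, usual
    optimistic: float     # O: notably easy / fast / lucky
    pessimistic: float    # P: notably hard / slow / unlucky
    equilibristic: float  # E: the 50/50 point, e.g. for critical chains

    def summary(self) -> str:
        # Assumed convention: plan on the equilibristic value and report
        # the optimistic-pessimistic spread as the uncertainty band.
        return (f"plan on {self.equilibristic} days "
                f"(realistic {self.realistic}, "
                f"range {self.optimistic}-{self.pessimistic})")

print(RopeEstimate(realistic=10, optimistic=6, pessimistic=25, equilibristic=12).summary())
```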
I think that executives requiring time estimates from product owners (PMs, Engineering Managers) is an instrument for putting them into de-facto 'debt' servitude, and it provides a constant stream of justification for dismissal with cause. As others have commented, if the ability to estimate timing perfectly were there, it would no longer be an innovative product. Same with requiring sales forecasts from salespeople: there's no way for the salesperson to know, so they are constantly on the chopping block for falling short of forecasts they are forced to generate. I imagine the above is more or less tacitly acknowledged in tip-sharing conversations between and among execs and their investors.
One thing frustrating for me is when folks leave $BigCo, with its methods (i.e. estimating time to complete, sprint planning), and try to apply those same methods at a very early-stage company.
Estimates don't work there at all - everything is new.
So, flip it. Use known values to prioritize work, that is: client demand and (potential) revenue. Then allocate a known time/budget to the necessary project, see how far you get, and iterate. The team can move faster. It looks chaotic.
At some (uncomfortable) point, however, you need to rotate into the "standard" process.
One thing that changed my way of thinking about estimates is reading that 86% of engineering projects, regardless of what kind of engineering (chemical, infrastructure, industrial, etc) go over budget (in time or money).
Missing estimates isn't unique to software; it's common across all engineering fields.
I have never been on a project where estimates were spot-on, and I've been doing this for 15 years now. By now hundreds of features have floated down the river and hundreds of meetings have been held.
Estimates are often complicated because there are way too many variables involved to be accurate: politics within companies, restructuring of teams, the customer changing their mind, the reality you expected turning out slightly different, your architecture having shortcomings you discover late in a project, the teams your work depends on disbanding, … and a million other things.
Theoretically you could update your estimates in a Scrum meeting, sure, but to be honest, this has always been nothing but a fantasy. We rarely work in a void: our features have been communicated higher up and have already been pitched to customers. In a fully transparent and open organization you might update your estimates and try to explain this to your customers. In reality, though? I have never seen it.
While this sounds very negative, my take on it is not to waste too much time on estimates. Give a range of time you expect your features to fall into, and go on with getting your work done.
I tried searching the whole discussion for the CMMI keyword to find something relatable. I guess, like everything else, people lose the memory over a cycle and rediscover it as if it were something new. Another thing I realize from this is that I am probably getting older.
Businesses have many aspects of their operation that are unpredictable: ask the legal team exactly what the result of a legal action will be, and when it will be completed by, and you'll get a fuzzy answer. Ask the marketing team exactly how many new signups will result from this new ad campaign and you'll get a fuzzy answer. Businesses can easily cope with unpredictability, this is not a problem.
The problem is that software development has not been treated as one of these unpredictable operations. It absolutely is unpredictable, we know this from decades of experience and all the academic research on the subject. But, probably because of history, politics and power imbalances, software dev teams have very rarely managed to convince executive teams that software development must be treated as unpredictable. As a result, software dev as a profession has a bad reputation for not delivering on their promises.
When we do manage to convince executive teams that software development is unpredictable, good things happen. The author's point that "tax software must be released at tax time" still happens, but the planning around the specific feature set takes into account that if the release date is fixed then the feature set in the release will not be. Again, it's not a problem to deal with the unpredictability, as long as everyone accepts that it is unpredictable.
I've always seen estimates as trying to guess the highest number the PO will accept; the time or effort involved in actually completing the task is irrelevant. I have never had a PO or anyone else complain that a task was completed more quickly than expected. However, I do have to be careful not to tell them it is complete too early, lest they start expecting shorter cycles.
At least in my company we've stopped calling them "estimates". They are deadlines, which everyone has always treated "estimates" as anyway.
Unfortunately, in the real world deadlines are necessary. The customer isn't just mad that they didn't get the shiny new thing; especially in the case of B2B stuff, the customer is implementing plans and projects based on the availability of feature X on date Y. Back to the initial point: these deadlines often come down to how quickly the customer will be able to implement their end of the solution. If they aren't going to be ready to use the feature for six months, there's no reason for us to bust our asses trying to get it out in a week.
As a largely solo dev, I found I can't estimate well unless it's a common task, and tasks easily grow exponentially if they touch too many layers.
Asking "how long do you want me to spend on this?" got better results, because it gave me a better idea of how important tasks were to the business, and I can usually tell if something is going to take longer than they want (or when we need to discuss scoping it back, or just abandoning the feature).
I largely agree.
Another way to say this is that an estimate becomes a commitment to not learn.
Re-planning is seen as failure by [management].
Re-planning is what happens when you learn that a previous assumption was not correct.
You should be encouraging learning, as this is THE cornerstone of software development.
My success rate went way up by only making 4 estimates.
1 day
1 week
1 month
1 year
This communicates the level of uncertainty/complexity. 5 days is way too precise. But saying 1 week is more understandable if it becomes 2 weeks.
I don’t estimate in hours or use any number other than 1
In the 70's a Brazilian programmer told me his method was to make his best guess how long something would take, double it, then promise that amount plus-or-minus 50%.
I'll give you a random estimate and start working. The more I work, the more the estimate can be refined.
By the time my work is done, that estimate will be perfect.
Wait.
Perhaps my true job is to create a perfect estimate? Is coding only a side effect?
A couple of decades back, PMs used to look at historical data to guide the estimates for a new project. If similar coding work took 2 weeks on average in the past, that gives some basis.
So, I think the issue is about whether it is a routine workflow work which has well-tested historical timelines or not.
Nevertheless, estimates are needed at some level of granularity. When you order something on Amazon, wouldn't you like an estimate of when the item will be delivered to you?
Even if coding work can't be estimated, the overall project requires estimation. Someone needs to commit to timelines and come under pressure; distributing that pressure is only fair.
I've found the best estimates are not estimates but non negotiable deadlines with fixed outcome.
"You have six weeks to do X for $$$" or "I'll get it done in six weeks or you don't pay"
Where I work there is no penalty for being late or not hitting a deadline; life goes on and work continues. I have seen that when there are specific dates and metrics, people suddenly work in focused effort, sometimes working weekends or celebrating finishing early.
If you know your team's velocity, this methodology has worked well for me in the past:
- ask enough questions about the scope of work to do a prototype
- build a prototype to identify constraints / complexity
- ask followup questions
- break the body of work down into a list of tickets (spike/discovery tickets included)
- use the PERT formula to estimate the total number of story points for the body of work
- take the total estimated points and divide by your team velocity to get a duration (e.g. 100 total points / 40 points per 2-week sprint = 2.5 sprints ≈ 5 weeks); a rough sketch of these last two steps is below
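A sketch of those last two steps, using the standard PERT weighting of (optimistic + 4 × most likely + pessimistic) / 6 per ticket; the ticket names and numbers are invented:

```python
# Per-ticket story-point estimates: (optimistic, most likely, pessimistic).
tickets = {
    "spike: evaluate payment provider": (1, 2, 5),
    "API endpoint":                     (3, 5, 13),
    "frontend form + validation":       (5, 8, 21),
    "migration + rollout":              (2, 3, 8),
}

def pert(o: float, m: float, p: float) -> float:
    """Standard PERT expected value: weighted toward the most likely case."""
    return (o + 4 * m + p) / 6

total_points = sum(pert(*est) for est in tickets.values())

velocity_points = 40        # points the team finishes per sprint
sprint_length_weeks = 2

sprints = total_points / velocity_points
print(f"{total_points:.1f} points ≈ {sprints:.1f} sprints "
      f"≈ {sprints * sprint_length_weeks:.1f} weeks")
```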
Estimates are difficult, but if you monitor your estimate vs actual, and then adjust your next estimate accordingly, then it becomes easier to have decent estimates.
There is also the adage that if you are late, then 100% of the time the customer will be unhappy, but if you are early, then 100% of the time the customer will be pleased.
So make people more pleased :-)
So always overestimate a bit (say 25%).
I wonder how different the perception of these projects being late or (massively) over budget would be if we used different words. Bear with me here...
Words mean things. Estimate carries a certain weight. It's almost scientific sounding. Instead, we should use the word "guess".
It's exactly equivalent, but imagine the outcome if everyone in the chain, from the very serious people involved in thinking up the project, to funding the project, to prioritising and then delivering the project, all used the word "guess".
Now, when the project is millions of dollars over budget and many months/years late, no one is under any pretence that it was going to be anything else.
I tried this once. It turns out serious people don't like the idea of spending millions of dollars based on "guessing", or even letting developers "play" in order to better understand the guesses they are forced to make, even when it turns un-educated guesses into educated guesses.
Of course, none of this would improve the outcome, but at least it sets expectations appropriately.
I've always found this sound, rational, ROI-driven approach to product management a little off the mark. Software isn't like real estate or investing in T-bills: you don't invest $X in development and get a nice 10% annualized return over the next 10 years, despite how seductive such thinking can be.
It is largely a "hits" business, where 1% of the activities you do result in 99% of the revenues. The returns are non-linear, so there should be almost no focus on input estimation. If your feature only makes sense if it can be done in 3 months but doesn't make economic sense if it takes more than 6 months, delete the feature.
Companies that need estimates have an Estimations department.
These are usually companies that are led by engineers and that perform engineering work.
Software developers aren’t engineers.
Project managers have no authoritative training, certification or skills to manage software development projects.
30 years ago my boss at a large defense/aviation contractor told me estimating software projects was a very valuable skill, but all estimates were always wrong because they are simplifications and to keep that in mind -- his words.
Mainly they are useful to build belief and keep a direction towards the goal.
Models of any kind in whatever domain are necessarily always something less than reality. That is both their value and weakness.
So estimates are models, less than reality. Therefore we should not expect them to be useful beyond "plans are useless, but planning is indispensable" -- I think that's Eisenhower.
It's also difficult for LLMs it seems. If I forget to add instructions to skip resource estimates, Claude will estimate a week or two, then bang it out in under an hour.
For humans, 2x the original estimate.
"Predictions are hard; especially about the future." –Yogi Berra, MLB HoF catcher & manager
I don’t think I dislike being forced to estimate. I dislike being asked to estimate by the same people that consistently make those estimates worthless by introducing changes at literally every point in the process.
https://youtu.be/QVBlnCTu9Ms?si=k_UolNc2o6UFGS9f
#NoEstimates
Yes, be agile. Yes, measure all the things.
But estimation sets everyone up for disappointment.
I have a solution but I don't think companies care about this level of meta-analysis on how people work together. They think they do but in reality they just care about optics, and the status quo culture has a huge weight on continuing in the same direction, largely dictated by industry "standards".
In essence, estimates are useless. There should only be deadlines and the engineers' confidence in hitting those deadlines. To the extent there are estimates, they should be an external observation on the part of PMs and POs, based not only on the past but also on knowledge of how each team member performs. This of course only works if engineers are ONLY focusing on technical tasks, not creating tickets or doing planning. The main point of failure, in an abstract sense, is making people estimate or analyze their own work, which comes with a bias. That bias needs to be eliminated, and at the same time you give engineers the opportunity to optimize their workflows and maximize their output.
TL;DR: engineers should focus strictly on technical work because it lets them optimize within that domain, while other roles (PM, PO, or whoever) should be creating tasks and estimating. Of course this doesn't happen, because there are hard biases in the industry that are hard to break.
My experience is that I basically always have to overestimate if I can get away with it, because otherwise, if something goes wrong, I will be pushed to do free overtime to complete all the work assigned in a given sprint.
In my experience, most of the time estimates are difficult because prerequisites and constraints are not clear or not fully known.
But if you are experienced enough, with some Fermi-style math you can produce good-enough estimates most of the time.
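As an illustration (every number below is hypothetical), a Fermi-style estimate is just an explicit product of rough factors, each of which can be sanity-checked on its own:

```python
# Hypothetical example: estimating a data-model migration Fermi-style.
tables_to_migrate   = 30    # known from the schema
hours_per_table     = 2     # rough: rename, backfill, test
review_overhead     = 1.3   # ~30% extra for code review and rework
integration_testing = 16    # one-off cost, in hours
focus_hours_per_day = 5     # realistic productive hours per day

total_hours = tables_to_migrate * hours_per_table * review_overhead + integration_testing
print(f"~{total_hours:.0f} hours ≈ {total_hours / focus_hours_per_day:.0f} working days")
```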
It is often not just about the difficulty of estimating the time for specific tasks, but also about what assumptions went in. Sometimes these assumptions weren't written out explicitly, and other times they were (deliberately?) removed. Just one example of a broken assumption: a project can be run in many ways. You can have an estimate done based on A-team resources and high priority, but the moment the contract or whatever is signed, it is decided to outsource the whole work to a new team who have never worked on the code before and sit thousands of miles away. To compensate, 2-3x as many people are assigned. Add in a non-technical project manager and scrum master and all kinds of resources that were never envisaged but who will report time on the project, etc. You get the idea. And this was just one type of assumption that could be broken!
There’s a well-established Agile technique that in my experience actually succeeds at producing usable estimates.
The PM and team lead write a description of a task, and the whole team reads it together, thinks about it privately, and then votes on its complexity simultaneously using a unitless Fibonacci scale: 1,2,3,5,8,13,21... There's also a 0.5 used for the complexity of literally just fixing a typo.
Because nobody reveals their number until everyone is ready, there's little anchoring, adjustment or conformity bias which are terribly detrimental to estimations.
If the votes cluster tightly, the team settles on the convergent value. If there's a large spread, the people at the extremes explain their thinking. That's the real value of the exercise: the outliers surface hidden assumptions, unknowns, and risks. The junior dev might be seeing something the rest of the team missed. That's great. The team revisits the task with that new information and votes again. The cycle repeats until there's genuine agreement.
This process works because it forces independent judgment, exposes the model-gap between team members, and prevents anchoring. It’s the only estimation approach I’ve seen that reliably produces numbers the team can stand behind.
It's important that the scores be unitless estimates of complexity, not time. How complex is this task? not How long will this task take?
One team had a rule that if a task had complexity 21, it should be broken down into smaller tasks, and that an 8 meant roughly the complexity of implementing a REST API endpoint.
A PM can use these complexity estimations + historical team performance to estimate time. The team is happy because they are not responsible for the PM's bad time estimation, and the PM is happy because the numbers are more accurate.
A clear description with background appears in Mike Cohn’s original writeup on Planning Poker: https://www.mountaingoatsoftware.com/agile/planning-poker
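As a toy sketch of one voting round, assuming "clusters tightly" means the votes span at most two adjacent values on the scale (that threshold is a team choice, not part of the technique):

```python
FIB_SCALE = [0.5, 1, 2, 3, 5, 8, 13, 21]

def resolve_round(votes: dict[str, float]) -> float | None:
    """Return the agreed complexity, or None if the outliers need to explain first."""
    ranks = sorted(FIB_SCALE.index(v) for v in votes.values())
    if ranks[-1] - ranks[0] <= 1:      # votes cluster on adjacent scale values
        return max(votes.values())     # convention: settle on the higher of the two
    low = min(votes, key=votes.get)
    high = max(votes, key=votes.get)
    print(f"spread too wide: {low} ({votes[low]}) and {high} ({votes[high]}) explain, then re-vote")
    return None

# Hypothetical first round: the junior dev sees hidden complexity.
print(resolve_round({"alice": 3, "bob": 5, "junior": 13}))   # triggers discussion
# Second round after discussion:
print(resolve_round({"alice": 8, "bob": 8, "junior": 8}))    # converges on 8
```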
Maybe software industry should learn from other industries where people don't see estimates as something unthinkable?
I have read about the rule of multiplying any time estimate by 1.57, but IME it should rather be multiplied by 7.85.
Prototyping goes a long way in helping come up with estimates.
I always say, "it's a prediction, not a promise."
Estimates are like traveling from point A to point B through completely unpredictable terrain. Sometimes you can just go straight and easily beat the estimate. Sometimes you find a massive valley and need to build a bridge to continue, while your PM is furiously yelling over the radio that the estimated deadline was a week ago.
https://en.wikipedia.org/wiki/Delphi_method > https://news.ycombinator.com/item?id=26368011
https://blog.pragmaticengineer.com/yes-you-should-estimate/ > https://news.ycombinator.com/item?id=27006853
https://josephmate.github.io/PowersOf2/ Complexity Estimator
https://earthly.dev/blog/thought-leaders/ > https://news.ycombinator.com/item?id=27467999
https://jacobian.org/2021/may/20/estimation/ > https://news.ycombinator.com/item?id=27687265
https://tomrussell.co.uk/writing/2021/07/19/estimating-large... > https://news.ycombinator.com/item?id=27906886
https://www.scalablepath.com/blog/software-project-estimatio...
https://estinator.dk/ > https://news.ycombinator.com/item?id=28104934
https://news.ycombinator.com/item?id=28662856 How do you do estimates in 2021?
https://web.archive.org/web/20170603123809/http://www.tuicoo... Always Multiply Your Estimates by π > https://news.ycombinator.com/item?id=28667174
https://lucasfcosta.com/2021/09/20/monte-carlo-forecasts.htm... > https://news.ycombinator.com/item?id=28769331
https://tinkeredthinking.com/index.php?id=833 > https://news.ycombinator.com/item?id=28955154
https://blog.abhi.se/on-impact-effort-prioritization > https://news.ycombinator.com/item?id=28979210
https://www.shubhro.com/2022/01/30/hacks-engineering-estimat...
https://www.paepper.com/blog/posts/monte-carlo-for-better-ti...
https://drmaciver.substack.com/p/task-estimation-101 > https://news.ycombinator.com/item?id=32177425
https://morris.github.io/e7/#?t=
https://stevemcconnell.com/17-theses-software-estimation-exp...
https://www.doomcheck.com/ > https://news.ycombinator.com/item?id=34440872
https://github.com/kimmobrunfeldt/git-hours
https://pm.stackexchange.com/questions/34768/why-are-develop... > https://news.ycombinator.com/item?id=35316808
https://erikbern.com/2019/04/15/why-software-projects-take-l... > https://news.ycombinator.com/item?id=36720573
https://news.ycombinator.com/item?id=42173575
https://www.thecaringtechie.com/p/8-guaranteed-ways-to-annoy... > https://news.ycombinator.com/item?id=43146871
Estimates are only difficult because the entire industry has convinced everyone that we do something so special that we have no idea how long it will take to do :)
An LLM's guess is as good as anyone's.
The thing that sucks is that when I avoid giving estimates, I'm not trying to be difficult, I'm being honest about the unknowns of the project and the inherent uncertainties and messiness of software development. I'm helping protect myself and the rest of the team from making plans based off of bad estimates.
But I get all this pushback when I do that, such that the path of least resistance is to give some bullshit estimate anyway. Or I get asked to make a "rough guesstimate", which inevitably turns itself into some sort of deadline anyway.
Garbage in, garbage out. Inaccurate estimates, unreasonable timelines, stressed devs and upset PMs.
I'm so over working on software teams.
It's a shame that developers read these articles and not project managers.
The unique thing about estimates in software engineering is that if you do it right, projects should be impossible to estimate!
Tasks that are easiest to estimate are tasks that are predictable, and repetitive. If I ask you how long it'll take to add a new database field, and you've added a new database field 100s of times in the past and each time they take 1 day, your estimate for it is going to be very spot-on.
But in the software world, predictable and repetitive tasks are also the kinds of tasks that are most easily automated, which means the time it takes to perform those tasks should asymptotically approach 0.
But if the predictable tasks take 0 time, how long a project takes will be dominated by the novel, unpredictable parts.
That's why software estimates are very hard to do.