The Cone of Uncertainty
The American Association of Cost Engineers invented this model for engineering and construction in the chemical industry.
Barry Boehm, among many others, called it the funnel of uncertainty and described its effect on estimation in software projects. Boehm also described wideband Delphi, an adaptation of the Delphi method, to estimate the effort (and thereby the cost) of a software project using a consensus-based technique. It forgoes the independent consultation of each individual expert (or oracle, as in the Greek origin of 'Delphi') in favour of a group discussion and estimation that produces an estimation spread. Steve McConnell described various techniques and heuristics around estimation and uncertainty in several IEEE articles.
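As a rough illustration of how a wideband Delphi session narrows the spread over successive discussion rounds (the experts' estimates below are invented for the example, not taken from any real project):

```python
import statistics

def wideband_delphi(rounds):
    """Summarize each estimation round. In wideband Delphi the group
    discusses assumptions between rounds, so the spread should narrow.

    `rounds` is a list of lists: one list of effort estimates
    (say, person-weeks) per round -- illustrative data only.
    """
    summary = []
    for estimates in rounds:
        summary.append({
            "low": min(estimates),
            "high": max(estimates),
            "median": statistics.median(estimates),
            "spread": max(estimates) - min(estimates),
        })
    return summary

# Hypothetical estimates from four experts over three rounds:
rounds = [
    [8, 20, 12, 30],   # initial estimates
    [10, 18, 14, 22],  # after discussing assumptions
    [12, 16, 14, 18],  # converging toward consensus
]
for i, s in enumerate(wideband_delphi(rounds), start=1):
    print(f"round {i}: median={s['median']}, spread={s['spread']}")
```

The point of the group discussion is visible in the numbers: the median barely moves, but the spread (the uncertainty the team reports) shrinks each round.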
The problem with this depiction of a cone of uncertainty is that in practice it is rarely right. The symmetric cone assumes that estimators are as likely to over-estimate effort as to under-estimate it.
- Are project estimates on average going to remain about the same?
- Is the estimation spread then symmetric?
In my experience of software projects over three decades, the average estimate of effort grows more over time than it shrinks. In large software and IT projects, this effort (and cost) growth is likely to be very significant; projects like Electronic Case File and Sentinel are good (or in fact shockingly bad!) examples of under-estimation and massive effort and cost overruns.
So a symmetric cone of uncertainty is very unlikely to be true in many software projects.
The cone of uncertainty is therefore more likely to be asymmetric: sometimes the outcome remains within the original highest estimate, but sometimes it exceeds even the highest initial estimate. So even in a well-estimated project, the cone of uncertainty is more likely to look like the second diagram on the right.
The average estimate is likely to increase. For experts estimating small to medium projects this increase usually stays within the initial spread (as shown in the second diagram). As projects become larger and more complex (often in a mistaken belief that large IT projects need large organizations), the probability of the estimate increasing dramatically over time is even greater.
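One simple way to see why the cone skews upward is to model actual effort as a lognormal multiple of the estimate. This distribution is my own illustrative assumption, not a claim about any real project, but it captures the asymmetry: overruns are unbounded, while underruns can never go below zero effort.

```python
import random

def simulate_outcomes(initial_estimate, n=10_000, sigma=0.5, seed=42):
    """Illustrative simulation: actual effort = estimate * lognormal
    factor. The lognormal is asymmetric, so even when half of all
    projects come in under the estimate, the *average* actual effort
    still exceeds the estimate -- the overruns outweigh the underruns.
    """
    rng = random.Random(seed)
    return [initial_estimate * rng.lognormvariate(0, sigma) for _ in range(n)]

outcomes = simulate_outcomes(100)  # e.g. 100 person-weeks estimated
mean_effort = sum(outcomes) / len(outcomes)
print(f"estimated 100, mean actual effort {mean_effort:.0f}")
```

Under these assumptions the mean lands noticeably above the estimate even though the median matches it, which is exactly the asymmetric cone described above.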
What do most businesses, managers and project personnel most fear?
When I ask business people like project managers and accountants why they fear uncertainty, the answer is that they fear things going wrong and causing problems, which then take time and effort to fix.
They simply equate uncertainty with risk.
This is a false simplification. Yes, uncertainty can lead to risks that cost time, money, and effort to fix.
But uncertainty has another side too.
In the agile world, we know that certainty is usually a myth!
- Customer requirements are not always clear, and they change.
- Estimates to do work are never 100% accurate.
- Delivery dates vary depending on what work still needs to be done and the productivity of the team.
In fact, if you have an agile mindset...
UNCERTAINTY = OPPORTUNITY
Agile methods usually set time boxes for each activity to limit the amount of work done to a very manageable increment. In Scrum the fundamental time box is the development interval or iteration, called a Sprint, which can be from one to four weeks in duration; see Agile and Scrum.
All other events and ceremonies in Scrum are also time-boxed.
This helps focus estimation and reduce uncertainty, because the work planned for an interval of a few weeks is fairly predictable. There will still be some uncertainty, particularly when a team is first formed and works on a new project. This disappears as a team is coached to a Ready Ready Done Done status.
What I want to propose is a new visual paradigm for depicting uncertainty in a project using an Agile method like Scrum.
In the near term, we reduce uncertainty in what we are going to do next by grooming our product backlog so that the user stories describing the required features or work increments are well defined with a minimum of documentation. The user stories are said to be 'ready', because they are seen to be 'doable'.
During sprint planning the team commits to a reasonably well estimated scope of work for the next sprint. They estimate the scope and complexity of each piece of work, rejecting items they cannot estimate or re-scoping them so that they can commit to delivering them with a known estimate. In XP and Scrum we use story points as this estimate (some people like using T-shirt sizes, but I think this is lame). This reduces the uncertainty of what will be done even more. The user stories are now said to be in a 'ready ready' state, because they are now 'doable' within the sprint time box.
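The selection step during sprint planning can be sketched roughly as follows. The backlog items, story points, and velocity below are invented for illustration, and real planning is a team conversation rather than an algorithm:

```python
def plan_sprint(backlog, velocity):
    """Sketch of sprint-planning selection: take 'ready' stories in
    priority order, send back any the team could not estimate, and
    stop adding stories once the committed points would exceed the
    team's velocity for the time box.

    `backlog`: list of (title, story_points or None), highest priority first.
    """
    committed, points = [], 0
    for title, story_points in backlog:
        if story_points is None:
            continue  # cannot estimate -> reject or re-scope for later
        if points + story_points > velocity:
            continue  # does not fit within this sprint's time box
        committed.append(title)
        points += story_points
    return committed, points

backlog = [
    ("login page", 5),
    ("reporting engine", None),  # too vague to estimate -> rejected
    ("password reset", 3),
    ("audit log", 8),
    ("search filter", 5),
]
committed, points = plan_sprint(backlog, velocity=16)
print(committed, points)
```

The outcome is a committed scope with a known total estimate, which is what makes the stories 'ready ready' for the sprint.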
The team then performs the required work and has a known 'done' product increment at the end of the sprint. If it has been fully tested, user accepted, and no work remains to be completed, the product increment is now said to be in a 'done done' state.
But work further in the future is still uncertain to a much greater extent than the work for the current sprint. This is an advantage, because it allows the team and product owner to be agile in what they develop and deliver next (business agility).
So what I am proposing is to replace the cone of uncertainty with a Bubble of uncertainty and the related Estimation Bubble.
As we plan and then work sprint by sprint, we squeeze uncertainty out of the bubble iteratively over time. We do not try to squeeze uncertainty out at the start, which is usually not possible anyway. We avoid estimating what is uncertain in the future, which is wasted effort. But we do estimate what we will do soon, because that is what we are more certain about delivering incrementally.
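One common way to see the bubble shrink sprint by sprint is velocity-based forecasting. The sketch below uses invented numbers: after each sprint it forecasts the remaining sprints from the fastest and slowest velocity observed so far, and the gap between the two forecasts is the uncertainty still in the bubble.

```python
def remaining_forecast(total_points, sprint_velocities):
    """Illustrative sketch: after each completed sprint, forecast how
    many sprints of work remain under the best and worst velocity
    observed so far. The forecast range narrows as sprints complete
    and uncertainty is squeezed out.
    """
    forecasts, done = [], 0
    for i in range(1, len(sprint_velocities) + 1):
        done += sprint_velocities[i - 1]
        remaining = total_points - done
        seen = sprint_velocities[:i]
        best = remaining / max(seen)   # fastest observed pace holds
        worst = remaining / min(seen)  # slowest observed pace holds
        forecasts.append((best, worst))
    return forecasts

velocities = [10, 14, 12, 16]  # story points completed per sprint (invented)
for i, (best, worst) in enumerate(remaining_forecast(100, velocities), 1):
    print(f"after sprint {i}: {best:.1f} to {worst:.1f} sprints remain")
```

Each completed sprint both reduces the remaining work and tightens the forecast range, which is exactly the iterative squeezing of the bubble described above.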
As far as I have been able to determine, no-one has previously described this paradigm for uncertainty and its effect on estimation. I hope it is useful for you.
© Han van Loon