Probabilistic forecasting is an approach that helps teams produce dependable delivery predictions based on their past performance data. Many teams, however, still question whether they should use this method. Today, we’ll dispel four of the most widespread myths about probabilistic forecasting, which all too often prevent teams from embracing the advantages of the most reliable approach to making delivery commitments.
4 Myths of Probabilistic Forecasting
Let’s separate the misconceptions from the facts.
Myth #1: We Need Large Amounts of Data to Perform Probabilistic Forecasting
The fact that probabilistic forecasts are based on your past performance data doesn’t mean that you need a ton of data to come up with reliable delivery predictions. Whether you have been collecting data since the day you created your board or you are just getting started with a new team is beside the point.
The main prerequisite for producing reliable forecasts is maintaining a stable system. If your delivery workflow is optimized for predictability, you only need 20 to 30 completed items to come up with accurate results. It’s not about quantity. It’s about taking control of your management practices and ensuring you deliver results in a consistent manner.
If your system doesn’t produce the results you are hoping for and you’d like to explore the proven roadmap to optimize your delivery systems for predictability, I’d be thrilled to welcome you to our Sustainable Predictability program.
Myth #2: Probabilistic Forecasting Only Works with Items of the Same Size
Sizing stories into even pieces is a widespread practice that is often considered a prerequisite for making reliable predictions about the future. This is one of the biggest myths of probabilistic forecasting.
The concept of artificially splitting your work items into even pieces to be able to produce an accurate delivery forecast is not valid. In fact, resizing your stories is not only completely irrelevant to forecasting, but it can also have a negative effect on the goals you’re trying to achieve.
The main prerequisite for making accurate delivery forecasts lies in maintaining a thin-tailed distribution of your delivery times. Stable systems produce thin-tailed distributions.
In a stable system, you will probably have items of different sizes, and how fast they are released will depend only on their priority. It’s the urgency of the items that matters the most. If your items have the same priority, they have to be processed in a FIFO manner.
Even if you have to split your work items into smaller pieces, you should always strive to come up with potentially releasable increments that still bring customer value. Once again, that will not have any influence on the accuracy of your probabilistic forecast.
Your forecasts are based on work items of different sizes. There is no need to estimate or compare your items – the effort it takes to complete your work does not equate to delivery time. 60% to 99% of your delivery time is waiting time, and that is something you cannot estimate. That’s what makes the probabilistic forecasting approach far more reliable than deterministic estimating: it takes into account all the variability in your system.
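If you’d like a rough sanity check on whether your own data looks thin-tailed, the sketch below compares a high percentile of your cycle times against the median. The sample cycle times and the ratio heuristic are purely illustrative assumptions, not a rule prescribed by Nave:

```python
# A minimal sketch of a tail-weight check on cycle time data.
# The cycle times and the ratio heuristic below are made-up examples.

def percentile(values, p):
    """Return the p-th percentile using linear interpolation."""
    ordered = sorted(values)
    k = (len(ordered) - 1) * p / 100
    lower = int(k)
    upper = min(lower + 1, len(ordered) - 1)
    return ordered[lower] + (ordered[upper] - ordered[lower]) * (k - lower)

# Cycle times in days for 20 recently completed work items (sample data).
cycle_times = [2, 3, 1, 4, 2, 6, 3, 5, 2, 8, 3, 4, 2, 7, 3, 5, 4, 2, 9, 3]

median = percentile(cycle_times, 50)
p98 = percentile(cycle_times, 98)

# Heuristic: if the 98th percentile is only a few multiples of the median,
# the distribution is reasonably thin-tailed. A very large ratio points to
# extreme outliers that will make any forecast unreliable.
print(f"Median: {median:.1f} days, 98th percentile: {p98:.1f} days, "
      f"ratio: {p98 / median:.1f}")
```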
Myth #3: Probabilistic Predictions Are Difficult to Interpret and Use
No, they aren’t, and you don’t need to have a Master’s degree in math to understand them. There are tools at your disposal that will help you produce accurate probabilistic forecasts. One such tool is the Monte Carlo simulation.
The simulation uses a large number of random trials based on your past throughput data to predict the throughput for a future time frame. You define the start date and the number of tasks, and the simulation provides a range of delivery dates along with the probability that comes with each date. For any date in the future, it uses the throughput of a random day in the past to simulate how many work items are likely to get done.
For example, let’s say that on Sep 10th you had a throughput of 6 tasks. The simulation takes this number and assumes that this is how many tasks will be completed on Apr 14th. To project the probable throughput of Apr 15th, it takes the throughput of another random day in the past, and so on.
The simulation is repeated tens of thousands of times before the results are presented in the form of a probability distribution with percentiles increasing from left to right. In this example, we set a backlog of 78 tasks and we want to start working on it on Apr 14th. The simulation tells us that there is an 85% probability that we can finish all the backlog items by Jul 21st. The further out in time you go, the greater the certainty of completing all the tasks.
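To make the mechanics concrete, here is a minimal sketch of such a “when will it be done” simulation in Python. The daily throughput history, the dates and the number of trials are illustrative assumptions; the exact model behind Nave’s charts may differ in its details:

```python
import random
from datetime import date, timedelta

# Daily throughput history: how many items were finished on each past day
# (sample numbers for illustration).
past_daily_throughput = [0, 2, 1, 3, 0, 6, 2, 1, 4, 0, 2, 3, 1, 5, 2]

def simulate_finish_dates(backlog_size, start_date, trials=10_000):
    """Monte Carlo simulation: for each trial, walk forward one day at a
    time, drawing a random past day's throughput, until the backlog is done.
    Returns all simulated finish dates, sorted from earliest to latest."""
    finish_dates = []
    for _ in range(trials):
        remaining = backlog_size
        current_day = start_date
        while remaining > 0:
            remaining -= random.choice(past_daily_throughput)
            current_day += timedelta(days=1)
        finish_dates.append(current_day)
    return sorted(finish_dates)

results = simulate_finish_dates(backlog_size=78, start_date=date(2025, 4, 14))

# The 85th percentile: 85% of the simulated trials finished on or before it.
p85_date = results[int(len(results) * 0.85)]
print(f"85% chance of finishing all 78 items by {p85_date}")
```

Reading the sorted results at different percentiles gives you the full range of dates and the probability attached to each one, which is exactly the distribution described above.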
Myth #4: Probabilistic Forecasting Doesn’t Account For Story Splitting
Let’s take a step back. Just because you have 200 stories in your backlog doesn’t mean that those exact 200 stories will be delivered by the date you’ve committed to. That’s not what the Monte Carlo simulation is telling you. What the simulation is telling you is: “If you have a budget for 200 items, they will be done by date X, and there is Y% certainty that you’ll achieve that goal”.
You’ll probably split your stories, some of them will drop off, more will be added, you’ll discover defects and additional work will come in along the way. You can take any 200 items you want – the forecast the Monte Carlo simulation produced will still be valid.
Story splitting is about recognizing that something is more complex than we initially assumed. If you split your initial story into 3 new stories, that doesn’t necessarily mean that you’ll work on all 3 of them. When it comes to story splitting, the most important part is to propose the most feasible option that will still solve your customer’s problem.
Furthermore, the fact that you’ve come up with an initial delivery commitment doesn’t mean your job is done. The release planning phase is just the beginning. Don’t fall into the trap of assuming that everything will go as planned.
Once you begin your work and start delivering results, you should continuously reevaluate your forecast and adjust your course accordingly. You need to look at your plan from a continuous perspective and decide over time how best to fill the slots for those 200 items.
Moreover, your delivery rate will vary based on the knowledge discovery process, changes in your team setup and the stability of your workflow. All these factors will affect the baseline you used to make your initial prediction. That’s why continuous forecasting is essential – to be able to deliver on time, you need to keep your finger on the pulse of the project.
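Here is a minimal sketch of what that continuous reforecasting could look like, reusing the same Monte Carlo idea but feeding it only recent throughput and the items that are still left. All the numbers, the rolling window and the 85% confidence level are illustrative assumptions:

```python
import random
from datetime import date, timedelta

def reforecast(remaining_items, today, recent_throughput, trials=10_000):
    """Re-run the Monte Carlo forecast with up-to-date inputs and return
    the 85th percentile finish date."""
    finishes = []
    for _ in range(trials):
        left, day = remaining_items, today
        while left > 0:
            left -= random.choice(recent_throughput)
            day += timedelta(days=1)
        finishes.append(day)
    finishes.sort()
    return finishes[int(trials * 0.85)]

# Use only the last few weeks of throughput, so the forecast reflects the
# team and workflow as they are today (sample numbers).
recent_throughput = [1, 0, 3, 2, 4, 0, 2, 1, 3, 2, 0, 5, 2, 1, 3]
items_left = 140  # e.g. 200 planned items minus 60 already delivered

new_date = reforecast(items_left, date.today(), recent_throughput)
print(f"Updated 85% confidence finish date: {new_date}")
```

Re-running a forecast like this every week or every sprint makes drifting delivery dates visible early, while there is still time to act on them.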
The Advantages of Probabilistic Forecasting
Here at Nave, we have more than 10,000 customers who use probabilistic forecasting to either plan their next releases or predict the delivery date of a project with a fixed scope. This method brings more value than the traditional estimating approach in so many different ways.
Probabilistic forecasting has proven to be way more accurate than deterministic estimation, as it uses models based on your own past performance data. It doesn’t rely on intuition or gut feeling.
It’s also far less time- and effort-consuming. We’ve seen projects spanning more than a year and costing in excess of $10,000,000 where it took just a couple of people less than a day of analyzing the data to build a reliable, high-quality forecast.
Probably the biggest advantage of this approach is the fact that it’s not deterministic. A probabilistic forecast clearly defines the risk associated with each outcome as a percentage. When it comes to planning, the main focus shifts from “when will it be done?” to “how much risk are you willing to take?”.
Meet the Author

Sonya Siderova is a passionate product manager and a driving force behind Nave, a Kanban analytics suite that helps teams improve their delivery speed through data-driven decision making. When she's not catering to her two little ones, you might find Sonya absorbed in a good heavyweight boxing match or behind a screen crafting a new blog post.