There are some questions that I get asked pretty much every single day. How much data do we need to make reliable delivery predictions? What if we don’t currently have any historical data? What if we have plenty of data, but we don’t trust it? How do we choose the rolling window of data to base our delivery predictions on?

If I had a dollar for every time someone asked me these questions, then I could afford to take my entire family on a month-long, all-inclusive, five-star tropical vacation. Did I mention that I have a large family?

But since these questions won’t pay my bills (or fund my dream vacation), I will focus my attention on bringing some more clarity to the topic and helping you identify the dataset to use when making probabilistic forecasts… while still dreaming of margaritas on the beach. And I adore margaritas!

How Much Data Do You Need to Provide Accurate Delivery Forecasts?

The fact that probabilistic forecasts are based on your past performance doesn’t mean you need a ton of data to come up with reliable delivery predictions. Whether you have been collecting data since the day you created your board or you are just getting started with a new team is beside the point.

The main prerequisite of producing reliable forecasts is to maintain a stable delivery system. Stable systems are delivery systems that are optimized for predictability. If your delivery system is optimized for predictability, then you won’t actually need any more than 20 or 30 completed items to come up with accurate results. It’s not about quantity – it’s all about taking control of your management practices and ensuring you deliver results in a consistent manner.

The accuracy of your forecasts strongly depends on the stability of your system. In fact, if you don’t maintain a stable system, nothing will work. There will be no approach that can give you a reliable delivery prediction.

With that in mind (assuming you have a stable system in place), to be able to make reliable forecasts, you will need to use relevant data. So, how do you distinguish relevant from irrelevant data?

If your delivery system doesn’t produce the results you are hoping for and you’d like to explore the proven roadmap to optimize your workflows for predictability, I’d be thrilled to welcome you to our Sustainable Predictability program!

How to Distinguish Relevant From Irrelevant Data?

If you’ve recently changed your workflow, introduced new process policies, or had team members join or leave, then you have to observe how these changes affect the shape of your cycle time frequency distribution, so that you can separate relevant from irrelevant data.

This is where the Cycle Time Histogram, and more precisely the Cycle Time Average Trends widget, comes in handy. Using this tool, you can spot changes to your system design or working practices by tracking how the mean cycle time has developed over time.

Relevant forecasting data - Cycle Time Averages

If there are discontinuities, usually associated with system design changes (hopefully made with the intention of improving the predictability of the delivery system), then you should only use the data from the point after which the mean cycle time has remained consistent.

In other words, we want a cycle time histogram that reflects the current conditions and capability of the team, unpolluted by older data that is no longer relevant.

Analyzing the example above, we only want to use the data from the beginning of August onwards. At exactly that time, the team introduced a simple pull strategy that immensely improved the predictability of their delivery workflow.

The data prior to August 2021 is no longer relevant. It doesn’t represent the current system design of this team and, as such, it shouldn’t be used as a basis for your delivery predictions.
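The trimming step above can be sketched in a few lines of Python. The item dates, cycle times, and field layout here are made up purely for illustration; in practice, the data would come from your own board:

```python
from datetime import date

# Hypothetical completed items: (completion date, cycle time in days)
completed_items = [
    (date(2021, 6, 14), 19), (date(2021, 7, 2), 23),
    (date(2021, 8, 5), 6), (date(2021, 9, 1), 7),
    (date(2021, 10, 12), 8), (date(2021, 11, 3), 6),
]

# The team changed its pull strategy at the start of August 2021,
# so only items completed after that point reflect the current system.
cutoff = date(2021, 8, 1)
relevant = [ct for (done, ct) in completed_items if done >= cutoff]

print(relevant)  # cycle times of the items completed from August onwards
```

The point is simply that the cutoff is chosen by looking at the mean-trend chart, not by picking an arbitrary window size.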

What if You Don’t Have Data That Reflects Your Future Conditions?

Let’s say that you need to forecast the delivery date of a project that takes place in December, when everyone is taking some well-deserved time off for the holidays, but you don’t have data that accounts for that situation. It probably wouldn’t be relevant to go back and use the data from December of the previous year – chances are, your current setup has changed significantly since then.

If that’s the case, the best you can do is to scale down your performance data accordingly. Let’s say that you know everyone will be off in the last week of the month. You can then assume that reducing your delivery rate for the month by 30% would be reasonable.

This is where the scale factor in Monte Carlo comes into play.

Relevant forecasting data - Monte Carlo - Scale Factor

The scale factor is meant for high-uncertainty scenarios where you expect drastic changes in the throughput of the system but don’t have past performance data to account for them (public holidays, someone about to leave or join the team, etc.).

A scale factor of 0.5 means you expect your throughput to be cut in half; 2.0 means you expect it to double. In the example above, we expect the throughput to decrease by 30%, so we set the scale factor to 0.7. The simulation now tells us that if we have a scope of 10 tasks and we initiate our project on Dec 1st, there is an 85% chance of delivering by Jan 7th.
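A minimal Monte Carlo sketch of this idea follows. The historical daily throughput samples are invented for illustration, and real tools resample from your actual data, but the mechanics of applying a scale factor are the same:

```python
import random

random.seed(42)

# Hypothetical historical daily throughput (items finished per day)
daily_throughput = [0, 1, 1, 2, 0, 1, 3, 2, 1, 0, 2, 1]

def days_to_finish(scope, scale=1.0, trials=10_000):
    """Simulate how many days it takes to finish `scope` items,
    sampling daily throughput from history and scaling it."""
    results = []
    for _ in range(trials):
        done, days = 0.0, 0
        while done < scope:
            done += random.choice(daily_throughput) * scale
            days += 1
        results.append(days)
    results.sort()
    # 85th percentile: the duration we can commit to with 85% confidence
    return results[int(0.85 * trials)]

# Expecting a 30% throughput drop over the holidays
print(days_to_finish(scope=10, scale=0.7))
```

Running it with `scale=0.7` versus `scale=1.0` shows how the committed date pushes out when you expect reduced capacity.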

A word of caution here! Use the scale factor as a last resort and only if you don’t have the data to work out your scenario. If you do and it represents your current setup, by all means, use that data instead.

I know I’ve said this many times, but I just can’t emphasize it enough. If you are not maintaining a stable delivery system and taking control of your management practices, even data going all the way back to the creation of your board won’t enable you to make reliable delivery predictions.

If your system is unstable, the gaps between your percentiles will be huge. If you say that there is a 50% chance of delivering your project by December 10th and an 85% probability of delivering by March 30th, no one will buy that forecast.
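One way to see that symptom in your own numbers is to compare percentiles of the simulated outcomes. The durations below are fabricated to mimic an unstable system with a long tail:

```python
# Hypothetical simulated project durations (in days) from a Monte Carlo run
durations = sorted([20, 25, 30, 35, 40, 60, 90, 120, 160, 200])

def percentile(sorted_values, p):
    """Nearest-rank percentile of an already-sorted list."""
    idx = min(len(sorted_values) - 1, int(p * len(sorted_values)))
    return sorted_values[idx]

p50 = percentile(durations, 0.50)
p85 = percentile(durations, 0.85)

# A wide gap between the 50th and 85th percentile is a red flag:
# the system is too unstable for the forecast to be credible.
print(p50, p85, "spread:", p85 - p50)
```

Here the 85th percentile is more than a hundred days beyond the median, which is exactly the kind of forecast no stakeholder will buy.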

So, here is my best advice. Don’t worry too much about the amount of data you have to collect to produce a probabilistic forecast. Instead, focus on stabilizing your delivery system. The more stable your system is, the more predictable it becomes.

Think about how the workflow that generates that data actually performs. The fact is, if your delivery system is unstable, your predictions will be unreliable regardless of the method you use. And even having more data at your disposal won’t enable you to come up with an accurate delivery commitment.

Last but definitely not least (in fact I’d argue, probably most important), don’t forget to reevaluate your forecast regularly, using the data that reflects your current conditions.

Continuous forecasting is essential to make sure you are on track and you are still able to hit your targets. Reevaluating your forecast on a regular basis will enable you to adjust your course accordingly and build your reputation as a reliable service provider!
