
Measuring and Managing Forecast Accuracy for Continuous Improvement

This is Part 4 of a five-part series on Forecasting by Dan Bursik. To read Part 3, click here.

In my previous blogs on effective forecasting, I covered several basic concepts that carry forward into this discussion. Whether you think of these as key concepts or catchy phrases, here is a quick summary:

You can’t get a good schedule out of a bad forecast.
If there is no right place for the hours to be positioned, then you can’t create a good schedule even with a good forecast.
If you haven’t gotten serious about your best practices, labor standards or labor modeling, you probably have bigger challenges to attend to before forecast accuracy optimization.
The better your data and standards combine to anticipate the work content of your associates' work plans, the more important accurate forecasting becomes.
Your ultimate objective is to put the right people in the right place at the right time doing the right things.

So if you are at a point where forecast accuracy matters in the correct placement of hours, what is the best way to evaluate the accuracy of your forecasts?

Ultimately, there are two ways. The first is to measure the accuracy of the forecast metric itself against the actual value you experience: Produce customers forecast for Tuesday versus Produce customers actually served on Tuesday, or Deli service counter customers forecast from 1:00 to 1:15 Saturday versus the actual number served in that interval.

With this approach you may need several tests to assess whether a forecast meets your needs, because the time interval you evaluate for accuracy also matters. Take store sales as a simple example metric. If you test weekly accuracy, let's hope you are within 2 percent of the actual. Is that good enough?

Well, it depends.

Since you don't place labor on the basis of weekly sales, getting that close at the weekly level doesn't tell the whole story. If every day were within 2 percent, that would look much better. But suppose one day was high by a huge margin and another day was low by an equally large miss. In total they would seem to cancel one another out. But if you allocated labor by the daily work content, would that be good enough? I think not. Tell the customers who were poorly served Wednesday that you spent their service labor Monday, and it won't give anyone cause to applaud. Two wrongs don't make a right just because you grade them at a higher level.

The point is that if you are evaluating your forecast accuracy at the metric level, you need to be careful that the level you evaluate is appropriate. And just as Tuesday's errors don't cancel out Wednesday's, Thursday morning errors don't cancel out Thursday evening misses either. Make your evaluation at the weekly, daily, and interval levels for the best insight possible.
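To make the cancellation concrete, here is a minimal Python sketch using made-up daily sales for one week (the numbers are illustrative, not from any real store), grading the same forecast at the weekly and the daily level:

```python
# Hypothetical daily sales for one week: two large daily misses
# that happen to offset one another across the week.
forecast = [100, 100, 100, 100, 100, 100, 100]  # forecast sales by day
actual   = [100, 100,  80, 120, 100, 100, 100]  # actual sales by day

# Weekly view: the big daily misses cancel out entirely.
weekly_error_pct = abs(sum(forecast) - sum(actual)) / sum(actual) * 100
print(f"Weekly error: {weekly_error_pct:.1f}%")  # 0.0% -- looks perfect

# Daily view: mean absolute percentage error exposes the misses.
daily_mape = sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual) * 100
print(f"Daily MAPE:   {daily_mape:.1f}%")        # ~6.0% -- the real picture
```

The same logic extends one level down: grade each 15-minute interval within a day and a "clean" daily number can hide morning and evening misses in exactly the same way.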

So if grading the accuracy of the metric is the first approach, let’s now look at the second. This approach doesn’t look at the metric.

Instead it looks at the hours you calculate from the metric.

Now, again, this approach is only meaningful if you’ve really determined that there are correct places for those hours to be to do the work and to satisfy all service expectations.

If you can say that, then evaluating whether the forecast hours align with the hours earned from the actual metric volumes experienced is the true test of forecast accuracy.

Again, if your goal through labor modeling is to put the right people in the right place at the right time doing the right things, then what matters in forecast accuracy is the degree to which hours get misplaced – put where they are not needed, or absent from where they are needed. The sum of the absolute values of those differences is what your continuous improvement efforts are geared to eliminate.
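Here is a minimal sketch of that measure, assuming hypothetical per-interval values for planned hours and the hours earned from actual volumes:

```python
# Hypothetical hours by interval: planned versus earned from actuals.
forecast_hours = [4.0, 6.0, 8.0, 6.0]  # hours placed per interval
earned_hours   = [5.0, 5.0, 7.0, 7.0]  # hours earned from actual volumes

# The net difference can mislead: over- and under-placements cancel.
net_delta = sum(f - e for f, e in zip(forecast_hours, earned_hours))

# Misplaced hours: every hour put where it wasn't needed, plus every
# hour absent from where it was needed.
misplaced = sum(abs(f - e) for f, e in zip(forecast_hours, earned_hours))

print(f"Net delta:       {net_delta:+.1f} hours")  # +0.0 -- looks fine
print(f"Misplaced hours: {misplaced:.1f} hours")   # 4.0  -- the real gap
```

The net delta of zero is the same cancellation trap described earlier; the misplaced-hours figure is the one your improvement efforts should drive toward zero.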

You can argue that minor variations, especially in task or production labor, don't cause much pain so long as your workforce uses the time and gets all the work done. Of course, that does not hold true for hours associated with direct customer service.

Is it possible to quantify the delta between your planned hours and your earned hours? It should be, though it is easier in some systems than in others. Your system should capture multiple iterations of your planning process, from the original system forecast and scheduling requirements through the versions shaped by forecasting or scheduling edits from your central labor team or store personnel. Unfortunately, if your system does not capture that original version, you may discover that you have plenty of error but not know whether it came from the system algorithms or from edits that improved or degraded the original system plan.
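If your system does retain each version, the attribution can start as simply as grading every iteration against the actual. The sketch below is hedged: the version names and values are hypothetical, not drawn from any particular system:

```python
# Hypothetical planning iterations for one period, graded against
# the actual metric volume to see where error was added or removed.
actual = 520.0  # actual metric volume for the period

versions = {
    "system_forecast": 500.0,       # untouched algorithm output
    "after_central_edits": 530.0,   # central labor team adjustments
    "final_store_plan": 560.0,      # store-level edits
}

prev_error = None
for name, value in versions.items():
    error = abs(value - actual)
    note = ""
    if prev_error is not None:
        note = "improved" if error < prev_error else "degraded"
    print(f"{name:22s} error = {error:6.1f}  {note}")
    prev_error = error
```

Run against real version history, a trace like this shows at a glance whether the algorithm or the human edits carried the error into the final plan.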

If you can clearly quantify the deltas, you are in a great position to assess cause and effect: to consider alternate algorithms, or to ask whether special events or tags ought to have been present as part of your forecast. If you withhold important information from the forecasting process, no system can anticipate the impact of the event. Like anything in process improvement, it's an opportunity to trace the root cause issues and diminish their impact in future weeks' forecasts. That, to me, is continuous improvement in forecast accuracy.

So, to recap, forecast accuracy is about understanding the gaps between what you forecast and what actually happens. You can evaluate forecast accuracy either at the metric level or based on the hours generated from your metric forecast. If you evaluate the metric itself, be sure the time granularity of your analysis gets to the levels that matter: start with the weekly, but be prepared to go to the daily and interval levels if that is where the metric forecast would affect the placement of hours. You can also evaluate the difference between forecast and earned hours; arguably, this is what matters most. That analysis, though, may lead you back to the metrics to find the root cause issues in the data set you select for forecasting, in the operations used in your forecasting algorithms, or in identifying the historical special events or tags that your system needs to forecast more accurately.

I've got one more blog to offer on this topic, covering best practices associated with accurate forecasting and some lessons I've learned over the years. Let me share one that should already be clear: managing forecast accuracy for continuous improvement is not an event, but a journey. It's a key part of putting the right people in the right place at the right time doing the right things to deliver your brand and satisfy your customers.
