Can Improving Forecast Accuracy Address Our Demand Planning Woes?

If “the forecast is always wrong,” is improving forecast accuracy even the solution to our demand planning woes? In times that continue to defy our ability to predict them, the words of famous statistician George Box have never been more apt: “All models are wrong, but some are useful.” So what can we do to make models more useful? Artificial intelligence and machine learning (AI/ML) can improve forecast accuracy, but the bigger problem is not the accuracy of the models themselves; it is the failure to set accurate expectations around them. For supply chains to get more use from their models, we need to “trust the box,” recognize that models are not the holy grail, and remember that a forecast is an input into making better decisions, not an end in and of itself.

Trusting the box

Why bother with forecasting if the model is always wrong? Because moving from guessing from the gut to statistical forecasting reduces bias and increases objectivity, drawing on years of research by statisticians codified into forecasting software. In spite of investment in effective tools, I regularly hear supply chain leaders express frustration that their teams don’t “trust the box.” Leaders want planners to use the forecasts the software “box” generates and focus their time on the exceptions to increase the efficiency of the process.

So why don’t planners “trust the box?” Professors Robert Fildes and Paul Goodwin have spent a significant portion of their eminent careers studying the adoption of forecasting tools, which they describe in a recent case study. In spite of ample evidence that overriding the forecast is fraught with bias and rarely adds value, while always taking time, humans persist in intervening, valuing their own judgment over the box. Fildes and Goodwin document many reasons why, from observable human biases we readily discount, to incentive misalignment, to “algorithm aversion.”

Algorithm aversion is the term coined by a trio of researchers from the University of Chicago and Wharton, whose studies observe that humans are more forgiving of what they perceive to be mistakes when made by humans than when made by an algorithm. If people think the model is wrong, they lose confidence in “the box” but not in the human. Fears of displacement by AI/ML compound this effect, especially given the black box nature of these models.

The increased accuracy AI/ML models can produce typically comes at the cost of interpretability. And as Fildes and Goodwin report, people will persist in an inefficient process if organizational factors support that decision by rewarding their vested interests. So what can be done to get planners to trust the box? Growing interest in AI/ML has stimulated a focus on interpretability, so today tools exist to make these fancier models more explainable. Select software that incorporates these features. Invest in a sales and operations planning (S&OP) process that balances the box with human judgment to build consensus and stakeholder buy-in. And document the actual impact of judgment using a forecast value-added (FVA) process in order to make clear when it adds value and when it does not.
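To make the FVA idea concrete, here is a minimal sketch. All demand numbers and forecast series below are invented for illustration, and MAPE stands in for whatever accuracy metric your process actually uses; the principle is simply to measure each step of the process against the step before it:

```python
# Hypothetical forecast value-added (FVA) calculation.
# Every number here is illustrative, not from the article.

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Illustrative monthly demand and the three forecasts an FVA report compares.
actuals     = [100, 110, 95, 120, 105, 115]
naive       = [105, 100, 110, 95, 120, 105]   # prior period's actual: the "no effort" baseline
statistical = [102, 108, 99, 112, 108, 112]   # what the software "box" produced
overridden  = [110, 115, 90, 125, 100, 120]   # after planner judgment was applied

naive_err, stat_err, judged_err = (mape(actuals, f) for f in (naive, statistical, overridden))

# FVA = error reduction relative to the previous process step.
fva_stat     = naive_err - stat_err   # value added by the statistical model
fva_judgment = stat_err - judged_err  # value added (or destroyed) by overrides

print(f"Naive MAPE: {naive_err:.1f}%  Statistical: {stat_err:.1f}%  Overridden: {judged_err:.1f}%")
print(f"FVA of the box: {fva_stat:+.1f} pts  FVA of judgment: {fva_judgment:+.1f} pts")
```

In this invented example the statistical model adds value over the naive baseline while the overrides subtract it, which is exactly the kind of evidence an FVA report surfaces to show when judgment helps and when it hurts.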

Set your sights on the box, not the holy grail

The opposite of algorithm aversion might be what I call “algorithm aspiration,” which is the expectation that AI/ML is the holy grail, the only end to the quest for increased forecast accuracy. When executives think the problem is the poor demand planner whose “forecast is always wrong,” they are prone to fall into the trap of thinking that the solution is fancier math in the form of AI/ML models. Sometimes I get asked if a company can leapfrog over statistical forecasting and just start with AI/ML. To answer that question, let me justify why AI/ML generates so much interest before explaining the limits of leapfrogging.

AI/ML has tremendous business potential, which is one reason why Deloitte and MHI found that the adoption rate of AI in supply chains jumped 43% from 2020 to 2021, with an additional 60% intending to adopt it in the next few years. AI/ML can be applied to supply problems, but the biggest interest is in demand. It can boost standard forecasting but also incorporate new signals beyond sales history, expanding the range of inputs informing future demand in a technique called demand sensing. Gaining a better handle on demand is compelling, especially when recent sales history is often no longer a good predictor. So why might it be a bad idea to skip statistical forecasting and jump right to AI/ML?

First, if a company isn’t yet using statistical forecasting, its overall demand planning process likely needs to mature and its data quality needs to improve. The most relevant AI/ML methods for forecasting are machine learning models, which need large volumes of clean(ish) data. These models work by “learning” from past patterns, which requires sufficient data to “train” the math to find the pattern.

Second, as leading forecasting expert Spyros Makridakis and others argue, ML models do not always produce better results, especially once the cost of implementing them is considered. The best path forward is to carefully evaluate the use case for applying AI/ML and weigh benefits against costs. A well-engineered solution applied to the right segment can absolutely be worth the investment, but this is a decision that must be made in the context of the business, not on a single forecast accuracy metric.

A forecast is an input, not a decision

A supply chain’s job is to get the right stuff to the right place at the right time while satisfying customers and maximizing profits. Demand planning is mission-critical to fulfilling that job, but it is one input into one link in the chain. Even a perfect forecast may not matter if the plant lacks capacity to fulfill it or supplies are short to produce it or distribution is unable to deliver it. The complexity of a supply chain is that all these questions are connected.

Improving forecast accuracy is critical, but so is increasing the agility with which the company can respond to inevitable disruptions, transparency to understand their full impact, and collaboration to enable the best decisions for the entire business, not just one silo. As one of our customers said to me this week, in his business a useful model has forecast accuracy of 70-75%, and beyond that he sees diminishing returns. Since the number can’t be perfect, his priority is increasing their ability to be nimble.

Fixation on the forecast can distract from the fact that not all forecasts are equally important. A planner who has thousands of SKUs to forecast must find ways to focus her efforts where they matter most. Part of the value of AI/ML in demand planning is to automate the obvious so the planner can allocate her time to the products with higher variability, volume, and margin. But ultimately her forecast is an input into a series of decisions the business must make – not an end in itself. Her time would also be well spent investing in relationships, which will allow her to build trust, explain her process, and learn more about the broader context.
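One simple way a planner might triage a large SKU portfolio is to score each item on volume and variability, automating the stable items so attention goes to the erratic, high-stakes ones. This is only a sketch under assumed conventions: the SKU names, demand histories, and cutoffs are invented, and the coefficient of variation stands in as a basic predictability proxy:

```python
# Hypothetical SKU triage: route stable items to the "box" and flag
# erratic, high-volume items for planner review. All data is invented.
from statistics import mean, pstdev

history = {
    "SKU-A": [100, 102, 98, 101, 99, 100],   # high volume, stable
    "SKU-B": [10, 50, 5, 80, 20, 60],        # low volume, erratic
    "SKU-C": [300, 80, 260, 60, 350, 150],   # high volume, erratic
}

def segment(series, cv_cutoff=0.5, volume_cutoff=100):
    """Classify a demand series: automate stable items, review erratic ones."""
    avg = mean(series)
    cv = pstdev(series) / avg  # coefficient of variation: higher = less predictable
    if cv < cv_cutoff:
        return "automate: trust the box"
    return "review: planner focus" if avg >= volume_cutoff else "review: low priority"

for sku, series in history.items():
    print(sku, "->", segment(series))
```

A real segmentation would also weigh margin and service-level targets, but even this crude split shows the principle: the box handles SKU-A unattended, while the planner’s judgment is reserved for items like SKU-C, where variability and volume are both high.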

Conclusion

I’m a big believer in the power of AI/ML but also in setting appropriate expectations around it. Investing in newer approaches to forecast accuracy can improve demand planning, but don’t limit investment to fancy math. Balance belief in better models with building a better process, relationships, and the ability to respond quickly, collaboratively, and clearly. The combination will be far more powerful.

Polly Mitchell-Guthrie is the VP of Industry Outreach and Thought Leadership at Kinaxis, the leader in empowering people to make confident supply chain decisions. Previously she served in roles as director of Analytical Consulting Services at the University of North Carolina Health Care System, senior manager of the Advanced Analytics Customer Liaison Group in SAS’ Research and Development Division, and Director of the SAS Global Academic Program.

Mitchell-Guthrie has an MBA from the Kenan-Flagler Business School of the University of North Carolina at Chapel Hill, where she also received her BA in political science as a Morehead Scholar. She has been active in many roles within INFORMS (the Institute for Operations Research and Management Sciences), including serving as the chair and vice chair of the Analytics Certification Board and secretary of the Analytics Society.
