While most attention for Budget 2017 is on policies, arguably the bigger drivers of budget outcomes are the forecasts that are largely beyond the government’s control and out of the public eye – such as the iron ore price and the future of the Australian residential property market.
Many significant government policy, investment and financial decisions have a forecast about the future at their core – so much so that financial forecasts shape the world we live in, whether they are made by economists, investors or corporate decision makers.
The thing is, most forecasters have no idea how good their forecasts are in the short, medium or long term, according to research by Tetlock and Gardner.
Forecasting mistakes are inevitable given the complexity and inherent unpredictability of many aspects of government and investment decision-making.
Of course, to be wrong when attempting a difficult task is no crime. However, the empirical evidence shows something worse than mere error: forecasts tend to be systematically biased – largely beyond the forecasters’ awareness – and the impact of these biases is compounded by our overconfidence in their validity.
The good news is that forecasting can be improved by applying targeted strategies.
In the Macquarie University Finance Professionals’ Series this month I outline some of the warning signs and behavioural biases that can help identify systematic forecasting errors. These include complexity, optimism, lack of empirical rigour, overconfidence and rational behaviour and incentives.
First, not all forecasts are prone to error. Demographic forecasts, for example, can typically be made with great accuracy well into the future. We have a fair idea of how many 15-year-olds will turn 16, which helps to determine secondary education policies. So where are errors most likely? We should go looking for forecast errors within systems with inherent complexities, non-linearities, ambiguities, sensitivities and feedback loops.
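The reliability of demographic forecasts can be sketched in a few lines: next year’s 16-year-old cohort is essentially this year’s 15-year-old cohort, adjusted for survival and migration. All figures below are illustrative, not real population data.

```python
# Minimal sketch of cohort-based demographic projection: next year's
# cohort of age N+1 is this year's cohort of age N, adjusted by a
# survival rate and net migration. All numbers are invented.

def project_cohort(count_now, survival_rate=0.9995, net_migration=0):
    """Project a single age cohort one year forward."""
    return count_now * survival_rate + net_migration

fifteen_year_olds = 300_000  # hypothetical cohort size
sixteen_next_year = project_cohort(fifteen_year_olds, net_migration=2_000)
print(round(sixteen_next_year))  # close to the starting cohort: 301850
```

Because the adjustments are small and stable, the forecast error compounds slowly – which is precisely what makes demographic forecasts so much more dependable than economic ones.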
Can we forecast the outcome of the US presidential election or Brexit? Not reliably, at least in part because, as we saw with Trump, the outcome had some of these inherent complexities. It was sensitive to small swings in a handful of states whose Electoral College votes could magnify forecasting errors to a presidential scale.
Second, can investors forecast company earnings, or companies forecast the outcome of significant projects or acquisitions? Again, depending on the context, not reliably, with systematic deviations underpinned by a range of factors, such as being seduced by the narrative of the project or investment and under-weighting the importance of the “outside view” and of mean reversion to “base rates”.
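One common remedy is to blend the team’s “inside view” with the base rate observed across comparable past projects. The sketch below shows the idea; the weights and return figures are purely illustrative assumptions, not a recommended model.

```python
# Hedged sketch of anchoring a forecast to the "outside view": shrink
# the team's inside-view estimate toward the historical base rate for
# comparable projects. Weights and figures are illustrative only.

def blend_with_base_rate(inside_view, base_rate, weight_on_base=0.6):
    """Shrink an inside-view forecast toward the historical base rate."""
    return weight_on_base * base_rate + (1 - weight_on_base) * inside_view

inside_roi = 0.25  # team's optimistic projected return on the project
base_roi = 0.05    # average realised return across similar past projects
adjusted = blend_with_base_rate(inside_roi, base_roi)
print(round(adjusted, 2))  # 0.6*0.05 + 0.4*0.25 = 0.13
```

The point is not the particular weight, but that the base rate pulls the seductive project narrative back toward what comparable projects have actually delivered.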
Third, we should also expect to find forecasting errors where empirical rigour is lacking. The rigour that is needed is not the type evidenced by an extensive pack of supporting charts, statistics and detailed arguments. (Beware these on Budget night!) Unfortunately, this type of rigour, while ubiquitous among professional forecasters, is no antidote to error.
The type of rigour we need is one that systematically collects historical forecasts and analyses them against realised outcomes. Before accepting a forecast as having any basis more reliable than rolling a die or consulting the stars, we should assess the forecaster’s historical track record. It is in that record that the seeds of future forecasting success can be sown.
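This kind of track-record analysis is straightforward to sketch: line up past forecasts against realised outcomes and measure both systematic bias (mean error) and accuracy (mean absolute error). The numbers below are invented for illustration.

```python
# Minimal sketch of empirical rigour: compare a forecaster's historical
# predictions with realised outcomes. A persistently positive mean error
# signals optimism bias. All figures are invented for illustration.

forecasts = [3.25, 3.00, 2.75, 3.50, 3.00]  # e.g. growth forecasts (%)
outcomes  = [2.40, 3.10, 2.50, 2.60, 2.90]  # realised outcomes (%)

errors = [f - o for f, o in zip(forecasts, outcomes)]
mean_error = sum(errors) / len(errors)           # bias: positive => optimism
mean_abs_error = sum(abs(e) for e in errors) / len(errors)  # accuracy

print(f"mean error: {mean_error:+.2f}")          # +0.40 here
print(f"mean absolute error: {mean_abs_error:.2f}")
```

A forecaster whose mean error hovers stubbornly above zero is not unlucky but biased – and that bias, once measured, can be corrected for.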
Fourth, and perhaps ironically, we should anticipate error where forecasts are made with the most confidence. Whether we are aware of it or not, too often we use confidence as a proxy for competence.
And finally, sometimes it can be quite rational to make systematic errors when predicting the future. A CEO (or perhaps national Treasurer) with a bias to optimism is more likely to garner the confidence and support of their employees (or constituents). While this optimism might be unjustified by the evidence, through its motivational impact, it might just lead the company (or economy) to better outcomes. But we need to be careful, as these incentives don’t always align nicely.
Budget forecasts will of course never be completely accurate, but with the application of these techniques we can have more confidence that the newly minted policies announced in Federal Budgets are indeed on the right track.
Simon Russell is a behavioural finance consultant & speaker, author of “Applying Behavioural Finance in Australia”, and a Macquarie University Masters of Applied Finance alumnus. He will discuss applications of forecasting research at the MAFC Finance Professionals’ Series in Sydney on May 23.