Had a couple of experiences recently with client teams who are being measured against "benchmarks". These were popular in the eighties, but I had hoped they were dying out. Put bluntly, they are a nightmare. What is the point of benchmarks, and how dangerous is it to simply try to drive your performance against them?

For example: what is "benchmark forecast accuracy"? In any field this is a scarily meaningless concept – but in forecasting it is especially pernicious. Let’s illustrate with weather forecasting.

Some forecasters have it easier than others

In the UK, according to one source, weather forecast accuracy is 55%, whereas in Brisbane it is 75%. Both of these figures, however, are "perceived" accuracy, and tell us little about the relative performance of the forecasters.

Even when we have consistent measures, what do they mean? In Portland, Maine the forecast accuracy is 69%, but in Phoenix, Arizona it's 89%, using exactly the same assessment process. Why is this? Are the Portland forecasters especially incompetent? No! In Phoenix the forecaster is on to a pretty sure-fire thing by simply predicting it's going to be hot, dry and sunny. In coastal Portland it's a much trickier exercise.
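To see why a raw accuracy number says so little about forecaster skill, here is a minimal sketch. It is not the assessment process behind those figures, and the sunny-day frequencies are purely hypothetical; it simply shows that a no-skill forecaster who always says "sunny" scores far higher in a Phoenix-like climate than in a Portland-like one.

```python
# Illustrative sketch only: a forecaster who always predicts "sunny" scores
# well wherever sunny days dominate. The climate frequencies below are
# hypothetical, chosen only to make the point.
import random

random.seed(42)

def hit_rate(forecasts, outcomes):
    """Share of days on which the forecast matched the outcome."""
    hits = sum(f == o for f, o in zip(forecasts, outcomes))
    return hits / len(outcomes)

def simulate_city(p_sunny, days=365):
    """Generate a year of weather where each day is sunny with probability p_sunny."""
    return ["sunny" if random.random() < p_sunny else "not sunny" for _ in range(days)]

for city, p_sunny in [("Phoenix-like (assumed 85% sunny)", 0.85),
                      ("Portland-like (assumed 55% sunny)", 0.55)]:
    outcomes = simulate_city(p_sunny)
    naive = ["sunny"] * len(outcomes)   # no skill at all: always say "sunny"
    print(f"{city}: naive hit rate = {hit_rate(naive, outcomes):.0%}")
```

The "better" number comes entirely from the easier climate, not from a better forecaster.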

And so it is with SKUs and businesses. Large, stable SKUs – easy; small, volatile SKUs – tricky. Ditto customers, channels, longer or shorter horizons, different levels of aggregation – a whole panoply of differences will drive what is or is not forecastable. That's why we use Forecast Value Add (FVA)® as our measure, and even then 0% may be the best that is achievable.
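For readers unfamiliar with the measure, a minimal sketch of the idea behind Forecast Value Add follows. The error metric (MAPE) and the naive baseline (repeat last period's actual) are assumptions for illustration; the principle is simply that a forecast only adds value if it beats a naive benchmark, and for some series it may not.

```python
# A minimal sketch of Forecast Value Add (FVA), under assumed conventions:
# error is measured as MAPE and the naive benchmark is "last period's actual".
# FVA = naive MAPE - forecast MAPE: positive means the forecasting process
# added value over the naive forecast; 0% (or negative) means it did not.

def mape(actuals, forecasts):
    """Mean absolute percentage error, skipping zero-actual periods."""
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

def forecast_value_add(actuals, forecasts):
    """FVA versus a naive 'repeat last actual' forecast (an assumed baseline)."""
    naive = actuals[:-1]              # naive forecast for period t is the actual at t-1
    return mape(actuals[1:], naive) - mape(actuals[1:], forecasts[1:])

# Hypothetical demand series and forecasts, purely for illustration.
actuals   = [100, 110, 95, 120, 105, 115]
forecasts = [ 98, 108, 99, 115, 108, 112]

print(f"FVA = {forecast_value_add(actuals, forecasts):+.1%}")
```

For a stable Phoenix-like series, the naive benchmark is already hard to beat, so an FVA near 0% may be the honest ceiling rather than a sign of a poor forecaster.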