I recently joined a discussion on Kaiser Fung's blog Junk Charts, "When to use the start-at-zero rule", concerning when charts should force a 0 into the Y-axis. BTW, if you have not done so, add his blog to your RSS feed; it's superb and I have become a frequent visitor.

On this particular post, I would completely agree with his thoughts were it not for one metric I have trouble visualizing: Forecast Accuracy.

Forecast Accuracy is a very, very widely used sales-forecasting metric that is based on a statistical one, so let's start there.

The statistical metric (Mean Absolute Percentage Error) looks at the average absolute forecast error as a percentage of actual sales. Some of the errors will be positive and some negative but by taking the absolute value we lose the sign and just look at the magnitude of error. (We handle optimism or pessimism in the forecast with a different "bias" metric).

There is occasionally heated discussion in the sales-forecasting community about exactly how this should be calculated, but let's save that for another day, as all the forms I am familiar with share the same properties when it comes to plotting results:

- Perfect forecasts have no error and return 0% MAPE; this is our base.
- There is no effective upper bound on the metric.
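A minimal sketch of the calculation may help. This is the simplest common form of MAPE (the averaging variants debated in the forecasting community differ, but share the properties above); the sales figures are invented for illustration.

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: the mean of |actual - forecast| / actual.

    Taking the absolute value discards the sign of each error, so the
    result measures magnitude only, never falls below 0, and has no
    effective upper bound.
    """
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

# Hypothetical sales data for one product over three periods
actuals = [100, 120, 80]
forecasts = [90, 130, 100]
print(f"MAPE: {mape(actuals, forecasts):.1%}")

# A perfect forecast returns the 0% base
print(f"Perfect: {mape([50, 60], [50, 60]):.1%}")
```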

If we were to look at this across a range of product groups (A thru K), it might look something like this. The Y-axis is forced to start at 0 and the length of the bars has meaning: Product D really does have almost twice the error rate of Product A. This plots out very nicely, it's hard to misunderstand, and the start-at-zero rule certainly does apply.

Forecast Accuracy = 1 - MAPE

I can only assume this metric was created in the sense of *"bigger numbers are better"*. It's in widespread use, it's part of the business forecasting language, and no, I can't change it. As you can see below, perfect forecasts are now at 100% and there is no lower bound on the metric; it can easily be negative.

This causes me a problem. Check out the chart below: this is the same data as before, but now expressed as Forecast Accuracy rather than MAPE, in a standard Excel chart. Excel is trying to help (bless it) and puts the 0 value in without my asking. Work in supply chain and you will see a lot of these.

| MAPE | Forecast Accuracy |
| ---- | ----------------- |
| 0%   | 100%              |
| 20%  | 80%               |
| 40%  | 60%               |
| 60%  | 40%               |
| 80%  | 20%               |
| 100% | 0%                |
| 120% | -20%              |
| 140% | -40%              |
| 160% | -60%              |

The zero value has no special meaning on this metric, so starting at 0 is very misleading: 80% accuracy (20% MAPE) is not twice as good as 40% accuracy (60% MAPE).
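A two-line check makes the distortion concrete, using the 80% and 40% accuracy figures from above:

```python
# Two products' Forecast Accuracy, as they would appear as bar lengths
acc_a, acc_b = 0.80, 0.40

# The underlying error rates: 20% and 60% MAPE
mape_a, mape_b = 1 - acc_a, 1 - acc_b

bar_ratio = acc_a / acc_b      # what the bar lengths suggest: A looks "twice as good"
error_ratio = mape_b / mape_a  # what the errors actually say: B has 3x the error
print(f"bars suggest {bar_ratio:.1f}x better; errors say {error_ratio:.1f}x worse")
```

The 2:1 ratio the bars show is an artifact of where zero happens to sit on the accuracy scale, not a statement about forecast quality.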

Allowing the minimum of the Y-axis to float does not solve this either (below).

"Abandon it" you say "go to a line chart". Line charts often have floating axes and yes they do not emphasize relative size nearly as much as a bar-chart does (below).

Perhaps it's less confusing/misleading than the previous charts, but I still don't like it, because there is data I want to compare relative sizes for (the MAPE), and line charts seem most useful when trying to show patterns. I have no reason to expect a useful pattern to form from product categories: I just sorted them alphabetically.

My thanks to the contributors on Junk Charts for helping me clarify my thinking on this. I don't know that there is a great answer, but as it's a problem I run into all the time, I do want to find a better solution. (FYI, it's just hit me that there is another set of supply-chain metrics, for order fill-rates, that has the exact same problem.)

The best I have been able to do with it so far is shown below. By forcing the upper limit on the Y-axis to 100% and letting the lower limit float, I am trying to emphasize the negative space between the top of the bar and 100%: essentially, the error rate.
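The fixed-top, floating-bottom axis is easy to sketch in matplotlib (the original charts are Excel, so this is just an equivalent; the product labels and accuracy values are invented for illustration):

```python
import matplotlib.pyplot as plt

# Hypothetical Forecast Accuracy by product group, A thru K
products = list("ABCDEFGHIJK")
accuracy = [0.85, 0.78, 0.81, 0.72, 0.80, 0.76, 0.83, 0.70, 0.79, 0.74, 0.77]

fig, ax = plt.subplots()
ax.bar(products, accuracy)

# Pin the top of the axis at 100% (the "perfect forecast" base)
# and let the bottom float just below the worst product,
# so the gap above each bar is the error rate.
ax.set_ylim(min(accuracy) - 0.05, 1.0)
```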

I'm not entirely happy though; those heavy bars do draw the eye. How about a dot plot instead?

You would still have to learn how to read it properly ...

Or how about this? Inspiration or desperation? I'm now plotting the bars down from the 100% mark, emphasizing MAPE while still using the Forecast Accuracy scale. I'm not entirely sure yet, but I *think* I like it, and if I generalize the "start at 0" idea to "start at base" it may even fit the rule.
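One way to build this hanging-bar version, again assuming matplotlib and the same invented accuracy values: each bar's bottom sits at the accuracy value and its length is the MAPE, so every bar starts at the 100% base.

```python
import matplotlib.pyplot as plt

# Hypothetical Forecast Accuracy by product group, A thru K
products = list("ABCDEFGHIJK")
accuracy = [0.85, 0.78, 0.81, 0.72, 0.80, 0.76, 0.83, 0.70, 0.79, 0.74, 0.77]
error = [1 - a for a in accuracy]  # MAPE: the length of each bar

fig, ax = plt.subplots()
# Bars hang down from the 100% line: bottom at accuracy, height = error
ax.bar(products, error, bottom=accuracy)
ax.set_ylim(min(accuracy) - 0.05, 1.0)
```

Here bar length is directly comparable (it is the error rate), while the axis still reads in the Forecast Accuracy units the audience expects.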

What do you think? Which version best handles the compromise between a user's desire to see the metric they know and my desire to show them relative error rates? Have you a better idea? I would love to hear it; this one really bugs me! Can you think of any other examples of metrics where 0 is meaningless?

