Here's today's question (via email):
I am the Marketing Manager for a Technology Consulting firm in Perú. The partners and I have held long discussions over the real benefit and application of industry performance ratios such as operational ratios for our business.
What is your take on these indicators? Are they actually available and, if so, are they useful as benchmarks?
While there are many reputable firms who make a living producing industry reports on performance ratios, I always find them so darn expensive to buy. On the surface, they look as if they give valuable information, but the longer you think about it, the less information content they really seem to contain.
Many of these industry studies are based on information collected from the companies themselves, with little attempt at external validation. As we all know from financial scandals, there's more than one way for a company to present any set of financial statistics, and many of these studies do not spend much time trying to achieve consistency in what they report. In many cases they cannot.
For example, you say your firm has "partners." That probably means they would like to know a ratio like "profit per partner." But you can play a thousand games with that -- who's a partner? What's profit?
There is so much potential for misunderstanding when each ratio is taken in isolation.
For example, you might look at one company and see it has a margin of 50 percent, while a second has a margin of only 3 percent. Does that mean the first company is doing better? No, it could just mean that the first company is a jewellery store (the high-end consultant model: high fees, low leverage) and the second company is a supermarket (low prices, lots of leverage). The supermarket could still have a better return on investment than the jewellery store, which has to finance its own very large inventory.
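To make the arithmetic concrete, here is a minimal sketch in Python. Only the two margins (50 percent and 3 percent) come from the example above; the revenues and invested capital are invented for illustration:

```python
# Hypothetical figures: only the 50% and 3% margins come from the
# example above; revenues and invested capital are invented.

def roi(revenue: float, margin: float, invested_capital: float) -> float:
    """Return on investment = profit / capital tied up in the business."""
    return revenue * margin / invested_capital

# Jewellery store: high margin, but a large slow-moving inventory to finance.
jeweller = roi(revenue=1_000_000, margin=0.50, invested_capital=5_000_000)

# Supermarket: thin margin, but the same capital supports far more sales
# because inventory turns over quickly.
supermarket = roi(revenue=50_000_000, margin=0.03, invested_capital=5_000_000)

print(f"Jeweller ROI:    {jeweller:.1%}")     # 10.0%
print(f"Supermarket ROI: {supermarket:.1%}")  # 30.0%
```

The thin-margin business comes out three times ahead, because return on investment is margin times how hard the capital works.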
This seems like an obvious example, but the reasoning applies to all ratios -- they only make sense when you can put them in the context of all the other ratios and extract the whole picture.
Next, you have the problem of averaging. Suppose you relied on a survey company that averaged the margins of the two companies I described above and reported that the average margin was 26.5 percent. Does that average number have any meaning? Probably not, but that's what gets reported in most surveys.
A third problem I have is that I suffer from the tragic history of having received a high distinction in mathematical statistics. That means I can never look at a survey average without asking myself, "But what's the standard deviation?" It tells you nothing to know you're 10 percent above the standard industry ratio if they don't tell you what the normal variation around the average is. Is 10 percent above a great performance, or just "noise"? But the information needed to judge the variance rarely gets reported.
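Both problems take only a few lines to demonstrate. In the sketch below, the two margins (50 and 3 percent) come from my example above; the eight-firm industry sample is entirely invented:

```python
# Why a survey average means little without its spread.
from statistics import mean, stdev

# The two companies from the example: the survey would report 26.5%...
two_firms = [0.50, 0.03]
print(f"mean  = {mean(two_firms):.1%}")   # 26.5% -- what gets reported
print(f"stdev = {stdev(two_firms):.1%}")  # ~33.2% -- what gets left out

# A hypothetical industry sample. Suppose we sit 10 percent above the mean:
margins = [0.12, 0.18, 0.09, 0.22, 0.15, 0.31, 0.11, 0.26]
mu, sigma = mean(margins), stdev(margins)
ours = mu * 1.10
z = (ours - mu) / sigma
print(f"industry mean = {mu:.1%}, stdev = {sigma:.1%}")
print(f"our {ours:.1%} is only {z:.2f} standard deviations above -- noise")
```

With this invented sample, "10 percent above the industry average" turns out to be less than a quarter of a standard deviation: exactly the kind of difference you cannot judge when the survey omits the spread.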
I could go on, but now I have to reverse my conclusion on you. In spite of all of this (and the problems of inter-regional comparisons, etc., etc.) I still would want to look at the numbers. If you know what you're doing (that's a big IF) and you don't rely on them too much, then it's better to know the reported operational ratios than not to know them.
As someone trained in numbers, I always like to say that there is no such thing as an objective, stand-alone numerical measurement system. All measures are, at best, SIGNALS to prompt further investigation and reflection. Used that way, they can be very helpful, and I'd look at the industry ratios (as long as they weren't too expensive to get).
Anybody else want to help the questioner with some experiences and views?
3 comments:
I have three thoughts to add:
1. In one of my roles, we had access to very specific productivity comparisons between machines that made newsprint paper. The numbers were collected and reported in the same way by all the companies involved, and had high credibility. The operators I worked with eagerly awaited those statistics and worked hard to come out on top. In that very specific case, benchmarks were great motivators.
2. I have used competitive cost curves, generated by third parties, for years. While they are usually wrong at the detailed level, they are mostly directionally correct. If your operation comes out at the 78th percentile on the cost curve, you know it might really be 60th or 90th, but it darn sure won't be at the 25th percentile (a toy simulation after this list illustrates why). Those curves have helped me break through denial and instigate needed improvement.
3. However, the most useful benchmarking for me has always been us against us. How are we doing now compared to how we did five years ago, and what can we do to drive our key ratios forward? In that case, we know how the numbers were collected, we know the context, and we know all the measures of dispersion and reliability.
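On the second point, a small simulation shows why a noisy cost curve can still be directionally correct. Everything here is assumed: the cost distribution, the 15 percent reporting error, and the firm count. The point is only that even large noise shifts a 78th-percentile rank around without ever dragging it near the 25th:

```python
# Toy simulation: how much can reporting noise move a percentile rank?
import random

random.seed(42)
N_FIRMS, TRIALS, NOISE = 100, 1000, 0.15  # 15% reporting error (assumed)

# Invented "true" industry cost distribution; we sit at the 78th percentile.
true_costs = sorted(random.lognormvariate(0, 0.3) for _ in range(N_FIRMS))

ranks = []
for _ in range(TRIALS):
    # Every firm reports its cost with up to 15% error, us included.
    noisy = [c * random.uniform(1 - NOISE, 1 + NOISE) for c in true_costs]
    ours = noisy[77]  # our own (equally noisy) reported cost
    ranks.append(sum(c < ours for c in noisy) / N_FIRMS)

print(f"measured percentile: min {min(ranks):.0%}, max {max(ranks):.0%}")
# The measured rank swings well above and below the true 78th,
# but across a thousand trials it never approaches the 25th.
```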
In a rational world we would assume that the reason for wanting benchmark data is a desire to improve our performance (surely you wouldn't spend all that money just to satisfy your ego about your relative position in the industry).
My usual experience is that far too little effort is put into getting the organisation's mindset into the right place before the benchmarking data is released.
Do we really want to use this data to improve, or will we just expend a load of energy explaining why the data is not relevant?
My experience tells me that the latter is far more common.
The second problem with benchmarking is that we are only comparing ourselves against the current best, which severely restricts our horizons.
Kent Blumberg noted that his most successful benchmarking compared current performance to past performance.
I agree with that, but would take it one step further.
Rather than looking back and seeing how far we have come, we need to project forward and see how far we could go.
This view of benchmarking identifies the gaps between where we currently are and a "perfect world" scenario; it invariably elicits the response "we could never do that".
As long as we are bold enough to say "yes you can, let's go try", we will have opened up our horizons indefinitely.
"... the most useful benchmarking for me has always been us against us".
Bravo.
What's more, it turns out that operational benchmarks are generally unique to each organisation.
I believe there are fundamental measures that are interesting in any discipline, but they are never directly actionable by themselves.
That's the hidden attraction of operational metrics -- the assumption that simply getting the number will tell you what to fix, and when you are done.
I think a better strategy with any of these numbers, internal or "benchmark", is to apply a "Sesame Street test" ("one of these things is not like the others") and go investigate the stuff that doesn't match.
Ask "Why are these different?" and you find out all kinds of things, things that you might go change if you want to.
Assert: "We must do better than that over there", and that's what you'll get.
The measure will do better -- no matter what the consequences.
If you use numbers as a crutch to abdicate your responsibility to run the business, well, you get what you deserve.
And, BTW, you're not adding much value.
Let's see: two measurement systems, theirs and ours; both measure the right stuff, both measure the same stuff, so all we have to do is drive our number to be better than theirs.
A trained monkey could do that.
Wouldn't be paid much, however.
Me, I use operational ratios all the time, as a tool for understanding how my business is working.
I wonder.
What change or insight are the partners you are working with going after with these discussions about numbers?
What would they like to see happen?