History professor and prolific author Jerry Muller came to this topic through his own frustrations as a university department chair and what he saw as the purposelessness of an increasing fixation on metrics.
He opens by citing the television series The Wire, in which police—pressed by politicians to reduce the rate of major crimes—either overlook offences or reclassify them judiciously. The distortive effects of poorly considered metrics are a central theme. He also references the series Bodies, in which a senior surgeon advises a junior on maintaining success rates: “the superior surgeon uses his superior judgement to steer clear of any situation that might test his superior skill”.
Muller concedes that attempting to measure performance is intrinsically desirable, as is a degree of transparency. But, regrettably, we live in an age of mismeasurement, over-measurement, misleading measurement and counterproductive measurement. The problem, he suggests, is not measurement itself but excessive and inappropriate measurement. What is actually measured should be a reasonable proxy for what is intended to be assessed, and must be combined with a degree of judgement; only then does it have value in understanding and informing performance.
Our contemporary fixation with metrics is tagged to Tom Peters’ 1986 motto: “What gets measured gets done”. Over time this was distorted to mean that anything that can be measured can be improved. This, Muller suggests, has led to several false beliefs: that judgement acquired by talent and personal experience can be replaced with standardised data; that making metrics public assures us that institutions are carrying out their specified purpose; and that the best way to motivate people is to reward or penalise them based on measured performance.
We are given a wonderful addition to the army of acronyms: OMD—Obsessive Measurement Disorder. Andrew Natsios[i] suggests this is an intellectual dysfunction rooted in the notion that counting everything in government programmes will produce better policy choices and improved management. This position is supported by Campbell’s Law[ii], which holds that “the more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor”.
One of the most useful sections of the book outlines recurring flaws. The key ones are worth restating:
- measuring the most easily measurable—what is easiest is rarely most important
- measuring the simple when the desired outcome is complex
- measuring inputs rather than outcomes—one of our favourites (means not ends)
- degrading information quality through standardisation—removing nuance through simplification
- gaming through creaming—as per the ‘superior surgeon’
- improving numbers through lowering standards—shifting the goal posts
- improving numbers through omission or distortion—as per The Wire reference
- cheating—simple fraudulent behaviour.
One of the joys of the book is the many examples of flawed thinking in measurement. One of the most famous—and possibly most damaging—became known as the McNamara[iii] Fallacy. McNamara’s style of thinking led to a culture within the Pentagon imbued with a managerial ethos, pursuing measurable efficiencies at odds with the actual strategic thinking the military required.
Daniel Yankelovich,[iv] who coined the term, suggests: “The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.”
Muller takes aim at the various attempts to make higher education ‘more efficient’, notably those of Thatcher’s Conservative government. Of that effort, Elie Kedourie[v] observed that ‘under the slogan of efficiency a great fraud was being perpetrated’, noting that efficiency is not a general and abstract attribute but is always relative to the object in view.
The book has chapters exploring many misdirected metrics inside education, medicine, policing, philanthropy, foreign aid, the military and business. They are fascinating and salutary reading.
Muller cites, for example, an anonymous American governance consultant who commented that the introduction of the Sarbanes-Oxley Act of 2002 resulted in directors being so obsessed with financial reporting that they had little time for the value-adding roles of strategy and future thought.
Novel approaches to metrics are included. One of the best, suggested by an Australian officer for assessing local security in Afghanistan, is using the presence of exotic vegetables in the market as a measure of perceived peace and wellbeing. Simply put, if farmers from out of town felt safe enough to travel to market and get home with the cash then there was some level of peace in evidence.
The book concludes with two highly useful sections, the first on predictable but unintended negative consequences, which include:
- goal displacement—effort diverted to metrics not related to organisational ends
- short-termism—we all know this one: quarterly targets
- cost in employee time—enthusiasm-sapping pointlessness of over-measurement
- diminishing utility—marginal cost outweighing marginal benefit
- rewarding luck—outcomes independent of effort
- discouraging innovation and risk-taking—penalising failure
- discouraging cooperation and common purpose—focus on individual not collective performance.
The final section is a ten-point checklist on when and how to use metrics, ensuring that purpose, utility, cost and possible goal diversion are considered. The final paragraph reminds us that metrics are not a silver bullet, and that it is not a question of metrics versus judgement but of metrics informing human judgement. This includes understanding what relative weight to give the information, how it may be distorted, and appreciating what cannot be measured.
This is a highly readable, well-researched and well-referenced book—and often highly amusing. Ultimately it is a warning about the damage that our obsession with metrics is causing, and how we can begin to fix the problem.
Niall Ferguson[vi] is quoted, and his quip makes a good concluding observation: “those whom the gods want to destroy, they first teach math”.
[i] Andrew Natsios. Senior US public servant with lengthy international development experience
[ii] Discovered independently by two separate research teams in the seventies but ultimately named after American social psychologist Donald T Campbell
[iii] Robert McNamara, accountant and youngest-ever Harvard professor, who became US Secretary of Defense at the time of the Vietnam War
[iv] Daniel Yankelovich. Corporate Priorities: A continuing study of the new demands on business (1972)
[v] Elie Kedourie. British conservative historian and political theorist
[vi] Niall Ferguson. British historian and author