I’ve discussed misleading health care statistics in several of my recent posts, especially as they are used in comparisons with other industrialized nations in a way that is not favorable toward the United States. (Here, I feel I should make something clear: none of these arguments should be misconstrued as a sort of “we’ve got the best health care system in the world” screed on my part–I’m making them because I believe that many folks don’t actually know where these numbers come from, and to demonstrate that assessing the quality of a health care system is an extremely difficult task. For more on that, head over to The Incidental Economist for Aaron Carroll’s ten-part series on the quality of the U.S. health care system. Readers of this blog might not agree with much of what Carroll thinks as far as solutions go, but keep an open mind. There is A LOT to learn there.)
Anyway, I was perusing Carroll’s series this morning and ended up following links until I found myself in the comments section of a TIE post, where a commenter linked to a letter to the editor of The New England Journal of Medicine. The letter was written by Philip Musgrove, who served as editor-in-chief of the WHO report from 2000 that ranked the health care system of the United States as the 37th best in the world. Although Musgrove was in a position of influence when it came to the report, he had no control over the ranking system. I’m going to post the entirety of the letter below, since I think Musgrove summarizes the problems with the ranking better than I could.
In their Perspective article (Jan. 14 issue), Murray and Frenk review a number of indicators of the relatively poor state of the population’s health in the United States. Most, if not all, of this information is well known to readers of the Journal, and the authors’ use of it is not objectionable. However, Murray and Frenk begin their discussion by referring to the World Health Report 2000, Health Systems: Improving Performance, from the World Health Organization (WHO), which ranked the U.S. health care system 37th in the world, and this is objectionable. (I was editor-in-chief of the World Health Report 2000 but had no control over the rankings of health systems.) Fully 61% of the numbers that went into that ranking exercise were not observed but simply imputed from regressions based on as few as 30 actual estimates from among the 191 WHO member countries. Where the United States is concerned, data were available only for life expectancy and child survival, which together account for only 50% of the attainment measure. Moreover, the “responsiveness” component of attainment cannot be compared across countries, and the estimates of responsiveness for some countries were manipulated. This is not simply a problem of incomplete, inaccurate, or noncomparable data; there are also sound reasons to mistrust the conceptual framework behind the estimates, since it presupposes a production function for health system outcomes that depends only on a country’s expenditure on health and its level of schooling, ignoring all cultural, geographic, and historical factors.
The number 37 is meaningless, but it continues to be cited, for four reasons. First, people would like to trust the WHO and presume that the organization must know what it is talking about. Second, very few people are aware of the reason why in this case that trust is misplaced, partly because the explanation was published 3 years after the report containing the ranking. Third, numbers confer a spurious precision, appealing even to people who have no idea where the numbers came from. Finally, those persons responsible for the number continue to peddle it anyway. To quote Wolfgang Pauli’s dismissal of a theory opposed to quantum mechanics, “Not only is it not right, it’s not even wrong!” Analyzing the failings of health systems can be valuable; making up rankings among them is not. It is long past time for this zombie number to disappear from circulation.
Philip Musgrove, Ph.D.
Health Affairs, Bethesda, MD