The New York Times made waves earlier today when it reported that the Census Bureau will be making extensive changes to the surveys it uses in order to gather information on health insurance status across America. The changes, which will purportedly help provide a more accurate view of insurance statuses across the population, will also apparently be so vast as to render side-by-side comparisons with data from old surveys impossible.
The Census Bureau, the authoritative source of health insurance data for more than three decades, is changing its annual survey so thoroughly that it will be difficult to measure the effects of President Obama’s health care law in the next report, due this fall, census officials said.
The changes are intended to improve the accuracy of the survey, being conducted this month in interviews with tens of thousands of households around the country. But the new questions are so different that the findings will not be comparable, the officials said.
As pretty much everybody knows, shifting survey methodologies from year to year would totally confound any patterns that emerged from the data, which is why many (myself included) responded quickly and negatively to this development. If the pre-ACA data from 2013 were collected using the old survey methods while the newer post-ACA data from the current year were collected using the revamped method, we’d be left without any meaningful and reliable way to interpret the changes. Did the enrollment numbers go up? Did they go down? Either way, how much of that was the ACA and how much was the change in methodology itself? Making such an alteration between the last year before the full implementation of the ACA and the first year after it would have destroyed an opportunity to observe the effects of policy playing out on a national scale.
As Sarah Kliff quickly pointed out, however, the changes to the Census Bureau’s methods may not actually have all that much of an effect, because the information for this survey is collected on a one-year lag. That means the data for 2013 hasn’t been collected yet, and when it is, it will be gathered using the new methods. Kliff apparently got this information from a senior official within the Obama administration.
It might not be time to freak out quite yet: What’s being missed here is that the Obama administration will use the new survey questions to collect data for 2013, the year prior to Obamacare’s health insurance expansion, a senior administration official says.
The Census Bureau reports the health insurance rate with a one-year delay; in September 2013, for example, the agency reported the percent of Americans without coverage in 2012. It will most likely report the uninsured rate for 2013 sometime this coming fall.
In other words: The survey will make it difficult to compare the uninsured rate for 2012, the last year for the old questions, and 2013, the first year for the new questions. But making the change now means that 2013 and 2014 – the year before and after Obamacare’s big programs started – are using the same question set.
If that’s accurate (I have no reason to believe it isn’t), then it is welcome news indeed. It’s still possible that new survey methods could weird up the results somehow, but as others have already pointed out all over Twitter, the Census Bureau is hardly the only source of information on health insurance and enrollment across the population (though the NYT does refer to the Census Bureau surveys as “authoritative”). Multiple other polls exist, so they could be used for reference should any unforeseen issues arise with the Census polls.
The fact that the new Census Bureau methods will capture at least one year of pre-ACA info is important, but it’s worth stating the obvious: one year of pre-ACA info gives us far less to work with than many years of it. The ACA didn’t drop into existence out of a vacuum in 2014; people, employers, and insurance companies have been changing their behavior since its passage all the way back in 2010. Comparing 2013 to 2014 still won’t give us a clean look at exactly what is going on, but it’s definitely a lot better than being left without any single point of reference whatsoever.
Any information we can get our hands on is going to be valuable (though I doubt much of it will be definitive enough to really settle debate and win believers from one side to the other). I tend to sympathize, though, with the point that Tyler Cowen made in response to this news, in a blog post entitled “Dept. of Pure Coincidence.”
Obviously with a big new law you need new questions too, I suppose, plus the old questions ought not to hang around. You can read more here.
As a side note, I have been reading far too many blog posts about “numbers enrolled” as a metric of success for Obamacare. That has never been a good test of the serious criticisms (and defenses) of ACA.
I’ve written in the past about the fact that the problems with the online exchanges and the enrollment numbers have never been, at least in my eyes, the main area of concern when it comes to the administration’s current health reform efforts. Think about it this way: is there a single high-profile example of anybody changing their mind on the ACA, one way or the other, on the basis of the numbers we’ve seen thus far? It is simply too early to interpret data this messy with any degree of confidence, and the ACA is about a lot more than the sheer volume of people who sign up for health insurance. It’s going to be a long time before anyone really has a good idea of how well or poorly the ACA has performed, and I suspect that even then its supporters will support it and its opponents will not.