There is a lot more to this study than meets the eye.
Just published today in the American Journal of Managed Care:
Some tourist attractions feature an “A” tour for newbies and then a “behind-the-scenes” tour for those of us who truly need lives. For instance, I confess to having taken Disney’s Magic Kingdom underground tour, exploring, among other things, the tunnels through which employees travel so as not to be seen out of costume in the wrong “Land.”
Likewise, there have been many reviews of the recent wellness study conducted by the National Bureau of Economic Research (NBER), the first-ever randomized controlled trial of a wellness program. This, however, is the first review to go beyond the “A” tour of the headlines.
By way of background, the headline is that the mainstream wellness program the investigators examined at the University of Illinois did not noticeably move the needle on employee health. They didn’t address return on investment (ROI), because there obviously was none. Achieving a positive ROI would require moving the health-risk needle, not just by a little, but by enough to significantly improve the health of many employees. Then, since wellness-related events such as heart attacks would not have befallen these employees immediately in any case, this improvement would have to be sustained over several years before there was a statistical chance of some events being avoided.
Finally, the magnitude of this improvement would have to be great enough to violate the rules of arithmetic, because it is not mathematically possible to avoid enough medical events to break even on wellness. For instance, it actually costs about $1 million to avoid a heart attack through a screening program.
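To see just how unforgiving that arithmetic is, here is a minimal back-of-the-envelope sketch in Python. The roughly $1 million cost per avoided heart attack comes from the paragraph above; the dollar value of the avoided event itself is a hypothetical placeholder chosen for illustration, not a figure from the study.

```python
# Back-of-the-envelope wellness ROI sketch.
# The ~$1M screening cost per avoided heart attack is the figure cited above;
# the savings-per-avoided-event number is a hypothetical placeholder.
cost_to_avoid_one_event = 1_000_000   # screening spend per heart attack avoided
savings_per_avoided_event = 50_000    # hypothetical cost of the event avoided

roi = savings_per_avoided_event / cost_to_avoid_one_event
print(f"Dollars returned per screening dollar spent: {roi:.2f}")  # 0.05
```

Even if the hypothetical event cost were several times larger, the return per dollar spent stays far below break-even, which is the point being made above.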
This finding, therefore, represents an existential threat to conventional wellness programs.
It all boils down to: why would an associate professor (Damon Jones) publicly humiliate his own dean (Katherine Baicker — yes, the very same Katherine Baicker who always seems to be on the wrong side of every wellness debate) …unless he is absolutely sure he is right?
She can’t fire him now because that would get picked up by the lay media. Perhaps she should have paid him $130,000 not to disclose the results.
I don’t understand how a study which itself uses a participant vs. non-participant comparison research methodology proves that “the entire participant vs. non-participant methodology is invalid.” That just doesn’t make any sense.
Rather than proving that the participant vs. non-participant research methodology is invalid, it seems to me that what we have going on here relates more to the hierarchy of evidence than to research methodology. Under the hierarchy-of-evidence framework, a randomized control strategy gives the NBER study’s results more value, weight, and credibility than the results of a participant vs. non-participant study, which does not randomize participants and non-participants. This, however, in no way invalidates the participant vs. non-participant (controls) research methodology. They are two very different things, at least as I remember my graduate school research methods classes.
Also, without knowing how you are defining health, Al, I am not sure it is accurate to say that the NBER study found that the program did “not noticeably move the needle on employee health,” especially since the researchers stated in January that they had yet to analyze the first-year biometric data. Has there been an updated news release from the researchers regarding the analysis of the biometric data that I have not seen?
With regard to the randomization in this study, what I find most interesting is the researchers’ statement in the research report abstract: “…during the year prior to the intervention, program participants had lower medical expenditures and healthier behaviors than non-participants.” I would love to see or hear the researchers’ thinking as to why this was the case given the random assignment in the study. What are your thoughts, Al, as to why this was the case?
Bill, good catch. I’ll rephrase. They didn’t use that methodology. They tested that methodology.
And you called me out on the other sloppiness too.
However, on the last point, what they mean is that, within the group assigned to the program (as opposed to the control group), the active participants had lower medical expenditures. That is exactly what one would expect, phrased that way. As you recall from the Eastman Chemical/Health Fitness example, participants outperform non-participants even if they don’t have a program to participate in.