In case you missed it, it turns out that the last 11 wellness studies have all come to basically the same conclusion. This is the intro to the Employee Benefit News piece listing them and linking to them.
And if anyone still thinks overscreening programs have merit, here is a complete exposé from 2017 on “pry, poke and prod.” It’s pretty extensive, so clear your calendar.
The highly publicized randomized controlled trial of the wellness program at BJ’s Wholesale Club, published recently in the Journal of the American Medical Association, found virtually no value in the program.
While most pundits applauded the study, wellness vendors — whose livelihoods, of course, depend on believing the opposite — attacked it. Two common threads among the attacks, including one right here in Employee Benefit News, were that the program was bad (“anachronistic” in this case) and that one can’t draw conclusions from one study. The specific argument: “Recent research has disappointingly focused on a single — typically anachronistic — program to make sweeping statements on wellness programs more broadly.”
Far from being anachronistic, the JAMA study was as mainstream as studies get — screenings, risk assessments, coaching, learning modules. This is what wellness has been all about. True, there are a few new things today — like employee health literacy education — which is my own business, but they are too early in the life cycle to be evaluated.
And far from being a one-off, this study’s finding of zero impact was completely consistent with the previous 11 (eleven) published studies for which data were provided. Let’s review those studies…
Note: someone might say: “that doesn’t add up to 11.” There are plenty of others. My favorite is the self-immolation published by the Health Enhancement Research Organization in 2015. Very stable geniuses that they are, they didn’t actually realize they showed wellness losing money until I pointed it out.
First, Mr. Goetzel claimed that he had never seen this study before (even though it was the centerpiece of his own organization’s guidebook) and that the data were fabricated (not sure how he knew that if he hadn’t seen it). When that defense fell apart, he started vigorously defending the study, his principal defense being that HERO never intended for costs to be compared with benefits.
Further, my suggestion that the costs should be compared to the benefits was “mischievous,” according to Ron. That much I’ll gladly agree with…