
Are Participation-based Findings Invalid? The Wellness Industry Thinks So.


A New Yorker cartoon, which unfortunately I can’t find and wouldn’t pay to license even if I could, shows a couple watching their dog chase its tail.  The caption is: “I’ll admit he’s stupid, but you have to admire his commitment to stupidity.”

Likewise, I stand in awe of the wellness industry’s willingness to defy all rules of math, plausibility, science, biostatistics, integrity, and common sense in their commitment to the participants-vs-non-participants study design.

In a nutshell, wellness apologists would have you believe that if you divide a company into employees who want to lose weight and employees who don’t, the difference in the two groups’ “before” and “after” weights is due to the program, not to the difference in motivation to lose weight. Further, they would have you believe that people who start in the motivated category but drop out shouldn’t count at all.

If this study design made sense, the FDA could dispense with double-blinded trials.  It could simply compare people who are eager to take a certain drug to people who couldn’t care less about taking it.  And what if someone taking the drug gets major side effects and drops out?  Heck, don’t count him at all.  Pretend he never existed.  Silly?  Of course. But “silly” is what wellness is all about–and I’ve just described their study design.
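
To make the point concrete, here is a minimal simulation sketch (mine, not from any wellness study, with made-up numbers): a program with zero effect by construction still appears to save hundreds of dollars per participant, purely because motivated people self-select into it.

```python
# Minimal sketch (hypothetical numbers) of the participants-vs-non-participants
# fallacy: the "program" below has zero effect by construction, yet the
# vendor-style comparison still finds large per-participant "savings."
import random

random.seed(1)

N = 10_000
rows = []
for _ in range(N):
    motivated = random.random() < 0.30            # ~30% want to improve anyway
    baseline = random.gauss(5000, 1500)           # annual claims, dollars
    own_effort = random.gauss(400, 200) if motivated else 0.0
    program_effect = 0.0                          # null program, by construction
    followup = baseline - own_effort - program_effect
    participates = motivated                      # only the motivated sign up
    rows.append((participates, baseline - followup))  # (joined?, cost reduction)

def mean(xs):
    return sum(xs) / len(xs)

savings_participants = mean([r[1] for r in rows if r[0]])
savings_non_participants = mean([r[1] for r in rows if not r[0]])

# Vendor-style math: credit the entire gap to the program.
print(f"Participant 'savings':     ${savings_participants:,.0f}")
print(f"Non-participant 'savings': ${savings_non_participants:,.0f}")
print(f"Gap credited to a program that does nothing: "
      f"${savings_participants - savings_non_participants:,.0f}")
# Dropping the dropouts, as the study design also does, would widen the gap further.
```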

Let’s consider the evidence. The beauty of this evidence is that it constitutes what a litigator would call “admissions against interest,” meaning that the perpetrators’ own statements invalidate their own case.  We don’t need to prove we’re right–the wellness industry proved it for us. Moreover, they did this despite massive investigator bias and publication bias in their attempt to show the opposite, sprinkled in with an attempted cover-up by Mr. Integrity himself, Ron Goetzel.

Wellness always appears to show savings because the “participation effect” isn’t isolated from the “program effect.” But what if you could isolate those two effects? What if there were situations in which:

  1. Participants were separated from non-participants but not offered a program to participate in?
  2. A program gave bad advice, the advice was taken…but participants still showed savings?
  3. A controlled experiment tested participation impact itself against program impact?

Remarkably, we have examples of all three – a compendium of self-immolating examples, all peer-reviewed, that have appeared in this blog before, though never in one place. And all continuing to prove the mantra: “In wellness, you don’t have to challenge the data to invalidate it. You merely have to read the data. It will invalidate itself.”


Eastman Chemical and Health Fitness Corporation: Savings without a Program

The slide below — blessed by the entire Koop Award Committee — clearly shows savings increasing during the two “baseline years,” before the “treatment years” even started. Phantom savings reached a whopping $400/year/employee by 2006…without doing anything!

[Figure: HFC Eastman Chemical wellness data]

This was the example where Ron Goetzel, who already knew about this misrepresentation of savings, went back into the original Koop Award application and doctored it, removing the bottom axis altogether to prevent readers from seeing what really happened.  He then wrote in Health Affairs that the “original is online and subject to review,” when in fact it wasn’t — only his doctored version was.

[Figure: the rewritten HFC slide]

Coda: by the end of 2008, the “savings” for Eastman participants exceeded $900/year…but average participant risk declined by only 0.17 on a scale of 5.  Yet they won a Koop Award. Is this a great country or what?


Participant costs decline immediately, thanks to a lowfat diet

By getting people to eat more carbs and less fat, Stanford researchers showed a reduction in costs of $397 in the first six months alone, or roughly $800/year, or about $2000 in today’s dollars.  Since participant risk factors declined by only 2.2% and since some of the advice was demonstrably wrong, the only thing (besides regression to the mean) that could explain this result is simply being willing to participate, even if you do the wrong thing.

Yet despite its obvious invalidity, this study was selected for inclusion in Professor Baicker’s 3.27-to-1 paean to the wellness industry.  Without invalid studies she wouldn’t have been able to perform a meta-analysis in the first place.


Aetna admits nothing happened…but claims $1464 in savings/participant anyway

This was a controlled experiment in which fairly low-risk people were selected (“Invited”) and divided into what we will designate as Participation Offered (“Group PO”) and Control.  Group PO was offered the program, and a small number signed up; the Control group wasn’t. At the end of the year, there was basically no difference in health status between the Control group and Group PO as a whole (participants and non-participants combined)…and in a few cases the Control group did better:

[Figure: JOEM Aetna study vs. control]

Nor should there have been any difference. Subjects were Invited specifically because they were not chronically ill or even at risk of becoming chronically ill. Hence there was no opportunity to improve health status in a clinically meaningful way, especially in only 12 months. Further, the program itself was based on the highly questionable proposition that telling people they had a gene for obesity would motivate them to lose weight.

Yet, the participants in Group PO saved $1400+ as compared to the non-participants in Group PO. Just to be clear: once both those subgroups were totaled and compared to Control, any savings — or as shown above, health improvement — went away.

Pure participation bias. You couldn’t ask for a more compelling experiment. This was the one where an editorial advisor to the journal that published it apologized for allowing Aetna to show $1400 in savings when there was none. (See “Comments.”)
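
To see how a $1,400-per-participant “savings” can coexist with zero savings overall, here is the arithmetic with hypothetical round numbers (the study’s actual group sizes and costs differ):

```python
# Hypothetical round numbers (not the study's actual figures) illustrating how
# a large participant-vs-non-participant gap can coexist with zero savings
# for the invited group as a whole, relative to the control group.

control_cost = 5000            # average annual cost, Control group
invited_total = 1000           # employees in Group PO (invited)
participants = 100             # the small number who signed up
non_participants = invited_total - participants

# Premise of the thought experiment: Group PO as a whole matches Control,
# i.e., the program moved nothing at the group level.
group_po_average = control_cost

# Suppose the self-selected participants cost $1,400 less than the
# non-participants (the gap the vendor reports as "savings").
gap = 1400
# Solve: weighted average of the two subgroups equals group_po_average,
#        and nonpart_cost - part_cost = gap.
part_cost = group_po_average - gap * non_participants / invited_total
nonpart_cost = part_cost + gap

print(f"Participants:     ${part_cost:,.0f}")
print(f"Non-participants: ${nonpart_cost:,.0f}")
print(f"Group PO overall: "
      f"${(participants*part_cost + non_participants*nonpart_cost)/invited_total:,.0f}")
print(f"Control:          ${control_cost:,.0f}")
# The $1,400 "savings" is entirely a statement about who chose to sign up,
# not about what the program did.
```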


These are just the three most self-evident cases. But if you look hard enough, you’ll see a pattern in wellness, especially among award-winning programs: risk factors among participants barely budge, but huge amounts of money are saved.  Nothing is ever plausibility-tested, even though the HERO Guidelines urge exactly such an event-rate-based test.
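
For readers wondering what an event-rate-based plausibility test might look like, here is a rough sketch of the arithmetic, with placeholder numbers rather than figures from any actual program: compare the savings a vendor claims against the dollars that the observed change in wellness-sensitive events across the whole covered population could possibly account for.

```python
# Rough sketch of an event-rate plausibility check. All numbers below are
# hypothetical placeholders, not data from any program mentioned in this post.

covered_lives = 10_000
claimed_savings_per_employee = 900          # dollars/year, the vendor's claim
events_baseline = 80                        # wellness-sensitive admissions, year 1
events_followup = 72                        # same, after the program
cost_per_event = 25_000                     # average paid cost per admission

claimed_total = claimed_savings_per_employee * covered_lives
explained_by_events = (events_baseline - events_followup) * cost_per_event

print(f"Claimed savings:                         ${claimed_total:,}")
print(f"Savings the event-rate drop can explain: ${explained_by_events:,}")
print(f"Unexplained (implausible) share:         "
      f"{1 - explained_by_events/claimed_total:.0%}")
```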

At this point the best argument for hiring a “pry, poke and prod” vendor is that the economy depends on it. If you don’t keep sending these people money the entire country might slip back into recession. Breadlines could form. But the good news is that bread is a key component of a lowfat diet.

 


2 Comments

  1. Ron says:

    Al, we’re not a poke, pry and prod vendor but I have a question for you. Given the argument against the methodology of participants vs. non-participants, what is your issue with measuring the impact of a program with the entire employee population to determine the influence the program has over time (whether employees are in or not in the program or go in and out of the program from year to year)?


    • whynobodybelievesthenumbers says:

Hi Ron, that’s great if you’re doing that. You could earn validation from the Validation Institute. They would also want you to “plausibility-test” your findings with an event-rate test — or at a minimum offer it to companies (it can even be for an extra fee).

      If you have a different objective company-wide metric on which you’d like to be measured, that’s great too. I’d be happy to chat about it with you offline if you like.


In the immortal words of the great philosopher Pat Benatar, hit me with your best shot.