They Said What?


Wellness trade association (HERO) throws itself under the bus


In wellness, there is a saying that you don’t have to challenge the data to invalidate it. You merely have to read the data. It will invalidate itself.

Now that adage can be extended to cover the May "position paper" published by the Health Enhancement Research Organization (HERO). HERO manages to invalidate its own position even though it didn't actually take a position in this "position paper" to invalidate. It didn't even present any data…and yet its data was wrong.

All they said was to "critically examine research." Then they provided four paragraphs that show why doing precisely that invalidates everything they've ever published. See for yourself, paragraph header by paragraph header.

I agree totally — except with the grammar, English being right up there with arithmetic, logic, statistics, and ethics among the courses that HERO's staff have struggled with. There has not been a single study in the last five years showing wellness saves money. Sometimes the studies say explicitly that wellness failed, as the National Bureau of Economic Research study did, with Medicare close behind. Or PepsiCo. Or the state of Connecticut, which, having thrown away millions on inappropriate screenings and doctor visits, declared that losing money was "a good thing."

Other times, you have to actually read the study to realize that wellness loses money, as in every Koop Award-winning example in the six years through 2016, like this one and this one and this one and this one and this one and this one. Why not 2017? Because in 2017, they couldn't even find a study that pretended to save money.

HERO has painted itself into a corner here too, because if a study ever does come out alleging that wellness saves money, critics would simply note that it is important to be skeptical of a claim from a single study refuting all the others. Apparently Prof. Baicker is working on a new paper, having found arguably the most stable genius vendor in the wellness industry to support what are likely to be claims of savings high enough to support her own previous claims of savings, but low enough that they won't immediately self-invalidate upon exposure to sunlight. And let us not forget that her original conclusions were publicly invalidated, by name, by her own subordinate. Is this a great country or what?


The HERO position paper complains that:

Media criticism is sometimes based on programs that are not evidence-based, are poorly implemented, or are incorporated into unsupportive environments.

Hmm… so every time a study comes out showing wellness loses money, it's because the program was done badly? This list of losers apparently includes the aforementioned PepsiCo, whose program was so bad they won two Koop Awards.

During my debate with Ron Goetzel, a number of questioners in the audience complained that their own companies' wellness programs were horrible. Ron's stock response: your company needs a better program. He reserved particular animus for the Penn State program, which he himself decried as "horrible," even though he was its prime architect. In another instance he blamed the questioner for being a bad employee.

Since employees dislike being pried, poked, and prodded, "unsupportive environments" would include every company, according to WillisTowersWatson. That means the only companies with "best practice" programs would be wellness vendors themselves, like Vitality Group, which, as a wellness vendor, curated and implemented only the finest wellness interventions available to mankind. Oh, wait a sec, I goofed. That program failed miserably. Shame on their own bad employees!

Vitality Group is in good company. As Mr. Goetzel says, thousands of programs are done badly while only 100 have succeeded.

“Best practices” might include adhering to US Preventive Services Task Force guidelines, not demonizing social drinking, avoiding harms to employees with eating disorders, and not telling people to do stupid things. I’d settle for a program that does any one of those things.


The HERO paper's next piece of advice: "Ask whether enough time was allowed for the intervention to produce desired outcomes."

How much time do they need? 3 years? 6 years? They need more? Sure — here is 14 years' worth of results for the entire US, showing virtually nothing has happened. The "wellness-exposed" and "wellness-non-exposed" <65 populations are virtually coincident, with only Medicare — where, by definition, there is no workplace wellness — showing a little improvement. It's all spelled out here.

This paragraph is supposed to be a subtle diss of the NBER paper, which, in the 4 months since its publication, the very stable geniuses at HERO have not mentioned by name, to avoid drawing attention to its obvious quality and validity. (Welcome to my world.) True enough that the NBER paper covered only one year, though the authors added there was absolutely nothing "trending towards savings" to suggest that the second year would be any different.

The irony, of course, is that almost every one of those Koop Award-winning programs did claim first-year savings. So first-year savings are obtainable, except when they aren't.


Confirmation bias? HERO is a thesaurus-level paean to confirmation bias. Look at a typical study, and all you will see is citations to studies by the authors' colleagues. Not one study in the wellness trade association's journal (whose prevaricator-in-chief, Paul Terry, also runs HERO) has ever cited me, whereas this single article by me cites various members of the Wellness Ignorati 115 times. Not one study in the history of that journal has ever found wellness loses money. At least deliberately. Just as Perry Mason lost one case, later overturned on appeal, that journal accidentally found that "randomized control trials exhibit negative ROIs," but then devoted an editorial in the next issue to overturning that conclusion.

Here is HERO’s exact language on confirmation bias:

Confirmation bias is the tendency of researchers to draw inferences from their study that align with their preexisting beliefs but are not well supported by their data.

This is, of course, how the Wellness Ignorati got their name: deliberately ignoring the overwhelmingly conclusive data that undermines their revenue stream. Examples are legion, but my favorite is Larry Chapman breathlessly propagandizing a study that he interpreted as aligning with his preexisting belief that health risk assessments save 50% ("they should be treated like a beloved pet"). Alas, he made the mistake of also providing the actual data, which naturally not only can't be interpreted to show 50% savings, but can't even be misinterpreted to show 50% savings. Or any savings, for that matter.

Claim: [Chapman's slide claiming 50% savings]

Data: [chart of the actual data from the study]

Larry, one question for you: Where's the beef?


13 Comments

  1. williammcpeck says:

    Here is the full paragraph from HERO:

    “Be skeptical of claims from a single study that appears to refute a large body of previously conducted peer-reviewed research. Hundreds of peer-reviewed studies published in scientific journals over three decades have identified organizational policies, interventions, and cultural supports that yield positive health and business outcomes. Systematic reviews have evaluated this research and concluded there is strong evidence that well-designed HWB initiatives are effective. New studies must be weighed against this large body of evidence.”

    Here are my thoughts about this paragraph:
    1. Employers and worksite wellness practitioners should be skeptical of every study published about worksite wellness. Many variables go into a research study and can influence its findings, both positively and negatively. No research is foolproof or error-free. Employers and practitioners need to think critically about what they read, and different levels of research offer different value when it comes to their results.
    2. The process of peer review has absolutely no influence over the research methodology chosen by the researcher; it is an apples-and-oranges comparison. The real issue is the researcher's methodology, not peer review. The results reported are a function of the methodology used, not of peer review. And if a researcher botches the implementation of their chosen methodology so badly that peer review flags it, it is unlikely the study would ever be published.
    3. To the best of my knowledge, there are no definitive study results which refute this statement: "Hundreds of peer-reviewed studies published in scientific journals over three decades have identified organizational policies, interventions, and cultural supports that yield positive health and business outcomes." These results can be many things, not just healthcare savings.
    4. "Systematic reviews have evaluated this research and concluded there is strong evidence that well-designed HWB initiatives are effective." But the real question is: effective at what? Do we even know yet the full spectrum of results from definitive research?
    5. New studies must be weighed against the standards of the research design hierarchy and the hierarchy of evidence, not just against the large body of existing evidence. If a study is methodologically valid, then we must next ask how much weight, value, or credence to give its results in terms of cause and effect. The greater the rigor of the research design used, the more weight, value, and credence we should give the results.


    • whynobodybelievesthenumbers says:

      Hi Bill, their #3 is wrong…and I've got $3 million that says so. They all run around peer-reviewing one another, but when I peer-review them, I find obvious mistakes that invalidate the whole thing. That's why people have stopped applying for the Koop Award (down from 21 to 3 applicants). Ron claimed that the reason for the decline was that the application form had gotten harder, but — get ready — it is identical to the application from 20 years ago.

      I don’t know how much easier I can make it to claim the reward — the rules are all spelled out, it’s legally binding, and I’ve bent over backwards to make it easy for them to win. For instance, the burden of persuasion rests on me. I only get to appoint 1 of the 5 judges. https://theysaidwhat.net/2017/12/09/does-wellness-work-lets-boost-the-2-million-reward-to-3-million-and-make-it-easier-to-claim/

      Your fifth point is an excellent one. They themselves found that RCTs show negative ROIs…and that was before the NBER.
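
      To make that concrete, here is a minimal simulation (all numbers invented for illustration; this is a sketch, not data from any actual study) of how self-selection alone can manufacture "savings" from a program that does exactly nothing:

      ```python
      import math
      import random

      random.seed(0)

      # A made-up population of 100,000 employees. Each has an unobserved
      # "health motivation" that (a) makes them likelier to volunteer for
      # wellness and (b) independently lowers their claims cost.
      # Crucially, the program itself has ZERO effect in this model.
      N = 100_000
      population = []
      for _ in range(N):
          motivation = random.gauss(0.0, 1.0)
          volunteered = random.random() < 1.0 / (1.0 + math.exp(-motivation))
          annual_cost = 5000.0 - 800.0 * motivation + random.gauss(0.0, 2000.0)
          population.append((volunteered, annual_cost))

      def mean(xs):
          return sum(xs) / len(xs)

      # Participant-vs-non-participant comparison: self-selection alone
      # produces several hundred dollars of phantom "savings" per head.
      participants = [cost for v, cost in population if v]
      refusers = [cost for v, cost in population if not v]
      print(f"Observational 'savings': ${mean(refusers) - mean(participants):,.0f}")

      # Randomly assign the same do-nothing program instead: the "savings" vanish.
      treated, control = [], []
      for _, cost in population:
          (treated if random.random() < 0.5 else control).append(cost)
      print(f"Randomized 'savings':    ${mean(control) - mean(treated):,.0f}")
      ```

      The randomized comparison recovers the true effect of zero, which is exactly why randomized designs sit above participant-vs-non-participant comparisons on the hierarchy of evidence.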


      • williammcpeck says:

        Your $3 million challenge aside, Al, I love learning from you, so please direct me to the definitive research that concludes that worksite wellness programs offer no value at all to employers. I would love to read it so I might be better informed. Thanks.


      • whynobodybelievesthenumbers says:

        It's not "no value at all." It's that conventional outcomes-based programs have no value. It is certainly the case that doing wellness for employees instead of to them has value, even though it is difficult to quantify. This link summarizes the lack of value of outcomes-based programs eight ways to Sunday, with supporting links. Compare that to any allegation that wellness does save money (and notice that no one outside the Wellness Ignorati even pretends it does anymore) and see which you think is more compelling. http://www.ajmc.com/contributor/al-lewis-jd/2018/05/in-light-of-wellness-program-findings-employees-should-be-able-to-opt-out-of-wellness-screenings

        Or, in a nutshell: if Wellsteps is the best program, and they themselves admitted they misstated the outcomes, and their own data shows they harmed employees, what does that say about Wellsteps, and about the competence of the esteemed evaluators who are somehow experts and yet couldn't figure out from Wellsteps' own application that it scammed Boise?


  2. williammcpeck says:

    HERO wrote: Determine whether the tested intervention represents a best practice approach.

    This statement from HERO requires employers and worksite wellness practitioners to understand the difference between the various levels of programming and interventions, such as home-grown, best practice, evidence-based, and next-generation practices. I am not sure practitioners generally have this clarity. I have personally seen a case where a practitioner claimed she was using an evidence-based program. When I explored the situation further, it became clear that she had taken an evidence-based program and applied it in her organization, but changed the program to fit her organization and organizational circumstances. The changes were extensive enough that I really questioned whether the fidelity of the original evidence-based program was maintained in its new iteration. She certainly didn't understand that, and I am guessing that many other worksite wellness practitioners, and certainly most employers, don't either. In fact, I am willing to go out on a limb here and state that probably 98% of the worksite wellness programs in existence today are not using evidence-based programming and interventions at all.

    The other point that I think needs to be made here is that the whole point of doing research is to prove that something works or to better understand why and how something is working. Many studies are done to prove that programming and interventions do in fact work. The point of the research is to create the evidence base for the intervention. Therefore, until the evidence base is established, it is not evidence based programming or an evidence based intervention.

    In this paragraph, HERO also states: “Best practice approaches are also supported by organizational policies, a workplace environment, leadership practices, and social norms that make healthy choices easier.” This is certainly a true statement. Anyone who has ever built a worksite wellness program knows that one strategy or intervention does not a wellness program make.


  3. williammcpeck says:

    HERO wrote: Ask whether enough time was allowed for the intervention to produce desired outcomes.

    This is a very legitimate question for the reader of any research study to ask. Outcomes, just like habits, vary in how long they take to come to fruition, if they ever do.

    The better questions to ask would be:
    1. Are the programming and interventions delivered likely to create the desired outcome? Since far too many worksite wellness programs today have a disconnect between their desired outcome and the programming and interventions they offer, this question becomes all the more critical to ask.
    2. If the desired outcome is a reasonable expectation of the programming and interventions delivered, then the question becomes: How long should it take the desired outcome to materialize?
    3. Only after asking and answering these first two questions, should the question become: Did the study or program evaluation allow enough time in the post intervention period for the outcome to materialize?

    Program evaluation, or I should say the lack of program evaluation, is one of the four fundamental core problems with worksite wellness programs today. The last survey figure I saw showed that fewer than 45% of the responding programs conducted any type of program evaluation. And we know from the 2013 RAND study that fewer than 2% of the employers surveyed actually conducted any type of evaluation to determine whether their programs did, in fact, save them any money on healthcare spend.


    • whynobodybelievesthenumbers says:

      "Are the programming and interventions delivered likely to create the desired outcomes?" is one of the bases on which I invalidated the studies behind Kate Baicker's 3.27-to-1 figure. In one case, the researcher told a bunch of diabetics to go on a low-fat diet and eat more carbs…and found massive savings. In another case, Dee Edington found a reduction in cat scratch fever and von Willebrand's disease through a standard "pry, poke and prod" program. http://www.ajmc.com/contributor/al-lewis-jd/2017/01/do-wellness-outcomes-reports-systematically-and-dramatically-overstate-savings?


      • williammcpeck says:

        While what you wrote may all be true, Al, I didn't see anything that refutes my point that different outcomes need different maturation times, so, as a critical thinker, the reader of any published study should be asking the time-versus-outcomes question.


      • whynobodybelievesthenumbers says:

        Bill, I really appreciate all your comments, and you're right in the sense that I can't prove that there isn't some maturation over an extended period in some really good program somewhere. But for the US as a whole, you can't discern any maturation this century.


      • whynobodybelievesthenumbers says:

        See below — I replied in the wrong place.


      • williammcpeck says:

        I hope this shows up in the right place. And I appreciate, Al, your pushing me to continue to think about and explore my knowledge and beliefs about so many of the critical issues facing the field of worksite wellness today.


  4. williammcpeck says:

    As for the "Where's the beef?" chart, I would argue that if what Dr. Jon Robison, Dr. Damon Jones, and you, Al, say is correct, that comparing participants (the study group) to non-participants (the comparison group) is an invalid methodology, then the JOEM data is meaningless, as it makes just such a comparison. As for validity, even Dr. Jones' own study compared participants to non-participants. The only difference was that he used a random selection process for the participant and non-participant groups to try to address the issue of self-selection bias. Speaking of bias, this is obviously an important bias for a researcher to try to compensate or adjust for. Rather than the comparative methodology being invalid, I would argue that his random selection gives his study higher research rigor, thus making any potential cause-and-effect results more credible or valuable on the hierarchy-of-evidence scale. I haven't yet sought out the original JOEM article, but I am assuming the study reported there did not use a random group selection process, thereby yielding results lower on the hierarchy-of-evidence pyramid.

    Your reposting the JOEM data, Al, reminded me that I wanted to comment on it regardless of its validity. I find it interesting that in the NBER study, the participant group had better health and incurred lower costs than the comparison or control group pre-intervention. Based on the data chart in this post, the exact opposite appears to be true for the JOEM data. This raises some interesting questions in my mind about the make-up of worksite wellness program participant groups as we seek to interpret and understand the results of any research study we read. As the NBER study suggests, the nature of the participants in any intervention or participant group can have a profound effect on the results obtained.
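
    To illustrate why that pre-intervention difference matters, here is a toy difference-in-differences calculation (the numbers are invented purely for illustration): a post-period-only comparison credits the program with the entire baseline gap, while comparing the two groups' cost trends nets it out.

    ```python
    # Invented per-employee annual costs, purely for illustration.
    pre  = {"participants": 4200, "comparison": 5000}   # year before the program
    post = {"participants": 4600, "comparison": 5500}   # year after the program

    # A post-only comparison credits the program with the whole gap...
    naive_savings = post["comparison"] - post["participants"]   # $900

    # ...even though $800 of that gap existed before the program began.
    # Difference-in-differences compares the groups' cost TRENDS instead:
    did_savings = (post["comparison"] - pre["comparison"]) - (
        post["participants"] - pre["participants"]
    )                                                           # $500 - $400 = $100

    print(naive_savings, did_savings)   # 900 vs 100
    ```

    Whichever direction the baseline gap runs, toward the participants as in the NBER study or toward the comparison group as in the JOEM chart, ignoring it biases the result.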


In the immortal words of the great philosopher Pat Benatar, hit me with your best shot.
