They Said What?


Monthly Archives: March 2016

Cancergate: Did a Koop Award Committee Member Commit a Crime?

As part of the cover-up of Health Fitness Corporation falsely claiming to save the lives of 514 Nebraskans with cancer, someone doctored their Koop Award application to remove the evidence of that claim and replace it with a literally and figuratively much more benign statement. For reasons described below, this may not even be legal. We are offering our assistance to Ron Goetzel to help him find the perp.


What would Dr. Koop say?

After Health Fitness Corporation (HFC) admitted lying about saving the lives of 514 alleged Nebraskan cancer victims who turned out never to have had cancer in the first place as part of their Koop Award-winning wellness program, someone tampered with their original award application to try to erase that lie. The “514 early-stage cancers” they claim to have cured morphed into “514 polyps.”

At the 2015 Great Debate, Ron Goetzel (who runs the Koop Award Committee) insisted that the Koop Committee knew nothing of the original lie about finding 514 cases of cancer, even though that line appeared twice in the original Koop Award application.  The Koop Award Committee also saw nothing suspicious in HFC’s marketing materials, which, incredibly, still resided on the HFC website for years after HFC was outed.  HFC finally took it down, an obvious admission of guilt on their part (to go with the actual admission in the newspaper), given how much they had ballyhooed it in the past. Naturally we have copies of the entire “case study” if anyone would like one.

Admittedly, that original lie was a little hard to spot in that case study. You needed to actually open your eyes:

[Image: HFC case-study slide touting Nebraska “life-saving catches”]

[Image: HFC case-study slide claiming 514 early-stage cancer cases]

And a lot of people did open their eyes.  The claim made its way into Google…and all the way to CalPERS:

[Image: Google search results spreading the Nebraska cancer claim as far as CalPERS]

The Koop Committee missed this, though. Claiming to know nothing and see nothing — the Sergeant Schultz defense — is a Koop Committee favorite. However, the initial oversight doesn’t explain why Ron has called Nebraska a “best practice” three times even after he was shocked, shocked to learn that lying was going on in here.

Rewriting History

I want to be very clear: we are not accusing Ron Goetzel of sneaking back in and rewriting the original applications (including forging a section of a letter from the governor of Nebraska) to cover up the lies told by Health Fitness Corporation, which sponsors his award.  He could lose his job at Johns Hopkins if he did, so he wouldn’t.  Quite the opposite: we would both want to get to the bottom of this!

Clearly, though, someone with the same coverup agenda and with the same access to the same Koop Award site rewrote the original HFC/Nebraska award application. Specifically, someone replaced “514 new cases of early-stage cancers” with these employees having only “benign polyps” in order to make it consistent with the denial that the Committee knew anything about HFC’s lie:

[Image: the doctored passage, now claiming “benign polyps”]

Owing to the previous doctoring of original evidence in the Koop Award (for which Ron Goetzel did admit responsibility), we now know to keep screenshots of originals.  Note the difference in the last sentences: the original, below, claims “514 new cases…of cancer.” This is the original that Mr. Goetzel insists did not appear in the application. And yet, here it is.

[Image: the original passage, claiming “514 new cases…of cancer”]

This bungled evidence-tampering shows our book, Surviving Workplace Wellness, is right: “In wellness you don’t have to challenge the data to invalidate it.  You merely have to read the data.  It will invalidate itself.”

The doctored paragraph has now replaced the original paragraph in both places where it appears.  Ours is the only extant copy of the original screenshot.  We learned long ago that you need to capture screenshots because these people always cover their tracks when they get caught lying.  And since most of wellness is a lie, they have a veritable Pennsylvania Station of tracks to cover.

Cancergate?

The perpetrator is in a lot of trouble:  In the application, this altered phrase appears in a letter from then-Governor Heineman’s office in support of the Koop Award.  Obviously that’s not legal. The reason we assume Ron Goetzel didn’t do it is because he would have had to get permission from Johns Hopkins, and they would not have let him forge official state documents while using their affiliation in his title.

Goetzel’s History of Rewriting History

We don’t know who the perpetrator is, but one reason to doubt that Ron Goetzel is the guilty party is that he was already caught doctoring original Koop applications, and it wasn’t fun for him. Hence, one could assume he would be unlikely to do the same thing again.

So Mr. Goetzel is a victim here too because of the Koop Award’s shattered credibility. He should be as horrified as we are, and we should work together on this, and offer our help.  We urge, demand, insist that as the leader of this committee, Ron Goetzel get to the bottom of this!  He needs to find out who tampered with this letter from the Governor, turn him in to Nebraska authorities if indeed that is illegal, apologize, and rescind that Koop Award.

We can’t investigate this ourselves without his cooperation. Even if we knew who it was, we can’t convene the wellness industry ethics committee because in wellness, there is no ethics committee.  That’s because in wellness, as this website has repeatedly shown, there are no ethics.


2022 Update

It appears that the perpetrator once again hacked into Mr. Goetzel’s Koop Award website…and this time, like the Secret Service, deleted the entire submission, including the forged letter from Gov. Heineman, from the award application. There is literally no documentation of what the State of Nebraska won the award for. There is only the erratum statement. “Erratum,” in Goetzel-speak, means none of the very stable geniuses on his committee managed to spot an obvious mistake when conferring an award on one of their friends.

So it looks like we will never know who in this industry committed forgery.

Upcoming MBGH Annual Conference Shaping Up as a Great Show

Dear Members of TheySaidWhat Nation, Friends, Family, Groupies, Hangers-on, and Paparazzi,

The regional business coalitions seem to have consistently excellent conferences.  The upcoming 36th Annual Conference in Chicago (May 5 to 6) is likely to continue in this tradition, with a great agenda, venue, and attendee list.  I look forward to meeting many of you there.  The good news is, unlike many conferences, if you tell them you’re a friend of mine, they’ll still let you in.

By way of background (and it’s a bit confusing), the regional business groups on health are loosely affiliated with the National Business Coalition on Health (NBCH), not the National Business Group on Health. While they have very different philosophies, there is good news about both organizations.

The good news about the former is that they are not nearly as dependent on wellness vendor dollars as other organizations, so it is important that you support their efforts to bring unbiased and thoughtful discussions into their conferences, starting with this one. In addition, the NBCH has recently continued its tradition of strong management with the appointment of Mike Thompson to the CEO role.

The good news about the latter is, according to the law of averages, they should soon be saying many intelligent things.


Petition: Allow Food Policy Panel Debate on Carbs-vs-Fats

Dear TheySaidWhat? Nation,

Nina Teicholz, author of The Big Fat Surprise, was kicked off the Food Policy Panel.  Likewise, qualified proposed replacements, such as the Harvard-affiliated leader of the Joslin Diabetes Center obesity program, were also nixed for political reasons. The back story is here — a carbohydrate defender was selected instead.  The replacement person in question works for an organization called — and we still have several days left in March so you know I’m not making this up — the Alliance for Potato Research and Education.

[Image: potatoes]

I know what it’s like to be excluded from panels due to the threats posed by one’s ideas.  (In my case, the “ideas” are mathematical proofs, of course.)  It may be that Teicholz’s ideas are wrong — she doesn’t believe the science supports a diet low in saturated fats and fats generally (which necessarily means substituting more carbs) — but they deserve an airing.  Remember, for years we were told that dietary cholesterol was bad for us (Cerner still puts that in their HRA)…largely because debate was stifled.  If we learned anything from that debacle, it’s this: the carbs-vs.-saturated-fats question needs to be debated.

Just like “pry, poke and prod” programs, our country’s nutrition policy — and hence, School Lunches, WIC, and everything taught by every MD, nutritionist and dietitian in the country — survives only because dissenting views (or proofs, in the case of wellness) don’t get aired.

Here is the petition.  Please sign it if you would like to see carbs-vs-fats debated.  Personally, I am fairly agnostic on this carbs-vs-fats debate itself.  Unlike wellness vendors, I see human physiology, genetics and psychology in shades of gray. Data influences my opinion. It’s called the “scientific method.”  That’s why I switched overnight from being the biggest advocate of wellness/disease management to the biggest critic — because we had all been measuring the data wrong.

In the case of fats-vs-carbs, my view would be that the jury is still out, and/or there may be an undiscovered or unappreciated “x-factor.” Or, it’s possible that some ethnicities or individuals react differently to different nutrient groups–just like they have different susceptibilities to diabetes.

So what I’m not agnostic on is this: we need a debate!


Are Participation-based Findings Invalid? The Wellness Industry Thinks So.

A New Yorker cartoon, which unfortunately I can’t find and wouldn’t pay to license even if I could, shows a couple watching their dog chase its tail.  The caption is: “I’ll admit he’s stupid, but you have to admire his commitment to stupidity.”

Likewise, I stand in awe of the wellness industry’s willingness to defy all rules of math, plausibility, science, biostatistics, integrity, and common sense in their commitment to the participants-vs-non-participants study design.

In a nutshell, wellness apologists would have you believe the following:  If you divide a company into employees who want to lose weight vs. employees who don’t, the difference in “before” and “after” weights is due to the program, not the difference in motivation to lose weight. And, further, that people who start in the motivated category but drop out shouldn’t count at all.

If this study design made sense, the FDA could dispense with double-blinded trials.  They could simply compare people who are eager to take a certain drug to people who couldn’t care less about taking the drug.  And what if someone taking the drug gets major side effects and drops out?  Heck, don’t count him at all.  Pretend he never existed.  Silly?  Of course. But “silly” is what wellness is all about–and I’ve just described their study design.
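To see how far that design can mislead, here is a toy simulation (all numbers are hypothetical): motivated employees’ claims drift down on their own, no program exists anywhere, and the “participants” still show hundreds of dollars in “savings”:

```python
import random

random.seed(0)

# Toy model with made-up numbers: 30% of employees are "motivated."
# Motivated employees' claims drift down ~$500/year on their own.
# There is NO wellness program anywhere in this simulation.
records = []
for _ in range(10_000):
    m = random.random() < 0.30
    before = random.gauss(5000, 1000)                     # "before"-year claims
    after = before + (-500 if m else 0) + random.gauss(0, 500)
    records.append((m, before, after))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# The participants-vs-non-participants "analysis": motivated people sign up.
part = mean(b - a for m, b, a in records if m)
nonpart = mean(b - a for m, b, a in records if not m)

print(f"'Savings' per participant:     ${part:6.0f}")
print(f"'Savings' per non-participant: ${nonpart:6.0f}")
```

The roughly $500 gap is the participation effect alone; bolt a genuinely useless program onto the motivated group and the comparison would credit the program with that same $500.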

Let’s consider the evidence. The beauty of this evidence is that it constitutes what a litigator would call “admissions against interest,” meaning that the perpetrators’ own statements invalidate their own case.  We don’t need to prove we’re right–the wellness industry proved it for us. Moreover, they did this despite massive investigator bias and publication bias in their attempt to show the opposite, sprinkled in with an attempted cover-up by Mr. Integrity himself, Ron Goetzel.

Wellness always appears to show savings because the “participation effect” isn’t isolated from the “program effect.” But what if you could isolate those two effects? What if there were situations in which:

  1. Participants were separated from non-participants but not offered a program to participate in?
  2. A program gave bad advice, the advice was taken…but participants still showed savings?
  3. A controlled experiment tested participation impact itself against program impact?

Remarkably, we have examples of all three: a compendium of self-immolating examples, all peer-reviewed, that have appeared in this blog before, though never in one place. And all continue to prove the mantra: “In wellness, you don’t have to challenge the data to invalidate it. You merely have to read the data. It will invalidate itself.”


Eastman Chemical and Health Fitness Corporation: Savings without a Program

The slide below — blessed by the entire Koop Award Committee — clearly shows savings increasing over the two “baseline years,” before the “treatment years” even started. Phantom savings reached a whopping $400/year/employee by 2006…without doing anything!

[Image: HFC Eastman Chemical slide showing “savings” accruing during the baseline years]

This was the example where Ron Goetzel, who had already known about this misrepresentation of savings, went back into the original Koop Award application and doctored it, removing the bottom axis altogether in order to prevent readers from seeing what really happened.  And then he wrote in Health Affairs that the “original is online and subject to review,” when in fact it wasn’t — only his doctored version was.

[Image: the rewritten HFC slide, bottom axis removed]

Coda: by the end of 2008, the “savings” for Eastman participants exceeded $900/year…but average participant risk declined only 0.17 on a scale of 5.  Yet they won a Koop Award. Is this a great country or what?


Participant costs decline immediately, thanks to a lowfat diet

By getting people to eat more carbs and less fat, Stanford researchers showed a reduction in costs of $397 in the first six months alone, or roughly $800/year, or about $2000 in today’s dollars.  Since participant risk factors only declined 2.2% and since some of the advice was demonstrably wrong, the only thing (besides regression to the mean) that could explain this result is simply being willing to participate, even if you do the wrong thing.

Yet despite the obvious invalidity, this study was selected for inclusion in Professor Baicker’s 3.27-to-1 paean to the wellness industry.  Without invalid studies she wouldn’t have been able to perform a meta-analysis in the first place.


Aetna admits nothing happened…but claims $1464 in savings/participant anyway

This was a controlled experiment in which two groups of fairly low-risk people were “Invited” and divided into what we will designate as Participation Offered (“Group PO”) and Control.  Group PO was offered the program, and a small number signed up; the Control group wasn’t offered it. At the end of the year, there was basically no difference in health status between the Control and the combined Invited groups…and in a few cases the Control group did better:

[Image: JOEM Aetna study outcomes, Invited groups vs. control]

Nor should there have been any difference. Subjects were Invited specifically because they were not chronically ill or even at risk for being chronically ill. Hence there was no opportunity to improve health status in a clinically meaningful way, especially in only 12 months. Further, the program itself was based on the highly questionable proposition that telling people they had a gene for obesity would motivate them to lose weight.

Yet, the participants in Group PO saved $1400+ as compared to the non-participants in Group PO. Just to be clear: once both those subgroups were totaled and compared to Control, any savings — or as shown above, health improvement — went away.

Pure participation bias. You couldn’t ask for a more compelling experiment. This was the one where an editorial advisor to the journal that published it apologized for allowing Aetna to show $1400 in savings when there was none. (See “Comments.”)


These are just the three most self-evident cases. But if you look hard enough, you’ll see a pattern in wellness, especially among award-winning programs: risk factors among participants barely budge, but huge amounts of money are saved.  Nothing is ever plausibility-tested, even though the HERO Guidelines urge exactly such an event-rate-based test.

At this point the best argument for hiring a “pry, poke and prod” vendor is that the economy depends on it. If you don’t keep sending these people money the entire country might slip back into recession. Breadlines could form. But the good news is that bread is a key component of a lowfat diet.

 

Measuring Progress Towards a “Culture of Health”: A Primer

Conventional “pry, poke and prod” wellness died last month, when Ron Goetzel, in an unexpected and unprecedented display of candor, announced that only “about 100 programs” have succeeded, but “thousands” fail.  That failure rate validates what we’ve been saying all along: wellness isn’t an industry–it’s a lottery. It was worth all the abuse I’ve taken for the satisfaction of seeing Ron — and most of his cronies — finally succumbing to proof.

[Image: New Yorker wellness cartoon]

Now that I’m done gloating (for today), it’s time for everyone to take Dee Edington’s advice instead — most recently in his new book coauthored with Jennifer Pitts — and move towards the attainment of a culture of health.  (Dee is joined in this pursuit by Jon Robison and others — all have advocated this for a long time.)

Ironically, that means cutting way back on your screenings, to a level that adheres to US Preventive Services Task Force guidelines, and investing the savings in your culture of health.

[Cue 10-second shameless plug…]

Obviously, Quizzify would be one such investment — our mantra is: “To create a culture of health, one must create a culture of health literacy.”

But this post isn’t about how to create such a culture, an undertaking whose complexity and direction will vary by organization. This post is about how to measure it.  It’s a clip-and-save (or download-and-file) set of metrics that might be expanded or otherwise modified over time, with reader input.

Holdover metrics

Most conventional measurements are about preventing diabetes and heart attacks, but your avoidable cost lies elsewhere — the top 25 DRGs have almost nothing to do with wellness.  Further, some metrics, like BMI, are demonstrably invalid as a population health measure. Considering all the conventional metrics together, even award-winning programs don’t generate noticeable improvements despite measuring only self-selecting active repeat participants. (Not to mention that Vitality, itself in the wellness business, couldn’t improve its own employees’ key metrics.)

Having said that, I would continue to measure one health outcome metric: “wellness-sensitive medical events (WSMEs),” as recommended in Why Nobody Believes the Numbers and now (albeit reluctantly) recommended by HERO.  Most importantly, that metric is a mainstay of Intel’s esteemed Validation Institute. The V-I has bestowed recognition on a select handful of health plans — Blue Cross of Louisiana, Blue Cross of South Carolina, Harvard Pilgrim, Novacare, Presbyterian, Providence.  You can see from the links how they use these measures to track (and reduce!) WSMEs (or just disease management events, depending on the plan). That measure is objective.  WSME rates respond to both disease management and wellness.  There is no selection bias, no regression to the mean. It is the single best indicator of control and avoidance of chronic disease.

New Proposed Metrics

A “culture of health” can’t be measured with facile blood values or weight. You need to measure culture, not lipids. And you need to do it without lining up employees to be weighed or otherwise inconvenienced or even humiliated. Weigh-ins disrupt your organization both literally and figuratively…and not in a good way.

Quite the opposite, my experience running a NASDAQ company steers me to measures that don’t disrupt operations to collect data. Those would be easily collectible, objective, and population-wide measures. Remember, you need the support of line managers, and an excellent way to get that support is bringing them insights, not taking their staff off the job.

Let’s start with absenteeism.  All-cause absenteeism is a great metric because absenteeism rates reflect not only illness, but stress, depression, and general job dissatisfaction – all indicators of a healthy culture.

It’s not that easy.  For starters, organizations must first distinguish scheduled from unscheduled paid time off. Once you’ve done that, here are several things to consider.

  1. Focus on changes in absenteeism, not absolute levels. We don’t have a benchmark. Benchmarks will vary by industry anyway.
  2. Measure everyone, not just “high-risk” employees who were in an “absence management program.” That is pure regression to the mean. The 20% who were most often absent last year will mostly not be the 20% who are most often absent this year.  So your program will always show success but your overall lost work days likely won’t decline.
  3. And don’t use self-reported changes, with questionnaires and HRAs asking: “How much work did you miss last year because of such-and-such?”
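Point 2 is easy to demonstrate with a quick simulation (all numbers hypothetical): flag last year’s most-absent 20%, run no intervention whatsoever, and the flagged group still “improves” while the company-wide average doesn’t budge:

```python
import random

random.seed(1)

# Toy model, made-up numbers: absence days = stable tendency + yearly noise.
n = 10_000
tendency = [max(0.0, random.gauss(6, 2)) for _ in range(n)]
year1 = [max(0.0, t + random.gauss(0, 3)) for t in tendency]
year2 = [max(0.0, t + random.gauss(0, 3)) for t in tendency]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# The "absence management program": enroll last year's top 20%...and do nothing.
cutoff = sorted(year1)[int(0.8 * n)]
flagged = [i for i in range(n) if year1[i] >= cutoff]

print(f"flagged group, year 1: {mean(year1[i] for i in flagged):.1f} days")
print(f"flagged group, year 2: {mean(year2[i] for i in flagged):.1f} days")
print(f"everyone, year 1:      {mean(year1):.1f} days")
print(f"everyone, year 2:      {mean(year2):.1f} days")
```

The flagged group’s absences fall several days per head with no program at all — pure regression to the mean — which is why the program-level result always “succeeds” while total lost work days stay flat.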

Next, recruitment/retention. What are your rates?  Why do people say they want to work at your company? What do they say on exit interviews? Do you force-rank employees or track the ones you most want to keep?  Compare your retention rates for them to other employees, who may stick around only because they don’t have other attractive options for employment.

Finally, some process-oriented organizations (like manufacturers) have easily measurable productivity standards.  Rework/defect rates might be one measure, though that could be noisy (a low signal-to-noise ratio).  If health specifically is being measured, rework/defect/incident and other productivity measures toward the ends of shifts could be contrasted with rates earlier in the day.  If a healthy workforce in a healthy culture does anything, it should be avoiding fatigue…and mistakes due to fatigue should correlate with continuous hours worked.

Each of those is a top-line measure that would then lead to root-cause examination…and at this point we can’t offer a one-size-fits-all solution (though my colleague and co-owner of this blog, Vik Khanna, consults on this topic). But that’s what measurement is all about—measure, manage, and then re-measure.  The difference is that for culture-of-health measurement, let’s start with valid measurements instead of repeating the mistakes of the past.

This is just a starting point.  We look forward to your comments, here and on LinkedIn, to refine these measures and take them to the next level.  Perhaps we will post an updated version including those new ideas.

Wellness: The Politicians vs. the People

This Huffpost points out how wellness is a microcosm of the political disconnect between the “establishment” and the “insurgency.”  The Business Roundtable is all-in for it, as is Congress and the White House (meaning BOTH parties).

And yet the people — the actual voters — are dead set against it.

We’ll be starting a series soon on wellness done FOR employees vs. wellness done TO employees. This Huffpost is excellent background on why the latter is quite literally the most unpopular part of the entire Affordable Care Act — regardless of whether you like the rest of it or not.

And let your elected representative know what you think of it — add a comment or a “like.”

Wellness Meets 1984

A truly Orwellian wellness program would:

  1. Force employees to do something that they shouldn’t do;
  2. Fine them if they don’t do it; and
  3. Make the whole thing so complex that employees won’t be able to figure out what you’re talking about and employers won’t be able to administer it.

And that is indeed exactly what Hodges-Mace is proposing in this week’s Employee Benefit Advisor.  Even though I’ve visited the Hodges-Mace website, I can’t figure out what they’re selling, but whatever it is, this article isn’t exactly a ringing endorsement for buying it.

Hodges-Mace wants employees to all get annual physicals, even though Choosing Wisely, the New England Journal of Medicine and everyone else says not to, because the potential harms of overtreatment exceed the benefits. (The list of harms doesn’t even include the cost of those visits themselves.)

Hodges-Mace’s exact words: “Use financial penalties to sway employees to see their PCP. If they don’t, charge them a surcharge.”  Going into detail:

If the employee fails to comply, she must pay the surcharge, and it is this practice that encourages higher participation rates than traditional wellness programs and generally results in a positive ROI for these programs during their first year.

I had to read this multiple times to figure it out.  The more employees who get useless checkups, the higher the ROI? Or is it the reverse? The more employees who pay the fines, the higher the ROI?  Or, using wellness math, is it both?  In 1984, War is Peace and Ignorance is Knowledge. In this wellness program, spending is savings, and…

…Complexity is Simplicity

The author, Hodges-Mace’s Eric Helman, says: “Employers are moving to simpler programs with tangible results and nearly immediate cost savings,” like this wellness-or-else program Hodges-Mace is advocating.  How “simple” is this program?

On average, 8 to 12% of employees will opt out and elect to pay the surcharge, which is based on complex rate tables for both the employee and her dependents…

If the employee chooses to participate in the program, the system has to verify her compliance, a process that typically entails multiple reminders, follow-up communications and documentation from the PCP.

So far this is a piece o’ cake, right? Complex rate tables, compliance verification, multiple reminders, pestering the PCP.  Not so fast — here is the rest of the cake:

The benefit system must be able to seamlessly accept information from the verification process, calculate and apply surcharges, and then communicate the appropriate information to the payroll system. These actions and the rules they are based on must also be integrated into the process of handling qualified life events.

In other words, it takes a lot of complexity to make a program simple, a lot of spending to save money, and a lot of harms to benefit employees.

One is reminded of the immortal words of the great philosopher Major Chester Brown, as told to the AP’s Vietnam correspondent Peter Arnett: “We had to destroy the village in order to save it.”


Hey, I know it isn’t always about Quizzify, but we try to un-teach exactly what Hodges-Mace and other wellness companies teach:  Astronomy is not a health issue. Don’t schedule a checkup just because the earth completes a revolution of the sun.  We educate employees on when and when not to visit the doctor. Avoiding useless checkups–now that’s something that really will save money right away.  

But we should caution: even though the science is overwhelming and the economics are transparent, it still takes months for us to get this message across. Unlearning incorrect information is not easy…and it doesn’t help that wellness vendors are teaching the opposite.

Part 2: “Harvard Study” Showing Wellness ROI Is WAY Worse Than Anyone Thought

Like sausage-making, Trump University, or the sun, the “Harvard Study’s” 3.27-to-1 ROI should not be examined too closely.

And yet, throwing caution to the wind and making sure my stomach was empty, I delved (as promised) into the individual studies comprising this meta-analysis to understand what it was about their design and findings that allowed them to be repackaged into the wellness industry’s raison d’être.

I had been quite adamant in the previous post that this whole gold-plated meta-analysis was likely based on garbage case studies. I compared it to packaging zero-doc home loans into AAA-rated collateralized mortgage obligations (CMOs). I realize now that was a premature and unfair comparison. So I owe an apology–to Bear Stearns, Lehman Brothers, and Countrywide.

What Katherine Baicker did was way more mathematically unforgivable because it was already obvious that the results were wrong when she wrote her article.  For the CMO packagers, the invalidity didn’t become completely obvious for at least a few weeks after they started foisting these CMOs onto their customers.

Only four of Professor Baicker’s studies have abstracts or full text available at no cost. Let’s consider these four case studies one by one.  I’ll be gosh-darned if I am going to pay for the privilege of reading the others. The authors should be paying me for remedial lessons in arithmetic.


This study, in a now-defunct journal, found that members of a high-risk cohort showed a greater decline in claims than members of lower-risk cohorts.  To use a highly technical health services research term, and please don’t be put off by my jargon: duh.

Suppose risks are “heads” and you have three sets of 100 pennies. The first has 100 heads, the second 75 heads, the third 50 heads.  Flip all three groups of pennies. You’ll reduce heads by about 50 in the first group, 25 in the second, and 0 in the third. Biostatisticians call those declines “regression to the mean.” Dee Edington calls them the “natural flow of risk.” Apparently Professor Baicker calls them “pennies from heaven.”
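The penny analogy is easy to check in a few lines of code (a toy sketch, not a model of any actual study):

```python
import random

random.seed(42)

# The penny analogy in code: a "head" is a risk factor, and flipping all
# 100 pennies is one year of natural risk fluctuation (50/50 either way).
# Expected heads after the flip is 50 no matter how many you started with,
# so the highest-risk group shows the biggest "improvement" all by itself.
def heads_after_flip(n_pennies=100):
    return sum(random.random() < 0.5 for _ in range(n_pennies))

for start in (100, 75, 50):
    after = heads_after_flip()
    print(f"group starting at {start} heads: {after} heads after the flip "
          f"(apparent 'risk reduction': {start - after})")
```

Every group lands near 50 heads, so the apparent “risk reduction” is simply the starting count minus 50 — no program required.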


Stanford researchers chimed in with a study that self-invalidates three different ways:

  1. They also used a low-risk group as a control for a high-risk group — and were equally thrilled when the high-risk group showed a greater decline in risk factors than the low-risk group
  2. They achieved a whopping $1000 per participant in savings (in today’s dollars) — in a six-month period, no less. Let your imagination run wild to imagine the dramatic risk reduction needed to achieve an immediate $1000 in savings.
  3. Actually, you can let your imagination run tame, because the high-risk group’s self-reported risk scores improved only 2% vs. the low-risk control.

And at the risk of sounding like Yogi Berra, even if this result were possible, it would be impossible. The improved scores were achieved in part by eating more carbs and less fat. This study was done in the era when fat of all types was demonized, and Kellogg’s Frosted Flakes earned a “heart-healthy” label from the American Heart Association.

Awful stuff, those fats and oils. (Not!)


What raised my eyebrows for this next study is that wellness produced the greatest savings in the following categories: “neoplasms, digestive disorders, and blood and blood-forming organs.” Let’s examine each category in turn.

If anything, “neoplasm” expense would rise in that short a period, since in those days women over 35 were given annual mammograms despite the overwhelming likelihood of false positives.

Digestive disorders?  Even wellness vendors don’t claim to reduce cases of ulcerative colitis or Crohn’s Disease.  Colon cancer would increase because in those days people got screened every five years.  And we know from the Nebraska program that those screenings catch a boatload of benign polyps — or as gastroenterologists call them, thousand-dollar bills.  (They can charge much more when they remove polyps than when they can’t find any to remove.)

How, you might ask, does a wellness program reduce diseases of “blood and blood-forming organs”?  A better question: what is a disease of “blood and blood-forming organs”? Most wellness vendors probably couldn’t even name one. Here’s the list.  The highly favorable results in this wellness program suggest that eating more Frosted Flakes and less olive oil prevented sideropenic dysphagia, lymphangioma, and of course von Willebrand disease.  And for you Ted Nugent fans, there’s cat scratch fever.

[Image: Ted Nugent]


No meta-analysis showing massive savings is complete without Ron Goetzel. His study of Procter & Gamble showed a 29% reduction in spending for participants vs. non-participants in wellness.  Those annual mammograms and lowfat diets must really have paid off for them.  Either that or…could it be that participants-vs-non-participants is now proven to be an invalid study design?  Maybe Ron didn’t know that at the time — that was years before he tampered with some data in order to maintain the fiction that this study design isn’t obvious nonsense.

This massive savings — worth billions in the stock price given P&G’s reasonably high P/E multiple — was so critical to P&G (my client now) that no one there has any memory of it.  And no wonder — even the world wide web is younger than this study.


So what next?  We are asking Katherine Baicker to set things right.  It’s Pottery Barn rules: she broke it, she fixes it.  We would like her to do a study showing the mathematical and clinical impossibility of savings from “pry, poke and prod” programs.  To paraphrase the immortal words of the great philosopher Charlie Brown, this is her chance to be the hero instead of the goat.

charlie brown

Another Expert Slams Fat-Shaming Corporate Weight-Loss “Challenges”

While vendors like HealthyWage are pushing company weight challenges onto unsuspecting and poorly advised wellness directors — and wellness promoters at the University of Pennsylvania are subjecting their own employees to weight-loss experiments to encourage corporations to do more of them — real researchers are urging a halt to these activities.  They do more harm than good.

We, of course, have been calling out fat-shaming for months on this site–with specific attention to companies like Johnson & Johnson, ShapeUp, and Vitality Group, all companies that want to profit from fat-shaming in various ways.  In particular, we wrote a very well-received Huffpost on this topic three months back.

And a year ago, we called for an end to these fat-shaming programs, showing that they made no impact on health expenses, productivity or profitability.

However, we did not explore this topic remotely as well as Pat Barone, in today’s LifeZette. (While LifeZette is Laura Ingraham’s publication, Pat Barone lives in the People’s Republic of Madison, so the politics cancel out.)  We urge everyone to take a look-see here…and then when you’re ready to sue your wellness vendor and need an expert witness, who you gonna call?

We invariably get fast settlements.  No vendor wants to face me in court, where even wellness vendors are required to tell the truth.  Facts, as we often say, are the wellness industry’s worst nightmare.

It’s Time to Retire the Infamous “3.27-to-1 ROI” from the “Harvard Study” (Part 1)

“You know that laundry detergent that gets bloodstains out of t-shirts?  I’d say if you’re routinely coming home with bloodstains in your t-shirts, laundry probably isn’t your biggest problem.”

Recognize that line?  That was Jerry Seinfeld’s signature joke, the one that more than any other propelled his early fame.  Having used it for a decade, he retired it shortly after Seinfeld went on the air.

Now it’s time to retire the wellness industry’s signature joke: the so-called Harvard Study and its 3.27-to-1 ROI. Among other things, as we’ll see, a large chunk of the data in that study also predates Seinfeld, a show which recently celebrated its 25th anniversary.  You know the old line: “How can you tell a lawyer is lying?” Answer: “His lips move.”  Well, the way you can tell a wellness vendor is lying is that they still cite that study, knowing it to be completely invalid.  (If they actually think that 3.27-to-1 is valid, in a way that’s even worse.)

Some Background

The article, which today wouldn’t pass peer review by the normally cautious Health Affairs, was rushed into publication during the healthcare reform debate as a favor to David Cutler, a healthcare advisor to President Obama and an author on the study.  Along with the results from the Safeway wellness program — which turned out not to exist (and I’m referring not just to the results but literally to the program itself) — this meta-analysis provided cover for the 30%-to-50% clawback provision (for non-smokers and smokers respectively)  in what became ironically known as the Safeway Amendment to the Affordable Care Act.

In turn, this clawback was needed to secure the support of the Business Roundtable (BRT) for the Affordable Care Act. The BRT was and is obsessed with this provision. Not because they are especially concerned about employee health — the guy who ran their wellness lobbying also ran a casino company, exposing thousands of employees to second-hand smoke.  Rather, because this clawback provision creates billions in forfeitures that go back into the coffers of its member organizations.  Bravo Wellness accidentally spilled that secret.  The worse the program, the more an organization can save via forfeitures. For instance, Wellsteps saved millions for the Boise School District even though healthcare spending/person jumped–simply by creating a program so unappealing that employees preferred to obtain insurance through their spouses. (Either that or they outright lied–you make the call.)

How Prof. Baicker Invalidated Her Own Study

The title was definitive and the conclusion was amazingly precise. The title was: “Workplace Wellness Can Generate Savings.”  Not “might” or “possibly could on a good day” but “can.” And the conclusion itself left little doubt.  The ROI wasn’t “approximately 3-to-1” or even “approximately 3.3-to-1” but rather a statistically precise 3.27-to-1.  You don’t make such a definitive pronouncement without being ready to defend it.

Or maybe you do. Here’s how Professor Baicker’s story changed on various occasions once it became clear that her result was a major outlier, and that wellness loses money:

  1. It’s too early to tell whether these programs pay off, and employers should experiment on their employees.
  2. She’s not interested in wellness any more.
  3. People aren’t reading the paper right. They aren’t paying enough attention to the “nuances.” Shame on the readers!
  4. “There are few studies with reliable data on the costs and the benefits.”

This may be unique in healthcare history: writing a paper and then almost immediately retracting its key finding, claiming you have no interest in the topic even as your fans deify you, and then blaming readers for believing the conclusion.

But the biggest head-scratcher is the fourth item.  Here is someone who just published a conclusion with two-decimal-place precision now saying (accurately, as we’ll see) that, oh, by the way, the studies she analyzed are mostly garbage.  You know the old saying: “In wellness, you don’t have to challenge the data to invalidate it. You merely have to read the data. It will invalidate itself.”  She was kind enough to save us the trouble.

When RAND’s wellness uberguru Soeren Mattke saw this stuff, along with her refusal to defend it, he went ballistic.  If you know Soeren, you know he is a very even-tempered, even-handed guy.  It takes a lot to annoy him…and yet she did.  Here is his smackdown of her work.

Studies Comprising the Meta-Analysis

If you’ve been following the Baicker saga for a while, none of this is news.  Here’s what is news: thanks to a long plane ride with both an internet connection and an assortment of in-flight films I had already seen, I was able to dig into the actual studies.  My conclusion: combining these third-rate studies in third-rate journals into a Health Affairs meta-analysis was like combining a hodgepodge of individual subprime mortgages written for first-time homebuyers with 5% down payments and no credit history into an AAA-rated collateralized mortgage obligation.

In terms of timely relevance, of these 22 studies:

Keep in mind too that the average time between when a study begins and publication is about four years, meaning much of this data was collected before some TheySaidWhat? readers were born.  Wellness-sensitive medical events (in both exposed and non-exposed populations of all ages, for reasons having nothing to do with wellness) have fallen by about 70% since most of this data was collected, meaning even if these studies were valid at the time (they weren’t), they have no relevance today. Likewise, dietary recommendations are now the opposite of what they were then, people can no longer smoke in their offices, and so on.  There is nothing else in healthcare where data on interventions from 20 and 30 years ago is considered relevant.

In terms of publication bias, consider the journals:

Piling investigator bias on top of publication bias, 14 of these studies were authored or co-authored by members of the Koop Award Committee, a committee that can’t spot obviously fabricated outcomes even after I point them out, people whose entire livelihoods depend on making up savings figures, and who endorse admitted liars.

The Actual Data

Every study that used a comparison group employed some variation of the demonstrably invalid participants-vs-non-participants methodology.  Not one study was plausibility-tested.   Almost every one of them showed savings far in excess of what could be saved if all wellness-sensitive medical events were wiped out.

Quite literally no one thinks savings can be achieved within 18 months, and yet the majority of studies with a comparison group were 18 months or less.  All showed massive savings.  One study lasted only 6 months, but showed roughly 20% savings nonetheless.
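A plausibility test takes about four lines of arithmetic: claimed savings cannot exceed what the employer spends on wellness-sensitive medical events in the first place. Here is a sketch of that ceiling calculation — the dollar figure and the wellness-sensitive share below are illustrative assumptions, not data from any of the 22 studies:

```python
def max_possible_savings(total_spend_pppy, wellness_sensitive_share):
    """Upper bound on savings: wipe out 100% of spending on
    wellness-sensitive events. No program can beat this ceiling."""
    return total_spend_pppy * wellness_sensitive_share

total_spend = 6000      # assumed total spend per person per year ($)
ws_share = 0.06         # assumed share tied to wellness-sensitive events
claimed = 0.20 * total_spend  # a claimed 20% savings, as in the 6-month study

ceiling = max_possible_savings(total_spend, ws_share)
print(f"theoretical ceiling: ${ceiling:,.0f} per person")
print(f"claimed savings:     ${claimed:,.0f} per person")
print(f"claim exceeds ceiling by {claimed / ceiling:.1f}x")
```

Under those assumptions, the claimed savings exceed the theoretical maximum severalfold — before you even ask whether the program eliminated every heart attack, stroke, and diabetes event in the population, which is what reaching the ceiling would require.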

You might ask, how could all that money be saved in such a short period of time? For that answer, you’ll have to wait a couple of days. In Part 2, we will dig into a few of the studies themselves to see where the magic happens…and we do mean “magic,” since nothing else can explain those preternatural results.
