They Said What?


Monthly Archives: January 2017

And the envelope please. The best outcomes evaluator in wellness is…

The best outcomes evaluator in the wellness field is Dr. Iver Juster.*

*Among the subset of males not affiliated with They Said What.

Why Dr. Juster’s Case Study Is the Best Case Study Ever Done in This Field

Chapter 2 of the HERO Guide is a great study and deserves high praise. But before we get into the salient points of what makes this absolutely the best case study analysis ever done in this field, be aware that the provenance is not a coincidence. Dr. Juster is very skilled at evaluation. Indeed, he was the first person to receive Critical Outcomes Report Analysis (CORA) certification from the Disease Management Purchasing Consortium. (Dr. Juster very graciously shares the credit, and as described in his comments below would like to be listed as “the organizer and visible author of a team effort.”)

Note: the CORA course and certification are now licensed for use by the Validation Institute, which has conferred honorary lifetime certification on Iver gratis, to recognize his decades of contribution to this field. (Aside from the licensure, the Validation Institute is a completely independent organization from DMPC, from They Said What, and from me. It is owned by Care Innovations, a subsidiary of Intel. If you would like to take the CORA Certification course live, it is being offered next in Philadelphia on March 27. You can take it online as well.)

Early in the chapter, Iver lists and illustrates multiple ways to measure outcomes. He dutifully lists the drawbacks and benefits of each, but, most importantly, notes that they all need to be plausibility-checked with an event-rate analysis, of which he provides a detailed example using data from his own work. In an event-rate analysis, wellness-sensitive medical events are tracked over the period of time in question.

Wellness has never been shown to have a positive impact on anything other than wellness-sensitive events. Consequently, there is no biostatistical basis for crediting, for example, “a few more bites of a banana” with, to use our favorite example, a claimed reduction in cost for hemophilia, von Willebrand disease and cat-scratch fever.

[Image: Ted Nugent]

By contrast, real researchers, such as Iver, link outcomes with inputs using a concept called attribution, meaning there has to be a reason logically attributable to the intervention to explain the outcome. It can’t just be a coincidence, like cat scratch fever. As a result, he is willing to attribute only changes in wellness-sensitive medical events to wellness.

Event-Rate Plausibility Analysis

Event rates (referred to below as “PPH” or “potentially preventable hospitalizations”) are laid out by disease on page 22 of the HERO Report. Note the finding that PPH are a small fraction of “all-cause hospitalizations.” Though the relative triviality of the magnitude of PPH might come as a surprise to people who have been told by their vendors that wellness will solve all their problems, Iver’s hospitalization data sample is representative of the US as a whole for the <65 population, in which chronic disease events are rare.

[Image: HERO Report event-rate totals]

Gross savings total $0.99 per employee per month. This figure counts all events suffered by all members, rather than excluding events suffered by non-participants and dropouts. Hence it marks the first time that anyone in the wellness industry has included those people’s results in the total outcomes tally — or even implicitly acknowledged the existence of dropouts and non-participants. He also says, on p. 17:

For example, sometimes savings due to lifestyle risk reduction is calculated on the 20% of the population that supplied appropriate data. It’s assumed that the other 80% didn’t change but if some of the people who didn’t supply risk factor data worsened, and people who got worse were less likely to report their data, that model would overestimate savings.

Note that the PPH declined only in cardiac (“IVD”) and asthma. Besides the event rates themselves being a representative snapshot of the employed US population, the observed declines in those event rates are almost exactly consistent with declines nationally over that same period. This decline can be attributed to improvements in usual care, improvements that are achieved whether or not a wellness program was in place. The existence and magnitudes of the declines, coupled with the slight increase in CHF, diabetes and COPD combined (likewise very consistent with national trends), also confirm that Iver’s analysis was done correctly. (Along with attribution, in biostatistics one looks for independent confirmation outside the realm of what can be influenced by the investigator.)
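For readers who want to see the mechanics, here is a minimal sketch of that kind of plausibility check. Every number below is hypothetical, invented for illustration — none comes from the HERO Report. Only the logic mirrors the approach described above: compare the observed decline in wellness-sensitive event rates to the secular (national) decline, and attribute to the program only the excess.

```python
# Hypothetical event-rate plausibility check. All figures are illustrative
# inventions, not data from the HERO Report.
# Rates are wellness-sensitive events per 1,000 members per year.

baseline_rate = 8.0     # program population, year before the program
observed_rate = 7.4     # program population, measurement year
secular_decline = 0.07  # assumed 7% national decline in the same events

# What the rate would have been with no program, given national trends alone
expected_rate = baseline_rate * (1 - secular_decline)

# Only the decline beyond secular trend is plausibly attributable to the program
program_effect = expected_rate - observed_rate

print(f"expected (trend only): {expected_rate:.2f} per 1,000")
print(f"observed:              {observed_rate:.2f} per 1,000")
print(f"attributable decline:  {program_effect:.2f} per 1,000")
```

With these made-up inputs the attributable decline is nearly zero, which is exactly the situation the paragraph above describes: when the observed decline mirrors the secular trend, the program itself gets credit for almost nothing.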

It is ironic that Ron Goetzel says: “Those numbers are wildly off…every number in that chapter has nothing to do with reality” when I have never, ever seen a case study whose tallies — for either total events or event reduction, let alone both — hewed closer to reality (as measured by HCUP) than this one.

Another factor that conveniently gets overlooked in most wellness analyses is that costs other than PPHs rise.  By contrast, Iver is the first person to acknowledge that:

[Image: HERO Report excerpt showing increases in other costs]

The implication, of course, is that increases in these costs could exceed the usual care-driven reductions in wellness-sensitive medical events. Indeed, Iver’s acknowledgement proved prescient when Connecticut announced that its wellness program made costs go up.

The $0.99 gross savings, and Connecticut’s healthcare spending increase, exclude the cost of the wellness program itself, of course. Factor in Ron Goetzel’s recommendation of spending $150/year for a wellness program and you get some pretty massive losses.
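The arithmetic behind those losses is simple enough to sketch, using only the two figures quoted above ($0.99 per employee per month in gross savings, and the $150-per-year program spend attributed to Goetzel’s recommendation):

```python
# Net-loss arithmetic for the figures quoted above.

gross_savings_pepm = 0.99        # dollars per employee per month (gross savings)
program_cost_per_year = 150.00   # dollars per employee per year (program spend)

annual_gross_savings = gross_savings_pepm * 12
net_per_employee = annual_gross_savings - program_cost_per_year

print(f"annual gross savings: ${annual_gross_savings:.2f} per employee")
print(f"net result:           ${net_per_employee:.2f} per employee")
```

In other words, roughly $12 of gross savings against $150 of spend: a net loss of about $138 per employee per year, before counting the cost increases noted above.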


The old Al Lewis would close by making some reference to the dishonesty and cluelessness of the Health Enhancement Research Organization’s board. The new Al Lewis will do just the opposite. In addition to congratulating Iver Juster (and his co-author, Ben Hamlin) on putting this chapter together, I would like to congratulate the Health Enhancement Research Organization, for what Iver describes as the “team effort” in publishing it — HERO’s first flirtation, however fleeting and inadvertent, with integrity and competence.



Iver Juster Comments on the Article

Iver reviewed this article and would like to add several points. I am only adding a couple of my own points, noted in indented italics:

  • It’s important to credit the work to a larger group than just myself. I was the ‘lead author’ on the financial outcomes chapter of the HERO/PHA measurement guide, but the work entailed substantial planning and review in collaboration with the chapter’s coauthor (Ben Hamlin from NCQA) and members of the group dedicated to the chapter (as well as the HERO/PHA authoring group as a whole).
    • Yes, I am more than happy to credit the entire group with this study, especially Ron Goetzel, Seth Serxner and Paul Terry.
  • Nonetheless the work does reflect my perspective and approach on the topic – the important points being (a) select metrics that are impactible by the intervention or program; (b) be transparent about the metric definitions and methodology used to measure and compare them; (c) assiduously seek out potential sources of both bias and noise (in other words, exert the discipline of being curious, which is greatly aided by listening to others’ points of view); (d) understand and speak to the perspective of the study—payer, employee/dependents, clinician/healthcare system, society.
  • Be particularly sensitive to the biologically-plausible timeframes in which your outcomes ought to occur, given the nature of the program. Even if optimally implemented with optimal uptake and adherence, we might expect ‘leading indicators’ like initial behavior changes to improve quickly; program-sensitive biometrics (lipids, A1C, blood pressure, BMI) and medication adherence to change in a matter of months; and a few program-sensitive ER/inpatient visits (like worsening heart failure or asthma/COPD exacerbations) to improve within several months (again, assuming the program is designed to address the causes of these events). Longer-term events like kidney failure, heart attack and stroke and retinopathy take much longer to prevent partly because they require sustained healthy behavior, and partly due to the underlying biology.
    • This is one excellent reason that the measured event rate decline mirrored the secular decline in the US as a whole over the period, meaning the program itself produced no decline over that period. Events might decline in future years if Iver is correct. Ron Goetzel would take issue with Iver’s assertion — Ron says risk factors decline only 1-2% in 2-3 years.
  • Event rate measurement in any but the largest commercially-insured populations is subject to considerable noise. Though a challenge, estimating confidence intervals should at least shed light on the statistical noisiness of your findings.
    • No need this time because your results hewed so closely to secular trend, reflecting the quality of the analysis.
  • It is very likely that the program used in the illustration did affect more than the events shown because it was a fairly comprehensive population health improvement initiative. For example, ER visits were not counted; and collateral effects of ‘activation’ – a very key component of wellness – were not included in this analysis. Assuming the 99 cents is an accurate reflection of the program’s effect on the events in the chart, I’d be willing to increase the actual claims impact by 50 to 75%.
    • If your speculation is accurate, that would increase gross savings to $1.49 to $1.73/month–before counting preventive care increases indicated on Page 22.
  • Nonetheless, to get effect from an effective program you have to increase both the breadth (number of at-risk people) and depth (sustained behavior change including activation) – but at a cost that is less than a 1:1 tradeoff to the benefit. In other words, you must increase value = outcomes per dollar. This cannot be done through incentives alone – as many researchers have shown, if it can be done at all, it must be the result of very sustained, authentic (no lip service!) company culture.
  • We are beginning to pay attention to other potential benefits of well-designed, authentic employee / workplace wellness programs (of which EHM is a part) on absenteeism, presenteeism, employee turnover and retention – and, importantly, company performance (which is after all what the company is in business to do). It’s early days but it’s possible research will show that companies that are great places to work and great places to have in our society will find financial returns that far outstrip claims savings. The jury’s still out on this important topic but let’s help them deliberate transparently and with genuine curiosity.
  • Did Ron really say you have to spend $150 per year PER MEMBER on a wellness program? I’d be thinking a few dollars (unless he’s including participation incentives)

 

And the Envelope Please. The Best Outcomes Evaluator in Wellness Is…*

*Among the subset of males not affiliated with They Said What.


Alert readers may recall that my New Year’s resolution was to balance my negative postings about the wellness industry with positive ones. Like Diogenes searching for an honest man, I thought finding the latter would be hard, but, just as Romy Antoine did earlier this month, the subject of this posting — to be named in Part Two — makes that easy. Part One sets the stage for the review of his study.

By way of background, in preparation for bringing a possible lawsuit, I re-read the famous Chapter 2 of the equally famous HERO report. That was the chapter which inspired Ron Goetzel, Seth Serxner and Paul Terry (who was recently anointed as the American Journal of Health Promotion’s new Fabricator-in-Chief) to circulate their defamatory letter about me to the media, in a singularly self-immolating attempt to discourage them from publishing my material. They insisted that Chapter 2 was pure fabricated nonsense, rather than a carefully analyzed report of real data. Here is an excerpt from their actual letter, copies of which are available from me; it is summarized here:

A fabricated…absurd, mischievous and potentially harmful misrepresentation of our data.

Ron said it best in our Great Debate, minute 1:17 in the MP3 downloadable here:

Those numbers are wildly off…every number in that chapter has nothing to do with reality. 

However, the sun rises in the east, taxes are due April 15th, and Ron Goetzel is lying.  Quite the contrary, Chapter 2 turns out to be a carefully analyzed report of real data — almost certainly the best case study ever published.

How did I learn that Ron was fabricating a story that his guidebook had fabricated a story?

  1. This chapter says it’s a real report, on p. 22.
  2. Since this chapter’s analysis was so far above the pay grade of those three aforementioned HERO characters, I checked the acknowledgements in the HERO book. Sure enough, none of the HERO cabal wrote it. Someone else (to be named in the next posting) was the lead author, and I called to congratulate him on it. I also asked him some background questions, one of which proved very revealing. It turns out that…
  3. This real analysis of real data was — get ready — reviewed prior to publication by the exact same people who are disowning it now. Yes, among the people who peer-reviewed it prior to publication were the very same Ron Goetzel, Seth Serxner, and Paul Terry. (In addition to them doing the actual review, the lead author, very graciously sharing the credit, wanted to make sure that I indicate that he was only the “organizer and visible author of a team effort.”)

Yes, as is so often the case with these three, they lied about the lies that they lied about.  It’s quite ironic that their argument against my original praise of this analysis was to insist that because my source was their own lies, my own analysis was unreliable.  These lies above don’t include the actual lies I might sue them about, which were lies about me, which are totally separate from their lies about their previous lies. (Their lie about me was that I had a history of outrageously inaccurate statements, none of which they have ever been able to identify.)

These characters aren’t ordinary run-of-the-mill alternative fact-type liars.  They’re way beyond that.

Their lies go to 11.

[Image: goetzel-on-fire]


Coming soon, the reveal…

Support our colleague Jon Robison by supporting MS research…and win a prize

Our esteemed colleague Jon Robison, co-creator of the Employee Health Program Code of Conduct, has had multiple sclerosis (MS) for several decades. MS is one of the approximately 30,000 diseases known to mankind that can’t be prevented or alleviated by “eating one more bite of a banana,” as Wellsteps would have us believe is true for most chronic diseases, such as chronic banana deficiency-itis. And Jon exercises quite a bit too, as this picture shows, so “taking a few more steps a day” (another Wellsteps alternative fact) isn’t going to help cure his MS either.

[Image: Jon Robison exercising]


Here are two ways to support Jon and MS.

First, I will match contributions to the Climb to the Top, a race to the pinnacle of the Hancock Clarendon Tower, here in Boston on March 4, to support MS research.  (There is some irony in my climbing to the top while the wellness industry is racing to the bottom.)

This is the building. After you look at the picture, you can also hazard a guess (by commenting below) as to how long it will take me to reach the 63rd floor. (If it helps, I’ll be turning 61 that day.)  The closest guess by a contributor wins a copy of one of my books, Surviving Workplace Wellness, Cracking Health Costs, or Why Nobody Believes the Numbers.

[Image: the Hancock Tower]


Second, you can “support” Jon Robison (and his colleague Rosie Ward — and, most importantly, yourself!) by joining their program, a great opportunity to apply practical skills to transform workplace cultures where both the organization and the people within them can thrive. Past participants have even called their program “life-changing”. More information and a link are included below. I also have been provided a discount code I can pass on to you that provides a $200 discount; just use BEWELL17 at checkout.

There are just a few spots left in the Salveo Partners Thriving Workplace Culture Certificate™ (TWCC) online training program. Get all the tools, templates, and support you need to create a lasting impact on workplace wellbeing while earning continuing education credits! Get more information and register at https://salveopartners.com/products-services/twcc/. It’s starting quite soon, so don’t dawdle like I do when I have to send someone money.

Here are a couple of named testimonials so you know I don’t make this stuff up — which I don’t because we here at TSW are in the “integrity segment” of the market.

I don’t think I’ve ever read a book or taken a class or attended a conference that was so challenging (and helpful) for me in my professional development. This course has been extremely awesome… while putting my world in upheaval and at times exhausting me mentally… but I wouldn’t have it any other way! I’m so glad I took this and have so much material to work with and apply in the future. Thanks Jon and Rosie!

– Nathan Taylor

Each week I looked forward to the 90 minutes of training. It is so inspiring to be part of the evolution of the employee wellness field. Both Jon and Rosie are fantastic leaders for this training and in the field as well. I am thankful that my organization allowed me to complete this training and I am looking forward to the future of wellness!

– Megan Matuszeski

 

Stick a fork in it: Participants-vs-non-participants study design is dead

If instead of randomized control trials, the FDA simply allowed drug companies to compare the results of people who conscientiously used a drug to people who couldn’t be bothered, they could save a ton of money.

Not that I want to put ideas in their head.

But they don’t allow that, for the simple reason that active motivated people will always outperform inactive unmotivated people. Absent equal intent-to-treat — meaning comparing people actually taking the drug to people who think they are actually taking the drug — you can’t distinguish the effect of motivation from the effect of the drug.

As true as that is in drug research, it is even more true in activities, such as wellness, where motivation is paramount.  And yet the standard study design — participants vs. non-participants — compares all the motivated people to all the unmotivated people.

Sure, the Wellness Ignorati will claim the groups are “matched,” and on paper maybe their demographics are the same…but you can’t “match” the state of mind between participants and non-participants, as the Ignorati well know.

This American Journal of Managed Care essay, a more formal albeit less colorful version of an earlier TSW smackdown, means that quite literally every study done using this participants-vs-non-participants design is either largely or entirely invalid.

 

Schlumberger’s program confirms that in wellness, harming employees is the new black.

Silly us. We thought Wellsteps’ wellness program was an outlier when it came to harming employees. But Wellsteps has nothing on Schlumberger. Almost immediately after reading last year’s post on the harms of crash-dieting contests run by Healthywage, Schlumberger instituted — hang on to your hats — a crash-dieting contest run by Healthywage. They even included some decent-size prizes — $1000 per winning participant. How did they finance those prizes, with sales of their drilling equipment falling by about half from their peak? Simple. They canceled their employee gym membership subsidy.

Then, as some may remember, it turned out that Healthywage’s understanding of arithmetic rivaled their understanding of obesity. In their contest, somehow five teams lost exactly 16.59% of their body weight.  This is clearly alternative math, since the chances of that coincidence using real math are about 1 in 4 quadrillion, meaning that the odds of winning the lottery are about 1000 times better than the odds that Healthywage’s executives are not a bunch of idiots.
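For the curious, here is a back-of-the-envelope version of that odds calculation. The range and resolution assumed below are hypothetical, and the exact odds depend heavily on them — but any remotely plausible choice lands in astronomically-unlikely territory, which is the point:

```python
# Back-of-the-envelope odds that five teams independently post the exact same
# weight-loss percentage. Assumptions (both hypothetical): the final percentage
# is reported to two decimal places, and lands roughly uniformly in a 0-30% range.

distinct_values = 30 * 100   # 3,000 possible two-decimal percentages
teams_matching = 5

# Probability that the other four teams each match the first team's value
p_all_match = (1 / distinct_values) ** (teams_matching - 1)

print(f"odds: about 1 in {1 / p_all_match:,.0f}")
```

Under these assumptions the odds come out to roughly one in 81 trillion; widen the assumed range or add a decimal place of resolution and you quickly reach the quadrillions. Either way, "coincidence" is not a serious explanation.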

So Schlumberger harmed employees, wasted money, and got ripped off by alleged weight control experts who can’t read a scale. What does a company do in situations like this? Double down, of course. Literally. Yes, this year, they’re back partnering with Healthywage…with twice the prize money — $2000 apiece for the five members of the winning team.

[Image: Schlumberger contest announcement]

They gave employees a week’s advance notice, so that they could pack on some pounds that they can take off later.  Plus they could time their consumption of salt tablets, and concoct other ways to bloat up, before the contest began.  Basically they needed to figure out how to become as unhealthy as possible, before starting the unhealthy process of crash-dieting. (As an aside, several teams apparently tried to recruit pregnant women whose due dates fall during the contest period.)  And now that the contest is underway, who wouldn’t pop a few OTC diet pills to make $2000?


The difference is that this year, several concerned employees wrote to me and urged me to inform their benefits department of the indisputable facts that:

  1. Crash-dieting is a stupid idea;
  2. Offering prizes for crash-dieting is an even stupider idea.

I wrote the requested letter to Schlumberger, and explained all this to them, not that anyone with an internet connection should need an explanation of why crash dieting contests don’t work, or, more basically, why being stupid is a bad idea.

Their response?  The benefits department appears to have tried to determine who sent me the announcement, presumably in order to get them fired. It was actually multiple people since I have family in Texas in the oilfield services industry. I had anticipated this, so I un-linkedin with all of them before writing to Schlumberger. I’ve learned through experience that in wellness, you need to anticipate the most inappropriate and misanthropic reaction to any helpful offer, because that is the reaction you will get.  (For instance, rather than being concerned about Wellsteps harming their employees, the Boise School District wellness program coordinator told Wellsteps I was blowing the whistle on them for harming their employees.)


Update:  Healthywage presents alternative math, Volume 2. If the rule is (as stated) that you must sign up in teams of five, what is wrong with this picture?

[Image: Schlumberger contest signup showing 703 people]


And just as I was about to click “publish,” I noticed alternative math, Volume 3.  Apparently Healthywage thinks you can lose a high percentage of your total weight even if “you only have a little weight to lose”:

[Image: Healthywage’s misunderstanding of percentages]

And that brings us back to the main, decidedly unfunny, point: crash-dieting contests, especially with big prizes, are a very bad idea…and companies like Healthywage ought to be ashamed of themselves for making a business out of harming employees.


Corporate Care Management Rounds Up 25 Times the Number of Usual Suspects

You may recall The Phantom Tollbooth, a classic of children’s literature. Among other things, the book features a dish called “subtraction stew,” where the more you eat, the hungrier you get. Likewise, the more cancer cases Corporate Care Management (CCM) finds, the more money they save.

With math like that, it’s no wonder that 5% of their case study population of 600 were told they had cancer or precancer. And that was just in 2015 alone.

In Casablanca, Captain Renault told Major Strasser: “Owing to the seriousness of this crime I’ve instructed my men to round up twice the usual number of suspects.” Turns out Captain Renault had nothing on CCM. In a typical year maybe 0.2% of the working-age population would get a valid cancer diagnosis for the cancers they screen for. That means CCM rounded up 25 times the number of usual suspects. To put this in perspective, at this rate we’d all get diagnosed with cancer twice during a 40-year career.
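The arithmetic behind the "25 times" and "twice per career" claims, using the figures quoted above:

```python
# Checking the "25 times the usual suspects" arithmetic (figures as quoted above).

population = 600
diagnosed_rate = 0.05          # 5% told they have cancer or precancer in one year
expected_annual_rate = 0.002   # ~0.2% valid diagnoses per year for these cancers

cases_implied = diagnosed_rate * population
multiple = diagnosed_rate / expected_annual_rate   # how many times the usual rate
career_diagnoses = diagnosed_rate * 40             # at this rate, per 40-year career

print(f"cases implied: {cases_implied:.0f}")
print(f"multiple of the expected rate: {multiple:.0f}x")
print(f"diagnoses per 40-year career: {career_diagnoses:.0f}")
```

That is 30 cases, 25 times the expected rate, and two diagnoses apiece over a working lifetime, as stated.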

[Image: Casablanca — Captain Renault]


And the savings?  Over $20,000 per alleged cancer victim. That means CCM saved more than $1000 per covered employee, which is more than most employers spend on cancer — simply by finding more cancer cases to spend money on. (Care for a second helping of Subtraction Stew?)  And, yes, of course, this screening includes the USPSTF D-rated prostate screens, a staple of the wellness industry.
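The per-employee "savings" arithmetic works out as follows (figures as quoted above):

```python
# Implied savings per covered employee: $20,000+ claimed per alleged cancer
# case, times 30 cases, spread across 600 covered employees.

savings_per_case = 20_000
cases = 30
employees = 600

savings_per_employee = savings_per_case * cases / employees
print(f"claimed savings: ${savings_per_employee:,.0f} per covered employee")
```

That is $1,000 per covered employee — more, as noted, than most employers spend on cancer in the first place.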

[Image: CCM prostate screening instructions]

This cancer-finding is all accomplished on the basis of a letter-writing campaign. (For instance, men were instructed on three different occasions in 2015 to get their prostates screened.)

One other nuance: the 5% cancer incidence rate — 30 people — is based on the total 600 employees. Yet the company sent out only 146 “welcome packets,” meaning that less than a quarter of the employees actually responded to the letters.  Of those 146, almost a quarter (30) allegedly had cancer or precancer.  This proportion even tops the previous high for cancer hyperdiagnosis, which was the Nebraska wellness program (before Ron Goetzel and Health Fitness Corporation admitted they had made the whole thing up).
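Those proportions can be verified in a few lines (figures as reported in the case study):

```python
# The proportions quoted above, worked out.

employees = 600
welcome_packets = 146
cancer_or_precancer = 30

response_rate = welcome_packets / employees            # "less than a quarter"
cancer_share = cancer_or_precancer / welcome_packets   # "almost a quarter"

print(f"response rate: {response_rate:.1%}")
print(f"cancer/precancer among responders: {cancer_share:.1%}")
```

So about 24% of employees responded, and about 21% of responders were allegedly found to have cancer or precancer.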

[Image: CCM cancer statistics]

While it wasn’t clear what the other 454 people did with those letters, here is a question for your consideration: would you even open a letter with a return address of: “Corporate Care Management” ?  As a patient, those are three words I hope never to see adjacent to one another over the course of my natural lifetime.

Lest I forget, here is what the actual letter looked like. I don’t know about you, but if I ever received letters with this design from an outfit with this name, I’d get a restraining order.

[Image: the CCM New Year’s letter]


And apparently I’m not the only one who would prefer to be left alone.  The Maytag repairmen used to be the loneliest people on earth.

[Image: the Maytag repairman]

But CCM’s nurses might give them a run for their money.  Here are the utilization statistics they presented to this very same client with 600 employees:

[Image: CCM outreach statistics]

Yes, that’s correct: they completed one conversation.  Although that may be because the other 599 employees are too worried that if they talk to these people, they’ll get treated for cancer so CCM can save even more money.

 

American Journal of Health Promotion Announces New Fabricator-in-Chief — Paul Terry

Paul Terry, formerly of Staywell and the Health Enhancement Research Organization, has just been appointed the new editor of the wellness industry trade publication, the American Journal of Health Promotion.  He replaces Michael “Let’s Charge Employees Insurance by the Pound” O’Donnell in that role.

Mr. Terry brings exactly the type of expertise to this job that AJHP readers have come to expect, in that very few people can claim to surpass Mr. Terry’s ability to fabricate outcomes.

I first became familiar with Mr. Terry’s work when Staywell claimed mathematically impossible savings for British Petroleum’s pry-poke-and-prod wellness program, which I dutifully reported on The Health Care Blog in the posting: “BP’s Wellness Program is Spewing Invalidity.” Staywell, as a preferred vendor of Mercer, was able to “convince” Mercer to fabricate savings, when their client, BP, asked for an evaluation.  Staywell pretended to have saved almost $20,000 for every risk factor reduced among active participants (meaning dropouts and nonparticipants’ failures aren’t counted).

This was quite a feat considering that the average employee only spent about $5000 during the year in which this analysis was conducted. And of course only a tiny percentage of healthcare costs in the short term are attributable to risk factor reduction anyway. (Staywell was offered the opportunity to rebut, and didn’t.)

But the smoking gun here was that Mr. Terry apparently forgot that Staywell itself only claimed to be able to save $129/risk factor reduced. Magnanimous guy that I am, I was kind enough to point out that integrity chasm for him in the article.
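The size of that integrity chasm is easy to quantify, using only the three figures quoted above:

```python
# The gap between Staywell's claimed savings and its own published figure.

claimed_savings_per_risk_factor = 20_000  # claimed in the BP evaluation
average_annual_spend = 5_000              # average employee spend that year
staywell_own_figure = 129                 # Staywell's own published savings per risk factor

spend_multiple = claimed_savings_per_risk_factor / average_annual_spend
own_figure_multiple = claimed_savings_per_risk_factor / staywell_own_figure

print(f"claimed savings vs average total spend: {spend_multiple:.0f}x")
print(f"claimed savings vs Staywell's own figure: {own_figure_multiple:.0f}x")
```

That is four times what the average employee spent on everything, and roughly 155 times what Staywell itself claimed it could save per risk factor reduced.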


Most people, when they are caught fabricating data, try to deny it. But Paul Terry brags about it.   In case you haven’t already done so, take a looksee at his defamatory letter to the media that he sent, along with his cronies Ron Goetzel and Seth Serxner. He insists that they made up the data I reviewed — meaning his best argument against me is that I didn’t realize he was lying. If we take him to court, he could argue that the judge should apply the legal standard for negligence — that I “knew or should have known” their data was fabricated, because all their data is fabricated.

Although ironically it turns out the data they insisted was fabricated was, this time, legitimate — meaning that he was making up his claim that HERO had made up the data. That’s a topic for another blog.  Suffice it to say that, in the immortal words of the great philosopher LL Cool J, he lied about the lies he lied about.

Most importantly, if you read the letter he wrote, you’ll see that another of his arguments is that when calculating ROI, you should not compare costs to savings.  And a good thing because comparing costs to savings, and other feats of arithmetic, would be the wellness industry’s second-worst nightmare (next to facts).

Refusing to acknowledge the existence of basic arithmetic makes Mr. Terry a perfect choice to be editor of the wellness industry trade publication.

 
