They Said What?


Tag Archives: Paul Terry

The 2017 Deplorables Awards — Runners Up

It’s time for the 2017 Deplorables Awards, lovingly bestowed on those vendors who do the best job making other vendors look good. 


The good news is that you don’t have to actually win the Deplorables Award to sue me. Runners-up are eligible too. Here is my address for hand delivery of service of process most of the year:

890 Winter Street #208, Waltham MA 02451

In case you decide to sue me between June 22 and August 8, use:

8 Paddock Circle, Chilmark, MA 02535

And don’t leave out my attorney:

Josh Gardner, GARDNER & ROSENBERG P.C., 33 Mount Vernon Street, Boston, Massachusetts 02108

I don’t know how much more I can do for you, other than lick the envelope. So go for it. Don’t make me beg.

But, remember, unlike your usual business model, in court you are required to actually tell the truth (I would be happy to explain to you how that works), meaning there is no chance of your winning — or likely even avoiding summary judgment, since none of the evidence is in dispute. It’s all your own writings.  Oh, and I do my own cross, which means you won’t be able to find an expert witness. Anyone who knows enough about wellness to be an expert witness also knows enough about wellness to know that attempting to defend you would be a humiliating, on-the-record experience.

And there is always the chance that some annoying jerk might blog about it…


The 2017 Runners-Up

Imagine a four-square matrix with competence on one axis and integrity on the other. The people and organizations we’ll be highlighting today would intersect with the companies mentioned in Monday’s posting at only a single point.

Springbuk and Fitbit

As many of you recall, earlier in the year we analyzed the study done by Springbuk that was secretly financed by Fitbit. Or maybe I need new glasses, because I just couldn’t find the disclosure in the Springbuk report that this paean to Fitbit was financed by Fitbit, the way Nero used to have the judges award him Olympic medals.

Coincidentally, the study showed Fitbit saving gobs of money because employees taking more than 100 steps a day spend less money than those taking fewer. However, a simple tally of one’s own footsteps shows that it is impossible not to take 100 steps a day unless you are both:

  1. in a hospital bed; and also
  2. on dialysis.

This 100 steps-a-day threshold was repeated many times in the study, with no explanation of how that number came to be. However, it turns out we owe these two outfits an apology. Fitbit and Springbuk have told a number of people privately (not publicly, in order to avoid an embarrassing news cycle) that they didn’t really mean to say that 100 steps a day constituted activity.  They meant to say that taking 100 steps a day implied you had your Fitbit on. My apologies for failing to read their minds that their conclusions were based on reading people’s minds to determine whether they wore the Fitbit deliberately, or simply forgot/remembered/cared to put their Fitbit on.

They never did explain — privately or publicly or to anyone — how employees who took an average number of steps during the baseline year could show huge savings by taking an average number of steps in the study year too.

They also never explained how these two statements didn’t completely contradict each other, even though I specifically asked them to in a personal letter, excerpted here:

Third, can you reconcile this statement…:

“The materials in this document represent the opinion of the authors and not representative of the views of Springbuk, Inc. Springbuk does not certify the information, nor does it guarantee the accuracy and completeness of such information.”

…with this statement:

“This demonstration of impact achieved by integrating Fitbit technology into an employee wellness program reinforces our belief in the power of health data and measurement in demonstrating ROI,” said Rod Reasen, co-founder and CEO of Springbuk. 


National Business Group on Health

Next up is the National Business Group on Health. Last year they made the list for criticizing the US Preventive Services Task Force for not demanding enough screenings, in a country that is drowning in them. Not content to rest on those laurels, this year they earned an Honorable Mention for inviting Dr. Oz to keynote on the role of quackery in corporate wellness, and perhaps tell us about his latest lose-weight-by-eating-chocolate miracle diet.


Health Enhancement Research Organization

HERO of course also earns a runner-up award. 2017 will be remembered as the year they finally came to grips with the realization that a business model based on fabricating outcomes requires that perpetrators possess that critical third IQ digit. Without that extra “1”, an organization trafficking in math that can at best be considered fuzzy is going to be outed.

This year’s set of lies? By way of background, their 2016 poison-pen letter insisted that they had fabricated that data set showing that wellness loses money without disclosing that it was fabricated — and also that they never reviewed their fabricated data before publication. Early in the year, I had the insight that, wow, this “fabricated” chapter in their guidebook is so much better than the other chapters that something is amiss. No one at HERO can analyze data competently…and yet, here it was, a competent data analysis.

I did something I had never thought to do before, which was look up the actual author of that chapter. It was Iver Juster MD. He was a great analyst even before he read all my books, took all my courses, and achieved all my certifications in Critical Outcomes Report Analysis.

So I called Iver. Here’s what I learned:

  1. Whereas Paul Terry and Ron Goetzel had insisted that Iver fabricated the data, Iver said, of course he didn’t — whatever made me think that?  (“If it wasn’t real, I would have disclosed that,” he observed. Of course he would have. Iver has tremendous integrity.)
  2. The Board discussed and reviewed his chapter at length, and made helpful suggestions, for which he was quite grateful.  This review process required “countless hours,” just as the HERO document says:

The number of  transparent lies HERO tells could make a president blush. In the immortal words of the great philosopher LL Cool J, they lied about the lies they lied about.

Even though 2017 was an off-year for them in terms of the number of lies, they still told enough to be named a runner-up.


Wellness Corporate Solutions

Next is Wellness Corporate Solutions, famous for its crash-dieting contests. WCS now offers a water-drinking contest. The idea is to set up a “challenge” for your team to drink more water than other teams. They call this a “healthy competition.”  I guess they didn’t get the memo that forcing yourself to drink when you don’t want to drink, just to make more money, is anything but healthy. Here is a novel idea: drink when you are thirsty.  Evolution 1, WCS 0.

Perhaps as an encore, WCS, Dr. Oz and the National Business Group on Health could team up to offer a chocolate-eating contest.

I looked into this outfit to see where they get their ideas. The CEO previously ran something called the Washington Document Service. That qualifies her to run a wellness company. As Star Wellness says, to run a wellness company successfully, your background needs to be in sales, or “municipality administration.”  After all, what is more central to administering a municipality than documents?


Wellsteps

What fun would a list of runners-up be without Wellsteps, the proud recipient of the 2016 Deplorables Award? While their streams of consciousness weren’t as memorable in 2017 as in 2016 (“It’s fun to get fat. It’s fun to be lazy”), they get credit for trying. Their 2017 weight-loss campaign was headlined: “This campaign is not really about weight loss, it is about helping you apply the behavioral secrets of those who have lost weight.”

So if your kids ever want you to teach them how to ride a bike, say: “It’s not really about riding a bike. It’s about helping you apply the secrets of people who have ridden bikes.”

And what secrets are we talking about? What person who has lost weight doesn’t brag to everyone or even write a book? If there is a secret to weight loss, like eating chocolate, Wellsteps owes it to the country to share it. Don’t make us beg.


Odds and Ends

No Koop Award winner this year, but an honorable mention to past winners and runners-up for their commitment to wellness:

Sounds like in 2018 the logical winners would be Philip Morris, or maybe The Asbestos Corporation of America.

Veering briefly into the public sector, kudos to Representative Virginia Foxx (R-NC5) for introducing the Required Employee DNA Disclosure Act. Even HERO thought it was a dumb idea…and their threshold for thinking something that increases wellness industry revenues is a dumb idea is quite high, given that they all rallied behind the Johnson & Johnson Fat Tax, in which companies would be required to disclose the weight of their employees.



Next up…the winner of the 2017 Deplorables Award

The reward for showing your wellness program works is now $3 million!

As almost everyone in the wellness industry knows, we have offered a $2 million reward to anyone who can show that conventional annual “pry, poke and prod” wellness saves money. I’m feeling very generous today, what with the holidays upon us, so let’s make the reward $3 million.

Even more importantly, let’s loosen the rules — a lot —  to encourage applicants. You’ll find the $3 million reward is not just more generous, but also far easier to claim than the previous $2 million reward.


Loosening the Rules

Except as indicated below, the rules stay the same as in the previous posting, but with the following relaxed standards. Most importantly, I’ll now accept the burden of persuasion. It is my job to convince the panel of judges, using the standard civil level of proof, that you are wrong, as opposed to you having to convince them that I am wrong.

Next, let’s expand the pool from which the judges can be drawn. It wasn’t very nice of me to allow you to choose from only the 300 people on Peter Grant’s exclusive healthcare policy listserve, since obviously no one invited into a legitimate healthcare policy listserve thinks wellness saves money.

In addition, you can also choose among the 100+ people on Dave Chase’s email list and the 70 people on the Ethical Wellness email list (www.ethicalwellness.org). And to make it totally objective, we will add as judges whatever two bloggers happen to be the leading dedicated lay US healthcare economic policy bloggers at the time of the application for the award, as measured by the ratio of Twitter followers to Twitter following, with a minimum of 15,000 followers.

So judges are chosen as follows: two bloggers chosen by objective formula, plus we each choose six people from among the other 460, with the other party having veto rights for five of them. That gives a total of four judges, who will choose a fifth from among those roughly 500 people.

This means I only name one of the five judges, so I can’t “stack the deck,” not that I would need to.

The original rules included the requirement of defending Wellsteps’ Koop Award.  After all, the best vendor should be exemplary, right? A beacon for others to follow? A benchmark to show what’s possible when the best and brightest make employees happy and healthy?

However, now you have another option. You could instead just publicly acknowledge that the Koop Award committee is either corrupt or incompetent, as you prefer, since that possibility cannot be ruled out as a logical explanation for Wellsteps winning that award. Your choice…

Next, you may bring as many experts with you to address the adjudication forum as you wish to bring.  I, on the other hand, will be limited to myself.

Further, you no longer have to defend the proposition that wellness as a whole has saved money. You can, if you prefer, simply acknowledge that most of it has failed…except you. Meaning that, if you are a vendor that has been “profiled” on this site in the last 2 years, you can limit your defense to your own specific results. You don’t have to defend the swamp.

That new loophole allows companies like Interactive Health, Fitbit, Wellness Corporate Solutions, etc. — and especially Wellsteps — to get rich…if what I have said specifically about them is wrong. I have $3 million that says it isn’t.


Special Offer for HERO

Ah, yes, the Health Enhancement Research Organization (HERO). The belly of the beast.

Let me make them a special offer. Paul Terry, the current HERO Prevaricator-in-Chief, has accused me of the following  (if you link, you’ll see they had enough sense not to use my name, likely on advice of counsel, given that I already almost sued them after they circulated their poison pen letter to the media):

I’m convinced responding to bloggers who show disdain for our field is an utter waste of time. I’ve rarely been persuaded to respond to bloggers [Editor’s note: in HERO-speak, “rarely” means “never” — except for that intercepted Zimmerman Telegram-like missive], and each time I did it affirmed my worry that, more than a waste, it’s counter-productive. That’s because they’ll not only incessantly recycle their original misstatements, but worse, they’ll misrepresent your response and use it as fodder for more disinformation.*

Tell ya what, Paul: let’s debate disinformation, including your letter. Aside from the standard 10% entry fee (used to pay the judges’ honoraria, reserve the venue, and compensate me for wasting my time with your THC-infused quixotry), all the economic burden falls on me.

The only catch: I have asked you on multiple occasions to clue me in as to what my alleged disinformation actually is, if any. That way I can publicly apologize and fix it, should I choose to do so.  Before applying for this award, you need to disclose this alleged disinformation. You can’t just go around saying my information is made up etc. without specifying what it is.

By definition, “disinformation” is deliberate misrepresentation. To my knowledge, as a member of the “integrity segment” of the wellness industry, I have never, and would never, spread disinformation.

On the other hand, if I did inadvertently spread incorrect information, it seems only fair to let me fix it — especially given that I have been totally transparent and generous with my time in explaining to you what yours is, and how to correct it. (I might have missed some. Keeping up with yours is a challenge of Whack-a-Mole-meets-White House press correspondent proportions.)

So perhaps it is time to man up, Mr. Terry. You and your cronies claim to have been collecting my “disinformation” for years, without disclosing any of it. I’m offering you a public forum and $3 million to present it…with only one of five judges on “my” side.

Otherwise, perhaps you should, in the immortal word(s) of the great philosopher Moe Howard, shaddap.


A couple of other mid-course corrections to the previous award offer. Someone wondered if this offer is legally binding. If your attorney’s knowledge of contract law matches your knowledge of wellness economics, they can voice their likely spurious objection; I will publish the objection and address it if need be, to make the reward a binding offer.

Another commenter whined that maybe I just won’t pay the reward. I’m sure that’s the reason no one has applied. (Not.) So put 10% of the entry fee down, and I’ll attach a lien.


And as mentioned, to close the original thread, Happy Holidays to one and all.


*As a side note, Mr. Terry writes: “We’re fortunate to work in an industry with a scant number of vociferous critics.” This “scant” number appears to include the entire media: left-wing, right-wing, centrist, and health policy. Apparently it also includes most employees, according to Towers Watson. The good news about “pry, poke and prod” is that it truly bridges the partisan divide in that everyone hates it.

And the envelope please. The best outcomes evaluator in wellness is…

The best outcomes evaluator in the wellness field is Dr. Iver Juster.*

*Among the subset of males not affiliated with They Said What.

Why Dr. Juster’s Case Study Is the Best Case Study Ever Done in This Field

Chapter 2 of the HERO Guide is a great study and deserves high praise. But before we get into the salient points of what makes this absolutely the best case study analysis ever done in this field, be aware that the provenance is not a coincidence. Dr. Juster is very skilled at evaluation. Indeed, he was the first person to receive Critical Outcomes Report Analysis (CORA) certification from the Disease Management Purchasing Consortium. (Dr. Juster very graciously shares the credit, and as described in his comments below would like to be listed as “the organizer and visible author of a team effort.”)

Note: the CORA course and certification are now licensed for use by the Validation Institute, which has conferred honorary lifetime certification on Iver gratis, to recognize his decades of contribution to this field. (Aside from the licensure, the Validation Institute is a completely independent organization from DMPC, from They Said What, and from me. It is owned by Care Innovations, a subsidiary of Intel. If you would like to take the CORA Certification course live, it is being offered next in Philadelphia on March 27. You can take it online as well.)

Early in the chapter, Iver lists and illustrates multiple ways to measure outcomes. He dutifully lists the drawbacks and benefits of each, but, most importantly, notes that they all need to be plausibility-checked with an event-rate analysis, of which he provides a detailed example using data from his own work. In an event-rate analysis, wellness-sensitive medical events are tracked over the period of time in question.

Wellness has never been shown to have a positive impact on anything other than wellness-sensitive events. Consequently, there is no biostatistical basis for crediting, for example, “a few more bites of a banana” with, to use our favorite example, a claimed reduction in cost for hemophilia, von Willebrand disease and cat-scratch fever.


By contrast, real researchers, such as Iver, link outcomes with inputs using a concept called attribution, meaning there has to be a reason logically attributable to the intervention to explain the outcome. It can’t just be a coincidence, like cat-scratch fever. As a result, he is willing to attribute only changes in wellness-sensitive medical events to wellness.

Event-Rate Plausibility Analysis

Event rates (referred to below as “PPH” or “potentially preventable hospitalizations”) are laid out by disease on page 22 of the HERO Report. Note the finding that PPH are a small fraction of “all-cause hospitalizations.” Though the relative triviality of the magnitude of PPH might come as a surprise to people who have been told by their vendors that wellness will solve all their problems, Iver’s hospitalization data sample is representative of the US <65 population as a whole, in which chronic disease events are rare.

[Image: HERO Report, page 23 totals]

Gross savings total $0.99 per employee per month. This figure counts all events suffered by all members, rather than excluding events suffered by non-participants and dropouts. Hence it marks the first time that anyone in the wellness industry has included those people’s results in the total outcomes tally — or even implicitly acknowledged the existence of dropouts and non-participants. He also says, on p. 17:

For example, sometimes savings due to lifestyle risk reduction is calculated on the 20% of the population that supplied appropriate data. It’s assumed that the other 80% didn’t change but if some of the people who didn’t supply risk factor data worsened, and people who got worse were less likely to report their data, that model would overestimate savings.
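To make that point concrete, here is a minimal, purely illustrative simulation (every number in it is invented for the example; none of it comes from the HERO chapter) showing how measuring savings only on the minority who report their data overstates savings whenever people who worsened are less likely to report:

```python
import random

# Hypothetical illustration only: all numbers below are invented, not taken from the HERO chapter.
random.seed(0)

POPULATION = 10_000
employees = []
for _ in range(POPULATION):
    worsened = random.random() < 0.5          # half the population worsens, half improves
    cost_change = 500 if worsened else -500   # dollars per year, purely illustrative
    # Assume people who worsened are less likely to turn in their risk-factor data.
    reported = random.random() < (0.10 if worsened else 0.30)
    employees.append((cost_change, reported))

reporters = [c for c, r in employees if r]
everyone = [c for c, _ in employees]

print(f"'Savings' measured on reporters only: {-sum(reporters) / len(reporters):7.0f} per reporter")
print(f"True average change for everyone:     {-sum(everyone) / len(everyone):7.0f} per employee")
```

Run as written, the reporters-only figure shows a couple hundred dollars of apparent “savings” per person while the true population change is roughly zero, which is exactly the overestimate the chapter warns about.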

Note that the PPH declined only in cardiac (“IVD”) and asthma.  Besides the event rates themselves being representative of the employed population in the US as a whole as a snapshot, the observed declines in those event rates are almost exactly consistent with declines nationally over that same period. This decline can be attributed to improvements in usual care, improvements that are achieved whether or not a wellness program was in place.  The existence and magnitudes of the declines, coupled with the slight increase in CHF, diabetes and COPD combined (likewise very consistent with national trends), also confirm that Iver’s analysis was done correctly. (Along with attribution, in biostatistics one looks for independent confirmation outside the realm of what can be influenced by the investigator.)
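For readers who want to see mechanically what that kind of plausibility check involves, here is a minimal sketch. The event counts, population size, and benchmark trends below are hypothetical placeholders, not the HERO figures; the point is only the shape of the comparison:

```python
# Minimal sketch of an event-rate plausibility check.
# All counts and benchmark trends below are hypothetical placeholders, not the HERO data.

baseline_events = {"IVD": 120, "asthma": 90, "CHF": 40, "diabetes": 60, "COPD": 50}
study_events    = {"IVD": 110, "asthma": 84, "CHF": 41, "diabetes": 62, "COPD": 51}
covered_lives   = 100_000

# Secular (national) trend for the same period, expressed as a fractional change.
national_trend  = {"IVD": -0.08, "asthma": -0.06, "CHF": 0.02, "diabetes": 0.03, "COPD": 0.02}

for condition in baseline_events:
    base_rate  = baseline_events[condition] / covered_lives * 1_000   # events per 1,000 lives
    study_rate = study_events[condition]    / covered_lives * 1_000
    observed_change = (study_rate - base_rate) / base_rate
    print(f"{condition:8s} baseline {base_rate:5.2f}  study {study_rate:5.2f}  "
          f"observed {observed_change:+.1%}  national {national_trend[condition]:+.1%}")

# If the observed changes track the national benchmark, the decline is attributable to
# improvements in usual care rather than to the wellness program, which is the chapter's point.
```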

It is ironic that Ron Goetzel says: “Those numbers are wildly off…every number in that chapter has nothing to do with reality” when I have never, ever seen a case study whose tallies — for either total events or event reduction, let alone both — hewed closer to reality (as measured by HCUP) than this one.

Another factor that conveniently gets overlooked in most wellness analyses is that costs other than PPHs rise.  By contrast, Iver is the first person to acknowledge that:

[Image: HERO Report excerpt showing that costs other than PPHs increase]

The implication, of course, is that increases in these costs could exceed the usual care-driven reductions in wellness-sensitive medical events. Indeed, Iver’s acknowledgement proved prescient when Connecticut announced that its wellness program made costs go up.

The $0.99 gross savings, and Connecticut’s healthcare spending increase, exclude the cost of the wellness program itself, of course. Factor in Ron Goetzel’s recommendation of spending $150/year for a wellness program and you get some pretty massive losses.
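The arithmetic behind the “massive losses” conclusion is short enough to write out. A minimal sketch follows, taking at face value the $0.99 per employee per month gross savings from the chapter and the $150 per employee per year program cost figure attributed above to Mr. Goetzel:

```python
# Back-of-the-envelope net impact, using only the figures cited in the text above.
gross_savings_pepm = 0.99    # dollars per employee per month (HERO chapter gross savings)
program_cost_pepy = 150.00   # dollars per employee per year (cost figure attributed to Goetzel)

gross_savings_pepy = gross_savings_pepm * 12
net_pepy = gross_savings_pepy - program_cost_pepy

print(f"Gross savings: ${gross_savings_pepy:8.2f} per employee per year")
print(f"Program cost:  ${program_cost_pepy:8.2f} per employee per year")
print(f"Net impact:    ${net_pepy:8.2f} per employee per year")
```

Roughly $11.88 in gross savings against $150 in program cost leaves a net loss of about $138 per employee per year, before even counting the increases in other costs that Iver acknowledges.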


The old Al Lewis would close by making some reference to the dishonesty and cluelessness of the Health Enhancement Research Organization’s board. The new Al Lewis will do just the opposite. In addition to congratulating Iver Juster (and his co-author, Ben Hamlin) on putting this chapter together, I would like to congratulate the Health Enhancement Research Organization, for what Iver describes as the “team effort” in publishing it — HERO’s first flirtation, however fleeting and inadvertent, with integrity and competence.



Iver Juster Comments on the article

Iver reviewed this article and would like to add several points. I am only adding a couple of my own points, noted in indented italics:

  • It’s important to credit the work to a larger group than just myself. I was the ‘lead author’ on the financial outcomes chapter of the HERO/PHA measurement guide, but the work entailed substantial planning and review in collaboration with the chapter’s coauthor (Ben Hamlin from NCQA) and members of the group dedicated to the chapter (as well as the HERO/PHA authoring group as a whole).
    • Yes, I am more than happy to credit the entire group with this study, especially Ron Goetzel, Seth Serxner and Paul Terry.
  • Nonetheless the work does reflect my perspective and approach on the topic – the important points being (a) select metrics that are impactible by the intervention or program; (b) be transparent about the metric definitions and methodology used to measure and compare them; (c) assiduously seek out potential sources of both bias and noise (in other words, exert the discipline of being curious, which is greatly aided by listening to others’ points of view); (d) understand and speak to the perspective of the study—payer, employee/dependents, clinician/healthcare system, society.
  • Be particularly sensitive to the biologically-plausible timeframes in which your outcomes ought to occur, given the nature of the program. Even if optimally implemented with optimal uptake and adherence, we might expect ‘leading indicators’ like initial behavior changes to improve quickly; program-sensitive biometrics (lipids, A1C, blood pressure, BMI) and medication adherence to change in a matter of months; and a few program-sensitive ER/inpatient visits (like worsening heart failure or asthma/COPD exacerbations) to improve within several months (again, assuming the program is designed to address the causes of these events). Longer-term events like kidney failure, heart attack and stroke and retinopathy take much longer to prevent partly because they require sustained healthy behavior, and partly due to the underlying biology.
    • This is one excellent reason why the measured event rate decline merely mirrored the secular decline in the US as a whole over the period, meaning the program itself produced no decline during that period. Possibly the rates might decline in future years if Iver is correct. Ron Goetzel would take issue with Iver’s assertion — Ron says risk factors decline only 1-2% in 2-3 years.
  • Event rate measurement in any but the largest Commercially-insured populations is subject to considerable noise. Though a challenge, estimating confidence intervals should at least shed light on the statistical noisiness of your findings.
    • No need this time because your results hewed so closely to secular trend, reflecting the quality of the analysis.
  • It is very likely that the program used in the illustration did affect more than the events shown because it was a fairly comprehensive population health improvement initiative. For example, ER visits were not counted; and collateral effects of ‘activation’ – a very key component of wellness – were not included in this analysis. Assuming the 99 cents is an accurate reflection of the program’s effect on the events in the chart, I’d be willing to increase the actual claims impact by 50 to 75%.
    • If your speculation is accurate, that would increase gross savings to $1.49 to $1.73 per month, before counting preventive care increases indicated on Page 22.
  • Nonetheless, to get effect from an effective program you have to increase both the breadth (number of at-risk people) and depth (sustained behavior change including activation) – but at a cost that is less than a 1:1 tradeoff to the benefit. In other words, you must increase value = outcomes per dollar. This cannot be done through incentives alone – as many researchers have shown, if it can be done at all, it must be the result of very sustained, authentic (no lip service!) company culture.
  • We are beginning to pay attention to other potential benefits of well-designed, authentic employee / workplace wellness programs (of which EHM is a part) on absenteeism, presenteeism, employee turnover and retention – and, importantly, company performance (which is after all what the company is in business to do). It’s early days but it’s possible research will show that companies that are great places to work and great places to have in our society will find financial returns that far outstrip claims savings. The jury’s still out on this important topic but let’s help them deliberate transparently and with genuine curiosity.
  • Did Ron really say you have to spend $150 per year PER MEMBER on a wellness program? I’d be thinking a few dollars (unless he’s including participation incentives)

 

And the Envelope Please. The Best Outcomes Evaluator in Wellness Is…*

*Among the subset of males not affiliated with They Said What.


Alert readers may recall that my New Year’s resolution was to balance my negative postings about the wellness industry with positive ones. Like Diogenes searching for an honest man, I thought finding the latter would be hard, but, just as Romy Antoine did earlier this month, the subject of this posting — to be named in Part Two — makes it easy. Part One sets the stage for the review of his study.

By way of background, in preparation for bringing a possible lawsuit, I re-read the famous Chapter 2 of the equally famous HERO report. That was the chapter which inspired Ron Goetzel, Seth Serxner and Paul Terry (who was recently anointed as the American Journal of Health Promotion’s new Fabricator-in-Chief) to circulate their defamatory letter about me to the media, in a singularly self-immolating attempt to discourage them from publishing my material.  They insisted that Chapter 2 was pure fabricated nonsense, rather than a carefully analyzed report of real data.  Here is an excerpt from their actual letter, copies of which are available from me but which is summarized here:

A fabricated…absurd, mischievous and potentially harmful misrepresentation of our data.

Ron said it best in our Great Debate, minute 1:17 in the MP3 downloadable here:

Those numbers are wildly off…every number in that chapter has nothing to do with reality. 

However, the sun rises in the east, taxes are due April 15th, and Ron Goetzel is lying.  Quite the contrary, Chapter 2 turns out to be a carefully analyzed report of real data — almost certainly the best case study ever published.

How did I learn that Ron was fabricating a story that his guidebook had fabricated a story?

  1. This chapter says it’s a real report, on p. 22.
  2. Since this chapter’s analysis was so far above the pay grade of those three aforementioned HERO characters, I checked the acknowledgements in the HERO book. Sure enough, none of the HERO cabal wrote it. Someone else (to be named in the next posting) was the lead author, and I called to congratulate him on it. I also asked him some background questions, one of which proved very revealing. It turns out that…
  3. This real analysis of real data was — get ready — reviewed prior to publication by the exact same people who are disowning it now. Yes, among the people who peer-reviewed it prior to publication were the very same Ron Goetzel, Seth Serxner, and Paul Terry. (In addition to them doing the actual review, the lead author, very graciously sharing the credit, wanted to make sure that I indicate that he was only the “organizer and visible author of a team effort.”)

Yes, as is so often the case with these three, they lied about the lies that they lied about.  It’s quite ironic that their argument against my original praise of this analysis was to insist that because my source was their own lies, my own analysis was unreliable.  These lies above don’t include the actual lies I might sue them about, which were lies about me, which are totally separate from their lies about their previous lies. (Their lie about me was that I had a history of outrageously inaccurate statements, none of which they have ever been able to identify.)

These characters aren’t ordinary, run-of-the-mill, alternative-fact-type liars. They’re way beyond that.

Their lies go to 11.



Coming soon, the reveal…

American Journal of Health Promotion Announces New Fabricator-in-Chief — Paul Terry

Paul Terry, formerly of Staywell and the Health Enhancement Research Organization, has just been appointed the new editor of the wellness industry trade publication, the American Journal of Health Promotion.  He replaces Michael “Let’s Charge Employees Insurance by the Pound” O’Donnell in that role.

Mr. Terry brings exactly the type of expertise to this job that AJHP readers have come to expect, in that very few people can claim to surpass Mr. Terry’s ability to fabricate outcomes.

I first became familiar with Mr. Terry’s work when Staywell claimed mathematically impossible savings for British Petroleum’s pry-poke-and-prod wellness program, which I dutifully reported on The Health Care Blog in the posting: “BP’s Wellness Program is Spewing Invalidity.” Staywell, as a preferred vendor of Mercer, was able to “convince” Mercer to fabricate savings, when their client, BP, asked for an evaluation.  Staywell pretended to have saved almost $20,000 for every risk factor reduced among active participants (meaning dropouts and nonparticipants’ failures aren’t counted).

This was quite a feat considering that the average employee only spent about $5000 during the year in which this analysis was conducted. And of course only a tiny percentage of healthcare costs in the short term are attributable to risk factor reduction anyway. (Staywell was offered the opportunity to rebut, and didn’t.)

But the smoking gun here was that Mr. Terry apparently forgot that Staywell itself only claimed to be able to save $129/risk factor reduced. Magnanimous guy that I am, I was kind enough to point out that integrity chasm for him in the article.
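A quick arithmetic check using only the round numbers already cited in this post shows why the claim could not withstand scrutiny. This is a sketch of the sanity check, not a reconstruction of Staywell’s or Mercer’s actual calculation:

```python
# Sanity-checking the BP/Staywell claim with the round numbers cited above.
claimed_savings_per_risk_factor = 20_000   # dollars per risk factor reduced, as claimed
staywell_own_estimate = 129                # dollars per risk factor, Staywell's own figure
average_annual_spend = 5_000               # dollars per employee in the study year

print(f"Claimed savings vs. Staywell's own estimate: "
      f"{claimed_savings_per_risk_factor / staywell_own_estimate:.0f}x larger")
print(f"Claimed savings vs. average total annual spend: "
      f"{claimed_savings_per_risk_factor / average_annual_spend:.0f}x the whole year's claims")
```

Claiming roughly 155 times what the vendor itself said was achievable, and four times what the average employee spent in total, is what makes the savings mathematically impossible.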


Most people, when they are caught fabricating data, try to deny it. But Paul Terry brags about it.   In case you haven’t already done so, take a looksee at his defamatory letter to the media that he sent, along with his cronies Ron Goetzel and Seth Serxner. He insists that they made up the data I reviewed — meaning his best argument against me is that I didn’t realize he was lying. If we take him to court, he could argue that the judge should apply the legal standard for negligence — that I “knew or should have known” their data was fabricated, because all their data is fabricated.

Although ironically it turns out the data they insisted was fabricated was, this time, legitimate — meaning that he was making up his claim that HERO had made up the data. That’s a topic for another blog.  Suffice it to say that, in the immortal words of the great philosopher LL Cool J, he lied about the lies he lied about.

Most importantly, if you read the letter he wrote, you’ll see that another of his arguments is that when calculating ROI, you should not compare costs to savings.  And a good thing because comparing costs to savings, and other feats of arithmetic, would be the wellness industry’s second-worst nightmare (next to facts).

Refusing to acknowledge the existence of basic arithmetic makes Mr. Terry a perfect choice to be editor of the wellness industry trade publication.

 

Goetzel, Koop Committee, Staywell, Mercer, BP America meet Groundhog Day

Perhaps the strategy of the leaders of the wellness ignorati (who constitute the Koop Committee) is to overwhelm us with so many lies that we don’t have time to expose every one and still get home in time for dinner.

No sooner had we finished pointing out the numerous (and unrebutted) implausibilities and internal inconsistencies in Ron Goetzel’s posting on the value of workplace wellness than the Koop Committee (Mr. Goetzel and his cabal) fed us even more red meat: they gave the 2014 Koop Award to British Petroleum. However, apparently only British Petroleum wants to tell the world about it. The Koop Committee hasn’t even updated its own website to list 2014 award winners.

Recall that we’ve spent months excoriating Goetzel and his sidekicks (Wellsteps’ Steve Aldana, Milliman’s Bruce Pyenson, Mercer’s Dan Gold and the rest of them) for doing three things in the Nebraska award, for a program that prima facie seems to be in violation of Nebraska’s state contractor anti-fraud regulations:

(1)   Gave it to a program where the numbers were obviously fabricated and later admitted to be

(2)   Gave it to a program whose vendor sponsors the Committee

(3)   Forgot to disclose in the announcement that the vendor sponsors the Committee

Perhaps what you are about to read isn’t their fault.  Perhaps their mothers simply failed to play enough Mozart while the Committee members were in their respective wombs, but here’s how they applied the learning from the Nebraska embarrassment to their decision to award British Petroleum.  This time they:

(1)   Gave it to a program where the numbers had already been shown to be fabricated

(2)   Gave it to a program whose vendor sponsors the Committee

(3)   Forgot to disclose in the announcement that the vendor (Staywell) sponsors the Committee

(4)   Forgot to disclose in the announcement that the vendor sits on the Committee

(5)   Forgot to disclose in the announcement that the consulting firm (Mercer) sponsors the Committee

(6)   Forgot to disclose in the announcement that the consulting firm sits on the Committee

 

[Image: Mercer and StayWell Koop Award sponsorship]

I suspect we will be writing a similar analysis again next year, when once again, the Committee will attempt to demonstrate the value of sponsoring a C. Everett Koop Award.

American Heart Association promotes StayWell while violating its conflict of interest policy



Short Summary of Company:

AHA wellness: “The American Heart Association’s Worksite Wellness Kit encourages companies to give employees an excuse to get away from their desks.”

Staywell: “StayWell helps clients across the health care spectrum address the changing landscape like no other company. We leverage the latest technology, enhanced analytics, and deep consumer insights in an integrated portfolio of best-in-class client solutions.”

Materials Being Reviewed

Questions for AHA

Your conflict-of-interest statement says you “make every effort to avoid actual or potential conflicts of interest that may arise as a result of an outside relationship.” Why doesn’t letting the Chief Science Officer of a wellness company write your wellness policy citing his own articles in support of wellness violate that policy?

ANS: Refused to answer

Were you aware that Staywell perpetrated a scheme in which they worked with Mercer to convince British Petroleum that their outcomes were 100 times better than what Staywell itself said was possible?

ANS: Refused to answer

Why did you allow a writer to source his own articles, thus creating an AHA policy stand that is clearly in his own financial interest?

ANS: Refused to answer

Is it representative of your peer review policy not to “vet” your peer reviewers to see if they themselves were involved in scandals that are very relevant to the article they are reviewing?

ANS: Refused to answer

Why did you as an organization and the writers of that policy decline The Health Care Blog’s invitation to defend your article against observations that it was totally conflicted and based on data known to be invalid?

ANS: Refused to answer

Why did you allow the writers to cherry-pick the available literature, ignoring the overwhelming evidence against your policy and instead continue to cite the old “Harvard study” whose lead author has now walked it back three times?

ANS: Refused to answer

Why did your editors allow the writers to call this (disavowed) Harvard study “recent” even though it was written in 2009 using data with an average date of 2004?

ANS: Refused to answer

Why did your writers knowingly cite studies that no legitimate health services researcher would find acceptable due to obvious study design flaws, like comparing active motivated participants to non-motivated non-participants, claiming that an outcome on volunteers who persisted in the program for three years is representative of the population as a whole, and taking credit for risk reductions in previously high-risk people that would have happened anyway?

ANS: Refused to answer

Why didn’t you mention that the screening frequencies you are endorsing are far in excess of guidelines set by the United States Preventive Services Task Force?

ANS: Refused to answer

As an association named for the human heart, how come you didn’t publish cautions that the screening frequencies you’re recommending can lead to overdiagnosis, overtreatment and other cardiometabolic harms?

ANS: Refused to answer

Postscript:  Any apologies, retractions, explanations etc. other than answering the questions

A July 17 email from co-author Ross Arena: “I am troubled by these accusations, as is AHA.  I have included an AHA representative who will address this.”  [No AHA response followed.]

A July 17 response from us noted that technically these are observations, not accusations.  We “observed” that their screening policy was co-authored by the CEO of  a screening company.  (We offered to link them to dictionary.com to see the difference between the two words, but they declined.)
