They Said What?


Tag Archives: Paul Terry

And the envelope, please. The best outcomes evaluator in wellness is…

The best outcomes evaluator in the wellness field is Dr. Iver Juster.*

*Among the subset of males not affiliated with They Said What.

Why Dr. Juster’s Case Study Is the Best Case Study Ever Done in This Field

Chapter 2 of the HERO Guide is a great study and deserves high praise. But before we get into the salient points of what makes this absolutely the best case study analysis ever done in this field, be aware that the provenance is not a coincidence. Dr. Juster is very skilled at evaluation. Indeed, he was the first person to receive Critical Outcomes Report Analysis (CORA) certification from the Disease Management Purchasing Consortium. (Dr. Juster very graciously shares the credit and, as described in his comments below, would like to be listed as “the organizer and visible author of a team effort.”)

Note: the CORA course and certification are now licensed for use by the Validation Institute, which has conferred honorary lifetime certification on Iver gratis, to recognize his decades of contribution to this field. (Aside from the licensure, the Validation Institute is a completely independent organization from DMPC, from They Said What, and from me. It is owned by Care Innovations, a subsidiary of Intel. If you would like to take the CORA Certification course live, it is being offered next in Philadelphia on March 27. You can take it online as well.)

Early in the chapter, Iver lists and illustrates multiple ways to measure outcomes. He dutifully lists the drawbacks and benefits of each but, most importantly, notes that they all need to be plausibility-checked with an event-rate analysis, of which he provides a detailed example using data from his own work. In an event-rate analysis, wellness-sensitive medical events are tracked over the period in question.
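
To make the mechanics concrete, here is a minimal sketch of such a tally. The exposure, event categories, and counts below are invented for illustration; they are not Iver’s figures.

```python
# Minimal event-rate tally: all numbers are hypothetical, for illustration only.
MEMBER_MONTHS = 1_200_000  # assumed covered-population exposure

# hypothetical wellness-sensitive hospitalization counts, before and after
events_baseline = {"IVD": 310, "asthma": 240, "CHF": 90, "diabetes": 150, "COPD": 110}
events_study    = {"IVD": 270, "asthma": 205, "CHF": 95, "diabetes": 155, "COPD": 112}

def rate_per_1000(count: int, member_months: int) -> float:
    """Events per 1,000 member-years."""
    return count / (member_months / 12) * 1_000

for category in events_baseline:
    before = rate_per_1000(events_baseline[category], MEMBER_MONTHS)
    after = rate_per_1000(events_study[category], MEMBER_MONTHS)
    print(f"{category}: {before:.2f} -> {after:.2f} per 1,000 member-years "
          f"({(after - before) / before:+.1%})")
```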

Wellness has never been shown to have a positive impact on anything other than wellness-sensitive events. Consequently, there is no biostatistical basis for crediting, for example, “a few more bites of a banana” with, to use our favorite example, a claimed reduction in cost for hemophilia, von Willebrand disease and cat-scratch fever.

[Image: Ted Nugent]

By contrast, real researchers, such as Iver, link outcomes with inputs using a concept called attribution, meaning there has to be a reason logically attributable to the intervention to explain the outcome. It can’t just be a coincidence, like cat-scratch fever. As a result, he is willing to attribute only changes in wellness-sensitive medical events to wellness.

Event-Rate Plausibility Analysis

Event rates (referred to below as “PPH” or “potentially preventable hospitalizations”) are laid out by disease on page 22 of the HERO Report. Note the finding that PPH are a small fraction of “all-cause hospitalizations.” Though the relative triviality of the magnitude of PPH might come as a surprise to people who have been told by their vendors that wellness will solve all their problems, Iver’s hospitalization data sample is representative of the US <65 population as a whole, in which chronic disease events are rare.

[Image: HERO Report, page 23 totals]

Gross savings total $0.99 per employee per month. This figure counts all events suffered by all members, rather than excluding events suffered by non-participants and dropouts. Hence it marks the first time that anyone in the wellness industry has included those people’s results in the total outcomes tally, or even implicitly acknowledged the existence of dropouts and non-participants. He also says, on p. 17:

For example, sometimes savings due to lifestyle risk reduction is calculated on the 20% of the population that supplied appropriate data. It’s assumed that the other 80% didn’t change but if some of the people who didn’t supply risk factor data worsened, and people who got worse were less likely to report their data, that model would overestimate savings.
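
A toy calculation shows why the model Iver describes overstates savings; every number here is invented purely to illustrate the quoted point.

```python
# Toy illustration of the reporting-bias problem quoted above.
population = 10_000
reporters = 2_000                    # the 20% who supplied risk-factor data
non_reporters = population - reporters

savings_per_reporter = 120.0         # hypothetical measured savings
worsening_per_non_reporter = 40.0    # hypothetical unmeasured cost increase

# The flawed model assumes the silent 80% didn't change.
assumed_savings = reporters * savings_per_reporter

# What actually happened if non-reporters worsened.
actual_savings = assumed_savings - non_reporters * worsening_per_non_reporter

print(f"assumed: ${assumed_savings:,.0f}   actual: ${actual_savings:,.0f}")
# assumed: $240,000   actual: $-80,000 -- the model overstates savings.
```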

Note that the PPH declined only in cardiac (“IVD”) and asthma. Besides the event rates themselves being a representative snapshot of the employed US population as a whole, the observed declines in those event rates are almost exactly consistent with national declines over that same period. This decline can be attributed to improvements in usual care, improvements that are achieved whether or not a wellness program was in place. The existence and magnitudes of the declines, coupled with the slight increase in CHF, diabetes and COPD combined (likewise very consistent with national trends), also confirm that Iver’s analysis was done correctly. (Along with attribution, in biostatistics one looks for independent confirmation outside the realm of what can be influenced by the investigator.)
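
For readers who want to replicate that kind of check, here is a sketch of the secular-trend comparison; the rates and the national trend figure below are hypothetical, not HCUP data.

```python
# Secular-trend plausibility check: does the observed decline exceed the
# decline expected from usual care alone? All figures are hypothetical.
baseline_rate = 3.10     # events per 1,000 member-years, year 1
observed_rate = 2.70     # events per 1,000 member-years, year 2
national_trend = -0.12   # assumed national decline in this category

expected_rate = baseline_rate * (1 + national_trend)
residual = observed_rate - expected_rate

print(f"expected from trend alone: {expected_rate:.2f}")
print(f"residual (program plus noise): {residual:+.2f}")
# A residual near zero means the decline mirrors usual care, not the program.
```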

It is ironic that Ron Goetzel says: “Those numbers are wildly off…every number in that chapter has nothing to do with reality” when I have never, ever seen a case study whose tallies — for either total events or event reduction, let alone both — hewed closer to reality (as measured by HCUP) than this one.

Another factor that conveniently gets overlooked in most wellness analyses is that costs other than PPHs rise.  By contrast, Iver is the first person to acknowledge that:

[Image: HERO Report excerpt showing that costs other than PPHs increase]

The implication, of course, is that increases in these costs could exceed the usual care-driven reductions in wellness-sensitive medical events. Indeed, Iver’s acknowledgement proved prescient when Connecticut announced that its wellness program made costs go up.

The $0.99 gross savings, and Connecticut’s healthcare spending increase, exclude the cost of the wellness program itself, of course. Factor in Ron Goetzel’s recommendation of spending $150/year for a wellness program and you get some pretty massive losses.
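
The back-of-the-envelope arithmetic, using the two figures just cited:

```python
# Net impact per employee per month, from the figures cited in this post.
gross_savings_pepm = 0.99               # gross savings from the HERO chapter
program_cost_per_year = 150.00          # Goetzel's recommended annual spend
program_cost_pepm = program_cost_per_year / 12   # $12.50

net_pepm = gross_savings_pepm - program_cost_pepm
print(f"net: ${net_pepm:.2f} per employee per month")   # net: $-11.51
```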


The old Al Lewis would close by making some reference to the dishonesty and cluelessness of the Health Enhancement Research Organization’s board. The new Al Lewis will do just the opposite. In addition to congratulating Iver Juster (and his co-author, Ben Hamlin) on putting this chapter together, I would like to congratulate the Health Enhancement Research Organization, for what Iver describes as the “team effort” in publishing it — HERO’s first flirtation, however fleeting and inadvertent, with integrity and competence.



Iver Juster Comments on the Article

Iver reviewed this article and would like to add several points. I am only adding a couple of my own points, noted in indented italics:

  • It’s important to credit the work to a larger group than just myself. I was the ‘lead author’ on the financial outcomes chapter of the HERO/PHA measurement guide, but the work entailed substantial planning and review in collaboration with the chapter’s coauthor (Ben Hamlin from NCQA) and members of the group dedicated to the chapter (as well as the HERO/PHA authoring group as a whole).
    • Yes, I am more than happy to credit the entire group with this study, especially Ron Goetzel, Seth Serxner and Paul Terry.
  • Nonetheless the work does reflect my perspective and approach on the topic – the important points being (a) select metrics that are impactable by the intervention or program; (b) be transparent about the metric definitions and methodology used to measure and compare them; (c) assiduously seek out potential sources of both bias and noise (in other words, exert the discipline of being curious, which is greatly aided by listening to others’ points of view); (d) understand and speak to the perspective of the study (payer, employee/dependents, clinician/healthcare system, society).
  • Be particularly sensitive to the biologically plausible timeframes in which your outcomes ought to occur, given the nature of the program. Even if optimally implemented with optimal uptake and adherence, we might expect ‘leading indicators’ like initial behavior changes to improve quickly; program-sensitive biometrics (lipids, A1C, blood pressure, BMI) and medication adherence to change in a matter of months; and a few program-sensitive ER/inpatient visits (like worsening heart failure or asthma/COPD exacerbations) to improve within several months (again, assuming the program is designed to address the causes of these events). Longer-term events like kidney failure, heart attack, stroke and retinopathy take much longer to prevent, partly because they require sustained healthy behavior and partly due to the underlying biology.
    • This is one excellent reason that the measured event-rate decline mirrored the secular decline in the US as a whole over the period, meaning the program itself produced no decline over that period. Possibly the rates might decline in future years if Iver is correct. Ron Goetzel would take issue with Iver’s assertion: Ron says risk factors decline only 1-2% in 2-3 years.
  • Event rate measurement in any but the largest commercially insured populations is subject to considerable noise. Though a challenge, estimating confidence intervals should at least shed light on the statistical noisiness of your findings (see the sketch after this list).
    • No need this time because your results hewed so closely to secular trend, reflecting the quality of the analysis.
  • It is very likely that the program used in the illustration did affect more than the events shown because it was a fairly comprehensive population health improvement initiative. For example, ER visits were not counted; and collateral effects of ‘activation’ – a very key component of wellness – were not included in this analysis. Assuming the 99 cents is an accurate reflection of the program’s effect on the events in the chart, I’d be willing to increase the actual claims impact by 50 to 75%.
    • If your speculation is accurate, that would increase gross savings to $1.49-$1.73/month, before counting the preventive care increases indicated on page 22.
  • Nonetheless, to get effect from an effective program you have to increase both the breadth (number of at-risk people) and depth (sustained behavior change including activation) – but at a cost that is less than a 1:1 tradeoff to the benefit. In other words, you must increase value = outcomes per dollar. This cannot be done through incentives alone – as many researchers have shown, if it can be done at all, it must be the result of very sustained, authentic (no lip service!) company culture.
  • We are beginning to pay attention to other potential benefits of well-designed, authentic employee / workplace wellness programs (of which EHM is a part) on absenteeism, presenteeism, employee turnover and retention – and, importantly, company performance (which is after all what the company is in business to do). It’s early days but it’s possible research will show that companies that are great places to work and great places to have in our society will find financial returns that far outstrip claims savings. The jury’s still out on this important topic but let’s help them deliberate transparently and with genuine curiosity.
  • Did Ron really say you have to spend $150 per year PER MEMBER on a wellness program? I’d be thinking a few dollars (unless he’s including participation incentives).
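
As promised above, here is a minimal sketch of the confidence-interval point Iver raises, using the normal approximation to the Poisson distribution; the counts and exposure are invented.

```python
# 95% confidence interval for an event rate via the Poisson normal
# approximation (crude for small counts); all inputs are hypothetical.
import math

events = 270              # hypothetical admissions observed
member_years = 100_000    # hypothetical exposure

rate = events / member_years * 1_000                        # per 1,000 member-years
half_width = 1.96 * math.sqrt(events) / member_years * 1_000

print(f"rate: {rate:.2f} per 1,000 member-years "
      f"(95% CI {rate - half_width:.2f} to {rate + half_width:.2f})")
```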

 

And the Envelope, Please. The Best Outcomes Evaluator in Wellness Is…*

*Among the subset of males not affiliated with They Said What.


Alert readers may recall that my New Year’s resolution was to balance my negative postings about the wellness industry with positive ones. Like Diogenes searching for an honest man, I thought finding the latter would be hard, but, just as Romy Antoine did earlier this month, the subject of this posting (to be named in Part Two) makes that easy. Part One sets the stage for the review of his study.

By way of background, in preparation for bringing a possible lawsuit, I re-read the famous Chapter 2 of the equally famous HERO report. That was the chapter which inspired Ron Goetzel, Seth Serxner and Paul Terry (who was recently anointed as the American Journal of Health Promotion’s new Fabricator-in-Chief) to circulate their defamatory letter about me to the media, in a singularly self-immolating attempt to discourage them from publishing my material.  They insisted that Chapter 2 was pure fabricated nonsense, rather than a carefully analyzed report of real data.  Here is an excerpt from their actual letter, copies of which are available from me but which is summarized here:

A fabricated…absurd, mischievous and potentially harmful misrepresentation of our data.

Ron said it best in our Great Debate, minute 1:17 in the MP3 downloadable here:

Those numbers are wildly off…every number in that chapter has nothing to do with reality. 

However, the sun rises in the east, taxes are due April 15th, and Ron Goetzel is lying.  Quite the contrary, Chapter 2 turns out to be a carefully analyzed report of real data — almost certainly the best case study ever published.

How did I learn that Ron was fabricating a story that his guidebook had fabricated a story?

  1. This chapter says it’s a real report, on p. 22.
  2. Since this chapter’s analysis was so far above the pay grade of those three aforementioned HERO characters, I checked the acknowledgements in the HERO book. Sure enough, none of the HERO cabal wrote it. Someone else (to be named in the next posting) was the lead author, and I called to congratulate him on it. I also asked him some background questions, one of which proved very revealing. It turns out that…
  3. This real analysis of real data was — get ready — reviewed prior to publication by the exact same people who are disowning it now. Yes, among the people who peer-reviewed it prior to publication were the very same Ron Goetzel, Seth Serxner, and Paul Terry. (In addition to them doing the actual review, the lead author, very graciously sharing the credit, wanted to make sure that I indicate that he was only the “organizer and visible author of a team effort.”)

Yes, as is so often the case with these three, they lied about the lies that they lied about.  It’s quite ironic that their argument against my original praise of this analysis was to insist that because my source was their own lies, my own analysis was unreliable.  These lies above don’t include the actual lies I might sue them about, which were lies about me, which are totally separate from their lies about their previous lies. (Their lie about me was that I had a history of outrageously inaccurate statements, none of which they have ever been able to identify.)

These characters aren’t ordinary run-of-the-mill alternative-fact-type liars. They’re way beyond that.

Their lies go to 11.

[Image: goetzel-on-fire]


Coming soon, the reveal…

American Journal of Health Promotion Announces New Fabricator-in-Chief — Paul Terry

Paul Terry, formerly of Staywell and the Health Enhancement Research Organization, has just been appointed the new editor of the wellness industry trade publication, the American Journal of Health Promotion.  He replaces Michael “Let’s Charge Employees Insurance by the Pound” O’Donnell in that role.

Mr. Terry brings exactly the type of expertise to this job that AJHP readers have come to expect, in that very few people can claim to surpass Mr. Terry’s ability to fabricate outcomes.

I first became familiar with Mr. Terry’s work when Staywell claimed mathematically impossible savings for British Petroleum’s pry-poke-and-prod wellness program, which I dutifully reported on The Health Care Blog in the posting “BP’s Wellness Program is Spewing Invalidity.” Staywell, as a preferred vendor of Mercer, was able to “convince” Mercer to fabricate savings when their client, BP, asked for an evaluation. Staywell pretended to have saved almost $20,000 for every risk factor reduced among active participants (meaning dropouts’ and non-participants’ failures aren’t counted).

This was quite a feat considering that the average employee only spent about $5000 during the year in which this analysis was conducted. And of course only a tiny percentage of healthcare costs in the short term are attributable to risk factor reduction anyway. (Staywell was offered the opportunity to rebut, and didn’t.)

But the smoking gun here was that Mr. Terry apparently forgot that Staywell itself only claimed to be able to save $129/risk factor reduced. Magnanimous guy that I am, I was kind enough to point out that integrity chasm for him in the article.
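
For scale, here is the arithmetic implied by those figures (the approximate numbers cited in this post):

```python
# How far the BP claim outruns Staywell's own benchmark and total spend.
claimed_per_risk_factor = 20_000   # approximate claimed savings per risk factor
staywell_benchmark = 129           # Staywell's own published savings figure
avg_annual_spend = 5_000           # approximate average employee spend that year

print(f"claim vs. Staywell's own number: {claimed_per_risk_factor / staywell_benchmark:.0f}x")
print(f"claim vs. total average spend:  {claimed_per_risk_factor / avg_annual_spend:.0f}x")
# ~155x Staywell's own figure, and 4x what the average employee spent in total.
```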


Most people, when they are caught fabricating data, try to deny it. But Paul Terry brags about it.   In case you haven’t already done so, take a looksee at his defamatory letter to the media that he sent, along with his cronies Ron Goetzel and Seth Serxner. He insists that they made up the data I reviewed — meaning his best argument against me is that I didn’t realize he was lying. If we take him to court, he could argue that the judge should apply the legal standard for negligence — that I “knew or should have known” their data was fabricated, because all their data is fabricated.

Ironically, it turns out the data they insisted was fabricated was, this time, legitimate, meaning that he was making up his claim that HERO had made up the data. That’s a topic for another blog. Suffice it to say that, in the immortal words of the great philosopher LL Cool J, he lied about the lies he lied about.

Most importantly, if you read the letter he wrote, you’ll see that another of his arguments is that, when calculating ROI, you should not compare costs to savings. And a good thing, too, because comparing costs to savings, and other feats of arithmetic, would be the wellness industry’s second-worst nightmare (next to facts).

Refusing to acknowledge the existence of basic arithmetic makes Mr. Terry a perfect choice to be editor of the wellness industry trade publication.

 

Goetzel, Koop Committee, Staywell, Mercer, BP America meet Groundhog Day

Perhaps the strategy of the leaders of the wellness ignorati (who constitute the Koop Committee) is to overwhelm us with so many lies that we don’t have time to expose every one and still get home in time for dinner.

No sooner had we finished pointing out the numerous (and unrebutted) implausibilities and internal inconsistencies in Ron Goetzel’s posting on the value of workplace wellness than the Koop Committee (Mr. Goetzel and his cabal) fed us even more red meat: they gave the 2014 Koop Award to British Petroleum. However, apparently only British Petroleum wants to tell the world about it. The Koop Committee hasn’t even updated its own website to list 2014 award winners.

Recall that we’ve spent months excoriating Goetzel and his sidekicks (Wellsteps’ Steve Aldana, Milliman’s Bruce Pyenson, Mercer’s Dan Gold and the rest of them) for doing three things in making the Nebraska award, for a program that prima facie seems to be in violation of Nebraska’s state contractor anti-fraud regulations:

(1)   Gave it to a program where the numbers were obviously fabricated and later admitted to be

(2)   Gave it to a program whose vendor sponsors the Committee

(3)   Forgot to disclose in the announcement that the vendor sponsors the Committee

Perhaps what you are about to read isn’t their fault.  Perhaps their mothers simply failed to play enough Mozart while the Committee members were in their respective wombs, but here’s how they applied the learning from the Nebraska embarrassment to their decision to award British Petroleum.  This time they:

(1)   Gave it to a program where the numbers had already been shown to be fabricated

(2)   Gave it to a program whose vendor sponsors the Committee

(3)   Forgot to disclose in the announcement that the vendor (Staywell) sponsors the Committee

(4)   Forgot to disclose in the announcement that the vendor sits on the Committee

(5)   Forgot to disclose in the announcement that the consulting firm (Mercer) sponsors the Committee

(6)   Forgot to disclose in the announcement that the consulting firm sits on the Committee

 

[Image: Koop Award materials listing Mercer and Staywell as sponsors]

I suspect we will be writing a similar analysis again next year, when once again, the Committee will attempt to demonstrate the value of sponsoring a C. Everett Koop Award.

American Heart Association promotes StayWell while violating its conflict of interest policy

[Images: American Heart Association and StayWell logos]


Short Summary of Company:

AHA wellness: “The American Heart Association’s Worksite Wellness Kit encourages companies to give employees an excuse to get away from their desks.”

Staywell: “StayWell helps clients across the health care spectrum address the changing landscape like no other company. We leverage the latest technology, enhanced analytics, and deep consumer insights in an integrated portfolio of best-in-class client solutions.”

Materials Being Reviewed

Questions for AHA

Your conflict-of-interest statement says you “make every effort to avoid actual or potential conflicts of interest that may arise as a result of an outside relationship.” Why doesn’t letting the Chief Science Officer of a wellness company write your wellness policy citing his own articles in support of wellness violate that policy?

ANS: Refused to answer

Were you aware that Staywell perpetrated a scheme in which they worked with Mercer to convince British Petroleum that their outcomes were 100 times better than what Staywell itself said was possible?

ANS: Refused to answer

Why did you allow a writer to source his own articles, thus creating an AHA policy stand that is clearly in his own financial interest?

ANS: Refused to answer

Is it representative of your peer review policy not to “vet” your peer reviewers to see if they themselves were involved in scandals that are very relevant to the article they are reviewing?

ANS: Refused to answer

Why did you as an organization and the writers of that policy decline The Health Care Blog’s invitation to defend your article against observations that it was totally conflicted and based on data known to be invalid?

ANS: Refused to answer

Why did you allow the writers to cherry-pick the available literature, ignoring the overwhelming evidence against your policy and instead continuing to cite the old “Harvard study” whose lead author has now walked it back three times?

ANS: Refused to answer

Why did your editors allow the writers to call this (disavowed) Harvard study “recent” even though it was written in 2009 using data with an average date of 2004?

ANS: Refused to answer

Why did your writers knowingly cite studies that no legitimate health services researcher would find acceptable due to obvious study design flaws, like comparing active motivated participants to non-motivated non-participants, claiming that an outcome on volunteers who persisted in the program for three years is representative of the population as a whole, and taking credit for risk reductions in previously high-risk people that would have happened anyway?

ANS: Refused to answer

Why didn’t you mention that the screening frequencies you are endorsing are far in excess of guidelines set by the United States Preventive Services Task Force?

ANS: Refused to answer

As an association named for the human heart, how come you didn’t publish cautions that the screening frequencies you’re recommending can lead to overdiagnosis, overtreatment and other cardiometabolic harms?

ANS: Refused to answer

Postscript:  Any apologies, retractions, explanations etc. other than answering the questions

A July 17 email from co-author Ross Arena: “I am troubled by these accusations, as is AHA.  I have included an AHA representative who will address this.”  [No AHA response followed.]

A July 17 response from us noted that technically these are observations, not accusations. We “observed” that their screening policy was co-authored by the CEO of a screening company. (We offered to link them to dictionary.com to see the difference between the two words, but they declined.)
