Tag Archives: Koop Award
HERO (Health Enhancement Research Organization) Crowdsources Arithmetic
This is the third in a series deconstructing the Health Enhancement Research Organization’s (HERO) attempt to replace the basic outcomes measurement concepts presented to the human resources community in Why Nobody Believes the Numbers with a crowdsourced consensus version of math. The first installment covered Pages 1 to 10 of their outcomes measurement report, where HERO shockingly admitted wellness hurts morale and corporate reputations. The second installment jumped ahead to page 15, where HERO shockingly admitted wellness loses money. This report covers pages 11-13. Next week we shall be covering Page 14.
Spoiler Alert: The wellness industry believes that math is a popularity contest. (We have a million-dollar reward if they can show that’s true. More on that later.)
All the luminaries in the wellness industry got together to crowdsource arithmetic, and put their consensus (a word they use 50 times) in an 88-page report. Unfortunately, math is not a consensus-based discipline, like democracy. It is not even an evidence-based discipline, like science. It is a proof-based discipline. A methodology that doesn’t work in hypothetical mathematical circumstances is proven wrong no matter how many votes it gets.
The pages in question list 7 “methodologies” for measuring outcomes. To begin with, consider the absurdity of having 7 different ways to measure. Imagine if you asked your stockbroker how much money you made last year, and were told: “Well, that all depends. You could measure that seven different ways. And by the way, six of those ways will overstate your earnings.” Math either works or it doesn’t. There is only one right answer.
Methodology #1: “Cost Trend Compared with Industry Peers”
This methodology “may require consulting expertise.”
As a sidebar, one of the many ironies of this HERO report is that most of these methodologies emphasize the need for actuarial or consulting "inputs" or "analytic expertise"…and yet no mention was made on Page 10 of the cost of this expertise when all the elements of cost were listed. Though never counted as a cost element, consulting firms are very expensive. And even if consulting were free, we generally recommend that outcomes report analysis be done only by consultants who are certified in Critical Outcomes Report Analysis by the Validation Institute.
By contrast, Staywell and Mercer offer an example of what happens when you as a buyer use non-certified “consulting expertise” to evaluate a vendor. Here’s what happens: the vendor wins. Needless to say, Staywell showing savings 100x greater than what Staywell itself said was possible simply by reducing a few employees’ risks raises a lot of questions. But despite repeated requests and offers of honoraria to answer these questions, Mercer wouldn’t answer and the only response Staywell gave us was to accuse us of bullying them. Staywell and Mercer held firm to the Ignorati strategy of not commenting—even though Mercer was representing the buyer (British Petroleum), not the vendor. Oh, yes—both Staywell and Mercer are represented on the HERO Steering Committee.
To HERO's credit, they do admit the obvious for Methodology #1: If all your peers are using the same vendors, who recommend the same worthless annual checkups, the same overscreening/overdiagnosis, the same lowfat(!) diets, and the same consultants to evaluate all the phony savings attributable to these checkups, diets, and biggest-loser contests, obviously you'll get the same results. And since trend is going down everywhere (including Medicare and Medicaid, which have no wellness), everyone gets to "show savings."
Methodology #2: “Inflection on expected cost trend.”
Mercer has been a big proponent of this methodology, as in the previous Staywell example. At one point they used "projected trend" to find mathematically impossible savings for the state of Georgia's program even though the FBI(!) later found the program vendor, APS, hadn't done anything. In North Carolina, they projected a trend that allowed them to show massive savings in the state's patient-centered medical home largely generated, as luck would have it, by a cohort that wasn't even eligible for that medical home.
Comparing to an “expected” trend is one of the most effective sleight-of-hand techniques in the wellness industry arsenal. Every single published study in a wellness promotional journal comparing results to “expected trend” has found savings. And have you ever hired a consultant or vendor to compare your results to “expected trend” who hasn’t found “savings”? We didn’t think so.
QED.
Methodology #3: “Chronic vs. non-chronic cost trend.”
The humor in this methodology is twofold.
First, the HERO Committee already knows this methodology is invalid because it was disproven in Why Nobody Believes the Numbers (and I offered an unclaimed $10,000 reward for finding a mistake in the proof). We know that people on the Committee have read my book because at least one of them – Ron Goetzel – used to copy selected pages from it until the publisher, John Wiley & Sons, made him stop. Methodology #3 was the fallacy on which the entire disease management industry was based. I myself made a lot of money measuring outcomes this way, until I myself proved I was wrong. At that point, integrity being more important to me than money, I changed course abruptly, as memorably captured by Vince Kuraitis' headline: Founding Father of Disease Management Astonishingly Declares: "My Kid Is Ugly." (Naturally the benefits consulting industry filled the vacuum created by my withdrawal from this market, and plied their clients with worthless outsourced programs that more than coincidentally generated a lot of consulting fees.)
If you had perfect information and knew who had chronic disease (before the employees themselves did) and everyone stayed put in either the non-chronic or chronic categories, you could indeed use non-chronic trend as a benchmark, mathematically (though the epidemiology is still very squirrelly). The numbers would add up, at least in a hypothetical case.
But we can’t identify anywhere near 100% of the employees who have chronic disease. Absent that perfect information, any fifth grader could understand the proof that this methodology is fabricated, as follows. Assume that 10 people with a chronic disease cost $10,000 apiece both in the baseline and in the study period. Their costs are therefore flat. The program did not reduce costs between periods.
Now add in 10 people with undetected chronic disease as the "non-chronic benchmark." Maybe they are ignoring their disease, maybe they don't know they have it, maybe they are misdiagnosed, maybe the screen was wrong (vendor finger-pricks are very unreliable). Assume these 10 people cost $5,000 in the baseline…but they have events in the study period so their costs become $10,000.
That makes the “non-chronic trend” 100%! Suddenly, the program vendor looks much better because they kept the costs of the chronically ill cohort constant even though the “benchmark” inflation was 100%.
Second, Why Nobody Believes the Numbers has already shown how to make this methodology valid mathematically (though the epidemiology applied to that math might still be squirrelly, and there could still be random error in non-hypothetical populations). You simply apply a "dummy year analysis" to the above example: do exactly what is described above, but for a year-pairing before the program started. That tells you how much regression-to-the-mean bias is built in, and you then apply that bias to the study years. So if in fact the "non-chronic trend" is always 100% due to the people with unrecognized chronic disease, you would net that 100% out of the benchmark non-chronic trend before applying the benchmark to the chronic population. In this case, as in every case, the bias is eliminated. This is called the Dummy Year Adjustment. (Chapter 1 of Why Nobody Believes the Numbers offers several examples of the DYA.)
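To make that concrete, here is a minimal sketch (in Python, since the arithmetic is fifth-grade stuff anyway) of the hypothetical example above and of the Dummy Year Adjustment applied to it. The dollar figures are the ones from the example; the function and variable names are ours for illustration, not anything taken from the HERO report or the book.

```python
# Minimal sketch of the hypothetical above: why the "non-chronic" benchmark
# trend is inflated by unrecognized chronic disease, and how a Dummy Year
# Adjustment (DYA) nets that bias out. Names are illustrative only.

def trend(baseline_cost, study_cost):
    """Percentage change between two periods."""
    return (study_cost - baseline_cost) / baseline_cost

# Identified-chronic cohort: 10 people at $10,000 in both periods (flat).
chronic_baseline = 10 * 10_000
chronic_study    = 10 * 10_000

# "Non-chronic" benchmark: 10 people with unrecognized chronic disease who
# cost $5,000 in the baseline but have events and cost $10,000 in the study year.
benchmark_baseline = 10 * 5_000
benchmark_study    = 10 * 10_000

naive_benchmark_trend = trend(benchmark_baseline, benchmark_study)  # 1.00, i.e. "100% trend"
chronic_trend         = trend(chronic_baseline, chronic_study)      # 0.00, costs were flat

# Naive Methodology #3: credit the program with the gap between the benchmark
# trend and the chronic cohort's actual trend.
naive_savings = chronic_baseline * (naive_benchmark_trend - chronic_trend)
print(f"Naive 'savings': ${naive_savings:,.0f}")

# Dummy Year Adjustment: run the same comparison on a year-pairing BEFORE the
# program existed to measure the built-in bias, then net it out of the benchmark.
dummy_year_bias = 1.00  # in this hypothetical, the pre-program "non-chronic" trend is also 100%
adjusted_benchmark_trend = naive_benchmark_trend - dummy_year_bias

dya_savings = chronic_baseline * (adjusted_benchmark_trend - chronic_trend)
print(f"DYA-adjusted savings: ${dya_savings:,.0f}")
```

Run it and the do-nothing "program" shows $100,000 of savings under the naive benchmark and $0 once the dummy-year bias is netted out.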
Proofs are best understood to be proofs if accompanied by rewards, since only an idiot would monetarily back a proof that wasn’t a proof. So here’s what we propose for this one: I’ll up my $10,000 reward to $1,000,000. A panel of Harvard mathematicians can decide who is mathematically right. The HERO Committee escrows a $100,000 nuisance fee for wasting my time and paying for the panel if they are wrong. (We’ll pay if we lose.) They present Methodology #3. We lose the $1,000,000 if the panel votes that this HERO methodology is valid without our “Dummy Year Adjustment.”
My challenge: Either collect your $1,000,000, or publicly apologize for proposing a methodology which you know to be made up. Or is offering you a million dollars “bullying,” a word defined very non-traditionally in this field? Our bad.
Yes, we know this sounds like a big risk but you might remember the old joke:
Science teacher: “If I drop this silver dollar into this vat of acid, will it dissolve?”
Student: “No, because if it would, you wouldn’t do it.”
Methodologies 4 and 5: The Comparison of Participants to Non-Participants
Quite apart from the lack of any intuitive sense in treating active, motivated, engaged participants as equivalent to inactive, unmotivated non-participants, Ron Goetzel has already admitted this methodology is invalid. Health Fitness Corporation accidentally proved as much on the slide below.
Note that they “matched” the participant (blue) and reference (red) groups in the 2004 “baseline year” but didn’t start the “treatment” until 2006. However, in 2005, they already achieved 9% savings vs. the “reference group” even without a program. This “mistake” was in plain view, and was pointed out to them many times, politely at first. Page 85 of Why Nobody Believes the Numbers showed it, but as the screenshot below shows, I was too polite to mention names or even to call it a lie, figuring that as soon as Health Fitness Corporation or Ron Goetzel saw it, they/he would be honest enough to withdraw it.
Not knowing the players well, I naively attributed the fact that HFC used this display to a rookie mistake, rather than dishonesty. That was plausible because rookie mistakes are the rule rather than the exception in this field. (As we say in Surviving Workplace Wellness, the good news about wellness vendors is that NASA employees don’t need to worry about their job security because these people aren’t rocket scientists.)
On the advice of colleagues more familiar with the integrity of the wellness industry true believers, I also tried a test of the rookie-mistake hypothesis: I strategically placed the page with this display next to the page that I knew Ron Goetzel would be reading (and copying), a page whereon I complimented him on his abilities. I might be the world's only bully who publicly compliments his victims and offers to pay them money:
That way, I would know that if Mr. Goetzel and his Koop Committee and their sponsor HFC didn't remove this display, it was due to a deliberate intention to mislead people, not an oversight or rookie mistake.
Sure enough, that display continued to be used for years. Finally, a few months ago, faced with the bright light of being "bullied" in Health Affairs, HFC withdrew the slide. Ron "the Pretzel" Goetzel earned his moniker, twisting and turning his way around how to spin the fact that this "mistake" was ignored for so long despite all the times it had been pointed out. He ended up declaring the slide "was unfortunately mislabeled." He gave no hint as to who did the unfortunate mislabeling, despite being repeatedly asked. We suspect the North Koreans. The whole story is here.
Summary and Next Steps
The first five of these methodologies in Pages 13-14 have several things in common:
- They all contradict the 6th methodology;
- They contradict the statement on page 15 that the only significant savings is in reducing admissions. Of course, self-contradiction is embedded in Wellness Ignorati DNA. To paraphrase the immortal words of the great philosopher Ned Flanders, the Wellness Ignorati “believe all the stuff in wellness is true. Even the stuff that contradicts the other stuff.”
- They call for megadoses of consulting and analytic expertise, contradicting the list on Page 10 that omits the cost of outside expertise.
Speaking of Methodology #6, our next installment will cover it. It's called event-rate-based plausibility testing. I would know a little something about that methodology, since I invented it. I am flattered that the Wellness Ignorati, seven years later, are finally embracing it. I am even more flattered that they aren't attributing authorship to me. No surprise. That's how the Wellness Ignorati got their name – by ignoring inconvenient facts. Ignoring facts means they cross their fingers that their customers don't have internet access. Customers who do can simply google "plausibility test" and "disease management" and see whose name pops up.
Stop the Presses: We Goofed!
When you are in the "counter-errorism" business like we are, it's important to have zero tolerance for errors. Occasionally one slips through. In that case the important thing to do is to admit it, rather than fire the Attorney General and the Special Prosecutor and have your secretary erase the tape.
Vik and I wrote a posting for The Health Care Blog, the upshot of which was that the Affordable Care Act should no longer require insurers to cover adult checkups. Free checkups are ubiquitous in self-administered plans, and our posting shows what grownups in health services research already know: checkups are worthless. Not content merely to subsidize something worthless in full, many employers — guided by benefits consultants — attach additional money to them: you are either fined for not getting one or else receive a bonus for getting one. Our proposed solution was/is quite simple: employers that attach bonuses or fines to physical exams need to disclose that checkups are a waste of time and money. That simple disclosure requirement would end forced checkups.
It turns out, however, that adult checkups are not required by ACA. It was a complete benefits consulting urban legend and we fell for it. So we were wrong.
Here are the lessons from this.
First, if someone proves us wrong, we admit it. See? Admitting error is a concept that is lost on the wellness and benefits consulting industries. For example, after we pointed out that Mercer's client British Petroleum got completely snookered by Staywell, the response of Mercer to BP wasn't: "We apologize for letting your vendor snooker you." It was: "Let's nominate this program for a Koop Award," which naturally they won because both Mercer and Staywell are represented on the board of the group that gives out the award.
Second, the particular someone who proved us wrong, Chris Glason, did not “bully” us. He merely asked a tough question that invalidated a (minor) premise of our argument. However, when we do something quite similar, the people who are wrong (or lying) say we are “bullying” them. But all we did was ask 11 questions to clarify what someone already said — and offer him $1000 to answer the questions. Trust us: if Chris Glason had offered us $1000 to look it up and get back to him, we would have. (Instead we were rather dismissive, to put it mildly, for which we also publicly apologize.)
Third, our mistake was to assume that benefits consultants actually know something about, well, benefits consulting. We know they know nothing about wellness—Mercer and Milliman have both basically self-immolated by participating in the aforementioned Koop Award Committee and getting snookered four times by dishonest vendors. On two occasions the Committee was forced to backtrack as a result of our exposés, though they never admitted they got snookered. We kind of assumed that since benefits consultants don't know anything about wellness, the only way they could stay in business was to actually know the first thing about benefits…and we listened to them.
Sidebar: a few benefits consultants are highly competent. We recommend the ones whom the Validation Institute (which is not connected with us but which we have a lot of respect for) has certified. (Don’t strain your eyes–no one from Mercer or Milliman appears on their listing.)
Fourth and most importantly, the answer doesn’t change: End “pry, poke, prod and punish” programs — especially the “prodding” part, now that even benefits consultants can see that prodding someone to go to the doctor when they aren’t sick is a complete waste of time and money.
“I made a mistake. I listened to the experts.”
— John F. Kennedy
Ron Goetzel’s “Dumb and Dumber” Defense Deflects Latest Koop Award Ethical Scandal
By Al and Vik
Oh, the twists and turns as Ron “The Pretzel” Goetzel tries to wriggle out of all his ethical stumbles.
This time around, we thought we had nailed both him and his cabal handing out the ironically named C. Everett Koop Award to themselves and their friends based on made-up outcomes. Specifically, this time they gave their sponsor (Health Fitness Corporation, or HFC) an award based on data that was obviously made up, that no non-sponsor could have gotten away with submitting. This was the third such instance we’ve uncovered of a pattern of giving awards to sponsors for submitting invalid data while making sure that the award announcement contains no reference to the sponsorship. (There are probably others; we’ve only examined 3, which might explain why we’ve only found 3.)
How obviously was the data made up? Well, take a looksee at this slide, comparing participants to non-participants. This is the classic wellness ignorati ruse: pretending that non-motivated inactive non-participants can be used as a valid control for comparison to active, motivated participants. The wellness ignorati would have us believe that any healthcare spending “separation” between the two groups can be attributed to wellness programs, not to inherent differences in motivation between the two groups. Unfortunately for the ignorati, their own slide invalidates their own argument: in 2005, the label “Baseline Year” shows there was no program to participate in, and yet – as their own slide shows – participants (in blue) significantly underspent non-participants (in red) nonetheless. In Surviving Workplace Wellness, we call this “Wellness Meets Superman,” because the only way this could happen is for the earth to spin backwards.
Given that the 2005 baseline label was in plain view, we just assumed that HFC did indeed not have a program in place for this customer (Eastman Chemical) in 2005, which is why they called 2005 a "Baseline Year" instead of a "Treatment Year." Not actually having a program would logically explain why they said they didn't have a program, and why they used that display or variations of it like the one below for 4 years with the exact same label. Presumably if they had had a program in 2005, someone at HFC would have noticed during those 4 years and relabeled it accordingly.
Originally we thought the Koop Award Committee let this invalidating mistake slide because HFC — and for that matter, Eastman Chemical — sponsor the awards they somehow usually win. But while trying to throw a bone to HFC, the Koop Award luminaries overlooked the profound implication that the year 2005 separation of would-be participants and non-participants self-invalidated essentially the entire wellness industry, meaning that it is an admission of guilt that the industry-standard methodology is made up.
Goetzel the Pretzel to the rescue. He painstakingly explains away this prima facie invalidation. Apparently the year 2005 was "unfortunately mislabeled." Note the pretzelesque use of the passive voice, like "the ballgame was rained out," seemingly attributing this mislabeling to an act of either God or Kim Jong-un. He is claiming that instead of noticing this invalidator and letting this analysis slide by with a wink-and-a-nod to their sponsor, none of the alleged analytical luminaries on the Koop Committee noticed that the most important slide in the winning application was mislabeled — even though this slide is in plain view. We didn't need Edward Snowden to hack into their system to blow up their scam. They once again proved our mantra that "in wellness you don't need to challenge the data to invalidate it. You merely need to read the data. It will invalidate itself."
We call this the “Dumb and Dumber” defense. Given two choices, Goetzel the Pretzel would much prefer claiming sheer stupidity on the part of himself, his fellow Koop Award committee members like Staywell’s David Anderson and Wellsteps’ Steve Aldana, and his sponsor HFC, rather than admit the industry’s methodology is a scam and that they’ve been lying to us all these years to protect their incomes.
Still, the Dumb-and-Dumber defense is a tough sell. You don’t need Sherlock Holmes, Hercule Poirot or even Inspector Clouseau to detect a few holes in the Pretzel’s twisted logic:
- How could no one – no member of the Koop Award Committee or employee of Health Fitness Corporation (which used this as its “money slide” for years) – have noticed this until we pointed it out for the third time (the first two times not being as visible to the public)?
- In early 2012, this slide was reproduced–with the permission of Health Fitness Corporation–right on p. 85 of Why Nobody Believes the Numbers, with the entire explanation of its hilarious impossibility. We know Mr. Goetzel read this book, because he copied material out of it before the publisher, John Wiley & Sons, made him stop. So we are curious as to why it has taken until now for him to notice this “unfortunate mislabeling.” Hmm…would the fact that it was just exposed to the world in Health Affairs have anything to do with this sudden epiphany? We’re just sayin’…
- If indeed it was just an “unfortunate mislabeling,” how come HFC has now expunged all references to this previously highlighted slide from their website, rather than simply change the label?
As regards the third point, we would recommend that next time Mr. Goetzel invokes the Dumb-and-Dumber defense, he coordinate his spin with his sponsor.
But let’s not overlook the biggest point: the entire Koop Committee – including “numbers guys” like Milliman’s Bruce Pyenson and Mercer’s Dan Gold — is apparently incapable of reading a simple outcomes slide, as they’ve proven over and over.
So, as a goodwill gesture, we will offer a 50% discount to all Koop Committee members for the Critical Outcomes Report Analysis course and certification. This course will help these committee members learn how to avoid the embarrassing mistakes they consistently otherwise make and (assuming they institute conflict-of-interest rules as well to require disclosure of sponsorships in award announcements) perhaps increase the odds that worthy candidates win their awards for a change.
Goetzel, Koop Committee, Staywell, Mercer, BP America meet Groundhog Day
Perhaps the strategy of the leaders of the wellness ignorati (who constitute the Koop Committee) is to overwhelm us with so many lies that we don’t have time to expose every one and still get home in time for dinner.
No sooner had we finished pointing out the numerous (and unrebutted) implausibilities and internal inconsistencies in Ron Goetzel's posting on the value of workplace wellness than the Koop Committee (Mr. Goetzel and his cabal) fed us even more red meat: They gave the 2014 Koop Award to British Petroleum. However, apparently only British Petroleum wants to tell the world about it. The Koop Committee hasn't even updated its own website to list 2014 award winners.
Recall that we’ve spent months excoriating Goetzel and his sidekicks (Wellsteps’ Steve Aldana, Milliman’s Bruce Pyenson, Mercer’s Dan Gold and the rest of them) for doing three things in the Nebraska award, for a program that prima facie seems to be in violation of Nebraska’s state contractor anti-fraud regulations:
(1) Gave it to a program where the numbers were obviously fabricated and later admitted to be
(2) Gave it to a program whose vendor sponsors the Committee
(3) Forgot to disclose in the announcement that the vendor sponsors the Committee
Perhaps what you are about to read isn’t their fault. Perhaps their mothers simply failed to play enough Mozart while the Committee members were in their respective wombs, but here’s how they applied the learning from the Nebraska embarrassment to their decision to award British Petroleum. This time they:
(1) Gave it to a program where the numbers had already been shown to be fabricated
(2) Gave it to a program whose vendor sponsors the Committee
(3) Forgot to disclose in the announcement that the vendor (Staywell) sponsors the Committee
(4) Forgot to disclose in the announcement that the vendor sits on the Committee
(5) Forgot to disclose in the announcement that the consulting firm (Mercer) sponsors the Committee
(6) Forgot to disclose in the announcement that the consulting firm sits on the Committee
I suspect we will be writing a similar analysis again next year, when once again, the Committee will attempt to demonstrate the value of sponsoring a C. Everett Koop Award.
HealthFitness takes credit for program savings without having a program
HealthFitness
Short Summary of Intervention:
“When you partner with HealthFitness, we work collaboratively with you to develop a strategic plan for program implementation, which includes a cultural assessment and an operational plan. You can expect results-oriented programs and services delivered through a highly personalized strategy, matched to your employees and culture.”
Materials Being Reviewed:
Success at risk reduction and translation of that risk reduction into cost savings. These excerpts are from the successful Koop Award application at http://www.thehealthproject.com/documents/2011/EastmanEval.pdf.
Summary of key figures and outcomes:
- Reduction in risk factors from 3.20 to 3.03 — net change of 0.17 — over 5 years. This success excludes dropouts.
- 24% improvement in costs vs. non-participants, or $460/year at Eastman Chemical (currently up to >$500/year according to HFC website)
Questions for Health Fitness Corporation:
Since only about 20% of all inpatient events are wellness-sensitive, and you only reduced risk factors by 0.17 per person, and hospital expenses are at most 50% of total spending, how is it that you are able to reduce spending by 24%?
ANS: Refused to answer
Why did you take credit for savings in 2005, even though according to your own slide you didn’t have a program in 2005?
ANS: Refused to answer
Does starting the Y-axis at $1800 instead of $0 create the illusion of greater separation between the two cohorts?
ANS: Refused to answer
Your website says that comparing participants to non-participants “adheres to statistical rigor and current scientific standards for program evaluation” and “is recognized by the industry as the best method for measurement in a real-world corporate wellness program.” Can you explain how non-motivated non-volunteers who decline financial incentives to improve their health are comparable to motivated volunteers, especially in light of the separation between the two groups that took place just on the basis of differential mindset in 2005, before you had a program?
ANS: Refused to answer
You and your customers have won three Koop Awards in the last 4 years. Do you think also being a sponsor of the Koop Award (along with Eastman, in this case) has helped you win these awards or is this just a coincidence?
ANS: Refused to answer
Why Nobody Believes the Numbers defines the "Wishful Thinking Multiplier" as "alleged cost savings divided by alleged risk reduction." Your cost savings is $460 and your risk reduction is 0.17, for a Wishful Thinking Multiplier of about 2,700, the highest in the industry. The book calculates that a risk reduction of your magnitude (even assuming dropouts also reduced risk by the same amount) could generate roughly an $8 reduction in annual spending. To what do you attribute your ability to reduce spending by 50x what is mathematically possible? (The arithmetic is sketched after this list of questions.)
ANS: Refused to answer
Help us with the arithmetic below, also from this Koop Award application.
How is it mathematically possible to have a higher ROI ($3.62) when also including the cost of incentives in program expense than the ROI ($3.20) excluding the cost of paying incentives to employees to participate?
ANS: Refused to answer
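For anyone who wants to check our arithmetic on the plausibility questions above, here is a back-of-the-envelope sketch using only the figures cited in this post (20% of inpatient events being wellness-sensitive, hospital spending at most 50% of total, a 0.17 risk-factor reduction off a 3.20 baseline, and $460 of claimed savings). The "ceiling" construction is our rough framing of the argument, not HFC's methodology or anyone's official formula.

```python
# Back-of-the-envelope versions of the plausibility checks posed above.
# All inputs appear in this post; the framing and names are ours.

# --- Check 1: how much of total spending could a 0.17 risk-factor reduction touch? ---
hospital_share          = 0.50   # hospital expense is at most 50% of total spending
wellness_sensitive      = 0.20   # roughly 20% of inpatient events are wellness-sensitive
risk_factors_baseline   = 3.20
risk_factors_followup   = 3.03
relative_risk_reduction = (risk_factors_baseline - risk_factors_followup) / risk_factors_baseline  # ~5.3%

savings_ceiling = hospital_share * wellness_sensitive * relative_risk_reduction
print(f"Generous ceiling on savings: {savings_ceiling:.2%} of spending")
print("Claimed: 24% of spending vs. non-participants")

# --- Check 2: the Wishful Thinking Multiplier ---
claimed_savings_per_person = 460    # dollars per year
risk_factor_reduction      = 0.17   # net risk factors reduced per person

wishful_thinking_multiplier = claimed_savings_per_person / risk_factor_reduction
print(f"Wishful Thinking Multiplier: {wishful_thinking_multiplier:,.0f}")

# Why Nobody Believes the Numbers pegs a risk reduction of this size at roughly
# $8/person/year of savings; the post rounds the resulting overstatement to "50x".
book_estimate = 8
print(f"Claim vs. book estimate: {claimed_savings_per_person / book_estimate:.0f}x")
```

Even with generous rounding, the ceiling comes out to well under 1% of spending, versus the 24% claimed.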
Update December 2014: Ron Goetzel admits HFC lied. (See #5 and #6.) The slide was "unfortunately mislabeled," using the passive voice, as though it was an act of God ("the game was rained out") or else perhaps the North Koreans. The geniuses at HFC apparently didn't notice this "unfortunate mislabeling" for 4 years, despite its having been pointed out to them many times before this.
Health Fitness Corp wins a Koop award for curing non-existent cancers in Nebraska
C. Everett Koop National Health Award Committee,
Wellness Council of America and Health Fitness Corp.
Short Summary of Award:
The C. Everett Koop award committee’s mission is:
“…to seek out, evaluate, promote and distribute programs with demonstrated effectiveness in influencing personal health habits and the cost effective use of health care services. These programs have the objectives of
- Providing appropriate quality care
- Sharply reducing the alarming rate of health care inflation, by holding down unnecessary expenditures.”
Materials Being Reviewed:
The brochure in question describing the Nebraska program is downloadable from the WELCOA website.
Case Study of Award Winner for 2012: Health Fitness Corporation and Nebraska
Summary of key figures and outcomes:
Alleged cancer outcomes include the following:
Risk reduction outcomes include the following:
Questions for C. Everett Koop Award Committee:
I: Alleged Cancer Outcomes
Were you troubled by the program sponsors’ decision to waive all age-related colon cancer screening guidelines established by the government, and send out 140,000 flyers, at taxpayer expense, featuring a beautiful woman much too young to have a screening colonoscopy?
ANS: Refused to answer
How come, when the program reported that 514 of the 5000 (or fewer) people screened had colon cancer (in addition to the ones who would have been screened anyway), none of the Committee members with health informatics backgrounds from Truven Health Analytics and Mercer and Milliman (and from Wellsteps and Staywell, both of whose programs are also highlighted) were concerned that this alleged 11% colon cancer rate was at least 100 times greater than Love Canal’s?
ANS: Refused to answer
When Health Fitness Corporation admitted lying and reversed their story from making “life saving, cost-saving catches” of “early stage [colon] cancer” to revealing that those 514 people didn’t have cancer, why did the Koop Committee re-endorse what would appear to be outright data falsification, instead of rescinding the award?
ANS: Refused to answer
Even if the committee is allowing Health Fitness Corporation to keep its award and not even apologize, why does this claim of “life-saving, cost-saving catches” still appear on the WELCOA website even though the lie has been admitted?
ANS: Refused to answer
Wouldn’t the fact that the perpetrator of this acknowledged lie is also a sponsor of this Koop award that its own customers have won three times (including this incident) create the perception of a conflict of interest?
ANS: Refused to answer
Does anyone on the Committee think if Dr. Koop were still alive that he would endorse your position on data falsification of cancer victims?
ANS: Refused to answer
WELCOA’s website said it was founded by someone who appears to be the inventor of the self-serve all-you-can-eat restaurant. Despite his well-deserved reputation for integrity, did he endorse data falsification of cancer victims even after the perpetrators admitted it?
ANS: Refused to answer (but did change the spelling)
II: Risk Reduction Outcomes
How do you reconcile the claimed savings figure exceeding $4 million with your own chart above showing that only 161 active participants (3.1%) reduced a risk factor? (That chart of course doesn't include dropouts and non-participants, whose risk factors may have increased.)
ANS: Refused to answer
Dividing the total savings by 161 yields more than $20,000/person in savings. Wouldn't that $20,000+ for each risk factor avoided imply that all 161 would have had a heart attack even though the entire eligible population only had about 30 heart attacks the previous year, while the participating population would have had about 7? (See the arithmetic sketch following these questions.)
ANS: Refused to answer
How do you reconcile your statement that 40% of the population had previously undiagnosed high blood pressure or high cholesterol with your other statement that “the total number of prescription scripts [sic] filled within the Wellness Plan reduced [sic] 3% last year,” despite your reducing or waiving the copays? Shouldn’t prescriptions have gone up, if indeed 40% more people were at risk?
ANS: Refused to answer
How can you attribute the 3% reduction in prescriptions to "improved lifestyles" when your own graph shows only 161 people improved their lifestyles enough to reduce a risk factor? What happened to the thousands who were diagnosed but neither got medicated nor improved their lifestyles?
ANS: Refused to answer
How do you reconcile that same finding – that 40% had high blood pressure or cholesterol — with that same graph, showing that almost three-quarters of the population was low-risk?
ANS: Refused to answer
How do you reconcile the brochure’s claim that the “majority of employees touted how the program has improved their lives” with the brochure’s own admission that only a minority of employees (42%) even bothered to be screened once and only 25% twice despite the four-figure financial incentive?
ANS: Refused to answer
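For completeness, here is the arithmetic behind the $20,000-per-person question, using only the figures cited in this post (savings exceeding $4 million, 161 risk-factor reducers comprising 3.1% of active participants, roughly 30 heart attacks in the eligible population and about 7 expected among participants). The variable names and framing are ours, not the program's.

```python
# The arithmetic behind the risk-reduction questions above, using only figures
# cited in this post. Names and framing are illustrative only.

claimed_savings       = 4_000_000   # "exceeding $4 million"
risk_factor_reducers  = 161         # participants who reduced a risk factor
share_of_participants = 0.031       # those 161 were 3.1% of active participants

implied_savings_per_reducer = claimed_savings / risk_factor_reducers
print(f"Implied savings per risk factor reduced: ${implied_savings_per_reducer:,.0f}")

implied_active_participants = risk_factor_reducers / share_of_participants
print(f"Implied active participants: ~{implied_active_participants:,.0f}")

# For the savings to be real, each of the 161 would have had to avoid an event
# on the order of a heart attack -- yet the whole eligible population had only
# about 30 heart attacks the prior year, and participants would account for ~7.
heart_attacks_eligible     = 30
heart_attacks_participants = 7
print(f"Avoided events implied: {risk_factor_reducers} "
      f"vs. ~{heart_attacks_participants} expected among participants")
```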
Follow-Up Response
Not-for-attribution response received August 1, stating that the reason the Committee let them keep their award was not that they were a sponsor but rather that they did not make the life-saving claim on their application. (They did make all the other invalid claims.) Because they didn't make the claim on the application, they are not in violation of the Committee's ethical standards by making it in other venues.
Our reaction:
So it is OK if a ballplayer admits using steroids as long as he didn’t happen to test positive?
Follow-Up Response
September 2014: Nebraska listed as a “best practice program” by Ron Goetzel
Our Reaction:
Doesn’t this listing contradict your initial excuse — that you forgot to ask them about whether they made up their cancer statistics during your due diligence — because now you know about that lie and all the other lies in their outcomes measurement…and yet you still call them a best-practice program?