In my rebuttal, I was finally able to introduce ethics into the debate. Along with arithmetic and facts, ethics is one of the three categories in which I have the greatest advantage over Ron. Quizzify and I are in the “integrity segment” of the market.
As is always the case in this debate, and in general, I don’t need to cite my own data. His data is so obviously wrong (the New York Times had a few choice words for it, like “crap”) that I merely point out that his own data sets reveal wellness’s failures when read by an actual smart person, as opposed to, say, a member of the Koop Award committee.
Citing his own data means we don’t get into he-said-she-said arguments over the validity of the data sets. We can both agree to use his data.
I eviscerate Ron’s old saw that 50% of people have a chronic disease. As I always do when someone repeats that myth, I ask attendees to raise their hands if they have a chronic disease, and maybe 3% of hands go up. This 50%-of-people-have-chronic-disease claim is the biggest urban legend in healthcare, as we have noted. It’s somewhat true in the Medicare population, but we are talking about the employed population today. In the employed population, it is true only if the definition of “chronic disease” is expanded to cover, for example, back pain, tooth decay, dandruff, and Ring Around the Collar.
I got some good laughs in this hand-raising exercise (“To be compliant with HIPAA, close your eyes”).
I had anticipated that he would cite his Procter & Gamble study from a quarter-century ago (!) in defense of wellness. So I had checked with P&G, which is a client of mine, and no one there has any idea what he was talking about.
I get Ron to admit to doctoring the evidence on the Koop Award site and then lying about it. He not only doctored the original but said he didn’t doctor the original. The back story: showing their typical level of competence at reading graphs, Ron’s committee accidentally gave an award to a program in which participants had outperformed non-participants for two years (2004 and 2005 below) before the program even started. So he changed the x-axis to make it look like the program had been in place during that period. Here is the “before” graph:
Here is the x-axis after he doctored it to remove the evidence that the whole thing was invalid:
As he was doctoring the original application, he created the fiction that the original “application was online and subject to review.”
Even if the study had started in 2004 instead of 2006, the risk profile improved by only 0.17 on a scale of 5 (about 3%) by 2008, making the massive savings claimed for 2008 completely impossible. (Until 2016, when they shed all pretense of integrity in order to give an award to their Wellsteps colleagues, the Koop Committee trademark was attributing massive savings to trivial reductions in risk.)
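For readers who want to check the arithmetic behind that 3% figure, here is a minimal back-of-the-envelope sketch. The assumption (mine, for illustration) is simply that a 0.17-point improvement on a 5-point risk scale is expressed as a fraction of the full scale:

```python
# Back-of-the-envelope check of the risk-improvement figure cited above.
# Assumption: the "risk profile" is scored on a 5-point scale, so a
# 0.17-point improvement is 0.17 / 5 of the full scale.
improvement = 0.17
scale = 5
relative_change = improvement / scale
print(f"{relative_change:.1%}")  # prints "3.4%", i.e. roughly 3%
```

A roughly 3% shift in measured risk is the basis for the claim that the massive savings attributed to the program are implausible.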
I point out that Aetna had just accidentally confirmed what should be obvious: participants-vs-non-participants is a totally invalid study design. They were trying to show the opposite, of course. This is classic wellness and confirms our mantra: “In wellness, you don’t need to challenge the data to invalidate it. You merely have to read the data. It will invalidate itself.” The most respected member of the journal’s editorial board, Dr. Nortin Hadler, apologized and said Aetna’s article never should have passed peer review. So much for the peer review process at Ron’s favorite journal.
I also observed that Ron had himself admitted the participants-vs-non-participants study design was invalid:
Ron responds to my observation that he had “doctored the information.” He admits that the chart in question was originally “mislabeled” on the x-axis; in fact, the label was unmistakably clear. Allowing Health Fitness Corporation (HFC) to win an award for this non-attributable, invalid result either reflects the total inability of the vendors (Wellsteps, Staywell, etc.) and actuarial consultants (Mercer, Milliman, etc.) on this committee to recognize a screamingly obvious invalidity when it’s staring at them, or was a conscious decision to give the award to a Koop Committee sponsor. Or maybe both. Every year the award has gone to someone connected with the Committee.
Ron at least gives me credit for being “sharp-eyed,” but, honestly, anyone who’s taken my course in Critical Outcomes Report Analysis would have seen that the x-axis clearly shows the program didn’t start until two years after the two groups were separated.
I sprang the trap I had set earlier, when Ron said I praised him in my book. I had reproduced that HFC graph in Why Nobody Believes the Numbers on the page following the one where I praised his own work, so I knew he would see the graph in all its hilariously invalid glory. That juxtaposition was a test to see whether he would retract the graph once the obvious invalidity was brought to his attention. Not only did he not retract it then, but he called that program (listed below under “Eastman Chemical,” HFC’s customer) a “best practice” for two years afterwards, knowing full well the key graph showed no program impact.
I ask Ron if he has doctored any other Koop Award application since my exposé. He is completely silent. In fact, he had doctored another original application that very morning, in preparation for the debate, not realizing I had saved a screenshot of the original original. The original and the doctored versions of the Nebraska application are reproduced below. But he was “saved by the bell”: the moderator jumped in and started asking the audience for questions, and I frankly forgot to bring this one up later. Compare the last line of each passage below. The first was written before my exposé of Nebraska. Ron had claimed the Koop Committee had no knowledge of Health Fitness Corporation’s lie about finding all those cancer victims in the Nebraska state employee population. However, it was right there in the application…until Ron doctored the application before the debate.
While Ron has conceded many points so far in this debate, I have yet to concede that he and his cronies do anything well. So here’s my first concession: he excels at tampering with evidence.