They Said What?


Monthly Archives: May 2015

News flash: The Wellness Ignorati are ignoring facts for a change

No more deception

There has to be a limit, even to deception

The Wellness Ignorati got their name by ignoring facts. Facts, of course, are the wellness industry’s worst nightmare. They ignore them in order to avoid creating news cycles that might reach human resources departments despite the best efforts of their consultants and vendors to shield them from actual information.

And they’re at it again.

First, Atul Gawande wrote a scathing article in the New Yorker earlier this month about massive overscreening. As Mitch Collins noted in The Health Care Blog, not a peep in response from the perpetrators of those hyperdiagnostic jihads. Nor has there been any response to Mitch’s article itself. Literally, no one defends wellness industry practices. And yet somehow all the laws are on their side.

Speaking of which, Mitch mentioned the famous Nebraska debacle, in which the vendor, Health Fitness Corporation, lied about making “life-saving catches” of “early-stage cancers.”  Since HFC was a sponsor of Ron Goetzel’s Koop Award, Ron naturally gave them that prize for these lies.

However, we’ve thrown down the gauntlet. HFC, come on out and fight. Give us your side of the story. How was this not a deliberate lie designed to score political points in Nebraska? If it was a mistake, why didn’t you change it and apologize? How do those 514 cancer non-victims feel? And Mr. Goetzel, why have you not only kept defending HFC but even upped the ante? They’ve been promoted from “best practice” to “exemplar” in your most recent webinar.

[Quizzify logo]

As long as wellness vendors are silent, we won’t shut up.

Speaking of non-responses from Mr. Goetzel, where is the correction of, or explanation for, the massive mistake in his most recent wellness program evaluation? Readers of his blog have been misled into thinking Graco’s cost per employee is $2280/year, when in reality the cost per employee contract holder — according to Mr. Goetzel’s own blog — is about $11,100, like almost every other company. (That figure includes spouses and dependents, but any reasonable dependent ratio would still yield a typical $5000 to $6000 per employee rather than $2280.) I know he knows about this mistake because I submitted a comment to his blog, which, shockingly, hasn’t been posted.

So, please, could someone actually respond for a change, even if it’s just to accuse us of bullying?

Ron Goetzel reports on Graco, the company with the country’s most expensive spouses

Talk about “burying the lead.”

Need we say more?

Ron Goetzel just reported on a company called Graco, where employees were subjected to a “pry-poke-prod-and-punish” wellness program.  These are line employees in an “old economy” company–exactly the type of company where healthcare spending would be high.  And it is high.  According to the article, Graco spent $29,000,000 on healthcare for 2600 employees. That’s about $11,100 apiece, roughly what you’d expect. That figure holds with or without a wellness program since, as Ron’s recent HERO report noted, wellness programs have no positive impact on spending.

Yet later on in the article he writes:

[Screenshot: excerpt from Mr. Goetzel’s Graco write-up]

 

In the immortal words of the great philosopher Rick Perry, oops.

$190 per member per month (and we assume that he meant just for employees, not members) is $2280/year/employee.  Here are the possibilities:

(1) Graco has the country’s most expensive spouses, costing about $18,000/year (to bring the average spend to $11,100 per employee contract holder per year), but hasn’t noticed

(2) Graco has some magical special sauce, which Ron failed to tell us about (hence “buried the lead”), that kept costs way below average even before the wellness program started

(3) Ron Goetzel made yet another rookie mistake in his math, thus invalidating the entire study, just like most of his Koop Awards.

You can rule out that this $190 had anything to do with the wellness program.  Smoking rates (the only thing that really affects spending) remained unchanged, and obesity only fell a few points.   And a company can’t save money by overscreening people, paying for their drugs, and making them get unnecessary checkups.  In any event, it wasn’t $190/month.  It was $11,100/year.

$2280 vs. $11,100…  We look forward to Mr. Goetzel’s explanation of how both these figures could be true, since it appears they are completely at odds with each other.  In the immortal words of the great philosophers Dire Straits, if two men say they’re Jesus, one of them must be wrong.
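
For anyone who wants to check the arithmetic themselves, here is a minimal sketch. The $29,000,000 and 2,600 figures come from the article; treating the $190 PMPM as applying only to employees is our assumption:

# Reconciling the two Graco figures (assumption: $190 PMPM applies to employees only)
annual_spend = 29_000_000          # healthcare spend reported in the article
employees = 2_600                  # employee count reported in the article
pmpm_claimed = 190                 # "per member per month" figure from the write-up

cost_per_employee = annual_spend / employees     # roughly $11,154 per employee per year
implied_annual = pmpm_claimed * 12               # $2,280 per year

print(f"Spend / employees: ${cost_per_employee:,.0f}/year")
print(f"$190 PMPM x 12:    ${implied_annual:,}/year")
print(f"Unexplained gap:   ${cost_per_employee - implied_annual:,.0f}/employee/year")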

And once again, the mantra of Surviving Workplace Wellness holds true:  In wellness, you don’t have to challenge the data to invalidate it.  You simply have to read the data.  It will invalidate itself.

[Quizzify logo]

Here’s a Quizzify number that adds up: a 100% guarantee you’ll save money

We will no doubt be accused of “bullying” him for invalidating this study, which he obviously spent a lot of well-compensated time on.  So just to show our good intentions, we will offer him our course and certification in Critical Outcomes Report Analysis gratis.  It seems he could learn a lot from it and we look forward to announcing his successful completion.


 

Update: Ron apparently “forgot” to include the actual data in his writeup, which showed that, um, how to put this tactfully, his entire conclusion is wrong. Looks like kids (who had no access to wellness) trended better than the adults who did have access. We added this as the second installment.


Measuring Wellness-Sensitive Medical Events: The Grand Finale of the HERO Analysis (part 2 of 3)

The first post in this three-part conclusion to our analysis of the HERO report is here.

Part Two: How to Do WSME plausibility-testing

This is the installment where we actually offer useful guidance.

All the tools and guidance are available from our site (for a fee in some cases — but no fee for employers requesting this information for their own populations), but here is the simple process (minus a few asterisks):

  • Make a list of the key ICD-9 codes. Since you would do smoking cessation with or without a wellness program, we recommend taking out COPD and chronic bronchitis.  Likewise, if you’re trying to save money, take out asthma, where the “number needed to treat” and the cost of medications mean spending about $10 to save $1.00 in ER and inpatient expense.
  • Count the number of events in the codes. We use an Excel sheet to tally them, as seen here.  We combine ER and IP for tallying event rates for various reasons, but separate them out when calculating savings. Be sure not to double-count ER visits leading to admits, professional fees billed separately, etc.

[Screenshot: outcomes measurement tally spreadsheet]

  • Apply your covered population (total covered person-months divided by 12) to create an annual rate per 1000 (a sketch of this calculation appears after this list).
  • Do it for several years going back to discern a trend and – if you don’t use our database – get together with some of your counterparts at other similar organizations to create a benchmark.
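
For readers who want the tally spelled out, here is a minimal sketch in Python of the calculation described above. The ICD-9 prefixes and claim-file field names are illustrative assumptions, not a definitive code list:

# Minimal WSME event-rate sketch (illustrative ICD-9 prefixes and field names)
# Assumes a claims extract with one de-duplicated row per ER visit or inpatient stay.

WSME_ICD9_PREFIXES = ("410", "250", "428")   # e.g., heart attack, diabetes, CHF -- illustrative only

def wsme_rate_per_1000(claims, covered_person_months):
    """Wellness-sensitive events per 1,000 covered lives per year."""
    events = sum(
        1 for c in claims
        if c["setting"] in ("ER", "IP")
        and c["primary_dx"].startswith(WSME_ICD9_PREFIXES)
    )
    covered_lives = covered_person_months / 12.0   # person-months -> average covered lives
    return 1000.0 * events / covered_lives

# Example: 45 qualifying events across 60,000 member-months (about 5,000 covered lives)
example_claims = [{"primary_dx": "41001", "setting": "IP"}] * 45
print(wsme_rate_per_1000(example_claims, 60_000))   # -> 9.0 events per 1,000 per year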

You should end up with a result that looks like this one (though likely not as steep), from Providence Health Plan’s award from the Validation Institute.

[Chart: Providence Health Plan Validation Institute write-up]

Where the HERO write-up is wrong is in suggesting that this event-rate methodology be used to check other methodologies. Sure, but if this methodology shows no noticeable event reduction and another methodology “shows savings,” which one is wrong?  So why use the other methodologies at all if indeed you’re just going to have to keep paying your consultants to redo them to match this one?

Calculating Savings and ROI

Both savings and ROI are remarkably easy to calculate. For savings, just take a “standard cost” for each event and multiply it by the number of events that you believe to have been “avoided.”

For ROI, just divide that savings figure by the cost of the program (inclusive of all the costs to run it, including all vendor fees and incentives).
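
Here is a minimal sketch of that arithmetic; the unit costs, event counts, and program cost are made up purely for illustration:

# Savings and ROI from "avoided" events (all numbers illustrative)
standard_cost = {"IP": 20_000, "ER": 1_500}   # assumed standard cost per avoided event
avoided_events = {"IP": 3, "ER": 10}          # events you believe were avoided

savings = sum(standard_cost[k] * avoided_events[k] for k in avoided_events)   # $75,000

program_cost = 250_000   # ALL program costs: vendor fees, incentives, screening, staff time
roi = savings / program_cost

print(f"Savings: ${savings:,}   ROI: {roi:.2f}-to-1")   # -> Savings: $75,000   ROI: 0.30-to-1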

Cautions for WSMEs

First, because there are so few WSMEs to tally (as noted in the government reports analyzed in the first installment), you need a largish population to identify enough of them to discern a consistent trend.  Otherwise there will be a lot of “bouncing.” Two ways that smaller populations can smooth the bouncing are:

  1. Look at the total number of events rather than try to track individual events, and
  2. Combine multiple years in both baseline and intervention periods, as sketched below. So if you started in 2013, compare 2013+2014 to 2011+2012, rather than, or in addition to, drawing a graph like Providence’s above.
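
A minimal sketch of that pooling, with made-up event counts and member-months:

# Smoothing year-to-year "bounce" by pooling baseline and intervention years (illustrative numbers)
events = {2011: 22, 2012: 31, 2013: 24, 2014: 20}                        # WSME counts by year
member_months = {2011: 58_000, 2012: 59_000, 2013: 60_000, 2014: 61_000}

def pooled_rate(years):
    ev = sum(events[y] for y in years)
    lives = sum(member_months[y] for y in years) / 12.0   # person-months -> covered lives
    return 1000.0 * ev / lives

print(f"Baseline (2011+2012):     {pooled_rate([2011, 2012]):.1f} per 1,000 per year")
print(f"Intervention (2013+2014): {pooled_rate([2013, 2014]):.1f} per 1,000 per year")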

Second, even if you get a decline, you still need to check that decline against improvements in compliance or risk, to make sure that the decline in WSMEs was not due to good fortune but rather to a significant improvement in workforce health. (Also, consider workforce “disruptions.” Did your company merge with another?  Offer early retirement packages or buy-outs, etc?) This is a very important check-and-balance because the wellness industry is notorious for attributing tremendous declines in cost to small improvements in risk factors or compliance. (Those programs usually win Koop Awards.)

Third, you need to compare to national averages. (We can do this for you, but you can also puzzle this out from government data, if you have the time or expertise.) As a group, the national average benchmark cardiometabolic WSME rates tend to be stable, except for cardiac events, which have been falling dramatically for years. In the <65 population, cardiac events are no longer common; heart attacks themselves are 1-in-800 shots. (That low rate raises the much more basic question of why anyone would do “pry, poke, prod and punish” wellness in the first place, especially since HERO now agrees with us that wellness loses money.)

Hence, if you read Installment #4 of this series, you’ll see that HERO is claiming the WSME methodology shows savings in cardiac events…but they forgot to adjust for the benchmark decline. That adjustment wipes out all their savings.

Fourth, don’t double-count. You can only measure WSMEs against the combined spending for wellness and disease management. Anyone who tells you that you can measure each intervention separately is wrong. If your event rates fall, was it due to a small intervention on a large number of people (wellness) or a focused intervention on a small number of people (disease management)?

Fifth, your consultants will try to talk you out of a WSME analysis, which itself should be reason enough to do it. It will show exactly what the HERO report and government data show – that WSMEs typically represent a trivial amount of your spending, that they don’t decline much except in cardiac (which comes down anyway), and that the savings doesn’t remotely approach the cost of the program.

Plus, this analysis doesn’t cost much — another drawback from the consulting firm’s viewpoint.

Two classic vendor sleight-of-hand tricks

The vendor (and probably your consultants, especially if they “selected” the vendor, a la Staywell and Mercer) will try to wriggle out of this valid methodology two different ways. First, they will say this analysis should only be done on participants. That, of course, is exactly the wrong answer. Participants always show savings. The whole point of a plausibility test is to see if the event rate decline among participants was simply self-selection and regression to the mean, or whether it was real. If the latter, the real improvement amongst participants will drive the entire WSME rate down.

Second, they will want to “adjust” for the increase in prevalence. Do not allow this (unless you significantly changed your workforce demographics). “Prevalence” doesn’t mean anything. If you look hard enough, you can “find” people with disease. For instance, the Koop Award-winning Nebraska state program vendor “found” 40% of state employees were at high risk. Only 3% of them subsequently went on drugs, and only 3% reduced their risk factors…and yet somehow a huge amount of money was saved.

And the database to which you compare your rates doesn’t “adjust” for rising prevalence either, so the comparison is valid. The rate is the rate, period.

What happens when you use this methodology

You may show some very modest savings, like the textbook example on Page 23 of the HERO guidelines shows – 99 cents PMPM.

hero page 23 99 cents

HERO admits this doesn’t cover the $1.50 of vendor fees noted on Page 15, let alone all the other costs of wellness.

HERO list of costs

[Quizzify logo]

Quizzify helps employees master facts, the best  tool for reducing unnecessary care

 

And you can forget about covering the average $693 in incentives with your 99-cents-a-month savings. Even Quizzify, the only population health vendor to guarantee savings, doesn’t claim it can cover incentives too.

There simply aren’t enough WSMEs, and they aren’t sufficiently reducible to generate meaningful savings. Your costs aren’t concentrated in “unhealthy” employees who should be eating more broccoli. Your costs are all over the map. The mantra of our universally acclaimed award-winning outcomes measurement trade bestseller Why Nobody Believes the Numbers:

Everything in life has an 80-20 rule.  In healthcare, the 80-20 rule is that 80% of the time, there is no 80-20 rule.

That’s why focusing on one area, even one that’s all the rage, won’t move the needle. And that’s why Quizzify’s “Jeopardy®-meets-health benefit education-meets-Comedy Central” philosophy is to educate employees and change employee behavior on everything, not just broccoli.


 

In our third and final installment, we’ll give you the chance to download, gratis, a very user-friendly tool that you can use to analyze your own wellness program’s ROI.

In the immortal words of the great philosopher Dizzy Dean, don’t fail to miss it.

Measuring Wellness-Sensitive Medical Events: The Grand Finale of the HERO Analysis

The eighth in the series deconstructing the HERO Outcomes Guidelines, covering Page 14. The full series can be found here. This installment in particular should be read in conjunction with installment #4. This Grand Finale will be presented in 3 parts…with a downloadable tool to help you calculate your wellness program savings as part 3.

PART ONE: HERO ACCEPTS OUR METHODOLOGY

In the stock market, no one is as valuable as the person who’s always right, except the person who’s always wrong. Therefore, until now we have greatly appreciated the opportunity HERO’s report has created for us to explain how to measure outcomes correctly.

So imagine our disappointment when one of their methodologies, the sixth of the seven listed, turned out to actually be valid. No surprise — this is the methodology I invented. Also no surprise, given the industry’s standards for integrity: they didn’t acknowledge that particular factoid anywhere in their 88 pages. (And yet they accuse us of being impolite.) Here is the screen shot.

[Screenshot: HERO methodology #6]

The philosophy of #6 is quite straightforward. If you were introducing a flu vaccine program, you’d measure the reduction in number of people who got the flu. If you offered a new program for conservative treatment of meniscal tears, you’d measure the reduction in the number of people who had meniscal surgery. That’s the way experimentation works. You hypothesize an outcome that the intervention should create…and then you measure that outcome to see if the experiment worked.

Except, of course, in population health, where any improvement in anything (cost, trend, utilization) gets attributed to any wellness program that happened to be in place. The masters of this would be Mercer. Mercer once “found” massive, mathematically impossible savings for North Carolina Medicaid’s medical home in a cohort that, as luck would have it, wasn’t even eligible for the medical home. And one wellness industry stalwart, Larry Chapman, says the simple act of completing a health risk assessment can reduce total healthcare spending by 50%, even when the information in the HRA is wrong, as is often the case.

And did you ever notice that when a company switches to a high-deductible health plan and adds some needle-poking, they attribute the reduction in spending to the needle-poking, not the fact that everyone in their company suddenly gets socked with a bigger annual deductible?

Enter wellness-sensitive medical event rates (WSMEs). This is the only methodology that tallies hospitalizations for conditions targeted by a wellness program – statistically avoided heart attacks, etc. This is the only one of the seven HERO methodologies that would be acceptable to legitimate researchers. Hence its use both in Health Affairs and by the GE-Intel Validation Institute. The former is the most respected health policy publication and the latter is the most (the only) respected outcomes evaluation organization.  Further evidence of its validity is that there is no mention of it in the leading wellness promotional publication, the American Journal of Health Promotion, perhaps because – as HERO has attested – it doesn’t show savings.

History of event rate-based plausibility testing

Even though it isn’t attributed to me in the HERO guidebook, I invented this methodology in 2007. This is incontrovertible. No one else had anything remotely close to it. Unlike the automobile, TV, the computer, etc., this was not one of a series of incremental improvements to or the amalgamation of existing technologies.

And none of the other invention clichés apply either. The Chinese didn’t invent it in 1000 BC. Leonardo DaVinci didn’t sketch it in 1541. The Germans and the Allies weren’t racing to develop it at the end of World War II. By contrast, I’ve been presenting on it and using it for validation since then (meaning 2007). It figured prominently in Why Nobody Believes the Numbers too, before being highlighted in Health Affairs and the Validation Institute. For a modest fee, the detailed how-to can be downloaded from our website, though a Reader’s Digest version appears below.

While a number of employers and health plans use it now, several health plans – more than coincidentally three of the highest-rated in the country (Harvard Pilgrim, Blue Cross of Massachusetts, and Providence Health Plans) – have been measuring hospitalizations for conditions targeted by wellness/DM programs since the methodology’s inception.

So, needless to say, I was surprised and totally flattered that the 88-page HERO Report contained no attribution to me as the inventor of the WSME plausibility test. As mentioned previously, the strategy of the Wellness Ignorati is to ignore facts (hence their moniker), especially including my very existence. That strategy reduces the likelihood that one of their customers might click through to the site. They aren’t much for our recommending that companies learn our helpful insights, which they call “bullying.”

The wellness industry has had a love-hate flip-flopping relationship with WSME measurement.

[Quizzify logo]

Click on the Q to try our demo

First, until 2013, the entire Wellness Ignorati, quite in character, ignored this methodology, which is a powerful testament to its validity.

Then, in 2013, that strategy took a body blow: the exact methodology was used in Health Affairs. You may recall the same thing happened with another epiphany of ours — the exposé of the fabricated results behind Health Fitness Corporation’s invalid Koop Award. The Wellness Ignorati completely ignored our whistle-blowing exposé until it appeared in Health Affairs, when they were forced to admit we were right and the whole thing was made up, or, to use Ron Goetzel’s passive-voice phrase, “was unfortunately mislabeled” for four years.

Just as Ron Goetzel — the leader of the Wellness Ignorati — caved when the Health Affairs light was shined on the Koop-HFC debacle, he caved on WSMEs when the Health Affairs light was shined on them. In this case, “caving” meant acknowledging that this methodology existed. He reviewed the aforementioned Health Affairs article that specifically analyzed WSMEs — hospitalizations for conditions targeted by the wellness program. In September 2014, he wrote:

[Screenshot: Goetzel’s September 2014 quote on the Barnes article]

But then he un-caved. Once the Health Affairs storm had passed, he invoked the Sergeant Schultz defense. In December 2014 he said:

[Screenshot: Goetzel’s December 2014 quote on WSMEs]

He may have just forgotten in December that he reviewed them in September. But in March he and his colleagues re-remembered wellness-sensitive event rates, and put them right in the HERO report, for which we are immensely grateful.

Hopefully they won’t re-forget in June. (Their memory appears to correspond with the change of seasons.) Hopefully instead, to paraphrase the immortal words of the great philosopher George Gershwin, our methodology is here to stay.

How do I feel about HERO rewriting history so that I am no longer the inventor of this methodology? Honestly, having firmly staked out a niche in the small but growing “integrity segment” of the wellness industry, I prefer them staying out of that niche as long as possible. So I’m glad they show no interest in facts.

In part two, which we will post in a few days, we will explain how we do WSME plausibility testing and why it’s the essential method for assessing the impact of your wellness and disease management efforts.

Your Employer’s Wellness Program Can Demand Your DNA | Al Lewis

Al’s excellent column in The Huffington Post, just published this evening.

 

[Quizzify logo]

Quizzify will never ask for an employee’s DNA

While Aetna itself is not storing your DNA, the Aetna partner company actually collecting this information, a Canadian firm called Newtopia, says that your DNA will be safe with them in Ontario. “Safe” may be defined as “unsafe,” though, because Newtopia’s privacy policy, linked in their footer menu, is rather vague, listing only “some of” the ways they can re-use this data.

via Your Employer’s Wellness Program Can Demand Your DNA | Al Lewis.

Greatest Hits Collection: Staywell


Collaborating with Staywell may not be enhancing the American Heart Association’s reputation.

Occasionally we have to attend to our Day Jobs and can’t post regularly. Fortunately, we have access to a bolus of posts from mid-2014, the posts that went up on this site initially. There were too many stories to highlight, so we decided to inventory them, in order to fill in gaps when we didn’t have time for new posts.

High on that list would be Staywell. First was their collaboration with Mercer, in which they agreed to tell British Petroleum that they found $17,000/person savings.  They knew those savings were mathematically impossible since the average person only spends $6000/year.  They also forgot that they themselves had said it was only possible to save $100/person.

Following on the heels of that was a collaboration with the American Heart Association to create screening guidelines that (surprise) call for much more screening than the United States Preventive Services Task Force recommends.

In both cases, we welcomed — and in the latter case offered a $1000 honorarium for — responses to our questions, but our good-faith offer was met with silence.

Also, in both articles Staywell continued to cite Katherine Baicker’s study that she herself no longer defends, with the added wrinkle of referring to it as “recent” in the hopes that no one looks at the endnotes and sees that it was submitted for publication in 2009 and covered studies from a decade before that. With any luck they’ll have enough integrity to stop citing that study now that RAND has invalidated it. A good rule of thumb is that anyone who cites Baicker’s study without noting that no one (including Professor Baicker) believes that 3.27-to-1 ROI any more is prima facie deliberately misleading people. It is no longer credible to say one doesn’t know that her study has been shown to be hooey and that she is no longer defending it (and actually says she has no more interest in wellness).

[Quizzify logo]

Our health benefit education will never break your heart

We recommend click-throughs to both studies. Each raises questions that Staywell refused to answer, after initial conversations which confirmed they knew about these issues. You’ll also see how the American Heart Association was shocked, shocked, that anyone would question their integrity (perhaps they haven’t read The Big Fat Surprise) but then let it go, rather than create a news cycle.

Staywell also helped give British Petroleum a Koop Award.  Nice to be on the award committee AND be an award sponsor–makes it easy to give your customers awards.  With one or two exceptions, we can’t remember the last time the Koop Award went to a company with no connection to a sponsor or committee member. Perhaps someone could let us know?

Newtopia Wants Your DNA for Aetna’s Wellness Program

We’re sure nothing could go wrong when Aetna and a wellness vendor want your DNA

We often say on this site that “lying is part of wellness vendor DNA.”  However, we didn’t mean that literally–until Newtopia came along.  You see, Newtopia, like most vendors, lies — but they also actually collect employee DNA.

[Quizzify logo]

We think your DNA belongs to you

You might observe: “Wow, they collect your DNA? What an incredibly intrusive wellness program! This sounds like something Aetna would do.” This would be a reference to Dr. Aetna Will See You Now and its sequel. Yes, Aetna is Newtopia’s biggest fan, but the particular numbers below are made up by Newtopia, not Aetna. In all fairness to Newtopia, many of what could be termed “lies” could charitably be characterized as “very misleading but technically accurate statements.” So ironically it would be us who would be lying if we said these were all lies. Hence we will call them “gaffes,” a category which includes lies but also includes situations in which the truth shocks the conscience as much as or more than a lie would.

Gaffe #1: Engagement

To Newtopia’s credit (not unlike HERO, which did the same thing in its report), they put the invalidating information about their engagement right on their website.  This again proves our mantra from Surviving Workplace Wellness that: “In wellness, you don’t have to challenge the data to invalidate it.  You merely have to read the data.  It will invalidate itself.”

Specifically, for some reason Newtopia provided a link to an Associated Press story, which they called a “profile,” but obviously we wouldn’t be linking to a puff piece “profiling” them.  They Said What? only links to real reporting, which is invariably unflattering to wellness vendors and which would never be considered “profiles.”  Puff piece or real reporting?  You make the call — we can’t both be right.

[Screenshot: Newtopia’s link to the AP “profile”]

Among other things (and there are plenty of other things, which we will get to), the AP reported that of the 130 employees at the profiled customer organization, Jackson Laboratory, who were invited into the program, only 15% remained one year later:

[Screenshot: AP statistics on Newtopia participation at Jackson Laboratory]

However, the website itself proclaims:

[Screenshot: Newtopia’s “unheard of engagement” claim]

Maybe “unheard of engagement” technically isn’t a lie. Maybe they meant “unheard of” in that an engagement rate as low as 15% would be unheard-of.  We doubt that, because elsewhere they cite their use of “engagement science,” whatever that is. (Funny thing: Quizzify, the only company to literally financially guarantee increased engagement, doesn’t use “engagement science.” Instead Quizzify simply offers employees a tool they will want to use.)

Gaffe #2: Success

On its website, Newtopia’s case study for this customer, Jackson Labs, states:

[Screenshot: Newtopia’s Jackson Labs case study]

However, the AP’s Tom Murphy borrowed a trick from Reuters’ Sharon Begley—and actually did reporting. He asked Jackson Labs itself for some statistics and learned that of the 28 employees who submitted to Newtopia, only 19 remain. So even if every single one of those 19 lost weight, only 68% — not 92% — would report weight loss.  (Newtopia might respond that its website’s 92% statistic was after six months whereas the 68% was after a year, which raises the question of why it decided not to update the statistic on its website.  It’s also a window on the truism that the longer the measurement period, the more people regain the weight.)

Whether 68% or 92%, Newtopia credits its results to “science.”   Indeed their website loudly proclaims:

[Screenshot: Newtopia’s “science drives” claim]

Gaffe #3: Made-Up Facts 

One of the darnedest things about science is that it is a fact-based discipline.  You can’t “drive” your own facts. For instance, their website proclaims:

[Screenshot: Newtopia’s 3x productivity statistic]

Productivity is defined as “output per man-hour.” Therefore in a company that is “3x more productive,” each employee gets 3x more work done.

If Newtopia’s statement is accurate, and if Walmart promoted health, their cashiers could ring up 3x more customers. Doctors could see 3x more patients. Pilots could fly planes three times faster. Teachers could teach three times more classes.  The recorded messages on hold would tell us that customer service thinks our calls are three times more important to them…

Another made-up fact:

[Screenshot: Newtopia’s “failing to control the incidence” claim]

Quite the opposite of “failing to control the incidence,” current approaches have in fact dramatically reduced the rate of heart attacks and strokes.  Don’t take our word for it.  Here are the federal government’s statistics for heart attacks:

[Chart: federal government heart attack statistics]

 

During the most recent available period, in which the population over 50 grew close to 20%, the number of heart attacks fell 19%. Not bad for “failing to control the incidence.” The wellness industry, of course, had nothing to do with that decline, which encompassed all age categories and payers.

Strokes also declined quite a bit overall. Curiously, though, strokes increased significantly in the age categories in which wellness programs were supposed to prevent them, paralleling the dramatic growth of the wellness industry from 2001 to 2012. They declined dramatically in the >65 population, which doesn’t have access to workplace wellness and which grew close to 20% over this period.  So it looks like the only industry that “failed to control the incidence” was: the wellness industry.

[Charts: stroke statistics by age, 2001 and 2012]

 

Gaffe #4: Unsupported Statements

Another thing about science? Credible scientists don’t make statements that aren’t “driven” by evidence. Note this statement:

[Screenshot: Newtopia’s “it works” claim]

Besides being uncited, and disputed by Newtopia’s largest customer (Aetna, which “collaborated” on the HERO report admitting wellness loses money), the statement’s wording is worth noting. It makes it sound like the more you spend on wellness, the more your costs fall. The implication is that spending an extra $500 on Newtopia’s genetic testing (over and above the cost of a regular wellness program) will then save even more.

Gaffe #5: Newtopia’s Privacy Policy

If only this were a lie!

One should be able to assume that if your employer forces you (under threat of fine) to give up your DNA, the company using it will destroy it after use. The fine print is breathtaking. Whereas Quizzify’s privacy policy requires 13 words and is prominently displayed on our home page (“We can’t misuse or lose personal health information because we don’t request any”), Newtopia’s is buried in a linked footer and stretches on for 900 words.  Apparently they will be using your DNA for their own purposes–eventually profiting on it, without giving you any royalties. It will allegedly be de-identified, but privacy guru Anna Slomovic notes that de-identified data can be and has been re-identified.

Newtopia also admits they could make a mistake with it.  OK, they don’t exactly admit it.  They imply it.  They say your DNA could be used in “error management,” and by definition there is no need for error management unless you make errors. And with the full list of people who have access to it — eight categories of occupation including “naturopathic doctors” plus unspecified “other persons” — errors are inevitable.

[Screenshot: excerpt from Newtopia’s privacy policy]

 

People sometimes complain that all we do is criticize wellness vendors. Newtopia is Exhibit A in why that’s often not such a bad idea.  Even Newtopia doesn’t seem to mind–we gave them the opportunity to fact-check, rebut or comment on this, and they didn’t.
