Friday, December 24, 2010

The Difficult Science Behind Becoming a Savvy Healthcare Consumer, Part I

“People are so prone to overcausation that you can make the reticent turn loquacious by dropping an occasional ‘why’ in the conversation.” Nassim Nicholas Taleb

“The mind leans over backward to transform a mad world into a sensible one, and the process is so natural and easy we hardly notice that it is taking place.” Jeremy Campbell

“In a substantialist view, the universe will be unborn, non-ceased, remaining immutable and devoid of variegated states.” Nagarjuna

“If he is weak in the knees, let him not call the hill steep.” Henry David Thoreau



On the same day in November, headlines from the Wall Street Journal and the New York Times reported on the same story about a federal panel’s recommendations on consumer intake of vitamin D. “Triple That Vitamin D Intake, Panel Prescribes” read the WSJ story; “Extra Vitamin D and Calcium Aren’t Necessary, Report Says” stated the New York Times. (http://ow.ly/3tJMe) Since I had recently started taking vitamin D daily, I was interested in what the experts in Washington, DC were recommending.

How should you decide what advice to follow about the relationship between your diet, lifestyle, medications, health, and wellness?

Is this just another example of the media doing a terrible job? Media watchdog Steven Brill’s view resonates with many of us: “When it comes to arrogance, power, and lack of accountability, journalists are probably the only people on the planet who make lawyers look good.” (http://ow.ly/3tKdM)

The media does play a role here and needs to improve, but it turns out to be genuinely complicated to figure out what the “truth” is about diet, exercise, medicines, and your individual well-being. Everybody (journalists, government panel members, scientists, patients, physicians, and nurse practitioners) needs to change.

It is really hard to establish with certainty the cause of any disease. Pierre-Daniel Huet’s Philosophical Treatise on the Weaknesses of the Human Mind is my favorite skeptical analysis of causality. Writing in 1690, he argues that any event can have an infinite number of possible causes. (http://ow.ly/3tLy2) David Hume, the great Scottish philosopher, makes us realize that until we know the necessary connection between cause and effect, all human knowledge is uncertain: merely a habit of thinking built on repeated observation (induction), one that depends on the future resembling the past. (http://ow.ly/3u5Fs) Everyone involved in giving medical advice should read Nassim Nicholas Taleb, who studies how empirical decision makers need to concentrate on uncertainty in order to understand how to act under the inevitable conditions of incomplete information. (http://ow.ly/3tLy2)

But don’t scientific studies establish the causes of diseases so that we can either prevent them or treat them with evidence-based methods? The philosopher Karl Raimund Popper’s theory of science emphasized that there are really only two kinds of scientific theories: those that have been proven wrong and those that have yet to be proven wrong. For Popper, and for Taleb, who greatly admires him, one needs to be skeptical of definitive truths because the world is very unpredictable. (http://ow.ly/3tNOf)

However, many of us would agree with physicist James Cushing’s statement that “scientific theories are to be taken as giving us literally true descriptions of the world.” A straw poll in a university physics department found that ten of eleven faculty members believed that what they were describing with their equations was objective reality. (http://ow.ly/3tNIL)

And yet Edmund Gettier showed that one can hold a justified, true belief and still fail to have knowledge. A man believes there is a sheep in a field because he mistakes a dog for a sheep, but hidden behind a rock, out of view, there is a real sheep in the field. “The three criteria for knowledge (belief, justification, and truth) appear to have been met yet we cannot say that this person actually knows there is a sheep in the field, since his ‘knowledge’ is based on having mistaken a dog for a sheep.” (http://ow.ly/3tNIL)

Science does not give us truth or certainty. As Lys Ann Shore says, “The quest for absolute certainty must be recognized as alien to the scientific attitude, since scientific knowledge is fallible, tentative, and open to revision and modification.” (http://ow.ly/3tNIL)

When I graduated from Case Western Reserve School of Medicine in 1980, the evidence-based causes of peptic ulcer disease included stress, spicy food, chewing gum, and inadequate parenting. In 1982, Perth pathologist Robin Warren and physician Barry J. Marshall proposed that infection with Helicobacter pylori was the real cause, but physicians did not readily agree. When Marshall developed gastritis five days after drinking a Petri dish full of Helicobacter pylori, the scientific community slowly accepted the new theory. Treatment changed from a bland diet and psychotherapy to a combination of two antibiotics and a proton pump inhibitor. In 2005, Warren and Marshall were awarded the Nobel Prize in Physiology or Medicine. Medical knowledge is indeed “fallible, tentative, and open to revision and modification.”

So how can the average person evaluate the latest scientific breakthrough reported in the press? One must become an informed skeptic. Scientific studies come in different types. Observational studies are often untrustworthy. Epidemiological studies are better, but can still lead us astray. Meta-analyses try to aggregate the knowledge discovered by many studies and can be useful. Randomized controlled clinical trials are the most trustworthy and are considered the gold standard of evidence. (http://ow.ly/3tKdM)

John Ioannidis, MD, a faculty member at Tufts-New England Medical Center and the University of Ioannina Medical School, has studied the accuracy of the medical studies we read about in the popular press, and he has discovered that, by a two-to-one margin, discoveries in the most prestigious medical journals are later refuted or their results shown by subsequent papers to be exaggerated. (http://ow.ly/3tQbX)

Ioannidis states, “Amazingly, most medical treatment simply isn’t backed up by good, quantitative evidence.” He also writes that the problem is not confined to medicine: “The facts suggest that for many, if not the majority, of fields, the majority of published studies are likely to be wrong.” (http://ow.ly/3tKdM)

To better understand how this can be so, let’s take a look at hormone replacement therapy for women. In 1966, the best-selling book Feminine Forever argued that menopause was a disease that could be treated by taking estrogen, and hormone replacement therapy became a best-selling drug in the United States. In 1985, the Harvard Nurses’ Health Study reported that women on estrogen had only a third as many heart attacks as women who did not receive the drug. In 1998, the Heart and Estrogen-progestin Replacement Study (HERS) found that estrogen increased the likelihood that women who already had heart disease would experience a myocardial infarction. In 2002, the Women’s Health Initiative (WHI) concluded that hormone replacement therapy increased the risk of heart disease, stroke, blood clots, and breast cancer.

One journalist estimates that tens of thousands of women suffered harm because they took a drug their physicians prescribed to treat menopause and protect them from heart attacks. (http://ow.ly/3tSko) What happened, and why?

The Harvard Nurses’ Health Study is a well-designed, large (122,000 subjects), and well-run prospective cohort study that examines disease rates and lifestyle factors to generate hypotheses about what causes disease. Although such studies can say there is an association between two events (women who took estrogen had fewer heart attacks), they cannot determine causation. (http://ow.ly/3tSko) Huet, writing in 1690, was right: there are many possible causes for any one event. Ioannidis estimates that there are as many as three thousand different factors that might cause a condition like obesity, so it is not surprising that many hypotheses turn out to be wrong. (http://ow.ly/3tKdM)
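A little base-rate arithmetic shows why thousands of candidate causes guarantee a stream of false findings. This is a toy sketch; the prior, power, and significance numbers are my own illustrative assumptions, not figures from Ioannidis’s papers:

```python
# Toy base-rate arithmetic (illustrative assumptions, not Ioannidis's
# actual figures): when few of the hypotheses being tested are true,
# most statistically significant "discoveries" are false alarms.

def fraction_of_findings_true(prior, power=0.80, alpha=0.05):
    """Share of significant results (p < alpha) that reflect a real effect."""
    true_positives = prior * power           # real effects correctly detected
    false_positives = (1 - prior) * alpha    # non-effects slipping past p < 0.05
    return true_positives / (true_positives + false_positives)

# If only 1 in 100 candidate factors is real (think of three thousand
# possible causes of obesity), most published "links" are wrong:
print(round(fraction_of_findings_true(prior=0.01), 2))   # 0.14
print(round(fraction_of_findings_true(prior=0.50), 2))   # 0.94
```

With a 50/50 prior the same machinery makes most findings true; the culprit is not sloppy statistics but the sheer number of unlikely hypotheses being screened.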

That is why the hypotheses generated by such observational studies need to be tested by gold-standard randomized controlled clinical trials like HERS and the WHI. There are three ways to reconcile the difference between the clinical trial results and the Nurses’ Health Study results. (http://ow.ly/3tSko) First, the association of estrogen with fewer heart attacks could be explained by the healthy-user and healthy-prescriber effects: the women who took hormone replacement therapy differed from those who did not, and the physicians who prescribed it differed from the physicians who treated women without it. Second, it is hard to determine accurately whether the women in the observational study actually took the estrogen before their heart attacks occurred. Third, both the clinical trials and the observational study may have gotten the right answer, but to different questions. The Nurses’ study enrolled mostly younger women, and the clinical trials mostly older women. It is possible that estrogen both protects the hearts of younger women and induces heart attacks in older women; this is now known as the timing hypothesis. (http://ow.ly/3tSko)
We really don’t know which of these three possible explanations is correct.
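The healthy-user effect is easy to see in a toy simulation (all numbers invented for illustration; the drug is given no real effect in this model, yet the observational comparison still shows one):

```python
# Toy healthy-user simulation (invented numbers): healthier women are
# both more likely to take the drug and less likely to have heart
# attacks, so an observational comparison shows a "benefit" that
# random assignment makes disappear.
import random

random.seed(0)

def heart_attack(healthy):
    # Assumed baseline risks; the drug itself does nothing in this model.
    return random.random() < (0.05 if healthy else 0.15)

N = 100_000
women = [random.random() < 0.5 for _ in range(N)]  # True = healthier

# Observational: healthier women choose the drug 80% of the time,
# so drug-takers start out as a healthier group.
treated, untreated = [], []
for healthy in women:
    takes_drug = random.random() < (0.8 if healthy else 0.2)
    (treated if takes_drug else untreated).append(heart_attack(healthy))

# Randomized: assignment ignores health status, breaking the confounding.
rand_treated = [heart_attack(h) for h in women[: N // 2]]
rand_control = [heart_attack(h) for h in women[N // 2 :]]

def rate(outcomes):
    return sum(outcomes) / len(outcomes)

print(f"observational: {rate(treated):.2f} treated vs {rate(untreated):.2f} untreated")
print(f"randomized:    {rate(rand_treated):.2f} vs {rate(rand_control):.2f}")
```

The observational arms differ even though the drug does nothing; randomization, as in HERS and the WHI, is what removes the healthy-user bias.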

What I take away from this is that the skeptical consumer needs to be wary of all the new advice coming from scientific breakthrough studies reported in the lay press. A principal investigator with the Nurses’ study from 1976 to 2001 warns, “Even the Nurses’ Health Study, one of the biggest and best of these studies, cannot be used to reliably test small-to-moderate risks or benefits. None of them can.” (http://ow.ly/3tSko) Skeptical consumers need to understand the inherent limitations of such observational studies.

David H. Freedman, in Wrong: Why Experts Keep Failing Us – And How to Know When Not to Trust Them (http://ow.ly/3tKdM), writes about the certainty principle. Drawing upon behavioral economics studies, he shows that humans are biased toward advice that is simple, clear-cut, actionable, universal, and palatable. “If an expert can explain how any of us is sure to make things better via a few simple, pleasant steps, then plenty of people are going to listen.” And experts know that people will pay more attention if they make dramatic claims, tell interesting stories, and use a lot of statistics.

Freedman provides some guidance for those of us who want to be informed, skeptical, wise consumers of medical tests, therapies, and expert advice. Under characteristics of less trustworthy expert advice he lists:

• Simplistic, universal, and definitive advice
• Advice supported by a single study, or many small studies, or animal studies
• Groundbreaking advice
• Advice pushed by people or organizations that will benefit from its adoption
• Advice geared toward preventing the future occurrence of a recent crisis or failure

Characteristics of expert advice we should ignore according to Freedman include:

• It’s mildly resonant
• It’s provocative
• It gets a lot of positive attention
• Other experts embrace it
• It appears in a prestigious journal
• It’s supported by a big, rigorous study
• The experts backing it boast impressive credentials

Freedman’s characteristics of more trustworthy advice:

• It does not trip the other alarms
• It’s a negative finding
• It’s heavy on qualifying statements
• It’s candid about refutational evidence
• It provides some context for the research
• It provides perspective
• It includes candid, blunt comments


With all due respect to Freedman, who has written an informative book, his advice is confusing and not all that helpful to the layperson trying to decide whether to take extra vitamin D. Although experts with impressive credentials have given advice that should be ignored, others with similar expertise have performed rigorous prospective, randomized clinical trials whose findings should be followed. Think of not smoking if you want to avoid lung cancer or heart disease. It is also true that the really skeptical epidemiologists accept very few diet and lifestyle factors as true causes of common diseases: smoking causes lung cancer and heart disease; sun exposure causes skin cancer; sexual activity spreads the papilloma virus that causes cervical cancer. (http://ow.ly/3tSko)

So where does all this reading get me as far as vitamin D is concerned? Primary care physicians, relying on findings of an association between vitamin D levels and a higher risk for a variety of diseases including heart disease, cancer, and autoimmune disease, started telling their patients to take supplemental vitamin D. Sales of vitamin D rose 82% from 2008 to 2009, reaching $430 million a year in the United States. One expert is quoted in the New York Times saying, “Everyone was hoping vitamin D would be a kind of panacea.” (http://ow.ly/3u5YJ) As far as I know, none of these claims for vitamin D preventing disease have been proven by a randomized controlled clinical trial.

And what about the conflicting WSJ and New York Times headlines? In a way, both were right. The WSJ concentrated on the recommendation that people should get 600 IU of vitamin D every day, which is three times the old standard of 200 IU a day. (http://ow.ly/3u63J) The New York Times concentrated on the finding that most of us get enough vitamin D from our diet and exposure to sunlight. (http://ow.ly/3u5YJ)

So what did I decide? Following in the footsteps of Huet, Hume, Popper, Taleb, Ioannidis, and Freedman, I decided to stop taking vitamin D and not to test my blood level. Sometimes you have to act based on incomplete knowledge of an unpredictable world, and I tend to be a skeptic and a minimalist when it comes to doctors and medical advice.

Part II: How and why we all have to change (coming soon)

5 comments:

  1. Insightful... as usual. It's interesting how the principles of shared decision making or "preference sensitive" medicine have led two (I assume) reasonably good physicians to look at, interpret, and act upon the literature quite differently. I went from a daily intake of ~800U a day to 5,000U/day for the past year :-) There is a very important take-home point here... same info, presumably same ability to interpret the data, yet two very different decisions were made.

    For my own particular small corner of the medical universe, Orthopedic Surgery, I try to embrace those same principles that led you and me to two different decisions given the same set of information. I discuss this on my blog under topics of both shared decision making and the *personality* of an injury: how the same injury may not behave the same way, nor produce the same degree of pain or disability, in two different people --- concluding that we should not (always) treat MRI findings, but instead treat patients. Because for the same injury, in various individuals, different treatment plans will emerge.

  2. It is an interesting topic. Part I examined "the limitations of science in helping us make wise choices and decisions about our health."

    Sweat

  3. Excellent and eloquent! There is one source of truly evidence-based information that takes into account all of the peer-reviewed findings published to the National Library of Medicine from the past 100 years: www.curehunter.com - still not a silver bullet, but with 200,000 articles published in the last year, it is an evidence-based view that no expert opinion can match.
