Wednesday, July 13, 2005

Evaluating the Weight of Evidence

How reliable are the findings of a research study?

Scientists generally agree that data is reliable when it has been replicated - that is, when a different group of researchers repeats the study using the same or more rigorous investigational protocols, and the findings of the additional study confirm the original findings.

This month's JAMA reported the findings of Dr. John Ioannidis, a researcher at the University of Ioannina in Greece, who reviewed major studies published in three influential medical journals between 1990 and 2003 - including 45 highly publicized studies that initially claimed a drug or other treatment worked.

As reported in the review, subsequent research contradicted the results of seven studies - 16 percent - and reported weaker results for seven others, an additional 16 percent. That means nearly one-third of the original results did not hold up.
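Those fractions are easy to verify. A quick back-of-the-envelope check in Python, using only the numbers quoted above (45 highly publicized positive studies, 7 contradicted, 7 weakened):

```python
# Sanity-check the review's reported percentages.
# All numbers come from the article text; nothing here is new data.
total = 45          # highly publicized studies claiming a treatment worked
contradicted = 7    # later research contradicted the results
weakened = 7        # later research reported weaker results

pct_contradicted = 100 * contradicted / total
pct_weakened = 100 * weakened / total
pct_not_upheld = 100 * (contradicted + weakened) / total

print(f"contradicted:    {pct_contradicted:.0f}%")  # about 16%
print(f"weakened:        {pct_weakened:.0f}%")      # about 16%
print(f"did not hold up: {pct_not_upheld:.0f}%")    # about 31%, nearly one-third
```

So "nearly one-third" is 14 of the 45 studies, or about 31 percent.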

"Contradicted and potentially exaggerated findings are not uncommon in the most visible and most influential original clinical research," said study author Dr. John Ioannidis.

Examples of findings originally touted as beneficial and later refuted or watered down by subsequent research:
  • Hormone pills protect menopausal women from heart disease. A larger, more rigorous Women's Health Initiative study later found the pills actually increase heart disease risk.
  • Vitamin E pills protect against heart disease. A more rigorous study found no such protection.
  • Antibody treatment targeting a bacterial poison improves patients' chances of surviving sepsis, a potentially deadly bloodstream infection. A much larger study found no protection.
  • Inhaling nitric oxide helps patients with respiratory failure. Larger studies found no benefit.
  • Antioxidant substances contained in tea, wine and many fruits and vegetables substantially reduce the risk of heart disease. A later study said the benefit was more modest.
  • An operation that clears fat from neck arteries reduces stroke risks in patients without symptoms. A subsequent analysis found more modest benefits.

Ioannidis acknowledged an important but not very reassuring caveat: "There's no proof that the subsequent studies ... were necessarily correct." But he noted that in all 14 cases in which results were contradicted or softened, the subsequent studies were either larger or better designed. Also, none of the contradicted treatments is currently recommended by medical guidelines.

So what does this mean for you?

It's a reminder that you should not put too much stock in a single study, and that treatments often become obsolete as medicine advances. This isn't to say that emerging data is wrong - just that before you jump at a new treatment based on new evidence, weigh the possibility that it may not be your best option until later studies confirm the data. In this review, later research contradicted or weakened the original findings in about one-third of the studies.

Ioannidis said scientists and editors should avoid "giving selective attention only to the most promising or exciting results" and should make the public more aware of the limitations of science.

"The general public should not panic" about refuted studies, he said. "We all need to start thinking more critically."
