In a WebMD article about the study, Christian Gluud (one of the researchers) was quoted as saying "Anyone is welcome to criticize our research, but my question is, what is your evidence? I think the parties that want to sell or use these antioxidant supplements in the dosages used in these trials, they want [to see only] positive evidence that it works beneficially."
It's funny that a researcher would assume criticism is driven by a profit motive or a desire to believe something is beneficial, and not by flawed methodology, overwhelming scope, subjective exclusions, or potentially subjective assignment of bias risk.
Here's my take on the review, with first a full disclosure - I do not sell or promote the use of antioxidant supplements; I do not have a financial interest in any supplement company; and I don't take any of the antioxidants included in the meta-analysis except what is included in my daily multi-vitamin. You can see I have no profit motive to criticize the study, nor do I have a blind-spot desire to believe that particular antioxidants I'm taking reduce my risk of dying.
I'm interested in the data and the methodology used to reach the conclusion that the researchers did not "find convincing evidence that antioxidant supplements have beneficial effects on mortality. Even more, beta carotene, vitamin A, and vitamin E seem to increase the risk of death. Further randomized trials are needed to establish the effects of vitamin C and selenium."
The stated objective to undertake a meta-analysis of antioxidants was "to analyze the effects of antioxidant supplements (beta carotene, vitamins A and E, vitamin C [ascorbic acid], and selenium) on all-cause mortality of adults included in primary and secondary prevention trials."
Interestingly, we learn upon reading the full text that the researchers specifically excluded studies that reported no deaths during the trial period or follow-up. A meta-analysis that seeks to establish effects on mortality but excludes studies with no reported deaths from the pool of data to be analyzed? Hmmm...
Yes, this is the first interesting thing that popped up when I read the paper and learned the researchers conducted an initial search across a number of databases and found 16,111 references that hit their search criteria. Of course many of these were duplicate references (14,003), leaving 2,108 references to review.
The next review led to the exclusion of another 907 references because they included patients with cancer; involved acute and infectious disease; involved infants or children; or involved pregnant or lactating women. These understandable exclusions then left the researchers with 1,201 references (815 trials) to review for inclusion or exclusion from the analysis.
So the researchers moved on and started to exclude based on criteria few can or will argue with - the trial didn't meet inclusion criteria established for the review; it was not randomized; data was insufficient for review purposes; or a trial was still ongoing.
But then we come to a curious reason for exclusion - a study did not have any deaths during trial or follow-up.
To me this is an odd exclusion criterion for a review specifically looking at mortality.
In fact, this one exclusion removed another 405 studies from the table for the review - almost half the studies that remained after the first round exclusions.
So this now left the researchers with 68 trials, and we know these studies had reported deaths during the trial since the researchers excluded any trials where subjects didn't die during the study or follow-up.
Of these 68 trials that remained, it's now important to understand if and how they differed.
First, 21 were "primary prevention" - that is they were conducted on "healthy" subjects with the trial seeking to establish a benefit to health; the remaining 47 trials were "secondary prevention" - that is the subjects included were diagnosed with disease and the trial was conducted to see if an intervention would slow progression of a disease or reduce risk of death from the disease.
In addition, differences between the studies included in the review help us understand the challenge the researchers set up in what can only be called a broad scope review. We absolutely have to consider all of this to understand just how the data may be confounded in the statistical analysis, so here are the dozen differences that popped out as I read the paper:
1. Primary versus secondary prevention trials
2. Single versus multiple combinations of antioxidants
3. Dose difference in antioxidants administered
4. Mode of administration differences
5. Placebo-controlled versus no-intervention comparison groups
6. Absolute risk versus relative risk reporting
7. Duration differences for supplementation and follow-up
8. All cause mortality without cause of death examined in review
9. Synthetic antioxidants
10. Single isomer or full spectrum antioxidants
11. Collateral interventions in some trials
12. Measurement of oxidative stress with and without supplementation not included in review
As the Cochrane Handbook for Systematic Reviews cautions, "[t]he validity of very broadly defined reviews may be criticised for mixing apples and oranges, particularly when there is good biologic or sociological evidence to suggest that various formulations of an intervention behave very differently or that various definitions of the condition of interest are associated with markedly different effects of the intervention. It is fine to mix apples and oranges, if your question is about fruit, but not if your question is about vitamin C and you know that apples and oranges are different with respect to vitamin C."
When we look at the above list of potential confounding variables, it's clear the researchers had apples, oranges, pears, grapes, bananas and more to contend with in analyzing the main outcome endpoint - mortality.
To put it bluntly, the studies used are all over the place, looking at different antioxidants, at different doses, at different durations, with different lengths of follow-up, in different populations - ranging from folks who were incredibly healthy to people with different diseases, with different combinations administered in different trials.
But that's just the beginning.
Where things get really interesting is when we look at how the various models of analysis worked out in determining risk, compared with the warnings in the media.
Let's start by looking at what is called the pooled effect of all the randomized trials included in the review. This part of the analysis looked at all the studies included with all the varying confounders to assess risk.
The findings are interesting:
"The pooled effect of all supplements vs placebo or no intervention in all randomized trials was not significant (RR, 1.02; 95% CI, 0.98-1.06)."
That was step one - an overall analysis of all the data from the 68 trials included. Finding - no statistically significant increase in mortality.
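For readers unfamiliar with how a pooled relative risk like that RR of 1.02 is produced, a common approach is inverse-variance weighting of each trial's log relative risk. The sketch below uses made-up trial counts purely for illustration - these are not the trials from the review, and the paper's actual pooling method may differ in detail:

```python
import math

# Hypothetical per-trial data: (deaths_treated, n_treated, deaths_control, n_control).
# Illustrative numbers only - not data from the antioxidant review.
trials = [
    (120, 1000, 110, 1000),
    (45, 500, 50, 510),
    (300, 4000, 280, 3900),
]

log_rrs, weights = [], []
for et, nt, ec, nc in trials:
    rr = (et / nt) / (ec / nc)
    # Standard approximation for the variance of log(RR) in one trial
    var = 1 / et - 1 / nt + 1 / ec - 1 / nc
    log_rrs.append(math.log(rr))
    weights.append(1 / var)

# Fixed-effect (inverse-variance) pooled estimate with a 95% CI
pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se)

print(f"Pooled RR = {math.exp(pooled_log):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The point of the exercise: when the pooled CI straddles 1.0, as it does for the review's overall analysis (0.98-1.06), the result is not statistically significant.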
The researchers then took the next step, and examined the effect of dose since the trials included low to mega doses of antioxidants. With this "univariate" (single item, dosage) analysis, they found:
"Univariate meta-regression analyses revealed significant influences of dose of beta carotene (RR, 1.004; 95% CI, 1.001-1.007; P = .012), dose of vitamin A (RR, 1.000006; 95% CI, 1.000002-1.000009; P = .003), dose of selenium (RR, 0.998; 95% CI, 0.997-0.999; P = .002), and bias-risk (RR, 1.16; 95% CI, 1.05-1.29; P = .004) on mortality. None of the other covariates (dose of vitamin C; dose of vitamin E; single or combined antioxidant regimen; duration of supplementation; and primary or secondary prevention) were significantly associated with mortality."
This is critically important to the findings in this meta-analysis because, setting aside "bias risk," the two statistically significant findings of increased mortality risk above regard vitamin A - as vitamin A and as beta-carotene.
A closer look at the studies included shows that the dosage was all over the place for vitamin A - from 2,000 IU per day to twenty times the upper tolerable limit (UTL) at 200,000 IU per day; beta carotene doses ranged from a low of 1.2 mg per day to 50 mg per day.

Just as important as the finding that higher doses statistically increase risk, it is also important to note that there was no increased risk found for vitamin E; vitamin C; single or combined regimens; duration of supplementation; or primary versus secondary prevention.
It is also notable that in this univariate analysis, selenium had a statistically significant protective effect by dose, with an RR 0.998 (CI = 0.997-0.999).
But before we get to thinking selenium may be protective, let's see what the researchers' next step, multivariate analysis, found:
"In multivariate meta-regression analysis including all covariates, dose of selenium was associated with significantly lower mortality (RR, 0.998; 95% CI, 0.997-0.999; P = .005) and low-bias risk trials with significantly higher mortality (RR, 1.16; 1.05-1.29; P = .005). None of the other covariates was significantly associated with mortality."
So here we have a somewhat different picture. In the univariate analysis, by dose, there are increased risks for vitamin A and beta carotene; a benefit for selenium; and no increased risk for the remaining antioxidants. Run a multivariate regression and that initial finding of benefit for selenium holds, while the "low-bias risk" trials become associated with a statistically significant increase in mortality.
Where before vitamin A and beta-carotene increased risk, now there is no statistical significance; and again, the other antioxidants had no increased risk associated with them.
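To make the dose finding concrete: a univariate meta-regression of this kind can be sketched as a weighted least-squares fit of each trial's log(RR) against its dose. The numbers below are invented for illustration (not the review's data), but they show how a tiny per-unit slope like the paper's RR of 1.004 per unit of beta carotene arises:

```python
import math

# Hypothetical triples of (trial log-RR, dose in mg/day, weight = 1/variance).
# Illustrative values only - not data from the review.
data = [
    (0.00, 1.2, 50.0),
    (0.02, 20.0, 80.0),
    (0.05, 30.0, 60.0),
    (0.08, 50.0, 40.0),
]

# Weighted least squares for the model: log(RR) = a + b * dose
sw = sum(w for _, _, w in data)
mx = sum(w * d for _, d, w in data) / sw          # weighted mean dose
my = sum(w * y for y, _, w in data) / sw          # weighted mean log-RR
b = sum(w * (d - mx) * (y - my) for y, d, w in data) / sum(
    w * (d - mx) ** 2 for _, d, w in data)
a = my - b * mx

# The slope re-expressed as a risk ratio per extra mg/day of dose
rr_per_mg = math.exp(b)
print(f"slope b = {b:.5f}  ->  RR per mg/day = {rr_per_mg:.5f}")
```

Note how a per-milligram RR barely above 1.0 can still reach statistical significance when dose ranges span orders of magnitude, as they did for vitamin A (2,000 to 200,000 IU) - which is why the paper's per-unit RRs look so close to 1.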
Not content yet that they'd fully massaged the data every which way they could, they then segregated and analyzed by antioxidant, by single/combination administration, and by "bias risk" that they determined during their review of the trials to include/exclude.
The analysis of the full data from all "low-bias risk" studies revealed "...mortality was significantly increased in the supplemented group (RR, 1.05; 95% CI, 1.02-1.08)" while all "high-bias risk" data, taken together, revealed "...mortality was significantly decreased in the supplemented group (RR, 0.91; 95% CI, 0.83-1.00)"
Is your head spinning yet?
They then analyzed by single or combination administration of antioxidants.
Here come the mental gymnastics and data massaging needed to reach statistical significance:
Beta carotene used singly significantly increased mortality. RR 1.06 (CI = 1.01-1.11)
This effect was not significant when combined with other supplements. RR 1.01 (CI = 0.94-1.08)
After exclusion of high-bias risk and selenium trials, beta carotene singly or combined significantly increased mortality. RR 1.07 (CI = 1.02-1.11)
Beta carotene by itself, bad; combined, nothing; exclude some study data and again it's bad.
Vitamin A given singly did not significantly affect mortality. RR 1.18 (CI = 0.84-1.68)
Vitamin A given in combination with the other supplements did not significantly affect mortality. RR 1.03 (CI = 0.90-1.19)
After exclusion of high-bias risk and selenium trials, vitamin A singly or combined significantly increased mortality. RR 1.16 (CI = 1.10-1.24)
Vitamin A alone or in combination, nothing; exclude some study data, bad.
Vitamin E given singly did not significantly affect mortality. RR 1.02 (CI = 0.98-1.05)
Vitamin E given in combination with the other supplements did not significantly affect mortality. RR 1.01 (CI = 0.95-1.06)
Vitamin E given singly in high dose (1000 IU or more) or low dose (less than 1000 IU) did not significantly affect mortality. RR 1.07 (CI = 0.91-1.25)
After exclusion of high-bias risk and selenium trials, vitamin E given singly or combined significantly increased mortality. RR 1.04 (CI = 1.01-1.07)
Vitamin E alone, in combination, in high dose - nothing; exclude some study data, it's bad.
Vitamin C given singly was without significant influence on mortality. RR 0.88 (CI = 0.32-2.42)
Vitamin C given in combination with the other supplements was without significant influence on mortality. RR 0.97 (CI = 0.88-1.07)
After exclusion of high-bias risk trials and selenium trials, vitamin C was without significant influence on mortality. RR 1.06 (CI = 0.94-1.20)
Vitamin C alone, in combination and when excluding some study data, nothing.
Selenium given singly had no significant influence on mortality when analyzed separately. RR 0.85 (CI = 0.68-1.07)
Selenium given in combination with other antioxidant supplements had no significant influence on mortality when analyzed separately. RR 0.90 (CI = 0.81-1.01)
Selenium given singly or combined significantly decreased mortality when analyzed together. RR 0.91 (CI 0.84-0.99)
After exclusion of high-bias risk trials, selenium given singly or with other antioxidants had no significant influence on mortality. RR 0.90 (CI = 0.80-1.02)
Selenium alone or in combination, nothing; data analyzed together singly or in combination, benefit; exclude some data, nothing.
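To keep all those RRs straight, remember the one mechanical rule: a relative risk is statistically significant at the 5% level only when its 95% confidence interval excludes 1.0. A small sketch tabulating a few of the subgroup results quoted above:

```python
def significant(lo: float, hi: float) -> bool:
    """A relative risk is statistically significant at the 5% level
    only when its 95% confidence interval excludes 1.0."""
    return hi < 1.0 or lo > 1.0

# (label, RR, CI low, CI high) - values as quoted in the subgroup results above
results = [
    ("beta carotene, single", 1.06, 1.01, 1.11),
    ("beta carotene, combined", 1.01, 0.94, 1.08),
    ("vitamin A, single", 1.18, 0.84, 1.68),
    ("vitamin C, single", 0.88, 0.32, 2.42),
    ("selenium, single or combined", 0.91, 0.84, 0.99),
]

for label, rr, lo, hi in results:
    flag = "significant" if significant(lo, hi) else "not significant"
    print(f"{label}: RR {rr} ({lo}-{hi}) -> {flag}")
```

Run through the list this way and the pattern is obvious: the size of the RR matters far less than whether the interval clears 1.0, and which intervals clear 1.0 keeps changing depending on which studies are excluded.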
Interestingly the researchers didn't take the data even further - they didn't investigate causes of death, so no one knows if deaths were related to taking antioxidants, accidents, diagnosed disease or something else. They also did not analyze for potential outcome differences between primary prevention and secondary prevention trials - basically they didn't ask if being sick (secondary prevention trials) influenced mortality outcome differently than being healthy (primary prevention trials) while taking antioxidants.
They provide no context for absolute risk in either setting (primary or secondary prevention) and no context of absolute risk even with their different regression models.
Additionally, each of the above models used highlights the way data can be tweaked a bit here and there - in this case segregated by a subjective determination of "low-bias risk" and "high-bias risk" - to reach different findings that most assuredly will get the media in a tizzy to report the dangers of antioxidant supplements.
As WebMD reported (and pay attention to the wording here) "Use of the popular antioxidant supplements beta-carotene, vitamin E, or vitamin A slightly increases a person's risk of death, an overview of human studies shows. The study also shows no benefit -- and no harm -- for vitamin C supplements. Selenium supplements tended to very slightly reduce risk of death. A new, detailed analysis of human studies of beta-carotene, vitamin A, and vitamin E shows that people who take these antioxidant supplements don't live any longer than those who don't take them. In fact, those who take the supplements have an increased risk of death."
Note WebMD offers no qualification of which part of the analysis they refer to that reached statistical significance, and then they interject subjective qualifiers not normally associated with statistically significant findings - specifically the part where they say "selenium supplements tended to very slightly reduce risk of death." That is an interpretation. The only question is "was the finding statistically significant?", with the only answer being yes or no, not this "slightly" or "tended to" crap interjected to muddy the water for the consumer.
Now go back and look at the detailed numbers by regression model again.
The full weight of the data analysis in the review certainly does not warrant the quote from Kathleen Zelman, MPH, RD, LD, director of nutrition for WebMD, who said "This is a very comprehensive, to-be-respected analysis. This isn't just another study coming out. The bottom line is that antioxidant supplements are not a magic bullet for disease prevention. We hoped maybe they were, but they are not."
Interesting choice of words considering the review wasn't investigating only primary prevention trials; and that unless you cherry-pick through the data, increased risk of mortality was not a consistent finding throughout the review. While the headlines are pointing to the findings from the group of "low-bias risk" studies within the 68 that showed a relative 16% increase in mortality rates, the statistical difference in the studies analyzed comes down to this:
- there were 15,366 deaths among 99,095 subjects who took antioxidants (15.50%)
- there were 9,131 deaths among 81,343 subjects who did not take antioxidant supplements (11.22%)
- the difference in hard numbers is 4.28 percentage points (not quite the frightening relative 16% the media is headlining)
- the difference did not include any investigation for cause of death
- the studies included in the meta-analysis only included studies in which subjects did die
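The arithmetic behind those bullet points is easy to check from the review's raw totals:

```python
# Raw totals from the review: deaths and subjects in each arm
deaths_supp, n_supp = 15366, 99095     # took antioxidant supplements
deaths_ctrl, n_ctrl = 9131, 81343      # did not take supplements

rate_supp = deaths_supp / n_supp       # crude death rate, supplemented
rate_ctrl = deaths_ctrl / n_ctrl       # crude death rate, unsupplemented

abs_diff = (rate_supp - rate_ctrl) * 100   # difference in percentage points

print(f"supplemented: {rate_supp:.2%}, unsupplemented: {rate_ctrl:.2%}")
print(f"absolute difference: {abs_diff:.2f} percentage points")
```

One caution on interpreting this: these crude totals lump every trial together, so the ratio of the two raw rates is not the adjusted RR of 1.16 the headlines quote - that figure comes from the "low-bias risk" subgroup analysis, not from this simple division.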
The problem with a meta-analysis like this is that there are so many pitfalls (identified above), which the researchers do discuss in their paper but not when they're quoted in the media. It's unfortunate that these researchers are not more specific in their media quotes, nor do they do much to point out that all was not "risky" about taking antioxidant supplements.
This meta-analysis had great potential, but it was rendered pretty meaningless by all the various regression analyses and the fact that so much depended on excluding data while mixing the apples-and-oranges study types of primary prevention and secondary prevention.