I had previously written about this well-performed meta-analysis, and also about some unfair ways people try to attack randomized trials, and these letters provide an interesting (at least to me) intersection between those two posts.
Letters in academic scientific journals are sociologically revealing. There’s typically a polite veneer on even the most vicious attacks. Letters written to European medical journals have a somewhat different feel from those to American medical journals, and letters to the Lancet often seem to have a sneering tone that would be unusual to find in the NEJM or JAMA.
One letter about the meta-analysis objects, first, that the results cease to be statistically significant when cases of diabetes diagnosed only by physician report are excluded, and second, that the results involved a post hoc analysis of the data, with the warning that we might fall victim to the logical fallacy “post hoc ergo propter hoc.”
Are these fair objections?
Diagnosing diabetes by physician report rather than by blood glucose measurement is likely to lead to misclassification: some patients will be classified as having diabetes who don’t, and some who have diabetes will be missed. In an RCT, though, misclassification like this will almost certainly be non-differential, i.e. unrelated to treatment assignment, producing random misclassification bias. Bias of this sort pushes toward the null hypothesis (no difference between the groups), as you can convince yourself by imagining that classification is perfectly random, with no relation at all between the classification and true diabetes status. Under such perfect misclassification, the two groups would have equal expected numbers of patients classified as having diabetes, and there would be no difference between treatment and control. In a meta-analysis that found higher rates of diabetes in patients receiving statins, misclassification bias can therefore be expected to have somewhat reduced the true effect, not to have created an effect out of thin air.
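To make this concrete, here is a minimal sketch (the event rates and error probability are made up for illustration, not taken from the meta-analysis) showing how non-differential misclassification attenuates an observed difference toward zero:

```python
# Illustrative true diabetes rates per arm (made up, not real data):
p_control, p_statin = 0.040, 0.045

def observed_rate(true_rate, flip):
    """Expected fraction classified as diabetic when each patient's
    classification is independently flipped with probability `flip`,
    the same in both arms (non-differential misclassification)."""
    return true_rate * (1 - flip) + (1 - true_rate) * flip

for flip in (0.0, 0.1, 0.3, 0.5):
    diff = observed_rate(p_statin, flip) - observed_rate(p_control, flip)
    print(f"flip = {flip:.1f}: observed difference = {diff:+.4f}")

# flip = 0.0 recovers the true difference (+0.0050); the observed
# difference shrinks by a factor of (1 - 2*flip), and at flip = 0.5,
# where classification is unrelated to true status, the two arms look
# identical: the "perfectly random" case described above. Random error
# can hide a real difference but cannot manufacture one.
```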
The second objection might be called the “post-hoc-ergo-propter-hoc-fallacy fallacy.” The actual fallacy is, of course, a way of saying that just because B follows A, you should not conclude that A caused B. This question of causality is central to epidemiologic research and one of the primary reasons for performing randomized trials, which have particular strengths when arguing for causality. The fallacy has nothing to do with performing post hoc analyses of trials. (To be fair, it’s possible the letter writer understood this and was being humorous.) The real problem with a post hoc analysis of a randomized trial is that it often involves multiple comparisons, or data dredging, where statistical blips are likely to confuse the issue of what is a true effect. As discussed in my earlier post, a prime question preceding this meta-analysis was whether JUPITER had found just such a random blip or had detected a real problem. The meta-analysis was performed primarily to answer that question, and in such a setting there is nothing at all concerning about going back to previously conducted RCTs and performing post hoc analyses looking for diabetes effects. No data dredging was involved, and the analysis should not be looked at askance simply for being post hoc. Revealingly, the meta-analysis found an increased risk of diabetes even when data from JUPITER were excluded.
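For a sense of why data dredging, rather than post hoc timing, is the real hazard, here is a quick simulation sketch (numbers purely illustrative, and independence between tests is assumed) of how checking many unrelated endpoints at p < 0.05 inflates the chance of a spurious “finding”:

```python
import random
random.seed(1)

ALPHA = 0.05       # conventional significance cutoff
TRIALS = 100_000   # simulated dredging exercises per scenario

for k in (1, 5, 20):  # number of unrelated endpoints examined
    hits = sum(
        any(random.random() < ALPHA for _ in range(k))  # any false positive?
        for _ in range(TRIALS)
    )
    print(f"{k:2d} endpoints tested: P(at least one blip) ~ {hits / TRIALS:.2f}")

# Roughly 0.05, 0.23, and 0.64: trawling twenty endpoints makes a
# chance "finding" more likely than not. Asking one pre-motivated
# question (do statins raise diabetes rates?) across existing RCTs,
# even post hoc, carries no such multiplicity penalty.
```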
A second letter complained that the analysis would have been better had it been carried out using hazard ratios rather than odds ratios. While this is likely true, such an analysis was not possible given the information available to the authors, and it is hard to see how an odds ratio analysis would have shown statins causing diabetes if they were not. The same letter also re-raised the possibility that statins only appeared to cause diabetes because they kept patients alive longer, giving them more time to develop it. However, the authors had already addressed this in their meta-analysis, and they reiterated in their response to the letters that the differences in survival were much too small to produce such an effect.
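As a side note on why odds ratios are a serviceable stand-in here, a toy 2×2 calculation (counts made up, not from any of the trials) shows how closely the odds ratio tracks the risk ratio when the outcome is rare:

```python
# Made-up counts for one hypothetical trial:
#                 diabetes   no diabetes
# statin arm         275        8,725
# control arm        250        8,750
a, b = 275, 8725   # statin arm: events, non-events
c, d = 250, 8750   # control arm: events, non-events

odds_ratio = (a / b) / (c / d)
risk_ratio = (a / (a + b)) / (c / (c + d))
print(f"OR = {odds_ratio:.3f}, RR = {risk_ratio:.3f}")

# With a rare outcome (~3% here), OR ~ 1.103 and RR ~ 1.100 are nearly
# interchangeable, so an odds ratio analysis is a reasonable fallback
# when the person-time data needed for hazard ratios isn't available.
```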
A third letter misstates the definition of a type I error on its way to arguing that the meta-analysis should have used 99% confidence intervals (a p-value cutoff of 0.01), for reasons that were not made terribly clear but seemed related to a concern that a very large meta-analysis would be more likely to detect a spurious result. It is true that, given the enormous N in the analysis, it was possible to find a statistically significant difference in diabetes rates that is of little clinical significance, but this has nothing to do with the truth or falsehood of the result itself. The letter also argues that the result is biologically implausible, though it hardly seems implausible that a medication could increase diabetes rates during the time frame of a randomized trial, if only by raising blood sugars in patients near the margin between insulin resistance and diabetes.
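The large-N point is easy to quantify with a back-of-the-envelope sketch (rates invented for illustration): a two-proportion z-test shows that a difference too small to matter clinically still becomes “statistically significant” once the sample is big enough, which says nothing about whether the difference is real.

```python
import math

def z_for_rate_difference(p0, p1, n_per_arm):
    """Two-proportion z statistic (pooled, normal approximation)
    for a difference in event rates between two equal-size arms."""
    p_pool = (p0 + p1) / 2
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
    return (p1 - p0) / se

# A clinically tiny, made-up difference of 0.2 percentage points:
p_control, p_statin = 0.030, 0.032

for n in (1_000, 10_000, 100_000):
    z = z_for_rate_difference(p_control, p_statin, n)
    print(f"n per arm = {n:7,d}: z = {z:.2f}")

# z ~ 0.26, 0.82, 2.58: the same tiny difference is nowhere near
# significant in a modest trial but clears z = 1.96 (p < 0.05) at
# around 100,000 per arm. Big samples detect small effects; whether
# an effect is real and whether it is large are separate questions.
```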
A fourth letter suggests that the “diabetes” found in the study might differ, in terms of patient-important outcomes, from the clinical condition we think of as diabetes. That is, statins might be raising blood sugars in a way that is harmless. While this is possible, it’s interesting that when a drug class raises blood sugar, people are willing to argue it might be harmless, but when a drug class lowers blood sugar, there’s a tendency (at least on the manufacturer’s part) to argue that blood sugar control is an excellent surrogate for clinical outcomes. The author of the letter suggests an analysis that might have been done to sort out this issue, which the authors of the meta-analysis correctly point out would not have answered the question.
There were a few other replies to the article, which I have not detailed. Overall, though, this is a fairly typical picture of what happens when someone publishes a trial that conflicts with conventional beliefs, such as “statins are good.” This occurs even when the conflict is quite minor; here, the meta-analysis merely shows a small increase in diabetes risk that would be heavily outweighed by cardiovascular benefit in anyone appropriately treated with a statin.
There is no guarantee that the meta-analysis by Sattar et al. is correct about statins and diabetes, but none of the letters published by the Lancet raises a sensible reason to think the post-analysis state of knowledge should change: it is now far more likely than not that statins cause a small increase in diabetes risk. Our response to a meta-analysis like this should be to congratulate the authors on a job well done, while recognizing the potential for error and chance to disrupt the conclusions. It should not be to search high and low for far-fetched flaws that would let us discard the inconvenient likelihood that a new statin side effect has been detected.