Before I ramble on about this, if you want the short version, check out this good summary by Dr. Michael Evans in the Globe and Mail.
I received a fax from my head office imploring me to call all of my patients on Avandia to discuss recent concerns raised in the lay press about this drug. I decided that before I called them all and freaked the hell out of them, I should look into the original article.
The article was published June 14 in the New England Journal of Medicine. The first thing that set me off was that the media coverage emerged May 22, meaning the information was released to the public before health professionals even had a chance to access it and critically appraise it. Now that I have access to it, let us discuss the problems with this study and why you should not stop taking your Avandia.
First of all, a prelude to the study. Avandia is also known as rosiglitazone, a thiazolidinedione, one of a group of drugs that increase the sensitivity of the body's cells to insulin, thereby improving glucose uptake and reducing the signs and symptoms of diabetes and their ensuing consequences. When Avandia first came onto the market, it was recognized that it slightly increased cardiovascular risk, specifically in patients with congestive heart failure (CHF), a condition in which the heart cannot efficiently pump blood, often leading to fluid buildup and the consequences of an insufficient blood supply to the organs. Accordingly, Health Canada specifically states that Avandia should NOT be used in patients with severe CHF.
Furthermore, Avandia is not approved for use in combination with certain other diabetes medications, namely sulfonylureas (glyburide and gliclazide) and insulin.
So this is the first problem with the study. Since it is a meta-analysis, it pools the results of studies already performed, so the populations used in those studies are known and tabulated. In 17 of the 42 trials included, the treatment group was receiving a rosiglitazone/other drug combination not approved for use in Canada.
Now this is where it may get boring for some of you, but as God is my witness, I love this stuff.
One of the key features of a quality meta-analysis is a comprehensive literature search. If you do not include all the possible studies on the subject, you run the risk of your results reflecting publication bias. This is one place where this paper failed. The authors did not search the major electronic databases or the Cochrane Collaboration, two very important sources of quality medical literature. Furthermore, they did not scour the reference lists of the included studies to find other relevant trials that a literature search might miss. The real problem is that the paper relies almost entirely on studies done by the manufacturer, GlaxoSmithKline, all garnered from its website. In total, 26 of the 42 studies used were never published in the peer-reviewed medical literature. This is a huge problem, because it calls into question the quality of the studies and therefore the validity of their results. Only two large, well-done studies were used in the entire meta-analysis.
One huge problem I have with their selection of studies is that they provide no measure of agreement among the authors as to which papers should be included and which should not. This is mathematically expressed as a kappa statistic, something conspicuously absent from this paper. They also performed no test for publication bias; that is, they never assessed whether the studies they included missed a big chunk of the available literature on the subject and therefore skewed the eventual conclusion.
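For the curious, here is a minimal sketch in Python of how a kappa statistic works. The include/exclude decisions below are invented purely for illustration; they are not anyone's actual calls on these 42 trials.

```python
# Cohen's kappa for two reviewers' include/exclude decisions.
# The decision lists below are made up for illustration only; they are
# not the reviewers' actual calls in the Avandia meta-analysis.

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters beyond what chance alone would predict."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed agreement: proportion of studies where both raters agree.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement: what two raters guessing at these base rates
    # would agree on by chance alone.
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )

    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical include/exclude calls on ten candidate trials.
reviewer_1 = ["in", "in", "out", "in", "out", "in", "in", "out", "in", "in"]
reviewer_2 = ["in", "out", "out", "in", "out", "in", "in", "out", "in", "out"]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")
# kappa near 1 means strong agreement; near 0 means agreement no better than chance.
```

A paper that reports a kappa like this at least lets the reader judge how reproducible the study-selection step was; its absence means we simply have to take the authors' word for it.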
Next, one must consider the test of heterogeneity. This is a test that measures the variation in results between the different studies. Variation will always exist, but the test of heterogeneity determines whether that variation is likely due to chance, or whether the studies are so varied in their results that we must question whether they can be combined into a pooled result. A HUGE problem here is that trials in which the subjects had no cardiovascular events were not included in the analyses. Consider that for a minute. You are trying to determine whether the medical literature supports the hypothesis that a certain drug causes cardiovascular events (heart attacks, strokes, etc.). You want to see whether the studies you have included vary so much that they cannot be combined. Unfortunately, you exclude every study in which patients did NOT have cardiovascular events. So of course your results will show a relative level of homogeneity, because you removed all the studies that stray from your hypothesis. Hmm.
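If you want to see the machinery, here is a rough Python sketch of a heterogeneity test (Cochran's Q and the related I-squared). The per-study log odds ratios and standard errors are invented for illustration; they are not the Avandia data.

```python
# Cochran's Q and I^2: a rough sketch of how a test of heterogeneity works.
# The per-study log odds ratios and standard errors below are invented
# for illustration; they are NOT the Avandia data.

def heterogeneity(log_ors, std_errs):
    weights = [1 / se**2 for se in std_errs]          # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

    # Q: weighted sum of squared deviations of each study from the pooled estimate.
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
    df = len(log_ors) - 1

    # I^2: rough share of total variation due to between-study differences
    # rather than chance (floored at zero).
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, df, i_squared

log_ors = [0.60, -0.10, 0.55, -0.05, 0.45]   # hypothetical study effects (log OR)
std_errs = [0.20, 0.25, 0.30, 0.22, 0.28]    # hypothetical standard errors

q, df, i2 = heterogeneity(log_ors, std_errs)
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0%}")
# A Q much larger than its degrees of freedom (or a high I^2) suggests the studies
# disagree more than chance alone would explain. Excluding the zero-event trials
# is one way to make the remaining studies look artificially similar.
```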
When meta-analyses combine the results of their individual studies, they use either a fixed-effects model (FEM) or a random-effects model (REM). The FEM assumes that if all studies were infinitely large, the effect would be identical; for this to work, the test of heterogeneity needs to show very little variation. This study showed little variation, but not little enough. The REM, by contrast, assumes that the study results will be randomly distributed around the true value. The researchers in this case used the FEM, which biases away from the null hypothesis: that is, it pushes toward the notion that there is an effect. The REM is more appropriate in most cases, because it is more conservative and biases toward the null: that there is no effect.
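Here is a hedged sketch of that difference in practice, pooling the same invented numbers as above with a fixed-effect (inverse-variance) model and a DerSimonian-Laird random-effects model. This is a generic illustration of the two approaches, not the actual method or data of the paper.

```python
import math

# Fixed-effect vs. random-effects pooling, sketched with the same invented
# log odds ratios and standard errors as above (not the Avandia data).

def fixed_effect(log_ors, std_errs):
    w = [1 / se**2 for se in std_errs]
    pooled = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    return pooled, math.sqrt(1 / sum(w))

def random_effects(log_ors, std_errs):
    # DerSimonian-Laird: estimate the between-study variance tau^2 from Q,
    # then fold it into each study's weight, which widens the interval.
    w = [1 / se**2 for se in std_errs]
    pooled_fe = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1 / (se**2 + tau2) for se in std_errs]
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_ors)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star))

log_ors = [0.60, -0.10, 0.55, -0.05, 0.45]
std_errs = [0.20, 0.25, 0.30, 0.22, 0.28]

for name, (est, se) in [("fixed", fixed_effect(log_ors, std_errs)),
                        ("random", random_effects(log_ors, std_errs))]:
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"{name:6s}: OR = {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# The random-effects interval is wider (more conservative), so a borderline
# "significant" fixed-effect result can lose significance under random effects.
```

With these made-up numbers, the fixed-effect confidence interval stays above 1 while the random-effects interval crosses it, which is exactly the kind of borderline situation where the choice of model decides the headline.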
Finally, let us look at the results. This is where the real problem arises. The article listed at the beginning touches on the distorted reporting of these results in the media. The coverage was all about relative risk and said nothing about absolute risk. We shall see why that matters.
The odds ratio reported was 1.43, ranging anywhere from 1.03 to 1.98 with 95% confidence. So the truth lies somewhere between those receiving rosiglitazone being barely more likely and nearly twice as likely to have a heart attack as those not on rosiglitazone. However, if you look at the absolute risk increase, something entirely different emerges. It is incredibly small. First of all, the rate of heart attacks in both groups (treatment and control) rounds off to 0.6%. The most understandable value I can offer is the number needed to harm: 4,854. This means that 4,854 people would need to be treated with rosiglitazone rather than control for one additional heart attack to occur.
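To make the relative-versus-absolute point concrete, here is a tiny Python sketch of the arithmetic. The two event rates are illustrative, chosen only so that both round to roughly 0.6% and the number needed to harm lands near the 4,854 quoted above; they are not the paper's exact figures.

```python
# Absolute risk increase (ARI) and number needed to harm (NNH), using event
# rates chosen purely for illustration so that both round to ~0.6% and the
# NNH lands near the 4,854 quoted above. These are not the paper's exact figures.

risk_treated = 0.006000   # ~0.6% heart-attack rate on rosiglitazone (illustrative)
risk_control = 0.005794   # ~0.6% heart-attack rate on control (illustrative)

absolute_risk_increase = risk_treated - risk_control
number_needed_to_harm = 1 / absolute_risk_increase

print(f"absolute risk increase : {absolute_risk_increase:.4%}")   # about 0.02 of a percent
print(f"number needed to harm  : {number_needed_to_harm:.0f}")    # about 4854

# Relative measures such as the reported 1.43 odds ratio are computed within
# each trial and then pooled, so they can sound dramatic even when the
# absolute difference between crude rates is this small.
```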
The biggest problem I have with the above is that the study authors did not even report the total rate of heart attacks in each group. Of course, having done so would have made their fancy odds ratio seem very silly.
So, if your neurons are still active after all of this, take the following home with you: be careful what you read in the papers. Most health journalists are not trained in the intricacies of epidemiology. Did you understand any of what I just said? Unless you are a pharmacist, physician, or other highly trained healthcare professional, probably not. I spent a whole semester in university just learning how to analyze medical literature. Do you think the leading health journalists took that course?
Sure, information is a good thing to have, but be careful what you do with false and misleading information and, almost more importantly, incomplete information. Please, for the sake of your health, do NOT stop taking this drug without having a serious discussion of the risks and benefits with your pharmacist and your physician. The last person you should be taking medical advice from is a journalist.
PS: On a side note, I have no connection to any pharmaceutical manufacturer that would bias my views in any way. I am heavily supported by Reason, Science, and Rational Thorough Analysis. Although these companies are publicly traded, not many people out there seem to be buying. Oh, and for the record, I will not be calling my patients to unnecessarily worry them. If they ask me about it, I will give them the short version of the above.