In 2006, researchers first published results from a $35 million NIMH-funded study of antidepressants known as STAR*D, claiming it proved the effectiveness of second-generation antidepressants used alone and in combination with each other. The NIMH chimed in with press releases extolling “new strategies” that help depressed patients become symptom-free, and the findings became the basis for the American Psychiatric Association’s guidelines calling for the open-ended use of antidepressants in treating depression.
But, as Edward Pigott, a Maryland psychologist, reveals in several published papers and his blog, it was all a big lie. Pigott shows how the STAR*D authors, 10 of whom had financial ties to antidepressant makers, played unethical games with the data to make all of the antidepressants in the study look far more effective than they really were. For instance:
* The researchers changed the primary outcome measure from the Hamilton rating scale of depression (considered a gold standard in measuring depression) to a proprietary rating scale owned by the principal investigator, Dr. John Rush, a psychiatrist at the University of Texas (who, incidentally, was investigated by Senator Grassley for failing to disclose extensive conflicts of interest). They made the change even though the substitute scale had already been used in clinical treatment, tainting it as an objective research measure. Furthermore, in the published results, Rush and his co-authors never disclosed the switch, which skewed the results in favor of the drugs.
* They failed to count patients who had dropped out as treatment failures, thus further skewing results in favor of the drugs.
* Halfway through the study, they included patients with only mild depressive symptoms, patients who had initially been excluded for failing to meet the study's criteria for depression, again making the results look better than they were.
* They repeatedly rounded up percentages to make the antidepressants in the study look more effective than they really were. Yet they applied no such rounding to the percentages of negative side effects among patients taking these drugs.
After re-analyzing the data from STAR*D, Pigott found that, in contrast to STAR*D’s published findings, only 108 of its 4,041 patients (2.7 percent) went into remission in the acute phase of the study. And of those initial patients, only 38 percent obtained remission after being dosed with other medications in three later phases of the trial. In every phase, more patients dropped out than achieved remission, and the drop-out rate rose as the study progressed. As Pigott says in a paper published this month in the journal Ethical Human Psychology and Psychiatry, “this reality directly counters the study’s false claim that about 70 percent of those who did not withdraw from the study became symptom-free.”
Three years later, in a review article for the Journal of Clinical Investigation, Dr. Thomas Insel, who was director of the NIMH in 2006 and remains so today, essentially acknowledged the inadequacy of second-generation antidepressants in treating depression. To quote Insel:
In 2007, the third and fourth most heavily purchased medications in the United States were antipsychotics and antidepressants, respectively, with a combined market of $25 billion. Remarkably, despite the heavy use of these medications, we have no evidence that the morbidity or mortality of mental disorders has dropped substantially in the past decades.
Instead, as Pigott points out in his most recent paper, “the morbidity and chronicity of mental disorders appears to be increasing with a twofold to threefold increase between 1987 and 2007 in the number of Americans receiving disability payments for such disorders.” Robert Whitaker, of course, argues in his new book, Anatomy of an Epidemic, that it is the very overuse of so many psychoactive drugs with severe side effects that has led to this exponential increase in the number of Americans disabled by mental illness.
Whether or not you buy Whitaker’s hypothesis — and it certainly warrants further investigation — the fact is that the STAR*D study, like Paxil study 329, was flawed in so many ways that its published results should be retracted.
Most of the STAR*D authors, including John Rush, had financial ties to Forest Labs, the maker of Celexa, and to other antidepressant makers who stood to benefit from the study’s positive findings. Worse, as Pigott reveals, the very same NIMH officials tasked with overseeing this $35 million multi-center study were allowed to put their names on STAR*D papers published in the New England Journal of Medicine and the American Journal of Psychiatry, an egregious conflict of interest that should never have been permitted.
Interestingly, while Insel acknowledges the inadequacy of current antidepressants in his 2009 review, there is nothing on NIMH’s website that points to this more sober assessment, despite the existence of five meta-analyses that show “modest to no” advantage of antidepressants over placebo in clinical trials. Pigott concludes that it is hard to find any reason for this pro-drug bias “other than convention, ease to prescribe for physicians, and the success of pharmaceutical companies’ relentless marketing efforts.”