One answer: come up with low-placebo-response study designs, and patent them if possible. (And yes, it is possible. But we're getting ahead of the story.)
The placebo effect has always been a problem for drug companies, but it's especially a problem for low-efficacy drugs (psych meds, in particular). Eli Lilly provides an example. In a March 29, 2009 press release announcing the failure of Phase II trials of a new atypical antipsychotic known as LY2140023 monohydrate (an mGlu2/3 receptor agonist), Lilly said:
In Study HBBI, neither LY2140023 monohydrate, nor the comparator molecule olanzapine [Zyprexa], known to be more effective than placebo, separated from placebo. In this particular study, Lilly observed a greater-than-expected placebo response, which was approximately double that historically seen in schizophrenia clinical trials. [emphasis added]

Fast-forward to August 2012: Lilly throws in the towel on mGlu2/3. According to a report in Genetic Engineering & Biotechnology News, "Independent futility analysis concluded H8Y-MC-HBBN, the second of Lilly's two pivotal Phase III studies, was unlikely to be positive in its primary efficacy endpoint if enrolled to completion."
Lilly is not alone. Rexahn Pharmaceuticals, in November 2011, issued a press release about disappointing Phase IIb trials of a new antidepressant, Serdaxin, saying: "Results from the study did not demonstrate Serdaxin’s efficacy compared to placebo measured by the Montgomery-Asberg Depression Rating Scale (MADRS). All groups showed an approximate 14 point improvement in the protocol defined primary endpoint of MADRS."
In March 2012, AstraZeneca threw in the towel on an adjunctive antidepressant, TC-5214, after the drug failed to beat placebo in Phase III trials. A news account put the cost of the failure at half a billion dollars.
In December 2011, shares of BioSante Pharmaceutical Inc. slid 77% in a single session after the company's experimental gel for promoting libido in postmenopausal women failed to perform well against placebo in late-stage trials.
The drug companies say these failures are happening not because their drugs are ineffective, but because placebos have recently become more effective in clinical trials. (For evidence on increasing placebo effectiveness, see yesterday's post, where I showed a graph of placebo efficacy in antidepressant trials over a 20-year period.)
Some idea of the desperation felt by drug companies can be glimpsed in this slideshow (alternate link here) by Anastasia Ivanova of the Department of Biostatistics, UNC at Chapel Hill, which discusses tactics for mitigating high placebo response. The Final Solution? Something called the Sequential Parallel Comparison Design (SPCD).
SPCD is a cascading (multi-phase) protocol design. In the canonical two-phase version, you start with a larger-than-usual group of placebo subjects relative to non-placebo subjects. In phase one, you run the trial as usual, but at the end, placebo non-responders are randomized into a second phase of the study (which, like the first phase, uses a placebo control arm and a study arm). SPCD differs from the usual "placebo run-in" design in that it doesn't actually eliminate placebo responders from the overall study. Instead, it keeps their results, so that when the phase-two placebo group's data are added in, they effectively dilute the higher phase-one placebo results. The assumption, of course, is that placebo non-responders will be non-responsive to placebo in phase two after having been identified as non-responders in phase one. In industry argot, there will be carry-over of (non)effect from placebo phase one to placebo phase two.
This bit of chicanery (I don't know what else to call it) seems pointless until you do the math. The Ivanova slideshow explains it in some detail, but basically, if you optimize the ratio of placebo to study-arm subjects properly, you end up increasing the overall power of the study while keeping placebo response minimized. This translates to big bucks for pharma companies, who strive mightily to keep the cost of drug trials down by enrolling only as many subjects as might be needed to give the study the desired power. In other words, maximizing study power per enrollee is key. And SPCD does that.
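The dilution trick described above is easier to see in a toy simulation. The sketch below is my own illustration, not code from the Fava paper or the Ivanova slides, and every number in it (response rates, sample size, the 2:1 placebo allocation, the 50/50 pooling weight) is an invented assumption chosen only to show the mechanics: phase-two placebo non-responders, who by construction respond at a lower rate, pull the pooled placebo response down and widen the apparent drug-placebo gap.

```python
# Toy simulation of the two-phase SPCD logic. All rates, sizes, and
# weights are invented for illustration; none come from a real trial.
import random

random.seed(42)

def simulate_spcd(n=600, placebo_share=2/3,
                  p_drug=0.50,        # assumed drug response rate (both phases)
                  p_placebo1=0.40,    # assumed phase-1 placebo response rate
                  p_placebo2=0.15):   # assumed lower placebo response rate among
                                      # pre-screened phase-1 placebo non-responders
    n_placebo = int(n * placebo_share)   # oversized placebo arm in phase 1
    n_drug = n - n_placebo

    # Phase 1: ordinary placebo-controlled comparison.
    drug_resp1 = sum(random.random() < p_drug for _ in range(n_drug))
    plac_resp1 = sum(random.random() < p_placebo1 for _ in range(n_placebo))

    # Phase 2: placebo NON-responders are re-randomized 1:1
    # into new placebo and drug arms.
    nonresponders = n_placebo - plac_resp1
    n2 = nonresponders // 2
    drug_resp2 = sum(random.random() < p_drug for _ in range(n2))
    plac_resp2 = sum(random.random() < p_placebo2 for _ in range(n2))

    # Treatment effect (drug minus placebo response rate) in each phase.
    d1 = drug_resp1 / n_drug - plac_resp1 / n_placebo
    d2 = drug_resp2 / n2 - plac_resp2 / n2

    # SPCD pools the two phases with a weight w; phase-2 data, drawn
    # from screened non-responders, dilute the phase-1 placebo response.
    w = 0.5
    return d1, d2, w * d1 + (1 - w) * d2

d1, d2, pooled = simulate_spcd()
print(f"phase 1 effect:     {d1:.3f}")
print(f"phase 2 effect:     {d2:.3f}")
print(f"pooled SPCD effect: {pooled:.3f}")
```

With these invented rates, the phase-2 effect is typically larger than the phase-1 effect (the placebo arm has been filtered), so the pooled estimate sits above what phase 1 alone would show, which is exactly the power gain the design is sold on.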
SPCD was first introduced in the literature in a paper by Fava et al., Psychother Psychosom. 2003 May-Jun;72(3):115-27, with the interesting title "The problem of the placebo response in clinical trials for psychiatric disorders: culprits, possible remedies, and a novel study design approach." The title is interesting in that it paints placebo response as an evil (complete with culprits). In this paper, Maurizio Fava and his colleagues point to possible causes of increasing placebo response that have been considered by others ("diagnostic misclassification, issues concerning inclusion/exclusion criteria, outcome measures' lack of sensitivity to change, measurement errors, poor quality of data entry and verification, waxing and waning of the natural course of illness, regression toward the mean phenomenon, patient and clinician expectations about the trial, study design issues, non-specific therapeutic effects, and high attrition"), glossing over the most obvious possibility: that paid research subjects (for-hire "volunteers"), many of them desperate to obtain free medical care, are only too willing to tell researchers whatever they want to hear about whatever useless palliative is given them. But then Fava and his coauthors make the baffling statement: "Thus far, there has been no attempt to develop new study designs aimed at reducing the placebo effect." They go on to present SPCD as a more or less revolutionary advance in the quest to squelch the placebo effect.
Up until this point, I don't think there had ever been any discussion, in a scientific paper, of a need to attack the placebo effect as something bothersome, something that interferes with scientific progress, something to be guarded against as vigilantly as Swine Flu. The whole idea that the placebo effect is getting in the way of producing meaningful results is repugnant, I think, to anyone with scientific training.
What's even more repugnant, however, is that Fava's group didn't stop with a mere paper in Psychotherapy and Psychosomatics. They went on to apply for, and obtain, U.S. patents on SPCD (on behalf of The General Hospital Corporation of Boston). The relevant U.S. patent numbers are 7,647,235; 7,840,419; 7,983,936; 8,145,504; 8,145,505; and 8,219,41, the most recent of which was granted in July 2012. You can look them up on Google Patents.
The patents begin with the statement: "A method and system for performing a clinical trial having a reduced placebo effect is disclosed." Incredibly, the whole point of the invention is to mitigate (if not actually defeat) the placebo effect. I don't know if anybody else sees this as disturbing. To me it's repulsive.
If you're interested in licensing the patents, RCT Logic will be happy to talk to you about it. Download their white paper and slides. Or just visit the website.
Have antidepressants and other drugs now become so miserably ineffective, so hopelessly useless in clinical trials, that we need to redesign our scientific protocols in such a way as to defeat placebo effect? Are we now to view placebo effect as something that needs to be made to go away by protocol-fudging? If so, it puts us in a new scientific era indeed.
But that's where we are, apparently. Welcome to the new world of wonder drugs. And pass the Tic-Tacs.