Editor's Note
Tilting at Windmills


February 27, 2008
By: Kevin Drum

MORE ON ANTIDEPRESSANTS....This is obviously not the biggest deal in the world, but yesterday I noticed that a new study suggesting that certain antidepressants weren't very effective had gotten big play in Britain and zero play in the U.S. Today I checked back, and the story had spread not only to more British sites, but also to news outlets in France, Germany, India, New Zealand, Canada, Thailand, and elsewhere. Mysteriously, though, the U.S. was still almost completely AWOL. There were short pieces on MSNBC and Fox, and longer pieces at the Washington Post and Time. That was pretty much it. It's really very strange that this story is being almost completely ignored here.

For what it's worth, the Time piece does a good job of explaining why the study is important:

Under the Freedom of Information Act, the researchers writing in PLoS Medicine were recently able to obtain [...] data, they believe, that lets them avoid a bias that often plagues reviews of previous research: the tendency for conclusive positive results to be published, sometimes more than once, and thus over-represented, while mediocre results can be ignored or even swept under the rug.

Drug companies claim the review is still flawed, however. One massive problem: there are many more recent studies than those surveyed in the article, which looked only at pre-approval trials conducted before 1999.

....The companies are correct in claiming there is far more data available on SSRI drugs now than there was 10 or 20 years ago. But Kirsch maintains that the results he and colleagues reviewed make up "the only data set we have that is not biased." He points out that currently, researchers are not compelled to produce all results to an independent body once the drugs have been approved; but until they are, they must hand over all data. For that reason, while the PLoS Medicine paper data may not be perfect, it may still be among the best we've got.

In other words, there might be a lot more data now, but it's hard to trust it because drug companies systematically suppress negative findings after they get FDA approval and no longer have to follow FDA rules. A few weeks ago the New York Times reported on a study that looked at precisely this question:

The new analysis, reviewing data from 74 trials involving 12 drugs, is the most thorough to date. And it documents a large difference: while 94 percent of the positive studies found their way into print, just 14 percent of those with disappointing or uncertain results did.

Now, for what it's worth, I find the results of the PLoS study a little hard to believe. Like a lot of commenters on last night's post, I've just heard too much anecdotal evidence from friends who have (eventually) been helped by various antidepressants. Maybe they were all kidding themselves, but that's a little hard to swallow.

But that aside, the PLoS study is still an important one. It's not the first one to question the efficacy of antidepressants, but are we already so jaded by this stuff that a confirming study isn't even worth reporting in the U.S.? If only for the insight it gives us into drug company testing practices, it seems like it's at least worth letting people know about.

Kevin Drum 12:07 AM Permalink | Trackbacks | Comments (48)


Call me a doubter too. I know too many people who were helped. SSRIs are lifesaving drugs for some.

Posted by: capitalistimperialistpig on February 27, 2008 at 12:14 AM | PERMALINK

Since yesterday, I've switched to beer. So far so good.

Posted by: jerry on February 27, 2008 at 12:15 AM | PERMALINK

This is a really hard question, even if you're thoroughly familiar with all the studies and all the data and all the ancillary research, which I certainly am not.

Measurement of "depression" is a very inexact thing, for one; instruments are necessarily almost purely subjective, and variation in measurements even for a single individual is material. There are different kinds of depression -- categorical distinctions are made -- so it's not like there's just one condition with different levels. Etiology is manifold and very poorly understood.

When it comes to drugs, the placebo effect is big. Given that, plus the wide variation in measurements, plus what probably is a similarly wide variation in the day-to-day actual condition of all but the most severely depressed patients, the studies have to be huge to show any difference, and the difference required to decide that a medication is efficacious is small.

Room for error? Jeez, hardly room for anything but.

And yet, as you observe, patients certainly do seem to feel better. They write books about it, FCOL. Placebo effect? Maybe. But SSRIs certainly do seem to do a better job than, say, MAOIs, or lithium, or cyclics.

I'm sure this story will spawn the usual "pharmacos are evil" threads -- not a poster among which has ever had material benefits from modern pharmaceuticals, I'm sure -- but as much fault as I find with certain practices of the pharma industry, this one for me is very far from clear.


Posted by: bleh on February 27, 2008 at 12:26 AM | PERMALINK

In the course of my life I had two bouts with clinical depression. The first time I was given Zoloft (an SSRI) and the second time Lexapro (another SSRI). I am happy to report that both of these SSRIs helped improve my condition greatly. While it may be true that SSRIs don't work for everybody, they worked fine for me. Clinical depression is a condition that nobody should have to suffer from; it's nothing less than horrible.

Posted by: Claimsman on February 27, 2008 at 12:33 AM | PERMALINK

1) A placebo doesn't feel fake, and by all physiological accounts, isn't fake. You really do get better. It's just that the chemicals in the pill aren't what's doing it. So just because you really did improve doesn't mean that the fancy chemicals are what did it.

2) Even apart from the placebo effect, there's simple regression to the mean -- sick people tend to get better. Again, that's not fake; you really do get better. Studies try to control for this, of course, but it makes the treatment/control study much trickier.

3) How many ambivalent users aren't posting here? How many aren't arguing that it's a placebo because they don't want to look like jerks? Or, worse, ruin the placebo effect itself?

That said, I myself still think they have a non-placebo effect. It's just that it's likely to be much smaller and more variable than the drug companies would have us believe.

Posted by: JD on February 27, 2008 at 12:42 AM | PERMALINK
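JD's second point, regression to the mean, is easy to see in a quick simulation. This is an illustrative sketch with made-up numbers, not anything from the studies under discussion: people are screened into a trial because they score high today, and their average score falls at retest even with no treatment at all.

```python
import random
import statistics

# Each person has a stable "true" severity plus day-to-day noise.
random.seed(0)
true_severity = [random.gauss(15, 3) for _ in range(100_000)]
day1 = [t + random.gauss(0, 4) for t in true_severity]
day2 = [t + random.gauss(0, 4) for t in true_severity]  # independent noise

# "Enroll" only the people who look severely depressed at screening.
enrolled = [i for i, s in enumerate(day1) if s >= 20]

mean_day1 = statistics.mean(day1[i] for i in enrolled)
mean_day2 = statistics.mean(day2[i] for i in enrolled)

print(f"enrolled mean at screening: {mean_day1:.1f}")
print(f"enrolled mean at retest:    {mean_day2:.1f}")  # lower, with no treatment
```

The retest average drops by several points purely because screening selected people whose noise happened to be high that day, which is exactly why trials need an untreated comparison group.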

Agreed with bleh, but I'd like to add that it's entirely possible--given the difficulty of diagnosing depression and the limited understanding of the biochemistry of mental conditions--that SSRIs could work well for some people, poorly or not at all for others, to add up to the statistical result we have.

Beyond that, I don't have a good answer for Kevin's question about the media's lack of coverage of this study, other than to say science and medical reporting in the U.S. is not the greatest. But it is a bit troubling that this study hasn't gotten a bit more attention in the U.S.

Posted by: brad on February 27, 2008 at 12:42 AM | PERMALINK

If these drugs were prescribed solely to those who truly needed chemical intervention, I think the study would have shown less dramatic results. Lots of folks probably would have done fine on another regimen of say exercise or rest and relaxation.

As well, who would give a depressed and potentially self-immolating person a sugar pill? I think that the study reinforces my belief that these drugs are over-prescribed in the first place, but effective for certain conditions. But we live in a world of commercial health care. And it sucks.

Posted by: Sparko on February 27, 2008 at 12:45 AM | PERMALINK

What’s wrong with the PLoS story is that it’s meta-analysis. Here’s the incomparable Bob Carroll of The Skeptic’s Dictionary on that subject:

A meta-analysis is a type of data analysis in which the results of several studies, none of which need find anything of statistical significance, are lumped together and analyzed as if they were the results of one large study.

Meta-analysis is a favorite data-mining trick of parapsychologists. Even when not used dishonestly like that, it’s an iffy thing.

Read the whole SkepDic link for more on meta-analysis. People who fully defend the PLoS story, but are intelligent and open-minded, may think again.

Posted by: SocraticGadfly on February 27, 2008 at 1:11 AM | PERMALINK

The side effects of the various SSRIs I've used are quite pronounced. You don't get blurred vision, odd tastes in your mouth, sexual dysfunction, heavy sweats, etc. from sugar pills.

Posted by: LMichael on February 27, 2008 at 1:13 AM | PERMALINK

Americans are completely clueless when it comes to the political aspects of pharmaceuticals. In the US, the only thing that's political about "the pill" is the fact that it facilitates "choice"; the idea that the burden of contraception should fall on women and, further, that it reasonably takes the form of systematic disruption of hormonal cycles seems normal -- instead of, say, completely f*ing outrageous. Quite a few European women understand this; American women don't get it at all. (It took HIV to wake Americans up to the fact that the pill wasn't necessarily the be-all, end-all of sexual liberation.) Imagine if a birth-control drug hit the market that prevented men from producing spermatozoa -- oh, and BTW, could really f*k with your head. That'd be a big hit... not. Viagra, on the other hand...

Along similar lines, until big pharma figured out that "branding" psychoactive drugs made them socially acceptable, the vast majority of antidepressants and the like were prescribed to women.

These biases run through all of pharma -- for example, in the normative assumption that it's safer to conduct clinical trials of drugs on men than on women.

The patriarchal assumptions that pervade pharmacology are really appalling.

I'm a man.

Posted by: Someone Orother on February 27, 2008 at 1:13 AM | PERMALINK

Q: Why isn't the Prozac story getting any attention in the US?

A: Because Bill Gates doesn't want you to read it.

I am totally serious. Gates is a major investor in Eli Lilly, the creator of Prozac.

Posted by: charlie don't surf on February 27, 2008 at 1:21 AM | PERMALINK

A follow-up to my first post. Some people might claim that meta-analysis is widely used in medicine. But, does that necessarily make it less iffy?

I still think that in medicine, as opposed to, say, physics, meta-analysis is iffy. First, you have medicine's rather lax 5 percent false-positive rate as the research standard, compared to the MUCH smaller rate in the natural sciences; you then compound that over multiple analyses being dumped together.

If astrophysicists were doing a meta-analysis of 20 different studies of gravitational lensing or something, I think it would be different.

For more on medicine's relatively lax false positive standards, read physicist Victor Stenger.

Posted by: SocraticGadfly on February 27, 2008 at 1:25 AM | PERMALINK

Uhh, Charlie, that story is six years old. We have no idea, from that, how much or how little Lilly stock Gates has today. (And the story can't even spell "Lilly" right on all occurrences.)

Posted by: SocraticGadfly on February 27, 2008 at 1:26 AM | PERMALINK

FYI, as a researcher I'd say that it's probably not that the drug companies are "suppressing" independent studies -- it's a more systemic bias in science: negative results often go unpublished because journals and researchers don't think they are very interesting, and it's much messier to try to conclusively prove that something doesn't happen than that it does. Although I'm not convinced that no corporate censoring occurs.

Posted by: Ruck on February 27, 2008 at 1:31 AM | PERMALINK

I haven't read the paper, but here's one angle I'd be interested in: A psychiatrist told me that psychiatrists won't allow their most depressed patients to participate in tests where they might get placebos instead of antidepressants -- too big a chance of suicide. So, studies tend to be done on less depressed patients.

Posted by: Steve Sailer on February 27, 2008 at 1:36 AM | PERMALINK


In principle, the absolute size of the significance criterion used for a meta-analysis shouldn't be relevant as long as it is taken into account. Physicists get to report p-values of 10^-gazillion because the tools they use to measure effects are very precise, but also because there is exponentially more data. The resolution of the study is limited by sample size, and as long as false discovery and multiple testing are properly taken into account, it should be fine. I also haven't read the particular PLoS study, so I can't comment on its particulars.

Posted by: Ruck on February 27, 2008 at 1:41 AM | PERMALINK

Socratic, I have been following Gates' investments in Big Pharma and I assure you he owns MORE of Eli Lilly now than he did in 2002. That's just the first link I dug up that directly associated Gates with the manufacturer of Prozac.

Gates has spent years quietly selling huge blocks of MSFT stock and investing the money in almost every major Pharmaceutical manufacturer. He is the largest single stockholder in Big Pharma. Apparently Big Pharma is the only monopoly more profitable than computer software.
What, you never heard of this? Of course not, Bill Gates doesn't want you to know. He only wants you to know about the Bill and Melinda Gates Foundation. Oh, you heard of that, I knew it.

Posted by: charlie don't surf on February 27, 2008 at 1:49 AM | PERMALINK

Socratic Gadfly--No, no, no. Meta-analyses are held in high regard because they attempt to get averages from many studies--if you check the methods on the PLoS study, for example, they note that most meta-analyses use only published data--their effort here focuses on all pre-approval data to, essentially, capture as much data as possible. It doesn't mean that this study is the end all and be all of SSRI studies--one of the weaknesses of medical reporting is that reporters tend to play up individual studies as definitive, and they shouldn't--but meta-analyses are extremely valuable.

To answer Steve Sailer's question (which, amazingly, is not laced with disturbing racial undertones): The study's authors explicitly note that the drugs do work well for severe cases. The authors call into question the effects for moderate and mild cases, where there are probably overprescription/overdiagnosis issues anyway.

Posted by: brad on February 27, 2008 at 2:02 AM | PERMALINK

Ruck, that's kind of the problem with medical studies, isn't it? The relative lack of precision.

That's part of why you should take any medical study with a fair grain of salt unless it has a pretty big population group.

Medical studies also COULD strive for tighter p-values. Part of it is, of course, a legitimate issue... this is medicine, a matter of health and even life and death; you don't want to risk squeezing out legitimate results.

That said, then, Carroll does, in a couple of pages on SkepDic, reference people who question standard p-values in clinical research. Here are the relevant comments, about halfway down the page.

And, from the main critic Carroll cited:

On the face of it, Fisher's standard of 0.05 suggests that the chances of a mere fluke being the real explanation for a given result is just 5 in 100 - plenty of protection against being fooled. But in 1963, a team of statisticians at the University of Michigan showed that the actual chances of being fooled could easily be 10 times higher. Because it fails to take into account plausibility, Fisher's test can see "significance" in results which are actually over 50 percent likely to be utter nonsense. …

In 1986, one scientist decided to take direct action against the failings of Fisher's methods. Professor Kenneth Rothman of the University of Massachusetts, editor of the well-respected American Journal of Public Health, told all researchers wanting to publish in the journal that he would no longer accept results based on P-values.
It was a simple move that had a dramatic effect: the teaching in America's leading public health schools was transformed, with statistics courses revised to train students in alternatives to P-values. But two years later, when Rothman stepped down from the editorship, his ban on P-values was dropped - and researchers went back to their old ways.
Once again, I stand behind what I said. P-values in medicine/health are too "loose," in many cases, to justify the faith put into the results they allow. (And, that "allow" wording is deliberate.)

Posted by: SocraticGadfly on February 27, 2008 at 2:02 AM | PERMALINK
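The Michigan result quoted above is just Bayes' rule, and a few lines of arithmetic reproduce its flavor. This is an illustrative sketch with assumed numbers (prior plausibility, 50% power), not figures from any of the linked studies:

```python
# p < 0.05 controls the false-alarm rate per test, not the chance that
# a "significant" result is real; that depends on how plausible the
# tested hypotheses were to begin with.
def false_discovery_rate(prior_true, power=0.5, alpha=0.05):
    """Fraction of 'significant' results that are actually flukes."""
    true_hits = prior_true * power          # real effects detected
    false_hits = (1 - prior_true) * alpha   # nulls crossing p < 0.05
    return false_hits / (true_hits + false_hits)

for prior in (0.5, 0.1, 0.01):
    fdr = false_discovery_rate(prior)
    print(f"prior plausibility {prior:>4}: {fdr:.0%} of significant results are noise")
```

With implausible hypotheses (a prior around 1 in 100, as in parapsychology), well over half of the p < 0.05 results are noise, which is the "over 50 percent likely to be utter nonsense" point in the quote.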


Charlie... no problem... it wouldn't surprise me. It could be related to his foundation's work in Africa and be altruistic. It could be related to his foundation's work in Africa and NOT be altruistic, for that matter.

Brad, please read my remarks to Ruck. I'm not saying meta-analysis in the medical field is wrong; I am saying it's no guarantee for patching up the problems with the loose p-values of individual studies.

As I e-mailed Kevin, if this were a meta-analysis of individual physicists studying gravitational lensing, it would be a whole different thing; but I think meta-analysis is seen, in health/medicine studies, as being more "powerful" than it actually is.

Posted by: SocraticGadfly on February 27, 2008 at 2:10 AM | PERMALINK

Socratic--fair enough. I think we're actually saying mostly the same thing... I would just say that I don't think people in the medical profession--or in health care more generally--see meta-analyses as definitive answers, but rather see them as a useful tool. I just think that when reporters write stories about meta-analyses, they write them as definitive either because 1) they don't really understand all of the issues involving the need for repetition in science and that no single study is definitive, or 2) the reporters/their editors want to report studies as definitive because it sounds better/draws in more readers. (When was the last time you saw a headline read: "Study shows mild statistical advantage at loose threshold p-value but requires further verification by other scientists in several larger studies"?)

You're right that p-values are pretty loose, and that they are that way to avoid missing potentially helpful information. Sort of a trade off with no good answer... I recently spent some time researching genetics and statistics for work--if you think p values are an issue in traditional medical studies, spend a few minutes looking into the statistical issues involving genetic studies.

Posted by: brad on February 27, 2008 at 2:21 AM | PERMALINK

I've taken Paxil at a couple points in my life (and admittedly, as SSRI's go, Paxil is quite a bit more hardcore, you actually go through withdrawal if you don't manage coming off of it real well). I can't say for certain if it helped my depression, per se, but it helped me get my mind in order, helped me get out of bed and focus and get things accomplished, and THAT was enough to help me with my depression and whatever self-destructive behaviors that I was going through. There's something really odd about this new analysis of the data.

Posted by: Inaudible Nonsense on February 27, 2008 at 3:03 AM | PERMALINK

I cannot tolerate them; I'm one of the 33% of folks who cannot take them (which drug companies don't like to mention).

But I think they probably do work for some people.

I took Zoloft for a time and it helped my sleeping, my main problem at that time. Unfortunately, I also got arthralgia. Took me a while to figure out it was from the Zoloft, since the dr. assured me that was not possible.

On the internet, I found another person with the exact same problem. I stopped the Zoloft but it took well over a year for the pain to go away. I didn't know if it would.

Since that time, I did try Zoloft again, at even smaller dosages (and they were small to begin with), and the pain kicked in faster. After a few tries I had to give up on it completely.

So I can say for sure there are bad side effects of these drugs, and since the sleep improved, some good effects, too. But it was scary that I wasn't able to walk stairs and such for a long time cuz of the damage caused by this medication.

BTW... placebo effect generally is not an issue for me. I feel sure both the improved sleep and the joint pain were real enough.

Posted by: clem on February 27, 2008 at 3:55 AM | PERMALINK

I too know from both personal experience and the experience of others that under the right circumstances and the right treatment strategy, SSRI's can be tremendously helpful. I don't know the various approaches these studies took, but it seems to me that effective use really combines an SSRI with therapy to work on dealing with the underlying causes of depression.

The big issue here is that despite the popular imagination of how these drugs work (aided by the drug companies' somewhat misleading advertising), SSRIs aren't a "happy pill" -- nor should we want them to be. SSRIs really just help interrupt a sort of feedback loop that the brain creates as it turns temporary depression into increasingly chronic depression. For some, this may alone be enough to return to a more balanced state, but for most, the meds principally help to establish enough emotional breathing room to deal with the underlying causes of the patient's depression in a rational and productive manner.

Ironically, I think you can use "The Surge" as a good analogy: The Surge created the political space necessary for reconciliation, but if the various Iraqi factions aren't working out their issues then things haven't gotten fundamentally better. In like manner, SSRIs create a Serotonin Surge that gives a conflicted brain the psychological space necessary for reconciliation, but the patient has to be actively participating in a productive process of self-examination in order to fundamentally improve their mental well-being.

Posted by: Goin' Private on This One on February 27, 2008 at 4:05 AM | PERMALINK

If you're going to give a long list of anecdotal 'evidence' more weight than a scientific study, you might as well believe in alien abduction, ESP, and telekinesis.

Posted by: joe on February 27, 2008 at 5:09 AM | PERMALINK

About a year ago, in an interview, the head of Britain's GlaxoSmithKline said "everyone knew" that drugs only work for about 30% of the people who take them. Not just anti-depressants, ALL drugs.

Posted by: Susie from Philly on February 27, 2008 at 6:57 AM | PERMALINK

"What’s wrong with the PLoS story is that it’s meta-analysis." SocraticGadfly. Actually you did attack meta-analysis, and you are incorrect. Meta-analysis, like any statistical analysis, can be done poorly or usefully. In addition, p-values can be used in useful and misleading ways.

In general, the vast majority of studies in peer-reviewed journals (like PLoS) report enough information in their methods to allow a statistically informed reader to evaluate whether the meta-analysis and p-value significance level were handled in a useful fashion. The only way to evaluate this is by reading the article and having a good knowledge of the statistical techniques used. Meta-analysis is an extremely useful technique that should not be impugned even if it were true that it was being poorly applied in this case. By the way, meta-analysis only uses data from the studies it examines, not the products of the statistical analyses of the individual studies. In other words, no p-values from the studies examined in the meta-analysis were used in the meta-analysis.

Now, the misuse of statistics is an important issue, although it tends to occur much more frequently when politics, not science, is in control of the study. But attacking statistics in and of itself is misguided and harmful in many ways (unless it is a careful critique of a particular method).

Posted by: Bill Hicks on February 27, 2008 at 8:50 AM | PERMALINK
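For what it's worth, the standard pooling step Bill Hicks describes is mechanically simple. Here is a minimal sketch of inverse-variance (fixed-effect) pooling with made-up numbers; note that no p-values from the individual studies enter the calculation, only their effect estimates and standard errors:

```python
import math

# (effect estimate, standard error) per study -- illustrative numbers only.
studies = [(1.2, 0.9), (2.5, 1.1), (0.4, 0.7), (1.9, 1.4)]

# More precise studies (smaller standard error) get more weight.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")
```

The pooled standard error comes out smaller than any single study's, which is the whole appeal of meta-analysis: the combined estimate is more precise than its parts, provided the studies included aren't a biased sample of all the studies actually run.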

I think that the study reinforces my belief that these drugs are over-prescribed in the first place, but effective for certain conditions.

They may or may not be over-prescribed for people who don't need them, but SSRIs and other drugs are certainly under-prescribed for those with actual depression:

Cross-sectional studies have shown that approximately 50% of the psychiatric morbidity present in individuals seeing primary care physicians goes unrecognized. [4] Longitudinal studies have shown that more psychiatric illness is diagnosed with long-term follow-up, but much of the diagnosed mental illness persists and is severe.[5] The Medical Outcomes Study reported that only 15.8% of subjects with current major depression receiving treatment from a general medical clinician were receiving antidepressant therapy....Over 50% of patients treated for depression in general practice stop treatment within 3 weeks.[13]


This is due to a variety of factors, several of which are: that those with depression may, ironically, be less likely to seek medical care in the first place (because they're too immobilized to seek help); that people aren't willing to admit to depression for fear of seeming weak; and that doctors may not recognize depression in a patient but may instead treat the coping mechanism that the patient uses to mask the disease (for example, a depressed man starts drinking to self-medicate and then is treated for alcoholism rather than depression).

Posted by: Stefan on February 27, 2008 at 10:19 AM | PERMALINK

Brad, we probably agree to some degree.

But, Bill (and perhaps Brad), I'll still stand by the attack on statistics re the "looseness" of p-values in medical and social science research vs. natural science research.

People just make way too damn many extrapolations from either health/medicine or social science studies that simply aren't warranted given the statistical looseness. Click the links I posted before blindly defending statistical methodology from criticism, please.

Again, Vic Stenger has an excellent takedown on all this. (Also note the natural sciences' p-value of 0.0001, versus medicine and the social sciences' 0.05. HUGE difference.)

P-values of the same looseness as in medicine/social sciences have been used to claim intercessory prayer actually works on sick people (halfway down the linked page), for example, or here (two-thirds down the linked page).

Targ's paper is not the only questionable study on the efficacy of prayer that has been published by medical journals. The editors and referees of these journals have done a great disservice to both science and society by allowing such highly flawed papers to be published. I have previously commented about the low statistical significance threshold of these journals (p-value of 0.05) and how it is inappropriate for extraordinary claims (Skeptical Briefs, March 2001). This policy has given a false scientific credibility to the assertion that prayer or other spiritual techniques work miracles, and several best selling books have appeared that exploit that theme. Telling people what they want to hear, these authors have made millions.

So, again I reiterate -- I stand behind all my statistics-related comments about this study, and medical studies in general. Bill, Brad, etc., there's room to be more skeptical.

Posted by: SocraticGadfly on February 27, 2008 at 11:01 AM | PERMALINK

Also, per a blogger, I came across a good statement on how many people misunderstand p-values in general:

First, the p value is often misinterpreted to mean the “probability for the result being due to chance”. In reality, the p-value makes no statement that a reported observation is real. “It only makes a statement about the expected frequency that the effect would result from chance when the effect is not real”.
Posted by: SocraticGadfly on February 27, 2008 at 11:07 AM | PERMALINK
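That definition can be checked by brute force. Below is a toy simulation (an assumed setup, not from any linked study): two groups drawn from the same distribution, i.e. no real effect at all, still produce "significant" differences about 5% of the time, which is all the p < 0.05 cutoff promises.

```python
import random

random.seed(1)

def one_null_study(n=50):
    """Compare two groups drawn from the SAME distribution; return |z|."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5          # known sd = 1 in both groups
    return abs(diff) / se

trials = 10_000
hits = sum(one_null_study() > 1.96 for _ in range(trials))  # two-sided 5% cutoff
print(f"'significant' null studies: {hits / trials:.1%}")   # roughly 5%
```

So a single p < 0.05 says nothing about whether this particular effect is real; it only says that effects this large turn up about one time in twenty when nothing is going on.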

And, re meta-analysis, I hope this explains more where I'm coming from, and my critique of undue faith in meta-analysis in the field of medicine vs., say, physics:

I’m not saying that the results of a meta-analysis are no stronger than the weakest study in its umbrella. I am saying that, with p-values as loose as they are in health/medicine (and the social sciences), no massive number of individual research studies included under one meta-analysis will make the meta-analysis' results anything more than a little bit stronger than the best individual study.

In other words, in medicine, and in social sciences, meta-analysis adds a very modest bump, nothing more. The problem is, most people believe it does much more than that when it doesn’t.

Posted by: SocraticGadfly on February 27, 2008 at 11:14 AM | PERMALINK

Some SSRIs work well for some people, not at all for others, and are harmful for another group.

Some [vitamins, herbs, diets] work great for some people, okay for others, not at all for some others, and are absolutely harmful over either the short or long term for some.

We (and by that I mean everyone else but me and a few others) seem to start with the assumption that we are all biochemically the same, when in fact we are very different.

I mean, hell, they are just figuring out that men and women might respond differently to all kinds of drugs? Like, duh!

If you haven't read Biochemical Individuality, by Roger Williams (he discovered Pantothenic Acid--Vitamin B5), and you're interested in this stuff, you should read it. He claims people could easily differ by a factor of 20 in the need/utilization of a particular nutrient.

Oh, and not all blogs work for all people either. Some are definitely downright harmful (see: RedState, etc.)

Posted by: Charles on February 27, 2008 at 11:32 AM | PERMALINK

Oh, and I took Wellbutrin for a couple of years back in the 90s. It totally changed my life. Considering everything else I had tried up to that point, the placebo effect, while always possible, seems an unlikely explanation. A friend of mine had just given me some of her extra, as it wasn't doing much for her, so my expectations were low. The next day, I stayed up until 2 in the morning cleaning and organizing my office, and my entire professional career took off, taking me from frustration to a relatively high level of success.

As one poster above put it, these drugs can let you break out of a bunch of self-defeating patterns, patterns that feed back negatively and pull you into self-destructive loops. With a little space, many people can find their way out of the bad behavioral habits. I was one of the lucky ones, and I've seen others.

Of course, I've also seen people who get on SSRIs just to cope with situations they'd be much happier leaving. That, to me, is the most destructive aspect of these drugs: they make lives that should be intolerable, tolerable.

SOMA, indeed.

Posted by: Charles on February 27, 2008 at 11:40 AM | PERMALINK

Charles, very true.

Re the PLoS study, I have several other issues with it.

First, per a comment by Zanthras to Kevin's initial post, two of the studied drugs are SNRIs, not SSRIs. The PLoS study doesn't even list SNRIs as a type of antidepressant. I quote:

Antidepressants include “tricyclics,” “monoamine oxidases,” and “selective serotonin reuptake inhibitors” (SSRIs).

Second, would you describe an improvement of more than halfway from baseline to the "significant" threshold as "marginal"? Again, I quote:

A previously published meta-analysis of the published and unpublished trials on SSRIs submitted to the FDA during licensing has indicated that these drugs have only a marginal clinical benefit. On average, the SSRIs improved the HRSD score of patients by 1.8 points more than the placebo, whereas NICE has defined a significant clinical benefit for antidepressants as a drug–placebo difference in the improvement of the HRSD score of 3 points.

(The word "marginal" is used more than once throughout the study.)

"Moderate" or even "modest" would be acceptable words. "Marginal" overstates the case against the drugs.

Next, given what I've already said about p values, I'm not sure how much weight I would put on a total of 5,100 people across the 35 trials grouped under the meta-analysis. I'm not sure how much significance I would find in one medical study with that many people, especially one conducted over a short time period.

That said, given the "lag" anti-Ds can have, I don't doubt the FDA is also remiss in some of its study criteria. Is two weeks too soon to allow a drug switch? In cases of severe depression, you may feel you have to try something else, which you do for the patient's sake, of course, but that should perhaps "ding" the study in some way.

That said, given how little we still know about brain chemistry, I'm not sure we can say today, with any degree of confidence, that anti-Ds are either effective or ineffective.

Posted by: SocraticGadfly on February 27, 2008 at 11:46 AM | PERMALINK


I agree that it would be great to have tighter p-values in medicine and in a lot of biological studies, but in most cases it is simply impractical. If you started to require very stringent criteria, only massive, expensive studies would ever get published, leading to both fewer and less ambitious studies. It's also true that a p-value is not as ideal as a robust estimate of, say, false discovery, but it is a fairly straightforward statistic that is easy to calculate and (usually) easy to interpret. Naturally, any statistic can be abused by researchers who do not adequately consider their study and the various issues associated with it, but a careful researcher can account for the issues you cite by doing things like calculating confidence intervals for test statistics, using empirical (instead of theoretical) null distributions, and a number of other straightforward, albeit unfortunately uncommon, statistical techniques. Focusing on the absolute size of a p-value, rather than the procedures for producing it, is a common mistake. But a good meta-analysis should be able to assess (and in most cases correct for) the problems that exist in the statistical methods of the constituent studies.
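One of the techniques mentioned above, an empirical null distribution, is simpler than it sounds: instead of assuming a theoretical distribution for your test statistic, you build one by shuffling the group labels. A generic permutation-test sketch (my own illustration, not anything from the PLoS paper):

```python
import random

def permutation_p_value(treated, control, n_perm=10000, seed=0):
    """Estimate a two-sided p-value for the difference in group means
    against an empirical null built by shuffling group labels.

    Each shuffle simulates a world where treatment assignment is random,
    giving a null distribution that reflects the actual data rather than
    a theoretical assumption.
    """
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(observed):
            extreme += 1
    # Add-one smoothing avoids reporting an impossible p-value of zero.
    return (extreme + 1) / (n_perm + 1)
```

Well-separated groups yield a small p-value; identical groups yield a p-value of 1, exactly as you'd want from a sanity check.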

The greater problem, I think, is the general inertia, especially in biology, toward ignoring the mathematical subtleties of data.

Posted by: Ruck on February 27, 2008 at 11:52 AM | PERMALINK

I have been thinking about this since yesterday, and I wonder if the weak results aren't from the short time spans over which improvement is measured, combined with the fact that the patients had milder versions of depression. Four to eight weeks, IIRC. While a person who is going to be helped is *starting* to feel better by then, the improvement continues over many months. So if you take milder depression, the amount of improvement over several weeks might not be all that dramatic, but still real and worth it.

I am glad that someone is looking at all this unpublished data. It can only be a good thing to hold pharma's feet to the fire. But yeah, Prozac helped me immensely, and while I don't expect anyone *else* to be impressed with my anecdote, it was far too dramatic to leave me with any doubts. Colors became bright again. Things started tasting good again. These weren't effects I even knew to expect, so I can't see how it was a placebo effect. I just hoped for relief from the emotional misery. I didn't know the whole world would get brighter.

Posted by: Emma Anne on February 27, 2008 at 1:50 PM | PERMALINK

Ruck, I'll definitely agree on the inertia. I think that, too, is part of the problem with medical studies. I'll also agree that the absolute size of a p-value isn't the be-all/end-all. But I don't think I'd take its value lightly, either.

As for the specific points about analysis and meta-analysis you mentioned, I don't know how much of that the PLoS authors did or didn't do.

As for the practicality, that's why I suggested the "dual crunching." Results that beat a p-value of 0.01 are gold standard; results that beat 0.05 but miss 0.01 could be good or could be inconclusive; anything worse gets thrown out.
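The "dual crunching" rule amounts to a simple three-way classification; a toy sketch of that proposal, using the cutoffs above (labels are my own, not anything standard):

```python
def dual_crunch(p_value):
    """Classify a result under the two-threshold scheme described above:
    p < 0.01 is gold standard, 0.01 <= p < 0.05 is inconclusive but
    possibly interesting, and anything weaker is discarded.
    """
    if p_value < 0.01:
        return "gold standard"
    if p_value < 0.05:
        return "inconclusive"
    return "discard"
```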

THAT said, my biz (the media), even supposedly subject-specific science writers/reporters/editors, is still too credulous about too much of this.

Posted by: SocraticGadfly on February 27, 2008 at 2:18 PM | PERMALINK

The study was flawed because it did two things: it did not grade the patients in the study by severity, and it only compared them to placebo. It even says that for more severe depression the medication perhaps did work.

Well, duh.

The studies they collected are like noting which patients didn't get an infection, and then treating them with antibiotics. Of course healthy people don't do much better with the treatment than without.

Admittedly, the problem of studies being published because of their results instead of their methodology points more to the need for more blinding in the process. Triple-blind?

Still, a study of 'none of the studies done in the last fifteen years' is somewhat disappointing. Maybe it means more testing should have been done - but maybe it means whoever did this study should have looked more closely at the data instead of mashing it together without controlling for any variables.

Posted by: Crissa on February 27, 2008 at 2:23 PM | PERMALINK

A good piece on this by a neuroscientist at HuffPo...


Posted by: RK1 on February 27, 2008 at 2:35 PM | PERMALINK

Let's summarize the criticisms of the British study so far.

1. Inherent problems with meta-analysis. Others are addressing this, and I have nothing to add.

2. Overprescription of anti-depressants. This problem would have a profound effect on the data being analyzed, by implicitly treating cases where the drug is improperly prescribed as cases where significant depression is occurring in the first place. If those patients were not experiencing real depression prior to medication, it would come as no surprise that they would not report changes consistent with amelioration of depression.

Query: Do statistical models exist that show the frequency of prescription to non-depressive patients? That data could then be used to at least eyeball the possible effect of inappropriate prescription on the British analysis.

3. Somebody pointed this flaw out yesterday, but it is worth repeating. The British review did not include all SSRIs, and did not limit itself only to SSRIs. In my own struggle with depression, I started out being prescribed tricyclics, which were the best thing around then. I have been given three different SSRIs, and now I take a regimen of Wellbutrin and Trazodone. Any study that would attempt to lump all of these radically different medications together and then test for efficacy would be highly dubious, in my opinion. I don't know what uncertainties are introduced by including some but not all of two different categories of drugs with different mechanics of operation.

4. Placebo effect. I will focus mostly on the placebo effect, because it seems to be tossed around quite carelessly. First, I have to correct LMichael, who seems to conclude that because he has experienced side effects from the medication he has received, the benefits can't be caused by a placebo effect.

That conclusion is wrong. It is easy to imagine a bad doctor experimenting with an anti-depressant and using a placebo (say, arsenic--he's a really bad doctor) as a control. Either the real medication or the placebo would then produce side effects, and possibly neither would have any effect on the depression being treated.

I am certainly not an expert on placebos, but I believe the basis of the (very real) effect is that the conscious mind knows of the treatment and reacts favorably in a way that affects the mind-body connection. So suppose an experimenter does a controlled study of a drug known to be effective, using an inactive placebo. If the experiment is done by giving the patients pills, then one would expect to see some placebo effect.

However, what happens if both the drug and the placebo are delivered by a very gentle mist while the patient is sleeping? If the patient knew nothing of the experiment (put aside ethical considerations for a moment), there shouldn't be any placebo effect, but the real drug should make a difference. Same result if a placebo and an effective medication were given to two groups while the patients were sedated for other reasons.

Here is the point of all this. I have taken at least six different anti-depressants over the last 30 years. In every single case, I was told that the drug wouldn't take effect for some period--sometimes 3 weeks, sometimes 6 weeks.

In each case, I first noticed some change in my sleep patterns. Instead of waking 20, 30 or even 40 times during a night, my sleep gradually becomes more normal. And the time for onset of beneficial signs has never been even close to the period I was told about. Sometimes it was longer, sometimes shorter.

I am simply very skeptical of a theory of placebo effect where the mind absorbs the idea that benefits will not happen for one or more months, stores that information unconsciously, and then starts allowing the physical complaints to ease. Is there evidence of the mind's unconscious clock and calendar?

Have any studies been done (probably not as problematic as true placebo studies) where patients divided into groups are given information about the onset of beneficial effects that is sometimes accurate, sometimes sooner than usual, and sometimes later than usual? That might begin to measure what I have to regard as a very pliant placebo effect.

Posted by: anoregonreader on February 27, 2008 at 3:10 PM | PERMALINK


I agree about credulity on the part of the media and all of that, but frankly I think the blame lies mostly with the scientists rather than the media. As a statistical geneticist, I think it is absolutely incumbent on researchers to present their data in the most plainly understandable way possible, and to take care of all the considerations that both colleagues and casual readers will miss. Reporters are supposed to be reporters, not experts in whatever field they happen to be assigned to cover (even narrowing the field to only "science" writing is way too broad for a single person to be able to find flaws in an arbitrary scientific study), and it's very easy for a scientist to simply tell a reporter whatever the reporter is trying to get at. I think it is the responsibility of a scientist describing his research to do so clearly and accurately, just as a senator should describe a bill or an eyewitness should describe a car accident.

Case in point, a colleague at my institution studies neural development and a few years back found that variation in a particular gene was correlated with both brain size and ancestry (read: race). A reporter came to interview him for a major paper, and although the science was valid my colleague did not clearly explain the relative significance of a single gene for neural function, or properly explain how a result like that should be interpreted. Sure enough, the story eventually splashed across the front page of a major newspaper painting him as a eugenicist and suggesting that it conclusively proved that one race was smarter than another. The reporter probably acted in bad faith, but he certainly couldn't write anything that my colleague didn't tell him. I think that that was a particularly egregious instance of a scientist failing to explain his study properly to a layperson, and lowering the public and scientific discourse considerably for it.

Posted by: Ruck on February 27, 2008 at 4:40 PM | PERMALINK

Corporations control the media in the US; maybe they sympathize with and support each other, since AFAIK few big pharmas actually own media outlets.

Posted by: Neil B. on February 27, 2008 at 6:08 PM | PERMALINK

The issue is less whether SSRIs can influence your serotonin levels, which is the theory behind their use as anti-depressants, than the side effects of such a distinctly unnatural approach to influencing those levels.

Administration of natural substances like tryptophan ought to be the first, safest approach: measure serotonin levels and the spirits of your patient, determine whether there is some impact, and only then decide whether more powerful measures may be necessary. But beyond that, no one really looks at the elephant in the room: why your serotonin levels may be off kilter in the first place, and whether that is itself related to your mood, emotion, mindset, and life conditions rather than being the actual determinant. If serotonin levels are a side effect, not a primary determinant, then pumping additional serotonin into the body, or slowing the deterioration of serotonin, would probably have side effects; in general they are predictable, but specifically and individually they could be pretty unpredictable and variable.

Posted by: Jimm on February 27, 2008 at 6:54 PM | PERMALINK

I've been on Lexapro for the better part of the past year. My first observation was that it wasn't so much an anti-depressant as an anti-obsessive. It made sense: if you don't obsess about the things that depress you, you'll likely be less depressed. If it's easier to let go of depressing facts rather than see them clearly, it's easier not to be so sad about them.

Doesn't change the facts. And the facts are that since Reagan and the Republican machine took over 30 years ago, there've been so many facts that are depressing. It's nice to be able to take a drug that reduces the ability to focus on those facts. It doesn't change them.

So, my conclusion is that Lexapro just loosens my ability to maintain focus on the reality of depressing facts. Pleasant on a superficial level, but the better anti-depressant would be to eliminate the causes of depression in our culture.

Posted by: NealB on February 27, 2008 at 7:36 PM | PERMALINK

I don't know about the explanation for American newspapers, but TV network news is so dependent on pharmaceutical advertising that I could imagine strong disincentives to publicize this study. A study which indicated they had a harmful effect would have to be publicized, but I think the networks probably figure they can get away with not covering a story that suggests the effects are benign.

(Click on my name for a link to a recent study of the proportion of drug ads in network evening news shows.)

Posted by: Patience on February 27, 2008 at 10:03 PM | PERMALINK

Just a few comments on the meta-analysis:

First, the NEJM review showed that most antidepressants DID work (i.e. were statistically significant over placebo), but that their effect sizes were exaggerated to varying degrees depending on the drug in question. The average effect size was still 0.31, and for some medications was even higher (e.g. Effexor 0.40, Paxil 0.41). Furthermore, the FDA studies for Celexa, Paxil and Effexor had NO negative trials at all, and Prozac only had one negative trial that had 42 people compared to the positive trials that had 1,100 people. So, it appears that these medications, according to that more comprehensive review, ARE effective.

Second, this new study only includes the FDA studies performed BEFORE the drugs were FDA-approved, and thus does NOT include the dozens and dozens of newer trials that have shown efficacy. The pre-approval studies are typically skewed toward showing ANY efficacy without side effects in order to get approved, and then better studies are performed to clarify dose ranges and duration of treatment. The authors of this study ignored all subsequent trials because they felt those would be biased, since they were sponsored by the drug industry and their negative results could be quietly hidden away. Although this is a legitimate concern, it seems equally suspicious to categorically dismiss an entire set of data without examining it.

Third, the FDA studies typically lasted only 4-6 weeks, and prospective longitudinal studies carried out afterwards showed that most antidepressant efficacy occurs after this time period, and so it is unsurprising that there was a small statistical difference between placebo and medications. Furthermore, other studies have shown that relapse rates are significantly higher in follow-up for those who were taking placebo compared to those on antidepressants.

Fourth, it did not even include all SSRIs, omitting Citalopram, Escitalopram, and Sertraline. Thus, its data is incomplete. Also, Effexor is NOT an SSRI, but an SNRI.

Fifth, there is a significant placebo effect, typically 30-40% efficacy, in MOST drug studies, largely because people who are enrolled in such studies get a great deal of attention from physicians, nurses, pharmacologists, social workers, and anyone else involved in the study. The question is what happens once this attention is removed, and studies have shown that the placebo effect typically wears off, whereas the pharmacological effects remain to a large degree.

Sixth, the FDA studies excluded individuals who were SUICIDAL. That makes generalizability difficult in any case, but the studies are clearly limited due to the fact that they excluded a priori those MOST in need of these medications.

All in all, an unimpressive study by authors whose past work clearly biases them against medications whose conclusions are not entirely novel. For minor depression, medications are unnecessary; for moderate depression, medications or psychotherapy are helpful; and for severe depression, medications and psychotherapy are needed.

A final thought: the efficacy of psychotherapy is often compared to that of medications. CBT, for example, has been found to be as effective as medications, especially for moderate depression. So, if medications don't work at all, then psychotherapy doesn't either, which leaves us with what? Exercise?!
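For readers unfamiliar with the effect sizes quoted above (0.31, 0.40, 0.41): these are standardized mean differences, i.e. the drug-placebo difference in means divided by a pooled standard deviation (Cohen's d). A minimal sketch of that calculation (my own illustration, assuming that convention; not code from either review):

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) with a pooled SD.

    Dividing the raw mean difference by the pooled standard deviation
    puts effects from different rating scales on one common footing,
    which is what lets a review average across heterogeneous trials.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # sample variance, n-1 denominator
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = math.sqrt(
        ((n_a - 1) * var(group_a) + (n_b - 1) * var(group_b))
        / (n_a + n_b - 2)
    )
    return (mean(group_a) - mean(group_b)) / pooled_sd
```

So an effect size of 0.31 means the drug group improved about a third of a standard deviation more than the placebo group, on average.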


Posted by: dguller on March 1, 2008 at 7:48 PM | PERMALINK

All of us have different DNA and different metabolisms. Some herbs and vitamins work better than others. The first thing that is needed is proper nutrition and a good physical exam. As the director of Novus Medical Detox, I often see patients who are on alcohol or opioids (central nervous system depressants) while also taking antidepressants. When they detox, they find they don't need the antidepressants.

This is good news, because a Swedish study showed that 52% of the 2006 suicides by women were by women on antidepressants, and because antidepressants work no better than placebos and are less effective than exercise in dealing with depression.

There is a prescription drug epidemic, and these drugs are leaders on the list of terrible abuses.

Steve Hayes

Posted by: steve hayes on March 2, 2008 at 9:50 AM | PERMALINK


