by Michael D. Anestis, M.S.

I try to keep a calm stance towards the topics we cover on PBB. Obviously, Joye and I have our
own beliefs about things, but we try to keep our opinions out of our writing to the degree that
such a move is possible and to base all of our comments on empirical evidence. In doing this,
we try to keep emotion out of the picture and to therefore make it easier to facilitate civil
conversations amongst readers. That being said, occasionally a story takes hold in the media that
simply makes me incredibly angry. When the story is prompted by ignorance (e.g., a journalist
with no training in data analysis writing on a topic with which he or she is unfamiliar), I find it
relatively easy to temper that anger with understanding and to channel it into a
calm discussion of the facts, thereby debunking myths and errors. When the story is prompted
by a willful misrepresentation of the evidence and an open effort to distort reality through a
highly restricted discussion of data based purely upon highly flawed studies, on the other hand,
my response is a bit more harsh. All of this being said, today I would like to reflect on an article
posted on a number of websites, including that of the American Psychological Association, an
organization more than capable of looking at all of the data and accurately describing the facts
(thanks to PBB guest author Dr. James Coyne for alerting me to this and making my topic choice
for the day that much easier!).

This article is essentially an advertisement for the meta-analysis on psychodynamic therapy written by Jonathan Shedler and published in the American Psychologist that we discussed recently on PBB (click here for our coverage of the piece as well as free access to the journal article itself). Now, keep in mind as you read my article today that I have nothing against Dr. Shedler as a person or psychodynamic therapy as a general concept; however, I do have strong negative opinions about sloppy interpretations of data and misrepresentations of facts, and the article covered today is weighed down heavily by both.

Before reading my discussion of the article, you might find it useful to read it for yourself so that
you can form your own impressions first. Click here to read it on the APA website and here to
read the same text on Health Canal (you can leave comments on the article at Health Canal).

My approach today will be to quote the text of the article (in bold and italics) and then to reply
below the quote.

***
"Psychodynamic psychotherapy is effective for a wide range of mental health symptoms, including
depression, anxiety, panic and stress-related physical ailments, and the benefits of the therapy grow
after treatment has ended, according to new research published by the American Psychological
Association"

In just about any news article, the first sentence (and the headline) will be an attention grabber
and that's fine. The thing is, the claims put forth in this sentence are, in large part, based upon
horrifically flawed data. Unfortunately, the vast majority of readers of that article are unaware of
this fact and will never encounter those data. As such, their lasting impression of this issue is an
unfounded claim. When discussing science, this is a dangerous approach. When the APA is
publishing material on its site, such a non-scientific approach is highly unimpressive.

***

"The American public has been told that only newer, symptom-focused treatments like cognitive behavior therapy or medication have scientific support," said study author Jonathan Shedler, PhD, of the University of Colorado Denver School of Medicine. "The actual scientific evidence shows that psychodynamic therapy is highly effective. The benefits are at least as large as those of other psychotherapies, and they last."

As with the sentence discussed above, I understand that including quotes from the study's author is a useful tool, but here again, a conclusion is stated and a passing reference is made to data... but the reader is left to believe that these are high-quality data that produce irrefutable results. As we discussed in detail in our original coverage of the Shedler piece, this is so far from the truth that it would be difficult to overstate.

***

To reach these conclusions, Shedler reviewed eight meta-analyses comprising 160 studies of
psychodynamic therapy, plus nine meta-analyses of other psychological treatments and antidepressant
medications. Shedler focused on effect size, which measures the amount of change produced by each
treatment. An effect size of 0.80 is considered a large effect in psychological and medical research. One
major meta-analysis of psychodynamic therapy included 1,431 patients with a range of mental health
problems and found an effect size of 0.97 for overall symptom improvement (the therapy was typically
once per week and lasted less than a year). The effect size increased by 50 percent, to 1.51, when
patients were re-evaluated nine or more months after therapy ended. The effect size for the most widely
used antidepressant medications is a more modest 0.31. The findings are published in the February
issue of American Psychologist, the flagship journal of the American Psychological Association.

The eight meta-analyses, representing the best available scientific evidence on psychodynamic therapy, all showed substantial treatment benefits, according to Shedler. Effect sizes were impressive even for personality disorders - deeply ingrained maladaptive traits that are notoriously difficult to treat, he said. "The consistent trend toward larger effect sizes at follow-up suggests that psychodynamic psychotherapy sets in motion psychological processes that lead to ongoing change, even after therapy has ended," Shedler said. "In contrast, the benefits of other 'empirically supported' therapies tend to diminish over time for the most common conditions, like depression and generalized anxiety."

First off, the writer of this article should be applauded for including actual data here and
attempting to explain the meaning of effect size. Most mental health articles choose to skip such
information, which is a disservice to the reader. That being said, here is where the use of meta-
analysis caused some real problems. The numbers cited by the author sound highly compelling,
but the results reflect the poor data and low-quality studies included in Shedler's meta-analysis. If
anyone takes the time to look at the actual results of the studies included in Shedler's work,
they'll see a completely different picture and, again, I encourage you to read our earlier article on
this topic to see precisely what I mean here (I describe the actual studies themselves in detail). I
realize that my opinion on meta-analysis is not universally accepted, but it would be difficult to argue against the point that many of the studies included in Shedler's analysis were of low quality and that the results directly comparing empirically supported treatments to psychodynamic therapy actually contradicted his conclusions.

Moving beyond this point, Shedler referred to these studies as the "best available scientific evidence on psychodynamic therapy." That's fine... except that this evidence is of poor quality. If this is the best that is available (an arguable assertion), then what is available is not good enough.

The claims of follow-up results are also not supported by the evidence. That being said, click here to read a guest article written by John Ludgate, Ph.D., on relapse in cognitive behavioral therapy (CBT). That text will provide you with a more thorough understanding of what we know about this topic.

***

"Pharmaceutical companies and health insurance companies have a financial incentive to promote the view that mental suffering can be reduced to lists of symptoms, and that treatment means managing those symptoms and little else. For some specific psychiatric conditions, this makes sense," he added. "But more often, emotional suffering is woven into the fabric of the person's life and rooted in relationship patterns, inner contradictions and emotional blind spots. This is what psychodynamic therapy is designed to address."

This argument is one that consistently makes me irate. It does so for a number of reasons. First,
it belittles the importance of symptoms. Here's the thing: symptoms like panic attacks, suicidal
ideation, the inability to experience pleasure (anhedonia), hopelessness, non-suicidal self-injury,
binge eating, substance withdrawal, and antisocial behavior are actually quite important.
Helping a client to no longer experience those things is no small feat and nobody benefits when
we act as though this is not the case. Second, he implies that empirically supported treatments
such as cognitive behavioral therapy ignore everything except the symptoms listed in the DSM.
There is no evidence that CBT does this or that psychodynamic therapy does it any less. It is
simply a talking point, repeated by those who oppose the EST movement often enough that
people have come to believe that it is true. Psychology is a science and, as such, our conclusions
need to be founded upon evidence. When people simply make claims like this without any sort of support and we take them at their word, the entire foundation of the field collapses inward and those most in need of help - the millions of people suffering from mental illnesses - are harmed. As it turns out, EST researchers examine the impact of treatment on a vast array of outcomes unrelated to DSM symptoms, and a quick search through our articles on
these treatments will provide you with numerous examples of this.

Additionally, the phrasing used by Shedler lumps proponents of ESTs in with the pharmaceutical
and insurance industries, which are highly unpopular with most people. As such, scientists are
suddenly the "bad guys," pushing an agenda upon the people whereas psychodynamic therapists
are the anti-establishment offering freedom from oppressive interventions. The thing is, it's hard
to be more closely associated with the establishment than psychodynamic therapy, which is such
a popular conceptualization of mental illness and psychotherapy that it hard to find any media
representation that takes any other approach. This group had so much control over this field for
so long that the initial two versions of the DSM used their jargon and directly asserted that
mental illnesses were best thought of in those terms. Empirically supported treatments are not
motivated by profits (by the way, they take less time and cost less money than most
psychodynamic approaches), nor are they associated with unpopular industries. They represent
the belief system of scientists, who rigorously test their theories through systematic
investigations.

The bottom line is, whether or not "more often, emotional suffering is woven into the fabric of
the person's life and rooted in relationship patterns, inner contradictions, and emotional blind
spots," the treatment that makes those symptoms disappear and thereby improves the individual's
quality of life is the better choice. If a particular set of symptoms is treated and the client still
has unresolved problems that he or she wants to address, practitioners who utilize ESTs are more
than happy to address them. Nobody is kicked out of therapy or forced to ignore their own
problems - instead, rather than the old school psychoanalytic preference of charging hundreds of
dollars per session for multiple sessions per week over a period of several years, the newer,
evidence-based approaches specifically target the issues that prompted the individual to come in
for treatment and address them in an effective, time-limited manner. ESTs simply prioritize things like reducing suicide risk as quickly as possible, and I'm not certain I understand the counterargument to that priority.

***

Shedler also noted that existing research does not adequately capture the benefits that psychodynamic therapy aims to achieve. "It is easy to measure change in acute symptoms, harder to measure deeper personality changes. But it can be done."

This comment mystifies me and it's a great example of how our own biases can cause us to put forth arguments based upon contradictory points. Up until this point of the article, we have been
led to believe that the evidence supports psychodynamic therapy. Still, there is a substantial
research base left untouched (or misrepresented) in the Shedler analysis, so an answer is needed.
What answer is offered? Evidence does not capture psychodynamic therapy. So... first they say
look at all of this incredible supporting evidence and then they attempt to disarm their opponents
by pointing out that evidence in general is not useful. In other words, let me shine a light on bad
data that appears to support my case but simultaneously disavow data so that when people point
out all of the evidence against my conclusions, I can say that it only contradicts me because it is
not capable of seeing the truth.

This is the antithesis of science. You cannot simply shine a light on the results that (appear to)
support you and disregard the results that contradict your conclusions. Either science is good or
it isn't (it is, by the way)...pick one.

***

The research also suggests that when other psychotherapies are effective, it may be because they include unacknowledged psychodynamic elements. "When you look past therapy 'brand names' and look at what the effective therapists are actually doing, it turns out they are doing what psychodynamic therapists have always done - facilitating self-exploration, examining emotional blind spots, understanding relationship patterns." Four studies of therapy for depression used actual recordings of therapy sessions to study what therapists said and did that was effective or ineffective. The more the therapists acted like psychodynamic therapists, the better the outcome, Shedler said. "This was true regardless of the kind of therapy the therapists believed they were providing."

I discussed the flaws of this analysis in great depth in our original piece on this topic, so rather
than rehash them here, I'll again encourage you to read the original article. The studies upon
which Shedler's conclusion here is based are so flawed it is actually mind-boggling. He himself
said in the original paper that "qualitative analyses of the verbatim sessions transcripts suggest
that the poorer outcomes associated with cognitive interventions were due to implementation of
the cognitive treatment model in dogmatic, rigidly insensitive ways by certain of the therapists."

***

Before concluding today, I want to provide you with another link, brought to my attention through an email sent through the listserv of the Society for a Science of Clinical Psychology. In this article, published in the LA Times (click here to read it), the results of the Baker et al (2009) report on the use of science in clinical psychology are debated (click here for our coverage of the Baker et al report). This is a distinct but related issue, and I wanted to call attention to three quotes from the LA Times article to drive home my point for today.

Quotes number 1 and 2 are from Drew Westen, a vocal critic of the EST movement. He referred
to those who favor the use of ESTs as "largely people who not only don't practice themselves --
and therefore have no idea what would be relevant to practice -- but have a tremendous disdain
for people who do practice." He also said that "[Cognitive-behavior therapy] is deliberately
designed to ignore any relevant features of the personality of the individual."

Westen's first points, that EST supporters do not practice, that practicing is required in order to
understand therapy, and that EST researchers have disdain for practitioners, are, quite frankly,
falsehoods. Many researchers are active clinicians. In fact, I would bet heavily that the
proportion of researchers who practice is substantially higher than the proportion of clinicians
who conduct and read research. Additionally, whereas all researchers are trained in therapy as
part of graduate school and internship, not all clinicians are trained in research. If experience is required to understand one, then aren't non-scientific clinicians the only ones incapable of understanding the entire picture? Finally, upon what evidence does he base the claim that
researchers have disdain for clinicians and, along those lines, what evidence does he have that
clinicians have less disdain for researchers?

Westen's second point, that CBT was designed to ignore personality, is absurd. CBT is designed
to address aspects of mental illnesses that have been shown to be common across individuals in
order to maximize symptom relief; however, there is plenty of flexibility to work with the client
as an individual. Regardless, given that CBT and variants of CBT (e.g., dialectical behavior
therapy) have been shown to be effective in the treatment of personality disorders, it seems a bit
off base to say that personality is not addressed in CBT.

Quote number 3 is from Michael Lambert, who said "I don't care what psychotherapy the person
is getting. I care whether they're responding." He said this in an effort to point out that
proponents of ESTs care more about providing a particular treatment than they do about clients
responding to treatment. This, again, is absurd. The entire premise of the EST movement is that
certain treatments have been shown, on average, to produce better results for particular
diagnoses. These treatments are thus considered the best choice; however, nobody believes
everyone will respond in the same manner to the same treatment and, as such, assessment is
required in order to ensure that improvement is happening. Because EST proponents believe in
this so wholeheartedly, an enormous research base has developed enabling us to better
understand the impact of treatment. Opponents of ESTs, on the other hand, eschew assessments and, as such, have no idea of the degree to which their treatment choice is effective. So, ironically, in his attempt to criticize the EST movement, Lambert actually explained one of the primary reasons why ESTs are so important. He also sheds light on why the research base upon which
Shedler built his case is so flawed.

***

So...what's my overall point today? Quite simply, the point is that when we hear somebody say
something is a certain way, we need to always ask how they arrived at that conclusion and, when
possible, we need to examine the evidence ourselves. In a meta-analysis like the one conducted
by Shedler, it is easy for the authors to paint a picture that supports their point and to make very
compelling statements that sound intuitively profound. If we do not look at the evidence
underlying their conclusions, we are vulnerable to falling into traps. The claims made by people
like Shedler, Westen, and Lambert are not a malicious attempt to mislead the populace, but they
represent a sloppy, non-scientific approach to understanding mental health and psychotherapy.
The media stories covering such claims result in a form of deception that makes me remarkably
upset. If we simply listen to their words or the descriptions of journalists who write about them,
we will not see things accurately. When organizations like APA post this type of thing on their
website, they make the problem even worse.

Here on PBB, I likely say things readers disagree with on a daily basis. The thing is, I will
always provide you with the citations upon which my points are based and will openly discuss
the data with you. If you know of data I overlooked that calls my point into question, I
encourage you to mention it in the comment section so that we can all have a civil discussion
about these things. The next time I change my mind on an issue won't be the first, but this only
happens when I am made aware of evidence that makes my previous position a worse reflection
of reality than an alternative position. In the meantime, please, when you read about topics like
this, make sure that what the article says is actually supported by valid evidence.

  


  

***

by Michael D. Anestis, M.S.

As readers of PBB have likely come to realize over the past year, Joye and I believe it is
extremely important to fight against misinformation. Unfortunately, a lot of bad research gets
published - sometimes in really strong journals - and that research is often then publicized as
accurate and factual. On a number of occasions, we have covered this issue, writing articles
discussing the important flaws in certain research, particularly when that research has become a
popular talking point (click here, here, and here for examples of this). In 2008, Leichsenring and
Rabung published a meta-analysis in the highly influential Journal of the American Medical
Association (JAMA) in which they claimed to demonstrate that long-term psychotherapy -
defined as at least one year or 50 sessions of psychotherapy - is more effective than short-term
psychotherapy. This study became a popular piece of evidence for individuals who already
believed this to be the case, and was cited in a number of journal articles, including Shedler's
(2010) piece, which was the subject of one of the articles linked to above.

As you might guess from the opening of this article, it turns out that the Leichsenring and
Rabung (2008) article was full of substantial flaws that completely negate the conclusions they
drew. In a paper just published in Psychotherapy and Psychosomatics by Sunil Bhar, Brett
Thombs, Monica Pignotti, Marielle Bassel, Lisa Jewett, PBB guest contributor Jim Coyne, and
Aaron Beck, these flaws were discussed in great detail. I have had to sit on these results for
several months now as the paper awaited publication, so I am excited to finally have the
opportunity to write about them!

In their meta-analysis, Leichsenring and Rabung (2008) analyzed 8 studies comparing long-term
psychodynamic psychotherapy (LTPP) to a variety of other interventions for a number of
diagnoses and concluded that LTPP was "significantly superior to shorter-term methods of
psychotherapy with regard to overall outcome, target problems, and personality functioning" (p. 1563). If the data supported this claim, it would be a stunning reversal of clinical research
conducted over the past several decades. The data did not, however, do anything of the sort. To
explain my point, I'll briefly summarize each of the areas addressed by Bhar et al (2010).

The Wrong Analysis


The biggest issue with the Leichsenring and Rabung (2008) meta-analysis is that they ran the
wrong analysis, which created faulty results. Bhar and colleagues (2010) explained that the
authors, in calculating effect sizes (remember, effect sizes are a measure of how powerful a
finding is), used the wrong conversion formula. The formula they used is intended for converting between-group point-biserial correlations into standardized difference effect sizes, but the authors applied it to within-group effect sizes. Now, obviously this is a fairly obscure statistical
reference, but let me explain the consequences of this: even though no single study in the
analysis demonstrated an overall standardized mean difference greater than 1.45, the combined
effect size was calculated as 1.8. Additionally, because of this, they generated a between-group
effect size of 6.9, which means that 93% of the variance was explained. That is essentially
impossible. These miscalculations would be equivalent to earning a C on every exam you take
during a semester and then concluding that your average was a B+. What does this mean? It
means that they concluded that LTPP drastically outperformed control conditions when, in
reality, their estimates severely overstated the case.
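To give a concrete sense of why a between-group effect size of 6.9 is essentially impossible, here is a minimal sketch of the arithmetic, assuming the standard equal-groups point-biserial conversion (the specific formula at issue in the Bhar et al critique may differ in its details):

    import math

    def d_to_r_squared(d):
        # Proportion of outcome variance implied by a standardized
        # mean difference d, assuming two equal-sized groups
        # (point-biserial relation: r = d / sqrt(d^2 + 4)).
        return d ** 2 / (d ** 2 + 4)

    def r_to_d(r):
        # The between-group conversion from a point-biserial
        # correlation r to Cohen's d (equal groups assumed). Feeding
        # it within-group (pre-post) correlations, which tend to be
        # large, produces badly inflated values of d.
        return 2 * r / math.sqrt(1 - r ** 2)

    print(d_to_r_squared(0.80))  # ~0.14: a "large" effect explains ~14% of variance
    print(d_to_r_squared(1.45))  # ~0.34: even a d of 1.45, the ceiling across single studies
    print(d_to_r_squared(6.90))  # ~0.92: the reported pooled effect implies ~92-93%

Even the largest effect observed in any single study in the analysis implies roughly a third of the variance explained; a pooled figure above 90% should set off alarm bells on its own.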


 
 

Inappropriate Comparisons

There are actually several issues here, one of which we've discussed a number of times before.
First of all, the authors compared studies in which participants were being treated for a wide
variety of conditions and combined those results into one outcome. In other words, some of the
clients were being treated for anorexia nervosa, others for borderline personality disorder, some
for "neurosis," and others for a now defunct diagnosis: self-defeating personality disorder.
Asking whether one treatment is better than another for everything is a broad and essentially
useless question. There is an abundance of research indicating that particular treatments are
better than others on average for particular conditions. When you combine the results of
treatments for a number of diagnoses together, you are glossing over those results and essentially
combining apples and oranges and, perhaps not shockingly, coming up with non-significant
results.

In addition to comparing studies measuring the treatment of different diagnoses, Leichsenring and Rabung (2008) also combined completely different treatments into single groups. The general comparison in their study was LTPP versus short-term psychotherapy. Included in the short-term psychotherapy group were:

- Waitlist control condition (i.e., no treatment at all!)
- Nutritional counseling
- Standard psychiatric care
- Low contact routine treatment
- Treatment as usual in the community
- Referral to alcohol rehabilitation
- Provision of a therapist phone number

Looking over that list, do you think it represents a strong example of what typically occurs in empirically supported short-term psychotherapy?

There were only two examples in which LTPP was compared to an empirically supported
treatment. In one, LTPP was compared to dialectical behavior therapy (DBT) for borderline
personality disorder (BPD) and, in the other, LTPP was compared to family-based treatment for
anorexia nervosa. LTPP did not outperform either treatment. In other words, adding a huge
number of sessions and a large amount of time did not result in any added benefit (although it almost certainly cost the client substantially more money). The only time LTPP outperformed
another form of therapy in any of the trials, it was being compared to no therapy at all or an
unvalidated treatment. Those are hardly compelling results.

The final issue with the comparisons in the Leichsenring and Rabung (2008) study was that they were so severely underpowered. Leichsenring and Rabung (2008) believed that publication bias was not an issue because of non-significant correlations between effect size and sample size. Because only 8 studies were used, however, a significant correlation was nearly impossible to find and, as such, the absence of one is essentially meaningless. Analyses have indicated that, in order for a treatment study to be able to actually answer the questions it asks, a minimum of 50 participants need to be in each treatment group. In the Leichsenring and Rabung (2008) study, there were anywhere from 15 to 30 participants in each group.

The issue of power and publication bias is a tricky but important one. Think of it this way:
journals don't tend to publish results that are not statistically significant. Because a small sample
size requires a HUGE effect in order to be statistically significant, only these extreme examples
end up being published. As such, results become artificially inflated.
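To make the power problem concrete, here is a minimal sketch using the statsmodels Python library (the per-group sizes come from the figures above; the alpha of .05 and power of 80% are conventional defaults I am assuming, not values taken from the original studies):

    # Requires: pip install statsmodels
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Smallest effect size detectable with 80% power at alpha = .05
    # in a two-group comparison, at various per-group sample sizes.
    for n in (15, 30, 50):
        d = analysis.solve_power(effect_size=None, nobs1=n,
                                 alpha=0.05, power=0.80, ratio=1.0)
        print(f"n = {n} per group -> minimum detectable d = {d:.2f}")

With 15 participants per group, only effects around d = 1.0 or larger (well past the conventional "large" threshold of 0.80) have a reasonable chance of reaching significance, which is precisely how small published studies end up reporting inflated effects.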

Failure to Assess Bias

Bhar et al (2010) then shifted their focus to the lack of reasonable assessments of bias in the studies included in the Leichsenring and Rabung (2008) analysis. Bhar and colleagues concluded that few of the studies took appropriate safeguards to ensure that participants were properly randomized, that randomization sequencing was concealed from the people making assessments, that assessors were blind to condition post-treatment, that missing data were analyzed appropriately, and that the authors of the original studies actually included all of the relevant outcomes.

Additionally, there was great variety in the number and frequency of treatment sessions and the
presence of medication augmentation, making it very difficult to make valid comparisons. None
of the studies included in the Leichsenring and Rabung (2008) study properly assessed treatment
integrity, meaning we have no idea to what degree therapists actually administered the treatments
as they are designed.

Summary

Ultimately, what Bhar and colleagues (2010) found was that Leichsenring and Rabung (2008)
used too few studies, that those studies were methodologically weak, that diagnoses and
treatments were combined into groups that made no sense, that some short-term psychotherapies
actually did not involve any therapy, and perhaps worst of all, that the analyses they ran were
incorrect, leading to impossible results completely unrepresentative of reality.

A number of thoughts jump out at me when I think about this issue. First of all, how did the
Leichsenring and Rabung (2008) study get published in JAMA? Secondly, what can be done to
keep people from simply accepting the results of meta-analyses as though they are
representations of fact rather than studies containing at least as much bias and as many flaws as any single
study? The bottom line is, research simply does not support the claims that LTPP is more
effective for short-term psychotherapy (and those distinctions aren't very useful anyway). Meta-
analyses like this that make broad claims based upon weak studies and miscalculations are a real
problem unless readers are willing to go to the original studies and double check the claims being
made by authors. That, unfortunately, is not a realistic expectation.


 
 

 
    
   

   