Preview — Psychodynamic Therapy by Richard F. Summers and Jacques P. Barber. Presenting a pragmatic, evidence-based approach to conducting psychodynamic therapy, this engaging guide is firmly grounded in contemporary clinical practice and research. The book reflects an openness to new influences on dynamic technique, such as cognitive-behavioral therapy and positive psychology.
It offers a fresh understanding of the most common problems for which patients seek help--depression, obsessionality, low self-esteem, fear of abandonment, panic, and trauma--and shows how to organize and deliver effective psychodynamic interventions. Extensive case material illustrates each stage of therapy, from engagement to termination. Special topics include ways to integrate individual treatment with psychopharmacology and with couple or family work.
See also Practicing Psychodynamic Therapy: A Casebook, edited by Summers and Barber, which features 12 in-depth cases that explicitly illustrate the approach in this book. Feb 04, Juletta Gilge rated it really liked it.
A clear and well-written model of psychodynamic therapy. Great for those interested in learning about the theory and its underlying beliefs. Jan 27, Sam Tan rated it amazing. Flexibility within fidelity.
Psychodynamic Psychotherapy Research: Evidence-Based Practice and Practice-Based Evidence (Editors: Levy, Raymond A., Ablon, J. Stuart, Kächele, Horst) continues the important work of the first book published by Humana Press (Handbook of Evidence-Based Psychodynamic.
I once again applaud your effort to bring empirical investigations into the conversation regarding psychodynamic therapy. That being said, a number of problematic claims made in your paper are repeated here. After the publication of that paper, a series of comments addressing concerns regarding its content were published in the American Psychologist, and you then added a reply.
In response to your reply, I made an effort to continue the conversation on Psychotherapy Brown Bag, as I and several others felt that the majority of those concerns went unaddressed in your published response. Unfortunately, you either did not receive my invitations or chose not to address them, so I'm repeating them here. I will then reply in a separate piece and that can continue as long as we have something to say. Although you and I are unlikely to be swayed, in doing this we would offer readers an opportunity to make informed opinions based upon a thoughtful discussion of the evidence.
Seems like a great source of common ground for us and an opportunity to elevate the discussion beyond the boundaries and speed of the peer review process. For readers unfamiliar with the post-publication discussion of the article in the American Psychologist, here are the citations:

Anestis, M. When it comes to evaluating psychodynamic therapy, the devil is in the details. American Psychologist, 66.
McKay, D. Methods and mechanisms in the efficacy of psychodynamic psychotherapy.
Thombs, B. Is there room for criticism of studies of psychodynamic psychotherapy?
Tryon, W. No ownership of common factors.

Michael D. Anestis, Ph.D.

At some point I asked if any of them would choose a therapist for themselves on the basis of that person's reputation for adhering strictly to a manual. They laughed awkwardly; none would.
Incredibly, they even agreed that such a request would be evidence of an obsessional defense--although they balked at the term. What, exactly, is inauthentic about this exchange regarding the relative virtues of empirically supported practice? If I understood Roger Brooke's vignette correctly, the directors' original enthusiasm for manualized treatments was ultimately revealed to be inauthentic. Did I get that right, Roger? Isn't this very similar to asking bio-psychiatrists what kind of treatment they would want if they had a psychotic breakdown - being decked with a Thorazine shot or..?
And they all say they don't want drugs but instead want someone they can talk with who will listen. Perhaps some context would be helpful in understanding the hypothetical reluctant manualized-treatment-seeking DCT client. The term manualized treatment has now been rendered a pejorative, because it implies a cookbook.
I imagine that the same awkward silence would ensue if a group of internists were asked whether they would be comfortable with manualized treatment for their physical ailments. It is beyond the scope of any single comment post, but the movement toward empirically supported treatments, or empirically supported practice, connotes a general adherence to the science of therapy, which includes a wide range of methods and procedures for presenting problems and is not necessarily tied exclusively to individual diagnoses as dictated by the DSM.
Some of the responses so far seem a little overwrought. His starting point was an article that found that clinicians, identifying themselves as CBT in orientation, were not following what the authors thought to be CBT procedures. They concluded that clinicians might need periodic retraining to ensure that they were following correct CBT procedures. BTW, not that he needs defending, but in this article Shedler is addressing the blind spots of researchers. I am sure he is equally capable of addressing the blind spots of clinicians who dismiss any thought that they may also have something to learn.
Regarding the first observation, it may or may not be true that all researchers, or all CBT researchers, would subscribe to such a sweeping and non-evidence-based conclusion (although Alan Kazdin did in a Time article a few years ago). They will adapt to the complex and multidimensional person who shows up. There is a more complicated and important issue embedded in this observation. Researchers, even psychodynamic researchers, are more or less stuck with the DSM Axis I conceptualization of human problems. They must operate within these constraints, and their outcome findings typically address whether symptoms have been reduced or eliminated within x amount of time.
There is nothing wrong with this; it is how normal science operates, and there is a lot that can be learned from isolating specific symptoms with specific interventions. But these findings rarely have any direct real-world applications. Personality and interpersonal resources trump symptoms in working with actual patients. Clinicians are doing something quite different from what the typical therapy researcher does. In stating the above, I also realize that there are many exceptions to these observations. There are researchers who do attempt to bridge the gap between research and practice.
In particular, he noted that many times psychodynamic clinicians thought they were addressing transference when they were not; and that CBT clinicians addressed the negative transference without acknowledging that they were doing this. I think that by acknowledging that clinicians and researchers work in such different worlds, we might be able to find more common ground, less triumphant grandstanding, less resentful withdrawal. We all want to show the fly the way out of the fly-bottle.
We all continually obscure this task with our emotional armaments. Shedler continues to provide a place for this vital ground between research and practice. Thanks Jonathan - I've always had a feeling about this, having worked in agencies that claim to be using evidence-based treatment. Your findings validate my feelings, which is always a good feeling. I think a lot of this discussion is missing the point. The dominance of CBT comes from RCTs where the treatment has been strongly manualized and delivered in a cookbook fashion to a group of subjects that suffered from highly specific conditions that real clinicians rarely will ever encounter.
The proponents of CBT then make the claim that the effectiveness of CBT in these highly artificial studies can be generalized to psychotherapy in real life. That is the lie that has been smartly marketed to gullible therapists, potential clients, administrators, agencies, etc. for several decades. It's time to openly discuss that there is no such connection and instead accept the fact that in real life you must meet your patients where they are.
I usually tell my tentative patients that my practice is not like a shoe store with one model and one size of shoes that I claim will fit all. I instead explain that therapy is a collaborative effort, where we together will find a model that works for us.
I frequently hear horror stories from new patients about their encounters with cookbook therapists who just "know" what is best for them, and how they eventually managed to escape. Of course, there are potential clients who expect and want cookbook therapy - and I am happy to refer them to such therapists. First off, the notion that the evidence base for evidence-based treatments hinges purely on highly controlled efficacy studies is misguided.
Effectiveness trials, in which comorbidity, therapist training and allegiance, and similar factors are allowed to vary, also exist. Yes, studies that prioritize internal validity to ensure that they're actually measuring what they intend to measure (a point that seems altogether too easily dismissed here) are limited in their generalizability, but they're also only one step in a larger process of testing the degree to which a treatment impacts a broad array of clients.
Second, there seems to be a strong pull to identify "researchers" as individuals who are out of touch with clinical practice, who never come into contact with clients, and who are not qualified to speak about life "in the trenches." First, people in this comment chain have absolutely no idea how many clinical hours these folks are logging each year, and it seems patently absurd to ascribe such characterizations to some of the more prominent clinical researchers out there.
Second, if that point were true - which it isn't - why would clinicians who do not publish quantitative research be qualified to speak about research? It seems to me that you can't have it both ways. If we dismiss the ability of individuals who emphasize research to speak about and understand clinical interactions, then we should similarly dismiss the ability of individuals who emphasize clinical practice to speak about research.
If we were to do this, however, this conversation would likely take a different path. All of this being said, I'd like to return to the point of my initial comment: many of the statements in this article echo comments from Dr. Shedler's initial article. Severe problems with many of those points, and with the data used to justify them, were raised in published replies in the American Psychologist, and then ignored.
This conversation would be elevated if Dr. Shedler would be willing to address those critiques in an actual back-and-forth, in which a thoughtful discussion of the evidence could take place and readers could make informed opinions. As it stands, this conversation still hasn't taken place. The invitation remains open. Michael Anestis, Ph.D. I want to comment on Dr. Anestis' comments regarding the need for more effectiveness trials.
I continue to maintain that more research is needed to maximize external validity. Such studies, however, have largely been limited to anxiety research, and not enough effectiveness research has been done over the years. More is needed. I absolutely agree that each one of these components is a step in a larger process of testing the degree to which a treatment impacts a broad array of clients.
Your comments are false and appear deliberately designed to mislead. You leave readers with the false impression that Dr. Shedler did not respond to critical commentaries published in American Psychologist about his article, "The Efficacy of Psychodynamic Therapy." Dr. Shedler's response to the commentaries was published in the Feb-March edition of American Psychologist, along with the commentaries themselves (Shedler, American Psychologist, 66). To suggest in a public forum that Dr.
Shedler "ignored" the comments is spreading falsehoods. Your insinuation that Dr. Shedler has been unwilling to engage in "actual back-and-forth" is likewise false. I have known Dr. Shedler as a faculty colleague at the University of Colorado School of Medicine. I have seen him lecture and engage in panel discussions and public debate with colleagues who hold opposing views. I know he is a scholar of the highest scientific integrity. I also have first-hand knowledge that there have been at least half a dozen efforts to organize, in appropriate forums, panel discussions or debates between Dr. Shedler and one or more nationally prominent scholars identified with the evidence-based therapy movement. In every single case, Dr. Shedler agreed, and it was the person on the other side who backed out. In the future, you may want to think twice before making false statements and ad hominem insinuations in a public forum.
I'm entirely aware of Dr. Shedler's reply in the American Psychologist. Indeed, in Dr. Shedler's reply, he used a portion of his text to detail a "master narrative" of CBT and to question the integrity, intentions, and scientific aptitude of myself and other authors of critiques of his piece, but he failed to reply to much of the content of those initial replies.
Perhaps a quick read-through of those comments will help you get up to date on that. I then published a follow-up to Dr. Shedler's reply. I also have doctoral students read the entire set of above-referenced studies and think critically about them in my treatment course, thereby encouraging individuals to develop informed opinions based upon multiple perspectives and full engagement with the data.
Such a process would be aided on a broad scale if a conversation could take place in a public forum, outside of comment sections that are read by only a small portion of readers and that are invisible in the headlines surfaced by Google searches, which can move quicker than the pace of peer review. Perhaps you or others do not consider a conversation with an active participant in the American Psychologist discussion an appropriate forum; however, I have yet to see any of these critiques addressed anywhere else either.
It is also unclear to me why a publicly accessible discussion with another professor (as opposed to a panel at a conference attended by a small audience, many of whom already read treatment research) would be anything short of appropriate. If others have opted not to engage in this discussion when Dr. Shedler has offered to do so, that is indeed a shame.
This is not one of those instances, however. This is an instance in which another scholar of high scientific integrity is offering an opportunity to have a thoughtful discussion. Nothing about my words involves false statements or ad hominem insinuations: I am directly stating that important critiques of the evidence cited here and in the paper have not been addressed (yes, a reply was published, but that reply did not address the critiques), and that invitations to engage have not been responded to. This point is not mine alone, but is reflected in the concerns of many other scholars.
This is not a critique of an individual's character or scholarly ability - it is an invitation to engage in a discussion and an acknowledgment of widespread frustration that this conversation has not yet taken place. I read your brownbag page. Your discussion of meta-analysis is based on a gross misunderstanding of what meta-analysis is. For example, your repeated claim that it is not appropriate to use "under-powered" studies in meta-analysis is a fallacy. In fact, that is actually one of the main reasons to use such methodology - aggregating "under-powered" studies adds power and helps discover findings that otherwise would have remained hidden!
Ideally, we would like to study each hypothesis in large-scale studies with thousands of subjects, but unfortunately the reality is that there is not enough funding for that. Just remember the cost of the TDCRP study! I fear you may have a gross misunderstanding of gross misunderstandings. You and I agree that the central purpose of meta-analysis is to aggregate findings from several studies to present a more unified depiction of the strength of an effect. Where we disagree is in its ability to overcome the many shortcomings inherent in underpowered studies (unclear what the quotes were for in your comment).
As a couple of examples, underpowered studies typically are not statistically capable of detecting significant effects of moderate or small size and, as such, are not published when they unsurprisingly produce null results (another problem altogether). Those that are published thus represent situations in which large effects were reported in an environment not conducive to statistical testing.
In meta-analysis, you can examine the Q statistic to look for heterogeneity in effect sizes, or the funnel plot (among other options) to check for publication bias or an inconsistent story; however, if a meta-analysis relies heavily upon underpowered studies, you'll have a ceiling effect on sample size that limits the ability of these and other metrics to detect the glaring problems.
This may seem like an improbable scenario; however, a number of studies have demonstrated that it is actually the norm. Another consideration here is that one major error (overly small sample size) is unlikely to exist in isolation from other major errors. For instance, such studies often fail to consider adherence to the treatment, utilize suboptimal randomization procedures, make no a priori decision about primary outcome variables (or combine them in incoherent ways), and arbitrarily decide when to stop data collection (or, worse yet, look at the data and stop when the results reflect the outcomes they want).
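Not part of the original thread, but the statistical argument above can be made concrete with a minimal simulation (all numbers illustrative, stdlib only): when studies are too small to detect a genuinely small effect, only the runs that happen to produce inflated effects reach significance, so averaging just the "published" (significant) studies badly overstates the true effect.

```python
import random
import statistics

random.seed(42)

TRUE_D = 0.2   # true standardized effect (small)
N = 15         # per-group sample size (underpowered for d = 0.2)
T_CRIT = 2.05  # approx. two-tailed .05 critical t for df = 28

def run_study():
    """Simulate one two-arm trial; return (observed Cohen's d, significant?)."""
    treat = [random.gauss(TRUE_D, 1) for _ in range(N)]
    ctrl = [random.gauss(0, 1) for _ in range(N)]
    sd = statistics.pstdev(treat + ctrl)  # rough pooled SD
    d = (statistics.mean(treat) - statistics.mean(ctrl)) / sd
    t = d * (N / 2) ** 0.5  # two-sample t with equal groups: t = d * sqrt(n/2)
    return d, abs(t) > T_CRIT

studies = [run_study() for _ in range(2000)]
published = [d for d, sig in studies if sig]  # naive publication filter

print(f"true effect:                 {TRUE_D}")
print(f"mean d across all studies:   {statistics.mean(d for d, _ in studies):.2f}")
print(f"mean d, significant only:    {statistics.mean(published):.2f}")
print(f"share reaching significance: {len(published) / len(studies):.0%}")
```

The average over all simulated studies recovers the true effect, while the average over only the significant ones is several times larger, which is the ceiling problem described above: a meta-analysis fed mostly small, significant studies aggregates the inflated survivors.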
Ultimately, my critique of poorly conducted meta-analyses extends far beyond studies of psychodynamic psychotherapy. Properly conducted meta-analyses are an invaluable tool; however, many are improperly done and few consumers are equipped to determine which is which. This discussion is in many ways tangential to this site and this article; however, I'd like to tie it all together.
You made a specific critique of my discussion of the evidence. I saw this critique, responded to it directly, and provided source documents to back up my point and allow readers to draw their own conclusions. In this sense, we engaged in a thoughtful discussion of the evidence. It would be nice if the topics discussed here and in Dr. Shedler's piece were subjected to the same type of dialogue.
Sir, as I said, your understanding of meta-analysis is flawed. Your response just demonstrates more of your limitations. I suggest you take a course in the subject matter or read a book that can introduce you to the proper way of conducting meta-analysis. However, your other comments here show that your purpose is not to use science to find out the facts but to prove your own self-serving agenda, so I don't think a course would help you. In fact, I don't think this debate can help you. As long as you and your homies are not looking for a balanced conversation about psychotherapy research, but just want an opportunity to tell everyone that they are wrong and you are right, this debate will continue to look like the Tea Party invading Washington.
It is very sad and I do apologize to new psychotherapists that have read these conversations. I'm trying desperately to have a discussion of the evidence. In my response about meta-analysis, I explained my position and provided links to source material.
That doesn't indicate I'm unwilling to discuss, it indicates that I'm interested in folks reading the same material as me and having a dialogue about it. You replied by simply stating I'm wrong without mentioning any specific errors, providing a rebuttal, or providing links to support your point. You then randomly decided that I'm beyond help in understanding this material, a point that I suspect even the majority of other folks unwilling to attach their names to their vitriol in this discussion thread would dismiss as petty and unreasonable.
If you want a discussion of the evidence, discuss the evidence. I've done so several times, always backing up my point with evidence and awaiting a reply. It's okay for us to disagree. The fact that we haven't reached a consensus doesn't indicate an unwillingness to discuss. On a side note, the problematic points in the Shedler article and in this article that went unaddressed in the Shedler reply still remain in the dark with no response, and the invitation to discuss those points remains enthusiastically on the table.
I'll mention this each time somebody responds to me about something else and the central point remains unaddressed. You have just made Dr. Nichols' point for him. You are "entirely aware" of Shedler's reply to the comments in American Psychologist, and yet you chose to make no mention of its existence in your original post.
Had Dr. Nichols not weighed in, readers would have been left with the mistaken impression that Shedler had declined to respond to published comments on his work. Nichols is right. That is misrepresentation and it is dishonorable.
It behooves a self-proclaimed "scholar of high scientific integrity" to tell the whole truth, not half-truths, especially in a public forum where not everyone has read the original literature. You disqualify yourself as a credible scholar by posting ad hominem innuendo and insinuation and then representing it as "scholarship." Actually, no. In my first comment in this thread, I not only mentioned it, but also included the full citation.
In this sense, I am the ONLY person who has mentioned the entire discussion and been willing to discuss the evidence. In case the first page of the comment thread is too far away for you to find it (as seems to be the case, given that you just ripped into me in your comment without the slightest hint of accuracy in your claims), I will paste the comment below.
On a side note, my scholarly work, which is easily available for you to find (note that my name and qualifications are listed in my posts), is what qualifies me as a credible scholar. My posts illustrate nothing close to what you implied. Lastly, innuendo is a word that involves somebody implying something without actually saying it. My point is entirely clear: a published back-and-forth has taken place in which the evidence was not truly addressed.
Misleading and dishonorable Submitted by Allison on October 19, - pm. But I think any true scientist would say that they welcome research from other traditions and encourage it! Eric M.
I am inviting that discussion to continue between professors so that individuals can read multiple perspectives and source materials and draw informed conclusions. If this seems threatening or problematic, you and I have very different world views. If you're keeping score, that's now two posts attacking my character, and no discussions of the evidence. There seems to be some misunderstanding about what is standardized about treatment manuals. It seems as though people are under the impression that standardization or manualization of treatments means that the content of all sessions with all patients must be the same.
This of course would be far from practical in real life. What manualization actually entails is standardization of treatment principles that are relevant to the presenting problem. Thus, if we use PTSD as an example for the purposes of discussion, the content of therapy with a survivor of sexual assault is different than the content of therapy with a combat veteran, although in both cases I administer exposure interventions (in the case of prolonged exposure) or cognitive restructuring interventions (in the case of cognitive processing therapy) in the same standardized sequence.
So although the transcripts of therapy sessions in these two cases would be very different and "customized" to each patient's unique experience, the overall structure and flow of the two treatments from session to session would be the same. Because those recipes are the ones that have been tweaked and refined over the course of decades of systematic inquiry and trial-and-error to result in the best outcomes for patients with the condition of interest. There have been a lot of posts by well-informed colleagues who have defended the empirically-based protocols. On the other side have been a variety of also presumably well-informed colleagues who have voiced opposition to these very same protocols.
What seems evident is that many of the people opposed assume that each protocol or, in the former parlance, manuals are essentially scripted. No one who practices therapy, and I mean absolutely no one, who tries to use a script will succeed. If you try it, you will find out really fast that clients are notoriously bad at following the script, particularly because they are not privy to their lines beforehand. So my question is this: how many of the people who are anti-empirically supported treatment have attempted to familiarize themselves with these same protocols they strongly oppose?
As a strong advocate of empirically based treatments, I will note that in my early training I was exposed to supervision in, and have become familiar with, both approaches. If the voices of opposition have not done the same with respect to empirically based approaches, then I strongly suggest that you do so before making claims that are factually and fundamentally misinformed.
Jonathan Shedler, Ph.D. Where is the Evidence for Evidence-Based Therapies? Thanks Submitted by Jonathan Shedler Ph.D. Thanks for your kind words! Bravo Submitted by Steven Reidbord M.D. Well said! Submitted by Jonathan Shedler Ph.D. Can you provide citations for Submitted by Martin on October 3, - am. Can you provide citations for the study you mention in the beginning of your piece? Submitted by Martin on October 3, - pm. Actually, it gets worse Submitted by Mike on October 4, - am.
Simpson, PhD on October 9, - am. Regardless, thanks for standing in the breach, sir. Fascinating Submitted by Anonymous on October 9, - pm. Still, that's one way to provide work for the mental health system. Submitted by Rudy Oldeschulte on October 9, - pm.
If you were a patient, Submitted by Anonymous on October 10, - am. Submitted by Lynn E. O'Connor Ph. CBT vs Psychodynamic: Is this the right tone?
Submitted by Brian Pilecki on October 11, - pm. A few quick points: 1) If there is a big lie, it is that people who use manuals read them as cookbooks. If CBT is not cookbook, then why is it tested as cookbook? Submitted by Mattias Desmet on October 15, - am.
Cookbooks, empirical tests, and real-world practice Submitted by Dean McKay on October 15, - am. Cookbooks, empirical tests, and real-world practice Submitted by Mattias Desmet on October 15, - am. I think this issue has been covered in the remarks about the flexibility of CBT. Hi Mattias, Thank you for the reply. I agree that research trials Submitted by Brian Pilecki on October 16, - pm. In terms of citations about CBT manuals used in a flexible way, I would suggest the following sample of citations (not exhaustive): Kendall, P.