In psychotherapy, practically the only consistent finding is that whatever kind of psychotherapy the person running the study favors turns out to be the most effective.

High Cortisol Chad

Thirty different meta-analyses have confirmed this.

- It is widely asserted that “evidence-based” therapies are scientifically proven and superior
to other forms of psychotherapy. Empirical research does not support these claims.
- Empirical research shows that “evidence-based” therapies are weak treatments. Their
benefits are trivial, few patients get well, and even the trivial benefits do not last.
- Troubling research practices paint a misleading picture of the actual benefits of
“evidence-based” therapies, including sham control groups, cherry-picked patient samples,
and suppression of negative findings.

“Publication bias” is a well-known phenomenon in research. Publication bias refers to
the fact that studies with positive results—those that show the outcomes desired by
the investigators—tend to get published. Studies that fail to show the desired outcome
tend not to get published. For this reason, published research can provide a biased or
skewed picture of actual research findings. This phenomenon is called the
“file-drawer effect.” For every published study with positive results, how
many studies with negative results are hidden in researchers’ file drawers? How can
you prove there are file drawers stuffed with negative results? It turns out there is a
way to do this. There are statistical methods to estimate how many unpublished
studies have negative results that are hidden from view.
A team of researchers tackled this question for research on CBT for depression [17].
They found that the published benefits of CBT are exaggerated by 75% owing to
publication bias. How do you find out something like this? How can you know
what is hidden in file drawers? You know by examining what is called a funnel
plot. The idea is actually quite simple. Suppose you are conducting a poll—“Are
US citizens for or against building a border wall with Mexico?”—and you examine
very small samples of only 3 people. The results can be all over the place. Depending
on the 3 people you happen to select, it may look like 100% of citizens favor a
wall or 100% oppose it. With small sample sizes, you see a wide scatter or range of
results. As sample sizes get larger, the findings stabilize and converge.
If you graph the findings—in this case, the relationship between sample size and
treatment benefit—you get a plot that looks like a funnel (Fig. 2, left). Studies with
smaller sample sizes show more variability in results, and studies with larger sample
sizes tend to converge on more similar values. That is what it should look like if
data are not being hidden. In fact, what it looks like is something like the graph on
the right (see Fig. 2). The data points that are supposed to be in the lower left area of
the graph are missing.
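The shrinking scatter behind the funnel shape can be sketched with a quick simulation. This is a minimal illustration of the poll example above, not the actual CBT data; the "true" population split and the number of simulated polls are made-up values:

```python
import random
import statistics

def poll_estimate(true_share, n, rng):
    """Share of n simulated respondents answering 'yes'."""
    return sum(rng.random() < true_share for _ in range(n)) / n

rng = random.Random(42)
true_share = 0.5  # assumed true population split (illustrative)

spreads = []
for n in (3, 30, 300):
    # 1,000 simulated polls at each sample size
    estimates = [poll_estimate(true_share, n, rng) for _ in range(1000)]
    spreads.append(statistics.stdev(estimates))
    print(f"n={n:3d}  spread of estimates = {spreads[-1]:.3f}")

# Small samples scatter widely; large samples converge on the true value.
# Plot estimate against sample size and you get the funnel shape.
```

If studies with unfavorable results were being suppressed, one side of that scatter would simply be missing, which is exactly what the asymmetric funnel plot reveals.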

[Screenshot from Shedler, “Where Is the Evidence for Evidence-Based Therapy?”]


In the typical randomized controlled trial for “evidence-based” therapies, about
two-thirds of the patients are excluded from the studies a priori [10]. Sometimes
exclusion rates exceed 80%. That is, the patients have the diagnosis and seek treatment,
but because of the study’s inclusion and exclusion criteria, they are excluded from
participation. The higher the exclusion rates, the better the outcomes [11]. Typically,
the patients who are excluded are those who meet criteria for more than one psychiatric
diagnosis, have personality pathology, are considered unstable, or may be suicidal.
In other words, they are the patients we treat in real-world practice. The patients
included in the research studies are not representative of any real-world clinical
population.
Here is some simple arithmetic. Approximately two-thirds of patients who seek
treatment are excluded from the research studies. Of the one-third who are treated,
about one-half show improvement. That is about 16% of the patients who initially
presented for treatment. But this is just patients who show “improvement.” If we
consider patients who actually get well, we are down to about 11% of those who
originally sought treatment. If we consider patients who get well and stay well, we are
down to 5% or fewer. In other words, scientific research demonstrates that
“evidence-based” treatments are effective and have lasting benefits for approximately
5% of the patients who seek treatment.
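The first step of that arithmetic can be checked in a few lines. The cohort size and the rates are the approximate figures quoted above, used for illustration only:

```python
# Imagine a cohort of 100 patients presenting for treatment.
seeking = 100

treated = seeking * (1 / 3)    # ~two-thirds excluded a priori
improved = treated * (1 / 2)   # ~half of those treated show "improvement"

print(f"treated:  {treated:.0f} of {seeking}")
print(f"improved: {improved:.1f} of {seeking} (~16%)")
```

The further drops to ~11% (patients who actually get well) and ~5% (who get well and stay well) follow the same pattern: each stage keeps only a fraction of the previous one, so the compounding is what shrinks the headline figure.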

 
 
