“Normal science
does not aim at novelty but at clearing up the status quo. It discovers what it
expects to discover.” – Thomas
Kuhn.
I was struck by
this quote from Thomas Kuhn last week when reading a Guardian blog about the
influential philosopher of science. It’s a simple statement suggesting that
so-called ‘normal science’ isn’t going to break any new ground, isn’t going to
change the way we think about something, but probably will reinforce
established ideas, and – perhaps even more importantly – will entrench what
scientists think are the important questions that need answering. Filling in
the gaps to clear up the status quo is probably a job that 95% of scientists
are happy to do. It grows your CV, satisfies your Dean of School, gets you
tenure and pays the mortgage.
But when I first
read that quote, I actually misread it. I thought it said “Normal science does
not aim at novelty but aims to maintain the status quo”! I suspect that when it
boils down to it, there is not much difference between my misreading of the
quote and what Kuhn had actually meant. Once scientists establish a paradigm in a particular area, this has the effect of (1) framing the questions to be asked, (2) defining the procedures to answer them, and (3) mainstreaming the models, theories and constructs within which new facts should be assimilated. I suspect that once a paradigm is
established, even those agencies and instruments that provide the
infrastructure for research contribute to entrenching the status quo. Funding
bodies and journals are good examples. Both tend to map on to very clearly
defined areas of research, and at a time when more papers are being submitted to scientific journals than ever before, demand management tends to shrink journal scope in ways that highlight traditional research topics more and more, making new knowledge from other disciplinary approaches less likely to cross-fertilize research in a particular area.
This led me to
thinking about my own research area, which is clinical psychology and
psychopathology. Can we clinical psychology researchers convince ourselves that
we are doing anything other than trying to clear up the status quo in a
paradigmatic approach that hasn’t been seriously questioned for over half a
century – and in which we might want to question its genuine achievements? Let’s
just take a quick look at some relevant points:
1. DSM still rules the way that much
clinical psychology research is conducted. The launch of DSM-5 in 2013 will
merely re-establish the dominance of diagnostic categories within clinical
psychology research. There are some who struggle to champion transdiagnostic
approaches, but they are doing this against a trend in which clinical
psychology and psychiatry journals are becoming more and more reliant on
diagnostic criteria for inclusion of papers. Journal of Anxiety Disorders is
just one example of a journal whose scope has recently shrunk from publishing
papers on anxiety to publishing papers on anxiety only in diagnosed
populations. DSM-I was published in 1952 – sixty years on it has become even
more entrenched as a basis for doing clinical psychology research. No paradigm
shift there then!
This doesn’t
represent a conspiracy between DSM and journals to consolidate DSM as the basis
for clinical psychology research – it merely reflects the fact that scientific
journals follow established trends rather than create new spaces within which
new concatenations of knowledge can emerge. Journals will by nature be a
significant conservative element in the progress of science.
2. There is a growing isolation in much of
clinical psychology research – driven in part by the shrinking scope of
clinical research journals and the adherence of many of them to DSM criteria
for publication. This fosters a detachment from core psychological
knowledge, and because of this, clinical psychology research runs the risk of
re-inventing the wheel – and probably re-inventing it badly. Some years ago I
expressed my doubts about the value of many clinical constructs that had become
the focus of research across a range of mental health problems (Davey, 2003). Many of these constructs have been developed from clinical
experience and relate to individual disorders or even individual symptoms, but
I’m convinced that a majority of them simply fudge a range of different psychological
processes, most of which have already been researched in the core psychological
literature. I'm
an experimental psychologist by training who just happens to have become
interested in clinical psychology research, so I was lucky enough to be able to
bring rather different approaches to this research than would those who were born and brought up in the clinical psychology way of doing things. What must
not happen is for clinical psychology research to become even more insular and
even more entrenched in reinventing even more wheels - or the wheels on the bus
really will just keep going round and round and round!
3. OK, I'm going to be deliberately provocative here – clinical neuroscience and imaging technology cost a lot of money – so their role needs to be enshrined and ring-fenced in the fabric of the psychological knowledge endeavour, doesn’t it? Does it? If that’s the
case – then we’re in for a long period of paradigm stagnation. Imaging
technology is the Mars Rover of cognitive science while the rest of us are
using telescopes - or that's the way it seems. There are some clinical funding
bodies I simply wouldn't apply to for experimental psychopathology research –
‘cos if it ain’t imaging it ain't gonna get funded - yet where does the
contribution of imaging lie in the bigger knowledge picture within clinical
psychology? There may well be a well thought out view somewhere out there that
has placed the theoretical relevance of imaging into the fabric of clinical
psychology knowledge (advice welcome on this!). There is often a view that whatever imaging studies throw up must
be taken into account by studies undertaken at other levels of explanation -
but that is an argument that is not just true of imaging, it's true of any
objective and robust scientific methodology.
Certainly, identifying brain locations and networks for clinical phenomena may not be the way to go – there is growing support for
psychological constructionist views of emotion for instance, suggesting that
emotions do not have either a signature brain location or a dedicated neural
signature at all (e.g. Lindquist, Wager, Kober, Bliss-Moreau & Barrett, 2012). There are some very good reviews of the role of brain function in psychological disorders – but I'm not
sure what they tell us other than the fact that brain function underlies
psychological disorders – as it does everything! For me, more understanding of
psychological disorders can be gleaned from studying individual experience,
developmental and cognitive processes, and social and cultural processes than
basic brain function. Brain images are a bit like the snapshot of the family on
the beach – the photo doesn't tell you very much about how the family got there
or how they chose the beach or how they're going to get home.
But the point I’m trying to make is that if certain ways of
doing research require significant financial investment over long periods of
time (like imaging technology), then this too will contribute to paradigm
stagnation.
4. When tails begin
to wag dogs you know that as a researcher you have begun to lose control over
what research you can do and how you might be allowed to do it. Most of us are aware that to get funding for our research – however ‘blue skies’ it might be – we now have to provide an applied impact story. How will
our research have an impact on society? Within clinical psychology research
this always seems to have been a reality. Much of clinical psychology research
is driven by the need to develop interventions and to help vulnerable people in
distress – which is a laudable pursuit. But does this represent the best way to do science? There is a real problem when it comes to fudging the boundary between understanding and practice. There appears to
be a diminishing distinction in clinical psychology between practice
journals and psychopathology journals, which is odd because helping people and
understanding their problems are quite different things – certainly from a
scientific endeavour point of view. Inventing an intervention out of
theoretical thin air and then giving it the facade of scientific integrity by
testing to see if it is effective in a controlled empirical trial is not good
science – but I could name what I think are quite a few popular interventions
that have evolved this way – EMDR and mindfulness are just two of them (I expect some will argue that these interventions didn't come out of a theoretical void, but we still don't really know how they work when they do work). At the
end of the day, to put the research focus on ‘what works in practice’ takes the
emphasis away from understanding what it is that needs to be changed, and in
clinical psychology it almost certainly sets research priorities within
establishment views of mental health.
5. My final point is a rather general one
about achievement in clinical psychology research. We would like to believe
that the last 40 years has seen significant advances in our development of
interventions for mental health problems. To be sure, we’ve seen the
establishment of CBT as the psychological intervention of choice for a whole
range of mental health problems, and we are now experiencing the third wave of
these therapies. This has been followed up with the IAPT initiative, in which
psychological therapies are being made more accessible to individuals with
common mental health problems. The past
40 years has also seen the development and introduction of second-generation
antidepressants such as SSRIs. Both CBT and SSRIs are usually highlighted as
state-of-the-art interventions in clinical psychology textbooks, and are hailed
by clinical psychology and psychiatry respectively as significant advances in
mental health science. But are they? RCTs and meta-analyses regularly show that
CBT and SSRIs are superior to treatment as usual, wait-list controls, or
placebos – but when you look at recovery rates, their impact is still far from
stunning. I am aware that this last point is not one that I can claim reflects
a genuinely balanced evidential view, but a meta-analysis we have just
completed of cognitive therapy for generalized anxiety disorder (GAD) suggests
that recovery rates are around 57% at follow-up – which means that 43% of those in cognitive therapy interventions for GAD do not reach basic recovery levels by the end of the treatment programme. Reviews of IAPT programmes for depression
suggest no real advantage for IAPT interventions based on quality of life and
functioning measures (McPherson, Evans & Richardson, 2009). In a review article about to be published in the Journal of Experimental Psychopathology, Craske, Liao, Brown & Vervliet (2012) note that even exposure therapy for anxiety disorders
achieves clinically significant improvement in only 51% of patients at
follow-up. I found it difficult to find studies that provided either recovery
rates or measures of clinically significant improvement for SSRIs, but Arroll et al. (2005)
report that only 56-60% of patients in primary care responded well to SSRIs
compared to 42-47% for placebos.
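As a rough back-of-the-envelope illustration of what differences of this size mean in practice – and treating the Arroll et al. response figures as representative, which is an assumption on my part – the standard number-needed-to-treat calculation gives:

\[
\mathrm{NNT} = \frac{1}{p_{\mathrm{SSRI}} - p_{\mathrm{placebo}}} \approx \frac{1}{0.58 - 0.45} \approx 8
\]

In other words, something like eight patients would need to be treated with an SSRI for one of them to respond who would not also have responded to placebo.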
I may be
over-cynical, but it seems that the best that our state-of-the-art clinical
psychology and psychopharmacological research has been able to achieve is a
recovery rate of around 50-60% for common mental health problems – compared with placebo and spontaneous remission rates of between 30% and 45%. Intervention journals are full of research
papers describing new ‘tweaks’ to these ways of helping people with mental
health problems, but are tweaks within the existing paradigms ever going to be
significant? Is it time for a paradigm shift in the way we research mental
health?
Follow me on Twitter at:
http://twitter.com/GrahamCLDavey