Saturday, 29 December 2012

"Psychology" - The Struggling Science of Mental Life


Many of you may be old enough to remember George A. Miller’s book “Psychology: The Science of Mental Life”. As an undergraduate psychology student I was brought up on books whose titles variously contained the words science, psychology, behaviour and mind. These books had one main purpose – to persuade students of psychology that psychology was a legitimate scientific pursuit, using rigorous scientific methods to understand human behaviour and the human mind, all on a par with the more established sciences such as biology, physics and chemistry.

Even if you’re happy with the notion of psychology as a science, we then have the various debates about whether psychology is a biological science or a social science, and in the UK this isn’t just an issue of terminology; it is also a major issue of funding levels. Do psychologists need labs? Do undergraduate psychology students need to do lab classes to learn to be psychologists? This almost became the tail wagging the dog, as funding bodies such as HEFCE (and its predecessor, the Universities Funding Council) looked to save money by re-banding psychology as a half-breed science sitting somewhere between social science and biological science. I even seem to recall that some psychology departments were designated social psychology departments and given little or no lab funding. So were students in those Departments being taught science or not? What breed of psychology was it?

Just one more example before I get to the main point. A few years ago I had the good fortune to teach a small-group elective to second-year medical students. This was a 6-week course on cognitive models of psychopathology. I was fortunate to teach this group because it contained highly motivated and intelligent students. Now, I have never viewed myself as anything other than a scientist using scientific methods to understand human behaviour in general and psychopathology in particular. But this group of highly able and highly trained medical students inevitably had difficulty with two particular aspects of the material I was teaching them: (1) how can we use science to study “cognitions” when we can’t see them, when we make up ‘arbitrary’ concepts to describe them, and when we can’t physically dissect them? and (2) at the end of the day, cognitions will always boil down to biology, so it is biology – and not cognitions – that should be the object of scientific study.

What struck me most was that these students had already developed a conception of science that was not procedure-based but content-based. It was the subject matter that defined science for them, not the methodology.

My argument here is that while psychology has been touted as a science for a number of generations now, psychologists over these generations have failed to convince significant others (scientists in other disciplines, funding organizations, etc.) that psychology is a science on a par with other established sciences. Challenges to psychology as a science come in many forms and from many different sources. Here are a few examples:

(1)      Funding bodies frequently attempt their own ‘redefining’ of psychology, especially when budgets are tight, and psychology is a soft target here, with its large numbers of students offering significant savings if science-related funding is downgraded.

(2)      Students, teachers and researchers in other science disciplines often have very esoteric views of what science is, and these views revolve around their own subject matter and the techniques they specifically use to understand that subject matter. Psychologists have probably not been proactive or aggressive enough in broadcasting the ways in which psychology is a science and how it uses scientific methodologies in a highly objective and rigorous way.

(3)      Members of other science disciplines frequently have a ‘mental block’ when it comes to categorizing psychology as a science (that’s probably the nicest way I can put it!). This reminds me of the time a few years ago when I was representing psychology on the UK Science Council. There was a long discussion about how to increase the number of women taking science degrees. During this discussion it was pointed out that psychology was extremely successful at recruiting female students, so perhaps we shouldn’t be too pessimistic about recruiting women into at least some branches of science. The discussion paused briefly, and then continued as if nothing of any relevance whatsoever had been said!

(4)      All branches of knowledge are open to allegations of fraud, and there has been considerable discussion recently about fraud in science, fraud in psychology and the social sciences, and – most specifically – fraud in social psychology. Arguably, psychology is the science discipline most likely to be hurt by such allegations – not because its methodology is necessarily less rigorous than in other science disciplines or its publication standards any less high, but because many scientists in other disciplines fail to understand how psychology practises as a science. Sadly, this is even true within the discipline of psychology, and it is easy to take the trials and tribulations that have recently been experienced in social psychology research as an opportunity for the more ‘hard-nosed’ end of psychology to sneer at what might be considered the softer under-belly of psychological science. One branch of psychology ‘sneering’ at another is not a clever thing to do, because it will all be grist to the mill of those in other science disciplines who would brand psychology generally as “non-scientific”.

I’ll finish by mentioning a recent report published in 2011 attempting to benchmark UK psychology research within an international context. Interestingly, this report (published jointly by the ESRC, BPS, EPS and AHPD) listed nine challenges to the competitiveness of current psychology research in the UK. A significant majority of these challenges relate to the skills and facilities necessary for pursuing psychology as a science!

Psychology still requires an orchestrated campaign to establish its scientific credentials – especially in the eyes of other science disciplines, many of which have their own distorted view of what science is but already occupy the intellectual high ground. Challenges to psychology as a science come from many diverse sources, including funding bodies, other sciences, intra-disciplinary research fraud, and conceptual differences within psychology as an integrated, but diverse, discipline.

Tuesday, 4 December 2012

‘Stickers’, ‘Jugglers’ and ‘Switchers & Dumpers’ – Which kind of researcher should you be?


I often look back on my own research career with some surprise at where it’s all travelled to. When I was a PhD student I was a dyed-in-the-wool behaviourist, loading rats into Skinner boxes and clichés into arguments. Cognitions didn’t exist – and even if, by some remote possibility, they did, they were of no use to a scientific psychology. I was a radical Skinnerian pursuing a brave new world in which behaviour was all that mattered and contingencies of reinforcement would win out against all the airy-fairy vagaries of other approaches to psychology. Just a few years on from this I was still wondering why my PhD thesis on the “determinants of the post-reinforcement pause on fixed-interval schedules in rats” hadn’t been nominated for a Nobel Prize!

I’ve begun with this personal example because it emphasizes how relatively narrow interests (and views and approaches) can seem like the whole universe – and that is especially the case when you are personally invested in a specific piece of research like a PhD thesis. But what happens later on in our academic lives? Should we stay put and hone our skills in a focused research niche, or should we nervously wander out of that niche into new areas with new challenges requiring new skills?

It is certainly a question for young academics to think about. Stick with what you know, or get other strings to your bow? If you are a newly graduated PhD, you are more likely than not to be a “clone” of your supervisor, and that may well be a block on you getting a lectureship at the institution in which you did your research degree. But then most recruiting Departments will want to know that you are – as they put it – “capable of independent research” before appointing you. Do you go scrabbling for that last section of your thesis entitled “Future Directions” and try to stretch out your PhD research (often in a painfully synthetic way, like seeing how far some bubble-gum will stretch, even though the ‘amount’ there is still the same)? Or do you bite the bullet and try your newly-learnt skills on some new and different problems?

You have one career lifetime (unless you’re Buddhist!) – so should you diversify or should you focus? Let’s begin with those people who focus an entire research career on one specific area – “the stickers” – often concentrating on a small, limited number of research problems, but perhaps with the benefit of developing more and more refined (and sometimes more complex) theoretical models. Cripes – how boring! Take that approach and you’ll become one or more of the following: (a) the person who sits near the front at international conferences and begins asking questions with the phrase “Thank you for your very interesting talk, but…”; (b) that butcher of a referee who everyone knows, even though your reviews are still anonymous; (c) someone who sits in Departmental recruitment presentations openly mocking the presentation of any applicant not in your specific area of research (usually by looking down at your clasped hands and shaking your head slowly from side to side while muttering words like “unbelievable” or “where’s the science?”); or, finally, (d) Director of an RCUK National Research Centre.

So what about taking that giant leap for researcher-kind and diversifying? Well, first, it’s arguably good to have more than one string to your bow and become a research “juggler”. The chances are that at some point you’ll get bored with the programme of research that you first embarked on early in your career. Having at least two relatively independent streams of research means you can switch your focus from one to the other. It also increases (a) the range of journals you can publish in, (b) the funding bodies you can apply to, and (c) the diversity of nice people you can meet and chat sensibly to at conferences. It can also be a useful way of increasing your publication rate in early mid-career when you’re looking for an Associate Editorship to put on your CV or a senior lectureship to apply for.

But there is more to diversifying than generating two streams of research purely for pragmatic career reasons. If you’re a tenured academic, you will in principle have the luxury of being able to carry out research on anything you want to (within reason) – surely that’s an opportunity too good to miss? B.F. Skinner himself promoted the scientific principle of serendipity (a principle that seems to have gone missing from modern-day research methods textbooks) – that is, if something interesting crops up in your research, drop everything and study it! This apparently was how Skinner began his studies on response shaping, which eventually led to his treatise on operant conditioning. But diversity is not always a virtue. There are some entrepreneurial “switchers and dumpers” out there, who float a new (and largely unsubstantiated) theory in the literature and then move on to a completely new (and often more fashionable) area of research, leaving researchers of the former topic to fight, bicker and prevaricate, often for years, over what eventually turns out to be a red herring, or a blind alley, or a complete flight of fancy designed to grab the headlines at the time.

Now, you’ve probably got to the point in this post where you’re desperate for me to provide you with some examples of “stickers”, “jugglers” and “switchers and dumpers” – well, I think you know who some of these people are already, and I’m not going to name names! But going back to my first paragraph, if you’d told me as a postgraduate student about the topics I would be researching now, I would have been scornfully dismissive. But somehow I got here, and by way of an interesting and enjoyable pathway of topics, ideas and serendipitous routes. Research isn’t just about persevering at a problem until you’ve tackled it from every conceivable angle; it’s also an opportunity to try out as many candies in the shop as you can – as long as you sample responsibly!

Friday, 2 November 2012

The Lost 40%


I’ve agonized for some time about how best to write this post. I want to try and be objective and sober about our achievements in developing successful interventions for mental health problems, yet at the same time I don’t want to diminish hope for recovery in those people who rely on mental health services to help them overcome their distress.

The place to start is a meta-analysis of cognitive therapy for worry in generalized anxiety disorder (GAD) just published by my colleagues and me. For those of you who are unfamiliar with GAD, it is one of the most common mental health problems; it is characterized by anxiety symptoms and by pathological, uncontrollable worrying, and it has a lifetime prevalence of between 5% and 8% in the general adult population. That means that in a UK population of around 62 million, between 3 and 5 million people will experience diagnosable symptoms of GAD in their lifetime. In a US population of 311 million, these figures increase to between 15 and 25 million sufferers. Our meta-analysis found that cognitive therapy was indeed significantly more effective at treating pathological worrying in GAD than non-therapy controls, and we also found evidence that cognitive therapy was superior to treatments that were not cognitive therapy based.

So, all well and good! This evidence suggests that we’ve developed therapeutic interventions that are significantly better than doing nothing and marginally better than some other treatments. Our results also suggest that the magnitude of these effects is slightly larger than had previously been found, possibly indicating that newer forms of cognitive therapy are becoming increasingly effective.

But what can the service user with mental health problems make of these conclusions? On the face of it they seem warmly reassuring – we do have treatments that are more effective than doing nothing, and the efficacy of these treatments is increasing over time. But arguably, what the service user wants to know is not “Is treatment X better than treatment Y?” but “Will I be cured?” The answer to that is not so reassuring. Our study was one of the first to look at recovery data as well as the relative efficacy of treatments. Across all of the studies for which we had data on levels of pathological worrying, the primary recovery data revealed that only 57% of sufferers were classed as recovered at 12 months following cognitive therapy – and, remember, cognitive therapy was found to be more effective than other forms of treatment. To put it another way, 43% of people who underwent cognitive therapy for pathological worrying in GAD were still not classed as recovered one year later. Presumably, they were still experiencing distressing symptoms of GAD that were adversely affecting their quality of life. I think these findings raise two important but relatively unrelated issues.

First, is a recovery rate of 57% enough to justify 50 years of developing psychotherapeutic treatments for mental health disorders such as GAD? To be sure, GAD is a very stubborn disorder. Long-term studies of GAD indicate that around 60% of people diagnosed with GAD were still exhibiting significant symptoms of the disorder 12 years later (regardless of whether they’d had treatment for these symptoms during this period). Let’s apply this to the prevalence figures I quoted earlier in this piece: the number of people in the UK and the USA suffering long-term symptoms of GAD during their lifetime might be as high as 3 million and 15 million respectively. In 50 years of developing evidence-based talking therapies, have we been too obsessed with relative efficacy and not enough with recovery? Has too much time been spent just ‘tweaking’ existing interventions to make them competitive with other existing interventions? Perhaps as our starting point we should be taking a more universal view of what is required for recovery from disabling mental health problems? That overview will not just include psychological factors; it will inevitably include social, environmental and economic factors as well.
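For anyone who wants to check the back-of-envelope arithmetic behind these figures, here is a minimal sketch in Python. The population sizes, the 5-8% lifetime prevalence range and the roughly 60% long-term persistence estimate are simply the figures quoted above; this is illustration only, not epidemiological modelling.

```python
# Back-of-envelope arithmetic for the figures quoted above. The population
# sizes, the 5-8% lifetime prevalence range and the ~60% long-term
# persistence estimate are taken from the text; this is illustration only.

POPULATIONS = {"UK": 62_000_000, "USA": 311_000_000}
PREVALENCE_RANGE = (0.05, 0.08)   # lifetime prevalence of GAD
LONG_TERM_PERSISTENCE = 0.60      # still symptomatic ~12 years later

for country, population in POPULATIONS.items():
    low, high = (population * p for p in PREVALENCE_RANGE)
    long_term_upper = high * LONG_TERM_PERSISTENCE
    print(f"{country}: {low / 1e6:.1f}-{high / 1e6:.1f} million lifetime sufferers; "
          f"up to ~{long_term_upper / 1e6:.0f} million with long-term symptoms")
```

Running it reproduces the rough ranges above: about 3-5 million lifetime sufferers in the UK and 15-25 million in the USA, of whom as many as roughly 3 million and 15 million respectively may still be symptomatic in the long term.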

Second, what do we tell the service user? Mental health problems such as GAD are distressing and disabling. Hope of recovery is the belief that most service users will take into treatment, but on the basis of the figures presented in this piece, it can only be a 57% hope! This level of hope is not confined to cognitive therapy for GAD or to psychotherapies in general; it is a figure that pretty much covers pharmaceutical treatments for GAD as well, with the best remission/recovery rates for drug treatments being around 60% (fluoxetine) and some as low as 26%.

I have spent this post discussing recovery from GAD in detail, but I suspect similar recovery levels and similar arguments apply to other forms of intervention (such as exposure therapies) and to other common mental health problems (such as depression and the anxiety disorders generally). It may be time to start looking at the bigger picture of what is required for recovery from mental health problems, so that hope can also be extended to the 40-45% of service users for whom, we have yet to openly admit, we cannot currently provide a ‘cure’.

Sunday, 30 September 2012

Discovering Facts in Psychology: 10 ways to create “False Knowledge”


There’s been quite a good deal of discussion recently about (1) how we validate a scientific fact (http://bit.ly/R8ruMg; http://bit.ly/T5JSJZ; http://bit.ly/xe0Rom), and (2) whether psychology – and in particular some branches of psychology – is prone to generating fallacious scientific knowledge (http://bit.ly/OCBdgJ; http://bit.ly/NKvra6). As psychologists, we are all trained (I hope) to be scientists – exploring the boundaries of knowledge and trying as best we can to create new knowledge. But in many of our attempts to pursue our careers and pay the mortgage, are we badly prone to creating false knowledge? Yes – we probably are! Here are just a few examples, and I challenge most of you psychology researchers who read this post to say you haven’t been a culprit in at least one of these processes!

Here are 10 ways to risk creating false knowledge in psychology.

1.  Create your own psychological construct. Constructs can be very useful ways of summarizing and formalizing unobservable psychological processes, but researchers who invent constructs need to know a lot about the scientific process, must make sure they don’t create circular arguments, and must be in touch with other psychological research that is relevant to the understanding they are trying to create. In some sub-disciplines of psychology, I’m not sure that happens (http://bit.ly/ILDAa1).

2.  Do an experiment but make up or severely massage the data to fit your hypothesis. This is an obvious one, but is something that has surfaced in psychological research a good deal recently (http://bit.ly/QqF3cZ; http://nyti.ms/P4w43q).

3.  Convince yourself that an effect at p=.055 is real. How many times have psychologists tested a prediction only to find that the critical comparison just misses the crucial p=.05 value? How many times have psychologists then had another look at the data to see if it might just be possible that, with a few outliers removed, this predicted effect might be significant? Strangely enough, many published psychology papers only just creep past the p=.05 value – many more than would be expected by chance! Just how many false psychology facts has that created? (http://t.co/6qdsJ4Pm) (There is a small simulation sketch of this process after the list below.)

4.  Replicate your own findings using the same flawed procedure. Well, we’ve recently seen a flood of blog posts telling us that replication is the answer to fraud and poor science. If a fact can be replicated – then it must be a fact! (http://bit.ly/R8ruMg; http://bit.ly/xe0Rom) Well – no – that’s not the case at all. If you are a fastidious researcher and attempt to replicate a study precisely, then you are also likely to replicate the same flaws that gave rise to false knowledge. We need to understand the reasons why problematic research gives rise to false positives – that is the way to real knowledge (http://bit.ly/UchW4J).

5.  Use only qualitative methods. I know this one will be controversial, but in psychology you can’t just accept what your participants say! The whole reason psychology has developed as a science is that it has developed a broad range of techniques to access psychological processes without having to accept at face value what a participant in psychological research tells us. I’ve always argued that qualitative research has a place in the development of psychological knowledge, but its place is in the early stages of that knowledge development, and more objective methodologies may be required to understand more proximal mechanisms.

6.  Commit your whole career to a single effect, model or theory that has your name associated with it. Well, if you’ve invested your whole career and credibility in a theory or approach, then you’re not going to let it go lightly. You’ll find multiple ways to defend it, even if it’s wrong, and you’ll waste a lot of other researchers’ time and energy as they try to disprove you. Ways of understanding move on, just like time, and so must the intransigent psychological theorist.

7.  Take a tried and tested procedure and apply it to everything. Every now and then in psychology a new procedure surfaces that looks too good to miss. It is robust, it tells you something about the psychological processes involved in a phenomenon, and you can get a publication by applying it to something that no one else has yet applied it to! So join the fashion rush – apply it to everything that moves, and some things that don’t (http://bit.ly/SX37Sn). No, I wasn’t thinking of brain imaging, but…. Hmmmm, let me think about that! (I was actually thinking about the Stroop!)

8.  If your finding is rejected by the first journal you submit it to, continue to submit it to journals until it’s eventually published. This is a nice way to ensure that your contribution to false knowledge will be permanently recorded. As academic researchers we are all under pressure to publish (http://bit.ly/AsIO8B), so if you believe your study has some genuine contribution to make to psychological science, don’t accept a rejection from the first journal you send it to. In fact, even if you don’t think your study has any real contribution to make to psychological knowledge at all, don’t accept a rejection from the first journal you send it to either! Because you will probably get it published somewhere. I’d love to know what the statistics are on this, but I bet that if you persist enough, your paper will get published.

9.  Publish your finding in a book chapter (non-peer-reviewed), an invited review, or a journal special issue – all of which are likely to have an editorial “light touch”. Well, if you do, it might not get cited much (http://t.co/D55VKWDm), but it’s a good way of getting dodgy findings (and dodgy theories) into the public domain.

10.  Do some research on highly improbable effects – and hope that some turn up significant by chance (http://bit.ly/QsOQNo). And it won’t matter that people can’t replicate it – because replications will only rarely get published! (http://bit.ly/xVmmOv) The more improbable your finding, the more newsworthy it will be, the more of a celebrity you will become, the more people will try to replicate your research and fail, and the more genuine research time and effort you will be wasting. But it will be your 15 minutes of fame!
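As flagged under point 3 above, here is a small simulation sketch of how a “second look” at the data can inflate false positives. It is purely illustrative and not drawn from any of the studies linked above: two groups are sampled from the same distribution (so there is no real effect), and whenever the t-test just misses p=.05 the most extreme observation at each end of each group is dropped and the test is run again.

```python
# Toy illustration of point 3: giving yourself a "second look" at the data
# (here, trimming the most extreme observation from each end of each group)
# whenever the first test just misses p = .05. There is NO real effect in
# these data, so every result below .05 is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_per_group = 20_000, 30


def trim_extremes(x):
    """Drop the smallest and largest observation - the convenient 'outliers'."""
    return np.sort(x)[1:-1]


false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p = stats.ttest_ind(a, b).pvalue

    if 0.05 <= p < 0.10:   # "so close... let's just have another look"
        p = stats.ttest_ind(trim_extremes(a), trim_extremes(b)).pvalue

    false_positives += p < 0.05

print("Nominal false-positive rate: 0.050")
print(f"Observed false-positive rate: {false_positives / n_experiments:.3f}")
```

Because a result that is already below .05 is never revisited, this second look can only push the false-positive rate above the nominal 5%, and that is before anyone has tried dropping a different set of outliers, adding covariates, or collecting “just a few more” participants.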

Finally, if you haven’t been able to generate false psychological knowledge through one of these 10 processes, then try to get your finding included in an Introduction to Psychology textbook. Once your study is enshrined in the good old Intro’ to Psych’ text, it’s pretty much going to be accepted as fact by at least one and maybe two future generations of psychologists. And once an undergrad has learnt a “fact”, it is indelibly inscribed on their brain and is faithfully transported into future reality!
