Sunday 30 September 2012

Discovering Facts in Psychology: 10 ways to create “False Knowledge” in Psychology


There’s been a good deal of discussion recently about (1) how we validate a scientific fact (http://bit.ly/R8ruMg; http://bit.ly/T5JSJZ; http://bit.ly/xe0Rom), and (2) whether psychology – and in particular some branches of psychology – is prone to generating fallacious scientific knowledge (http://bit.ly/OCBdgJ; http://bit.ly/NKvra6). As psychologists, we are all trained (I hope) to be scientists – exploring the boundaries of knowledge and trying as best we can to create new knowledge. But in our attempts to pursue our careers and pay the mortgage, are we badly prone to creating false knowledge? Yes – we probably are! Here are just a few examples, and I challenge the psychology researchers reading this post to say they haven’t been a culprit in at least one of these processes!

Here are 10 ways to risk creating false knowledge in psychology.

1.  Create your own psychological construct. Constructs can be very useful ways of summarizing and formalizing unobservable psychological processes, but researchers who invent constructs need to understand the scientific process, make sure they don’t create circular arguments, and stay in touch with other psychological research relevant to the understanding they are trying to create. In some sub-disciplines of psychology, I’m not sure that happens (http://bit.ly/ILDAa1).

2.  Do an experiment but make up or severely massage the data to fit your hypothesis. This is an obvious one, but is something that has surfaced in psychological research a good deal recently (http://bit.ly/QqF3cZ; http://nyti.ms/P4w43q).

3.  Convince yourself that an effect at p=.055 is real. How many times have psychologists tested a prediction only to find that the critical comparison just misses the crucial p=.05 value? How many times have psychologists then had another look at the data to see if, with a few outliers removed, the predicted effect might just reach significance? Strangely enough, many published psychology papers report effects that only just creep past the p=.05 value – many more than would be expected by chance! Just how many false psychology facts has that created? (http://t.co/6qdsJ4Pm) (See the simulation sketch after this list.)

4.  Replicate your own findings using the same flawed procedure. Well, we’ve recently seen a flood of blog posts telling us that replication is the answer to fraud and poor science. If a fact can be replicated – then it must be a fact! (http://bit.ly/R8ruMg; http://bit.ly/xe0Rom) Well – no – that’s not the case at all. If you are a fastidious researcher and attempt to replicate a study precisely, then you are also likely to replicate the same flaws that gave rise to false knowledge. We need to understand the reasons why problematic research gives rise to false positives – that is the way to real knowledge (http://bit.ly/UchW4J).

5.  Use only qualitative methods. I know this one will be controversial, but in psychology you can’t just accept what your participants say! The whole reason why psychology has developed as a science is because it has developed a broad range of techniques to access psychological processes without having to accept at face value what a participant in psychological research has to tell us. I’ve always argued that qualitative research has a place in the development of psychological knowledge, but it is in the early stage of that knowledge development and more objective methodologies may be required to understand more proximal mechanisms.

6.  Commit your whole career to a single effect, model or theory that has your name associated with it. Well, if you’ve invested your whole career and credibility in a theory or approach, then you’re not going to let it go lightly. You’ll find multiple ways to defend it, even if it's wrong, and waste a lot of other researchers’ time and energy trying to disprove you. Ways of understanding move on, just like time, and so must the intransigent psychological theorist.

7.  Take a tried and tested procedure and apply it to everything. Every now and then in psychology a new procedure surfaces that looks too good to miss. It is robust, it tells you something about the psychological processes involved in a phenomenon, and you can get a publication by applying it to something that no one else has yet applied it to! So join the fashion rush – apply it to everything that moves, and some things that don’t (http://bit.ly/SX37Sn). No, I wasn’t thinking of brain imaging, but... Hmmm, let me think about that! (I was actually thinking about the Stroop!)

8.  If your finding is rejected by the first journal you submit it to, continue to submit it to journals until it’s eventually published. This is a nice way to ensure that your contribution to false knowledge will be permanently recorded. As academic researchers we are all under pressure to publish (http://bit.ly/AsIO8B), so if you believe your study has a genuine contribution to make to psychological science, don’t accept a rejection from the first journal you send it to. In fact, even if you don’t think your study has any real contribution to make to psychological knowledge, don’t accept that first rejection either, because you will probably get it published somewhere. I’d love to know what the statistics are on this, but I bet that if you persist enough, your paper will get published.

9.  Publish your finding in a book chapter (non-peer-reviewed), an invited review, or a journal special issue – all of which are likely to have an editorial “light touch”. Well, if you do, it might not get cited much (http://t.co/D55VKWDm), but it’s a good way of getting dodgy findings (and dodgy theories) into the public domain.

10.  Do some research on highly improbable effects – and hope that some turn up significant by chance (http://bit.ly/QsOQNo). And it won’t matter that people can’t replicate your findings, because replications only rarely get published (http://bit.ly/xVmmOv). The more improbable your finding, the more newsworthy it will be, the more of a celebrity you will become, the more people will try and fail to replicate your research, and the more genuine research time and effort you will have wasted. But it will be your 15 minutes of fame!
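As a footnote to point 3 above, here is a minimal simulation sketch (mine, not from any of the sources linked in this post; the sample sizes and variable names are purely illustrative) of why an optional “second look” with outliers removed inflates the rate of false psychology facts even when no effect exists at all:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_per_group, alpha = 5000, 30, 0.05
false_positives = 0

for _ in range(n_sims):
    # Both groups are drawn from the SAME distribution: the null hypothesis is true.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    p = stats.ttest_ind(a, b).pvalue

    if p >= alpha:
        # "Have another look": drop the single most extreme score from each group
        # and re-test -- the optional second analysis described in point 3.
        a = np.delete(a, np.argmax(np.abs(a - a.mean())))
        b = np.delete(b, np.argmax(np.abs(b - b.mean())))
        p = stats.ttest_ind(a, b).pvalue

    false_positives += p < alpha

print(f"False-positive rate with flexible outlier removal: {false_positives / n_sims:.3f}")
# A single honest test would give roughly 0.05; the flexible analysis gives noticeably more.
```

The point of the sketch is simply that the second analysis is only ever run when the first one “just misses” significance, so it can only ever add false positives – exactly the asymmetry that produces the suspicious pile-up of published p-values just below .05.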

Finally, if you haven’t been able to generate false psychological knowledge through one of these 10 processes, then try to get your finding included in an Introduction to Psychology textbook. Once your study is enshrined in the good old Intro’ to Psych’ text, then it’s pretty much going to be accepted as fact by at least one and maybe two future generations of psychologists. And once an undergrad has learnt a “fact”, it is indelibly inscribed on their brain and is faithfully transported into future reality!


Saturday 15 September 2012

"An effect is not an effect until it is replicated" - Pre-cognition or Experimenter Demand Effects


There has been much talk recently about the scientific process in the light of recent claims of fraud against a number of psychologists (http://bit.ly/R8ruMg), and also the failure of researchers to replicate some controversial findings by Daryl Bem purportedly showing effects reminiscent of pre-cognition (http://bit.ly/xVmmOv). This has led to calls for replication to be the cornerstone of good science – basically “an effect is not an effect until it’s replicated” (http://bit.ly/UtE1hb). But is replication enough? Is it possible to replicate “non-effects” as well? Well, replication probably isn’t enough. If a study has generated ‘effects’ that we believe are spurious, then failure to replicate might be instructive, but it doesn’t tell us how or why the original study came by a significant effect. Whether the cause of the false effect is statistical or procedural, it is still important to identify that cause and empirically verify that it was indeed producing the spurious findings. This can be illustrated by a series of replication studies we have recently carried out in our experimental psychopathology labs at the University of Sussex.

Recently we’ve been running some studies looking at the effects of procedures that generate distress on cognitive appraisal processes. These studies are quite simple in design and highly effective at generating negative mood and distress in our participants (participants are usually undergraduate students participating for course credits), and pilot studies suggest that experienced distress and negative mood do indeed facilitate the use of clinically-relevant appraisal processes.

The first study we did was piloted as a final year student project. It produced nice data that supported our predictions – except for one thing. The two groups (distress group and control group) differed significantly on pre-manipulation baseline measures of mood and other clinically-relevant characteristics. Participants due to undertake the most distressing manipulation scored significantly higher on pre-experimental clinical measures of anxiety (M = 6.9, SD = 3.6 vs. M = 3.8, SD = 2.5) [F(56) = 4.01, p = .05] and depression (M = 2.2, SD = 2.6 vs. M = 1.1, SD = 1.1) [F(56) = 4.24, p = .04]. Was this just bad luck? The project student had administered the questionnaires herself prior to the experimental manipulations, and she had used a quasi-random participant allocation method (rotating participants through the experimental conditions in a fixed pattern).

Although our experimental predictions had been supported (even when pre-experimental baseline measures were controlled for), we decided to replicate the study, this time run by another final year project student. Lo and behold, the participants due to undertake the distressing task scored significantly higher on pre-experimental measures of anxiety (M = 9.1, SD = 4.1 vs. M = 6.9, SD = 3.0) [F(56) = 6.01, p = .01] and depression (M = 4.3, SD = 3.7 vs. M = 2.4, SD = 2.4) [F(56) = 5.09, p = .02]. Another case of bad luck? Questionnaires were administered and participants allocated in the same way as in the first study.

Was this a case of enthusiastic final year project students, determined to complete a successful project, somehow conveying information to the participants about what they were to imminently undergo? Basically, was this an implicit experimenter demand effect being conveyed by an inexperienced experimenter? To try to clear this up, we decided to replicate again, this time with the study run by an experienced postdoc researcher – someone who was wise to the possibility of experimenter demand effects, aware that this procedure was possibly prone to them, and presumably able to minimize them. To cut a long story short, we replicated the study again – and still replicated the pre-experimental group differences in the mood measures! Participants who were about to undergo the distress procedure scored higher than participants about to undergo the unstressful control condition.

At this point, we were beginning to believe in pre-cognition effects! Finally, we decided to replicate once more, but this time the experimenter would be entirely blind to the experimental condition each participant was in. Sixty sealed packs of questionnaires and instructions were made up before any participants were tested – half contained instructions for the participant on how to complete the questionnaires and run the stressful condition, and half contained the equivalent instructions for the control condition. The experimenter merely allowed each participant to choose a pack from a box at the outset, and remained entirely unaware of which condition the participant was running during the experiment. To cut another long story short – to our relief and satisfaction, the pre-experimental group differences in the anxiety and depression measures disappeared. It wasn’t pre-cognition after all – it was an experimenter demand effect.
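The sealed-pack procedure boils down to fixing a randomised, concealed allocation before any participant is tested. Here is a minimal sketch of that idea in Python – the function name, pack labels, and condition names are illustrative, not taken from our study:

```python
import random

def make_sealed_packs(n_packs=60, conditions=("stress", "control"), seed=None):
    """Build a pre-randomised, experimenter-blind allocation.

    Half the packs contain instructions for each condition. The shuffled order
    is fixed before testing begins, and only an opaque pack number is visible
    to the experimenter; the condition label stays inside the sealed pack.
    """
    assert n_packs % len(conditions) == 0, "packs must split evenly across conditions"
    packs = list(conditions) * (n_packs // len(conditions))
    random.Random(seed).shuffle(packs)
    return {f"pack_{i + 1:02d}": condition for i, condition in enumerate(packs)}

if __name__ == "__main__":
    allocation = make_sealed_packs(seed=2012)
    print(list(allocation.items())[:3])  # e.g. [('pack_01', 'control'), ...]
```

The crucial design choice is that the randomisation is done once, in advance, and hidden from whoever interacts with participants – so nothing the experimenter says or does before the manipulation can vary systematically with the condition a participant is about to receive.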

The point I’m making is that replication alone may not be sufficient to identify genuine effects – you can also replicate “non-effects” quite effectively, even when actively trying not to, and even more so by meticulously replicating the original procedure. If we have no faith in a particular experimental finding, it is incumbent on us as good scientists to identify, wherever we can, the factor or factors that gave rise to that spurious finding.


Wednesday 5 September 2012

How Research Methods Textbooks Fail Final Year Project Students


The time is about to come when all those fresh-faced final year empirical project students will be filing through our office doors looking for the study that’s going to give them the first-class degree they are craving.

Unfortunately, as a supervisor you’ll find that their mind isn’t focused on doing scientific research – it’s focused on getting a good mark for their project. This means that most of your time as a supervisor will be spent not on training your undergraduate supervisees to do research (as it should be), but on (1) telling them what they have to do to write up a good project, and (2) reassuring them that they’ve understood what you said is required for writing up a good project.

As an empirical scientist you might believe that the most important part of the training for your undergraduate project students is learning about experimental design and statistical analysis. Wrong. Absolutely no over-arching information about experimental design will be absorbed by the student – they will simply lie awake at night needing to know how many participants they will have to test and, more importantly, how they will get those participants.

Most project students have a small notebook they’ve bought from W H Smiths, in which they write down the pressing questions they need to ask their supervisor at the next supervision session (just in case they forget). Questions like “Can I do this experiment in my bathroom in my student flat?”, “Can I test my mother’s budgerigar if I’m short of participants?”, “Will it matter if my breath smells of cider when I’m coding my data?”, “Do I need to worry about where I put the decimal point?”, “Will it affect my participants’ behaviour if I dye my hair day-glo orange in the middle of the study?”… and so on.

I believe that project students ask these kinds of questions because none of these questions are properly addressed or answered in standard Research Methods textbooks – an enormous oversight! Research Methods textbooks mince around talking about balanced designs, counterbalancing, control groups, demand effects, and so on. But what about the real practical issues facing a final year empirical project student? “How will I complete my experiment if I split up with my boyfriend and can’t use his extended local family as participants?”, “Where can I find those jumbo paper clips that I need to keep all the response sheets together?”, “Why do I need to run a control condition when I could be skiing in Austria?”

Perhaps we need some new, young, motivated research methods authors to provide us with the textbooks that will answer the full range of questions asked by undergraduate empirical project students. Sadly, at present, these textbooks answer the questions that students aren’t interested in asking – let’s get real with undergraduate research training!