Friday 23 March 2012

Whatever happened to Learning Theory?


I’ve already blogged about B. F. Skinner, and – coincidentally – he would just have celebrated his 108th birthday. But it led me to think about how learning theory in general seems to have drifted slowly out of our undergraduate psychology curricula, out of our animal and experimental psychology labs, and out of the list of high-impact journals. I don’t mean just ‘behaviourism’, I mean learning theory and all that it embraces – from schedules of reinforcement and behaviour analysis, to associative learning and cognitive inferential models of conditioning – in both animals and humans.

In 2010, the BPS Curriculum for the Graduate Basis for Chartered Membership of the Society listed ‘learning’ as a topic under Cognitive Psychology (that would have jarred with Prof. Skinner!), and not under Biological Psychology. Interestingly, 10 years ago it was listed under both cognitive and biological psychology. In my own institution I know that learning theory has become a relatively minor aspect of Level 1 and Level 2 teaching. Until 2 years ago, I offered a final-year elective called ‘Applications of Learning Theory’, but despite its applied, impact-related title the course usually recruited fewer than 10 students. I usually had to begin the first two lectures by covering the basics of associative learning: if these students had been taught anything about learning theory in Years 1 and 2, they had retained none of it. This state of affairs is quite depressing in an institution that twenty-five years ago had one of the leading animal learning labs in the world, inhabited by researchers such as Nick Mackintosh, Tony Dickinson, John Pearce, and Bob Boakes, to name but a few.

I haven’t done anything like a systematic survey of what different Psychology Departments teach in their undergraduate courses, but I suspect that learning theory no longer commands anything more than a couple of basic lectures at Level 1 or Level 2 in many departments. To be fair, most contemporary Introduction to Psychology texts contain a chapter devoted to learning (e.g. 1, 2), but this is usually descriptive and confined to the difference between instrumental and classical conditioning, coverage of schedules of reinforcement (if you’re lucky), and a sizable focus on why learning theory has applied importance.

So why the apparent decline in the pedagogic importance of learning theory? I suspect the reasons are multiple. Most obviously, learning theory got overtaken by cognitive psychology in the 1980s and 1990s. There is an irony to this in the sense that during the 1980s, the study of associative learning had begun to develop some of the most innovative inferential methods to study what were effectively ‘cognitive’ aspects of animal learning (3, 4) and had also given rise to influential computational models of associative learning such as the Rescorla-Wagner and Pearce-Hall models (5,6). These techniques gave us access to what was actually being learnt by animals in simple (and sometimes complex) learning tasks, and began to provide a map of the cognitive mechanisms that underlay associative learning. This should have provided a solid basis from which animal learning theory could have developed into more universal models of animal consciousness and experience – but unfortunately this doesn’t appear to have happened on the scale that we might have expected. I’m still not sure why this didn’t happen, because at the time this was my vision for the future of animal learning, and one I imparted enthusiastically to my students. I think that the study of associative learning got rather bogged down in struggles over the minutiae of learning mechanisms, and as a result lost a lot of its charisma and appeal for the unattached cognitive researcher and the inquisitive undergraduate student. It certainly lost much of its significance for applied psychologists, which was one of the attractions of the radical behaviourist approach to animal learning.
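
For readers who have never come across these models, here is a minimal sketch of the Rescorla-Wagner learning rule in Python (the parameter values, the function name and the simple blocking design are my own illustrative choices, not anything taken from the original papers). The point is simply to show the kind of trial-by-trial, prediction-error-driven updating that these models formalize.

```python
# A minimal sketch of the Rescorla-Wagner (1972) learning rule. The associative
# strength V of each conditioned stimulus (CS) is nudged on every trial by a
# fraction of the prediction error: the difference between the outcome that
# actually occurred (lambda) and the outcome predicted jointly by all the CSs
# present on that trial (the sum of their current Vs).

def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam_present=1.0, lam_absent=0.0):
    """trials: a list of (set_of_CSs_present, US_occurred) pairs, in order."""
    V = {}  # associative strength of each CS, starting at zero
    for cues, us in trials:
        prediction = sum(V.get(cs, 0.0) for cs in cues)
        lam = lam_present if us else lam_absent
        error = lam - prediction          # prediction error on this trial
        for cs in cues:                   # only CSs present on the trial are updated
            V[cs] = V.get(cs, 0.0) + alpha * beta * error
    return V

# Kamin's blocking design: pre-train A -> US, then train the compound AB -> US.
# Because A already predicts the US by the time B is introduced, the prediction
# error on the compound trials is tiny, so B acquires almost no associative
# strength -- learning about B is 'blocked'.
trials = [({"A"}, True)] * 20 + [({"A", "B"}, True)] * 20
print(rescorla_wagner(trials))  # V['A'] ends near 1.0, V['B'] stays near 0.0
```

The Pearce-Hall model (6) takes a different route – it lets the associability of a stimulus vary with how surprising recent outcomes have been – but the same error-driven logic is at its heart.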

A second factor in the decline of learning theory was almost certainly the decline in the number of animal labs in psychology departments – brought about in the 1980s and 1990s primarily by a vocal and active animal lib movement. This was certainly one factor that persuaded me to move from doing animal learning studies to human learning studies. I remember getting back into work one Monday morning to find leaflets pushed through the front door of the Psychology building by animal lib activists. These leaflets highlighted what they described as the cruel research carried out by Dr. Davey in Psychology, who tortured rats by putting paper clips on their tails (7). At the time this was a standard technique used to generate stress in rats to investigate the effects of stress on feeding and drinking, but it did lead me to think hard about whether this research was important and whether there were other forms of research I should be moving towards. It was campaigns like this that led many Universities to either centralize their animal experiment facilities or to abandon them altogether. Either way, it made animal research more difficult to conduct and certainly more difficult for the interested undergraduate and postgraduate student to access.

In my own case, allied to the growing practical difficulties associated with doing animal learning research was the growing intellectual solitude of sharing a research topic with an ever-decreasing number of researchers. In the 1980s I was researching performance models of Pavlovian conditioning – basically trying to define the mechanisms by which Pavlovian associations get translated into behaviour – particularly in unrestrained animals. Eventually it became clear to me that maybe only two or three other people worldwide shared this passion. Neither was it going to set the world on fire (a bit like my doctoral research on the determinants of the fixed-interval post-reinforcement pause in rats!). To cut a long story short, I decided to abandon animal research and invest my knowledge of learning theory in more applied areas that held a genuine interest for the lay person. Perhaps surprisingly, it was Hans Eysenck who encouraged me to apply my knowledge of learning theory to psychopathology. During the 1980s, conditioning theory was getting a particularly bad press in the clinical psychology literature, and after I chaired an invited keynote by Hans at a BPS London Conference, he insisted I use my knowledge of conditioning to demonstrate that experimental approaches to psychopathology still had some legs (but only after he’d told me how brilliant his latest book was). This did lead to a couple of papers in which I applied my knowledge of inferential animal learning techniques to conditioning models of anxiety disorders (8,9). But for me, these were the first steps away from learning theory and into a whole new world of research that extended beyond one other researcher in Indiana, and some futile attempts to attach paper clips to the tails of hamsters (have you ever tried doing that? If not – don’t!) (7).

I was recently pleasantly surprised to discover that both the Journal of the Experimental Analysis of Behavior and the Journal of Applied Behavior Analysis are still going strong as bastions of behaviour analysis research. Sadly, Animal Learning & Behavior has now become Learning & Behavior, and Quarterly Journal of Experimental Psychology B (the comparative half traditionally devoted largely to animal learning) has been subsumed into a single cognitive psychology QJEP. But I was even more pleased to find, when I put ‘Experimental Analysis of Behaviour Group’ into Google, that the group was still alive and kicking (http://eabg.bangor.ac.uk). This group was the conference hub of UK learning theory during the 1970s and 1980s; affectionately known as ‘E-BAG’, it provided a venue for regular table football games between graduate students from Bangor, Oxford, Cambridge, Sussex and Manchester, amongst others.

I’ve known for many years that I still have a book in me called ‘Applications of Learning Theory’ – but it will never get written, because there is no longer a market for it. That’s a shame, because learning theory still has a lot to offer. It offers a good grounding in analytical thinking for undergraduate students; it provides a range of imaginative inferential techniques for studying animal cognition; it provides a basic theoretical model for response learning across many areas of psychology; it provides a philosophy of explanation for understanding behaviour; and it provides a technology of behaviour change – not many topics in psychology can claim that range of benefits.

(1)      Davey G C L (2008) Complete Psychology. Hodder HE.
(2)      Hewstone M, Fincham F D & Foster J (2005) Psychology. BPS Blackwell.
(3)      Rescorla R A (1980) Pavlovian second-order conditioning. Hillsdale, NJ: Erlbaum.
(4)      Dickinson A (1980) Contemporary animal learning theory. Cambridge: Cambridge University Press.
(5)      Rescorla R A & Wagner A R (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A H Black & W F Prokasy (Eds) Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts.
(6)      Pearce J M & Hall G (1980) A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review, 87, 532-552.
(7)      Meadows P, Phillips J H & Davey G C L (1988) Tail-pinch elicited eating in rats (Rattus norvegicus) and hamsters (Mesocricetus auratus). Physiology & Behavior, 43, 429-433.
(8)      Davey G C L (1992) Classical conditioning and the acquisition of human fears and phobias: Review and synthesis of the literature. Advances in Behaviour Research & Therapy, 14, 29-66.
(9)      Davey G C L (1989) UCS revaluation and conditioning models of acquired fears. Behaviour Research & Therapy, 27, 521-528.

Friday 2 March 2012

When measuring science distorts it: 8 things that muddy the waters of scientific integrity and progress

If you are a scientist of almost any persuasion, one of the things you probably cherish most dearly is the objectivity and integrity of the scientific process – a process that leads us to discover and communicate what we loosely like to call ‘the truth’ about our understanding of things. But maybe the process is not as honed as it should be, and maybe it’s not as efficient as it could be? In many cases it is the desire to quantify and evaluate research output for purposes other than understanding scientific progress that is the culprit, distorting the scientific process to the point where the measurement itself becomes an obstacle to good and efficient science. Below are 8 factors that lead to a distortion of the scientific process – many of which have been brought about by the desire to quantify and evaluate research. Scientific communities have discussed many of these factors previously on various social networks and in scientific blogs, but I thought it would be useful to bring some of them together.

1.         Does measurement of researchers’ scientific productivity harm science? Our current measures of scientific productivity are crude, but are now so universally adopted that they matter for all aspects of the researcher’s career, including tenure (or unemployment), funding (or none), success (or failure), and research time (or teaching load) (Lawrence, 2008)[1]. Research productivity is measured by number of publications, number of citations, and the impact factors of journal outlets – measures that are then rewarded with money (either in the form of salaries or grants). Lawrence argues that if you need to publish “because you need a meal ticket, then you end up publishing when you are hungry – not when the research work is satisfactorily completed”. As a result, work is regularly submitted for publication when it is unfinished, when the ideas are not fully thought through, or when the data and arguments are incomplete. Publication – not the quality of the scientific knowledge reported – is paramount.

2.         But the need to publish in high-impact journals has another consequence. Journal impact factors correlate with the number of retractions a journal generates rather than with the number of citations an individual paper will receive (http://bit.ly/AbFfpz)[2]. One implication of this is that the rush to publish in high-impact journals increases the pressure to ‘maybe’ “forget a control group/experiment, or leave out some data points that don’t make the story look so nice” – all behaviours that will decrease the reliability of the scientific reports being published (http://bit.ly/ArMha6).

3.         The careerism that is generated by our research quality and productivity measures not only fosters incomplete science at the point of publication, it can also give rise to exaggeration and outright fraud (http://bit.ly/AsIO8B). There are recent prominent examples of well-known and ‘respected’ researchers faking data on an almost industrial scale. One recent example of extended and intentional fraud is the Dutch social psychologist Diederik Stapel, whose retraction of a fraudulent paper was published in the journal Science (http://bit.ly/yH28gm)[3]. In this and possibly other cases, the rewards of publication and citation outweighed the risks of being caught. Are such cases of fraudulent research isolated examples or the tip of the iceberg? They may well be the tip of a rather large iceberg. More than 1 in 10 British-based scientists or doctors report witnessing colleagues intentionally altering or fabricating data during their research (http://reut.rs/ADsX59), and a survey of US academic psychologists suggests that around 1 in 10 has falsified research data (http://bit.ly/yxSL1A)[4]. If these findings can be extrapolated generally, then we might expect that 1 in 10 of the scientific articles we read contains, or is based on, doctored or even faked data.

4.         Journal impact ratings have another negative consequence for the scientific process. There is an increasing tendency for journal editors to reject submissions without review – not because of any lack of methodological or theoretical rigour, but on the basis that the research lacks “novelty or general interest” (http://bit.ly/wvp9V8). This tends to be editors attempting to protect the impact rating of their journal by rejecting submissions that might be technically and methodologically sound, but are unlikely to get cited very much. One particular type of research that falls foul of this process is likely to be replication. Replication is a cornerstone of the scientific method, yet failures to replicate appear to have a low priority for publication – even when the original study being replicated is controversial (http://bit.ly/AzyRXw). The fact that citation rate has become the gold standard for indicating the quality of a piece of research or the standing of a particular researcher misses the point that high citation rates can also result from controversial but unreplicable findings. This has led some scientists to advocate the use of an ‘r’ or ‘replicability’ index to supplement the basic citation index (http://bit.ly/xQuuEP).

5.         Whether a research finding is published and considered to be methodologically sound is usually assessed against standard statistical criteria (formal statistical significance, typically a p-value of less than 0.05). But the probability that a research finding is true depends not just on the statistical power of the study and the level of statistical significance, but also on other factors to do with the context in which research on that topic is being undertaken. As John Ioannidis has pointed out, “…a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.” (Ioannidis, 2005)[5]. This leads to the conclusion that most research findings are false for most research designs and for most fields! (A rough worked example of the arithmetic behind this conclusion is sketched below.)
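
To give a rough sense of that arithmetic, here is a small worked example in Python using the positive predictive value formula from the Ioannidis paper (the scenario numbers and the function name are illustrative choices of my own, not anything taken from the paper): the probability that a nominally ‘significant’ finding is actually true depends heavily on the prior odds that the tested relationship exists and on statistical power, not just on the 0.05 threshold.

```python
# A rough illustration of Ioannidis' argument: the positive predictive value
# (PPV) -- the probability that a nominally 'significant' finding is actually
# true -- depends on the pre-study odds R that the tested relationship is real
# and on the study's power (1 - beta), not just on the significance level.
# Formula as given in Ioannidis (2005), ignoring bias and competing teams.

def ppv(R, power, alpha=0.05):
    """Probability that a statistically significant finding reflects a true effect."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Illustrative (made-up) scenarios:
print(ppv(R=1.0, power=0.8))    # well-powered test, 1:1 prior odds: PPV ~0.94
print(ppv(R=0.1, power=0.5))    # modest power, 1:10 prior odds: PPV 0.50
print(ppv(R=0.05, power=0.2))   # small exploratory study, 1:20 prior odds: PPV ~0.17
```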

6.         In order to accommodate the inevitable growth in scientific publication, journals have increasingly taken to publishing research in shorter formats than the traditional scientific article. These short reports limit the length of an article, but the need for this type of article may well be driven by the academic researcher’s need to publish in order to maintain their career rather than the publisher’s need to optimize limited publishing resources (e.g. pages in a printed journal edition). The advantage for researchers – and their need to publish and be cited – is that on a per-page basis shorter articles are cited more frequently than longer articles (Haslam, 2010)[6]. But short reports can lead to the propagation of ‘bad’ or ‘false’ science. For example, shorter, single-study articles can be poor models of science because longer, multiple-study articles often include confirmatory full or partial replications of the main findings (http://nyti.ms/wkzBpS). In addition, small studies are inherently unreliable and more likely to generate false positive results (Bertamini & Munafò, 2012)[7]. Many national research assessment exercises not only require that the quality of research be assessed in some way, but also specify a minimum quantity requirement. Short reports – with all the disadvantages they may bring to scientific practice – will have a particular attraction to those researchers under pressure to produce quantity rather than quality.

7.         The desire to measure the applied “impact relevance” of research – especially in relation to research funding and national research assessment exercises – has inherent dangers for identifying and understanding high-quality research. For example, in the forthcoming UK research excellence framework, lower-quality research for which there is good evidence of “impact” may be given a higher value than higher-quality outputs for which an “impact” case is less easy to make (http://bit.ly/y7cqPW). This shift towards the importance of research “impact” in defining research quality has the danger of encouraging researchers to pursue research relevant to short-term policy agendas rather than longer-term theoretical issues. The associated funding consequence is that research money will drift towards those organizations pursuing policy-relevant rather than theory-relevant research, with the former being inherently labile and dependent on changes in both governments and government policies.

8.         Finally, when discussing whether funding is allocated in a way appropriate to optimizing scientific progress, there is the issue of whether we fund researchers when they’re past their best. Do we neglect those researchers in their productive prime who can add fresh zest and ideas to the scientific research process? Research productivity peaks at age 44 (on average 17 years after a researcher’s first publication), but research funding peaks at age 53 – suggesting that productivity declines even as funding increases (http://bit.ly/yQUFis). It’s true that these are average statistics, but it would be interesting to know whether there are inherent factors in the funding process that favour past reputation over current productivity.

[1] Lawrence P A (2008) Lost in publication: How measurement harms science. Ethics in Science & Environmental Politics, 8, 9-11.
[2] Fang F C & Casadevall A (2011) Retracted science and the retraction index. Infection & Immunity, doi:10.1128/IAI.05661-11.
[3] Stapel D A & Lindenberg S (2011) Retraction. Science, 334, 1202.
[4] John L K, Loewenstein G & Prelec D (in press) Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science.
[5] Ioannidis J P A (2005) Why most published research findings are false. PLoS Medicine, doi:10.1371/journal.pmed.0020124.
[6] Haslam N (2010) Bite-size science: Relative impact of short article formats. Perspectives on Psychological Science, 5, 263-264.
[7] Bertamini M & Munafò M R (2012) Bite-size science and its undesired side effects. Perspectives on Psychological Science, 7, 67-71.