Are You Registering That? An Interview with Prof. Chris Chambers

There is no panacea for bad science, but if there were, it would certainly resemble Registered Reports. Registered Reports are a novel publishing format in which authors submit only the introduction, methods, and planned analyses before collecting any data. Peer review thus focuses solely on the soundness of the research proposal and is not contingent on the “significance” of the results (Chambers, 2013). In one stroke, this simple idea combats publication bias and researchers’ degrees of freedom, makes the distinction between exploratory and confirmatory research explicit, and calms the researcher’s mind. A number of journals now offer Registered Reports, and this is arguably the most important step journals can take to push psychological science forward (see also King et al., 2016). For a detailed treatment of Registered Reports, see here, here, here, and Chambers (2015).

Picture of Chris Chambers

Chris Chambers is the initiator of the “Registration Revolution”, the man behind the movement. He introduced Registered Reports into psychology, has written publicly about the issues we currently face in the field, and has recently published a book, “The 7 Deadly Sins of Psychology”, in which he masterfully exposes the shortcomings of current academic customs and inspires change. He is somebody who cares deeply about the future of our field, and he is actively changing it for the better.

We are very excited to present you with an interview with Chris Chambers. How did he become a researcher? Where did he get the idea of Registered Reports from? What is his new book about, and what can we learn from hard sciences such as physics? Find out below!


Tell us a bit about your background. How did you get into Psychology and Cognitive Neuroscience? What is the focus of your research?

Since my teenage years I had been interested in psychology (the Star Trek: The Next Generation episode “Measure of a Man” left me pondering the mind and consciousness for ages!) but I never really imagined myself as a psychologist or a scientist – those seemed like remote and obscure professions, well out of reach. It wasn’t until the final year of my undergraduate degree that I developed a deep interest in the science of psychology and decided to make a run for it as a career. Applying to do a PhD felt like a very long shot. I have this distinct memory, back in 1999, scrolling down the web page of accepted PhD entrants. I searched in vain for my name among the list of those who had been awarded various prestigious scholarships, and as I neared the bottom I began pondering alternative careers. But then, as if by miracle, there was my name at the end. I was last on the list, the entrant with the lowest successful mark out of the entire cohort. For the next two and a half years I tried in vain to replicate a famous US psychologist’s results, and then had to face having this famous psychologist as a negative reviewer of every paper we submitted. One day – about two years into my PhD – my supervisor told me about this grant he’d just been awarded to stimulate people’s brains with electromagnetic fields. He asked if I wanted a job and I jumped at the chance. Finally I could escape Famous Negative Reviewer Who Hated Me! Since then, a large part of my research has been in cognitive neuroscience, with specific interests in attention, consciousness and cognitive control.

You have published an intriguing piece on “physics envy” (here). What can psychology learn from physics, and what can psychologists learn from physicists?

Psychology can learn many lessons from physics and other physical sciences. The physics community hinges reputation on transparency and reproducibility – if your results can’t be repeated then they (and you) won’t be believed. They routinely publish their work in the form of pre-prints and have successfully shaped their journals to fit with their working culture. Replication studies are normal practice, and when conducted are seen as a compliment to the importance of the original work rather than (as in psychology) a threat or insult to the original researcher. Physicists I talk to are bemused by our obsession with impact factors, h-indices, and authorship order – they see these as shallow indicators for bureaucrats and the small minded. There are career pressures in physics, no doubt, but at the risk of over-simplifying, it seems to me that the incentives for individual scientists are in broad alignment with the scientific objectives of the community. In psychology, these incentives stand in opposition.

One of your areas of interest is in the public understanding of science. Can you provide a brief primer of the psychological ideas within this field of research?

The way scientists communicate with the public is crucial in so many ways and a large part of my work. In terms of outreach, one of my goals on the Guardian science blog network is to help bridge this gap. We’ve also been exploring science communication in our research. Through the Insciout project we’ve been investigating the extent to which press releases about science and health contribute to hype in news reporting, and the evidence suggests that most exaggeration we see in the news begins life in press releases issued by universities and academic journals. We’ve also been looking at how readers interpret common phrases used in science and health reporting, such as “X can cause Y” or “X increases risk of Y”, to determine whether the wording used in news headlines leads readers to conclude that results are more deterministic (i.e. causal) than the study methods allow. Our hope is that this work can lead to evidence-based guidelines for preparation of science and health PR material by universities and journals.

I’m also very interested in mechanisms for promoting evidence-based policy more generally. Here in the UK I’m working with several colleagues to establish a new Evidence Information Service for connecting research academics and policy makers, with the aim to provide parliamentarians with a rapid source of advice and consultation. We’re currently undertaking a large-scale survey of how the academic community feels about this concept – the survey can be completed here.

You have recently published a book titled “The 7 Deadly Sins of Psychology”. What are the sins and how can psychologists redeem themselves?

The sins, in order, are bias, hidden flexibility, unreliability, data hoarding, corruptibility, internment and bean counting. At the broadest level, the path to redemption will require wide adoption of open research practices such as study preregistration, open data and open materials, and wholesale revision of the systems we use to determine career progression, such as authorship rank, journal rank, and grant capture. We also need to establish robust provisions for detecting and deterring academic fraud while at the same time instituting genuine protections for whistleblowers.

How did you arrive at the idea of Registered Reports for Psychology? What was the initial response from journals that you have approached? How has the perception of Registered Reports changed over the years?

After many years of being trained in the current system, I basically just had enough of publication bias and the “academic game” in psychology – a game where publishing neat stories in prestigious journals and attracting large amounts of grant funding is more rewarded than being accurate and honest. I reached a breaking point (which I write about in the book) and decided that I was either going to do something else with my life or try to change my environment. I opted for the latter and journal-based preregistration – what later became known as Registered Reports – seemed like the best way to do it. The general concept behind Registered Reports had been suggested, on and off, for about 50 years but nobody had yet managed to implement it. I got extremely lucky in being able to push it into the mainstream at the journal Cortex, thanks in no small part to the support of chief editor Sergio Della Sala.

The initial response from journals was quite cautious. Many were – and still are – concerned about whether Registered Reports will somehow produce lower quality science or reduce their impact factors. In reality, they produce what in my view are among the highest quality empirical papers you will see in their respective fields – they are rigorously reviewed with transparent, high-powered methods, and the evidence also suggests that they are cited well above average. Over the last four years we’ve seen more than 50 journals adopt the format (including prominent journals such as Nature Human Behaviour and BMC Biology) and the community has warmed up to them as published examples have begun appearing. Many journals now see them as a strength and a sign that they value reproducible open science. They are realising that adding Registered Reports to their arsenal is a small and simple step for attracting high-quality research, and that having them widely available is potentially a giant leap for science as a whole.

Max Planck, the famous German physicist, once said that science advances one funeral at a time. Let’s hope that is not true – we simply don’t have the time for that. What skills, ideas, and practices should the next generation of psychological researchers be familiar and competent with? What further resources can you recommend?

I agree – there is no time to wait for funerals, especially in our unstable political climate. The world is changing quickly and science needs to adapt. I believe young scientists can protect themselves in two ways: first, by learning open science and robust methods now. Journals and funders are becoming increasingly cognisant of the need to ensure greater reproducibility and many of the measures that are currently optional will inevitably become mandatory. So make sure you learn how to archive your data, or preregister your protocol. Learn R and become familiar with the underlying philosophy of frequentist and Bayesian hypothesis testing. Do you understand what a p value is? What power is and isn’t? What a Bayes factor tells you? My second recommendation is to recognise these tumultuous times in science for what they are: a political revolution. It’s easy for more vulnerable members of a community to be crushed during a revolution, especially if isolated, so young scientists need to unionise behind open science to ensure that their voices are heard. Form teams to help shape the reforms that you want to see in the years ahead, whether that’s Registered Reports or open data and materials in peer review, or becoming a COS Ambassador. One day, not long from now, all this will be yours so make sure the system works for you and your community.
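The distinction Chambers draws between a p value and power can be made concrete with a short simulation: a p value belongs to one experiment, whereas power is a long-run property of the whole design. Here is a minimal sketch in Python (the effect size, sample size, and seed are illustrative assumptions, not values from the interview):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)

# Hypothetical two-group design: true standardised effect d = 0.4,
# n = 30 per group (numbers chosen purely for illustration).
d, n = 0.4, 30

# A single experiment yields a single p value...
a = rng.normal(0.0, 1.0, n)
b = rng.normal(d, 1.0, n)
t, p = stats.ttest_ind(a, b)

# ...whereas power is a property of the design: the long-run
# proportion of such experiments that would reach p < .05.
sims = 5000
hits = 0
for _ in range(sims):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(d, 1.0, n)
    if stats.ttest_ind(x, y).pvalue < 0.05:
        hits += 1
power = hits / sims  # analytic power is about 0.33 for this design

print(f"one p value: {p:.3f}; estimated power: {power:.2f}")
```

The sobering point of the exercise: with these (quite typical) numbers, roughly two out of three true effects would be missed, which is exactly the kind of design flaw that the review stage of a Registered Report is meant to catch before data collection.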

Fabian Dablander

Fabian Dablander is currently finishing his thesis in Cognitive science at the University of Tübingen and Daimler Research & Development. He is interested in Bayesian statistics, innovative ways of data collection, open science, and effective altruism. You can find him on Twitter @fdabl.



Life is a box of chocolates

Sitting in a classroom being lectured, I often felt a sense that I should not question what I was being taught. This was not due to any fault of the lecturers, who were mostly very welcoming of students’ opinions. However, simply knowing that this was an area they had spent years researching, and seeing them staring at their computer screens or with their heads in a book every time you looked through their office windows, gave the sense that they must have all the answers and a justified reason for their opinions, whereas mine always felt too subjective to be taken seriously. During my undergraduate degree, my essays became more and more focused on the areas we had been taught in class and less inclusive of the breadth of my own opinions. This was simply because arguing a controversial position seemed to lead to more frustration in convincing the lecturer than arguing the ‘popular’ approach.

Continue reading