Blog Contributors

Ivan Flis

Ivan Flis is a PhD student in History and Philosophy of Science at the Descartes Centre, Utrecht University. His research focuses on quantitative methodology in psychology, its history and application, and its relation to theory construction in psychological research. He served as an editor of JEPS for three years in previous mandates.


What Do Whigs Have To Do With History of Psychology? on Thursday, January 30th, 2014

The state of Open Access in Europe – Horizon 2020 on Monday, October 1st, 2012

The State of Open Access in Europe – Finch Report on Wednesday, August 1st, 2012

Self-archiving and psychology journals on Sunday, June 10th, 2012

Research oriented social networking? on Thursday, May 10th, 2012

Podcast with Nick Shockey: Open Access and psychology students on Tuesday, May 1st, 2012

The implications of bite-size science on Tuesday, March 20th, 2012

ACTA, SOPA, PIPA, RWA and what do they have to do with psychologists on Monday, February 20th, 2012

What happens to studies that accept the null hypothesis? on Sunday, January 1st, 2012

Who publishes the most reputable journals in psychology? on Thursday, November 10th, 2011

Martin Vasilev

Martin Vasilev is an editor of JEPS. He is a final-year undergraduate student of Psychology at the University of Sofia, Bulgaria, and the author of some of the most popular posts on the JEPS Bulletin (see, for example, his post on writing literature reviews, which was reprinted in the MBA Edge, a magazine for prospective Malaysian postgraduate students).


What Are The Most Common APA Style Mistakes Done By Students? on Tuesday, January 15th, 2013

How to write a good title for journal articles on Saturday, September 1st, 2012

APA Style: Abbreviations on Saturday, March 10th, 2012

How to critically evaluate internet-based sources? on Saturday, August 20th, 2011

How to write a good literature review article? on Wednesday, July 20th, 2011

Zorana Zupan

Zorana Zupan graduated from the University of Belgrade, Serbia, with a diploma in Psychology and an MSc in Psychology Research. Her research interests include developmental psychology, developmental psychopathology, and cognitive psychology.


Perfect references in no time: An introduction to free referencing software on Wednesday, February 1st, 2012

How to critically evaluate the quality of a research article? on Monday, August 1st, 2011

Deirdre Walsh

Deirdre Walsh is a doctoral student in Counselling Psychology at Trinity College Dublin. She recently completed an MSc in Clinical Child Psychology at Anglia Ruskin University, United Kingdom, and has broad experience conducting psychological studies using qualitative research methods. She aspires to be a counselling psychologist and hopes to apply the knowledge gained from her research in practice.


The journey towards discovering people: Why I love qualitative research on Thursday, September 20th, 2012

Zoey Hudson

Zoey Hudson graduated from Anglia Ruskin University with a BSc (Hons) in Psychology in 2012. She is currently working with Dr. Richard Piech on research into reciprocity and trust. When Zoey is not at work, she volunteers for a charity, helping it find sources of funding. Rationality is a particular interest of Zoey’s, and one she would like to pursue in further research. As applying for funding is a necessary part of conducting research, Zoey hopes that her work and volunteering experience will serve her well in a career in psychology research.


Life is a box of chocolates on Saturday, October 20th, 2012

Maris Vainre

Managing and organising literature on Friday, April 20th, 2012

Tips for effective literature search on Tuesday, April 10th, 2012

APA style: How to format the references list? on Friday, January 20th, 2012

How to format headings in APA Style? on Tuesday, December 20th, 2011

Common mistakes made in APA style on Sunday, November 20th, 2011

What makes a presentation good? on Saturday, October 1st, 2011

Tomatoes against procrastination on Saturday, September 10th, 2011

Lost in translating? on Wednesday, August 10th, 2011

Can you find an article in 5 sec? The world of DOIs on Friday, June 10th, 2011

How to make (scientific) texts sound professional? on Wednesday, June 1st, 2011

Chris Noone

Chris Noone is a PhD student at the School of Psychology at the National University of Ireland, Galway. His research focuses on the effects of mood on higher-order cognition. He is the Member Representative Coordinator on the Board of Management of EFPSA.


Student Action for Open Access on Wednesday, October 30th, 2013

Replication Studies: It’s Time to Clean Up Your Act, Psychologists! on Wednesday, January 30th, 2013

The state of Open Access in Europe – Right to Research Coalition on Monday, August 20th, 2012

Yee Row Liew

Yee Row Liew is an editor of the JEPS Bulletin with a research background that ranges from plant genetics to psychology. Having recently completed a postgraduate degree in Psychological Research Methods at Anglia Ruskin University, United Kingdom, she now works as a research assistant at the Global Sustainability Institute. She hopes to deepen her knowledge of emotion, cognition, and motivation in pursuit of her love for scientific research.


Confessions of a Research Blog Editor on Monday, April 15th, 2013

Say again?: Scientific writing and publishing in non-English speaking countries on Sunday, December 30th, 2012

What makes a good research question? on Monday, September 10th, 2012

Sina Scherer

As part of EFPSA's JEPS team, Sina Scherer works as an editor of the JEPS Bulletin. She is currently in the final year of her Master's programme in Work and Organizational Psychology at the Westfälische Wilhelms-Universität Münster. Her fields of interest include intercultural psychology, personality and organizational psychology, and health psychology.


How to Collect Data Effectively? An Overview of the Best Online Survey Providers on Friday, November 15th, 2013

The structure of an APA research paper on Thursday, November 15th, 2012

The transformation of science on Friday, August 10th, 2012

Bias in psychology: Bring in all significant results on Friday, June 1st, 2012

Scaring European developments threaten Open Access on Sunday, April 1st, 2012

A revolution in scientific publishing? on Friday, February 10th, 2012

Journals in Psychology on Tuesday, January 10th, 2012

How to search for literature? on Saturday, December 10th, 2011

Lessons from a published fake study on Tuesday, November 1st, 2011

Written by the hands of a ghost on Tuesday, September 20th, 2011

Pedro Almeida

Pedro Almeida is a graduate student at the University of Coimbra, Portugal. His main research interests are group development and intergroup relations. He is an Editor and Webmaster for the Journal of European Psychology Students (JEPS).


The Best of JEPS Bulletin in 2013 on Thursday, December 26th, 2013

Looking for New Contributors on Monday, September 23rd, 2013

Why We Publish: The Past, Present, and Future of Science Communication on Tuesday, April 30th, 2013

The origins of APA style (and why there are so many rules) on Tuesday, July 10th, 2012

Peter Edelsbrunner

Peter Edelsbrunner is a PhD student at the Institute for Behavioural Sciences at ETH Zurich. He completed his Master's degree in Psychology at the University of Graz. He is interested in conceptual change, reasoning processes, and structural equation modelling. With his strong methodological background, he hopes to combine cognitive theory and psychometrics in his future research.


Introducing jamovi: Free and Open Statistical Software Combining Ease of Use with the Power of R on Thursday, March 23rd, 2017

Structural equation modeling: What is it, what does it have in common with hippie music, and why does it eat cake to get rid of measurement error? on Monday, December 14th, 2015

Bayesian Statistics: What is it and Why do we Need it? on Monday, November 17th, 2014

Advice for the Next Generation of Researchers in Psychology from an Experienced Editor on Friday, November 30th, 2012

Research as an international project on Thursday, December 1st, 2011

Julia Ouzia

Julia Ouzia is a German national who has lived in the United Kingdom for over seven years. In that time she has completed a Bachelor's degree in Psychology and a Master's degree in Clinical Child Psychology. She is currently pursuing a PhD in Brain and Cognition at Anglia Ruskin University, focusing on bilingual learning and cognition. She has also been part of the Executive Board and the Board of Management of EFPSA.


Keep calm and be creative: Use mixed methods! on Wednesday, October 10th, 2012

How to be an academic rock star via poster presentation on Friday, July 20th, 2012

In the shoes of a peer-reviewer on Thursday, March 1st, 2012

How to stop being busy and become productive

With the rise of social media, potential distractions have risen to unseen levels; they dominate our daily lives. Do you check Facebook, Twitter, Snapchat, Instagram, or email constantly? Do you have an embarrassing relationship with your alarm clock’s snooze button? Do you pass on social invites, telling people that you are too busy? As a generation, we have lost the ability to focus sharply on the task at hand; instead, we work on a multitude of things simultaneously, lamenting that we do not achieve what we set out to achieve.


In this post, we share useful tips, tricks, and tools to help you stay on top of your day and move quickly from task to task, accomplishing the things that matter. In addition to linking to further resources, we suggest a three-stage, actionable program for you to go through in order to stop being busy and start being productive. As we (Fabian and Lea, the authors of this post) have experienced firsthand, making the jump from being busy to being productive — from workaholism to strictly separating work and play, from social exclusion to social inclusion — has the potential to increase quality time spent with friends and family, accelerate skill development, prevent burnout, and raise subjective well-being.

Challenges in the 21st Century

Why would one want to become more productive? In addition to personal reasons — leading a happier, more accomplished, more balanced life — there are societal ones. The 21st century presents us with unique challenges, and the way we tackle them will define the future of our species. The three most important challenges are the exploitation of the Earth (including climate change), income inequality (including world poverty), and the “rise of robots”, that is, digitalisation and its impact on work. In this post, we focus on the latter and argue that, in order to stay lean, one needs to cultivate what Cal Newport calls “deep work” habits, enabling one to adapt quickly to changing work environments. These habits also increase the effectiveness with which we can tackle all three challenges.

Take data science as an example. Few fields move as fast as data science. In its current form, it didn’t even exist fifteen years ago (for a very short history of the field, see this). Now “data scientist” has become the “sexiest job of the 21st century”.

The job market will change dramatically in the coming years. It is predicted that many jobs will fall out of existence, being taken over by machines, and that new jobs will be created (see this study and these books). Humanity is moving at an incredibly fast pace, and each individual’s challenge is to stay sharp amidst all those developments. To do so requires the ability to quickly learn new things, and to spend time productively — the two skills which make you most employable.

Being busy vs being productive

Every day, week, and month we have a number of tasks and obligations we need to address; the way we organize the time spent on getting these done differs strongly among individuals. It is here that the distinction between being busy and being productive becomes apparent.

When we think of someone who is busy, we usually picture a person trying to complete a task while simultaneously thinking about other tasks, checking social media and email, or conversing with other people. This splitting of attention across multiple things at once, while claiming to be working on a really important task, is a dead giveaway; it causes the task at hand to take forever. Oddly enough, this need not bother the busy person. On the contrary, it provides an opportunity to talk at length about being busy, having so much to do, having so many exams, and so on. This leads to cancelled social plans and less time for leisure. Too many things to do, not enough time; one gets more and more frustrated.

A productive person, on the other hand, sets few but clear priorities and defines measurable steps toward her goal. While working, she directs intense, undivided attention at a single activity. Keeping track of progress gives her a clear idea of what has been achieved during the day and what is left for tomorrow.

The distinction between being busy and being productive is at the core of this blog post. Table 1 below gives an overview of what distinguishes these two states.


Table 1. The difference between being busy and being productive.

Learning how to learn

In addition to personal productivity, which will be the focus of the remaining sections, being able to monitor one’s learning progress and to learn new things quickly is another very important skill. Barbara Oakley and Terrence Sejnowski have designed an online course on Coursera called “Learning How To Learn”, in which they discuss, among other things, the illusion of competence, memory techniques, and how to beat procrastination. It is the most popular free course on Coursera, and we highly recommend it.

Tips, tricks, and tools

Note that these are personal recommendations. Most of them are backed by science or common sense, but they need not work for you. Consider this a disclaimer: your mileage may vary.

Manage your time. Time is your most important commodity. You can’t get it back, so consider spending it wisely. To facilitate that, we highly recommend the Bullet Journal. It is an “analog tool designed for the digital world”. All you need is a notebook — we use a Leuchtturm1917, but any other would do, too — and a pen. Here is a video explaining the basics. It combines the idea of keeping track of your time and obligations while providing a space for creativity.

Schedule tasks & eat your frog first. Write down what needs to get done the next day on the evening before. Pick out your most despised task — your frog — and tackle it first thing in the morning. If you eat your frog first, nothing more disgusting can happen for the rest of the day. Doing this mitigates procrastination and provides a sense of accomplishment that keeps your energy levels up.

Avoid social media. Social media and email have operantly conditioned us; we get a kick out of every notification. Thousands of engineers work on features that grab our attention and maximize the time we spend on the platforms they build (see also this fascinating interview). However, checking these platforms disrupts our workflow and thought process. They train us to despise boredom and instill in us the unfortunate need to have something occupy our attention at all times. We therefore recommend fixed time points for checking email, and postponing social media until late afternoon or evening, when energy is low; more important tasks deserve attention during the day, when your mind is still sharp.

We feel that quitting social media altogether is too extreme and would most likely be detrimental to our social life and productivity. However, we did remove social media apps from our phones and we limit the number of times we log onto these platforms per day. We recommend you do the same. You will very soon realize that they aren’t that important. Time is not well spent there.

Stop working. There is a time for work, and there is a time for play. We recommend setting yourself a fixed time at which you stop working. This includes writing and responding to emails. Enjoy the rest of the day: read a book, learn a new skill, meet friends, rest your mind. This lets your mind wander from a focused into a diffuse mode of thinking, which helps with insight problems such as "Thiss sentence contains threee errors." If you do this, you will soon notice a boost in your overall creativity and subjective well-being. Cal Newport has structured his schedule according to this principle, calling it fixed-schedule productivity.

Build the right habits. Being productive is all about building the right habits. And building habits is hard; on average, it takes 66 days to build one, although there is great variability (see Lally et al., 2009, and here). In order to facilitate this process, we recommend Habitica, an app that gamifies destroying bad habits and building good habits; see Figure 1 below.

Figure 1. From left to right: the apps Habitica, Calm, and 7 Minute. The important thing is not to break the chain; this creates a psychological need for continuation. (Note the selection bias here: it took me over a month to get to level 3 in Habitica.) Don’t expect miracles; take small, consistent steps every day.

Work out. In order to create high-quality work, you need to take care of your body; you can’t really be productive when you are not physically fit. Finding an exercise routine that you enjoy and can stick to is one of the best things we have done, and we can only recommend it. Being able to climb stairs without getting out of breath is just one of the many rewards.

Meditate or go for a run. In order to increase your ability to focus and avoid distractions, we recommend meditation. For this purpose, we are using Calm, but any other meditation app, for example Headspace, yields similar results. (Of course, nothing beats meditating in a Buddhist centre.) This also helps during the day when some stressful event happens. It provides you with a few minutes to recharge, and then start into the day afresh. Going for a run, for example, does the same trick.

Someone asked a Zen Master, “How do you practice Zen?”
The master said, “When you are hungry, eat; when you are tired, sleep.”
“Isn’t that what everybody does anyway?”
The master replied, “No, no. Most people entertain a thousand desires when they eat and scheme over a thousand thoughts when they sleep.”

Powernap. This is one of the more unconventional recommendations, but it has worked wonders for our productivity. In the middle of the day, take a short power nap. It provides a boost of energy that lasts until bedtime (for more, see this).

Process versus Product. When starting to work, focusing on process rather than product is crucial. Set yourself a timer for, say, 25 minutes and fully concentrate on the task at hand. Take a short break, then start the process again. In this way, you will work in bursts of concentrated, deep focus that bring you step by step towards your final outcome, say, a finished blog post.

This approach is reminiscent of the way Beppo, the road sweeper, works in Michael Ende’s book Momo. About his work, he says

“…it’s like this. Sometimes, when you’ve a very long street ahead of you, you think how terribly long it is and feel sure you’ll never get it swept. And then you start to hurry. You work faster and faster and every time you look up there seems to be just as much left to sweep as before, and you try even harder, and you panic, and in the end you’re out of breath and have to stop — and still the street stretches away in front of you. That’s not the way to do it.

You must never think of the whole street at once, understand? You must only concentrate on the next step, the next breath, the next stroke of the broom, and the next, and the next. Nothing else.

That way you enjoy your work, which is important, because then you make a good job of it. And that’s how it ought to be.

And all at once, before you know it, you find you’ve swept the whole street clean, bit by bit. What’s more, you aren’t out of breath. That’s important, too.”

This technique is sometimes called the “Pomodoro”, and apps that support it abound. Although you need no app for this, apps are nice because they keep track of how many Pomodoros you have finished on a given day, providing you with a direct measure of your productivity. We can recommend the Productivity Challenge Timer.
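To make the process concrete, here is a minimal sketch of a Pomodoro-style timer in Python. Everything about it is our assumption for illustration (the durations, the session count, the printed log), so adjust it to your own rhythm:

    import time

    def pomodoro(work_min=25, break_min=5, sessions=4):
        """Run simple Pomodoro cycles and count the completed sessions."""
        completed = 0
        for i in range(1, sessions + 1):
            print(f"Session {i}: focus on a single task for {work_min} minutes.")
            time.sleep(work_min * 60)   # deep work: no email, no social media
            completed += 1
            print(f"Session {i} done. Break for {break_min} minutes.")
            time.sleep(break_min * 60)  # step away from the screen
        print(f"Completed {completed} Pomodoros today.")

    # For a quick demo, shorten the intervals: pomodoro(work_min=0.1, break_min=0.05)
    pomodoro()

Counting completed sessions, rather than hours spent, is exactly the process-over-product idea: the number of focused bursts becomes the measure of your day.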

Write down ten ideas. This recommendation comes from James Altucher, who wrote Reinvent Yourself, an entertaining book with chapters such as “Seven things Star Wars taught me about productivity” and “The twenty things I’ve learned from Larry Page”. The habit is simple: write down ten ideas every day, on any topic. The rationale is that creativity is a muscle, and like every other muscle, training it increases its strength. Most of the ideas will be rather useless, but that doesn’t matter; now and then there will be a really good one. This habit probably has strong transfer effects, too, because creativity is required in many areas of life.

Read, Read, Read. There’s a saying that most people die by age 25 but aren’t put into a coffin until age 75. Reading allows your mind to continuously engage with novel ideas. We recommend Goodreads to organize and structure your reading.

Reflect on your day. Take a few minutes in the evening to reflect on your day. Keep a gratefulness journal in which you write down five things you are grateful for each day (this might also increase your overall happiness; see, e.g., here). Summarize your day in a few lines, pointing out the new things you have learned.

Does it work? Quantifying oneself

It is important to take a cold, hard look into the mirror once in a while and ask: What am I doing? Am I working on things that matter; am I helping other people? Am I progressing, or am I stagnating in my comfort zone? Am I enjoying my life?

A useful habit to build is to reflect, every evening, on one’s behaviour and the things that have happened during the day. To achieve this, I (Fabian) have created a Google Form that I fill out daily. It includes, among other things, questions on what I have eaten during the day, the quality of my social interactions, and the most important thing I have learned that day; see Figure 2 below. It also asks me to summarize my day in a few lines.

Figure 2. Quantified Self questions. Every evening I reflect on the day by answering these questions. You can create your own, adapting the questions to your needs.

I have not done much with the data yet, but just the process of answering the questions is reflective and soothing. It is also valuable in the sense that, should there be too many days on which I feel bad, this will show up directly in the data and I can adjust my behaviour or my environment. I can wholeheartedly recommend this tiny bit of quantified self at the end of the day.
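If you export the form’s responses as a CSV file, a few lines of Python are enough to glance at trends. This is only a sketch; the file name and the column names ("Timestamp", "mood") are hypothetical and depend on the questions you create:

    import pandas as pd

    # Hypothetical export of the Google Form responses; your columns will differ.
    df = pd.read_csv("daily_reflection.csv", parse_dates=["Timestamp"])

    # Weekly average of a 1-10 "How do you feel today?" rating.
    weekly_mood = df.set_index("Timestamp")["mood"].resample("W").mean()
    print(weekly_mood)

    # Flag stretches of low-mood days, a signal to adjust behaviour or environment.
    low_days = df[df["mood"] <= 4]
    print(f"{len(low_days)} low-mood days logged so far.")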

Incidentally, there is a whole community behind this idea of quantifying oneself. They go much further. As with most things, it is all about finding the right balance. It is easy to become overwhelmed when engaging with too many tools that measure your behaviour; you might end up being busy and chasing ghosts.

A three-stage program

In order to succeed in any area of life, commitment is key. Reading a blog post on productivity is only the first step in a long journey towards actual behaviour change. To help you take this journey, we suggest three “stages”. Note that they are not necessarily sequential; you can take ideas from Stage 3 and implement them before things listed in Stage 1. The main point of the stages is to avoid being overwhelmed: take small steps and stick to them. The first two stages will probably take one or two months, while the third will take a bit longer.

Stage 1

Stage 1 is about getting started. It is about becoming clear about your motivation: why do you want to be productive? What are the issues that plague or annoy you in the way you currently work? We recommend that you

  • Figure out and write down your motivation for why you want to be productive
  • Become aware of your social media use
  • Enroll in and complete Learning How to Learn
  • Start using the Pomodoro technique
  • Create an account on Habitica, adding habits you want to build or destroy
  • Uninstall social media apps from your phone
  • Set yourself a time point after which you will check neither email nor social media

Stage 2

Stage 2 is about staying committed and developing a healthier and more consistent lifestyle.

  • Stay committed to your habits and review your motivation
  • Review what you have accomplished during the last months
  • Develop a consistent sleep-wake cycle
  • Develop a morning ritual
  • Eat healthy food, not too much, mostly plants
  • Start to exercise regularly (at least 3x a week)
  • Start a Bullet Journal

Stage 3

Stage 3 is about exceeding what you have accomplished so far. It is about figuring out your goals and the skills you want to develop. It is about not staying in your comfort zone, about building a habit of reading a variety of books, and becoming more engaged with others. It is from other people that we can learn the most.

  • Stay committed to your habits and review your motivation
  • Review what you have accomplished during the last months
  • Figure out what skills you want to develop
  • Read Deep Work and figure out a Deep Work routine that suits you
  • Engage with others and exchange ideas and practices
  • Find mentors for the skills you want to develop (e.g., writing, programming)
  • Create an account on Goodreads and organize your reading
  • Read at least two books per month

Conclusion

We started this blog post by discussing the future of work. But it’s not really about work. Sure, applying the ideas we have sketched will make you more productive professionally; but it’s not about running in a hamster wheel, meeting every objective at work, or churning out one paper after another. Instead, it’s about finding the right balance of work and play, engaging in meaningful activities, and enjoying life.

If you take anything from this blog post, it should be the following three points.

If you work, work hard. If you’re done, be done. This means sharply separating work from play. It matters for avoiding burnout, for creating an atmosphere in which creativity and novel ideas flourish, for enriching your life through time spent with friends and family, and, overall, for increasing the amount of play in your life. After all, play is what makes life joyful.

Never be the smartest person in the room. This is about learning from others. Identify the skills you want to develop and seek out mentors for them; mentors will rapidly speed up your learning. Additionally, hang out with people from different backgrounds; this exposes you to ideas you would not otherwise encounter. It is the people whom we barely know that have the capacity to change our lives the most.

Be relevant. This is the culmination of the whole post. It is about helping others and having a lasting impact. This might entail donating to the world’s poorest; being there for a friend in dire times; pushing people to expand their horizons; helping them develop in the direction they want to develop in; working on projects that have a lasting positive impact. It is about doing the things that matter.

Recommended Resources

80,000 Hours
Learning How To Learn
Deep Work (or How to Become a Straight-A Student)
Cal Newport’s fixed-schedule productivity

This post was written together with Lea Jakob and is based on a workshop we presented, twice, at the 31st EFPSA Congress in Qakh, Azerbaijan, in April. The feedback we got from participants was extremely positive, so we decided to write up the main points. This post will also act as a reminder to ourselves should we ever be led astray and fall back into old habits.

Fabian Dablander

Fabian Dablander is currently finishing his thesis in Cognitive Science at the University of Tübingen and Daimler Research & Development on validating driving simulations. He is interested in innovative ways of data collection, Bayesian statistics, open science, and effective altruism. You can find him on Twitter @fdabl.



Are You Registering That? An Interview with Prof. Chris Chambers

There is no panacea for bad science, but if there were, it would certainly resemble Registered Reports. Registered Reports are a novel publishing format in which authors submit only the introduction, methods, and planned analyses before actually collecting the data. Peer review thus focuses solely on the soundness of the research proposal and is not contingent on the “significance” of the results (Chambers, 2013). In one stroke, this simple idea combats publication bias and researchers’ degrees of freedom, makes the distinction between exploratory and confirmatory research apparent, and calms the researcher’s mind. A number of journals offer Registered Reports, and this is arguably the most important step journals can take to push psychological science forward (see also King et al., 2016). For a detailed treatment of Registered Reports, see here, here, here, and Chambers (2015).


Chris Chambers is the initiator of the “Registration Revolution”, the man behind the movement. He introduced Registered Reports into psychology, has written publicly about the issues we currently face in the field, and has recently published a book, “The 7 Deadly Sins of Psychology”, in which he masterfully exposes the shortcomings of current academic customs and inspires change. He is somebody who cares deeply about the future of our field, and he is actively changing it for the better.

We are very excited to present you with an interview with Chris Chambers. How did he become a researcher? Where did he get the idea of Registered Reports from? What is his new book about, and what can we learn from hard sciences such as physics? Find out below!


Tell us a bit about your background. How did you get into Psychology and Cognitive Neuroscience? What is the focus of your research?

Since my teenage years I had been interested in psychology (the Star Trek Next Generation episode “Measure of a Man” left me pondering the mind and consciousness for ages!) but I never really imagined myself as a psychologist or a scientist – those seemed like remote and obscure professions, well out of reach. It wasn’t until the final year of my undergraduate degree that I developed a deep interest in the science of psychology and decided to make a run for it as a career. Applying to do a PhD felt like a very long shot. I have this distinct memory, back in 1999, scrolling down the web page of accepted PhD entrants. I searched in vain for my name among the list of those who had been awarded various prestigious scholarships, and as I neared the bottom I began pondering alternative careers. But then, as if by a miracle, there was my name at the end. I was last on the list, the entrant with the lowest successful mark out of the entire cohort. For the next two and a half years I tried in vain to replicate a famous US psychologist’s results, and then had to face having this famous psychologist as a negative reviewer of every paper we submitted. One day – about two years into my PhD – my supervisor told me about this grant he’d just been awarded to stimulate people’s brains with electromagnetic fields. He asked if I wanted a job and I jumped at the chance. Finally I could escape Famous Negative Reviewer Who Hated Me! Since then, a large part of my research has been in cognitive neuroscience, with specific interests in attention, consciousness and cognitive control.

You have published an intriguing piece on “physics envy” (here). What can psychology learn from physics, and what can psychologists learn from physicists?

Psychology can learn many lessons from physics and other physical sciences. The physics community hinges reputation on transparency and reproducibility – if your results can’t be repeated then they (and you) won’t be believed. They routinely publish their work in the form of pre-prints and have successfully shaped their journals to fit with their working culture. Replication studies are normal practice, and when conducted are seen as a compliment to the importance of the original work rather than (as in psychology) a threat or insult to the original researcher. Physicists I talk to are bemused by our obsession with impact factors, h-indices, and authorship order – they see these as shallow indicators for bureaucrats and the small minded. There are career pressures in physics, no doubt, but at the risk of over-simplifying, it seems to me that the incentives for individual scientists are in broad alignment with the scientific objectives of the community. In psychology, these incentives stand in opposition.

One of your areas of interest is in the public understanding of science. Can you provide a brief primer of the psychological ideas within this field of research?

The way scientists communicate with the public is crucial in so many ways and a large part of my work. In terms of outreach, one of my goals on the Guardian science blog network is to help bridge this gap. We’ve also been exploring science communication in our research. Through the Insciout project we’ve been investigating the extent to which press releases about science and health contribute to hype in news reporting, and the evidence suggests that most exaggeration we see in the news begins life in press releases issued by universities and academic journals. We’ve also been looking at how readers interpret common phrases used in science and health reporting, such as “X can cause Y” or “X increases risk of Y”, to determine whether the wording used in news headlines leads readers to conclude that results are more deterministic (i.e. causal) than the study methods allow. Our hope is that this work can lead to evidence-based guidelines for preparation of science and health PR material by universities and journals.

I’m also very interested in mechanisms for promoting evidence-based policy more generally. Here in the UK I’m working with several colleagues to establish a new Evidence Information Service for connecting research academics and policy makers, with the aim to provide parliamentarians with a rapid source of advice and consultation. We’re currently undertaking a large-scale survey of how the academic community feels about this concept – the survey can be completed here.

You have recently published a book titled “The 7 Deadly Sins of Psychology”. What are the sins and how can psychologists redeem themselves?

The sins, in order, are bias, hidden flexibility, unreliability, data hoarding, corruptibility, internment, and bean counting. At the broadest level, the path to redemption will require wide adoption of open research practices such as study preregistration, open data, and open materials, and a wholesale revision of the systems we use to determine career progression, such as authorship rank, journal rank, and grant capture. We also need to establish robust provisions for detecting and deterring academic fraud while at the same time instituting genuine protections for whistleblowers.

How did you arrive at the idea of Registered Reports for Psychology? What was the initial response from journals that you have approached? How has the perception of Registered Reports changed over the years?

After many years of being trained in the current system, I basically just had enough of publication bias and the “academic game” in psychology – a game where publishing neat stories in prestigious journals and attracting large amounts of grant funding is more rewarded than being accurate and honest. I reached a breaking point (which I write about in the book) and decided that I was either going to do something else with my life or try to change my environment. I opted for the latter and journal-based preregistration – what later became known as Registered Reports – seemed like the best way to do it. The general concept behind Registered Reports had been suggested, on and off, for about 50 years but nobody had yet managed to implement it. I got extremely lucky in being able to push it into the mainstream at the journal Cortex, thanks in no small part to the support of chief editor Sergio Della Sala.

The initial response from journals was quite cautious. Many were – and still are – concerned about whether Registered Reports will somehow produce lower quality science or reduce their impact factors. In reality, they produce what in my view are among the highest quality empirical papers you will see in their respective fields – they are rigorously reviewed with transparent, high-powered methods, and the evidence also suggests that they are cited well above average. Over the last four years we’ve seen more than 50 journals adopt the format (including some prominent journals such as Nature Human Behaviour and BMC Biology) and the community has warmed up to them as published examples have begun appearing. Many journals are now seeing them as a strength and a sign that they value reproducible open science. They are realising that adding Registered Reports to their arsenal is a small and simple step for attracting high-quality research, and that having them widely available is potentially a giant leap for science as a whole.

Max Planck, the famous German physicist, once said that science advances one funeral at a time. Let’s hope that is not true — we simply don’t have the time for that. What skills, ideas, and practices should the next generation of psychological researchers be familiar and competent with? What further resources can you recommend?

I agree – there is no time to wait for funerals, especially in our unstable political climate. The world is changing quickly and science needs to adapt. I believe young scientists can protect themselves in two ways: first, by learning open science and robust methods now. Journals and funders are becoming increasingly cognisant of the need to ensure greater reproducibility and many of the measures that are currently optional will inevitably become mandatory. So make sure you learn how to archive your data, or preregister your protocol. Learn R and become familiar with the underlying philosophy of frequentist and Bayesian hypothesis testing. Do you understand what a p value is? What power is and isn’t? What a Bayes factor tells you? My second recommendation is to recognise these tumultuous times in science for what they are: a political revolution. It’s easy for more vulnerable members of a community to be crushed during a revolution, especially if isolated, so young scientists need to unionise behind open science to ensure that their voices are heard. Form teams to help shape the reforms that you want to see in the years ahead, whether that’s Registered Reports or open data and materials in peer review, or becoming a COS Ambassador. One day, not long from now, all this will be yours so make sure the system works for you and your community.

Fabian Dablander

Fabian Dablander is currently finishing his thesis in Cognitive Science at the University of Tübingen and Daimler Research & Development on validating driving simulations. He is interested in innovative ways of data collection, Bayesian statistics, open science, and effective altruism. You can find him on Twitter @fdabl.



Not solely about that Bayes: Interview with Prof. Eric-Jan Wagenmakers

Last summer saw the publication of the most important work in psychology in decades: the Reproducibility Project (Open Science Collaboration, 2015; see here and here for context). It stirred up the community, resulting in many constructive discussions but also in verbally violent disagreement. What unites all parties, however, is the call for more transparency and openness in research.

Eric-Jan “EJ” Wagenmakers has argued for pre-registration of research (Wagenmakers et al., 2012; see also here) and direct replications (e.g., Boekel et al., 2015; Wagenmakers et al., 2015), for a clearer demarcation of exploratory and confirmatory research (de Groot, 1954/2013), and for a change in the way we analyze our data (Wagenmakers et al., 2011; Wagenmakers et al., in press).

Concerning the latter point, EJ is a staunch advocate of Bayesian statistics. With his many collaborators, he writes the clearest and wittiest expositions of the topic (e.g., Wagenmakers et al., 2016; Wagenmakers et al., 2010). Crucially, he is also a key player in opening up Bayesian inference to social and behavioral scientists more generally; in fact, the software JASP is EJ’s brainchild (see also our previous interview).


In sum, psychology is changing rapidly, both in how researchers communicate and do science, and increasingly also in how they analyze their data. This makes it nearly impossible for university curricula to keep up; courses in psychology are often years, if not decades, behind. Statistics classes in particular are usually boringly cookbook-oriented and often fraught with misconceptions (Wagenmakers, 2014). At the University of Amsterdam, Wagenmakers succeeds in doing things differently. He has previously taught a class called “Good Science, Bad Science”, discussing novel developments in methodology as well as supervising students in preparing and conducting direct replications of recent research findings (cf. Frank & Saxe, 2012).

Now, at the end of the day, testing undirected hypotheses using p values or Bayes factors only gets you so far – even if you preregister the heck out of it. To move the field forward, we need formal models that instantiate theories and make precise quantitative predictions. Together with Michael Lee, Eric-Jan Wagenmakers has written an amazing practical cognitive modeling book, harnessing the power of computational Bayesian methods to estimate arbitrarily complex models (for an overview, see Lee, submitted). More recently, he has co-edited a book on model-based cognitive neuroscience on how formal models can help bridge the gap between brain measurements and cognitive processes (Forstmann & Wagenmakers, 2015).

Long-term readers of the JEPS Bulletin will note that topics such as openness of research, pre-registration and replication, research methodology, and Bayesian statistics are recurring themes. It was thus only a matter of time before we interviewed Eric-Jan Wagenmakers and asked him questions concerning all the areas above. In addition, we ask: how does he stay so immensely productive? What tips does he have for students interested in an academic career? And what can instructors learn from “Good Science, Bad Science”? Enjoy the ride!


Bobby Fischer, the famous chess player, once said that he does not believe in psychology. You actually switched from playing chess to pursuing a career in psychology; tell us how this came about. Was it a good move?

It was an excellent move, but I have to be painfully honest: I simply did not have the talent and the predisposition to make a living out of playing chess. Several of my close friends did have that talent and went on to become international grandmasters; they play chess professionally. But I was actually lucky. For players outside of the world top-50, professional chess is a career trap. The pay is poor, the work insanely competitive, and the life is lonely. And society has little appreciation for professional chess players. In terms of creativity, hard work, and intellectual effort, an international chess grandmaster easily outdoes the average tenured professor. People who do not play chess themselves do not realize this.

Your list of publications gets updated so frequently, it should have its own RSS feed! How do you grow and cultivate such an impressive network of collaborators? Do you have specific tips for early career researchers?

At the start of my career I did not publish much. For instance, when I finished my four years of grad studies I think I had two papers. My current publication rate is higher, and part of that is due to an increase in expertise. It is just easier to write papers when you know (or think you know) what you’re talking about. But the current productivity is mainly due to the quality of my collaborators. First, at the psychology department of the University of Amsterdam we have a fantastic research master program. Many of my graduate students come from this program, having been tried and tested in the lab as RAs. When you have, say, four excellent graduate students, and each publishes one article a year, that obviously helps productivity. Second, the field of Mathematical Psychology has several exceptional researchers that I have somehow managed to collaborate with. In the early stages I was a graduate student with Jeroen Raaijmakers, and this made it easy to start work with Rich Shiffrin and Roger Ratcliff. So I was privileged and I took the opportunities that were given. But I also work hard, of course.

There is a lot of advice that I could give to early career researchers but I will have to keep it short. First, in order to excel in whatever area of life, commitment is key. What this usually means is that you have to enjoy what you are doing. Your drive and your enthusiasm will act as a magnet for collaborators. Second, you have to take initiative. So read broadly, follow the latest articles (I remain up to date through Twitter and Google Scholar), get involved with scientific organizations, coordinate a colloquium series, set up a reading group, offer your advisor to review papers with him/her, attend summer schools, etc. For example, when I started my career I had seen a new book on memory and asked the editor of Acta Psychologica whether I could review it for them. Another example is Erik-Jan van Kesteren, an undergraduate student from a different university who had attended one of my talks about JASP. He later approached me and asked whether he could help out with JASP. He is now a valuable member of the JASP team. Third, it helps if you are methodologically strong. When you are methodologically strong –in statistics, mathematics, or programming– you have something concrete to offer in a collaboration.

Considering all projects you are involved in, JASP is probably the one that will have most impact on psychology, or the social and behavioral sciences in general. How did it all start?

In 2005 I had a conversation with Mark Steyvers. I had just shown him a first draft of a paper that summarized the statistical drawbacks of p-values. Mark told me “it is not enough to critique p-values. You should also offer a concrete alternative”. I agreed and added a section about BIC (the Bayesian Information Criterion). However, the BIC is only a rough approximation to the Bayesian hypothesis test. Later I became convinced that social scientists will only use Bayesian tests when these are readily available in a user-friendly software package. About 5 years ago I submitted an ERC grant proposal “Bayes or Bust! Sensible hypothesis tests for social scientists” that contained the development of JASP (or “Bayesian SPSS” as I called it in the proposal) as a core activity. I received the grant and then we were on our way.

I should acknowledge that much of the Bayesian computation in JASP depends on the R BayesFactor package developed by Richard Morey and Jeff Rouder. I should also emphasize the contribution of JASP’s first software engineer, Jonathon Love, who suggested that JASP ought to feature classical statistics as well. In the end we agreed that by including classical statistics, JASP could act as a Trojan horse and boost the adoption of Bayesian procedures. So the project started as “Bayesian SPSS”, but the scope was quickly broadened to include p-values.

JASP is already game-changing software, but it is under continuous development and improvement. More concretely, what do you plan to add in the near future? What do you hope to achieve in the long-term?

In terms of the software, we will shortly include several standard procedures that are still missing, such as logistic regression and chi-square tests. We also want to upgrade the popular Bayesian procedures we have already implemented, and we are going to create new modules. Before too long we hope to offer a variable views menu and a data-editing facility. When all this is done it would be great if we could make it easier for other researchers to add their own modules to JASP.

One of my tasks in the next years is to write a JASP manual and JASP books. In the long run, the goal is to have JASP be financially independent of government grants and university support. I am grateful for the support that the psychology department at the University of Amsterdam offers now, and for the support they will continue to offer in the future. However, the aim of JASP is to conquer the world, and this requires that we continue to develop the program “at break-neck speed”. We will soon be exploring alternative sources of funding. JASP will remain free and open-source, of course.

You are a leading advocate of Bayesian statistics. What do researchers gain by changing the way they analyze their data?

They gain intellectual hygiene, and a coherent answer to questions that makes scientific sense. A more elaborate answer is outlined in a paper that is currently submitted to a special issue for Psychonomic Bulletin & Review: https://osf.io/m6bi8/ (Part I).

The Reproducibility Project used different metrics to quantify the success of a replication – none of them really satisfactory. How can a Bayesian perspective help illuminate the “crisis of replication”?

As a theory of knowledge updating, Bayesian statistics is ideally suited to address questions of replication. However, the question “did the effect replicate?” is underspecified. Are the effect sizes comparable? Does the replication provide independent support for the presence of the effect? Does the replication provide support for the position of the proponents versus the skeptics? All these questions are slightly different, but each receives the appropriate answer within the Bayesian framework. Together with Josine Verhagen, I have explored a method –the replication Bayes factor– in which the prior distribution for the replication test is the posterior distribution obtained from the original experiment (e.g., Verhagen & Wagenmakers, 2014). We have applied this intuitive procedure to a series of recent experiments, including the multi-lab Registered Replication Report of Fritz Strack’s Facial Feedback hypothesis. In Strack’s original experiment, participants who held a pen with their teeth (causing a smile) judged cartoons to be funnier than participants who held a pen with their lips (causing a pout). I am not allowed to tell you the result of this massive replication effort, but the paper will be out soon.
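To give a flavour of the idea, here is a minimal sketch in Python. It uses normal approximations rather than the exact t-based derivation of Verhagen and Wagenmakers (2014), and the numbers are made up: under a vague initial prior, the original study’s posterior for the effect is roughly normal around its estimate, and that posterior serves as the prior for the replication test.

    from scipy.stats import norm

    def replication_bf(d_orig, se_orig, d_rep, se_rep):
        """Replication Bayes factor (H1 over H0) under normal approximations.

        H0: the effect is zero.
        H1: the effect follows the original study's approximate posterior,
            Normal(d_orig, se_orig^2).
        """
        # Marginal likelihood of the replication estimate under H1:
        # integrating over the prior gives Normal(d_orig, se_orig^2 + se_rep^2).
        m1 = norm.pdf(d_rep, loc=d_orig, scale=(se_orig**2 + se_rep**2) ** 0.5)
        # Likelihood under H0: the effect is fixed at zero.
        m0 = norm.pdf(d_rep, loc=0.0, scale=se_rep)
        return m1 / m0

    # Toy numbers: original d = 0.60 (SE 0.25); replication d = 0.05 (SE 0.10).
    print(replication_bf(0.60, 0.25, 0.05, 0.10))  # ~0.05: strong evidence for H0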

You have recently co-edited a book on model-based cognitive neuroscience. What is the main idea here, and what developments in this area are most exciting to you?

The main idea is that much of experimental psychology, mathematical psychology, and the neurosciences pursue a common goal: to learn more about human cognition. So ultimately the interest is in latent constructs such as intelligence, confidence, memory strength, inhibition, and attention. The models that have been developed in mathematical psychology are able to link these latent constructs to specific model parameters. These parameters may in turn be estimated from behavioral data, from neural data, or from both data sets jointly. Brandon Turner is one of the early-career mathematical psychologists who has made great progress in this area. So the mathematical models are a vehicle to achieve an integration of data from different sources. Moreover, insights from neuroscience can provide important constraints that help inform mathematical modeling. The relation is therefore mutually beneficial. This is summarized in the following paper: http://www.ejwagenmakers.com/2011/ForstmannEtAl2011TICS.pdf

One thing that distinguishes science from sophistry is replication; yet it is not standard practice. In “Good Science, Bad Science”, you had students prepare a registered replication plan. What was your experience teaching this class? What did you learn from the students?

This was a great class to teach. The students were highly motivated, and oftentimes it felt more like a lab meeting than a class. The idea was to develop four Registered Report submissions. Some time has passed, but the students and I still intend to submit the proposals for publication.

The most important lesson this class has taught me is that our research master students want to learn relevant skills and conduct real research. In the next semester I will teach a related course, “Good Research Practices”, and I hope to attain the same high levels of student involvement. For the new course, I plan to have students read a classic methods paper that identifies a fallacy; next the students will conduct a literature search to assess the current prevalence of the fallacy. I have done several similar projects, but never with master students (e.g., http://www.ejwagenmakers.com/2011/NieuwenhuisEtAl2011.pdf and http://link.springer.com/article/10.3758/s13423-015-0913-5).

What tips and tricks can you share with instructors planning to teach a similar class?

The first tip is to set your aims high. For a research master class, the goal should be publication. Of course this may not always be realized, but it should be the goal. It helps if you can involve colleagues or graduate students. If you set your aims high, the students know that you take them seriously, and that their work matters. The second tip is to arrange the teaching so that the students do most of the work. The students need to develop a sense of ownership about their projects, and they need to learn. This will not happen if you treat the students as passive receptacles. I am reminded of a course that I took as an undergraduate. In this course I had to read chapters, deliver presentations, and prepare questions. It was one of the most enjoyable and inspiring courses I had ever taken, and it took me decades to realize that the professor who taught the course actually did not have to do much at all.

Many scholarly discussions these days take place on social media and blogs. You joined Twitter yourself over a year ago. How do you navigate the social media jungle, and what resources can you recommend to our readers?

I am completely addicted to Twitter, but I also feel it makes me a better scientist. When you are new to Twitter, I recommend that you start by following a few people that have interesting things to say. Coming from a Bayesian perspective, I recommend Alexander Etz (@AlxEtz) and Richard Morey (@richarddmorey). And of course it is essential to follow JASP (@JASPStats). As is the case for all social media, the most valuable resource you have is the “mute” option. Prevent yourself from being swamped by holiday pictures and exercise it ruthlessly.

Fabian Dablander

Fabian Dablander is currently finishing his thesis in Cognitive Science at the University of Tübingen and Daimler Research & Development on validating driving simulations. He is interested in innovative ways of data collection, Bayesian statistics, open science, and effective altruism. You can find him on Twitter @fdabl.


Facebooktwitterrss

Replicability and Registered Reports

Last summer saw the publication of a monumental piece of work: the Reproducibility Project (Open Science Collaboration, 2015). In a huge community effort, over 250 researchers directly replicated 100 experiments originally published in 2008. Only 39% of the replications were significant at the 5% level, and effect size estimates were, on average, halved. The study design itself (conducting direct replications on a large scale), as well as its outcome, is game-changing for the way we view our discipline. But students might wonder: what game were we playing before, and how did we get here?

In this blog post, I provide a selective account of what has been dubbed the “reproducibility crisis”, discussing its potential causes and possible remedies. Concretely, I will argue that adopting Registered Reports, a new publishing format recently also implemented in JEPS (King et al., 2016; see also here), increases scientific rigor, transparency, and thus replicability of research. Wherever possible, I have linked to additional resources and further reading, which should help you contextualize current developments within psychological science and the social and behavioral sciences more generally.

How did we get here?

In 2005, Ioannidis made an intriguing argument: because the prior probability of any given hypothesis being true is low, because researchers continuously run low-powered experiments, and because the current publishing system is biased toward significant results, most published research findings are false. Within this context, spectacular fraud cases like that of Diederik Stapel (see here) and the publication of a curious paper about people “feeling the future” (Bem, 2011) made 2011 a “year of horrors” (Wagenmakers, 2012) and plunged psychology into a “crisis of confidence” (Pashler & Wagenmakers, 2012). As argued below, Stapel and Bem are emblematic of two highly interconnected problems of scientific research in general.
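
To get a feeling for the arithmetic behind this claim, here is a minimal sketch in Python; the numbers are illustrative assumptions of mine, not values taken from Ioannidis (2005):

```python
# Positive predictive value (PPV) of a significant finding, in the
# spirit of Ioannidis (2005). All numbers below are illustrative.

alpha = 0.05   # nominal false positive rate
power = 0.35   # statistical power; often low in practice (cf. Button et al., 2013)
prior = 0.10   # assumed proportion of tested hypotheses that are actually true

true_positives  = prior * power          # true effects that reach p < .05
false_positives = (1 - prior) * alpha    # null effects that reach p < .05

ppv = true_positives / (true_positives + false_positives)
print(f"P(effect is real | p < .05) = {ppv:.2f}")  # ~0.44
```

Under these (not implausible) assumptions, fewer than half of all significant findings reflect true effects; a literature that selectively publishes significant results is then mostly wrong.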

Publication bias

Stapel, who faked the results of more than 55 papers, is the reductio ad absurdum of the current “publish or perish” culture[1]. Still, the gold standard to merit publication, certainly in a high-impact journal, is p < .05, which results in publication bias (Sterling, 1959) and file drawers full of nonsignificant results (Rosenthal, 1979; see Lane et al., 2016, for a brave opening; and #BringOutYerNulls). This leads to a biased view of nature, distorting any conclusion we draw from the published literature. In combination with low-powered studies (Cohen, 1962; Button et al., 2013; Fraley & Vazire, 2014), effect size estimates are seriously inflated and can easily point in the wrong direction (Yarkoni, 2009; Gelman & Carlin, 2014). A curious consequence is what Lehrer has titled “the truth wears off” (Lehrer, 2010): initially high estimates of effect size attenuate over time, until nothing is left of them. Just recently, Kaplan and Irvin reported that the proportion of positive effects in large clinical trials shrank from 57% before 2000 to 8% after 2000 (Kaplan & Irvin, 2015). Even a powerful tool like meta-analysis cannot clear the view of a landscape filled with inflated and biased results (van Elk et al., 2015). For example, while meta-analyses concluded that there is a strong ego-depletion effect of Cohen’s d = .63, recent replications failed to find an effect (Lurquin et al., 2016; Sripada et al., in press)[2].
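
A small simulation shows the mechanics of this inflation, sometimes called the “winner’s curse” (cf. Gelman & Carlin, 2014). The sketch below uses made-up parameters: a true effect of d = 0.2, samples of 20 per group, and a literature that publishes only significant results:

```python
# Simulate the winner's curse: when only significant studies get
# "published", published effect sizes overestimate the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
true_d, n, sims = 0.2, 20, 10_000   # small true effect, small samples

published = []
for _ in range(sims):
    a = rng.normal(0, 1, n)          # control group
    b = rng.normal(true_d, 1, n)     # treatment group
    t, p = stats.ttest_ind(b, a)
    if p < .05 and t > 0:            # only significant results get published
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        published.append((b.mean() - a.mean()) / pooled_sd)  # observed Cohen's d

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
# the published literature reports close to a fourfold inflation
```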

Garden of forking paths

In 2011, Daryl Bem reported nine experiments on people being able to “feel the future” in the Journal of Personality and Social Psychology, the flagship journal of its field (Bem, 2011). Eight of them yielded statistical significance, p < .05. We could dismissively say that extraordinary claims require extraordinary evidence, and try to sail away as quickly as possible from this research area, but Bem would be quick to steal our thunder.

A recent meta-analysis of 90 experiments on precognition yielded overwhelming evidence in favor of an effect (Bem et al., 2015). Alan Turing, discussing research on psi-related phenomena, famously stated that

“These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately, the statistical evidence, at least of telepathy, is overwhelming.” (Turing, 1950, p. 453; cf. Wagenmakers et al., 2015)

How is this possible? It’s simple: Not all evidence is created equal. Research on psi provides us with a mirror of “questionable research practices” (John, Loewenstein, & Prelec, 2012) and researchers’ degrees of freedom (Simmons, Nelson, & Simonsohn, 2011), obscuring the evidential value of individual experiments as well as whole research areas[3]. However, it would be foolish to dismiss this as being a unique property of obscure research areas like psi. The problem is much more subtle.

The main issue is that there is a one-to-many mapping from scientific to statistical hypotheses[4]. When doing research, there are many parameters one must set: which observations should be excluded? Which control variables should be measured? How should participants’ responses be coded? Which dependent variables should be analyzed? By varying only a small number of these, Simmons et al. (2011) found that the nominal false positive rate of 5% skyrocketed to over 60%. They conclude that this “increased flexibility allows researchers to present anything as significant.” These issues are exacerbated by insufficient methodological detail in research articles, by the low percentage of researchers who share their data (Wicherts et al., 2006; Wicherts, Bakker, & Molenaar, 2011), and in fields that require complicated preprocessing steps, such as neuroimaging (Carp, 2012; Cohen, 2016; Luck & Gaspelin, in press).
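
The sketch below makes this concrete under assumptions of my own choosing: a hypothetical experimenter, with no true effect anywhere, measures three correlated dependent variables, peeks at the data once early and once at the end, and reports whichever test comes out significant:

```python
# Analytic flexibility under a true null, in the spirit of Simmons
# et al. (2011): 3 correlated DVs x 2 looks at the data = 6 chances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sims, n_first, n_total = 5_000, 20, 30
cov = np.full((3, 3), 0.5) + 0.5 * np.eye(3)   # the three DVs correlate at .5
hits = 0

for _ in range(sims):
    a = rng.multivariate_normal(np.zeros(3), cov, n_total)  # group A (null true)
    b = rng.multivariate_normal(np.zeros(3), cov, n_total)  # group B (null true)
    ps = [stats.ttest_ind(a[:n, dv], b[:n, dv]).pvalue
          for dv in range(3) for n in (n_first, n_total)]
    hits += min(ps) < 0.05          # report whichever test "worked"

print(f"false positive rate: {hits / sims:.2f}")  # well above the nominal .05
```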

An important amendment is that researchers need not be aware of this flexibility; a p value might be misleading even when there is no “p-hacking” and the hypothesis was posited ahead of time (i.e., was not changed after the fact; cf. HARKing, Kerr, 1998). When decisions that are contingent on the data are made in an environment in which different data would have led to different decisions, there is a hidden multiple comparison problem lurking, even when each individual decision “just makes sense” (Gelman & Loken, 2014). Usually, when conducting N statistical tests, we control for the number of tests in order to keep the false positive rate at, say, 5%; however, in the setting just described, it is not clear what N should be. Thus, results of statistical tests lose their meaning in such exploratory settings and carry evidential value only in confirmatory settings (de Groot, 1954/2014; Wagenmakers et al., 2012). This distinction is at the heart of the problem, and it gets obscured because many results in the literature are reported as confirmatory when they may very well be exploratory; most frequently, because of the way scientific reporting is currently done, there is no way for us to tell the difference.
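
A short calculation shows how quickly this hidden multiplicity bites, assuming for simplicity that the implicit tests are independent:

```python
# Family-wise error rate for N independent tests at alpha = .05:
# P(at least one false positive) = 1 - (1 - alpha)^N.
for N in (1, 5, 10, 20):
    print(N, round(1 - 0.95 ** N, 2))   # 0.05, 0.23, 0.4, 0.64
```

With the forking-paths N unknown, no such correction can be applied, which is precisely the problem.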

To get a feeling for the many choices possible in statistical analysis, consider a recent paper in which data analysis was crowdsourced to 29 teams (Silberzahn et al., submitted). The question posed to them was whether dark-skinned soccer players are red-carded more frequently. The estimated effect sizes (odds ratios) ranged from 0.83 to 2.93 across teams. Nineteen different analysis strategies were used in total, with 21 unique combinations of covariates; 69% of the teams found a significant relationship, while 31% did not.

A reanalysis of Berkowitz et al. (2016) by Michael Frank (2016; blog here) is another, more subtle example. Berkowitz and colleagues report a randomized controlled trial, claiming that solving short numerical problems increases children’s math achievement across the school year. The intervention was well designed and well conducted, but still, Frank found that, as he put it, “the results differ by analytic strategy, suggesting the importance of preregistration.”

Frequently, the issue lies with measurement. Malte Elson, whose Twitter feed is highly germane to our topic, has created a daunting website that lists how researchers use the Competitive Reaction Time Task (CRTT), one of the most commonly used tools to measure aggressive behavior. It states that there are 120 publications using the CRTT, which in total analyze the data in 147 different ways!

This increased awareness of researchers’ degrees of freedom and the garden of forking paths is mostly a product of this century, although some authors voiced such concerns much earlier (e.g., de Groot, 1954/2014; Meehl, 1985; see also Gelman’s comments here). The next point concerns a much older issue (e.g., Berkson, 1938), but one that nonetheless bears repeating.

Statistical inference

In psychology, and in much of the social and behavioral sciences in general, researchers rely heavily on null hypothesis significance testing and p values to draw inferences from data. However, the statistical community has long known that p values overestimate the evidence against H0 (Berger & Delampady, 1987; Wagenmakers, 2007; Nuzzo, 2014). Just recently, the American Statistical Association released a statement drawing attention to this fact (Wasserstein & Lazar, 2016): in addition to p < .05 being easy to obtain (Simmons, Nelson, & Simonsohn, 2011), it is also quite a weak standard of evidence overall.
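
How weak? A quick calculation using the well-known bound of Sellke, Bayarri, and Berger (2001) shows the most evidence against H0 that a given p value can possibly justify (the bound holds for p < 1/e, under broad assumptions about the alternative):

```python
# Upper bound on the Bayes factor against H0 implied by a p value:
# BF10 <= 1 / (-e * p * ln p), for p < 1/e (Sellke et al., 2001).
from math import e, log

for p in (0.05, 0.01, 0.001):
    max_bf10 = 1 / (-e * p * log(p))
    print(f"p = {p}: evidence against H0 is at most {max_bf10:.1f} : 1")
# p = .05 corresponds, at best, to odds of about 2.5 : 1 -- hardly decisive
```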

This last point is quite pertinent, because the statement that 39% of replications in the Reproducibility Project were “successful” is misleading. A recent Bayesian reanalysis concluded that the original studies themselves often provided only weak evidence in support of an effect (Etz & Vandekerckhove, 2016), reinforcing all the points made so far.

Notwithstanding the above, p < .05 is still the gold standard in psychology, for intricate historical reasons (cf. Gigerenzer, 1993). At JEPS, we certainly do not want to echo calls or actions to ban p values (Trafimow & Marks, 2015), but we urge students and their instructors to bring more nuance to their use (cf. Gigerenzer, 2004).

Procedures based on classical statistics provide answers quite different from what most researchers and students expect (Oakes, 1986; Haller & Krauss, 2002; Hoekstra et al., 2014). To be sure, p values have their place in model checking (e.g., Gelman, 2006: are the data consistent with the null hypothesis?), but they are poorly equipped to measure the relative evidence the data provide for H1 or H0; for this, researchers need to use Bayesian inference (Wagenmakers et al., in press). Because university curricula often lag behind current developments, students reading this are encouraged to expand their methodological toolbox by browsing through Etz et al. (submitted) and playing with JASP[5].
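
As a small taste of the Bayesian approach, here is a self-contained sketch; the binomial setup and the numbers are hypothetical, chosen purely for illustration. With a conjugate Beta prior, the Savage-Dickey density ratio gives the Bayes factor for H0: theta = 0.5 against H1: theta ~ Beta(1, 1) directly:

```python
# Savage-Dickey Bayes factor for a binomial rate (hypothetical data).
from scipy import stats

k, n = 60, 100                                 # 60 successes in 100 trials
prior = stats.beta(1, 1)                       # uniform prior under H1
posterior = stats.beta(1 + k, 1 + n - k)       # conjugate update

bf01 = posterior.pdf(0.5) / prior.pdf(0.5)     # evidence for H0 over H1
print(f"BF01 = {bf01:.2f}")                    # close to 1: data uninformative
```

Note the contrast with classical inference: these data yield a two-sided binomial p value of about .06, flirting with “significance”, while the Bayes factor reveals that they barely discriminate the two hypotheses.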

Teaching the exciting history of statistics (cf. Gigerenzer et al., 1989; McGrayne, 2012), or at least contextualizing the development of the currently dominant statistical ideas, is a first step away from their cookbook-oriented application.

Registered Reports to the rescue

While we can only point to the latter, statistical issue, we can actually eradicate the former two, publication bias and the garden of forking paths, by adopting a new publishing format called Registered Reports. This format was initially introduced at the journal Cortex by Chris Chambers (Chambers, 2013), and it is now offered by more than two dozen journals in psychology, neuroscience, psychiatry, and medicine (link). Recently, we have also introduced this publishing format at JEPS (see King et al., 2016).

Specifically, researchers submit a document that includes the introduction, theoretical motivation, experimental design, data preprocessing steps (e.g., outlier removal criteria), and the planned statistical analyses, all prior to data collection. Peer review focuses only on the merit of the proposed study and the adequacy of the statistical analyses[6]. If the planned study has sufficient merit, the authors are guaranteed in-principle acceptance (Nosek & Lakens, 2014). Upon receiving this acceptance, researchers carry out the experiment and submit the final manuscript. Deviations from the initial submission must be discussed, and any additional statistical analyses are labeled exploratory.

In sum, by publishing regardless of the outcome of the statistical analysis, Registered Reports eliminate publication bias; by specifying the hypotheses and analysis plan beforehand, they make apparent the distinction between exploratory and confirmatory studies (de Groot, 1954/2014), avoid the garden of forking paths (Gelman & Loken, 2014), and guard against post-hoc theorizing (Kerr, 1998).

Even though Registered Reports are commonly associated with high power (80-95%), such power is usually unfeasible for student research (for a sense of the sample sizes involved, see the sketch below). However, note that a single study cannot be decisive in any case; reporting sound, hypothesis-driven, not-cherry-picked research can be important fuel for future meta-analyses (for an example, see Scheibehenne, Jamil, & Wagenmakers, in press).
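
For a rough idea of what such power demands, the standard normal-approximation formula for a two-sample t-test can be computed directly. This is an illustrative sketch only; dedicated tools such as G*Power give exact numbers:

```python
# Approximate per-group sample size for a two-sample t-test:
# n = 2 * ((z_{1 - alpha/2} + z_{power}) / d)^2
from scipy import stats

def n_per_group(d, power=0.80, alpha=0.05):
    z_a = stats.norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = stats.norm.ppf(power)           # quantile for the desired power
    return 2 * ((z_a + z_b) / d) ** 2

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: ~{n_per_group(d):.0f} per group for 80% power")
# a small effect (d = 0.2) already needs ~390 participants per group,
# which is rarely within reach of a student project
```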

To avoid possible confusion, note that preregistration is different from Registered Reports: the former is the act of specifying the methodology before data collection, while the latter is a publishing format. You can preregister your study on platforms such as the Open Science Framework or AsPredicted. Registered Reports include preregistration but go further, adding benefits such as peer review prior to data collection and in-principle acceptance.

Conclusion

In sum, several issues impede progress in psychological science, most pressingly publication bias and the failure to distinguish between exploratory and confirmatory research. A new publishing format, Registered Reports, provides a powerful means to address them both and, to borrow a phrase from Daniel Lakens, enables us to “sail away from the seas of chaos into a corridor of stability” (Lakens & Evers, 2014).

Suggested Readings

  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
  • Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632-638.
  • Gelman, A., & Loken, E. (2014). The Statistical Crisis in Science. American Scientist, 102(6), 460-465.
  • King, M., Dablander, F., Jakob, L., Agan, M., Huber, F., Haslbeck, J., & Brecht, K. (2016). Registered Reports for Student Research. Journal of European Psychology Students, 7(1), 20-23.
  • Twitter (or you might miss out)

Footnotes

[1] Incidentally, Diederik Stapel published a book about his fraud. See here for more.

[2] Baumeister (2016) is a perfect example of how not to respond to such a result. Michael Inzlicht shows how to respond adequately here.

[3] For a discussion of these issues with respect to the precognition meta-analysis, see Lakens (2015) and Gelman (2014).

[4] Another related, crucial point is the lack of theory in psychology. However, as this depends on whether you read the Journal of Mathematical Psychology or, say, Psychological Science, it is not addressed further. For more on this point, see for example Meehl (1978), Gigerenzer (1998), and a class by Paul Meehl which has been kindly converted to mp3 by Uri Simonsohn.

[5] However, it would be premature to put too much blame on p. More pressingly, the misunderstandings and misuse of this little fellow point towards a catastrophic failure in the undergraduate teaching of statistics and methods classes (for the latter, see Richard Morey’s recent blog post). Statistics classes in psychology are often boringly cookbook-oriented, and so students just learn the cookbook. If you are an instructor, I urge you to have a look at “Statistical Rethinking” by Richard McElreath. In general, however, statistics is hard, and there are many issues transcending the frequentist versus Bayesian debate (for examples, see Judd, Westfall, & Kenny, 2012; Westfall & Yarkoni, 2016).

[6] Note that JEPS already publishes research regardless of whether p < .05. However, this does not discourage us from drawing attention to this benefit of Registered Reports, especially because most other journals have a different policy.

This post was edited by Altan Orhon.


Bayesian Statistics: Why and How

Bayesian statistics is what all the cool kids are talking about these days. Upon closer inspection, this does not come as a surprise. In contrast to classical statistics, Bayesian inference is principled, coherent, unbiased, and addresses an important question in science: which of my hypotheses should I believe in, and how strongly, given the collected data? (more…)


Crowdsource your research with style

Would you like to collect data quickly and efficiently? Would you like a sample that generalizes beyond western, educated, industrialized, rich, and democratic participants? You acknowledge social media as a powerful means of distributing your studies, but feel there must be a “better way”? Then this practical introduction to crowdsourcing is exactly what you need. I will show you how to use Crowdflower, a crowdsourcing platform, to attract participants from all over the world to take part in your experiments. However, before we get too excited, let’s quickly go through the relevant terminology. (more…)
