Python Programming in Psychology – From Data Collection to Analysis

Why programming?

Programming is a skill that all psychology students should learn. I can think of many reasons why, including automating boring tasks and practising problem-solving skills while learning to code. In this post I will focus on two more immediate ways in which programming is relevant for a psychology student: data collection and data analysis. For a more elaborate discussion on the topic, read the post on my personal blog: Every Psychologist Should Learn Programming.

Here is what we will do in this post:

  • Basic Python by example (i.e., a t-test for paired samples)
  • Program a Flanker task using the Python library Expyriment
  • Visualise and analyse data

Before going into how to use Python programming in Psychology I will briefly discuss why programming may be good for data collection and analysis.

Data collection

The data collection phase of psychological research has largely been computerised. Thus, many of the methods and tasks used to collect data are created using software. Many of these tools offer graphical user interfaces (GUIs) that will often cover your needs. For instance, E-Prime offers a GUI which enables you to, basically, drag and drop “objects” onto a timeline to create your experiment. However, for many tasks you may need to write some customised code on top of your built experiment. For instance, quasi-randomisation may be hard to implement in the GUI without some coding (e.g., by creating CSV files with trial order and such). At some point in your study of the human mind you will probably need to write code before collecting data.


Data Analysis

Most programming languages can, of course, offer both graphical and statistical analysis of data. For instance, the R statistical programming environment has recently gained more and more popularity in Psychology as well as in other disciplines. In other fields Python is also gaining popularity when it comes to analysing and visualising data. MATLAB has for many years also been used for quantitative methods in Psychology and cognitive science (e.g., for psychophysical analysis, cognitive modelling, and general statistics). Python also offers extensive support for both web scraping and the analysis of scraped data.

What language should one learn?

“Okay. Okay. Programming may be useful for Psychologists! But there are so many languages! Where should I start?!” One very good start would be to learn Python. Python is a general-purpose and high-level language that was created by Guido van Rossum. Nowadays it is administrated by the non-profit organisation Python Software Foundation. Python is open source. Among many things this means that Python is free, even for commercial use. Python is usually used and referred to as a scripting language. Thanks to its flexibility, Python is one of the most popular programming languages (e.g., 4th on the TIOBE Index for June 2016).

Programming in Psychology

One of the most important aspects, however, is that there is a variety of both general-purpose (unlike R, which focuses on statistical analysis) and specialised Python packages. Good news for us interested in Psychology! This means that there are specialised libraries for creating experiments (e.g., Expyriment, PsychoPy and OpenSesame), fitting psychometric functions (e.g., pypsignifit 3.0), and analysing data (e.g., Pandas and Statsmodels). In fact, there are packages devoted solely to the analysis of EEG/ERP data (see my resources list for more examples). Python can be run interactively using the Python interpreter (hold on, I am going to show an example later). Note that Python comes in two major versions: 2.7 (legacy) and 3.5. Discussing them is really out of the scope of this post, but you can read more here.

Python from data collection to analysis

In this part of the post, you will learn how Python can be used from creating an experiment to visualising and analysing the data collected during that experiment. I have chosen a task that fits one of my research interests: attention and cognitive function. From doing research on distractors in the auditory and tactile modalities and how they impact visual tasks, I am, in general, interested in how some types of information cannot be blocked out. How is it that we are unable to suppress certain responses (i.e., response inhibition)? A well-used task to measure inhibition is the Flanker task (e.g., Colcombe, Kramer, Erickson, & Scalf, 2005; Eriksen & Eriksen, 1974). In the task we are going to create we will have two types of stimuli: congruent and incongruent. The task is to respond as quickly and accurately as possible to the direction an arrow is pointing. On congruent trials, the target arrow is surrounded by arrows pointing in the same direction (e.g., “<<<<<”), whereas on incongruent trials the surrounding arrows point in another direction (e.g., “<<><<”). Note, the target arrow is the one in the middle (i.e., the third).

For simplicity, we will examine whether the response time (RT) on congruent trials differs from the RT on incongruent trials. Since we will only have two means to compare (incongruent vs. congruent), we can use a paired samples t-test.

The following part is structured such that you first get information on how to install Python and the libraries used. After this is done, you will get some basic information on how to write a Python script and then how to write the t-test function. After that, you will be guided through writing the Flanker task using Expyriment and, finally, you will learn how to handle, visualise, and analyse the data from the Flanker task.

Installation of needed libraries

Before using Python you may need to install Python and the libraries that are used in the following examples. Python 2.7 can be downloaded here.

If you are running a Windows machine and have installed Python 2.7.11, your next step is to download and install Pygame. The second library needed is SciPy, which is a set of external libraries for scientific computing in Python. Installing SciPy on Windows machines is a bit complicated: first, download NumPy and SciPy, then open up the Windows command prompt (here is how) and use Pip to install them:

Open the command prompt, change directory to where the files were downloaded and install the packages using Pip.


Expyriment, seaborn, and pandas can be downloaded and installed using Pip:
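
With Pip on your path, something like the following should do it (package names can also be installed one at a time):

```
pip install expyriment seaborn pandas
```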

Linux users can install all the packages using Pip alone, and Mac users can see here how to install the SciPy stack. If you think that the installation procedure is cumbersome, I suggest that you install a scientific Python distribution (e.g., Anaconda) that will get you both Python and the libraries needed (except for Expyriment).

How to write Python scripts

Python scripts are typically written in a text editor. Windows computers come with one called Notepad:

Notepad text editor can be used to write Python scripts (.py).


OS-X users can use TextEdit. Which text editor you end up using is not crucial, but you need to save your scripts with the file ending .py.

Writing a t-test function

Often a Python script uses modules/libraries, and these are imported at the beginning of the script. As previously mentioned, the t-test script is going to use SciPy, but we also need a maths function (i.e., the square root). These modules are going to be imported first in our script, as will become clear later on.
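
A minimal sketch of what those imports might look like (I use sqrt() from NumPy and the t distribution from SciPy to obtain the p-value later on):

```python
from numpy import sqrt        # square root
from scipy.stats import t     # t distribution, used later to obtain the p-value
```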

Before we start defining our function, I am briefly going to touch on what a function is and describe one of the datatypes we are going to use. In Python, a function is a block of organised code that can be used again later. The function we will create is going to be called paired_ttest and takes the arguments x and y. What this means is that we can send the scores from two different conditions (x and y) to the function. Our function requires the x and y variables to be of the datatype list. A list can store other values (e.g., in our case, the RTs from the incongruent and congruent trials). Each value stored in a list gets an index (note, in Python indices start at 0). For instance, if we have a list containing 5 difference scores we can get each of them individually by using the index at which they are stored. If we start the Python interpreter we can type the following code (see here if you are unsure how to start the Python interpreter):

List indices
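
For instance, a session in the interpreter could look like this (the difference scores are just made-up numbers):

```python
>>> differences = [2, 5, 1, 3, 4]   # a list with five difference scores
>>> differences[0]                  # the first value is stored at index 0
2
>>> differences[4]                  # the fifth (last) value is stored at index 4
4
```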

Returning to the function we are going to write, I follow this formula for the paired sample t-test:

t = d̄ / (s_d / √n)

Basically, d̄ (“d-bar”; the d with the line above) is the mean difference between the two scores, s_d is the standard deviation of the differences, and n is the sample size.

Creating our function

Now we go on with defining the function in our Python script (i.e., def is what tells Python that the code on the following lines is part of the function). Our function needs to calculate the difference score for each subject. Here we first create a list (i.e., di on line 5). We also need to know the sample size, and we can obtain that by getting the length of the list x (using the function len()). Note, here another datatype, int, is used. Int is short for integer and stores whole numbers. Also worth noting here is that di and n are indented. In Python, indentation is used to mark where certain code blocks start and stop.
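
The beginning of the function might look like the sketch below (the exact line numbers of the original script may differ slightly):

```python
def paired_ttest(x, y):
    # x and y are lists with one score per subject
    di = []          # will hold each subject's difference score
    n = len(x)       # sample size (an int)
```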

Next we use a Python loop (e.g., line 7 below). A loop is typically used when we want to repeat something n number of times. To calculate the difference score we take each subject’s score in the x condition and subtract the score in the y condition from it (line 8). Here we use the list indices (e.g., x[i]). That is, i is an integer starting at 0 and going up to n − 1, so the first repetition of the loop will get the first subject’s scores (i.e., index 0). The average difference score is now easy to calculate. It is just the sum of all difference scores divided by the sample size (see line 10).
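
A sketch of the loop and the mean difference (d_bar is my name for d̄):

```python
    for i in range(n):
        di.append(x[i] - y[i])      # difference score for subject i
    d_bar = float(sum(di)) / n      # mean difference score
```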

Note, here we use another datatype, float. The float type represents real numbers and is stored with a decimal point. In Python 2.7 we need to do this because dividing two integers results in floor division (the result is rounded down to a whole number).

In the next part of our t-test function we are going to calculate the standard deviation. First, a float (std_di) is created by using a dot after the digit (i.e., 0.). The script continues by looping through the difference scores and adding each subject’s squared deviation from the average difference (i.e., d − d̄) to the std_di variable. In Python, squaring is done with the operator “**” (see line 14). Finally, the standard deviation is obtained by taking the square root (using sqrt() from NumPy) of the value obtained in the loop.
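
Something along these lines; note that I divide by n − 1 (the usual sample standard deviation), which is an assumption about the original script:

```python
    std_di = 0.                          # a float, thanks to the trailing dot
    for d in di:
        std_di += (d - d_bar) ** 2       # squared deviation from the mean difference
    std_di = sqrt(std_di / (n - 1))      # standard deviation of the differences
```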

The next statistic to be calculated is the standard error of the mean (line 16). Finally, on lines 17 and 18 we can calculate the t-value and p-value. On line 20 we add all this information to a dictionary, a datatype that can store other objects. The dictionary stores objects linked to keys (e.g., “T-value” in our example below).
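
A sketch of the last part of the function (the dictionary keys besides “T-value” are my own choices):

```python
    sem = std_di / sqrt(n)                      # standard error of the mean difference
    t_value = d_bar / sem                       # t statistic
    p_value = 2 * t.sf(abs(t_value), n - 1)     # two-tailed p-value from the t distribution
    return {'T-value': t_value, 'P-value': p_value,
            'Degrees of freedom': n - 1}
```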

The complete script, with an example how to use it, can be found here.
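
As a quick, made-up usage example:

```python
incongruent = [650, 640, 700, 655, 690]   # hypothetical mean RTs (ms), one per subject
congruent = [560, 570, 590, 555, 580]
print(paired_ttest(incongruent, congruent))
```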

Flanker task in Expyriment

In this part of the post we are going to create the Flanker task using a  Python library called Expyriment (Krause & Lindemann, 2014).

First, we import expyriment.
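
That is simply:

```python
import expyriment
```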

We continue with creating variables that contain basic settings of our Flanker task. As can be seen in the code below we are going to have 4 trials per block, 6 blocks, and durations of 2000ms. Our flanker stimuli are stored in a list and we have some task instructions (note “\n” is the newline character and “\” just tells the Python interpreter that the string continues on the next line).
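
A sketch of what these settings could look like (the variable names and the exact instruction text are my own; only the values are given above):

```python
n_trials_block = 4          # trials per block
n_blocks = 6                # number of blocks
durations = 2000            # duration (ms) of fixation cross and flanker stimuli

flanker_stimuli = ["<<<<<", ">>>>>", "<<><<", ">><>>"]

instructions = "Respond to the direction of the MIDDLE arrow.\n" \
               "Press 'x' if it points left and 'm' if it points right.\n" \
               "Respond as quickly and accurately as possible.\n" \
               "Press the spacebar to start."
```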

It may be worth pointing out that most Python libraries and modules have a set of classes. The classes contain a set of methods. So what is a “class” and what is a “method”? Essentially, a class is a template for creating an object. An object can be said to be a “storage” of both variables and functions. Returning to our example, we now create an object of the Experiment class. This object will, for now, contain the task name (“Flanker Task”). The last line of the code block initialises our object (i.e., our experiment).
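
In code, that could be:

```python
experiment = expyriment.design.Experiment(name="Flanker Task")
expyriment.control.initialize(experiment)
```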

We now carry on with the design of our experiment. First, we start with a for loop. In the loop we go from the first block to the last. Each block is created and temporarily stored in the variable temp_block.
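
For example:

```python
for block in range(n_blocks):
    temp_block = expyriment.design.Block(name="block " + str(block + 1))
```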

Next we are going to create our trials for each block. First, in the loop we create a stimulus. Here we use the list created previously (i.e., flanker_stimuli). We can obtain one object (e.g., “<<<<<”) from the list by using the trial number (4 stimuli in the list and 4 trials per block) as the index. Remember, in our loop each trial will be a number from 0 to n − 1 (n = number of trials). After a stimulus is created we create a trial and add the stimulus to the trial.
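
A sketch of the trial loop (nested inside the block loop above):

```python
    for trial in range(n_trials_block):
        curr_stim = flanker_stimuli[trial]                    # e.g., "<<<<<"
        stimulus = expyriment.stimuli.TextLine(text=curr_stim, text_size=40)
        temp_trial = expyriment.design.Trial()
        temp_trial.add_stimulus(stimulus)
```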

Since the Flanker task can have both congruent (e.g., “<<<<<”) and incongruent trials (“<<><<”) we want to store this. The conditional statement (“if”) just checks whether the first character of the stimulus string (e.g., “<”) occurs as many times as the string is long. Note, count is a method of string (and list) objects and counts the occurrences of something in them. If the length and the number of arrows are the same, the trial type is congruent:
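
In code, that check might look like this:

```python
        # Congruent if the first character (e.g., "<") occurs as many times
        # as the stimulus string is long, i.e., all arrows point the same way
        if curr_stim.count(curr_stim[0]) == len(curr_stim):
            trialtype = "congruent"
        else:
            trialtype = "incongruent"
```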

Next we need to create the response mapping. In the tutorial example we are going to use the keys x and m as response keys. In Expyriment all character keys are represented as numbers. At the end of the code block we add the congruent/incongruent and response mapping information to our trial, which, finally, is added to our block.
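
A sketch of the response mapping and of adding the information to the trial and block (the factor names are my own):

```python
        # The middle (third) arrow determines the correct key:
        # left-pointing -> 'x', right-pointing -> 'm'
        if curr_stim[2] == "<":
            correctresp = expyriment.misc.constants.K_x
        else:
            correctresp = expyriment.misc.constants.K_m

        temp_trial.set_factor("trialtype", trialtype)
        temp_trial.set_factor("correctresp", correctresp)
        temp_block.add_trial(temp_trial)
```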

At the end of the block loop we use the method shuffle_trials to randomise our trials and the block is, finally, added to our experiment.
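
Still inside the block loop:

```python
    temp_block.shuffle_trials()
    experiment.add_block(temp_block)
```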

Our design is now finalised. Expyriment will also save our data (lucky us, right?!) and we need to specify the column names for the data files. Expyriment also has a stimulus class (FixCross) for creating a fixation cross, and we want one!
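
Something like the following (the column names are the ones listed further down):

```python
experiment.add_data_variable_names(["block", "correctresp", "response",
                                    "trial", "RT", "accuracy", "trialtype"])
fixation_cross = expyriment.stimuli.FixCross()
```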

We are now ready to start our experiment and present the task instructions on the screen. The last line makes the task wait for the spacebar to be pressed:
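
A sketch, assuming the instruction text from earlier:

```python
expyriment.control.start()
expyriment.stimuli.TextScreen("Flanker task", instructions).present()
experiment.keyboard.wait(expyriment.misc.constants.K_SPACE)
```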

The subjects will be prompted with this text:

Expyriment task instructions for the Flanker task


After the spacebar is pressed the task starts. It starts with the trials in the first block, of course. In each trial the stimulus is preloaded, a fixation cross is presented for 2000ms (experiment.clock.wait(durations)), and then the flanker stimuli are presented.
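
The presentation part of the trial loop could look roughly like this:

```python
for block in experiment.blocks:
    for trial in block.trials:
        trial.preload_stimuli()              # preload so presentation is not delayed
        fixation_cross.present()             # fixation cross ...
        experiment.clock.wait(durations)     # ... for 2000 ms
        trial.stimuli[0].present()           # then the flanker stimulus
```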

Fixation cross is first presented for 2000ms followed by flanker stimuli (2000ms).


The next line to be executed is line 52, and the code on that line resets a timer so that we can use it later. On line 54 we get the response (key) and RT using the keyboard class and its wait method. We use the arguments keys (K_x and K_m are our keys, remember) and duration (2000 ms). Here we use the clock and subtract the time elapsed since we reset it from durations (line 57). This has to be done because the program waits for the subject to press a key (i.e., “x” or “m”), and the next trial would otherwise start as soon as a key is pressed.
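
A sketch of the timing and response collection (continuing inside the trial loop):

```python
        experiment.clock.reset_stopwatch()
        key, rt = experiment.keyboard.wait(
            keys=[expyriment.misc.constants.K_x, expyriment.misc.constants.K_m],
            duration=durations)
        # Wait out the remainder of the 2000 ms stimulus duration, so that a
        # fast response does not shorten the trial
        experiment.clock.wait(durations - experiment.clock.stopwatch_time)
```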

Accuracy is determined using the if and else statements. That is, the actual response is compared to the correct response. After the accuracy has been determined, we add the variables in the order we previously created them (i.e., “block”, “correctresp”, “response”, “trial”, “RT”, “accuracy”, “trialtype”).
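
Roughly (the factor names follow the column names defined earlier):

```python
        if key == trial.get_factor("correctresp"):
            accuracy = 1
        else:
            accuracy = 0
        experiment.data.add([block.name, trial.get_factor("correctresp"), key,
                             trial.id, rt, accuracy, trial.get_factor("trialtype")])
```
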
Finally, when the 4 trials of a block have been run, we implement a short break (i.e., 3000 ms) and present some text notifying the participant.

The experiment ends with thanking the participants for their contribution:
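
For instance:

```python
expyriment.control.end(goodbye_text="Thank you for participating!", goodbye_delay=2000)
```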

A recording of the task can be seen in this video:

That was how to create a Flanker task using Expyriment. For a better overview of the script as a whole see this GitHub gist. Documentation of Expyriment can be found here: Expyriment docs. To run a Python script you can open up the command prompt and change to the directory where the script is (using the command cd):

Running command prompt to execute the Flanker Script.

Data processing and analysis

Assume that we have collected data using the Flanker task and now we want to analyse our data. Expyriment saves the data of each subject in files with the file ending “.xpd”. Conveniently, the library also comes packed with methods that enable us to preprocess our data.

We are going to create a comma-separated values file (.csv) that we will later use to visualise and analyse our data. Let’s create a script called “data_processing.py”. First, we import a module called os, which lets us find the current directory (os.getcwd()) and, by using os.sep, makes our script compatible with Windows, Linux, and OS-X. The variable datafolder stores the path to the data. In the last line, we use data_preprocessing to write a .csv file (“flanker_data.csv”) from the files starting with the name “flanker” in our data folder. Note, the Python script needs to be run in the same directory as the ‘data’ folder. Another option is to change the datafolder variable (e.g., datafolder = ’path_to_where_the_data_is’).
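
A sketch of what data_processing.py might contain:

```python
import os
from expyriment.misc import data_preprocessing

datafolder = os.getcwd() + os.sep + "data"   # path to Expyriment's 'data' folder

# Concatenate all .xpd files whose names start with "flanker" into one .csv file
data_preprocessing.write_concatenated_data(datafolder, "flanker",
                                           output_file="flanker_data.csv")
```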

Descriptive statistics and visualising

Each subject’s data files are now put together in “flanker_data.csv” and we can start our analyses. Here we are going to use the libraries Pandas and Seaborn. Pandas is very handy for creating data structures; that is, it makes working with our data much easier. In the code block below we import Pandas as pd and Seaborn as sns, which makes using them a bit easier. The third line makes our plots white and without a grid.
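
That is:

```python
import pandas as pd
import seaborn as sns

sns.set_style("white")   # white background, no grid
```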

Now we can read our csv-file (‘flanker_data.csv’). When reading in our data we need to skip the first row (“# -*- coding: UTF-8 -*-” is of no use to us!):

Concatenated data file (.csv)


Reading in data from the data file and skipping the first row:
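
For example:

```python
df = pd.read_csv("flanker_data.csv", skiprows=1)   # skip the coding comment in the first row
```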

Pandas makes descriptive statistics quite easy as well. Since we are interested in the two types of trials, we group them. For this example, we are only going to look at the RTs:
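
Something like the following (depending on your Pandas version you may need to append .unstack() to get the wide layout shown below):

```python
df.groupby("trialtype")["RT"].describe()
```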

             count        mean        std    min     25%    50%    75%    max
trialtype
congruent      360  560.525000  36.765310  451.0  534.75  561.0  584.0  658.0
incongruent    360  642.088889  55.847114  488.0  606.75  639.5  680.0  820.0

One way to obtain quite a lot of information on our two trial types and the RTs is to do a violin plot:
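
A sketch using Seaborn (Matplotlib is used to show the figure):

```python
import matplotlib.pyplot as plt

sns.violinplot(x="trialtype", y="RT", data=df)
plt.show()
```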

Violin plot of RT in the incongruent and congruent trials.

Testing our hypothesis

Just a brief reminder: we are interested here in whether people can suppress the irrelevant information (i.e., the flankers pointing in another direction than the target). We use the paired samples t-test to see if the difference in RT between incongruent and congruent trials is different from zero.

First, we need to aggregate the data, and we start by grouping our data by trial type and subject number. We can then get the mean RT for the two trial types:
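
For instance (the name of the subject column is an assumption; check the header of flanker_data.csv):

```python
aggregated = df.groupby(["trialtype", "subject_id"])["RT"].mean()
```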

Next, we are going to take the RTs (values, in the script) and assign them to x and y. Remember, the t-test function we started off with takes two lists containing data. The last line in the code block below calls the function, which returns the statistics needed (i.e., t-value, p-value, and degrees of freedom).
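
A sketch, assuming the paired_ttest function from the first part of the post is available in the same script (or imported):

```python
x = aggregated["incongruent"].values.tolist()
y = aggregated["congruent"].values.tolist()

t_value = paired_ttest(x, y)
```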

Finally, before printing the results we may want to round the values. We use a for loop and go through each key and value in our dictionary (i.e., t_value). On line 7 we then round our numbers.
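
Roughly:

```python
for key, value in t_value.items():
    t_value[key] = round(value, 3)
print(t_value)
```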

Printing the variable t_value (line 8 above) renders the following output:

We can conclude that there was a significant difference in the RT for incongruent (M =  642.08, SD = 55.85) and congruent (M = 560.53, SD = 36.52) trials; t(29) = 27.358, p < .001.

That was how to use Python from data collection to analysing data. If you want to play around with the scripts, data files for 30 simulated subjects can be downloaded here: data_flanker_expy.zip. All the scripts described above, as well as the script to simulate the subjects (i.e., run the task automatically), can be found on this GitHub Gist. Feel free to use the Flanker task above. If you do, I would suggest that you add a couple of practice trials.

Resources

As previously mentioned, the Python community is large and helpful. There are so many resources to turn to, both for learning Python and for finding help, that it can be hard to know where to start. Therefore, the end of this post contains a few of the Python resources I have found useful or interesting. All resources I list below are free.

Learning Python

Python in Psychology:

Python distributions

If you think that installing Python packages seems complicated and time consuming, there are a number of distributions. These distributions aim to simplify package management. That is, when you install one of them you will get many of the packages that you would otherwise have to install one by one. There are many distributions (see here), but I have personally used Anaconda and Python(x, y).

Data Collection

  • PsychoPy (Peirce, 2007) – offers both a GUI and an API that you can use to program your experiments. You will find some learning/teaching resources on the homepage
  • Expyriment – the library used in the tutorial above
  • OpenSesame (Mathôt, Schreij, & Theeuwes, 2012) – offers both Python scripting (mainly inline scripts) and a GUI for building your experiments. You will find examples and tutorials on OpenSesame’s homepage.
  • PyGaze (Dalmaijer, Mathôt, & Van der Stigchel, 2014) – a toolbox for eye-tracking data and experiments.

Statistics

  • Pandas – Python data analysis (descriptive, mainly) toolkit
  • Statsmodels – Python library enabling many common statistical methods
  • pypsignifit – Python toolbox for fitting psychometric functions (Psychophysics)
  • MNE – For processing and analysis of electroencephalography (EEG) and magnetoencephalography (MEG) data

Getting help

  • Stackoverflow – On Stackoverflow you can ask and answer questions concerning almost every programming language. Questions are tagged with the programming language. Also, some of the developers of PsychoPy are active, and you can tag your questions with PsychoPy.
  • User groups for PsychoPy and Expyriment can be found on Google Groups.
  • OpenSesame Forum – e.g., the subforums for PyGaze and, most importantly, Expyriment.

That was it; I hope you have found my post valuable. If you have any questions you can either leave a comment here, on my homepage or email me.

References

Colcombe, S. J., Kramer, A. F., Erickson, K. I., & Scalf, P. (2005). The implications of cortical recruitment and brain morphology for individual differences in inhibitory function in aging humans. Psychology and Aging, 20(3), 363–375. http://doi.org/10.1037/0882-7974.20.3.363

Dalmaijer, E. S., Mathôt, S., & Van der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods, 46(4), 913–921. doi:10.3758/s13428-013-0422-2

Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16(1), 143–149. doi:10.3758/BF03203267

Krause, F., & Lindemann, O. (2014). Expyriment: A Python library for cognitive and neuroscientific experiments. Behavior Research Methods, 46(2), 416-428. http://doi.org/10.3758/s13428-013-0390-6

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314–324. http://doi.org/10.3758/s13428-011-0168-7

Peirce, J. W. (2007). PsychoPy-Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8–13. http://doi.org/10.1016/j.jneumeth.2006.11.017

The Statistics Hell has expanded: An interview with Prof. Andy Field

Does the mention of the word “statistics” strike fear into your heart and send shivers down your spine? The results section of your thesis seeming like that dark place one should avoid at all cost? Heteroscedasticity gives you nightmares? You dread having to explain to someone what degrees of freedom are? What is the point of using ANOVA if we can do a series of t-tests? If any of these remind you of the pain of understanding statistics, or the dread of how much more lies ahead during your studies, when all you really want is someone to explain it in a humanly understandable way—look no further. Quite a few fellow students might tell you “You should go and look at Andy Field’s books. Now, at least, I understand stats”. The “Discovering Statistics Using …” books are a gentle, student-friendly introduction to statistics. Principles are introduced at a slow pace, with plenty of workable examples so that anyone with basic maths skills will be able to digest them. Now add a lens of humor and sarcasm that will have you giggling about statistics in no time!

There is a new book!

As JEPS has been excited about introducing Bayesian statistics into the lives of more psychology students (see here, here, and here for introductions, and here for software to play around with the Bayesian approach), the idea of a new book by Andy Field—whose work many of us love and wholeheartedly recommend—which incorporates this amazing approach was thrilling news.

We used this occasion to talk to Andy Field—who is he, what motivates him, and what are his thoughts on the future of psychology?

With your new book, you expand the Statistics hell with Bayesian statistics. Why is this good news for students?


There has, for a long time, been an awareness that the traditional method of testing hypotheses (null hypothesis significance testing, NHST) has its limitations. Some of these limitations are fundamental, whereas others are more about how people apply the method rather too blindly. Bayesian approaches offer an alternative, and arguably, more logical way to look at estimation and hypothesis testing. It is not without its own critics though, and it has its own set of different issues to consider. However, it is clear that there is a groundswell of support for Bayesian approaches, and that people are going to see these methods applied more and more in scientific papers. The problem is that Bayesian methods can be quite technical, and a lot of books and papers are fairly impenetrable. It can be quite hard to make the switch (or even understand what switch you would be making).

My new book essentially tries to lay some very basic foundations. It’s not a book about Bayesian statistics, it’s a book about analysing data and fitting models, and I explain both the widely used classical methods and also some basic Bayesian alternatives (primarily Bayes factors). The world is not going to go Bayesian overnight, so what I’m trying to do is to provide a book that covers the material that lecturers and undergraduates want covered, but also encourages them to think about the limitations of those approaches and the alternatives available to them. Hopefully, readers will have their interest piqued enough to develop their understanding by reading more specifically Bayesian books. To answer the question then, there are two reasons why introducing Bayesian approaches is a good thing for students: (1) it will help them to understand more fully what options are available to them when they analyse data; and (2) published research will increasingly use Bayesian methods, so it will help them to make sense of what other scientists are doing with their data.

Your books are the savior for many not-so-technical psychology students. How did you first come up with writing your classic ‘Discovering Statistics with ….’ book?

Like many PhD students I was teaching statistics and SPSS to fund my PhD. I used to enjoy the challenge of trying to come up with engaging examples, and generally being a bit silly/off the wall. The student feedback was always good, and at the time I had a lot of freedom to produce my own teaching materials. At around that time, a friend-of-a-friend, Dan Wright (a cognitive psychologist who was at the time doing a postdoc at City University in London), was good friends with Ziyad Marar, who now heads the SAGE publications London office but at the time was a commissioning editor. Dan had just published a stats book with SAGE and Ziyad had commissioned him to help SAGE to find new authors. I was chatting to Dan during a visit to City University, and got onto the subject of me teaching SPSS and my teaching materials and whatever, and he said ‘Have you ever thought of turning those into a book?’ Of course I hadn’t because books seemed like things that ‘proper’ academics did, not me. Subsequently Dan introduced me to Ziyad, who wanted to sign me up to do the book. I was in such a state of disbelief that anyone would want to publish a book written by me that I blindly agreed. The rest is history!

As an aside, I started writing it before completing my PhD although most of it was done afterwards, and I went so over the word limit that SAGE requested that I do the typesetting myself because (1) they didn’t think it would sell much (a reasonable assumption given I was a first-time author); and (2) this would save a lot of production costs. Essentially they were trying to cut their losses (and on the flip side, this also allowed me to keep the book as it was and not have to edit it to half the size!). It is a constant source of amusement to us all how much we thought the book would be a massive failure! I guess the summary is, it happened through a lot of serendipitous events. There was no master plan. I just wrote from the heart and hoped for the best, which is pretty much what I’ve done ever since.

Questionable research practices and specifically misuse of statistical methods has been a hot topic in the last years. In your opinion, what are the critical measures that have to be taken in order to improve the situation?

Three things spring immediately to mind: (1) taking the analysis away from the researcher; (2) changing the incentive structures; (3) a shift towards estimation. I’ll elaborate on these in turn.

Psychology is a very peculiar science. It’s hard to think of many other disciplines where you are expected to be an expert theoretician in a research area and also a high-level data analyst with a detailed understanding of complex statistical models. It’s bizarre really. The average medic, for example, when doing a piece of research will get expert advice from a trials unit on planning, measurement, randomization and once the data are in they’ll be sent to the biostats unit to fit the models. In other words, they are not expected to be an expert in everything: expertise is pooled. One thing, then, that I think would help is if psychologists didn’t analyse their own data but instead they were sent to a stats expert with no vested interest in the results. That way data processing and analysis could be entirely objective.

The other thing I would immediately change in academia is the incentive structures. They are completely ****** up. The whole ‘publish or perish’ mentality does nothing but harm science and waste public money. The first thing it does is create massive incentives to publish anything regardless of how interesting it is, but it also incentivises ‘significance’ because journals are far more likely to publish significant results. It also encourages (especially in junior scientists) quantity over quality, and it fosters individual rather than collective motivations. For example, promotions are all about the individual demonstrating excellence rather than them demonstrating a contribution to a collective excellence. To give an example, in my research area of child anxiety I frequently have the experience that I disappear for a while to write a stats book and ignore completely child anxiety research for, say, 6 months. When I come back and try to catch up on the state of the art, hundreds, possibly thousands of new papers have come out, mostly small variations on a theme, often spread across multiple publications. The signal to noise ratio is absolutely suffocating. My feeling on whether anything profound has changed in my 6 months out of the loop is ‘absolutely not’ despite several hundred new papers. Think of the collective waste of time, money and effort to achieve ‘absolutely not’. It’s good science done by extremely clever people, but everything is so piecemeal that you can’t see the wood for the trees. The meaningful contributions are lost. Of course I understand that science progresses in small steps, but it has become ridiculous, and I believe that the incentive structures mean that many researchers prioritise personal gain over science. Researchers are, of course, doing what their universities expect them to do, but I can’t help but feel that psychological science would benefit from people doing fewer studies in bigger teams to address larger questions. Even at a very basic level this would mean that sample sizes would increase dramatically in psychology (which would be a wholly good thing). For this to happen, the incentive structures need to change. Value should be maximised for working in large teams, on big problems, and for saving up results to publish in more substantial papers; contribution to grants and papers should also become more balanced regardless of whether you’re first author, last author or part of a team of 30 authors.

From a statistical point of view we have to shift away from ‘all or nothing thinking’ towards estimation. From the point of view of publishing science a reviewer should ask three questions: (1) is the research answering an interesting question that genuinely advances our knowledge?; (2) was it well conducted to address the question being asked – i.e. does it meet the necessary methodological standards?; and (3) what do the estimates of the effects in the model tell us about the question being asked? If we strive to answer bigger questions in larger samples then p-values really become completely irrelevant (I actually think they’re almost irrelevant anyway but …). Pre-registration of studies helps a lot because it forces journals to address the first two questions when deciding whether to publish, but it also helps with question 3 because by making the significance of the estimates irrelevant to the decision to publish it frees the authors to focus on estimation rather than p-values. There are differing views of course on how to estimate (Classical vs Bayes, confidence intervals vs. credibility intervals etc.) but at heart, I think a shift from p-values to estimation can only be a good thing.

At JEPS we are offering students experience in scientific publishing at an early stage of their career. What could be done at universities to make students acquainted with the scientific community already during their bachelor- or master studies?

I think that psychology, as a discipline, embeds training in academic publishing within degree and PhD programs through research dissertations and the like (although note my earlier comments about the proliferation of research papers!). Nowadays though scientists are expected to engage with many different audiences through blogs, the media and so on, we could probably do more to prepare students for that by incorporating assignments into degrees that are based on public engagement. (In fact, at Sussex – and I’m sure elsewhere –  we do have these sorts of assignments).

Statistics is the predominant modeling language in almost any science, and therefore sufficient knowledge about it is a prerequisite for doing any empirical work. Despite this fact, why do you think many psychology students are reluctant to learn statistics? What could be done in education to change this attitude? How to keep it entertaining while still getting stuff done?

This really goes back to my earlier question of whether we should expect researchers to be data analysis experts. Perhaps we shouldn’t, although if we went down the route of outsourcing data analysis then a basic understanding of processing data and the types of models that can be fit would help statisticians to communicate what they have done and why.

There are lots of barriers to learning statistics. Of course anxiety is a big one, but it’s also just a very different thing to psychology. It’s a bit like putting a geography module in an English literature degree and then asking ‘why aren’t the students interested in geography?’. The answer is simple: it’s not English literature, it’s not what they want to study. It’s the same deal. People doing a psychology degree are interested in psychology, if they were interested in data they’d have chosen a maths or stats degree. The challenge is trying to help students to realize that statistical knowledge gives you power to answer interesting questions. It’s a tool, not just in research, but in making sense in an increasingly data-driven world. Numeracy and statistics, in particular, has never been more important than it is now because of the ease with which data can be collected and, therefore, the proliferation of contexts in which data is used to communicate a message to the public.

In terms of breaking down those barriers I feel strongly that teaching should be about making your own mark. What I do is not ‘correct’ (and some students hate my teaching) it’s just what works for me and my personality. In my previous books I’ve tried to use memorable examples, use humour, and I tend to have a naturally chatty writing style. In the new book I have embedded all of the academic content into a fictional story. I’m hoping that the story will be good enough to hook people in and they’ll learn statistics almost as a by-product of reading the story. Essentially they share a journey with the main character in which he keeps having to learn about statistics. I’m hoping that if the reader invests emotionally in that character then it will help them to stay invested in his journey and invested in learning. The whole enterprise is a massive gamble, I have no idea whether it will work, but as I said before I write from my heart and hope for the best!

Incidentally if you want to know more about the book and the process of creating it, see http://discoveringstatistics.blogspot.co.uk/2016/04/if-youre-not-doing-something-different.html

What was your inspiration for the examples in the book? How did you come up with Satan’s little SPSS helper and other characters? How did you become the gatekeeper of the statistics hell?


The statistics hell thing comes from the fact that I listen to a lot of heavy metal music and many bands have satanic imagery. Of course, in most cases it’s just shock tactics rather than reflecting a real philosophical position, but I guess I have become a bit habituated to it. Anyway, when I designed my website (which desperately needs an overhaul incidentally) I just thought it would be amusing to poke fun at the common notion that ‘statistics is hell’. It’s supposed to be tongue-in-cheek.

As for characters in the SPSS/R/SAS book, they come from random places really. Mostly the reasons are silly and not very interesting. A few examples: the cat is simply there to look like my own cat (who is 20 now!); the Satan’s slave was because I wanted to have something with the acronym SPSS (Satan’s Personal Statistics Slave); and Oliver Twisted flags additional content so I wanted to use the phrase ‘Please sir! Can I have some more …’ like the character Oliver Twist in the Dickens novel. Once I knew that, it was just a matter of making him unhinged.

The new book, of course, is much more complicated because it is a fictional story with numerous characters with different appearances and personalities. I have basically written a novel and a statistics textbook and merged the two. Therefore, each character is a lot deeper than the faces in the SPSS book – they have personalities, histories, emotions. Consequently, they have very different influences. Then, as well as the characters, the storyline and the fictional world in which the story is set were influenced by all sorts of things. I could write you a thesis on it! In fact, I have a file on my hard drive of ‘bits of trivia’ about the new book where I kept notes on why I did certain things, where names or personalities came from, what influenced the appearance of characters or objects and so on. If the book becomes a hit then come back to me and ask what influenced specific things in the book and I can probably tell you! I also think it’s nice to have some mystery and not give away too much about why the book turned out the way it did!

If you could answer any research question, what would it be?

I’d like to discover some way to make humans more tolerant of each other and of different points of view, but possibly even more than that I’d like to discover a way that people could remain at a certain age until they felt it was time to die. Mortality is the cloud over everyone’s head, but I think immortality would probably be a curse because I think you get worn down by the changing world around you. I like to think that there’s a point where you feel that you’ve done what you wanted to do and you’re ready to go. I’d invent something that allows you to do that – just stay physically at an age you liked being, and go on until you’ve had enough. There is nothing more tragic than a life ended early, so I’d stop that.

Thank you for taking the time for this interview and sharing your insights with us. We have one last question: On a 7-point Likert scale, how much do you like 7-point Likert scale?

It depends which way around the extremes are labelled …. ;-)

 

For more information on ‘An adventure in statistics: the reality enigma’ see:

 

 

 

Editor’s Pick: Our favorite MOOCs

There used to be a time when students could attend classes at their university or in their vicinity – and that was it. Lately, this geospatial restriction has vanished with the introduction of massive open online courses (MOOCs). This format of online courses is part of the “open education” idea, offering everyone with an internet connection an opportunity to participate in various courses, presented by more and less well-known institutions and universities. The concept is more or less the same for all courses: anyone can join, and lectures are available in the form of videos and lecture notes. During the course, whether it is fixed-date or self-paced (as in you deciding when to complete tasks), you will need to complete quizzes, exams, and/or written projects if you wish to finish the course. In less than 10 years, this idea has grown to include millions of users, hundreds of countries, and more than a dozen universities around the world, and it continues to grow.

A few years back, most courses were free and offered certificates as a reward for course completion. Nowadays, you can participate in most courses offered, but if you wish to get a certificate, there is a fee. As with university courses, professors or assistants are available for your questions and there is a forum for interacting with other people enrolled. In case you aren’t confident you will be able to fully understand a course in English, some of the popular courses come with subtitles. If you fall in love with the format and would like to contribute, Coursera offers the possibility of you becoming a translator.

Lifelong learning is the norm nowadays. By taking a MOOC, you can gain new skills and knowledge in any area of interest or keep up with the latest trends in your field. In case you are considering a change in your career or are going to start university soon, it is a nice way to sneak a peek into what the topic entails, with all the time flexibility you’d like to have and from the comfort of wherever you are.

The following courses are grouped into categories, from general introductions to specific topics that enhance your methodological toolbox. Apart from the courses the JEPS team can personally recommend, you can find a list of currently available MOOCs at https://www.mooc-list.com/
 
Introduction to psychology – University of Toronto
If you are considering studying psychology or are just interested in psychology in general and are looking for a nice and comprehensive introduction, this course is yours. It covers all topics and gives you a good overview of how psychology came to be, what fields it covers, and a student favorite—mental illness. The lectures are easy to follow, cover the main topics any good textbook would cover in a more interactive and interesting way, and include the most famous experiments in psychology.

Writing in the Sciences – Stanford University
A truly excellent course that starts by explaining how to improve punctuation, sentences, and paragraphs to communicate ideas as clearly as possible. It also offers incredibly helpful models for how to structure your research paper. The course makes extensive use of examples so that you can apply the techniques immediately to your own work. This course will change how you write your thesis!

Understanding the Brain: The Neurobiology of Everyday Life – University of Chicago
The brain is a complex system and its neurobiology is no exception. This course takes you through all the important parts of the nervous system (beyond the brain itself) involved in our everyday functioning. Each lecture includes very well explained theory and physiology behind the topic at hand, accompanied by very interesting examples and real-life cases to give you a better understanding. Highly recommended is the lecture on strokes: from their origins and what happens to the brain during one, to the consequences for a person’s functioning.

The Brain and Space – Duke University
If you have ever wondered how our brain perceives the space around us and integrates the input we get from our senses into a bigger picture, this course will give you a very detailed image of this complex phenomenon. Even though a general understanding of neuroscience and perception is recommended, the material can be understood with some help from Wikipedia to explain any unknown concepts. Everything you wanted to know about vision, spatial orientation, and perception in general is here.

Programming for Everybody (Getting started with Python) – University of Michigan
The first part of a five-part course on Python programming, this is a very nice and slow-paced introduction to the world of programming. As no previous knowledge is required, everything is explained in an easily understandable manner with a lot of examples. The shining star of this course is the professor himself, whose funny remarks make the daunting task of writing code a fun experience. In case of any doubts, there is a big and very active community on the forum ready to help at any moment.

Machine learning – Stanford University
A great introductory course in machine learning. It starts with linear regression and quickly advances to more advanced topics such as model selection, neural networks, support vector machines, and large-scale machine learning. The course both gives a first overview of the field and teaches you hands-on machine learning skills you can immediately apply to your research!

Calculus single variable (Five-part course) – University of Pennsylvania
Most probably the best calculus course in the world. It only requires high-school math knowledge and from there on builds up a deep understanding of calculus using fantastic graphics and many intuitive examples. A challenging course that is worth every minute spent on it!

Introduction to Neuroeconomics: How the Brain Makes Decisions – Higher School of Economics
As neuroeconomics and psychology have been gaining a lot of attention recently, this course gives a comprehensive overview of the foundations of this hot new field and its research. As the course is highly interdisciplinary, expect to learn about neuroanatomy, psychological processes, and principles of economics merging into one theory of decision-making. From bees, monkeys, and game theory to why we dislike losing above all and group dynamics: this course covers it all.

Statistical Learning – Stanford University
An outstanding statistics course taught by two of the world’s most famous statisticians, Trevor Hastie and Rob Tibshirani. They present tough statistical concepts in an incredibly intuitive manner and provide an R-lab after each topic to make sure that you are able to apply new knowledge immediately. They provide both of their textbooks free for download, one heavier on the math, the other more applied.

The Addicted Brain – Emory University
Navigating the modern world includes being exposed to (mis)information about various psychoactive substances. Since information backed by scientific research is less biased and more solid, this is the place to learn about the topic. The course goes through all the major addictive substances – from the legal ones like alcohol, nicotine, and caffeine, through medications, to illegal substances – along with the ways in which they change the brain and affect behavior. Lastly, two lectures cover the risks of addiction along with treatments and recent policy developments.

Drugs and the Brain – CALTECH
Building on the basics of “the Addicted Brain” (I suggest taking that one prior to this one), the course goes more in depth into what happens on a molecular level in the brain the moment a drug is taken. A big part of the course requires learning the principles of psychopharmacology, which I would wholeheartedly recommend for anyone who either wants to be a clinical psychologist or is interested in how drugs for various psychiatric diagnoses work. The course goes beyond the scope of the more basic previously mentioned course by covering neurodegenerative diseases we often hear about but aren’t really sure what they entail, along with serious headaches or migraines.

Let us know if you found this helpful or if you have any tips. Maybe you’ll find some inspiration to take a course yourself while browsing the ones we have mentioned. If you have a suggestion or previous experience with this, feel free to comment below!

JEPS introduces Registered Reports: Here is how it works

For more than six years, JEPS has been publishing student research, both in the form of classic Research Articles and Literature Reviews. As of April 2016, JEPS offers another publishing format: Registered Reports. In this blog post we explain what Registered Reports are, why they could be interesting for you as a student, and how the review process works.

What are Registered Reports?

Registered Reports are a new form of research article, in which the editorial decision is based on peer review that takes place before data collection. The review process is thereby divided into two stages: first, your research question and methodology are evaluated, while the data is yet to be collected. In case your Registered Report gets in-principle accepted, you are guaranteed to get your final manuscript published once the data is collected – irrespective of your findings. The second step of the review process then only consists of checking whether you stuck to the methodology you proposed in the Registered Report.

The format of Registered Reports alleviates many problems associated with the current publishing culture, such as publication bias (see also our previous post): for instance, the decision whether the manuscript gets published is independent of the outcome of statistical tests, and therefore publication bias is ruled out. Also, you have to stick to the hypotheses and methodology in your Registered Report, and therefore a clear line between exploratory and confirmatory research is maintained.

How does the review process work exactly?

You submit a manuscript consisting of the motivation (introduction) of your research and a detailed description of your hypotheses and the methodology and analysis you intend to use to investigate your hypotheses. Your research plan will then be reviewed by at least two researchers who are experts in your field of psychology.

Registered Reports Pipeline

Reviewers might ask for revisions of your proposed methodology or analysis. Once all reviewer concerns have been sufficiently addressed, the Registered Report is accepted. This means that you can now collect your data and, if you don’t make important changes to your hypotheses and methodology, you are guaranteed publication of your final manuscript, in a format very similar to our Research Articles. Any changes have to be clearly indicated as such, and they will be examined in the second stage of the review process.

 

Why are Registered Reports interesting for you as a student?

First, you get feedback about your project from experts in your field of psychology. It is very likely that this feedback will make your research stronger and improve your design. This avoids the situation where you have collected your data but then realize during the review process that your methodology was not watertight. Therefore, Registered Reports offer you the chance to rule out methodological problems before collecting the data, possibly saving a lot of headaches afterwards, and to have your publication assured.

Second, it takes away the pressure to get “good results”, as your results are published regardless of the outcome of your analysis. Further, the fact that your methodology was reviewed before data collection allows null results to be given more weight. Normally, Registered Reports also include control conditions that help with interpreting any (null) results.

Lastly, Registered Reports enable you to be open and transparent about your scientific practices. When your work is published as a Registered Report, there is a clear separation between confirmatory and exploratory data analysis. While you can change your analysis after your data collection is completed, you have to declare and explain the changes. This adds credibility to the conclusions of your paper and increases the likelihood that future research can build on your work.

And lastly, some practical points

Before you submit, you therefore need to think about, in detail, the research question you want to investigate and how you plan to analyse your data. This includes a description of your procedures in sufficient detail that others can replicate them, a description of your proposed sample, a definition of exclusion criteria, a plan of your analysis (incl. pre-processing steps), and, if you want to do null hypothesis significance testing, a power analysis.

Further, you can withdraw your study at any point – however, when this happens after the in-principle acceptance, many journals will publish your work in a special section of the journal called “Withdrawn Reports”. The great thing is that a null result need not dishearten you – if you received an IPA, your study will still be published – and given that it was pre-registered and pre-peer reviewed, chances are high that others can build on your null result.

Lastly, you should note that you need not register your work with a journal – you can also register it on the Open Science Framework, for example. In this case, however, your work won’t be reviewed.

Are you as excited about Registered Reports as we are? Are you considering submitting your next project as a Registered Report? Check out our Submission guidelines for further info. Also, please do not hesitate to contact us in case you have any questions!

Suggested Reading

Chambers et al., (2013): Open letter to the Guardian

http://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration

Gelman & Loken (2013): Garden of forking paths

http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf

Replicability and Registered Reports

Last summer saw the publication of a monumental piece of work: the reproducibility project (Open Science Collaboration, 2015). In a huge community effort, over 250 researchers directly replicated 100 experiments initially conducted in 2008. Only 39% of the replications were significant at the 5% level. Average effect size estimates were halved. The study design itself—conducting direct replications on a large scale—as well as its outcome are game-changing to the way we view our discipline, but students might wonder: what game were we playing before, and how did we get here?

In this blog post, I provide a selective account of what has been dubbed the “reproducibility crisis”, discussing its potential causes and possible remedies. Concretely, I will argue that adopting Registered Reports, a new publishing format recently also implemented in JEPS (King et al., 2016; see also here), increases scientific rigor, transparency, and thus the replicability of research. Wherever possible, I have linked to additional resources and further reading, which should help you contextualize current developments within psychological science and the social and behavioral sciences more generally.

How did we get here?

In 2005, Ioannidis made an intriguing argument: because the prior probability of any hypothesis being true is low, because researchers continuously run low-powered experiments, and because the current publishing system is biased toward significant results, most published research findings are false. Within this context, spectacular fraud cases like Diederik Stapel (see here) and the publication of a curious paper about people “feeling the future” (Bem, 2011) made 2011 a “year of horrors” (Wagenmakers, 2012), and plunged psychology into a “crisis of confidence” (Pashler & Wagenmakers, 2012). As argued below, Stapel and Bem are emblematic of two highly interconnected problems of scientific research in general.
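To see why these three ingredients matter, here is a back-of-the-envelope sketch of the positive predictive value of a “significant” finding, in the spirit of Ioannidis (2005); the prior odds, power, and alpha below are illustrative assumptions, not estimates taken from the paper.

```python
# A back-of-the-envelope sketch of Ioannidis' (2005) argument.
# All numbers (prior odds, power, alpha) are illustrative assumptions.
def positive_predictive_value(prior_odds, power, alpha):
    """Probability that a 'significant' finding reflects a true effect."""
    true_positives = prior_odds * power
    false_positives = 1 * alpha   # per unit of false hypotheses tested
    return true_positives / (true_positives + false_positives)

# 1 in 10 tested hypotheses is true, typical power of 35%, alpha = .05
print(round(positive_predictive_value(prior_odds=0.1, power=0.35, alpha=0.05), 2))
# ~0.41: under these assumptions, most significant findings are false
```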

Publication bias

Stapel, who faked the results of more than 55 papers, is the reductio ad absurdum of the current “publish or perish” culture[1]. Still, the gold standard to merit publication, certainly in a high-impact journal, is p < .05, which results in publication bias (Sterling, 1959) and file drawers full of nonsignificant results (Rosenthal, 1979; see Lane et al., 2016, for a brave opening; and #BringOutYerNulls). This leads to a biased view of nature, distorting any conclusion we draw from the published literature. In combination with low-powered studies (Cohen, 1962; Button et al., 2013; Fraley & Vazire, 2014), effect size estimates are seriously inflated and can easily point in the wrong direction (Yarkoni, 2009; Gelman & Carlin, 2014). A curious consequence is what Lehrer has titled “the truth wears off” (Lehrer, 2010): initially high estimates of effect size attenuate over time, until nothing is left of them. Just recently, Kaplan and Irvin (2015) reported that the proportion of positive effects in large clinical trials shrank from 57% before 2000 to 8% after 2000. Even a powerful tool like meta-analysis cannot clear the view of a landscape filled with inflated and biased results (van Elk et al., 2015). For example, while meta-analyses concluded that there is a strong effect of ego depletion (Cohen’s d = .63), recent replications failed to find an effect (Lurquin et al., 2016; Sripada et al., in press)[2].
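A minimal simulation sketch (my own illustration, not taken from the papers cited above) shows how publishing only significant results inflates effect size estimates; the true effect (d = 0.2) and group size (n = 20) are assumptions chosen to mimic a typical underpowered study.

```python
# A minimal simulation of effect size inflation under publication bias.
# The true effect (d = 0.2) and n = 20 per group are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, published = 0.2, 20, []

for _ in range(10_000):
    control = rng.normal(0, 1, n)
    treatment = rng.normal(true_d, 1, n)
    t, p = stats.ttest_ind(treatment, control)
    if p < .05:                       # only 'significant' studies get published
        d = (treatment.mean() - control.mean()) / np.sqrt(
            (treatment.var(ddof=1) + control.var(ddof=1)) / 2)
        published.append(d)

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
# the 'published literature' overestimates the true effect considerably
```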

Garden of forking paths

In 2011, Daryl Bem reported nine experiments on people being able to “feel the future” in the Journal of Personality and Social Psychology, the flagship journal of its field (Bem, 2011). Eight of them yielded statistical significance, p < .05. We could dismissively say that extraordinary claims require extraordinary evidence, and try to sail away as quickly as possible from this research area, but Bem would be quick to steal our thunder.

A recent meta-analysis of 90 experiments on precognition yielded overwhelming evidence in favor of an effect (Bem et al., 2015). Alan Turing, discussing research on psi-related phenomena, famously stated that

“These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately, the statistical evidence, at least of telepathy, is overwhelming.” (Turing, 1950, p. 453; cf. Wagenmakers et al., 2015)

How is this possible? It’s simple: Not all evidence is created equal. Research on psi provides us with a mirror of “questionable research practices” (John, Loewenstein, & Prelec, 2012) and researchers’ degrees of freedom (Simmons, Nelson, & Simonsohn, 2011), obscuring the evidential value of individual experiments as well as whole research areas[3]. However, it would be foolish to dismiss this as being a unique property of obscure research areas like psi. The problem is much more subtle.

The main issue is that there is a one-to-many mapping from scientific to statistical hypotheses[4]. When doing research, there are many parameters one must set: for example, should observations be excluded? Which control variables should be measured? How should participants’ responses be coded? Which dependent variables should be analyzed? By varying only a small number of these, Simmons et al. (2011) found that the nominal false positive rate of 5% skyrocketed to over 60%. They conclude that the “increased flexibility allows researchers to present anything as significant.” These issues are exacerbated by insufficient methodological detail in research articles, by the low percentage of researchers who share their data (Wicherts et al., 2006; Wicherts, Bakker, & Molenaar, 2011), and in fields that require complicated preprocessing steps, such as neuroimaging (Carp, 2012; Cohen, 2016; Luck & Gaspelin, in press).
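The following small simulation is in the spirit of Simmons et al. (2011), though not their actual procedure; it illustrates how flexibility in just one choice—which dependent variable to report—already inflates the false positive rate. All numbers are illustrative assumptions.

```python
# A minimal sketch of analytic flexibility inflating false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n, alpha = 5_000, 40, .05
false_positives = 0

for _ in range(n_sims):
    group = rng.integers(0, 2, n)             # two conditions, no true effect
    dv1 = rng.normal(0, 1, n)                 # first dependent variable
    dv2 = rng.normal(0, 1, n)                 # second dependent variable
    candidates = [dv1, dv2, (dv1 + dv2) / 2]  # flexible choice of outcome
    ps = [stats.ttest_ind(dv[group == 0], dv[group == 1]).pvalue
          for dv in candidates]
    if min(ps) < alpha:                       # report whichever 'works'
        false_positives += 1

print(f"nominal alpha = {alpha}, actual rate = {false_positives / n_sims:.2f}")
# flexibility across just three candidate outcomes roughly doubles the rate
```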

An important amendment is that researchers need not be aware of this flexibility; a p value can be misleading even when there is no “p-hacking” and the hypothesis was posited ahead of time (i.e., was not changed after the fact—HARKing; Kerr, 1998). When decisions that are contingent on the data are made in an environment in which different data would lead to different decisions, even when these decisions “just make sense,” there is a hidden multiple comparison problem lurking (Gelman & Loken, 2014). Usually, when conducting N statistical tests, we correct for the number of tests in order to keep the false positive rate at, say, 5%. However, in the setting just described, it is not clear what N should be. Thus, results of statistical tests lose their meaning and carry little evidential value in such exploratory settings; they only do so in confirmatory settings (de Groot, 1954/2014; Wagenmakers et al., 2012). This distinction is at the heart of the problem, and it gets obscured because many results in the literature are reported as confirmatory when in fact they may very well be exploratory—most frequently, because of the way scientific reporting is currently done, there is no way for us to tell the difference.
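For contrast, this is what the standard correction looks like when N is known in advance (a sketch using statsmodels; the p values are made up). The point of the garden of forking paths is precisely that, for data-contingent decisions, no such N is available after the fact.

```python
# A sketch of correcting for N known comparisons (hypothetical p values).
from statsmodels.stats.multitest import multipletests

p_values = [.003, .04, .049, .20]   # made-up results of four planned tests
reject, p_adjusted, _, _ = multipletests(p_values, alpha=.05, method='bonferroni')
print(p_adjusted)  # [0.012 0.16 0.196 0.8]: only the first test survives
# With forking paths, the number of implicit comparisons is unknown,
# so no such correction can be applied.
```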

To get a feeling for the many choices possible in statistical analysis, consider a recent paper in which data analysis was crowdsourced to 29 teams (Silberzahn et al., submitted). The question posed to them was whether dark-skinned soccer players are red-carded more frequently. The estimated effect size across teams ranged from .83 to 2.93 (odds ratios). Nineteen different analysis strategies were used in total, with 21 unique combinations of covariates; 69% found a significant relationship, while 31% did not.

A reanalysis of Berkowitz et al. (2016) by Michael Frank (2016; blog here) is another, more subtle example. Berkowitz and colleagues report a randomized controlled trial, claiming that solving short numerical problems increases children’s math achievement across the school year. The intervention was well designed and well conducted, but still, Frank found that, as he put it, “the results differ by analytic strategy, suggesting the importance of preregistration.”

Frequently, the issue is with measurement. Malte Elson—whose twitter is highly germane to our topic—has created a daunting website that lists how researchers use the Competitive Reaction Time Task (CRTT), one of the most commonly used tools to measure aggressive behavior. It states that there are 120 publications using the CRTT, which in total analyze the data in 147 different ways!

This increased awareness of researchers’ degrees of freedom and the garden of forking paths is mostly a product of this century, although some authors raised these concerns much earlier (e.g., de Groot, 1954/2014; Meehl, 1985; see also Gelman’s comments here). The next point concerns an issue that is much older still (e.g., Berkson, 1938), but which nonetheless bears repeating.

Statistical inference

In psychology and much of the social and behavioral sciences in general, researchers overly rely on null hypothesis significance testing and p values to draw inferences from data. However, the statistical community has long known that p values overestimate the evidence against H0 (Berger & Delampady, 1987; Wagenmakers, 2007; Nuzzo, 2014). Just recently, the American Statistical Association released a statement drawing attention to this fact (Wasserstein & Lazar, 2016); that is, in addition to it being easy to obtain p < .05 (Simmons, Nelson, & Simonsohn, 2011), it is also quite a weak standard of evidence overall.
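One way to appreciate this, as a rough sketch rather than anything taken from the ASA statement itself, is the upper bound on the evidence a given p value can provide against H0, the −e·p·ln(p) bound of Sellke, Bayarri, and Berger (2001):

```python
# A small sketch of the Sellke, Bayarri, & Berger (2001) bound: the maximum
# Bayes factor against H0 that a given p value can justify (valid for p < 1/e).
import numpy as np

def max_bayes_factor_against_h0(p):
    return 1 / (-np.e * p * np.log(p))

for p in (.05, .01, .005):
    print(f"p = {p}: at most {max_bayes_factor_against_h0(p):.1f} : 1 against H0")
# p = .05 corresponds to at most ~2.5 : 1 odds against H0 -- weak evidence
```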

The last point is quite pertinent because the statement that 39% of replications in the reproducibility project were “successful” is misleading. A recent Bayesian reanalysis concluded that the original studies themselves found weak evidence in support of an effect (Etz & Vandekerckhove, 2016), reinforcing all points I have made so far.

Notwithstanding the above, p < .05 is still the gold standard in psychology, and is so for intricate historical reasons (cf. Gigerenzer, 1993). At JEPS, we certainly do not want to echo calls or actions to ban p values (Trafimow & Marks, 2015), but we urge students and their instructors to bring more nuance to their use (cf. Gigerenzer, 2004).

Procedures based on classical statistics provide different answers from what most researchers and students expect (Oakes, 1986; Haller & Krauss, 2002; Hoekstra et al., 2014). To be sure, p values have their place in model checking (e.g., Gelman, 2006—are the data consistent with the null hypothesis?), but they are poorly equipped to measure the relative evidence for H1 or H0 brought about by the data; for this, researchers need to use Bayesian inference (Wagenmakers et al., in press). Because university curricula often lag behind current developments, students reading this are encouraged to advance their methodological toolbox by browsing through Etz et al. (submitted) and playing with JASP[5].

Teaching the exciting history of statistics (cf. Gigerenzer et al., 1989; McGrayne, 2012), or at least contextualizing the developments of currently dominating statistical ideas, is a first step away from their cookbook oriented application.

Registered reports to the rescue

While we can only point to the latter, statistical issue, we can actually eradicate the issue of publication bias and the garden of forking paths by introducing a new publishing format called Registered Reports. This format was initially introduced to the journal Cortex by Chris Chambers (Chambers, 2013), and it is now offered by more than two dozen journals in the fields of psychology, neuroscience, psychiatry, and medicine (link). Recently, we have also introduced this publishing format at JEPS (see King et al., 2016).

Specifically, researchers submit a document including the introduction, theoretical motivation, experimental design, data preprocessing steps (e.g., outlier removal criteria), and the planned statistical analyses prior to data collection. Peer review only focuses on the merit of the proposed study and the adequacy of the statistical analyses[6]. If there is sufficient merit to the planned study, the authors are guaranteed in-principle acceptance (Nosek & Lakens, 2014). Upon receiving this acceptance, researchers subsequently carry out the experiment, and submit the final manuscript. Deviations from the first submission must be discussed, and additional statistical analyses are labeled exploratory.

In sum, by publishing regardless of the outcome of the statistical analysis, registered reports eliminate publication bias; by specifying the hypotheses and analysis plan beforehand, they make apparent the distinction between exploratory and confirmatory studies (de Groot 1954/2014), avoid the garden of forking paths (Gelman & Loken, 2014), and guard against post-hoc theorizing (Kerr, 1998).

Even though Registered Reports are commonly associated with high power (80–95%), this is often unfeasible for student research. Note, however, that a single study cannot be decisive in any case. Reporting sound, hypothesis-driven, non-cherry-picked research can be important fuel for future meta-analyses (for an example, see Scheibehenne, Jamil, & Wagenmakers, in press).

To avoid possible confusion, note that preregistration is different from Registered Reports: the former is the act of specifying the methodology before data collection, while the latter is a publishing format. You can preregister your study on several platforms, such as the Open Science Framework or AsPredicted. Registered Reports include preregistration but go further, offering additional benefits such as peer review prior to data collection and in-principle acceptance.

Conclusion

In sum, there are several issues impeding progress in psychological science, most pressingly the failure to distinguish between exploratory and confirmatory research, and publication bias. A new publishing format, Registered Reports, provides a powerful means to address them both, and, to borrow a phrase from Daniel Lakens, enable us to “sail away from the seas of chaos into a corridor of stability” (Lakens & Evers, 2014).

Suggested Readings

  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
  • Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632-638.
  • Gelman, A., & Loken, E. (2014). The Statistical Crisis in Science. American Scientist, 102(6), 460-465.
  • King, M., Dablander, F., Jakob, L., Agan, M., Huber, F., Haslbeck, J., & Brecht, K. (2016). Registered Reports for Student Research. Journal of European Psychology Students, 7(1), 20-23
  • Twitter (or you might miss out)

Footnotes

[1] Incidentally, Diederik Stapel published a book about his fraud. See here for more.

[2] Baumeister (2016) is a perfect example of how not to respond to such a result. Michael Inzlicht shows how to respond adequately here.

[3] For a discussion of these issues with respect to the precognition meta-analysis, see Lakens (2015) and Gelman (2014).

[4] Another related, crucial point is the lack of theory in psychology. However, as this depends on whether you read the Journal of Mathematical Psychology or, say, Psychological Science, it is not addressed further. For more on this point, see for example Meehl (1978), Gigerenzer (1998), and a class by Paul Meehl which has been kindly converted to mp3 by Uri Simonsohn.

[5] However, it would be premature to put too much blame on p. More pressingly, the misunderstandings and misuse of this little fellow point towards a catastrophic failure in undergraduate teaching of statistics and methods classes (for the latter, see Richard Morey’s recent blog post). Statistics classes in psychology are often boringly cookbook oriented, and so students just learn the cookbook. If you are an instructor, I urge you to have a look at “Statistical Rethinking” by Richard McElreath. In general, however, statistics is hard, and there are many issues transcending the frequentist versus Bayesian debate (for examples, see Judd, Westfall, and Kenny, 2012; Westfall & Yarkoni, 2016).

[6] Note that JEPS already publishes research regardless of whether p < .05. However, this does not discourage us from drawing attention to this benefit of Registered Reports, especially because most other journals have a different policy.

This post was edited by Altan Orhon.

Meet the Authors

Do you wish to publish your work but don’t know how to get started? We asked some of our student authors, Janne Hellerup Nielsen, Dimitar Karadzhov, and Noelle Sammon, to share their experience of getting published.

Janne Hellerup Nielsen is a psychology graduate from Copenhagen University. Currently, she works in the field of selection and recruitment within the Danish Defence. She is the first author of the research article “Posttraumatic Stress Disorder among Danish Soldiers 2.5 Years after Military Deployment in Afghanistan: The Role of Personality Traits as Predisposing Risk Factors”. Prior to this publication, she had no experience with publishing or peer review but she decided to submit her research to JEPS because “it is a peer reviewed journal and the staff at JEPS are very helpful, which was a great help during the editing and publishing process.”

Dimitar Karadzhov moved to Glasgow, United Kingdom to study psychology (bachelor of science) at the University of Glasgow. He completed his undergraduate degree in 2014 and he is currently completing a part-time master of science in global mental health at the University of Glasgow. He is the author of “Assessing Resilience in War-Affected Children and Adolescents: A Critical Review”. Prior to this publication, he had no experience with publishing or peer review. Now having gone through the publication process, he recommends that fellow students submit their work because “it is a great research and networking experience.”

Noelle Sammon has an honors degree in business studies. She returned to university in 2010 and completed a higher diploma in psychology at the National University of Ireland, Galway. She is currently completing a master’s degree in applied psychology at the University of Ulster, Northern Ireland. She plans to pursue a career in clinical psychology. She is the first author of the research article “The Impact of Attention on Eyewitness Identification and Change Blindness”. Noelle had some experience with the publication process while previously working as a research assistant. She describes her experience with JEPS as follows: “[It was] very professional and a nice introduction to publishing research. I found the editors that I was in contact with to be really helpful in offering guidance and support. Overall, the publication process took approximately 10 months from start to finish but having had the opportunity to experience this process, I would encourage other students to publish their research.”

How did the research you published come about?

Janne: “During my psychology studies, I had an internship at a research center in the Danish Defence. Here I was a part of a big prospective study regarding deployed soldiers and their psychological well-being after homecoming. I was so lucky to get to use the data from the research project to conduct my own studies regarding personality traits and the development of PTSD. I’ve always been interested in differential psychology—for example, why people manage the same traumatic experiences differently. Therefore, it was a great opportunity to do research within the field of personality traits and the development of PTSD, and even to do so with some greatly experienced supervisors, Annie and Søren.”

Dimitar: “In my final year of the bachelor of science degree in psychology, I undertook a critical review module. My assigned supervisor was liberal enough and gave me complete freedom to choose the topic I would like to write about. I then browsed a few The Psychologist editions I had for inspiration and was particularly interested in the area of resilience from a social justice perspective. Resilience is a controversial and fluid concept, and it is key to recovery from traumatic events such as natural disasters, personal trauma, war, terrorism, etc. It originates from biomedical sciences and it was fascinating to explore how such a concept had been adopted and researched by the social and humanitarian sciences. I was intrigued to research the similarities between biological resilience of human and non-human animals and psychological resilience in the face of extremely traumatic experiences such as war. To add an extra layer of complexity, I was fascinated by how the most vulnerable of all, children and adolescents, conceptualize, build, maintain, and experience resilience. From a researcher’s perspective, one of the biggest challenges is to devise and apply methods of inquiry in order to investigate the concept of resilience in the most valid, reliable, and culturally appropriate manner. The quantitative–qualitative dyad was a useful organizing framework for my work and it was interesting to see how it would fit within the resilience discourse.”

Noelle: “The research piece was my thesis project for the higher diploma (HDIP). I have always had an interest in forensic psychology. Moreover, while attending the National University of Ireland, Galway as part of my HDIP, I studied forensic psychology. This got me really interested in eyewitness testimony and the overwhelming amount of research highlighting the problematic reliability with it.”

What did you enjoy most in your research and what did you find difficult?

Janne: “There is a lot of editing and so forth when you publish your research, but then again it really makes sense because you have to be able to communicate the results of your research out to the public. To me, that is one of the main purposes of research: to be able to share the knowledge that comes out of it.”

Dimitar: “[I enjoyed] my familiarization with conflicting models of resilience (including biological models), with the origins and evolution of the concept, and with the qualitative framework for investigation of coping mechanisms in vulnerable, deprived populations. In the research process, the most difficult part was creating a coherent piece of work that was very informative and also interesting and readable, and relevant to current affairs and sociopolitical processes in low- and middle-income countries. In the publication process, the most difficult bit was ensuring my work adhered to the publication standards of the journal and addressing the feedback provided at each stage of the review process within the time scale requested.”

Noelle: “I enjoyed developing the methodology to test the research hypothesis and then getting the opportunity to test it. [What I found difficult was] ensuring the methodology would manipulate the variables required.”

How did you overcome these difficulties?

Janne: “[By] staying focused on the goal of publishing my research.”

Dimitar: “With persistence, motivation, belief, and a love for science! And, of course, with the fantastic support from the JEPS publication staff.”

Noelle: “I conducted a pilot using a sample of students asking them to identify any problems with materials or methodology that may need to be altered.”

What did you find helpful when you were doing your research and writing your paper?

Janne: “It was very important for me to get competent feedback from experienced supervisors.”

Dimitar: “Particularly helpful was reading systematic reviews, meta-analyses, conceptual papers, and methodological critique.”

Noelle: “I found my supervisor to be very helpful when conducting my research. In relation to the write-up of the paper, I found that having peers and non-psychology friends read and review my paper helped ensure that it was understandable, especially for lay people.”

Finally, here are some words of wisdom from our authors.

Janne: “Don’t think you can’t do it. It requires some hard work, but the effort is worth it when you see your research published in a journal.”

Dimitar: “Choose a topic you are truly passionate about and be prepared to explore the problem from multiple perspectives, and don’t forget about the ethical dimension of every scientific inquiry. Do not be afraid to share your work with others, look for feedback, and be ready to receive feedback constructively.”

Noelle: “When conducting research it is important to pick an area of research that you are interested in and really refine the research question being asked. Also, if you are able to get a colleague or peer to review it for you, do so.”

We hope our authors have inspired you to go ahead and make that first step towards publishing your research. We welcome your submissions anytime! Our publication guidelines can be viewed here. We also prepared a manual for authors that we hope will make your life easier. If you do have questions, feel free to get in touch at journal@efpsa.org.

This post was edited by Altan Orhon.

The Mind-the-Mind Campaign: Battling the Stigma of Mental Disorders

People suffering from mental disorders face great difficulties in their daily lives and deserve all possible support from their social environment. However, their social milieus are often host to stigmatizing behaviors that actually serve to increase the severity of their mental disorders: People diagnosed with a mental disorder are often believed to be dangerous and excluded from social activities. Individuals who receive treatment are seen as being “taken care of” and social support is attenuated. Concerned friends, with all their best intentions, might show apprehensiveness when it comes to approaching someone with a diagnosis, and end up doing nothing (Corrigan & Watson, 2002). These examples are not of exceptional, sporadic situations—according to the World Health Organisation, nine out of ten people with a diagnosis report suffering from stigmatisation (WHO, 2016).

Structural equation modeling: What is it, what does it have in common with hippie music, and why does it eat cake to get rid of measurement error?

Do you want a statistics tool that is powerful; easy to learn; allows you to model complex data structures; combines the t-test, analysis of variance, and multiple regression; and puts even more on top? Here it is! Statistics courses in psychology today often cover structural equation modeling (SEM), a statistical tool that allows one to go beyond classical statistical models by combining them and adding more. Let’s explore what this means, what SEM really is, and SEM’s surprising parallels with the hippie culture!

Editors’ Pick: Our Favourite Psychology and Neuroscience Podcasts


As students of psychology, we are accustomed to poring through journal articles and course-approved textbooks to stay up-to-date on the latest developments in the field. While these resources are the cornerstones of scientific research, there are myriad other ways to enhance our understanding of our chosen disciplines – namely through podcasts!

Introducing JASP: A free and intuitive statistics software that might finally replace SPSS

Are you tired of SPSS’s confusing menus and of the ugly tables it generates? Are you annoyed by having statistical software only at university computers? Would you like to use advanced techniques such as Bayesian statistics, but you lack the time to learn a programming language (like R or Python) because you prefer to focus on your research?

While there was no real solution to this problem for a long time, there is now good news for you! A group of researchers at the University of Amsterdam are developing JASP, a free open-source statistics package that includes both standard and more advanced techniques and puts major emphasis on providing an intuitive user interface.

The current version already supports a large array of analyses, including the ones typically used by researchers in the field of psychology (e.g. ANOVA, t-tests, multiple regression).

In addition to being open source, freely available for all platforms, and providing a considerable number of analyses, JASP also comes with several neat, distinctive features, such as real-time computation and display of all results. For example, if you decide that you want not only the mean but also the median in the table, you can tick “Median” to have the medians appear immediately in the results table. For comparison, think about how this works in SPSS: first, you must navigate a forest of menus (or edit the syntax); then you execute the new syntax. A new window appears and you get a new (ugly) table.

[Screenshot of JASP]

In JASP, you get better-looking tables in no time. Click here to see a short demonstration of this feature. But it gets even better—the tables are already in APA format and you can copy and paste them into Word. Sounds too good to be true, doesn’t it? It does, but it works!

Interview with lead developer Jonathon Love

Where is this software project coming from? Who pays for all of this? And what plans are there for the future? There is nobody who could answer these questions better than the lead developer of JASP, Jonathon Love, who was so kind as to answer a few questions about JASP.
[Photo of Jonathon Love]

How did development on JASP start? How did you get involved in the project?

All through my undergraduate program, we used SPSS, and it struck me just how suboptimal it was. As a software designer, I find poorly designed software somewhat distressing to use, and so SPSS was something of a thorn in my mind for four years. I was always thinking things like, “Oh, what? I have to completely re-run the analysis, because I forgot X?,” “Why can’t I just click on the output to see what options were used?,” “Why do I have to read this awful syntax?,” or “Why have they done this like this? Surely they should do this like that!”

At the same time, I was working for Andrew Heathcote, writing software for analyzing response time data. We were using the R programming language and so I was exposed to this vast trove of statistical packages that R provides. On one hand, as a programmer, I was excited to gain access to all these statistical techniques. On the other hand, as someone who wants to empower as many people as possible, I was disappointed by the difficulty of using R and by the very limited options to provide a good user interface with it.

So I saw that there was a real need for both of these things—software providing an attractive, free, and open statistics package to replace SPSS, and a platform for methodologists to publish their analyses with rich, accessible user interfaces. However, the project was far too ambitious to consider without funding, and so I couldn’t see any way to do it.

Then I met E.J. Wagenmakers, who had just received a European Research Council grant to develop an SPSS-like software package to provide Bayesian methods, and he offered me the position to develop it. I didn’t know a lot about Bayesian methods at the time, but I did see that our goals had a lot of overlap.

So I said, “Of course, we would have to implement classical statistics as well,” and E.J.’s immediate response was, “Nooooooooooo!” But he quickly saw how significant this would be. If we can liberate the underlying platform that scientists use, then scientists (including ourselves) can provide whatever analyses we like.

And so that was how the JASP project was born, and how the three goals came together:

  • to provide a liberated (free and open) alternative to SPSS
  • to provide Bayesian analyses in an accessible way
  • to provide a universal platform for publishing analyses with accessible user interfaces


What are the biggest challenges for you as a lead developer of JASP?

Remaining focused. There are hundreds of goals, and hundreds of features that we want to implement, but we must prioritize ruthlessly. When will we implement factor analysis? When will we finish the SEM module? When will data entry, editing, and restructuring arrive? Outlier exclusion? Computing of variables? These are all such excellent, necessary features; it can be really hard to decide what should come next. Sometimes it can feel a bit overwhelming too. There’s so much to do! I have to keep reminding myself how much progress we’re making.

Maintaining a consistent user experience is a big deal too. The JASP team is really large. To give you an idea, in addition to myself there’s:

  • Ravi Selker, developing the frequentist analyses
  • Maarten Marsman, developing the Bayesian ANOVAs and Bayesian linear regression
  • Tahira Jamil, developing the classical and Bayesian contingency tables
  • Damian Dropmann, developing the file save, load functionality, and the annotation system
  • Alexander Ly, developing the Bayesian correlation
  • Quentin Gronau, developing the Bayesian plots and the classical linear regression
  • Dora Matzke, developing the help system
  • Patrick Knight, developing the SPSS importer
  • Eric-Jan Wagenmakers, coming up with new Bayesian techniques and visualizations

With such a large team, developing the software and all the analyses in a consistent and coherent way can be really challenging. It’s so easy for analyses to end up a mess of features, and for every subsequent analysis we add to look nothing like the last. Of course, providing as elegant and consistent a user experience as possible is one of our highest priorities, so we put a lot of effort into this.


How do you imagine JASP five years from now?

JASP will provide the same, silky, sexy user experience that it does now. However, by then it will have full data entering, editing, cleaning, and restructuring facilities. It will provide all the common analyses used through undergraduate and postgraduate psychology programs. It will provide comprehensive help documentation, an abundance of examples, and a number of online courses. There will be textbooks available. It will have a growing community of methodologists publishing the analyses they are developing as additional JASP modules, and applied researchers will have access to the latest cutting-edge analyses in a way that they can understand and master. More students will like statistics than ever before.


How can JASP stay up to date with state-of-the-art statistical methods? Even when borrowing implementations written in R and the like, these always have to be implemented by you in JASP. Is there a solution to this problem?

Well, if SPSS has taught us anything, you really don’t need to stay up to date to be a successful statistical product, ha-ha! The plan is to provide tools for methodologists to write add-on modules for JASP—tools for creating user interfaces and tools to connect these user interfaces to their underlying analyses. Once an add-on module is developed, it can appear in a directory, or a sort of “App Store,” and people will be able to rate the software for different things: stability, user-friendliness, attractiveness of output, and so forth. In this way, we hope to incentivize a good user experience as much as possible.

Some people think this will never work—that methodologists will never put in all that effort to create nice, useable software (because it does take substantial effort). But I think that once methodologists grasp the importance of making their work accessible to as wide an audience as possible, it will become a priority for them. For example, consider the following scenario: Alice provides a certain analysis with a nice user interface. Bob develops an analysis that is much better than Alice’s analysis, but everyone uses Alice’s, because hers is so easy and convenient to use. Bob is upset because everyone uses Alice’s instead of his. Bob then realizes that he has to provide a nice, accessible user experience for people to use his analysis.

I hope that we can create an arms race in which methodologists will strive to provide as good a user experience as possible. If you develop a new method and nobody can use it, have you really developed a new method? Of course, this sort of add-on facility isn’t ready yet, but I don’t think it will be too far away.


You mention on your website that many more methods will be included, such as structural equation modeling (SEM) or tools for data manipulation. How can you both offer a large amount of features without cluttering the user interface in the future?

Currently, JASP uses a ribbon arrangement; we have a “File” tab for file operations, and we have a “Common” tab that provides common analyses. As we add more analyses (and as other people begin providing additional modules), these will be provided as additional tabs. The user will be able to toggle on or off which tabs they are interested in. You can see this in the current version of JASP: we have a proof-of-concept SEM module that you can toggle on or off on the options page. JASP thus provides you only with what you actually need, and the user interface can be kept as simple as you like.


Students who are considering switching to JASP might want to know whether the future of JASP development is secured or dependent on getting new grants. What can you tell us about this?

JASP is currently funded by a European Research Council (ERC) grant, and we’ve also received some support from the Centre for Open Science. Additionally, the University of Amsterdam has committed to providing us a software developer on an ongoing basis, and we’ve just run our first annual Bayesian Statistics in JASP workshop. The money we charge for these workshops is plowed straight back into JASP’s development.

We’re also developing a number of additional strategies to increase the funding that the JASP project receives. Firstly, we’re planning to provide technical support to universities and businesses that make use of JASP, for a fee. Additionally, we’re thinking of simply asking universities to contribute the cost of a single SPSS license to the JASP project. It would represent an excellent investment; it would allow us to accelerate development, achieve feature parity with SPSS sooner, and allow universities to abandon SPSS and its costs sooner. So I don’t worry about securing JASP’s future, I’m thinking about how we can expand JASP’s future.

Of course, all of this depends on people actually using JASP, and that will come down to the extent that the scientific community decides to use and get behind the JASP project. Indeed, the easiest way that people can support the JASP project is by simply using and citing it. The more users and the more citations we have, the easier it is for us to obtain funding.

Having said all that, I’m less worried about JASP’s future development than I’m worried about SPSS’s! There’s almost no evidence that any development work is being done on it at all! Perhaps we should pass the hat around for IBM.


What is the best way to get started with JASP? Are there tutorials and reproducible examples?

For classical statistics, if you’ve used SPSS, or if you have a book on statistics in SPSS, I don’t think you’ll have any difficulty using JASP. It’s designed to be familiar to users of SPSS, and our experience is that most people have no difficulty moving from SPSS to JASP. We also have a video on our website that demonstrates some basic analyses, and we’re planning to create a whole series of these.

As for the Bayesian statistics, that’s a little more challenging. Most of our effort has been going into getting the software ready, so we don’t have as many resources for learning Bayesian statistics ready as we would like. This is something we’ll be looking at addressing in the next six to twelve months. E.J. has at least one (maybe three) books planned.

That said, there are a number of resources available now, such as:

  • Alexander Etz’s blog
  • E.J.’s website provides a number of papers on Bayesian statistics (his website also serves as a reminder of what the internet looked like in the ’80s)
  • Zoltan Dienes’ book is great for Bayesian statistics as well

However, the best way to learn Bayesian statistics is to come to one of our Bayesian Statistics with JASP workshops. We’ve run two so far and they’ve been very well received. Some people have been reluctant to attend—because JASP is so easy to use, they didn’t see the point of coming and learning it. Of course, that’s the whole point! JASP is so easy to use, you don’t need to learn the software, and you can completely concentrate on learning the Bayesian concepts. So keep an eye out on the JASP website for the next workshop. Bayes is only going to get more important in the future. Don’t be left behind!