The ‘science’ in scientific peer-reviewed journals

The editors of the Journal of Personality and Social Psychology created quite a buzz when word got out that they were planning to publish a paper on extrasensory perception (ESP) in an upcoming issue of their journal. Daryl J. Bem, an emeritus professor at Cornell University, is the author of this controversial paper, which provoked reactions in mainstream media (e.g. an article in the New York Times) as well as in academia, with prompt criticism of the paper by Wagenmakers, Wetzels, Borsboom, and van der Maas (submitted) and Rouder and Morey (submitted). What are the implications of a distinctly parapsychology-focused research paper being published in a mainstream psychology journal? Is this a failure of the review process or proof that current scientific review is truly unbiased?

The previously mentioned critiques by Wagenmakers et al. (submitted) and Rouder and Morey (submitted) focus primarily on the statistical flaws in Bem's paper. They argue that the statistical analyses Bem used wrongly lead to the conclusion that ESP exists. Wagenmakers et al. (submitted) take this critique even further:

Instead of revising our beliefs regarding psi, Bem’s research should instead cause us to revise our beliefs on methodology: the field of psychology currently uses methodological and statistical strategies that are too weak, too malleable, and offer far too many opportunities for researchers to befuddle themselves and their peers. (p. 2)

Seen this way, a controversy that at a cursory glance seems limited to the world of scientific publishing may be indicative of a broader crisis in the field of psychological research as a whole. If the paper's topic were not as controversial as ESP, would it have provoked such criticism of its statistics and methodology at all? Considering that it passed the review process at the Journal of Personality and Social Psychology, the statistical methods used, and the conclusions based on them, were apparently acceptable by the journal's standards.

I am not trying to imply that the paper should have been rejected outright because of its topic, only that as many of its weak points and problematic conclusions as possible should have been addressed during review, and that the necessary clarifications should have been included in the original article. This process of noting the 'weak points' and addressing their implications for the interpretation of the results is the product of a good review; that way, they do not end up as misleading conclusions and provocative headlines in the media. Should this criticism of the review of Bem's article be directed at the reviewers at the Journal of Personality and Social Psychology? Not so much. It should be aimed at the general reviewing practice in psychology, which, if we agree with Wagenmakers et al. (submitted), is quite lax.

On this matter, the policy of JEPS on questionable methodology in the manuscripts we receive comes to mind. If a manuscript sent to JEPS presents a brilliant or provocative idea but not-so-good methodology, we would still prefer, under certain circumstances, to publish it. Does this mean JEPS has low standards regarding the methodology of the manuscripts we receive? No. This approach maximizes the amount of cutting-edge, heuristically valuable research that, were we as editors too rigid, would never get published. At the same time, the author is expected to acknowledge all the flaws and problems of the methodology she or he used. If the author fails to do so, the reviewers must ask for this self-criticism of the methodology to be included in the manuscript before it is published. For students, this level of self-criticism is expected; after all, we are just entering the world of science and scientific writing. But even after a student gains a few years of experience and becomes a mature researcher, this practice should not stop. On the contrary, it should become even more rigorous, in keeping with the researcher's higher standards.

In a way, Bem's Feeling the Future could be the Sokal affair of psychology, on a much smaller scale, considering we have no reason to suspect it was purposefully submitted to discredit the journal in question. It raises questions about the methodological and statistical rigor of psychological journals and their reviewers, akin to the way Sokal's article raised questions about the scholarly value of postmodernist thought. To reiterate: the problem with this paper being published in a renowned psychology journal is not that parapsychological topics are getting the spotlight from 'serious' researchers, but that questionable methodology and statistics are apparently widely accepted in more mainstream (one might say, traditional) psychological research. And the Journal of Personality and Social Psychology is as mainstream as it gets in scientific publishing in psychology, taking into account its current impact factor of 4.732.

Wagenmakers et al. (submitted) voice a similar concern in the conclusion of their article, with which I wholeheartedly agree:

It would therefore be mistaken to interpret our assessment of the Bem experiments as an attack on research of unlikely phenomena; instead, our assessment suggests that something is deeply wrong with the way experimental psychologists design their studies and report their statistical results. It is a disturbing thought that many experimental findings, proudly and confidently reported in the literature as real, might in fact be based on statistical tests that are explorative and biased (see also Ioannidis, 2005). We hope the Bem article will become a signpost for change, a writing on the wall: psychologists must change the way they analyze their data. (p. 12)
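To make the charge of 'explorative and biased' testing concrete, here is a small simulation, not taken from either paper but a hypothetical illustration of one malleable practice Wagenmakers et al. criticize: optional stopping, i.e. peeking at the data and testing repeatedly as observations accumulate, stopping as soon as p < .05. Even when the null hypothesis is true, this inflates the false-positive rate well beyond the nominal 5%.

```python
import math
import random

def p_value(sample):
    """Two-sided z-test p-value against a true mean of 0, known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Normal CDF via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run(n_experiments=2000, looks=range(10, 101, 10), alpha=0.05, seed=1):
    """Simulate experiments under the null (no effect exists).

    Returns (false-positive rate with a single fixed-N test,
             false-positive rate with optional stopping at each 'look')."""
    random.seed(seed)
    fixed_hits = peeking_hits = 0
    for _ in range(n_experiments):
        data = [random.gauss(0, 1) for _ in range(max(looks))]
        # Honest analysis: one test at the final, pre-planned sample size.
        if p_value(data) < alpha:
            fixed_hits += 1
        # Optional stopping: test at every interim look, stop on 'success'.
        if any(p_value(data[:n]) < alpha for n in looks):
            peeking_hits += 1
    return fixed_hits / n_experiments, peeking_hits / n_experiments
```

Running `run()` yields a false-positive rate near the nominal 5% for the fixed-N analysis, but roughly three to four times that under optional stopping, despite every individual test being a perfectly standard significance test. This is the sense in which ordinary-looking statistical practice can be 'too malleable'.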

I might add that the boards of reviewers at peer-reviewed journals must be the catalysts of this change, considering they are the filter standing between unpublished work and what becomes cited literature. And so, of course, must future researchers, who will hopefully always keep the critical approach to their methodology that they practiced in their student days.


Bem, D. J. (in press). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, 696–701.

Rouder, J. N., & Morey, R. D. (submitted). An assessment of the evidence for feeling the future with a discussion of Bayes factor and significance testing.

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & van der Maas, H. (submitted). Why psychologists must change the way they analyze their data: The case of psi.



Ivan Flis is a graduate student of psychology at the Center for Croatian Studies at the University of Zagreb, Croatia. He is the Editor-in-Chief of the Journal of European Psychology Students (JEPS) and the Chair of the Right to Research Coalition Coordinating Committee for Africa, Europe and Middle East.
