Some of you might have asked yourselves this question a couple of times when checking out the literature of a specific field. Imagine the following situation: You have completed your research and now you want to compare your results to research done previously. You have finally found a suitable article, have the necessary effect sizes and power estimates, and can start comparing. But wait: Who actually tells you that the reported results are correct? Would you even notice if the results had been influenced by factors that are sometimes not visible to the authors themselves? The probability of you detecting them is tiny, especially as you only have limited information about how the study was conducted and which elements were left out. It is hard to digest, but many research findings, even those reported in high-quality journals, are incorrect. Try to imagine the impact this situation has on your education, your own research, and research in general.
In the following post I want to discuss several reasons why and how research results can be false, and outline some ways the situation might be improved. The main points are drawn from Ioannidis's essay (2005).
You have invested countless hours completing your research, and have even been lucky enough (!) to have your results confirm the hypotheses derived from an extensive literature search. Beyond simply being happy about this, the next step to consider would be publication in a journal, the higher the impact factor the better. The hardworking (and fortunate) even complete this arduous step and can rejoice in receiving considerable attention from other scholars in the field. Passionate researchers know what I am talking about. But what about all those researchers who fail to confirm their hypotheses? Do they receive as much attention? The answer to this rhetorical question must presumably be: NO.
Results that are perceived as not advancing our scientifically constructed knowledge might never reach public attention. This tendency to report almost exclusively positive findings in scientific journals is a common point of contention, especially in the social sciences.
“The literature of social sciences contains horror stories of journal editors and others who consider a study worthwhile only if it reaches a statistically significant, positive conclusion; that is, an equally significant rejection of a hypothesis is not considered worthwhile” (Scargle, 2000).
This is a footnote in an article by Jeffrey D. Scargle, an astrophysicist working for NASA, about publication bias in scientific journals. Usually, the psychologist in me would go all defensive about our precious little social science, but then one discovers this: a couple of researchers trying to publish a paper debunking Bem's research on ESP (in layman's terms, perceiving the future) — more precisely, their woes while trying to publish a paper with nonsignificant results. How many papers have you read that report nonsignificant results, that fail to reject the null hypothesis? I have a feeling your answer is the same as mine, and it is frighteningly close to zero. What happens to those papers? And what are the implications of such a publishing bias for science at large?
The editors of the Journal of Personality and Social Psychology created quite a buzz when word got out that they were planning to publish a paper on extrasensory perception (ESP) in an issue of their journal. Daryl J. Bem, an emeritus professor at Cornell University, is the author of this controversial paper, which provoked reactions in the mainstream media (e.g. an article in the New York Times) and also in academia, with prompt criticism of the paper by Wagenmakers, Wetzels, Borsboom and van der Maas (submitted) and Rouder and Morey (submitted). What are the implications of a distinctly parapsychology-focused research paper being published in a mainstream psychology journal? Is this a failure of the review process, or proof that current scientific review is truly unbiased?