You have invested countless hours completing your research, and have even been lucky enough (!) to see your results confirm the hypotheses you derived from an extensive literature search. Beyond simply being happy, the next step to consider is publication in a journal, the higher the impact factor the better. The hardworking (and fortunate) even complete this gruelling step and can rejoice in receiving considerable attention from other scholars in the field. Passionate researchers know what I am talking about. But what about all those researchers who fail to confirm their hypotheses? Do they receive as much attention? The answer to this rhetorical question must presumably be: NO.
Results that are perceived as not advancing our scientifically constructed knowledge might never reach public attention. This tendency to report almost exclusively positive findings in scientific journals is a common point of contention, especially in the social sciences.
Psychology and psychiatry, according to work by Fanelli (2010), are the worst offenders: they are five times more likely to report a positive result than the space sciences, which sit at the other end of the spectrum. And the situation is not improving. In 1959, statistician Theodore Sterling found that 97% of the studies in four major psychology journals had reported statistically significant positive results. Later follow-up studies only confirmed this.
Psychology and other sciences are fighting a common phenomenon, one that has gained traction in past decades as more and more people in academia fervently pursue positive findings to ensure their own survival in the academic world. Publication bias, seen in this light, is not getting better with the expansion of the field. It’s getting worse.
Important biases and artifacts
It seems that the scientific world loads high on the sensation-seeking dimension: studies that present revolutionary scientific evidence receive far more attention than those confirming what every non-psychologist could have deduced. The most frowned-upon approach, in this picture, is the replication of previous findings. Researchers argue that replicating studies is too expensive and not rewarding enough.
All this puts the burden of proving what is already ‘proven’ on those who try to replicate studies, and they indeed face a tough slog. Consider the aftermath of Bem’s notorious paper. When the three groups who failed to reproduce the word-recall results combined their results and submitted them for publication, the Journal of Personality and Social Psychology (JPSP), Science and Psychological Science all said that they do not publish straight replications. The British Journal of Psychology sent the paper out for peer review, but rejected it; Bem was one of the peer reviewers on the paper. The beleaguered paper eventually found a home at PLoS ONE, a journal that publishes all “technically sound” papers, regardless of novelty.
In January, Hal Pashler, a psychologist from the University of California, San Diego, and his colleagues created a website called PsychFileDrawer where psychologists can submit unpublished replication attempts, whether successful or not. The site has been warmly received but has only nine entries so far. There are few incentives to submit: any submission opens up scientists to criticisms from colleagues and does little to help their publication record.
Coming back to the main problem, there are two constructs that need to be defined and characterized: the file drawer effect and publication bias.
1. The file drawer effect
The file drawer effect refers to the fact that many studies in a given area of research may be conducted but never reported, and those that go unreported may on average show different results from those that are published. An extreme scenario is that a given null hypothesis of interest is in fact true, i.e. the association being studied does not exist, but the 5% of studies that by chance show a statistically significant result are published, while the remaining 95%, in which the null hypothesis was not rejected, languish in researchers’ file drawers. Even a small number of studies lost “in the file drawer” can result in a significant bias (Scargle, 2000).
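This extreme scenario is easy to simulate. The sketch below (a hypothetical Python example; the study count and sample size are arbitrary choices) runs many studies under a true null hypothesis and counts how many reach p < .05 by chance alone:

```python
import math
import random

random.seed(42)

def two_sided_p(sample):
    """Two-sided p-value of a one-sample z-test for H0: mean = 0,
    assuming the data have a known standard deviation of 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2.0))

n_studies, n_obs = 10_000, 30
# Every study samples from a null distribution: the true effect is zero.
significant = sum(
    two_sided_p([random.gauss(0.0, 1.0) for _ in range(n_obs)]) < 0.05
    for _ in range(n_studies)
)
print(f"'Publishable' studies: {significant / n_studies:.1%}")  # close to 5%
```

Roughly one study in twenty looks publishable even though nothing is there; if only those reach the journals, the other ~95% end up in the drawer.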
2. The publication bias, briefly summarized, is the tendency to publish only those studies in which the authors find support for their hypotheses.
One idea for overcoming the artefact of publishing only significant studies is the statistical method of meta-analysis, which, briefly described, accumulates the findings of a great variety of studies addressing a particular research question in order to estimate an overall effect. Although it is not a new method, it seems to be the powerful future of research in the social sciences.
Nevertheless, even this method is not a perfect way to avoid biases, since it builds upon literature already shaped by the file drawer effect and publication bias. If the set of studies entering a meta-analysis over-represents significant results, or results pointing in a positive direction, the data set is not representative, and a standard meta-analysis model will yield a conclusion biased towards significance and positivity. This problem of meta-analyses based solely on the biased portion of the literature is a crucial one. To break this vicious circle to some extent, a few statistical methods try to estimate, and consequently reduce, the influence of positively biased results:
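A quick simulation shows how severe this can get. In the hypothetical Python sketch below the true effect is exactly zero, yet a meta-analysis that pools only the significant, positive studies arrives at a comfortably “real” effect:

```python
import math
import random

random.seed(0)

n_studies, n_obs = 2_000, 30
crit = 1.96 / math.sqrt(n_obs)  # |study mean| needed for p < .05 (known sd = 1)

all_effects, published = [], []
for _ in range(n_studies):
    effect = sum(random.gauss(0.0, 1.0) for _ in range(n_obs)) / n_obs
    all_effects.append(effect)
    if effect > crit:  # only significant, positive results get published
        published.append(effect)

pooled_all = sum(all_effects) / len(all_effects)
pooled_pub = sum(published) / len(published)
print(f"pooled effect, all studies:    {pooled_all:+.3f}")  # near zero
print(f"pooled effect, published only: {pooled_pub:+.3f}")  # clearly positive
```

The unbiased pool correctly hovers around zero, while the “published-only” meta-analysis reports a substantial positive effect that does not exist.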
1. The funnel plot
A funnel plot is a useful graph designed to check for the existence of publication bias; it is mainly used in meta-analyses and systematic reviews. The plot assumes that the largest studies will lie near the average effect (indicated by the vertical line), while small studies will be spread on both sides of the average. Deviation from this pattern can indicate publication bias.
Fig. 1. The funnel plot on the left shows no publication bias; the one on the right is biased (Scherer, 2012).
The graph on the left indicates an even distribution of positive and negative studies, whereas the right funnel plot shows a lack of studies with small sample sizes and negative effects. This asymmetry indicates the existence of a publication bias. Employing this simple method to investigate the distribution of study results can help identify studies that still might need to be published (if they are already lying in somebody’s drawer) or that need to be further investigated.
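The asymmetry in the right-hand plot also has a numeric signature: when studies are only published if significant, published small studies overestimate the true effect far more than published large studies do. A hypothetical simulation sketch (all parameters are made-up choices):

```python
import math
import random

random.seed(7)
TRUE_EFFECT = 0.3  # assumed true standardized effect

def published_mean(n_obs, n_studies=3_000):
    """Mean published effect when only one-sided significant studies survive."""
    crit = 1.645 / math.sqrt(n_obs)  # significance threshold for the study mean
    effects = []
    for _ in range(n_studies):
        mean = sum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n_obs)) / n_obs
        if mean > crit:  # the publication filter
            effects.append(mean)
    return sum(effects) / len(effects)

small, large = published_mean(20), published_mean(200)
print(f"published mean effect, n=20:  {small:.2f}")  # inflated well above 0.30
print(f"published mean effect, n=200: {large:.2f}")  # close to 0.30
```

This is exactly the pattern the biased funnel plot makes visible: the small-study side of the funnel is hollowed out, and what remains there is too good to be true.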
2. The Trim and Fill Algorithm
The Trim and Fill Algorithm is an iterative procedure that estimates how many studies are missing from a meta-analysis as a result of publication bias, and what their results would have to look like. In the graph, the blue dots indicate the studies that were used to calculate the overall effect size of the meta-analysis. The white dots on the left (one dot per study) indicate where, and how many, studies would be needed for an equal distribution of findings; these imputed studies are added to the observed ones before the overall effect is re-estimated.
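The core “fill” step is simple to sketch. Once the algorithm has estimated how many studies are missing (k0) and where the unbiased centre lies, the missing studies are imputed as mirror images of the most extreme observed ones. A minimal Python illustration, assuming k0 and the centre have already been estimated (the effect sizes are made up, and study weights are ignored for simplicity):

```python
def fill_step(effects, center, k0):
    """Impute k0 missing studies by mirroring the k0 most extreme
    right-side effects around the estimated centre, then re-pool."""
    extreme = sorted(effects)[-k0:]             # the most positive studies
    filled = [2 * center - e for e in extreme]  # their mirror images
    adjusted = (sum(effects) + sum(filled)) / (len(effects) + k0)
    return filled, adjusted

observed = [0.5, 1.0, 1.5, 2.0]  # hypothetical published effect sizes
filled, adjusted = fill_step(observed, center=1.0, k0=2)
print(filled)    # [0.5, 0.0] — mirrored counterparts of 1.5 and 2.0
print(adjusted)  # pooled estimate pulled back toward the centre
```

The full Duval and Tweedie (2000) algorithm additionally estimates k0 itself from the ranks of the observed effects, iterating between trimming the extreme studies and re-estimating the centre.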
3. Fail-safe N method
The Fail-safe N method tackles publication bias by estimating the number of additional ‘negative’ studies (studies in which the intervention effect is zero) that would be needed to increase the P value for the meta-analysis to above 0.05 (Rosenthal, 1979).
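Rosenthal’s formula is compact enough to compute directly: combining k studies via their z scores, the fail-safe N is the number of zero-effect file-drawer studies that would drag the combined result back below the significance threshold. A sketch with made-up z values:

```python
import math

def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: number of unpublished zero-effect studies
    needed to push the combined one-tailed p above .05."""
    k = len(z_scores)
    n = (sum(z_scores) ** 2) / (z_crit ** 2) - k
    return max(0, math.ceil(n))

# Ten hypothetical studies, each with z = 2.0 (p ≈ .023, one-tailed)
print(fail_safe_n([2.0] * 10))  # 138 file-drawer studies would be needed
```

If the resulting N is large relative to the number of observed studies, the meta-analytic conclusion is considered robust against the file drawer; a small N is a warning sign.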
After stating all those problems and possible solutions, maybe you have thought of other ways to counter these crucial biases and artefacts underlying the publication of research. Feel free to share and discuss your ideas with us! But when considering alternatives, keep in mind that some of the researchers who published their positive and irreplicable findings in journals just might be the reviewers of the paper in which you trump their years of research and call it an artifact of publication bias.
Christopher, J., & Brannick, M. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17(1), 120-128, doi: 10.1037/a0024445.
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455-463. http://184.108.40.206/research/Technical%20Reports/1998/98-17%20Taylor%20Tweedie.pdf
Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4). doi:10.1371/journal.pone.0010068. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010068
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin 86 (3): 638–641. doi:10.1037/0033-2909.86.3.638. http://www.cs.ucl.ac.uk/staff/M.Sewell/faq/publishing-research/Rosenthal1979.pdf
Scargle, J. D. (2000). Publication bias: The “file-drawer problem” in scientific inference. Journal of Scientific Exploration, 14(2), 94-106.
The Trim and Fill graph was drawn from Eudocia, C. Q. et al. (2009).
As part of EFPSA’s JEPS team, Sina Scherer works as JEPS Bulletin’s editor and is currently enrolled in the last year of her Master’s programme in Work and Organizational Psychology at the Westfälische Wilhelms-Universität Münster. Her fields of interest cover Intercultural Psychology, Personality and Organizational Psychology, as well as Health Psychology.