The implications of bite-size science

Doing research takes a long time. Writing a paper based on the data acquired through that research takes a long time. The review process, once the paper is finally written, also takes a long time (in some cases, 11 years). To shorten this arduous process, shorter article formats are rising in prominence in scientific journals. This is what we call bite-size science: short reports usually covering a single study. What are the benefits, and what are the costs, of moving to such brief formats?

Nick Haslam (2010) compared three psychology journals and their citation patterns over a six-year period. All three journals have well-established formats for longer and shorter articles. For example, Psychological Science has “research articles” (under 4,000 words), “research reports” (under 2,500 words), and “short reports” (under 1,500 words). Haslam’s study concluded that longer articles are, on average, slightly more likely to be cited. On the other hand, “short articles yield a similar or higher rate of such articles [highly cited ones] on a per-page basis” (Haslam, 2010, p. 263). This leads the author to conclude that short articles “appear to be somewhat more efficient in generating scientific influence than standard articles” (Haslam, 2010).

These findings concur with the advantages of short reports listed in the introduction of Bertamini and Munafò’s (2012) article. According to these authors, short formats allow faster communication of results, ease of assimilation, ease of access for people outside the field, ease of processing for editors and reviewers, and a more dynamic exchange of fresh ideas. The authors interpret this, in turn, alongside the practical goals of researchers trying to get their work published. Researchers are under great pressure to publish, either to keep their jobs or to advance. With bite-size articles, one lands firmly on the survival side of this publish-or-perish continuum, because writing shorter articles takes less time than writing longer ones: not only because they are smaller, but also because they are usually based on smaller data sets than longer articles with multiple studies and replications. This, in turn, means a higher published article count for the researcher. As Bertamini and Munafò (2012) put it: “quantity is still important, as it contributes to résumé length, exposure, name recognition, and summary statistics of productivity” (p. 67).

However, they remain more critical of bite-size science than Haslam. They draw attention to a problematic side of his analysis: the proposed ‘edge’ of bite-size articles may actually be an artifact of how we measure citation impact. Bertamini and Munafò (2012) explain it through this hypothetical example:

There are some technical problems with a citation impact adjusted for length, and it should not be taken as a bona fide superior measure of impact. If the same findings can be written in either a short or long format, and assuming that the two articles would get cited equally, the impact per page would be higher for the short article, but it would be misleading to say that the short article has achieved any greater impact than the long one. Moreover, suppose I conduct two studies providing converging evidence for the same conclusion and I can publish them in one long article or in two short articles. My colleagues A. Friend and A. Foe always cite all my work because it is relevant for what they do. They will cite either one long or two short articles in all their publications. Based on their citations, each of the three articles would have the same impact, but on a per-page measure, the shorter articles are more influential. This would be purely because of how we measure impact, not because of a difference in influence (p. 68).
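The arithmetic behind the artifact in the quoted example is easy to make concrete. Here is a minimal sketch with hypothetical page and citation counts (none of these numbers come from the articles discussed):

```python
# Hypothetical scenario in the spirit of Bertamini and Munafò's example:
# the same two studies are published either as one long article
# or as two short articles, and colleagues cite every article equally.
long_pages = 10               # one long article reporting both studies
short_pages = 5               # each of the two short articles
citations_per_article = 30    # A. Friend and A. Foe cite each article equally

impact_long = citations_per_article / long_pages    # citations per page, long format
impact_short = citations_per_article / short_pages  # citations per page, short format

# The short format scores twice as high per page, even though every
# article attracted exactly the same number of citations.
print(impact_long, impact_short)  # → 3.0 6.0
```

The per-page measure doubles the apparent influence of the short format purely by construction, which is exactly why Bertamini and Munafò caution against treating it as a superior measure of impact.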

Another problem arises when we take into account that shorter articles usually include only one study and less data (in particular, a smaller sample size). Coupled with publication bias, this will lead to what Bertamini and Munafò (2012) gravely describe as an “even greater contamination of the literature by false positive findings” (p. 69).

When all of this is taken into account, we see a familiar web of interconnected problems in the scientific publishing of psychology. Bite-size articles are seen as a viable way to ramp up one’s count of published papers (catering to the Impact Factor metric, despite its problems), which in turn only aggravates publication bias and further discourages the already rare practice of publishing replication studies among psychologists. All in all, a far-reaching migration to shorter formats without a careful examination of the existing biases and problems in the publication process seems unwarranted: the costs simply outweigh the benefits.


Both articles cited in this post are available through Green OA repositories:

Bertamini, M., & Munafò, M. R. (2012). Bite-Size Science and Its Undesired Side Effects. Perspectives on Psychological Science, 7(1), 67–71. doi: 10.1177/174569161142935

Haslam, N. (2010). Bite-Size Science: Relative Impact of Short Article Formats. Perspectives on Psychological Science, 5(3), 263–264. doi: 10.1177/1745691610369466



Ivan Flis is a graduate student of psychology at the Center for Croatian Studies at the University of Zagreb, Croatia. He is the Editor-in-Chief of the Journal of European Psychology Students (JEPS) and the Chair of the Right to Research Coalition Coordinating Committee for Africa, Europe and Middle East.
