Tag Archives: publication bias

Why Are Most Research Findings Incorrect?

Some of you might have asked yourselves this question when reviewing the literature of a specific field. Imagine the following situation: you have completed your research and now want to compare your results to previous work. You have finally found a suitable article, have the necessary effect sizes and power, and can start comparing. But wait: who says the reported results are correct? Would you notice if the results had been influenced by factors that are sometimes not even visible to the authors themselves? The probability of detecting them is tiny, especially as you only have limited information about how the study was conducted and which elements were left out. It is hard to digest, but most research findings, even those reported in high-quality journals, are incorrect. Try to imagine the impact this has on your education and your research, as well as on research in general.

In the following post I want to discuss several reasons why research results can be false and outline some ways in which the situation might be improved. The main points are drawn from Ioannidis's essay (2005). Continue reading


Replication Studies: It’s Time to Clean Up Your Act, Psychologists!

The past couple of years have been somewhat tumultuous for psychology. With the revelation of several high-profile cases of fraud, the field has come under scrutiny, not least from psychologists themselves. Most notably, Simmons, Nelson and Simonsohn (2011) showed how the number of possible decisions in the research process, and the flexibility researchers normally have in making them, can make obtaining significant results highly likely (see their paper for recommendations to counteract this). It is likely that there are many false positives (Type 1 errors) in the psychological literature (and other bodies of literature). Such errors, whether resulting from fraud or from cultural norms in research practice, are difficult to remove because journals are reluctant to publish exact replications (see previous post on publication bias) and reluctant to publish null results (which exacerbates the file drawer effect). Thankfully, some initiatives have been established to counter this risk to the credibility of psychology, in the form of systematic efforts at carrying out and publishing replications. However, negative attitudes towards replication remain. Many established researchers see no incentive in replication (Makel, Plucker & Hegarty, 2012): it is seen as a waste of time that could be spent on their own projects, and as more likely to cause frustration in their colleagues than to be rewarded with a publication. This is where you, the students, can step in. Carrying out a replication is a great way for a student to hone his or her research skills, while providing a valuable service to psychology. Before exploring the role students could have in this issue, let's examine exactly what replication is, the current situation of replication in psychology, and the efforts that have been made to advocate for it.
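The inflation of false positives that Simmons, Nelson and Simonsohn describe is easy to demonstrate. The sketch below is not from their paper; it is a minimal simulation (using a simple z-test with known variance, and arbitrarily chosen sample sizes) of one "researcher degree of freedom": measuring several dependent variables and reporting whichever comparison comes out best, even though there is no true effect anywhere.

```python
import math
import random
import statistics

random.seed(1)

def z_test_p(a, b):
    """Two-sided z-test for equal means, assuming unit variance."""
    n = len(a)
    z = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(2 / n)
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def flexible_study(n=20, dvs=3):
    """One 'study': two groups drawn from the SAME distribution,
    several dependent variables, keep only the smallest p-value."""
    ps = []
    for _ in range(dvs):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        ps.append(z_test_p(a, b))
    return min(ps)

trials = 2000
hits = sum(flexible_study() < 0.05 for _ in range(trials))
rate = hits / trials
print(f"False-positive rate with 3 DVs and a free choice: {rate:.1%}")
```

With three independent dependent variables, the chance of at least one spuriously significant result is roughly 1 − 0.95³ ≈ 14%, nearly triple the nominal 5% — and real researcher flexibility (optional stopping, covariates, subgroups) compounds this further.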

Continue reading

Chris Noone

Chris Noone is a PhD student at the School of Psychology at the National University of Ireland, Galway. His research focuses on the effects of mood on higher-order cognition. He is very engaged in working for EFPSA as the Member Representative Coordinator on the Board of Management.

More Posts


Evaluating qualitative research: Are we judging by the wrong standards?

Although qualitative research methods have grown increasingly popular, confusion exists over how their quality can be assessed, and the idea persists that qualitative research is of lesser value than quantitative research.  Quantitative and qualitative research have different historical roots and are based on very different concepts, yet the dominance of positivist ideas about what constitutes good-quality, valid research in psychology has often led qualitative research to be evaluated according to criteria designed to fit a very different paradigm.  Inevitably, the diverse perspectives that use qualitative methods, and their differing views on how people should be studied, mean there is disagreement and controversy over how quality should be evaluated.  Despite this, it is seen as important to develop common criteria that allow the quality of qualitative research to be evaluated on its own terms.

Continue reading

Lorna Rouse


Lorna graduated from the Open University in 2009 with a BSc (honours) in psychology and is currently studying for an MSc in Psychological Research Methods at Anglia Ruskin University. Lorna has worked as a Research Assistant at the University of Cambridge, providing support for studies investigating recovery from traumatic brain injury. In her spare time she organises events for the Cambridge branch of the Open University Psychological Society. She is particularly interested in qualitative research methods and intellectual disabilities.

More Posts


Bias in psychology: Bring in all significant results

You have invested countless hours completing your research, and have even been lucky enough (!) to have your results confirm the hypotheses you derived from an extensive literature search. The next step to consider is publication in a journal, and the higher the impact factor, the better. The hardworking (and fortunate) even complete this arduous step and can rejoice in receiving considerable attention from other scholars in the field. Passionate researchers know what I am talking about. But what about all those researchers who fail to confirm their hypotheses? Do they receive as much attention? The answer to this rhetorical question must presumably be: NO.

Results that are perceived as not advancing scientific knowledge might never reach public attention.  This tendency to report almost exclusively positive findings in scientific journals is a common point of contention, especially in the social sciences. Continue reading

Sina Scherer


Sina Scherer, studying at the University of Münster, Germany, and the University of Padova, Italy. I have previously worked as a JEPS Bulletin Editor and volunteer in an NMUN project simulating the political work of the United Nations. I am interested in cognitive neuroscience, intercultural psychology, anthropology and organizational psychology (aspects of work-life balance, expatriation).

More Posts


Is qualitative research still considered the poor relation?

It sometimes seems that the entire area of psychology is characterised by the friction between words and numbers. When I first considered a career in psychology, as a UK student, I was faced with the confusing choice of psychology as either a Bachelor of Arts or a Bachelor of Science. The former spoke to me of enticing social science research, such as interpersonal attraction, whilst the latter screamed scary statistics – avoid, avoid, avoid! However, in the years that have passed since I made this decision, psychology has increasingly come to be defined as a science, and the prevailing impression is that the discipline takes a distinct pride in its commitment to numbers. This is perhaps the natural outcome of living in a world which dictates that evidence counts for everything, a trend keenly reflected in the media’s thirst for statistics-based research stories. However, I hear you ask, what has happened to the fate of “words” during this numerical domination of psychology?

This is where the field of qualitative research enters into the equation, with a number of researchers having elected to favour data gathering in the form of words, pictures or objects rather than through the standard route of numbers and statistics. However, there has long been a sense of qualitative research as the “poor relation” of quantitative efforts. The question is whether qualitative research is still somehow perceived as being of lesser value than quantitative research, and how this affects the chances of publication.

Continue reading


The implications of bite-size science

Doing research takes a long time. Writing a paper based on the data acquired through research takes a long time. The review process for that paper, once it is finally written, takes a long time (in some cases, 11 years). To shorten this arduous process, shorter article formats are rising in prominence in scientific journals. This is what we call bite-size science: short reports usually covering a single study. What are the benefits, and what are the costs, of moving to such brief formats?

Continue reading

Ivan Flis

Ivan Flis is a PhD student in History and Philosophy of Science at the Descartes Centre, Utrecht University; and has a degree in psychology from the University of Zagreb, Croatia. His research focuses on quantitative methodology in psychology, its history and application, and its relation to theory construction in psychological research. He had been an editor of JEPS for three years in the previous mandates.

More Posts


What happens to studies that accept the null hypothesis?

Source: Scargle, 2000

“The literature of social sciences contains horror stories of journal editors and others who consider a study worthwhile only if it reaches a statistically significant, positive conclusion; that is, an equally significant rejection of a hypothesis is not considered worthwhile” (Scargle, 2000).

This is a footnote in an article on publication bias in scientific journals by Jeffrey D. Scargle, an astrophysicist working for NASA. Usually, the psychologist in me would go all defensive of our precious little social science, but then one discovers this: a couple of researchers trying to publish a paper debunking Bem’s research on ESP (in layman’s terms, ESP means predicting the future). More precisely, their woes while trying to publish a paper with nonsignificant results. How many papers have you read that report nonsignificant results, that accept the null hypothesis? I have a feeling you have the same answer as me, and it’s frighteningly close to zero. What happens to those papers? And what are the implications of such a publishing bias for science at large?

Continue reading

Ivan Flis

Ivan Flis is a PhD student in History and Philosophy of Science at the Descartes Centre, Utrecht University; and has a degree in psychology from the University of Zagreb, Croatia. His research focuses on quantitative methodology in psychology, its history and application, and its relation to theory construction in psychological research. He had been an editor of JEPS for three years in the previous mandates.

More Posts
