Publishing in an APA journal might seem like an unattainable goal for an undergraduate or master's student. However, if you have good research and supervisors who support you, there is a great chance you will achieve it. I was lucky enough to complete my final-year dissertation with two fantastic supervisors, and it was this research that later became the article published in the Journal of Experimental Psychology: Human Perception & Performance. It was a very long road, however, and in the following paragraphs I will re-travel it with you and share the experiences I had along the way.
Doing research takes a long time. Writing a paper based on the data acquired through that research takes a long time. The review process, once the paper is finally written, takes a long time (in some cases, 11 years). To shorten this arduous process, scientific journals are increasingly adopting shorter article formats. This is what we call bite-size science: short reports usually covering a single study. What are the benefits, and what are the costs, of moving to such brief formats?
As psychologists and, more importantly, as psychology students, we heavily rely on the peer-review process. When conducting an online search for journal articles to inform our next research project or assignment, we expect to find high-quality research right then and there. The peer-review process saves us time; we approach our search with the assumption that most of the articles we find (at least those published in peer-reviewed journals) provide valuable insights into the area we are focusing on, often gleaned just by reading the abstract. The reviewer is our friend! In this post I will offer some insight into my personal experience of the peer-review process from the standpoint of the reviewer. More specifically, I will highlight how I have systematically approached the manuscripts I was asked to review.
Journals in psychology, although most of them are not yet Open Access (optimistically speaking), as previous posts have indicated, function as the working memory of scientific findings: they collect, preserve, and share qualitative and quantitative research, transmitting it worldwide and on to following generations. Although free access to most journals has not yet been achieved, journals nevertheless guide us through a quickly growing field of research. To keep us from getting confused and overwhelmed by the sheer mass of journals today, this post structures the journal world historically, starting from the first and only journal in psychology, established at the end of the 19th century.
“The literature of social sciences contains horror stories of journal editors and others who consider a study worthwhile only if it reaches a statistically significant, positive conclusion; that is, an equally significant rejection of a hypothesis is not considered worthwhile” (Scargle, 2000).
This is a footnote from an article on publication bias in scientific journals by Jeffrey D. Scargle, an astrophysicist working for NASA. Usually the psychologist in me would leap to the defense of our precious little social science, but then one discovers this: a couple of researchers trying to publish a paper debunking Bem's research on ESP (extrasensory perception; in layman's terms, predicting the future), and more precisely, their woes while trying to publish a paper with nonsignificant results. How many papers have you read that report nonsignificant results, that accept the null hypothesis? I have a feeling your answer is the same as mine, and it's frighteningly close to zero. What happens to those papers? And what are the implications of such a publication bias for science at large?
There are many obstacles to face when doing your own research: after finding a suitable field, conducting the research, and writing it up, your supervisor might tear the manuscript to pieces upon finding shortcomings in your methodology or results section. The authors of the study presented below went much further than mere methodological shortcomings: they fabricated an entire study, which was then published in the Indian Journal of Psychiatry. Did the entire review process fail for this study? What does this case teach you?
Four years ago, the President of the American Psychological Association addressed the 'thorny debate of Open Access', as she put it. What is APA's stance on open access?
Does APA, probably the most influential organization in psychology today, support the goal of open access to research? The answer is not obvious, so I tried to put together an informed perspective on APA's open-access policy. You can find what my inquiry elucidated in the rest of this post.
When considering a research idea, we are bound to rely on previous findings on the topic. Work done in the field constructs the foundation for our research and determines its course and value. Inaccurate findings may lead to imprecise applications and to further fallacies in the new scientific knowledge we construct. To set a solid basis for research on any topic and to prevent the multiplication of misinformation, it is crucial to critically evaluate existing scientific evidence and to know which information can be regarded as plausible.
So what are the criteria for deciding whether a result can be trusted? As we are taught in our first psychology classes, errors may emerge at any phase of the research process. Therefore, it all boils down to how the research was conducted and how the results are presented.
Meltzoff (2007) emphasizes the key issues that can produce flawed results and interpretations and that should therefore be carefully considered when reading articles. Here is a reminder of what to bear in mind when reading a research article:
The editors of the Journal of Personality and Social Psychology created quite a buzz when word got out that they were planning to publish a paper on extrasensory perception (ESP) in an issue of their journal. Daryl J. Bem, an emeritus professor at Cornell University, is the author of this controversial paper, which provoked reactions in the mainstream media (e.g., an article in the New York Times) and in academia, with prompt criticism by Wagenmakers, Wetzels, Borsboom, and van der Maas (submitted) and by Rouder and Morey (submitted). What are the implications of a distinctly parapsychological research paper being published in a mainstream psychology journal? Is this a failure of the review process, or proof that current scientific review is truly unbiased?