Lessons from a published fake study

There are many obstacles to face when doing your own research: after finding a suitable field, conducting the research, and writing it up, your supervisor may still tear it to pieces if they find shortcomings in your methodology or results section. The authors of the study presented below went much further than merely failing to discuss methodological issues: they made up an entire study, and it got published in the Indian Journal of Psychiatry. Did the review process fail completely for this study? What does this case teach you?

Well, it is not as dramatic as it seems. Although the study was indeed published with major methodological flaws, its author, Andrade (2011), intended it that way. Why publish a paper describing a fictitious study of a fictitious drug?

To expose common methodological problems in design, data analysis, and data presentation, the fake clinical-trial paper was published alongside a companion paper in the same journal issue. The companion paper critically examines the fake trial from the perspectives of author, reader, reviewer, and researcher, covering problems in designing and conducting a clinical trial, issues in analysing the data, how to write a research paper, and how to critically read or review journal articles.

If you want to learn the most from its methodological flaws, read the trial paper first and then compare your own critical assessment with the points raised in Andrade's companion paper. Both are open access, so you can easily put your research knowledge to the test.

 

**SPOILER ALERT** The following will point out the mistakes made in Andrade (2011).

 

After you’ve read the article, check whether you spotted the following mistakes (and naturally you won’t be making these on your own when writing your next paper):

1. The title “A 6-week, multicentre, randomized controlled clinical trial to evaluate the safety and efficacy of placeboxetine hydrochloride in the treatment of major depressive disorder in an Indian setting” is too long and clumsily worded.

2. No placebo control group was used, which unfortunately is very common in trials comparing two drugs. A correctly designed study should nevertheless include a placebo arm.

3. The “placeboxetine” was given at a higher dose, relative to its dose range, than the comparison drug, but the authors do not explain why.

4. Although side effects are reported, there is no explanation of how they were assessed.

5. There are subtle but important problems with the statistics, such as the reliance on multiple t-tests.

6. The trial had several features that are unusual for a depression trial, with no explanation offered: most patients were males in their 20s, whereas the norm is about 65% females with an average age in the 40s; very few participants dropped out; and very few of the people screened were excluded, whereas most trials exclude many screened patients for specific reasons.

7. The effectiveness of both drugs was remarkably high (a 75% cure rate over 6 weeks, better than any treatment, drug or therapy, would be expected to show). Yet the authors fail to point this out.

8. It turns out that the trial was sponsored by the fictional pharmaceutical company, and was probably conducted to help get placeboxetine sold in India – but we only find this out in the small print at the end of the article.

9. Hot pink and white is not a good colour scheme for your graphs.
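Point 5 is worth unpacking. A well-known statistical pitfall in clinical trials is running a separate t-test at every assessment visit instead of one repeated-measures analysis: each extra test adds another chance of a false positive. The minimal simulation below (my own illustration, not taken from either of Andrade's papers; all parameter values are assumptions) estimates how often at least one of six per-visit t-tests comes out “significant” when the two treatment arms in fact do not differ at all:

```python
import random
import statistics
from statistics import NormalDist

random.seed(42)

def two_sample_t_p(a, b):
    """Welch t statistic with a normal approximation for the p-value.

    With 100 subjects per arm the t distribution is close enough to
    normal that NormalDist gives a reasonable two-sided p-value.
    """
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    n = len(a)
    t = (ma - mb) / ((va / n + vb / n) ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(t)))

n_sims, n_visits, n_per_arm, alpha = 2000, 6, 100, 0.05
false_positives = 0
for _ in range(n_sims):
    # Both arms are drawn from the same distribution: no true effect.
    any_significant = False
    for _ in range(n_visits):
        drug = [random.gauss(0, 1) for _ in range(n_per_arm)]
        comparator = [random.gauss(0, 1) for _ in range(n_per_arm)]
        if two_sample_t_p(drug, comparator) < alpha:
            any_significant = True
    if any_significant:
        false_positives += 1

fwer = false_positives / n_sims
print(f"Family-wise error rate over {n_visits} visits: {fwer:.2f}")
# Theoretically 1 - 0.95**6 ≈ 0.26, far above the nominal 0.05 level.
```

Under these assumptions the chance of at least one spurious “significant” visit is roughly one in four, which is why a single repeated-measures analysis (or a multiplicity correction) is the appropriate choice.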

Therefore, before your supervisor or the journal’s reviewers get the chance to criticize your paper, keep in mind the errors that can occur when designing, analysing, or presenting your study, so that Andrade’s (2011) study remains the only badly designed one that managed to get published.

References
Andrade, C. (2011). A 6-week, multicentre, randomized controlled clinical trial to evaluate the safety and efficacy of placeboxetine hydrochloride in the treatment of major depressive disorder in an Indian setting. Indian Journal of Psychiatry, 53(1), 69-72.
Andrade, C. (2011). Placeboxetine for major depressive disorder: Researcher, author, reader, and reviewer perspectives on randomized controlled trials. Indian Journal of Psychiatry, 53(1), 73-76.

 

__________________________________________________________________________________________

As part of EFPSA’s JEPS team, Sina Scherer works as a JEPS Bulletin editor and is currently enrolled in the final year of her Master’s programme in Work and Organizational Psychology at the Westfälische Wilhelms-Universität Münster. Her fields of interest cover Intercultural Psychology, Personality and Organizational Psychology, as well as Health Psychology.

About the author

Sina Scherer, studying at the University of Münster, Germany, and the University of Padova, Italy. I have previously worked as a JEPS Bulletin Editor and volunteer in an NMUN project simulating the political work of the United Nations. I am interested in cognitive neuroscience and intercultural psychology, anthropology, and organizational psychology (aspects of work-life balance, expatriation).

  • Ivan Flis

A follow-up on a social psychologist from the Netherlands who faked data in many papers. Not only did he fake data in research papers, but some of that data was used by his PhD students in writing their dissertations (thank you, Dimitris, for this information).

    http://news.sciencemag.org/scienceinsider/2011/10/report-dutch-lord-of-the-data-fo.html

  • Florian Cala

Obviously, events of this kind are not new in the field of scientific research, and I guess there will be many more to come in the near future… Anyway, here is an interesting review I’d like to share with you on the work of Cyril Burt, a man whose theories on the conception of intelligence also turned out to be fraudulently designed. You might find it interesting, and I invite you to give it a look.

    http://pubs.socialistreviewindex.org.uk/sr196/parrington.htm

What disturbs me the most about this case is that the work of Cyril Burt unfairly influenced the education system of his time…

  • Ivan Flis

    This reminds me of Sacks’ critique of standardized testing, which can be attributed to intelligence tests too. In short, the Volvo Effect as he called it:

    “Although standardized tests have a relatively bleak record of predicting success in school and work, we know that they do tend to correlate exceedingly well with the income and education of one’s parents. Call it the ‘Volvo Effect.’ The data is so strong in this regard that one could make a good guess about a child’s standardized test scores by simply looking at how many degrees her parents have and what kind of car they drive.”

But I see this as yet another nail in the coffin of the nature vs. nurture debate which has ravaged psychology for its whole existence – not as a debunking of the concept of intelligence, but of that fictitious demarcation. And to think what the repercussions are of inventing data on such a central and, more importantly, applied concept as intelligence – truly frightening.

    Thanks for sharing!