Category Archives: Interviews

Interviews with various people working in publishing and research.

Not solely about that Bayes: Interview with Prof. Eric-Jan Wagenmakers

Last summer saw the publication of the most important work in psychology in decades: the Reproducibility Project (Open Science Collaboration, 2015; see here and here for context). It stirred up the community, resulting in many constructive discussions but also in verbally violent disagreement. What unites all parties, however, is the call for more transparency and openness in research.

Eric-Jan “EJ” Wagenmakers has argued for pre-registration of research (Wagenmakers et al., 2012; see also here) and direct replications (e.g., Boekel et al., 2015; Wagenmakers et al., 2015), for a clearer demarcation of exploratory and confirmatory research (de Groot, 1954/2013), and for a change in the way we analyze our data (Wagenmakers et al., 2011; Wagenmakers et al., in press).

Concerning the latter point, EJ is a staunch advocate of Bayesian statistics. With his many collaborators, he writes the clearest and wittiest expositions of the topic (e.g., Wagenmakers et al., 2016; Wagenmakers et al., 2010). Crucially, he is also a key player in opening Bayesian inference up to social and behavioral scientists more generally; in fact, the software JASP is EJ’s brainchild (see also our previous interview).


In sum, psychology is changing rapidly, both in how researchers communicate and do science, but increasingly also in how they analyze their data. This makes it nearly impossible for university curricula to keep up; courses in psychology are often years, if not decades, behind. Statistics classes in particular are usually boringly cookbook-oriented and often fraught with misconceptions (Wagenmakers, 2014). At the University of Amsterdam, Wagenmakers succeeds in doing things differently. He has previously taught a class called “Good Science, Bad Science”, discussing novel developments in methodology as well as supervising students in preparing and conducting direct replications of recent research findings (cf. Frank & Saxe, 2012).

Now, at the end of the day, testing undirected hypotheses using p values or Bayes factors only gets you so far – even if you preregister the heck out of it. To move the field forward, we need formal models that instantiate theories and make precise quantitative predictions. Together with Michael Lee, Eric-Jan Wagenmakers has written an amazing practical cognitive modeling book, harnessing the power of computational Bayesian methods to estimate arbitrarily complex models (for an overview, see Lee, submitted). More recently, he has co-edited a book on model-based cognitive neuroscience on how formal models can help bridge the gap between brain measurements and cognitive processes (Forstmann & Wagenmakers, 2015).

Long-term readers of the JEPS bulletin will note that topics such as openness of research, pre-registration and replication, research methodology, and Bayesian statistics are recurring themes. It was thus only a matter of time before we interviewed Eric-Jan Wagenmakers and asked him questions concerning all of the areas above. In addition, we ask: how does he stay so immensely productive? What tips does he have for students interested in an academic career? And what can instructors learn from “Good Science, Bad Science”? Enjoy the ride!


Bobby Fischer, the famous chess player, once said that he does not believe in psychology. You actually switched from playing chess to pursuing a career in psychology; tell us how this came about. Was it a good move?

It was an excellent move, but I have to be painfully honest: I simply did not have the talent and the predisposition to make a living out of playing chess. Several of my close friends did have that talent and went on to become international grandmasters; they play chess professionally. But I was actually lucky. For players outside of the world top-50, professional chess is a career trap. The pay is poor, the work insanely competitive, and the life is lonely. And society has little appreciation for professional chess players. In terms of creativity, hard work, and intellectual effort, an international chess grandmaster easily outdoes the average tenured professor. People who do not play chess themselves do not realize this.

Your list of publications gets updated so frequently, it should have its own RSS feed! How do you grow and cultivate such an impressive network of collaborators? Do you have specific tips for early career researchers?

At the start of my career I did not publish much. For instance, when I finished my four years of grad studies I think I had two papers. My current publication rate is higher, and part of that is due to an increase in expertise. It is just easier to write papers when you know (or think you know) what you’re talking about. But the current productivity is mainly due to the quality of my collaborators. First, at the psychology department of the University of Amsterdam we have a fantastic research master program. Many of my graduate students come from this program, having been tried and tested in the lab as RAs. When you have, say, four excellent graduate students, and each publishes one article a year, that obviously helps productivity. Second, the field of Mathematical Psychology has several exceptional researchers that I have somehow managed to collaborate with. In the early stages I was a graduate student with Jeroen Raaijmakers, and this made it easy to start work with Rich Shiffrin and Roger Ratcliff. So I was privileged and I took the opportunities that were given. But I also work hard, of course.

There is a lot of advice that I could give to early career researchers but I will have to keep it short. First, in order to excel in whatever area of life, commitment is key. What this usually means is that you have to enjoy what you are doing. Your drive and your enthusiasm will act as a magnet for collaborators. Second, you have to take initiative. So read broadly, follow the latest articles (I remain up to date through Twitter and Google Scholar), get involved with scientific organizations, coordinate a colloquium series, set up a reading group, offer to review papers together with your advisor, attend summer schools, etc. For example, when I started my career I had seen a new book on memory and asked the editor of Acta Psychologica whether I could review it for them. Another example is Erik-Jan van Kesteren, an undergraduate student from a different university who had attended one of my talks about JASP. He later approached me and asked whether he could help out with JASP. He is now a valuable member of the JASP team. Third, it helps if you are methodologically strong. When you are methodologically strong –in statistics, mathematics, or programming– you have something concrete to offer in a collaboration.

Considering all projects you are involved in, JASP is probably the one that will have most impact on psychology, or the social and behavioral sciences in general. How did it all start?

In 2005 I had a conversation with Mark Steyvers. I had just shown him a first draft of a paper that summarized the statistical drawbacks of p-values. Mark told me “it is not enough to critique p-values. You should also offer a concrete alternative”. I agreed and added a section about BIC (the Bayesian Information Criterion). However, the BIC is only a rough approximation to the Bayesian hypothesis test. Later I became convinced that social scientists will only use Bayesian tests when these are readily available in a user-friendly software package. About 5 years ago I submitted an ERC grant proposal “Bayes or Bust! Sensible hypothesis tests for social scientists” that contained the development of JASP (or “Bayesian SPSS” as I called it in the proposal) as a core activity. I received the grant and then we were on our way.
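The BIC-to-Bayes-factor link EJ alludes to can be sketched in a few lines of Python. The relation BF01 ≈ exp((BIC1 − BIC0)/2) is the standard unit-information-prior approximation, which is exactly why it is only "rough"; the BIC values below are purely illustrative numbers, not from any real analysis.

```python
import math

def approx_bf01(bic_h0, bic_h1):
    """Approximate Bayes factor in favour of H0 from two models' BIC values.

    Uses the standard relation BF01 ~ exp((BIC1 - BIC0) / 2). This is only a
    rough approximation to a full Bayesian hypothesis test, because the BIC
    implicitly assumes a particular (unit-information) prior.
    """
    return math.exp((bic_h1 - bic_h0) / 2)

# A model whose BIC is 6 points lower is favoured by a factor of about 20
print(round(approx_bf01(100.0, 106.0), 1))
```

A difference of about 6 BIC points thus already corresponds to what is conventionally called "strong" evidence, which shows why the approximation, however crude, was a usable first alternative to p-values.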

I should acknowledge that many of the Bayesian computations in JASP depend on the R BayesFactor package developed by Richard Morey and Jeff Rouder. I should also emphasize the contribution of JASP's first software engineer, Jonathon Love, who suggested that JASP ought to feature classical statistics as well. In the end we agreed that by including classical statistics, JASP could act as a Trojan horse and boost the adoption of Bayesian procedures. So the project started as “Bayesian SPSS”, but the scope was quickly broadened to include p-values.

JASP is already game-changing software, but it is under continuous development and improvement. More concretely, what do you plan to add in the near future? What do you hope to achieve in the long-term?

In terms of the software, we will shortly include several standard procedures that are still missing, such as logistic regression and chi-square tests. We also want to upgrade the popular Bayesian procedures we have already implemented, and we are going to create new modules. Before too long we hope to offer a variable views menu and a data-editing facility. When all this is done it would be great if we could make it easier for other researchers to add their own modules to JASP.

One of my tasks in the next years is to write a JASP manual and JASP books. In the long run, the goal is to have JASP be financially independent of government grants and university support. I am grateful for the support that the psychology department at the University of Amsterdam offers now, and for the support they will continue to offer in the future. However, the aim of JASP is to conquer the world, and this requires that we continue to develop the program “at break-neck speed”. We will soon be exploring alternative sources of funding. JASP will remain free and open-source, of course.

You are a leading advocate of Bayesian statistics. What do researchers gain by changing the way they analyze their data?

They gain intellectual hygiene, and coherent answers to questions that make scientific sense. A more elaborate answer is outlined in a paper currently submitted to a special issue of Psychonomic Bulletin & Review: https://osf.io/m6bi8/ (Part I).

The Reproducibility Project used different metrics to quantify the success of a replication – none of them really satisfactory. How can a Bayesian perspective help illuminate the “crisis of replication”?

As a theory of knowledge updating, Bayesian statistics is ideally suited to address questions of replication. However, the question “did the effect replicate?” is underspecified. Are the effect sizes comparable? Does the replication provide independent support for the presence of the effect? Does the replication provide support for the position of the proponents versus the skeptics? All these questions are slightly different, but each receives the appropriate answer within the Bayesian framework. Together with Josine Verhagen, I have explored a method –the replication Bayes factor– in which the prior distribution for the replication test is the posterior distribution obtained from the original experiment (e.g., Verhagen & Wagenmakers, 2014). We have applied this intuitive procedure to a series of recent experiments, including the multi-lab Registered Replication Report of Fritz Strack’s Facial Feedback hypothesis. In Strack’s original experiment, participants who held a pen with their teeth (causing a smile) judged cartoons to be funnier than participants who held a pen with their lips (causing a pout). I am not allowed to tell you the result of this massive replication effort, but the paper will be out soon.
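For readers who want to see the mechanics, here is a minimal Python sketch of the replication Bayes factor idea, assuming the simplest possible design: a binomial rate tested against a point null, with a uniform prior for the original study. The actual method of Verhagen and Wagenmakers (2014) is developed for t-tests and effect sizes, so this illustrates the principle (posterior of the original becomes the prior for the replication), not their implementation.

```python
import math

def log_beta(a, b):
    # Log of the Beta function B(a, b), via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def replication_bf(k_orig, n_orig, k_rep, n_rep):
    """Replication Bayes factor for a binomial rate against theta = 0.5.

    The posterior from the original study, Beta(1 + k_orig, 1 + n_orig - k_orig)
    under a uniform Beta(1, 1) prior, serves as the prior for the replication
    test. Returns BF10: evidence that the replication supports the proponent's
    posterior over the skeptic's point null.
    """
    a = 1 + k_orig
    b = 1 + n_orig - k_orig
    # Marginal likelihood of the replication data under the posterior-as-prior
    log_m1 = log_beta(a + k_rep, b + n_rep - k_rep) - log_beta(a, b)
    # Likelihood under the point null theta = 0.5 (binomial coefficient cancels)
    log_m0 = n_rep * math.log(0.5)
    return math.exp(log_m1 - log_m0)

# Original study: 70/100 successes; the replication observes a similar rate
bf = replication_bf(70, 100, 35, 50)
print(f"Replication BF10 = {bf:.1f}")  # > 1: replication supports the effect
```

If the replication instead comes out at the null rate (say 25/50), the same function returns a Bayes factor below 1, quantifying evidence that the original effect did not replicate.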

You have recently co-edited a book on model-based cognitive neuroscience. What is the main idea here, and what developments in this area are most exciting to you?

The main idea is that much of experimental psychology, mathematical psychology, and the neurosciences pursue a common goal: to learn more about human cognition. So ultimately the interest is in latent constructs such as intelligence, confidence, memory strength, inhibition, and attention. The models that have been developed in mathematical psychology are able to link these latent constructs to specific model parameters. These parameters may in turn be estimated by behavioral data, by neural data, or by both data sets jointly. Brandon Turner is one of the early career mathematical psychologists who has made great progress in this area. So the mathematical models are a vehicle to achieve an integration of data from different sources. Moreover, insights from neuroscience can provide important constraints that help inform mathematical modeling. The relation is therefore mutually beneficial. This is summarized in the following paper: http://www.ejwagenmakers.com/2011/ForstmannEtAl2011TICS.pdf

One thing that distinguishes science from sophistry is replication; yet it is not standard practice. In “Good Science, Bad Science”, you had students prepare a registered replication plan. What was your experience teaching this class? What did you learn from the students?

This was a great class to teach. The students were highly motivated and oftentimes it felt more like a lab meeting than a class. The idea was to develop four Registered Report submissions. Some time has passed, but the students and I still intend to submit the proposals for publication.

The most important lesson this class has taught me is that our research master students want to learn relevant skills and conduct real research. In the next semester I will teach a related course, “Good Research Practices”, and I hope to attain the same high levels of student involvement. For the new course, I plan to have students read a classic methods paper that identifies a fallacy; next the students will conduct a literature search to assess the current prevalence of the fallacy. I have done several similar projects, but never with master students (e.g., http://www.ejwagenmakers.com/2011/NieuwenhuisEtAl2011.pdf and http://link.springer.com/article/10.3758/s13423-015-0913-5).

What tips and tricks can you share with instructors planning to teach a similar class?

The first tip is to set your aims high. For a research master class, the goal should be publication. Of course this may not always be realized, but it should be the goal. It helps if you can involve colleagues or graduate students. If you set your aims high, the students know that you take them seriously, and that their work matters. The second tip is to arrange the teaching so that the students do most of the work. The students need to develop a sense of ownership about their projects, and they need to learn. This will not happen if you treat the students as passive receptacles. I am reminded of a course that I took as an undergraduate. In this course I had to read chapters, deliver presentations, and prepare questions. It was one of the most enjoyable and inspiring courses I had ever taken, and it took me decades to realize that the professor who taught the course actually did not have to do much at all.

Many scholarly discussions these days take place on social media and blogs. You joined Twitter yourself over a year ago. How do you navigate the social media jungle, and what resources can you recommend to our readers?

I am completely addicted to Twitter, but I also feel it makes me a better scientist. When you are new to Twitter, I recommend that you start by following a few people that have interesting things to say. Coming from a Bayesian perspective, I recommend Alexander Etz (@AlxEtz) and Richard Morey (@richarddmorey). And of course it is essential to follow JASP (@JASPStats). As is the case for all social media, the most valuable resource you have is the “mute” option. Exercise it ruthlessly to prevent yourself from being swamped by holiday pictures.


Publishing a Registered Report as a Postgraduate Researcher

Registered Reports (RRs) are a new publishing format pioneered by the journal Cortex (Chambers 2013). This publication format emphasises the process of rigorous research, rather than the results, in an attempt to avoid questionable research practices such as p-hacking and HARK-ing, which ultimately reduce the reproducibility of research and contribute to publication bias in cognitive science (Chambers et al. 2014). A recent JEPS post by Dablander (2016) and JEPS’ own editorial for adopting RRs (King et al. 2016) have given a detailed explanation of the RR process. However, you may have thought that publishing a RR is reserved for only senior scientists, and is not a viable option for a postgraduate student. In fact, five of the first six RRs published by Cortex have had postgraduate students as authors, and publishing a RR offers postgraduates and early career researchers many unique benefits.

In the following article you will hear about the experience of Dr. Hannah Hobson, who published a RR in the journal Cortex as a part of her PhD project. I spoke to Hannah about the planning that was involved, the useful reviewer comments she received, and asked her what tips she has for postgraduates interested in publishing a RR. Furthermore, there are some comments from Professor Chris Chambers who is a section editor for Cortex on how postgraduates can benefit from using this publishing format.

Interview with Dr. Hannah Hobson

Hannah completed her PhD project on children’s behavioural imitation skills, and potential neurophysiological measures of the brain systems underlying imitation. Her PhD was based at the University of Oxford, under the supervision of Professor Dorothy Bishop. During her studies, Hannah became interested in mu suppression, an EEG measure purported to reflect the activity of the human mirror neuron system. However, she was concerned that much of the research on mu suppression suffered from methodological problems, despite this measure being widely used in social cognitive neuroscience. Hannah and Dorothy thought it would be appropriate to publish a RR to focus on some of these issues. This study was published in the journal Cortex, and investigated whether mu suppression is a good measure of the human mirror neuron system (Hobson and Bishop 2016). I spoke to Hannah about her project and what her experience of publishing a RR was like during her PhD.

 

As you can hear from Hannah’s experience, publishing a RR was beneficial in ways that would not be possible with standard publishing formats. However, they are not suitable for every study. Drawing from Hannah’s experience and Chris Chambers’ role in promoting RRs, the main strengths and concerns for postgraduate students publishing a RR are summarised below.

Strengths

Reproducible findings

It has been highlighted that the majority of psychological studies suffer from low power. As well as limiting the chances of finding an effect, low-powered studies are more likely to lack reproducibility as they overestimate the effect size (Button et al. 2013). As a part of the stage one submission, a formal power analysis needs to be performed to identify the number of participants required for a high-powered study (power > 90%). Therefore, PhD studies published as RRs will have greater power and reproducibility in comparison to the average unregistered study (Chambers et al. 2014).
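As a rough illustration of what such a power analysis involves, the normal-approximation sample-size formula for a two-sided, two-group comparison can be computed with nothing but the Python standard library. The effect size below is illustrative, and dedicated tools such as G*Power give slightly more accurate answers based on the t distribution.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison with standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile matching the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" effect (d = 0.5) at 90% power needs roughly 85 participants
# per group; note how quickly the requirement grows as effects shrink
print(n_per_group(0.5))
```

Running this for a range of plausible effect sizes before stage one submission makes the recruitment burden of a high-powered design concrete, which is exactly the planning step the RR format forces.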

More certainty over publications

The majority of published PhD studies begin to emerge during the final year or during your first post-doctoral position. As the academic job market becomes ever more competitive, publications are essential. As Professor Chambers notes, RRs “enable PhD students to list provisionally accepted papers on their CVs by the time they submit their PhDs”. Employers will see greater certainty in a RR with stage one approval than the ‘in preparation’ listed next to innumerable papers following the standard publishing format.

Lower rejection rate at stage two submission

Although reaching stage one approval is more difficult due to the strict methodological rigour required, there is greater certainty in the eventual outcome of the paper once you have in-principle acceptance. In Cortex, approximately 90% of unregistered reports are rejected upon submission, but only 10% of RRs which reach stage one review have been rejected, with none so far being rejected after in-principle acceptance.

“This means you are far more likely to get your paper accepted at the first journal you submit to, reducing the tedious and time-wasting exercise of submitting down a chain of journals after your work is finished and you may already be competing on the job market”. – Professor Chris Chambers

As Dorothy Bishop explains in her blog, once you have in-principle acceptance you are in control of the timing of the publication (Bishop 2016). This means that you will have a publication in print during your PhD, as opposed to starting to submit papers towards the end which may only be ‘in preparation’ by the time of your viva voce.

Constructive reviewer comments

As the rationale and methodology are peer-reviewed before the data-collection process, reviewers are able to make suggestions to improve the design of your study. In Hannah’s experience, a reviewer pointed out an issue with her control stimuli. If she had conducted the study following the standard format, reviewers would only have been able to point this out retrospectively, when there was no option to change it. This experience will also be invaluable during your viva voce. As you defend your work in front of the examiners, you know your study has already gone through several rounds of review, so you can be confident in how robust it is.

Things to consider

Time constraints

Recruiting and testing participants is a lengthy process, and you often encounter a series of setbacks. If you are already in the middle of your PhD, then you may not have time to go through stage one submission before collecting your data. In Hannah’s case, publishing a RR was identified early in the project which provided a sufficient amount of time to complete it during her PhD. If you are interested in RRs, it is advisable to start the submission process as early into your PhD as possible. You may even want to start the discussion during the interview process.

Ethics merry-go-round

During stage one submission, you need to provide evidence that you already have ethical approval. If the reviewers want you to make changes to the methodology, this may necessitate amending your ethics application. In busy periods, this process of going back and forth between the reviewers and your ethics committee can become time-consuming. As time constraints are the pertinent concern for postgraduates publishing a RR, this is an additional hurdle that must be negotiated. Whilst there is no easy solution to this problem, the aim of publishing a RR must be identified early in your project to ensure you will have enough time, and a back-up plan should be prepared in case things do not work out.

RRs are not available in every journal

Although there has been a surge in journals offering RRs, they are not yet available in every one. Your research might be highly specialised, and the key journal in your area may simply not offer the format, leaving you without the option of publishing your study as a RR. Whilst there is no simple solution for this, a regularly updated list of journals offering RRs is maintained on the Open Science Framework (OSF).

Supervisor conflict

Although there are a number of prominent researchers behind the initiative (Guardian Open Letter 2013), there is not universal agreement, with some researchers voicing concerns (Scott 2013; although see Chambers et al. 2014 for a rebuttal to many common concerns). There have been some vocal critics of RRs, and one of them might end up being your supervisor. If you want to conduct a RR as a part of your PhD and your supervisor is against it, there may be some conflict. Again, it is best to identify early on in your PhD whether you want to publish a RR, and make sure both you and your supervisor are on the same page.

Conclusion

Publishing a RR as a postgraduate researcher is a feasible option that provides several benefits, both to the individual student and to wider scientific progress. Research published as a RR is more likely to produce reproducible findings, due to the necessary high level of power, reviewers’ critique before data collection, and safeguards against questionable research practices such as p-hacking or HARK-ing. Providing the work is carried out as agreed, a study that has achieved stage one approval is likely to be published, allowing students the opportunity to publish their hard work, even if the findings are negative. Moreover, going through several rounds of peer-review on the proposed methodology provides an additional layer of rigour (good for science), that aids your defence in your viva voce (good for you). Of course, it is not all plain sailing and there are several considerations students will need to make before embarking on an RR. Nonetheless, despite these concerns, this publishing format is a step in the right direction for ensuring that robust research is being conducted right down to the level of postgraduate students.

If you like the idea but do not think formal pre-registration with a journal is suitable for your project, perhaps consider using the OSF. The OSF is a site where researchers can timestamp their hypotheses and planned analyses, allowing them to develop hypothesis-driven research habits. In one research group, it is necessary for all studies ranging from undergraduate projects to grant-funded projects to be registered on third-party websites such as the OSF (Munafò 2015). Some researchers such as Chris Chambers have even made it a requirement for applicants wanting to join their group to demonstrate a prior commitment to open science practices (Chambers 2016). Starting to pre-register your studies and publish RRs as a postgraduate student demonstrates this commitment, and will prove to be crucial as open science practices become an essential criterion in recruitment.

“To junior researchers I would say that pre-registration — especially as a Registered Report — is an ideal option for publishing high-quality, hypothesis-driven research that reflects an investment both in good science and your future career” – Professor Chris Chambers 

Pre-registration and RRs are both initiatives to improve the rigour and transparency of psychological science (Munafò et al. 2014). These initiatives are available to us as research students, and it is not just the responsibility of senior academics to fight against questionable research practices. We can join in too.

Acknowledgements

Thank you to Dr. Hannah Hobson who was happy to talk about her experience as a PhD student and for her expertise in recording the interview. Hannah also helped to write and revise the post. I would also like to thank Professor Chris Chambers for taking the time to provide some comments for the post.


The Statistics Hell has expanded: An interview with Prof. Andy Field

Does the mention of the word “statistics” strike fear into your heart and send shivers down your spine? The results section of your thesis seeming like that dark place one should avoid at all costs? Heteroscedasticity gives you nightmares? You dread having to explain to someone what degrees of freedom are? What is the point of using ANOVA if we can do a series of t-tests? If any of these remind you of the pain of understanding statistics, or the dread of how much more lies ahead during your studies, when all you really want is someone to explain it in a humanly understandable way—look no further. Quite a few fellow students might tell you “You should go and look at Andy Field’s books. Now, at least, I understand stats”. The “Discovering statistics using …” series is a gentle, student-friendly introduction to statistics. Principles are introduced at a slow pace, with plenty of workable examples so that anyone with basic maths skills will be able to digest it. Now add a lens of humor and sarcasm that will have you giggling about statistics in no time!

There is a new book!

As JEPS has been excited about introducing Bayesian statistics into the lives of more psychology students (see here, here, and here for introductions, and here for software to play around with the Bayesian approach), the idea of a new book by Andy Field—whose work many of us love and wholeheartedly recommend—which incorporates this amazing approach was thrilling news.

We used this occasion to talk to Andy Field—who is he, what motivates him, and what are his thoughts on the future of psychology?

With your new book, you expand the Statistics hell with Bayesian statistics. Why is this good news for students?


There has, for a long time, been an awareness that the traditional method of testing hypotheses (null hypothesis significance testing, NHST) has its limitations. Some of these limitations are fundamental, whereas others are more about how people apply the method rather too blindly. Bayesian approaches offer an alternative, and arguably, more logical way to look at estimation and hypothesis testing. It is not without its own critics though, and it has its own set of different issues to consider. However, it is clear that there is a groundswell of support for Bayesian approaches, and that people are going to see these methods applied more and more in scientific papers. The problem is that Bayesian methods can be quite technical, and a lot of books and papers are fairly impenetrable. It can be quite hard to make the switch (or even understand what switch you would be making).

My new book essentially tries to lay some very basic foundations. It’s not a book about Bayesian statistics, it’s a book about analysing data and fitting models and I explain both the widely used classical methods and also some basic Bayesian alternatives (primarily Bayes factors). The world is not going to go Bayesian overnight, so what I’m trying to do is to provide a book that covers the material that lecturers and undergraduates want covered, but also encourages them to think about the limitations of those approaches and the alternatives available to them. Hopefully, readers will have their interest piqued enough to develop their understanding by reading more specifically Bayesian books. To answer the question then, there are two reasons why introducing Bayesian approaches is a good thing for students: (1) it will help them to understand more what options are available to them when they analyse data; and (2) published research will increasingly use Bayesian methods so it will help them to make sense of what other scientists are doing with their data.
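To make those "basic Bayesian alternatives" concrete, here is a minimal Python sketch of a Bayes factor computed via the Savage-Dickey density ratio (the approach explained in the Wagenmakers et al., 2010, tutorial cited earlier on this page), for the simplest possible case: a binomial rate with a uniform prior and a point null. The function names and numbers are illustrative, not from the book.

```python
import math

def log_beta(a, b):
    # Log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_pdf(x, a, b):
    # Density of the Beta(a, b) distribution at x (0 < x < 1)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
                    - log_beta(a, b))

def savage_dickey_bf01(k, n, theta0=0.5):
    """BF01 for H0: theta = theta0 versus H1: theta ~ Beta(1, 1).

    Savage-Dickey: for a nested point null, the Bayes factor equals the
    ratio of posterior to prior density at the null value theta0.
    """
    posterior = beta_pdf(theta0, 1 + k, 1 + n - k)
    prior = beta_pdf(theta0, 1, 1)  # the uniform prior has density 1
    return posterior / prior

# 50 successes in 100 trials: the data support the null theta = 0.5
print(savage_dickey_bf01(50, 100))  # BF01 > 1
```

The appeal for students is that the whole hypothesis test reduces to comparing two density values at a single point, which is far easier to visualise than a p-value.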

Your books are the savior for many not-so-technical psychology students. How did you first come up with writing your classic ‘Discovering Statistics with ….’ book?

Like many PhD students I was teaching statistics and SPSS to fund my PhD. I used to enjoy the challenge of trying to come up with engaging examples, and generally being a bit silly/off the wall. The student feedback was always good, and at the time I had a lot of freedom to produce my own teaching materials. At around that time, a friend-of-a-friend Dan Wright (a cognitive psychologist who was at the time doing a postdoc at City University in London) was good friends with Ziyad Marar, who now heads the SAGE publications London office but at the time was a commissioning editor. Dan had just published a stats book with SAGE and Ziyad had commissioned him to help SAGE to find new authors. I was chatting to Dan during a visit to City University, and got onto the subject of me teaching SPSS and my teaching materials and whatever and he said ‘Have you ever thought of turning those into a book?’ Of course I hadn’t because books seemed like things that ‘proper’ academics did, not me. Subsequently Dan introduced me to Ziyad, who wanted to sign me up to do the book, and I was in such a state of disbelief that anyone would want to publish a book written by me that I blindly agreed. The rest is history!

As an aside, I started writing it before completing my PhD although most of it was done afterwards, and I went so over the word limit that SAGE requested that I do the typesetting myself because (1) they didn’t think it would sell much (a reasonable assumption given I was a first-time author); and (2) this would save a lot of production costs. Essentially they were trying to cut their losses (and on the flip side, this also allowed me to keep the book as it was and not have to edit it to half the size!). It is a constant source of amusement to us all how much we thought the book would be a massive failure! I guess the summary is, it happened through a lot of serendipitous events. There was no master plan. I just wrote from the heart and hoped for the best, which is pretty much what I’ve done ever since.

Questionable research practices, and specifically the misuse of statistical methods, have been a hot topic in recent years. In your opinion, what critical measures have to be taken to improve the situation?

Three things spring immediately to mind: (1) taking the analysis away from the researcher; (2) changing the incentive structures; (3) a shift towards estimation. I’ll elaborate on these in turn.

Psychology is a very peculiar science. It's hard to think of many other disciplines where you are expected to be an expert theoretician in a research area and also a high-level data analyst with a detailed understanding of complex statistical models. It's bizarre really. The average medic, for example, when doing a piece of research will get expert advice from a trials unit on planning, measurement and randomization, and once the data are in they'll be sent to the biostats unit to fit the models. In other words, they are not expected to be an expert in everything: expertise is pooled. One thing, then, that I think would help is if psychologists didn't analyse their own data but instead sent them to a stats expert with no vested interest in the results. That way data processing and analysis could be entirely objective.

The other thing I would immediately change in academia is the incentive structures. They are completely ****** up. The whole 'publish or perish' mentality does nothing but harm science and waste public money. The first thing it does is create massive incentives to publish anything regardless of how interesting it is, but it also incentivises 'significance' because journals are far more likely to publish significant results. It also encourages (especially in junior scientists) quantity over quality, and it fosters individual rather than collective motivations. For example, promotions are all about the individual demonstrating excellence rather than demonstrating a contribution to a collective excellence. To give an example, in my research area of child anxiety I frequently have the experience that I disappear for a while to write a stats book and completely ignore child anxiety research for, say, six months. When I come back and try to catch up on the state of the art, hundreds, possibly thousands, of new papers have come out, mostly small variations on a theme, often spread across multiple publications. The signal-to-noise ratio is absolutely suffocating. My feeling on whether anything profound has changed in my six months out of the loop is 'absolutely not', despite several hundred new papers. Think of the collective waste of time, money and effort to achieve 'absolutely not'. It's good science done by extremely clever people, but everything is so piecemeal that you can't see the wood for the trees. The meaningful contributions are lost. Of course I understand that science progresses in small steps, but it has become ridiculous, and I believe that the incentive structures mean that many researchers prioritise personal gain over science. Researchers are, of course, doing what their universities expect them to do, but I can't help but feel that psychological science would benefit from people doing fewer studies in bigger teams to address larger questions.
Even at a very basic level this would mean that sample sizes would increase dramatically in psychology (which would be a wholly good thing). For this to happen, the incentive structures need to change. Value should be maximised for working in large teams, on big problems, and for saving up results to publish in more substantial papers; contribution to grants and papers should also become more balanced regardless of whether you’re first author, last author or part of a team of 30 authors.

From a statistical point of view we have to shift away from 'all or nothing' thinking towards estimation. From the point of view of publishing science, a reviewer should ask three questions: (1) is the research answering an interesting question that genuinely advances our knowledge?; (2) was it well conducted to address the question being asked – i.e., does it meet the necessary methodological standards?; and (3) what do the estimates of the effects in the model tell us about the question being asked? If we strive to answer bigger questions in larger samples then p-values become completely irrelevant (I actually think they're almost irrelevant anyway, but …). Pre-registration of studies helps a lot because it forces journals to address the first two questions when deciding whether to publish, but it also helps with question 3: by making the significance of the estimates irrelevant to the decision to publish, it frees the authors to focus on estimation rather than p-values. There are differing views, of course, on how to estimate (classical vs. Bayesian, confidence intervals vs. credibility intervals, etc.) but at heart, I think a shift from p-values to estimation can only be a good thing.
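To make the contrast concrete, here is a minimal illustrative sketch in Python, using simulated data (the example is ours, not from the interview): the same two-group comparison reported first as a bare significance test, then as an estimate with an interval.

```python
import numpy as np
from scipy import stats

# Simulated data for two groups (a hypothetical effect of 0.3 SD)
rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=200)
group_b = rng.normal(loc=0.3, scale=1.0, size=200)

# 'All or nothing' thinking: a single significance test
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Estimation: the size of the effect, with an approximate 95% confidence interval
diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p_value:.4f}")
print(f"difference = {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The p-value alone answers only "significant or not"; the estimate and its interval tell the reader how big the effect plausibly is, which is what question (3) above asks for. A Bayesian analyst would report a credible interval instead, but the reporting logic is the same.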

At JEPS we are offering students experience in scientific publishing at an early stage of their career. What could be done at universities to make students acquainted with the scientific community already during their bachelor- or master studies?

I think that psychology, as a discipline, embeds training in academic publishing within degree and PhD programs through research dissertations and the like (although note my earlier comments about the proliferation of research papers!). Nowadays, though, scientists are expected to engage with many different audiences through blogs, the media and so on; we could probably do more to prepare students for that by incorporating assignments based on public engagement into degrees. (In fact, at Sussex – and I'm sure elsewhere – we do have these sorts of assignments.)

Statistics is the predominant modeling language in almost any science, and sufficient knowledge of it is therefore a prerequisite for doing any empirical work. Despite this, why do you think many psychology students are reluctant to learn statistics? What could be done in education to change this attitude? How do you keep it entertaining while still getting stuff done?

This really goes back to my earlier question of whether we should expect researchers to be data analysis experts. Perhaps we shouldn’t, although if we went down the route of outsourcing data analysis then a basic understanding of processing data and the types of models that can be fit would help statisticians to communicate what they have done and why.

There are lots of barriers to learning statistics. Of course anxiety is a big one, but it's also just a very different thing to psychology. It's a bit like putting a geography module in an English literature degree and then asking 'why aren't the students interested in geography?'. The answer is simple: it's not English literature; it's not what they want to study. It's the same deal. People doing a psychology degree are interested in psychology; if they were interested in data they'd have chosen a maths or stats degree. The challenge is trying to help students realize that statistical knowledge gives you the power to answer interesting questions. It's a tool, not just in research, but in making sense of an increasingly data-driven world. Numeracy, and statistics in particular, has never been more important than it is now because of the ease with which data can be collected and, therefore, the proliferation of contexts in which data are used to communicate a message to the public.

In terms of breaking down those barriers I feel strongly that teaching should be about making your own mark. What I do is not 'correct' (and some students hate my teaching); it's just what works for me and my personality. In my previous books I've tried to use memorable examples and humour, and I tend to have a naturally chatty writing style. In the new book I have embedded all of the academic content into a fictional story. I'm hoping that the story will be good enough to hook people in and that they'll learn statistics almost as a by-product of reading it. Essentially they share a journey with the main character, who keeps having to learn about statistics. I'm hoping that if the reader invests emotionally in that character then it will help them to stay invested in his journey and invested in learning. The whole enterprise is a massive gamble and I have no idea whether it will work, but as I said before, I write from my heart and hope for the best!

Incidentally if you want to know more about the book and the process of creating it, see http://discoveringstatistics.blogspot.co.uk/2016/04/if-youre-not-doing-something-different.html

What was your inspiration for the examples in the book? How did you come up with Satan’s little SPSS helper and other characters? How did you become the gatekeeper of the statistics hell?


The statistics hell thing comes from the fact that I listen to a lot of heavy metal music and many bands have satanic imagery. Of course, in most cases it’s just shock tactics rather than reflecting a real philosophical position, but I guess I have become a bit habituated to it. Anyway, when I designed my website (which desperately needs an overhaul incidentally) I just thought it would be amusing to poke fun at the common notion that ‘statistics is hell’. It’s supposed to be tongue-in-cheek.

As for the characters in the SPSS/R/SAS book, they come from random places really. Mostly the reasons are silly and not very interesting. A few examples: the cat is simply there to look like my own cat (who is 20 now!); the Satan's slave came about because I wanted to have something with the acronym SPSS (Satan's Personal Statistics Slave); and Oliver Twisted flags additional content, so I wanted to use the phrase 'Please sir! Can I have some more …' like the character Oliver Twist in the Dickens novel. Once I knew that, it was just a matter of making him unhinged.

The new book, of course, is much more complicated because it is a fictional story with numerous characters with different appearances and personalities. I have basically written a novel and a statistics textbook and merged the two. Therefore, each character is a lot deeper than the faces in the SPSS book – they have personalities, histories, emotions. Consequently, they have very different influences. Then, as well as the characters, the storyline and the fictional world in which the story is set were influenced by all sorts of things. I could write you a thesis on it! In fact, I have a file on my hard drive of 'bits of trivia' about the new book where I kept notes on why I did certain things, where names or personalities came from, who influenced the appearance of characters or objects, and so on. If the book becomes a hit then come back to me and ask what influenced specific things in the book and I can probably tell you! I also think it's nice to have some mystery and not give away too much about why the book turned out the way it did!

If you could answer any research question, what would it be?

I’d like to discover some way to make humans more tolerant of each other and of different points of view, but possibly even more than that I’d like to discover a way that people could remain at a certain age until they felt it was time to die. Mortality is the cloud over everyone’s head, but I think immortality would probably be a curse because I think you get worn down by the changing world around you. I like to think that there’s a point where you feel that you’ve done what you wanted to do and you’re ready to go. I’d invent something that allows you to do that – just stay physically at an age you liked being, and go on until you’ve had enough. There is nothing more tragic than a life ended early, so I’d stop that.

Thank you for taking the time for this interview and sharing your insights with us. We have one last question: On a 7-point Likert scale, how much do you like 7-point Likert scales?

It depends which way around the extremes are labelled …. ;-)

 

For more information on 'An Adventure in Statistics: The Reality Enigma', see the blog post linked above.

Meet the Authors

Do you wish to publish your work but don’t know how to get started? We asked some of our student authors, Janne Hellerup Nielsen, Dimitar Karadzhov, and Noelle Sammon, to share their experience of getting published.

Janne Hellerup Nielsen is a psychology graduate from Copenhagen University. Currently, she works in the field of selection and recruitment within the Danish Defence. She is the first author of the research article “Posttraumatic Stress Disorder among Danish Soldiers 2.5 Years after Military Deployment in Afghanistan: The Role of Personality Traits as Predisposing Risk Factors”. Prior to this publication, she had no experience with publishing or peer review but she decided to submit her research to JEPS because “it is a peer reviewed journal and the staff at JEPS are very helpful, which was a great help during the editing and publishing process.”

Dimitar Karadzhov moved to Glasgow, United Kingdom to study psychology (bachelor of science) at the University of Glasgow. He completed his undergraduate degree in 2014 and he is currently completing a part-time master of science in global mental health at the University of Glasgow. He is the author of “Assessing Resilience in War-Affected Children and Adolescents: A Critical Review”. Prior to this publication, he had no experience with publishing or peer review. Now having gone through the publication process, he recommends that fellow students submit their work because “it is a great research and networking experience.”

Noelle Sammon has an honors degree in business studies. She returned to university in 2010 and completed a higher diploma in psychology at the National University of Ireland, Galway. She is currently completing a master’s degree in applied psychology at the University of Ulster, Northern Ireland. She plans to pursue a career in clinical psychology. She is the first author of the research article “The Impact of Attention on Eyewitness Identification and Change Blindness”. Noelle had some experience with the publication process while previously working as a research assistant. She describes her experience with JEPS as follows: “[It was] very professional and a nice introduction to publishing research. I found the editors that I was in contact with to be really helpful in offering guidance and support. Overall, the publication process took approximately 10 months from start to finish but having had the opportunity to experience this process, I would encourage other students to publish their research.”

How did the research you published come about?

Janne: “During my psychology studies, I had an internship at a research center in the Danish Defence. Here I was a part of a big prospective study regarding deployed soldiers and their psychological well-being after homecoming. I was so lucky to get to use the data from the research project to conduct my own studies regarding personality traits and the development of PTSD. I’ve always been interested in differential psychology—for example, why people manage the same traumatic experiences differently. Therefore, it was a great opportunity to do research within the field of personality traits and the development of PTSD, and even to do so with some greatly experienced supervisors, Annie and Søren.”

Dimitar: “In my final year of the bachelor of science degree in psychology, I undertook a critical review module. My assigned supervisor was liberal enough to give me complete freedom to choose the topic I would like to write about. I then browsed a few editions of The Psychologist that I had for inspiration and was particularly interested in the area of resilience from a social justice perspective. Resilience is a controversial and fluid concept, and it is key to recovery from traumatic events such as natural disasters, personal trauma, war, terrorism, etc. It originates from biomedical sciences and it was fascinating to explore how such a concept had been adopted and researched by the social and humanitarian sciences. I was intrigued to research the similarities between biological resilience of human and non-human animals and psychological resilience in the face of extremely traumatic experiences such as war. To add an extra layer of complexity, I was fascinated by how the most vulnerable of all, children and adolescents, conceptualize, build, maintain, and experience resilience. From a researcher’s perspective, one of the biggest challenges is to devise and apply methods of inquiry in order to investigate the concept of resilience in the most valid, reliable, and culturally appropriate manner. The quantitative–qualitative dyad was a useful organizing framework for my work and it was interesting to see how it would fit within the resilience discourse.”

Noelle: “The research piece was my thesis project for the higher diploma (HDIP). I have always had an interest in forensic psychology. Moreover, while attending the National University of Ireland, Galway as part of my HDIP, I studied forensic psychology. This got me really interested in eyewitness testimony and the overwhelming amount of research highlighting its problematic reliability.”

What did you enjoy most in your research and what did you find difficult?

Janne: “There is a lot of editing and so forth when you publish your research, but then again it really makes sense because you have to be able to communicate the results of your research out to the public. To me, that is one of the main purposes of research: to be able to share the knowledge that comes out of it.”

Dimitar: “[I enjoyed] my familiarization with conflicting models of resilience (including biological models), with the origins and evolution of the concept, and with the qualitative framework for investigation of coping mechanisms in vulnerable, deprived populations. In the research process, the most difficult part was creating a coherent piece of work that was very informative and also interesting and readable, and relevant to current affairs and sociopolitical processes in low- and middle-income countries. In the publication process, the most difficult bit was ensuring my work adhered to the publication standards of the journal and addressing the feedback provided at each stage of the review process within the time scale requested.”

Noelle: “I enjoyed developing the methodology to test the research hypothesis and then getting the opportunity to test it. [What I found difficult was] ensuring the methodology would manipulate the variables required.”

How did you overcome these difficulties?

Janne: “[By] staying focused on the goal of publishing my research.”

Dimitar: “With persistence, motivation, belief, and a love for science! And, of course, with the fantastic support from the JEPS publication staff.”

Noelle: “I conducted a pilot using a sample of students asking them to identify any problems with materials or methodology that may need to be altered.”

What did you find helpful when you were doing your research and writing your paper?

Janne: “It was very important for me to get competent feedback from experienced supervisors.”

Dimitar: “Particularly helpful was reading systematic reviews, meta-analyses, conceptual papers, and methodological critique.”

Noelle: “I found my supervisor to be very helpful when conducting my research. In relation to the write-up of the paper, I found that having peers and non-psychology friends read and review my paper helped ensure that it was understandable, especially for lay people.”

Finally, here are some words of wisdom from our authors.

Janne: “Don’t think you can’t do it. It requires some hard work, but the effort is worth it when you see your research published in a journal.”

Dimitar: “Choose a topic you are truly passionate about and be prepared to explore the problem from multiple perspectives, and don’t forget about the ethical dimension of every scientific inquiry. Do not be afraid to share your work with others, look for feedback, and be ready to receive feedback constructively.”

Noelle: “When conducting research it is important to pick an area of research that you are interested in and really refine the research question being asked. Also, if you are able to get a colleague or peer to review it for you, do so.”

We hope our authors have inspired you to go ahead and make that first step towards publishing your research. We welcome your submissions anytime! Our publication guidelines can be viewed here. We also prepared a manual for authors that we hope will make your life easier. If you do have questions, feel free to get in touch at journal@efpsa.org.

This post was edited by Altan Orhon.


The Mind-the-Mind Campaign: Battling the Stigma of Mental Disorders

People suffering from mental disorders face great difficulties in their daily lives and deserve all possible support from their social environment. However, their social milieus are often host to stigmatizing behaviors that actually serve to increase the severity of their mental disorders: People diagnosed with a mental disorder are often believed to be dangerous and excluded from social activities. Individuals who receive treatment are seen as being “taken care of” and social support is attenuated. Concerned friends, with all their best intentions, might show apprehensiveness when it comes to approaching someone with a diagnosis, and end up doing nothing (Corrigan & Watson, 2002). These are not exceptional, sporadic situations—according to the World Health Organisation, nine out of ten people with a diagnosis report suffering from stigmatisation (WHO, 2016).


Introducing JASP: A free and intuitive statistics software that might finally replace SPSS

Are you tired of SPSS’s confusing menus and of the ugly tables it generates? Are you annoyed by having statistical software only at university computers? Would you like to use advanced techniques such as Bayesian statistics, but you lack the time to learn a programming language (like R or Python) because you prefer to focus on your research?

While there was no real solution to this problem for a long time, there is now good news for you! A group of researchers at the University of Amsterdam are developing JASP, a free open-source statistics package that includes both standard and more advanced techniques and puts major emphasis on providing an intuitive user interface.

The current version already supports a large array of analyses, including the ones typically used by researchers in the field of psychology (e.g. ANOVA, t-tests, multiple regression).

In addition to being open source, freely available for all platforms, and providing a considerable number of analyses, JASP also comes with several neat, distinctive features, such as real-time computation and display of all results. For example, if you decide that you want not only the mean but also the median in the table, you can tick “Median” and the medians appear immediately in the results table. For comparison, think about how this works in SPSS: first, you must navigate a forest of menus (or edit the syntax); then you execute the new syntax; a new window appears; and you get a new (ugly) table.


In JASP, you get better-looking tables in no time. Click here to see a short demonstration of this feature. But it gets even better—the tables are already in APA format and you can copy and paste them into Word. Sounds too good to be true, doesn’t it? It does, but it works!

Interview with lead developer Jonathon Love

Where is this software project coming from? Who pays for all of this? And what plans are there for the future? There is nobody who could answer these questions better than the lead developer of JASP, Jonathon Love, who was so kind as to answer a few questions about JASP.

How did development on JASP start? How did you get involved in the project?

All through my undergraduate program, we used SPSS, and it struck me just how suboptimal it was. As a software designer, I find poorly designed software somewhat distressing to use, and so SPSS was something of a thorn in my mind for four years. I was always thinking things like, “Oh, what? I have to completely re-run the analysis, because I forgot X?,” “Why can’t I just click on the output to see what options were used?,” “Why do I have to read this awful syntax?,” or “Why have they done this like this? Surely they should do this like that!”

At the same time, I was working for Andrew Heathcote, writing software for analyzing response time data. We were using the R programming language and so I was exposed to this vast trove of statistical packages that R provides. On one hand, as a programmer, I was excited to gain access to all these statistical techniques. On the other hand, as someone who wants to empower as many people as possible, I was disappointed by the difficulty of using R and by the very limited options to provide a good user interface with it.

So I saw that there was a real need for both of these things—software providing an attractive, free, and open statistics package to replace SPSS, and a platform for methodologists to publish their analyses with rich, accessible user interfaces. However, the project was far too ambitious to consider without funding, and so I couldn’t see any way to do it.

Then I met E.J. Wagenmakers, who had just received a European Research Council grant to develop an SPSS-like software package to provide Bayesian methods, and he offered me the position to develop it. I didn’t know a lot about Bayesian methods at the time, but I did see that our goals had a lot of overlap.

So I said, “Of course, we would have to implement classical statistics as well,” and E.J.’s immediate response was, “Nooooooooooo!” But he quickly saw how significant this would be. If we can liberate the underlying platform that scientists use, then scientists (including ourselves) can provide whatever analyses we like.

And so that was how the JASP project was born, and how the three goals came together:

  • to provide a liberated (free and open) alternative to SPSS
  • to provide Bayesian analyses in an accessible way
  • to provide a universal platform for publishing analyses with accessible user interfaces

 

What are the biggest challenges for you as a lead developer of JASP?

Remaining focused. There are hundreds of goals, and hundreds of features that we want to implement, but we must prioritize ruthlessly. When will we implement factor analysis? When will we finish the SEM module? When will data entry, editing, and restructuring arrive? Outlier exclusion? Computing of variables? These are all such excellent, necessary features; it can be really hard to decide what should come next. Sometimes it can feel a bit overwhelming too. There’s so much to do! I have to keep reminding myself how much progress we’re making.

Maintaining a consistent user experience is a big deal too. The JASP team is really large; to give you an idea, in addition to myself there's:

  • Ravi Selker, developing the frequentist analyses
  • Maarten Marsman, developing the Bayesian ANOVAs and Bayesian linear regression
  • Tahira Jamil, developing the classical and Bayesian contingency tables
  • Damian Dropmann, developing the file save, load functionality, and the annotation system
  • Alexander Ly, developing the Bayesian correlation
  • Quentin Gronau, developing the Bayesian plots and the classical linear regression
  • Dora Matzke, developing the help system
  • Patrick Knight, developing the SPSS importer
  • Eric-Jan Wagenmakers, coming up with new Bayesian techniques and visualizations

With such a large team, developing the software and all the analyses in a consistent and coherent way can be really challenging. It's so easy for analyses to end up a mess of features, and for every subsequent analysis we add to look nothing like the last. Of course, providing as elegant and consistent a user experience as possible is one of our highest priorities, so we put a lot of effort into this.

 

How do you imagine JASP five years from now?

JASP will provide the same, silky, sexy user experience that it does now. However, by then it will have full data entering, editing, cleaning, and restructuring facilities. It will provide all the common analyses used through undergraduate and postgraduate psychology programs. It will provide comprehensive help documentation, an abundance of examples, and a number of online courses. There will be textbooks available. It will have a growing community of methodologists publishing the analyses they are developing as additional JASP modules, and applied researchers will have access to the latest cutting-edge analyses in a way that they can understand and master. More students will like statistics than ever before.

 

How can JASP stay up to date with state-of-the-art statistical methods? Even when borrowing implementations written in R and the like, these always have to be implemented by you in JASP. Is there a solution to this problem?

Well, if SPSS has taught us anything, you really don’t need to stay up to date to be a successful statistical product, ha-ha! The plan is to provide tools for methodologists to write add-on modules for JASP—tools for creating user interfaces and tools to connect these user interfaces to their underlying analyses. Once an add-on module is developed, it can appear in a directory, or a sort of “App Store,” and people will be able to rate the software for different things: stability, user-friendliness, attractiveness of output, and so forth. In this way, we hope to incentivize a good user experience as much as possible.

Some people think this will never work—that methodologists will never put in all that effort to create nice, useable software (because it does take substantial effort). But I think that once methodologists grasp the importance of making their work accessible to as wide an audience as possible, it will become a priority for them. For example, consider the following scenario: Alice provides a certain analysis with a nice user interface. Bob develops an analysis that is much better than Alice’s analysis, but everyone uses Alice’s, because hers is so easy and convenient to use. Bob is upset because everyone uses Alice’s instead of his. Bob then realizes that he has to provide a nice, accessible user experience for people to use his analysis.

I hope that we can create an arms race in which methodologists will strive to provide as good a user experience as possible. If you develop a new method and nobody can use it, have you really developed a new method? Of course, this sort of add-on facility isn’t ready yet, but I don’t think it will be too far away.

 

You mention on your website that many more methods will be included, such as structural equation modeling (SEM) and tools for data manipulation. How can you offer a large number of features without cluttering the user interface in the future?

Currently, JASP uses a ribbon arrangement; we have a “File” tab for file operations, and we have a “Common” tab that provides common analyses. As we add more analyses (and as other people begin providing additional modules), these will be provided as additional tabs. The user will be able to toggle on or off which tabs they are interested in. You can see this in the current version of JASP: we have a proof-of-concept SEM module that you can toggle on or off on the options page. JASP thus provides you only with what you actually need, and the user interface can be kept as simple as you like.

 

Students who are considering switching to JASP might want to know whether the future of JASP development is secured or dependent on getting new grants. What can you tell us about this?

JASP is currently funded by a European Research Council (ERC) grant, and we’ve also received some support from the Centre for Open Science. Additionally, the University of Amsterdam has committed to providing us a software developer on an ongoing basis, and we’ve just run our first annual Bayesian Statistics in JASP workshop. The money we charge for these workshops is plowed straight back into JASP’s development.

We’re also developing a number of additional strategies to increase the funding that the JASP project receives. First, we’re planning to provide technical support to universities and businesses that make use of JASP, for a fee. Additionally, we’re thinking of simply asking universities to contribute the cost of a single SPSS license to the JASP project. It would represent an excellent investment; it would allow us to accelerate development, achieve feature parity with SPSS sooner, and allow universities to abandon SPSS and its costs sooner. So I don’t worry about securing JASP’s future; I’m thinking about how we can expand it.

Of course, all of this depends on people actually using JASP, and that will come down to the extent that the scientific community decides to use and get behind the JASP project. Indeed, the easiest way that people can support the JASP project is by simply using and citing it. The more users and the more citations we have, the easier it is for us to obtain funding.

Having said all that, I’m less worried about JASP’s future development than I’m worried about SPSS’s! There’s almost no evidence that any development work is being done on it at all! Perhaps we should pass the hat around for IBM.

 

What is the best way to get started with JASP? Are there tutorials and reproducible examples?

For classical statistics, if you’ve used SPSS, or if you have a book on statistics in SPSS, I don’t think you’ll have any difficulty using JASP. It’s designed to be familiar to users of SPSS, and our experience is that most people have no difficulty moving from SPSS to JASP. We also have a video on our website that demonstrates some basic analyses, and we’re planning to create a whole series of these.

As for the Bayesian statistics, that’s a little more challenging. Most of our effort has been going into getting the software ready, so we don’t have as many resources for learning Bayesian statistics ready as we would like. This is something we’ll be looking at addressing in the next six to twelve months. E.J. has at least one (maybe three) books planned.

That said, there are a number of resources available now, such as:

  • Alexander Etz’s blog
  • E.J.’s website provides a number of papers on Bayesian statistics (his website also serves as a reminder of what the internet looked like in the ’80s)
  • Zoltan Dienes’s book is also a great resource for Bayesian statistics

However, the best way to learn Bayesian statistics is to come to one of our Bayesian Statistics with JASP workshops. We’ve run two so far and they’ve been very well received. Some people have been reluctant to attend—because JASP is so easy to use, they didn’t see the point of coming and learning it. Of course, that’s the whole point! JASP is so easy to use, you don’t need to learn the software, and you can concentrate entirely on learning the Bayesian concepts. So keep an eye on the JASP website for the next workshop. Bayes is only going to get more important in the future. Don’t be left behind!
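For readers curious what the Bayesian model comparison behind tools like JASP looks like in practice, here is a minimal, self-contained sketch of one classic shortcut: approximating a Bayes factor from the difference in BIC between a null and an alternative model (the approximation described by Wagenmakers, 2007). The two groups of measurements below are entirely hypothetical, and real software uses more refined default priors than this rough BIC-based approximation.

```python
import math
import statistics

# Hypothetical data: measurements from two groups.
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
group_b = [5.9, 6.1, 5.4, 6.0, 5.7, 5.8, 6.2, 5.5]

def bic(rss, n, k):
    """BIC for a Gaussian model with residual sum of squares rss
    and k free mean parameters (shared terms cancel in comparisons)."""
    return n * math.log(rss / n) + k * math.log(n)

data = group_a + group_b
n = len(data)

# H0: both groups share one common mean.
grand_mean = statistics.fmean(data)
rss0 = sum((x - grand_mean) ** 2 for x in data)

# H1: each group has its own mean.
mean_a = statistics.fmean(group_a)
mean_b = statistics.fmean(group_b)
rss1 = (sum((x - mean_a) ** 2 for x in group_a)
        + sum((x - mean_b) ** 2 for x in group_b))

# BIC approximation to the Bayes factor: BF10 ≈ exp((BIC0 - BIC1) / 2).
# BF10 > 1 favours a group difference; BF10 < 1 favours the null.
bf10 = math.exp((bic(rss0, n, 1) - bic(rss1, n, 2)) / 2)
print(f"Approximate BF10: {bf10:.1f}")
```

Unlike a p-value, the resulting Bayes factor quantifies evidence in both directions: it can support the alternative, or the null, or indicate that the data are uninformative.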

 


Of Elephants and Effect Sizes – Interview with Geoff Cumming

We all know these crucial moments while analysing our hard-earned data – the moment of truth – is there a star above the small p? Maybe even two? Can you write a nice and simple paper, or do you have to bend over backwards to explain why people, surprisingly, do not behave the way you thought they would? It all depends on those little stars, below or above .05, significant or not, black or white. Continue reading


Interview with Prof. Dermot Barnes-Holmes

 

Prof. Dermot Barnes-Holmes was a Foundation Professor at the Department of Psychology at National University of Ireland, Maynooth. He is known for his research in human language and cognition through the development of the Relational Frame Theory (RFT) with Steven C. Hayes, and its applications in various psychological settings.

What I enjoy most about my job as a researcher … Supervising research students who are passionate about and genuinely interested in their research. Sharing what is often a voyage of intellectual discovery for both the student and me is still, after all these years, by far the most stimulating and enjoyable feature of what I do as an academic. Continue reading


Interview with Prof. Alice Mado Proverbio

Prof. Alice Mado Proverbio has a degree in Experimental Psychology from the University of Rome “La Sapienza” and a PhD in General Psychology from the University of Padua. She did her postdoctoral training at the University of California, Davis, and at the University of Padua. As a research scientist at the University of Trieste, she led the Cognitive Electrophysiology Laboratory from 1996 to 2000. Since 2001, she has been Associate Professor of Psychobiology and Physiological Psychology at the University of Milano-Bicocca, where she founded the Cognitive Electrophysiology Lab in 2003. In 2014, she received the Habilitation as full Professor.

What I enjoy most about my job as a researcher …  Without a doubt what I enjoy most about my job as a researcher is the possibility to create and devise new experiments, to test new exciting ideas, to challenge pre-existing models with new hypotheses that I gather from discussions with people, but especially from a lot of reading and listening to insightful talks. Continue reading


Interview with Prof. Csikszentmihalyi

 

Prof. Mihaly Csikszentmihalyi is the Distinguished Professor of Psychology and Management at Claremont Graduate University and formerly head of the Department of Psychology at the University of Chicago. He is noted for his research on happiness and creativity, on which he has published over 120 scientific articles and book chapters. He is also well known for introducing the concept of flow in his seminal work “Flow: The Psychology of Optimal Experience”.

What I enjoy most about my job as a researcher …  two things: the early analysis of data, when you are looking for patterns — exploring the psychological landscape, so to speak. Then the last part, when you start writing and trying to find the best way to express what you have learned. Continue reading
