Do you want a statistics tool that is powerful and easy to learn; that lets you model complex data structures; that combines the t test, analysis of variance, and multiple regression; and that adds even more on top? Here it is! Statistics courses in psychology today often cover structural equation modeling (SEM), a statistical tool that goes beyond classical statistical models by combining them and adding more. Let’s explore what this means, what SEM really is, and SEM’s surprising parallels with hippie culture! Continue reading
Bayesian statistics is what all the cool kids are talking about these days. Upon closer inspection, this does not come as a surprise. In contrast to classical statistics, Bayesian inference is principled, coherent, unbiased, and addresses an important question in science: which of my hypotheses should I believe in, and how strongly, given the collected data? Continue reading
We all know those crucial moments when analysing our hard-earned data. The moment of truth: is there a star above the small p? Maybe even two? Can you write a nice and simple paper, or do you have to bend over backwards to explain why people surprisingly do not behave the way you thought they would? It all depends on those little stars: below or above .05, significant or not, black or white. Continue reading
Every scientific discipline is shaped by what it measures and by the selection of appropriate methods of data collection and statistical analysis. Faulty methodology can produce incorrect results without the researcher being aware of it, and treating such incorrect knowledge as correct in further research has far-reaching negative consequences. One of these errors, present to some degree in every single study, is bias. It is particularly dangerous because it usually goes undetected by the researcher. But if you are aware of its threat, there are ways to avoid it. In research, bias occurs when systematic error is introduced into sampling or testing by selecting or encouraging one outcome or answer over others. It comes in numerous forms. The rest of this post will focus on causes of bias in the field of gender studies.
Many psychology students find themselves in a situation where their research did not yield any significant results. This can be immensely frustrating, since they have put a lot of time and effort into designing the study and into collecting and analyzing the data. In some cases, be it out of desperation or pressure to publish interesting findings, some students will effectively “hunt” for results by running statistical tests on all possible variable combinations. For instance, after finding that a hypothesized correlation between two variables is non-significant, a student might compute a correlation matrix of all continuous variables in her study and hope that at least one pair turns out to be significantly related. Other students might add one, two, or even more covariates to their analysis of variance (turning it into an ANCOVA), hoping that the interaction they initially hypothesized between their key factors will become significant.
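To see why this kind of hunting is problematic, consider a minimal simulation (a sketch of my own, not from the original post, assuming Python with NumPy and SciPy). Ten variables are drawn completely independently, so every true correlation is zero; yet across the 45 pairwise tests, some will typically cross p < .05 by chance alone.

```python
# Hypothetical illustration: "hunting" through a correlation matrix
# of unrelated variables still tends to produce "significant" pairs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants, n_vars = 50, 10

# All variables are independent draws: no true correlations exist.
data = rng.normal(size=(n_participants, n_vars))

n_tests = 0
significant_pairs = 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        n_tests += 1
        if p < 0.05:
            significant_pairs += 1

print(f"{n_tests} pairwise tests, {significant_pairs} 'significant' by chance")
```

With 45 tests at alpha = .05, roughly two spurious "findings" are expected on average, and the chance of at least one is about 90% — exactly the trap the hunting student falls into.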
Having started my PhD in Psychology just recently, I have been a psychology student for a long time now. Doing a Bachelor’s and a Master’s degree has surely given me the chance to observe my own progression as a researcher, as well as that of others. In my experience, a large number of students choose a very specific population of focus when it comes to their major projects. For example, a researcher might be interested in understanding how international university students’ anxiety affects their concentration. Generally, you might think that such a correlational research project would result in interesting findings – but what if it didn’t?
One of the best pieces of advice I have ever received from my lecturer is that the main purpose of major projects is not to publish significant results or to deliver a groundbreaking piece of research (although this is the ideal scenario); it is to prepare us for the future and to make us good researchers when it counts (i.e. in the ‘real world’). While this is very realistic and somewhat reassuring, I firmly believe that there is one route a lot of student researchers can take to ensure that they come out of the research process with rich, useful and satisfying data (because, after all, we all have egos): using mixed methods! Continue reading
There is nothing as dull in a student’s life as badly made PowerPoint presentations. Using PowerPoint has become the rule whenever you present something, in a university setting or otherwise. Everybody does it. And even when you follow all the hints on ‘how to make a good presentation’, like the ones Maris talked about in our last post on the JEPS Bulletin, you end up with just another PowerPoint presentation. How can you change that and spice things up?
Presenting your research results might be the highlight of your undergraduate degree. This is your chance to tell the audience why your findings are relevant. What makes a good presentation? Naturally, one that convinces them that your work has its place in the pool of knowledge. What’s the formula to make people listen (and follow your story)?
If you have ever attended even the most basic statistics class, you have been warned about data manipulation. Even more so: if somebody mentions data manipulation and statistics together, your mind inevitably turns to the media. When reporting on scientific results derived from statistics, journalists often omit the warnings and precautions the authors themselves expressed about any far-reaching conclusions based on their results. But a cautious, scientifically sound conclusion does not cut it as a headline. Despite this being a serious issue, especially for psychologists, what better way to illustrate it than with a joke?