Category Archives: How-to

Magical 7±2 Tips for Psychologists Participating in a Hackathon

A hackathon is an event, typically lasting for 24-48 hours, in which a group of people with diverse backgrounds come together to solve a problem by building a first working prototype of a solution (usually a web app, program or a utility).

There is something inherently likable, or dare I say, smart, about hackathons. They have a specific goal, your progress and results are measurable, getting a first working prototype is both achievable and realistic, and it will all be over in 24-48 hours. I have come to appreciate hackathons a lot over the last five months, during which I've participated in five and won two of them with my teams. I would like to invite you to participate in one as well by giving you 7±2 tips to make your hackathon experience especially enjoyable.

#1 Just go

There's more to a hackathon than just programming. Every team needs to tackle a wide variety of tasks ranging from totally non-technical to highly technical. Someone has to make nice visuals, look for evidence to back the product, write code to make it work and combine all the work into a meaningful proposal for the jury. The best hackathon teams draw on a diverse set of skills (though they always include at least one developer).

The worst-case scenario is that you've grown your professional network, enjoyed some social meals and gained invaluable experience of developing something from an ideation phase to a first working prototype.

#2 Focus on your unique skillset

Your expertise in your favorite domain in psychology will be an important contribution to the team. In the Accenture Digital Hackathon our BiteIn team worked on making language learning easier. As an experimental psychologist with a special interest in memory research, I could make sure that our product incorporated spaced repetition and self-testing – two of the most scientifically backed ways to enhance memory.

From psychological research we know that brainstorming sessions generate far more ideas when participants brainstorm on their own first, and only then share their ideas with others. We can use our questionnaire-building skills to carry out decent market research, or design experiments (A/B tests) to make confident causal statements about the foundations of our product. The more you develop your technical skills, the more you can be involved with the implementation of these ideas yourself.

#3 Focus on giving your best (not winning)

Winning is not under your control, doing your best is. Winning is a destination, doing your best is a process that optimizes your chances of getting there. Having the attitude of focusing on things under your control allows you to feel good about the progress you are making without making unfair comparisons with others. In every hackathon I’ve been to, there have been teams who silently leave the event thinking their great idea was crap just because they didn’t win a prize.

Doing your best includes working with a goal in mind and with a clear understanding of the judging criteria, which also optimizes your chances of winning a sponsor prize. But judges make their decision based on the competition at that particular event. Also, you end up in a team with people you didn't know before, and your team might choose to pursue an idea in a field you are not familiar with – yet.

BiteIn team choosing a project at Accenture Digital Hackathon. From the right: Taavi Kivisik, Amine Rhord, Jedda Boyle, our mentor Nima Rahbari, Zhi Li and Paulina Siwak. (Photo courtesy of Marija Jankovic.)

#4 Prepare for the pitch, and practise!

One of the biggest mistakes teams can make in a hackathon is to underestimate the importance of an amazing pitch. In most cases, those 2 minutes are the only time the judges ever hear about your product (a technical check is done separately).

Hackathon organizer and pitch coach Brian Collins recommends that teams choose the pitcher early and start practising early. Also, at least the pitcher should get a good night's sleep. Choosing early means that the pitcher knows to start gathering punchlines, finding her own phrasing that carries the meaning seamlessly, and packaging it all in a unique manner. Three hours of pitch preparation has been the absolute minimum in my teams.

#5 Have a working prototype

There is a big difference between teams selling an idea, and teams selling an idea with a working prototype. If your idea relies on translating parts of webpages, then demonstrate that you can do that and forget building the login screen. Get something ready, and then don't break it (or just use Git).

At IBC Hackfest, our Skipaclass team’s lead developer Itay Kinnrot categorically refused to make any changes to the code during the last hour before the deadline. Our prototype was working flawlessly and our pitcher could sell our future development plans on top of that solid basis. We won.

#6 Have fun

Hackathons are engaging, thrilling and intense. Most people even spend the night at the venue. It can quickly induce a state where the only thing you think of is your idea and your prototype. But hackathons bring together an amazing bunch of people. Take time to learn more about your teammates, who they are as fellow human beings. Take time to talk to that mentor working in a company you would love to work for. One day, one of these mentors might offer you a job, just as Stefan Hogendoorn (a mentor at IBC Hackfest) offered me a job at Qlouder.


Participating in hackathons has been lots of fun and great for professional development. Just type 'hackathon' and your current city into a search engine to find hackathon experiences of your own.
PS! If you are still wondering why ‘magical’ and ‘7±2’, then click here.

Taavi Kivisik

Data scientist and developer at Qlouder. While at the University of Tartu and the University of Toronto, I was inspired to learn more about efficient learning and mnemonics. Midway through my studies I discovered my passion for research methodology and the technical side of research: statistics, programming, and machine learning. I volunteer as Lead Archivist for the Nordic Psychology Students' Conference (NPSC). I'm a former President of the Estonian Psychology Students' Association and a former Junior Editor at the Journal of European Psychology Students (JEPS). I sometimes tweet @tkivisik.


Open online education: Research findings and methodological challenges

With a reliable internet connection comes access to the enormous World Wide Web. Because the web is so large, we rely on tools like Google to search and filter all this information. Additional filters can be found in sites like Wikipedia, offering library-style access to curated knowledge, but it too is enormous. In more recent years, open online courses have rapidly become a highly popular way of gaining easy access to curated, high-quality, pre-packaged knowledge. A particularly popular variety is the Massive Open Online Course, or MOOC, found on platforms like Coursera and edX. The promise – global and free access to high quality education – has often been applauded. Some have heralded the age of the MOOC as the death of campus-based teaching. Others are more critical, often citing the high drop-out rates as a sign of failure, or arguing that MOOCs do not or cannot foster 'real' learning (e.g., Zemsky, 2014; Pope, 2015).

For those who are not aware of the MOOC phenomenon, I will first briefly introduce them. In the remainder of this post I will discuss how we can learn about open online courses, what the key challenges are, and how the field can move forward.

What’s all this buzz about?

John Daniel (2012) called MOOCs the official educational buzzword of 2012, and the New York Times called it the Year of the MOOC. However, the movement started before that, somewhere around 2001 when the Massachusetts Institute of Technology (MIT) launched its OpenCourseWare (OCW) initiative to share all its courses online. Individual teachers had been sharing digital content before that (e.g., 'Open Educational Resources' or OER; Lane & McAndrew, 2010), but the scale and quality of OCW was pioneering. Today, MOOCs can be found on various platforms, such as the ones described in Table 1 below.

Table 1. Overview of several major platforms offering MOOCs

Platform   Free content   Paid certifications   For profit
Coursera   Partial        Yes                   Yes
edX        Everything     Yes                   No
Udacity    Everything     Yes                   Yes
Udemy      Partial        Yes                   Yes
P2PU       Yes            No                    No

MOOCs, and open online courses in general, have the goal of making high quality education available to everyone, everywhere. MOOC participants indeed come from all over the world, although participants from Western countries are still overrepresented (Nesterko et al., 2013). Nevertheless, there are numerous inspiring stories from students all over the world for whom taking one or more MOOCs has had dramatic effects on their lives. For example, Battushig Myanganbayar, a 15-year-old boy from Mongolia, took the Circuits and Electronics MOOC, a sophomore-level course from MIT. He was one of the 340 students out of 150,000 who obtained a perfect score, which led to his admission to MIT (New York Times, 2013).

Stories like these make it much clearer that MOOCs are not meant to replace contemporary forms of education, but are an amazing addition to them. Why? Because books, radio, and the computer also did not replace education, but enhanced it. In some cases, such as in the story of Battushig, MOOCs provide a variety and quality of education which would otherwise not be accessible at all, due to a lack of higher education institutes. Open online courses provide a new source of high quality education, which is not just accessible to a few students in a lecture hall but has the potential to reach almost everyone who is interested. Will MOOCs replace higher education institutes? Maybe, or maybe not; I think this question misses the point of MOOCs.

In the remainder of this article I will focus on MOOCs from my perspective as a researcher. From this perspective, open online education is in some ways a new approach to education and should thus be investigated on its own. On the other hand, key learning mechanisms (e.g., information processing, knowledge integration, long-term memory consolidation) of human learners are independent of societal changes such as the use of new technologies (e.g., Merrill, Drake, Lacy, & Pratt, 1996). The science of educational instruction has a firm knowledge base and could be used to further our understanding of these generic learning mechanisms, which are inherent to humans.

What are MOOCs anyway?

The typical MOOC is a series of educational videos, often interconnected by other study materials such as texts, and regularly followed up by quizzes. Usually these MOOCs are divided into approximately 5 to 8 weeks of content. In Figure 1 you see an example of Week 1 from the course 'Improving your statistical inferences' by Daniel Lakens.

Figure 1. Example content of a single week in a MOOC

What do students do in a MOOC? To be honest, most do next to nothing. That is, most students who register for a course do not even access it, or do so only very briefly. However, the thousands of students per course who are active display a wide variety of learning paths and behaviors. See Figure 2 for an example of a single student in a single course. It shows how this particular student engages very regularly with the course, but the duration and intensity of each session differ substantially. Lectures (shown in green) are often watched in long sessions, while visits to the forum are much more frequent but shorter. At the bottom you see a surprising spurt of quiz activity, which might reflect the student's desire to see what type of questions will be asked later in the course.

Figure 2. Activities of a single user in a single course. Source: Jasper Ginn

Of all the activities which are common to most MOOCs, educational videos are the most central to the student learning experience (Guo, Kim, & Rubin, 2014; Liu et al., 2013). The central position of educational videos is reflected by students' behavior and their intentions: most students plan to watch all videos in a MOOC, and also spend the majority of their time watching these videos (Campbell, Gibbs, Najafi, & Severinski, 2014; Seaton, Bergner, Chuang, Mitros, & Pritchard, 2014). The focus on videos does come with various consequences. Video production is typically expensive and labor intensive. In addition, videos are not as easily translated to other languages, which runs counter to the aim of making the content accessible to students all around the world. There are many non-native English speakers in MOOCs, while the courses are almost exclusively presented in English. This raises the question to what extent non-native English speakers can benefit from these courses, compared to native speakers. While open online education may be available to most, the content might not be as accessible to many, for example due to language barriers. It is important to design online education in such a way that it minimizes the detrimental effects of potential language barriers, to increase its accessibility for a wider audience. While subtitles are often provided, it is unclear whether they promote learning (Markham, Peter, & McCarthy, 2001), hamper learning (Kalyuga, Chandler, & Sweller, 1999), or have no impact at all (van der Zee et al., 2017).

How do we learn about (online) learning?

Research on online learning, and MOOCs in particular, is a highly interdisciplinary field where many perspectives are combined. While research on higher education is typically done primarily by educational scientists, MOOCs are also studied in fields such as computer science and machine learning. This has resulted in an interesting divide in the literature, as researchers from some disciplines tend to publish only in journals (e.g., Computers & Education, Distance Education, International Journal of Computer-Supported Collaborative Learning) while other disciplines focus primarily on conference proceedings (e.g., Learning @ Scale, eMOOCs, Learning Analytics and Knowledge).

Learning at scale opens up a new frontier to learn about learning. MOOCs and similar large-scale online learning platforms give an unprecedented view of learners’ behavior, and potentially, learning. In online learning research, the setting in which the data is measured is not just an approximation of, but equals the world under examination, or at least comes very close to it. That is, measures of students’ behavior do not need to rely on self-reports, but can often be directly derived from log data (e.g., automated measurements of all activities inside an online environment). While this type of research has its advantages, it also comes with various risks and challenges, which I will attempt to outline.

Big data, meaningless data

Research on MOOCs is blessed and cursed with a wide variety of data. For example, it is possible to track every user's mouse clicks. We also have detailed information about page views, forum data (posts, likes, reads), clickstream data, and interactions with videos. This is all very interesting, except that nobody really knows what it means if a student has clicked two times instead of three times. Nevertheless, the number of mouse clicks is a strong predictor of 'study success', because students who click more finish the course more often, and do so with higher grades. As can be seen in Figure 3, the correlations between various mouse click metrics and grade range from 0.50 to 0.65. However, it would be absurd to recommend that students click more and to believe that this will increase their grades. Mouse clicks, in isolation, are inherently ambiguous, if not outright meaningless.

Figure 3. Pairwise Spearman rank correlations between various metrics for all clickers (upper triangle, N = 108008) and certificate earners (lower triangle, N = 7157), from DeBoer, Ho, Stump and Breslow (2014)

When there is smoke, but no fire

With mouse clicks, it is obvious that this is a problem, and it will be recognized by many. However, the same problem can secretly underlie many other measured variables where it is not as easily recognized. For example, how can we interpret the finding that some students watch a video longer than other students? Findings like this are readily interpreted as being meaningful, for example as signifying that these students were more 'engaged', while you could just as well argue that they got distracted, were bored, etc. There is a classical reasoning fallacy which often underlies these arguments. Because it is reasonable to state that increased engagement will lead to longer video dwelling times, observing the latter is (incorrectly!) assumed to signify the former. In other words: if A leads to B, observing B does not allow you to conclude A. As there are many plausible explanations of differences in video dwelling times, observing such differences cannot be directly interpreted without additional data. This is an inherent problem with many types of big data: you have an enormous amount of granular data which often cannot be directly interpreted.

For example, Guo et al. (2014) state that shorter videos and certain video production styles are "much more engaging" than their alternatives. While enormous amounts of data were used, it was in essence a correlational study, such that the claims about which video types are better are based on observational data which do not allow causal inference. More students stop watching a longer video than a shorter one, which is interpreted as meaning that the shorter videos are more engaging. While this might certainly be true, it is difficult to make these claims when confounding variables have not been accounted for. As an example, shorter and longer videos do not differ just in length but might also differ in complexity, and the complexity of online educational videos is also strongly correlated with video dwelling time (Van der Sluis, Ginn, & Van der Zee, 2016). More importantly, they showed that the relationship between a video's complexity (insofar as that can be measured) and dwelling time appears to be non-linear, as shown in Figure 4. Non-linear relationships between variables which are typically measured observationally should make us very cautious about making confident claims. For example, in Figure 4 a relative dwelling time of 4 can be found both for an information rate of ~0.2 (below average complexity) and for ~1.7 (above average complexity). In other words, if all you know is the dwelling time, the non-linear relationship does not allow you to draw any conclusions about the complexity.

Figure 4. The non-linear relationship between dwelling time and information rate (as a measure of complexity per second). Adapted from Van der Sluis, Ginn, & Van der Zee (2016).

Ghost relationships

Big data and education are a powerful, but dangerous, combination. No matter the size of your data set, or the variety of your variables, correlational data remains incredibly treacherous to interpret, especially when the data is granular and lacks a 1-to-1 mapping to relevant behavior or cognitive constructs. Given that education is inherently about causality (that is, it aims to change learners' behavior and/or knowledge), research on online learning should employ a wide array of study methodologies so as to properly gather the type of evidence which is required to make claims about causality. It does not require gigabytes of event log data to establish that there is some relationship between students' video watching behavior and quiz results. It does require proper experimental designs to establish causal relationships and the effectiveness of interventions and course design. For example, Kovacs (2016) found that students watch videos with in-video questions more often, and are less likely to prematurely stop watching these videos. While this provides some evidence on the benefits of in-video questions, it was a correlational study comparing videos with and without in-video questions. There might have been more relevant differences between the videos, other than the presence of in-video questions. For example, it is reasonable to assume that teachers do not randomly select which videos will have in-video questions, but will choose to add questions to more difficult videos. Should this be the case, a correlational study comparing different videos with and without in-video questions might be confounded by other factors such as the complexity of the video content, to the extent that the true relationship might be the opposite of what is found in correlational studies. Correlational relationships like these can be 'ghost relationships', which appear real at first sight but have no bearing on reality.

The way forward

The granularity of the data, and the various ways in which it can be interpreted, challenge the validity and generalizability of this type of research. With sufficiently large sample sizes, numbers of variables, and researchers' degrees of freedom, you are guaranteed to find 'potentially' interesting relationships in these datasets. A key development in this area (and science in general) is pre-registering research methodology before a study is performed, in order to decrease 'noise mining' and increase the overall veracity of the literature. For more on the reasoning behind pre-registration, see also the JEPS Bulletin three-part series on the topic, starting with Dablander (2016). The Learning @ Scale conference, which is already at the center of research on online learning, is becoming a key player in this movement, as it explicitly recommends the use of pre-registered protocols for papers submitted to the 2017 conference.

A/B Testing

Experimental designs (often called "A/B tests" in this literature) are increasingly common in research on online learning, but they too are not without dangers, and need to be carefully crafted (Reich, 2015). Data in open online education are not only different due to their scale; they also require reconceptualization. There are new measures, such as the highly granular measurements described above, as well as existing educational variables which require different interpretations (DeBoer, Ho, Stump, & Breslow, 2014). For example, in traditional higher education it would be considered dramatic if over 90% of the students did not finish a course, but this normative interpretation of drop-out rates cannot be uncritically applied to the context of open online education. While registration barriers are substantial in higher education, they are practically nonexistent in MOOCs. In effect, there is no filter which pre-selects the highly motivated students, resulting in many students who just want to take a peek and then stop participating. Secondly, in traditional education dropping out is interpreted as a loss for both the student and the institute. Again, this interpretation does not transfer to the context of MOOCs, as the students who drop out after watching only some videos might have successfully completed their personal learning goals.

Rebooting MOOC research

The next generation of MOOC research needs to adopt a wider range of research designs with greater attention to causal factors promoting student learning (Reich, 2015). To advance our understanding, it becomes essential to complement granular (big) data with other sources of information in an attempt to triangulate its meaning. Triangulation can be done in various ways, from multiple proxy measurements of the same latent construct within a single study, to repeated measurements across separate studies. A good example of triangulation in research on online learning is combining granular log data (such as video dwelling time), student output (such as essays), and subjective measures (such as self-reported behavior) in order to triangulate students' behavior. Secondly, these models themselves require validation through repeated applications across courses and populations. Convergence between these different (types of) measurements strengthens singular interpretations of (granular) data, and is often a necessary exercise. Inherent to triangulation is increasing the variety within and between datasets, such that they become richer in meaning and usable for generalizable statements.

Replications (both direct and conceptual) are fundamental for this effort. I would like to end with this quote from Justin Reich (2015), which reads: “These challenges cannot be addressed solely by individual researchers. Improving MOOC research will require collective action from universities, funding agencies, journal editors, conference organizers, and course developers. At many universities that produce MOOCs, there are more faculty eager to teach courses than there are resources to support course production. Universities should prioritize courses that will be designed from the outset to address fundamental questions about teaching and learning in a field. Journal editors and conference organizers should prioritize publication of work conducted jointly across institutions, examining learning outcomes rather than engagement outcomes, and favoring design research and experimental designs over post hoc analyses. Funding agencies should share these priorities, while supporting initiatives—such as new technologies and policies for data sharing—that have potential to transform open science in education and beyond.”

Further reading

Here are three recommended readings related to this topic:

  1. Reich, J. (2015). Rebooting MOOC research. Science, 347(6217), 34-35.
  2. DeBoer, J., Ho, A. D., Stump, G. S., & Breslow, L. (2014). Changing “course” reconceptualizing educational variables for massive open online courses. Educational Researcher.
  3. Daniel, J. (2012). Making sense of MOOCs: Musings in a maze of myth, paradox and possibility. Journal of Interactive Media in Education, 2012(3), Art. 18.

References

Butler, A. C., & Roediger III, H. L. (2007). Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology, 19(4-5), 514-527.

Campbell, J., Gibbs, A. L., Najafi, H., & Severinski, C. (2014). A comparison of learner intent and behaviour in live and archived MOOCs. The International Review of Research in Open and Distributed Learning, 15(5).

Cepeda, N. J., Coburn, N., Rohrer, D., Wixted, J. T., Mozer, M. C., & Pashler, H. (2009). Optimizing distributed practice: Theoretical analysis and practical implications. Experimental Psychology, 56(4), 236-246.

Daniel, J. (2012). Making sense of MOOCs: Musings in a maze of myth, paradox and possibility. Journal of Interactive Media in Education, 2012(3).

DeBoer, J., Ho, A. D., Stump, G. S., & Breslow, L. (2014). Changing "course": Reconceptualizing educational variables for massive open online courses. Educational Researcher.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Guo, P. J., Kim, J., & Rubin, R. (2014, March). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 41-50). ACM.

Johnson, C. I., & Mayer, R. E. (2009). A testing effect with multimedia learning. Journal of Educational Psychology, 101(3), 621.

Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied cognitive psychology, 13(4), 351-371.

Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966-968.

Konstan, J. A., Walker, J. D., Brooks, D. C., Brown, K., & Ekstrand, M. D. (2015). Teaching recommender systems at large scale: Evaluation and lessons learned from a hybrid MOOC. ACM Transactions on Computer-Human Interaction (TOCHI), 22(2), 10.

Lane, A., & McAndrew, P. (2010). Are open educational resources systematic or systemic change agents for teaching practice? British Journal of Educational Technology, 41(6), 952-962.

Liu, Y., Liu, M., Kang, J., Cao, M., Lim, M., Ko, Y., … & Lin, J. (2013, October). Educational Paradigm Shift in the 21st Century E-Learning. In E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (Vol. 2013, No. 1, pp. 373-379).

Markham, P., Peter, L. A., & McCarthy, T. J. (2001). The effects of native language vs. target language captions on foreign language students’ DVD video comprehension. Foreign language annals, 34(5), 439-445.

Mayer, R. E. (2003). The promise of multimedia learning: Using the same instructional design methods across different media. Learning and Instruction, 13(2), 125-139.

Mayer, R. E., Mathias, A., & Wetzell, K. (2002). Fostering understanding of multimedia messages through pre-training: Evidence for a two-stage theory of mental model construction. Journal of Experimental Psychology: Applied, 8(3), 147.

Merrill, M. D., Drake, L., Lacy, M. J., Pratt, J., & ID2 Research Group. (1996). Reclaiming instructional design. Educational Technology, 36(5), 5-7.

Nesterko, S. O., Dotsenko, S., Han, Q., Seaton, D., Reich, J., Chuang, I., & Ho, A. D. (2013, December). Evaluating the geographic data in MOOCs. In Neural information processing systems.

Ozcelik, E., Arslan-Ari, I., & Cagiltay, K. (2010). Why does signaling enhance multimedia learning? Evidence from eye movements. Computers in Human Behavior, 26(1), 110-117.

Plant, E. A., Ericsson, K. A., Hill, L., & Asberg, K. (2005). Why study time does not predict grade point average across college students: Implications of deliberate practice for academic performance. Contemporary Educational Psychology, 30(1), 96-116.

Pope, J. (2015). What are MOOCs good for? Technology Review, 118(1), 69-71.

Reich, J. (2015). Rebooting MOOC research. Science, 347(6217), 34-35.

Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20-27.

Seaton, D. T., Bergner, Y., Chuang, I., Mitros, P., & Pritchard, D. E. (2014). Who does what in a massive open online course? Communications of the ACM, 57(4), 58-65.

Van der Sluis, F., Ginn, J., & Van der Zee, T. (2016, April). Explaining Student Behavior at Scale: The Influence of Video Complexity on Student Dwelling Time. In Proceedings of the Third (2016) ACM Conference on Learning@ Scale (pp. 51-60). ACM.

Van der Zee, T., Admiraal, W., Paas, F., Saab, N., & Giesbers, B. (2017). Effects of subtitles, complexity, and language proficiency on learning from online education videos. Journal of Media Psychology, in press. Pre-print available at https://osf.io/n6zuf/.

Zemsky, R. (2014). With a MOOC MOOC here and a MOOC MOOC there, here a MOOC, there a MOOC, everywhere a MOOC MOOC. The Journal of General Education, 63(4), 237-243.

Tim van der Zee

Skeptical scientist. I study how people learn from educational videos in open online courses, and how we can help them learn better. PhD student at Leiden University (the Netherlands), but currently a visiting scholar at MIT and UMass Lowell. You can follow me on Twitter: @Research_Tim and read my blog at www.timvanderzee.com


Introduction to Data Analysis using R

R is a statistical programming language whose popularity is quickly overtaking SPSS and other “traditional” point-and-click software packages (Muenchen, 2015). But why would anyone use a programming language, instead of point-and-click applications, for data analysis? An important reason is that data analysis rarely consists of simply running a statistical test. Instead, many small steps, such as cleaning and visualizing data, are usually repeated many times, and computers are much faster at doing repetitive tasks than humans are. Using a point-and-click interface for these “data cleaning” operations is laborious and unnecessarily slow:

“[T]he process of tidying my data took me around 10 minutes per participant as I would do it all manually through Excel. Even for a moderate sample size, this starts to take up a large chunk of time that could be spent doing other things like writing or having a beer” (Bartlett, 2016).

A programmed analysis would seamlessly apply the tidying steps to every participant in the blink of an eye, and would itself constitute an exact script of what operations were applied to the data, making it easier to repeat the steps later.

Learning to use a programming language for data analysis reduces human labor and saves time that could be better spent doing more important (or fun) things. In this post, I introduce the R programming language, and motivate its use in Psychological science. The introduction is aimed toward students and researchers with no programming experience, but is suitable for anyone with an interest in learning the basics of R.

The R project for statistical computing

“R is a free software environment for statistical computing and graphics.” (R Core Team, 2016)

Great, but what does that mean? R is a programming language that is designed and used mainly in the statistics, data science, and scientific communities. R has "become the de-facto standard for writing statistical software among statisticians and has made substantial inroads in the social and behavioural sciences" (Fox, 2010). This means that if we use R, we'll be in good company (and that company will likely be even better and more numerous in the future; see Muenchen, 2015).

To understand what R is, and is not, it may be helpful to begin by contrasting R to its most common alternative, SPSS. Many psychologists are familiar with SPSS, which has a graphical user interface (GUI), allowing the user to look at the two-dimensional data table on screen, and click through various drop-down menus to conduct analyses on the data. In contrast, R is an object oriented programming language. Data is loaded into R as a “variable”, meaning that in order to view it, the user has to print it on the screen. The power of this approach is that the data is an object in a programming environment, and only your imagination limits what functions you can apply to the data. R also has no GUI to navigate with the mouse; instead, users interact with the data by typing commands.

SPSS is expensive to use: universities have to pay real money to make it available to students and researchers. R and its supporting applications, on the other hand, are completely free—meaning that both users and developers have easier access to it. R is an open source software package, which means that many cutting edge statistical methods are implemented in R more quickly than in SPSS. This is apparent, for example, in the recent uprising of Bayesian methods for data analysis (e.g. Buerkner, 2016).

Further, SPSS's facilities for cleaning, organizing, formatting, and transforming data are limited—and not very user friendly, although this is a subjective judgment—so users often resort to a spreadsheet program (Microsoft Excel, say) for data manipulation. R has excellent capacities for all steps in the analysis pipeline, including data manipulation, and therefore the analysis never has to spread across multiple applications. You can imagine how the possibility of mistakes, and the time needed, are reduced when the data files don't need to be juggled between applications. Switching between applications, and repeatedly clicking through drop-down menus, means that for any small change the human using the computer must re-do every step of the analysis. With R, you can simply re-use your analysis script and just import different data to it.

Figure 1. Two workflows for statistical discovery in the empirical sciences. “Analysis” consists of multiple operations, and is spread over multiple applications in Workflow 2, but not in Workflow 1. Therefore, “analysis” is more easily documented and repeated in Workflow 1. This fact alone may work to reduce mistakes in data analysis. The dashed line from R to Word Processor indicates an optional step: You can even write manuscripts with RStudio, going directly from R to Communicating Results.

These considerations lead to contrasting the two different workflows in Figure 1. Workflow 1 uses a programming language, such as R. It is more difficult to learn, but beginners generally get started with real analysis in an hour or so. The payoff for the initial difficulty is great: The workflow is reproducible (users can save scripts and show their friends exactly what they did to create those beautiful violin plots); the workflow is flexible (want to do everything just the way you did it, but instead do the plots for males instead of females? Easy!); and most importantly, repetitive, boring, but important work is delegated to a computer.

The final point requires some reflecting; after all, computer programs all work on computers, so it sounds like a tautology. But what I mean is that repetitive tasks can be wrapped in a simple function (these are usually already available—you don’t have to create your own functions) which then performs the tasks as many times as you would like to. Many tasks in the data cleaning stage, for example, are fairly boring and repetitive (calculating summary statistics, aggregating data, combining spreadsheets or columns across spreadsheets), but less so when one uses a programming language.

Workflow 2, on the other hand, is easy to learn because there are few well-defined and systematic parts to it—everything is improvised on a task-by-task basis and done manually by copy-pasting, pointing-and-clicking and dragging and dropping. “Clean and organize” the data in Excel. “Analyze” in SPSS. In the optimal case where the data is perfectly aligned to the format that SPSS expects, you can get a p-value in less than a minute (excluding SPSS start-up time, which is quickly approaching infinity) by clicking through the drop-down menus. That is truly great, if that is all you want. But that’s rarely all that we want, and data is rarely in SPSS’s required format.

Workflow 2 is not reproducible (that is, it may be very difficult if not impossible to exactly retrace your steps through an analysis), so although you may know roughly that you “did an ANOVA”, you may not remember which cases were included, what data was used, how it was transformed, etc. Workflow 2 is not flexible: You’ve just done a statistical test on data from Experiment 1? Great! Can you now do it for Experiment 2, but log-transform the RTs? Sure, but then you would have to restart from the Excel step, and redo all that pointing and clicking. This leads to Workflow 2 requiring the human to do too much work, and spend time on the analysis that could be better spent “doing other things like writing or having a beer” (Bartlett, 2016).

So, what is R? It is a programming language especially suited for data analysis. It allows you to program (more on this below!) your analyses instead of pointing and clicking through menus. The point here is not that you can’t do analysis with a point-and-click SPSS style software package. You can, and you can do a pretty damn good job with it. The point is that you can work less and be more productive if you’re willing to spend some initial time and effort learning Workflow 1 instead of the common Workflow 2. And that requires getting started with R.

Getting started with R: From 0 to R in 100 seconds

If you haven’t already, go ahead and download R, and start it up on your computer. Like most programming languages, R is best understood through its console—the interface that lets you interact with the language.

Figure 2. The R console.

After opening R, you should see a window similar to this on your computer. The console allows us to type input, have R evaluate it, and return output. Just like a fancy calculator. Here, our first input was assigning (R uses the left arrow, <-, for assignment) all the integers from 0 to 100 to a variable called numbers. Computer code can often be read from right to left; this line would read "take the integers 0 through 100, and assign them to numbers". We then calculated the mean of those numbers by using R's built-in function, mean(). Everything interesting in R is done by using functions: There are functions for drawing figures, transforming data, running statistical tests, and much, much more.
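
Reconstructed from the description above, that first exchange looks like this:

    numbers <- 0:100   # assign the integers 0 through 100 to 'numbers'
    mean(numbers)      # compute their mean with the built-in mean() function
    #> [1] 50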

Here's another example; this time we'll create some heights data for kids and grownups (in centimeters) and conduct a two-sample t-test (every line that begins with a "#>" is R's output):
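
    # The kids' heights are the ones used later in this post; the grownups'
    # heights are made-up values for illustration.
    kids     <- c(100, 98, 89, 111, 101)    # heights in cm
    grownups <- c(182, 175, 190, 168, 179)  # hypothetical heights in cm
    t.test(kids, grownups)                  # two-sample t-test, printed to the console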

That’s it, a t-test in R in a hundred seconds! Note, c() stands for “combine”, so kids is now a numeric vector (collection of numbers) with 5 elements. The t-test results are printed in R’s console, and are straightforward to interpret.

Save your analysis scripts

At its most basic, data analysis in R consists of importing data to R, and then running functions to visualize and model the data. R has powerful functions for covering the entire process going from Raw Data to Communicating Results (or Word Processor) in Figure 1. That is, users don't need to switch between applications at various steps of the analysis workflow. Users simply type in code, let R evaluate it, and receive output. As you can imagine, a full analysis from raw data to a report (or a table of summary statistics, or whatever your goal is) may involve lots of small steps—transforming variables in the data, plotting, calculating summaries, modeling and testing—which are often done iteratively. Recognizing that there may be many steps involved, we realize that we had better save our work so that we can investigate and redo it later, if needed. Therefore, for each analysis, we should create a text file containing all those steps, which can then be run repeatedly with minor tweaks, if required.

To create these text files, or “R scripts”, we need a text editor. All computers have a text editor pre-installed, but programming is often easier if you use an integrated development environment (IDE), which has a text editor and console all in one place (often with additional capacities.) The best IDE for R, by far, is RStudio. Go ahead and download RStudio, and then start it. At this point you can close the other R console on your computer, because RStudio has the console available for you.

Getting started with RStudio

Figure 3. The RStudio IDE to R.

Figure 3 shows the main view of RStudio. There are four rectangular panels, each with a different purpose. The bottom left panel is the R console. We can type input in the console (on the empty line that begins with a ">") and hit return to execute the code and obtain output. But a more efficient approach is to type the code into a script file, using the text editor panel, known as the source panel, in the top left corner. Here, we have a t-test-kids-grownups.R script open, which consists of three lines of code. You can write this script on your own computer by going to File -> New File -> R Script in RStudio, and then typing in the code you see in Figure 3. You can execute each line by hitting Control + Return on Windows computers, or Command + Return on OS X computers. Scripts like this constitute the exact documentation of what you did in your analysis, and as you can imagine, are pretty important.

The two other panels are for viewing things, not so much for interacting with the data. Top right is the Environment panel, showing the variables that you have saved in R. That is, when you assign something into a variable (kids <- c(100, 98, 89, 111, 101)), that variable (kids) is visible in the Environment panel, along with its type (num for numeric), size (1:5, for 5), and contents (100, 98, 89, 111, 101). Finally, bottom right is the Viewer panel, where we can view plots, browse files on the computer, and do various other things.

With this knowledge in mind, let's begin with a couple of easy things. Don't worry, we'll get to actual data soon enough, once we have the absolute basics covered. I'll show some code and evaluate it in R to show its output too. You can, and should, type in the commands yourself to help you understand what they do (type each line in an R script and execute it by pressing Cmd + Enter; save your work every now and then).

Here’s how to create variables in R (try to figure out what’s saved in each variable):
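
    # Illustrative examples; the names and values here are mine.
    numbers  <- 0:100                      # a sequence of integers
    kids     <- c(100, 98, 89, 111, 101)   # numbers combined into a vector with c()
    grownups <- kids + 80                  # arithmetic works on whole vectors
    greeting <- "Hello R!"                 # a character (string) variable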

And here’s how to print those variable’s contents on the screen. (I’ll provide a comment for each line, comments begin with a # and are not evaluated by R. That is, comments are read by humans only.)
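
    kids         # typing a variable's name prints its contents
    #> [1] 100  98  89 111 101
    print(kids)  # the same thing, explicitly
    #> [1] 100  98  89 111 101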

Transforming data is easy: R automatically applies operations to whole vectors (variables containing multiple numbers), if needed. Let's create z-scores of the kids' heights.
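
A sketch of one way to do this; the subtraction and division are applied to every element of the vector at once:

    z_kids <- (kids - mean(kids)) / sd(kids)  # z-score: center, then scale
    z_kids                                    # five z-scores, one per kid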

I hope you followed along. You should now have a bunch of variables in your R Environment. If you typed all those lines into an R script, you can now execute them again, or modify them and re-run the script line by line. You can also execute the whole script at once by clicking "Run" at the top of the screen. Congratulations, you've just written your first computer program!

User contributed packages

One of the best things about R is that it has a large user base, and lots of user-contributed packages which make using R easier. Packages are simply bundles of functions, and will enhance your R experience quite a bit. Whatever you want to do, there's probably an R package for that. Here, we will install and load (make available in the current session) the tidyverse package (Wickham, 2016), which is designed to make tidying data easier:
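
    install.packages("tidyverse")  # download and install (needed only once)
    library(tidyverse)             # load the package for the current session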

It’s important that you use the tidyverse package if you want to follow along with this tutorial. All of the tasks covered here are possible without it, but the functions from tidyverse make the tasks easier, and certainly easier to learn.

Using R with data

Let's import some data to R. We'll use example data from Chapter 4 of the Intensive Longitudinal Methods book (Bolger & Laurenceau, 2013). The data set is freely available on the book's website. If you would like to follow along, please download the data set, and place it in a folder (unpack the .zip file). Then, use RStudio's Viewer panel, and its Files tab, to navigate to the directory on your computer that has the data set, and set it as the working directory by clicking "More", then "Set As Working Directory".

Figure 4. Setting the Working Directory

Setting the working directory properly is extremely important, because it's the only way R knows where to look for files on your computer. If you try to load files that are not in the working directory, you need to use the full path to the file. But if your working directory is properly set, you can just use the filename. The file is called "time.csv", and we load it into a variable called d using the read_csv() function. (csv stands for comma separated values, a common plain text format for storing data.) You'll want to type all these functions into an R script, so create a new R script and make sure you are typing the commands in the Source panel, not the Console panel. If you set your working directory correctly, once you save the R script file, it will be saved in the directory right next to the "time.csv" file:
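
    d <- read_csv("time.csv")  # works because the working directory is set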

d is now a data frame (sometimes called a “tibble”, because why not), whose rows are observations, and columns the variables associated with those observations.

This data contains simulated daily intimacy reports of 50 individuals, who reported their intimacy every evening, for 16 days. Half of these simulated participants were in a treatment group, and the other half in a control group. To print the first few rows of the data frame to screen, simply type its name:
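
    d  # printing a tibble shows the first rows, the dimensions, and each column's type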

The first column, id is a variable that specifies the id number who that observation belongs to. int means that the data in this column are integers. time indicates the day of the observation, and the authors coded the first day at 0 (this will make intercepts in regression models easier to interpret.) time01 is just time but recoded so that 1 is at the end of the study. dbl means that the values are floating point numbers. intimacy is the reported intimacy, and treatment indicates whether the person was in the control (0) or treatment (1) group. The first row of this output also tells us that there are 800 rows in total in this data set, and 5 variables (columns). Each row is also numbered in the output (leftmost “column”), but those values are not in the data.

Data types

It’s important to verify that your variables (columns) are imported into R in the appropriate format. For example, you would not like to import time recorded in days as a character vector, nor would you like to import a character vector (country names, for example) as a numeric variable. Almost always, R (more specifically, read_csv()) automatically uses correct formats, which you can verify by looking at the row between the column names and the values.

There are five basic data types: int for integers, num (or dbl) for floating point numbers (1.12345…), chr for characters (also known as “strings”), factor (sometimes abbreviated as fctr) for categorical variables that have character labels (factors can be ordered if required), and logical (abbreviated as logi) for logical variables: TRUE or FALSE. Here’s a little data frame that illustrates the basic variable types in action:
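
    # A made-up example; only the matlab column is referred to in the text below.
    people <- tibble(
      id           = c(1L, 2L),                          # int: integers
      height       = c(99.8, 178.8),                     # dbl: floating point numbers
      name         = c("kid", "grownup"),                # chr: characters
      group        = factor(c("short", "tall")),         # fctr: categorical with labels
      likes_r      = c(TRUE, TRUE),                      # logi: logical values
      likes_matlab = c(FALSE, NA)                        # logi, with one missing value
    )
    people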

Here we are also introduced to a very special value, NA. NA means that there is no value, and we should always pay special attention to data that has NAs, because they may indicate that some important data is missing. This sample data explicitly tells us that we don't know whether this person likes matlab or not, because the variable is NA. OK, let's get back to the daily intimacy reports data.

Quick overview of data

We can now use the variables in the data frame d and compute summaries just as we did above with the kids’ and adults’ heights. A useful operation might be to ask for a quick summary of each variable (column) in the data set:
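
    summary(d)  # minimum, quartiles, mean, and maximum for each column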

To get a single variable (column) from the data frame, we call it with the $ operator (“gimme”, for asking R to give you variables from within a data frame). To get all the intimacy values, we could just call d$intimacy. But we better not, because that would print out all 800 intimacy values into the console. We can pass those values to functions instead:
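
    mean(d$intimacy)  # mean of all 800 intimacy values
    sd(d$intimacy)    # and their standard deviation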

If you would like to see the first six values of a variable, you can use the head() function:
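
    head(d$intimacy)  # the first six intimacy values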

head() works on data frames as well, and you can use an optional number argument to specify how many first values you’d like to see returned:
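
    head(d, 2)  # the first two rows of the data frame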

A look at R’s functions

Generally, this is how R functions work, you name the function, and specify arguments to the function inside the parentheses. Some of these arguments may be data or other input (d, above), and some of them change what the argument does and how (2, above). To know what arguments you can give to a function, you can just type the function’s name in the console with a question mark prepended to it:
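
    ?head  # opens the help page of head()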

Importantly, calling the help page reveals that functions’ arguments are named. That is, arguments are of the form X = Y, where X is the name of the argument, and Y is the value you would like to set it to. If you look at the help page of head() (?head), you’ll see that it takes two arguments, x which should be an object (like our data frame d, (if you don’t know what “object” means in this context, don’t worry—nobody does)), and n, which is the number of elements you’d like to see returned. You don’t always have to type in the X = Y part for every argument, because R can match the arguments based on their position (whether they are the first, second, etc. argument in the parentheses). We can confirm this by typing out the full form of the previous call head(d, 2), but this time, naming the arguments:
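
    head(x = d, n = 2)  # identical to head(d, 2)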

Now that you know how R’s functions work, you can find out how to do almost anything by typing into a search engine: “How to do almost anything in R”. The internet (and books, of course) is full of helpful tutorials (see Resources section, below) but you will need to know these basics about functions in order to follow those tutorials.

Creating new variables

Creating new variables is also easy. Let’s create a new variable that is the square root of the reported intimacy (because why not), by using the sqrt() function and assigning the values to a new variable (column) within our data frame:
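
    d$sqrt_int <- sqrt(d$intimacy)  # a new column, sqrt_int, with 800 square roots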

Recall that sqrt(d$intimacy) takes the square root of each of the 800 values in the intimacy vector, and returns a vector of 800 square roots. There's no need to do this individually for each value.

We can also create variables using conditional logic, which is useful for creating verbal labels for numeric variables, for example. Let’s create a verbal label for each of the treatment groups:
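
    # A sketch using ifelse(); the original may have used another helper,
    # but the resulting labels are as described below.
    d$Group <- ifelse(d$treatment == 0, "Control", "Treatment")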

We created a new variable, Group in d, that is “Control” if the treatment variable on that row is 0, and “Treatment” otherwise.

Remember our discussion of data types above? d now contains integer, double, and character variables. Make sure you can identify these in the output, above.

Aggregating

Let’s focus on aggregating the data across individuals, and plotting the average time trends of intimacy, for the treatment and control groups.

In R, aggregating is easiest if you think of it as calculating summaries for “groups” in the data (and collapsing the data across other variables). “Groups” doesn’t refer to experimental groups (although it can), but instead any arbitrary groupings of your data based on variables in it, so the groups can be based on multiple things, like time points and individuals, or time points and experimental groups.

Here, our groups are the two treatment groups and the 16 time points, and we would like to obtain the mean for each group at each time point by collapsing across individuals:
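
    # A sketch with group_by() and summarise() from the tidyverse;
    # the name d_groups is used for plotting below.
    d_groups <- d %>%
      group_by(Group, time) %>%              # define the groups
      summarise(intimacy = mean(intimacy))   # collapse across individuals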

The above code summarized our data frame d by calculating the mean intimacy for the groups specified by group_by(). We did this by first creating a data frame that is d, but is grouped on Group and time, and then summarizing those groups by taking the mean intimacy for each of them. This is what we got:
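
    d_groups  # 32 rows: one mean per group per time point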

A mean intimacy value for both groups, at each time point.

Plotting

We can now easily plot these data, for each individual, and each group. Let’s begin by plotting just the treatment and control groups’ mean intimacy ratings:
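
    ggplot(d_groups, aes(x = time, y = intimacy, color = Group)) +
      geom_line()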

Figure 5. Example R plot, created with ggplot(), of two groups’ mean intimacy ratings across time.

For this plot, we used the ggplot() function, which takes as input a data frame (we used d_groups from above), and a set of aesthetic specifications (aes(), we mapped time to the x axis, intimacy to the y axis, and color to the different treatment Groups in the data). We then added a geometric object to display these data (geom_line() for a line.)

To illustrate how to add other geometric objects to display the data, let’s add some points to the graph:
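
    ggplot(d_groups, aes(x = time, y = intimacy, color = Group)) +
      geom_line() +
      geom_point()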

Figure 6. Two groups’ mean intimacy ratings across time, with points.

We can easily do the same plot for every individual (a panel plot, but let’s drop the points for now):
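
    ggplot(d, aes(x = time, y = intimacy, color = Group)) +
      geom_line() +
      facet_wrap(~id)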

Figure 7. Two groups’ mean intimacy ratings across time, plotted separately for each person.

The code is exactly the same, but now we used the non-aggregated raw data d, and added an extra function that wraps each id’s data into their own little subplot (facet_wrap(); remember, if you don’t know what a function does, look at the help page, i.e. ?facet_wrap). ggplot() is an extremely powerful function that allows you to do very complex and informative graphs with systematic, short and neat code. For example, we may add a linear trend (linear regression line) to each person’s panel. This time, let’s only look at the individuals in the experimental group, by using the filter() command (see below):
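
    # A sketch; geom_smooth(method = "lm") is my assumption for how the
    # linear trend lines were drawn.
    ggplot(filter(d, Group == "Treatment"),
           aes(x = time, y = intimacy)) +
      geom_line() +
      geom_smooth(method = "lm") +  # one linear regression line per panel
      facet_wrap(~id)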

Figure 8. Treatment group’s mean intimacy ratings across time, plotted separately for each person, with linear trend lines.

Data manipulation

We already encountered an example of manipulating data, when we aggregated intimacy over some groups (experimental groups and time points). Other common operations are, for example, trimming the data based on some criteria. All operations that drop observations are conceptualized as subsetting, and can be done using the filter() command. Above, we filtered the data such that we plotted the data for the treatment group only. As another example, we can get the first week's data (time is less than 7, that is, days 0-6), for the control group only, by specifying these logical operations in the filter() function:
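
    filter(d, time < 7 & Group == "Control")  # days 0-6, control group only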

Try re-running the above line with small changes to the logical operations. Note that the two logical operations are combined with the AND command (&), you can also use OR (|). Try to imagine what replacing AND with OR would do in the above line of code. Then try and see what it does.

A quick detour to details

At this point it is useful to remember that computers do exactly what you ask them to do, nothing less, nothing more. So, for instance, pay attention to capital letters, symbols, and parentheses. The following three lines are faulty; try to figure out why:
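Reconstructed from the answers below, the three lines might have looked roughly like this:

    filter(d, Group == "control")     # 1.
    filter(d, Group == "Control"))    # 2.
    filter(d, Group = "Control")      # 3.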

1. Why does this data frame have zero rows?

2. Error? What’s the problem?

3. Error? What’s the problem?

(Answers: 1. Group is either “Control” or “Treatment”, not “control” or “treatment”. 2. Extra parenthesis at the end. 3. == is not the same as =, the double == is a logical comparison operator, asking if two things are the same, the single = is an assignment operator.)

Advanced data manipulation

Let’s move on. What if we’d like to detect extreme values? For example, let’s ask if there are people in the data who show extreme overall levels of intimacy (what if somebody feels too much intimacy!). How can we do that? Let’s start thinking like programmers and break the problem into the exact steps required to solve it:

  1. Calculate the mean intimacy for everybody
  2. Plot the mean intimacy values (because always, always visualize your data)
  3. Remove everybody whose mean intimacy is over 2 standard deviations above the overall mean intimacy (over-intimate people?) (note that this is a terrible exclusion criterion here, and done for illustration purposes only)

As before, we’ll group the data by person, and calculate the mean (which we’ll call int).
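A sketch with dplyr (the resulting data frame is called d_grouped below):

    d_grouped <- d %>%
      group_by(id) %>%
      summarize(int = mean(intimacy))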

We now have everybody’s mean intimacy in a neat and tidy data frame. We could, for example, arrange the data such that we see the extreme values:
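For example, arranging in descending order of int:

    arrange(d_grouped, desc(int))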

Nothing makes as much sense as a histogram:
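A sketch with ggplot2:

    ggplot(d_grouped, aes(x = int)) +
      geom_histogram()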

Figure 9. Histogram of everybody’s mean intimacy ratings.

It doesn’t look like anyone’s mean intimacy value is “off the charts”. Finally, let’s apply our artificial exclusion criterion: drop everybody whose mean intimacy is 2 standard deviations above the overall mean:
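One way to flag these people (a sketch):

    # TRUE for everybody whose mean intimacy is > 2 SDs above the mean
    d_grouped <- mutate(d_grouped,
                        exclude = int > mean(int) + 2 * sd(int))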

Then we could proceed to exclude these participants (don’t do this with real data!), by first joining the d_grouped data frame, which has the exclusion information, with the full data frame d
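(a sketch; the join happens on the common id column)

    d <- left_join(d, d_grouped)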

and then removing all rows where exclude is TRUE. We use the filter() command, and take only the rows where exclude is FALSE. So we want our logical operator for filtering rows to be “not-exclude”. “not”, in R language, is !:
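For example (a sketch):

    d2 <- filter(d, !exclude)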

I saved the included people in a new data set called d2, because I don’t actually want to remove those people; I just wanted to illustrate how to do this. In some situations we could also imagine applying the exclusion criterion to individual observations, instead of individual participants. This would be as easy as (think why):
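One possibility (a sketch; here the criterion is applied to single intimacy values rather than person means):

    d2 <- filter(d, intimacy < mean(intimacy) + 2 * sd(intimacy))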

Selecting variables in data

After these artificial examples of removing extreme values (or people) from data, we have a couple of extra variables in our data frame d that we would like to remove, because it’s good to work with clean data. Removing, and more generally selecting, variables (columns) in data frames is most easily done with the select() function. Let’s select() all variables in d except the square-root intimacy (sqrt_int), average intimacy (int) and exclusion (exclude) variables (that is, let’s drop those three columns from the data frame):
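A sketch:

    d <- select(d, -sqrt_int, -int, -exclude)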

Using select(), we can keep variables by naming them, or drop them by using -. If no variables are named for keeping, but some are dropped, all unnamed variables are kept, as in this example.

Regression

Let’s do an example linear regression by focusing on one participant’s data. The first step, then, is to create a subset containing only one person’s data. For instance, we may ask for a subset of d that consists of all rows where id is 30, by typing:
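For example (a sketch):

    d_sub <- filter(d, id == 30)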

Linear regression is available using the lm() function, and R’s own formula syntax:
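For example, regressing intimacy on time (a sketch):

    fit <- lm(intimacy ~ time, data = d_sub)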

Generally, for regression in R, you’d specify the formula as outcome ~ predictors. If you have multiple predictors, you combine them with addition (“+”): outcome ~ IV1 + IV2. Interactions are specified with multiplication (“*”): outcome ~ IV1 * IV2 (which automatically includes the main effects of IV1 and IV2; to get an interaction only, use “:”: outcome ~ IV1:IV2). We also specified that for the regression we’d like to use data in the d_sub data frame, which contains only person 30’s data.

Summary of a fitted model is easily obtained:
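For example:

    summary(fit)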

Visualizing the model fit is also easy. We’ll use the same code as for the figures above, but also add points (geom_point()), and a linear regression line with a 95% “Confidence” Ribbon (geom_smooth(method="lm")).
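A sketch:

    ggplot(d_sub, aes(x = time, y = intimacy)) +
      geom_point() +
      geom_line() +
      geom_smooth(method = "lm")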

Figure 10. Person 30’s intimacy ratings over time (points and black line), with a linear regression model (blue line and gray Confidence Ribbon).

Pretty cool, right? And there you have it. We’ve used R to do a sample of common data cleaning and visualization operations, and fitted a couple of regression models. Of course, we’ve only scratched the surface, and below I provide a short list of resources for learning more about R.

Conclusion

Programming your statistical analyses leads to a flexible, reproducible and time-saving workflow, in comparison to more traditional point-and-click focused applications. R is probably the best programming language around for applied statistics, because it has a large user base and many user-contributed packages that make your life easier. While it may take an hour or so to get acquainted with R, after initial difficulty it is easy to use, and provides a fast and reliable platform for data wrangling, visualization, modeling, and statistical testing.

Finally, learning to code is not about having a superhuman memory for function names; instead, it is about developing a programmer’s mindset: think your problem through and decompose it into small chunks, then ask a computer to do those chunks for you. Do that a couple of times and you will magically have memorized, as a byproduct, the names of a few common functions. You learn to code not by reading and memorizing a tutorial, but by writing it out, examining the output, changing the input and figuring out what changed in the output. Even better, you’ll learn the most once you use code to examine your own data, data that you know and care about. Hopefully, you’ll now be able to begin doing just that.

Resources

The web is full of fantastic R resources, so here’s a sample of some materials I think would be useful to beginning R users.

Introduction to R

  • Data Camp’s Introduction to R is a free online course on R.

  • Code School’s R Course is an interactive web tutorial for R beginners.

  • YaRrr! The Pirate’s Guide to R is a free e-book, with accompanying YouTube lectures and witty writing (“it turns out that pirates were programming in R well before the earliest known advent of computers.”) YaRrr! is also an R package that helps you get started with some pretty cool R stuff (Phillips, 2016). Recommended!

  • The Personality Project’s Guide to R (Revelle, 2016b) is a great collection of introductory (and more advanced) R materials especially for Psychologists. The site’s author also maintains a popular and very useful R package called psych (Revelle, 2016a). Check it out!

  • Google Developers’ YouTube Crash Course to R is a collection of short videos. The first 11 videos are an excellent introduction to working with RStudio and R’s data types, and programming in general.

  • Quick-R is a helpful collection of R materials.

Data wrangling

These websites explain how to “wrangle” data with R.

  • R for Data Science (Wickham & Grolemund, 2016) is the definitive source on using R with real data for efficient data analysis. It starts off easy (and is suitable for beginners) but covers nearly everything in a data-analysis workflow apart from modeling.

  • Introduction to dplyr explains how to use the dplyr package (Wickham & Francois, 2016) to wrangle data.

  • Data Processing Workflow is a good resource on how to use common packages for data manipulation (Wickham, 2016), but the example data may not be especially helpful.

Visualizing data

Statistical modeling and testing

R provides many excellent packages for modeling data; my absolute favorite is the brms package (Buerkner, 2016) for Bayesian regression modeling.

References

Bartlett, J. (2016, November 22). Tidying and analysing response time data using R. Statistics and substance use. Retrieved November 23, 2016, from https://statsandsubstances.wordpress.com/2016/11/22/tidying-and-analysing-response-time-data-using-r/

Bolger, N., & Laurenceau, J.-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. Guilford Press. Retrieved from http://www.intensivelongitudinal.com/

Buerkner, P.-C. (2016). brms: Bayesian regression models using Stan. Retrieved from http://CRAN.R-project.org/package=brms

Fox, J. (2010). Introduction to statistical computing in R. Retrieved November 23, 2016, from http://socserv.socsci.mcmaster.ca/jfox/Courses/R-course/index.html

Muenchen, R. A. (2015). The popularity of data analysis software. R4stats.com. Retrieved November 22, 2016, from http://r4stats.com/articles/popularity/

Phillips, N. (2016). yarrr: A companion to the e-book “YaRrr!: The pirate’s guide to R”. Retrieved from https://CRAN.R-project.org/package=yarrr

R Core Team. (2016). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/

Revelle, W. (2016a). Psych: Procedures for psychological, psychometric, and personality research. Evanston, Illinois: Northwestern University. Retrieved from https://CRAN.R-project.org/package=psych

Revelle, W. (2016b). The Personality Project’s guide to R. Retrieved November 22, 2016, from http://personality-project.org/r/

Wickham, H. (2016). tidyverse: Easily install and load ’tidyverse’ packages. Retrieved from https://CRAN.R-project.org/package=tidyverse

Wickham, H., & Francois, R. (2016). dplyr: A grammar of data manipulation. Retrieved from https://CRAN.R-project.org/package=dplyr

Wickham, H., & Grolemund, G. (2016). R for data science. Retrieved from http://r4ds.had.co.nz/

Matti Vuorre

Matti Vuorre is a PhD Student at Columbia University in New York City. He studies cognitive psychology and neuroscience, and focuses on understanding the mechanisms underlying humans' metacognitive capacities.


Python Programming in Psychology – From Data Collection to Analysis

Why programming?

Programming is a skill that all psychology students should learn. I can think of many reasons why, including automating boring tasks and practicing problem-solving skills through learning to code. In this post I will focus on two more immediate ways it may be relevant for a Psychology student: during data collection and data analysis. For a more elaborate discussion of the topic, read the post on my personal blog: Every Psychologist Should Learn Programming.

Here is what we will do in this post:

  • Basic Python by example (i.e., a t-test for paired samples)
  • Program a Flanker task using the Python library Expyriment
  • Visualise and analyse data

Before going into how to use Python programming in Psychology I will briefly discuss why programming may be good for data collection and analysis.

Data collection

The data collection phase of psychological research has largely been computerised. Thus, many of the methods and tasks used to collect data are created using software. Many of these tools offer graphical user interfaces (GUIs) that may often cover your needs. For instance, E-Prime offers a GUI which enables you to, basically, drag and drop “objects” onto a timeline to create your experiment. However, for many tasks you may need to write some customised code on top of your built experiment. For instance, quasi-randomisation may be hard to implement in the GUI without some coding (i.e., by creating CSV files with trial order and such). At some point in your study of the human mind you will probably need to write code before collecting data.

Data Analysis

Most programming languages can, of course, offer both graphical and statistical analysis of data. For instance, the R statistical programming environment has recently gained more and more popularity in Psychology as well as in other disciplines. In other fields, Python is also gaining popularity when it comes to the analysis and visualisation of data. MATLAB has for many years also been used for quantitative methods in Psychology and cognitive science (e.g., for psychophysical analysis, cognitive modelling, and general statistics). Python also offers extensive support for both Web scraping and the analysis of scraped data.

What language should one learn?

“Okay. Okay. Programming may be useful for psychologists! But there are so many languages! Where should I start?!” One very good start would be to learn Python. Python is a general-purpose, high-level language that was created by Guido van Rossum. Nowadays it is administered by the non-profit Python Software Foundation. Python is open source. Among many things this means that Python is free, even for commercial use. Python is usually used and referred to as a scripting language. Thanks to its flexibility, Python is one of the most popular programming languages (e.g., 4th on the TIOBE Index for June 2016).

Programming in Psychology

One of the most important aspects, however, is that there is a variety of both general-purpose and specialised Python packages (unlike R, which focuses on statistical analysis). Good news for those of us interested in Psychology! This means that there are specialised libraries for creating experiments (e.g., Expyriment, PsychoPy and OpenSesame), fitting psychometric functions (e.g., pypsignifit 3.0), and analysing data (e.g., Pandas and Statsmodels). In fact, there are even packages devoted solely to the analysis of EEG/ERP data (see my resources list for more examples). Python can be run interactively using the Python interpreter (hold on, I am going to show an example later). Note that Python comes in two major versions: 2.7 (legacy) and 3.5. Discussing them is out of the scope of this post, but you can read more here.

Python from data collection to analysis

In this part of the post, you will learn how Python can be used from creating an experiment to visualising and analysing the data collected during that experiment. I have chosen a task that fits one of my research interests: attention and cognitive function. From doing research on distractors in the auditory and tactile modalities and how they impact visual tasks, I am, in general, interested in how some types of information cannot be blocked out. How is it that we are unable to suppress certain responses (i.e., response inhibition)? A well-used task to measure inhibition is the Flanker task (e.g., Colcombe, Kramer, Erickson, & Scalf, 2005; Eriksen & Eriksen, 1974). In the task we are going to create, we will have two types of stimuli: congruent and incongruent. The task is to respond as quickly and accurately as possible to the direction an arrow is pointing. In congruent trials, the target arrow is surrounded by arrows pointing in the same direction (e.g., “<<<<<”), whereas on incongruent trials the surrounding arrows point in another direction (e.g., “<<><<”). Note, the target arrow is the one in the middle (i.e., the third).

For simplicity, we will examine whether the response time (RT) in congruent trials is different from RT in incongruent trials. Since we only will have two means to compare (incongruent vs congruent) we can use the paired sample t-test.

The following part is structured such that you first get information on how to install Python and the libraries used. After this is done, you will get some basic information on how to write a Python script and how to write the t-test function. After that, you will be guided through how to write the Flanker task using Expyriment and, finally, you will learn how to handle, visualise, and analyse the data from the Flanker task.

Installation of needed libraries

Before using Python you may need to install Python and the libraries that are used in the following examples. Python 2.7 can be downloaded here.

If you are running a Windows machine and have installed Python 2.7.11, your next step is to download and install Pygame. The second library needed is SciPy, which is a set of external libraries for scientific computing in Python. Installing SciPy on Windows machines is a bit complicated: first, download NumPy and SciPy, open up the Windows command prompt (here is how) and use Pip to install NumPy and SciPy:

Open the command prompt, change directory to where the files were downloaded, and install the packages using Pip.
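The commands might look roughly like this (the folder and wheel filenames are examples; use the versions you actually downloaded):

    rem change to the download folder; filenames below are placeholders
    cd C:\Users\YourName\Downloads
    pip install numpy-1.11.1+mkl-cp27-none-win32.whl
    pip install scipy-0.18.0-cp27-none-win32.whl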

Expyriment, seaborn, and pandas can be downloaded and installed using Pip:
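For example:

    pip install expyriment seaborn pandas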

Linux users can install the packages using Pip only, and Mac users can see here how to install the SciPy stack. If you think that the installation procedure is cumbersome, I suggest that you install a scientific Python distribution (e.g., Anaconda) that will get you both Python and the libraries needed (except for Expyriment).

How to write Python scripts

Python scripts are typically written in a text editor. Windows computers come with one called Notepad:

The Notepad text editor can be used to write Python scripts (.py).

OS X users can use TextEdit. Which text editor you end up using is not crucial, but you need to save your files with the file ending .py.

Writing a t-test function

Often a Python script uses modules/libraries, and these are imported at the beginning of the document. As previously mentioned, the t-test script is going to use SciPy, but we also need some math functions (i.e., the square root). These modules are going to be imported first in our script, as will become clear later on.

Before we start defining our function, I am briefly going to touch on what a function is and describe one of the datatypes we are going to use. In Python, a function is a named block of organised code that can be reused later. The function we will create is going to be called paired_ttest and takes two arguments, x and y. What this means is that we can send the scores from two different conditions (x and y) to the function. Our function requires the x and y variables to be of the datatype list. A list stores other values (e.g., in our case the RTs in the incongruent and congruent trials). Each value stored in a list gets an index (note, in Python the index starts at 0). For instance, if we have a list containing 5 difference scores, we can get each of them individually by using the index at which they are stored. If we start the Python interpreter, we can type the following code (see here if you are unsure how to start the Python interpreter):
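For example (a sketch; the numbers are arbitrary):

    >>> di = [10, 20, 30, 40, 50]
    >>> di[0]
    10
    >>> di[4]
    50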

List indices

Returning to the function we are going to write, I follow this formula for the paired sample t-test:

t = d̄ / (s_d / √n)

Basically, d̄ (“d-bar”; the d with the line above) is the mean difference between the two scores, s_d is the standard deviation of the differences, and n is the sample size.

Creating our function

Now we go on with defining the function in our Python script (def is what tells Python that the code on the following lines is part of the function). Our function needs to calculate the difference score for each subject. Here we first create a list (i.e., di on line 5). We also need to know the sample size, which we can obtain by getting the length of the list x (using the function len()). Note, here another datatype, int, is used. Int is short for integer and stores whole numbers. Also worth noting here is that di and n are indented. In Python, indentation is used to mark where certain code blocks start and stop.

Next we use a Python loop (line 7 below). A loop is typically used when we want to repeat something n times. To calculate the difference scores we take each subject’s score in the x condition and subtract the score in the y condition from it (line 8). Here we use the list indices (e.g., x[i]). That is, i is an integer starting at 0 and going up to n − 1, so the first repetition of the loop will get the first (i.e., index 0) subject’s scores. The average difference score is now easy to calculate. It is just the sum of all difference scores divided by the sample size (see line 10).

Note, here we use another datatype, float. The float type represents real numbers and is stored with a decimal point. In Python 2.7 we need to do this because dividing integers leads to rounded results.

In the next part of our t-test function we are going to calculate the standard deviation. First, a float datatype is created (std_di) by using a dot after the digit (i.e., 0.). The script continues by looping through each difference score and adding its squared departure from the average difference (i.e., d − d̄) to the std_di variable. In Python, squaring is done by typing “**” (see line 14). Finally, the standard deviation is obtained by taking the square root (using sqrt() from NumPy) of the value obtained in the loop.

The next statistic to be calculated is the standard error of the mean (line 16). Finally, on lines 17 and 18 we calculate the t-value and p-value. On line 20 we add all the information to the dictionary datatype, which can store other objects. The dictionary stores objects linked to keys (e.g., “T-value” in our example below).
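Put together, and numbered to match the line references above, the function might have looked roughly like this (a reconstruction, not necessarily the original script; scipy.stats.t.sf is one way to obtain the p-value):

     1  from numpy import sqrt
     2  from scipy.stats import t
     3
     4  def paired_ttest(x, y):
     5      di = []
     6      n = len(x)
     7      for i in range(n):
     8          di.append(x[i] - y[i])
     9
    10      d_bar = float(sum(di)) / n
    11
    12      std_di = 0.
    13      for d in di:
    14          std_di = std_di + (d - d_bar) ** 2
    15      std_di = sqrt(std_di / (n - 1))
    16      sem = std_di / sqrt(n)
    17      t_value = d_bar / sem
    18      p_value = 2 * t.sf(abs(t_value), n - 1)
    19
    20      return {'T-value': t_value, 'P-value': p_value, 'DF': n - 1}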

The complete script, with an example of how to use it, can be found here.

Flanker task in Expyriment

In this part of the post we are going to create the Flanker task using a Python library called Expyriment (Krause & Lindemann, 2014).

First, we import expyriment.
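In a script, that is simply:

    import expyriment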

We continue with creating variables that contain basic settings of our Flanker task. As can be seen in the code below we are going to have 4 trials per block, 6 blocks, and durations of 2000ms. Our flanker stimuli are stored in a list and we have some task instructions (note “\n” is the newline character and “\” just tells the Python interpreter that the string continues on the next line).
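A sketch of these settings (the exact wording of the instruction text is an assumption):

    n_trials_block = 4
    n_blocks = 6
    durations = 2000  # ms
    flanker_stimuli = ["<<<<<", ">>>>>", "<<><<", ">><>>"]
    # instruction text is a guess; the original wording may differ
    instructions = "Respond to the direction of the MIDDLE arrow.\n" \
                   "Press x for left and m for right.\n\n" \
                   "Press the spacebar to start."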

It may be worth pointing out that most Python libraries and modules have a set of classes. The classes contain a set of methods. So what is a “class” and what is a “method”? Essentially, a class is a template to create an object. An object can be said to be a “storage” of both variables and functions. Returning to our example, we now create an instance of the Experiment class. This object will, for now, contain the task name (“Flanker Task”). The last line of the code block uses a method to initialise our object (i.e., our experiment).
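For example:

    exp = expyriment.design.Experiment(name="Flanker Task")
    expyriment.control.initialize(exp)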

We now carry on with the design of our experiment. First, we start with a for loop. In the loop we go from the first block to the last. Each block is created and temporarily stored in the variable temp_block.
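A sketch (the name temp_block and the block naming scheme are assumptions):

    for block in range(n_blocks):
        # blocks are named by their number here; an assumption
        temp_block = expyriment.design.Block(name="block%d" % block)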

Next we are going to create our trials for each block. First, in the loop we create a stimulus. Here we use the list created previously (i.e., flanker_stimuli). We can obtain one object (e.g., “<<<<<”) from the list by using the trial number (4 stimuli in the list and 4 trials/block) as the index. Remember, in our loop each trial will be a number from 0 up to the number of trials. After a stimulus is created we create a trial and add the stimulus to the trial.
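Continuing inside the block loop (a sketch):

        for trial in range(n_trials_block):
            stimulus_text = flanker_stimuli[trial]
            stimulus = expyriment.stimuli.TextLine(stimulus_text)
            temp_trial = expyriment.design.Trial()
            temp_trial.add_stimulus(stimulus)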

Since the flanker task can have both congruent (e.g., “<<<<<”) and incongruent (e.g., “<<><<”) trials, we want to store this information. The conditional statement (“if”) just checks whether there are as many of the first character in the string (e.g., “<”) as the length of the string. Note, count is a method that counts the occurrences of something in a string or list. If the length and the number of arrows are the same, the trial type is congruent:
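For example (a sketch):

            # congruent if the stimulus contains only one kind of arrow
            if stimulus_text.count(stimulus_text[0]) == len(stimulus_text):
                trialtype = "congruent"
            else:
                trialtype = "incongruent"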

Next we need to create the response mapping. In this tutorial example we are going to use the keys x and m as response keys. In Expyriment, all character keys are represented as numbers. At the end of the code block we add the congruent/incongruent and response mapping information to our trial, which, finally, is added to our block.
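A sketch of the mapping (K_x and K_m are Expyriment’s numeric codes for the keys x and m):

            if stimulus_text[2] == "<":  # the middle (target) arrow points left
                correctresp = expyriment.misc.constants.K_x
            else:
                correctresp = expyriment.misc.constants.K_m
            temp_trial.set_factor("trialtype", trialtype)
            temp_trial.set_factor("correctresp", correctresp)
            temp_block.add_trial(temp_trial)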

At the end of the block loop we use the method shuffle_trials to randomise our trials and the block is, finally, added to our experiment.
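That is (back at the block level of the loop):

        temp_block.shuffle_trials()
        exp.add_block(temp_block)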

Our design is now finalised. Expyriment will also save our data (lucky us, right?!), so we need to specify the column names for the data files. Expyriment also has a class (FixCross) for creating a fixation cross, and we want one!
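A sketch (the column order matches the list given further below):

    exp.data_variable_names = ["block", "correctresp", "response",
                               "trial", "RT", "accuracy", "trialtype"]
    fixation_cross = expyriment.stimuli.FixCross()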

We are now ready to start our experiment and present the task instructions on the screen. The last line makes the task wait for the spacebar to be pressed:
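For example (a sketch):

    expyriment.control.start(exp)
    expyriment.stimuli.TextScreen("Flanker task", instructions).present()
    exp.keyboard.wait(expyriment.misc.constants.K_SPACE)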

The subjects will be prompted with this text:

Expyriment task instructions for the Flanker task

After the spacebar is pressed the task starts. It starts with the trials in the first block, of course. In each trial the stimulus is preloaded, a fixation cross is presented for 2000ms (experiment.clock.wait(durations)), and then the flanker stimuli are presented.
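A sketch of the presentation loop:

    for block in exp.blocks:
        for trial in block.trials:
            trial.stimuli[0].preload()
            fixation_cross.present()
            exp.clock.wait(durations)
            trial.stimuli[0].present()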

Fixation cross is first presented for 2000ms followed by flanker stimuli (2000ms).

The next line to be executed resets a timer so that we can use it later. We then get the response (key) and RT using the keyboard class and its wait method, with the arguments keys (K_x and K_m are our keys, remember) and duration (2000ms). Finally, we use the clock to subtract the elapsed time (from the moment we reset the timer) from durations. This has to be done because the program waits for the subject to press a key (i.e., “x” or “m”), and the next trial would otherwise start as soon as a key is pressed.
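A sketch of these steps, using the Expyriment clock’s stopwatch:

            exp.clock.reset_stopwatch()
            key, rt = exp.keyboard.wait(
                keys=[expyriment.misc.constants.K_x,
                      expyriment.misc.constants.K_m],
                duration=durations)
            # wait out the remainder of the response window
            exp.clock.wait(durations - exp.clock.stopwatch_time)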

Accuracy is determined using the if and else statements. That is, the actual response is compared to the correct response. After the accuracy has been determined, we add the variables in the order we previously created them (i.e., “block”, “correctresp”, “response”, “trial”, “RT”, “accuracy”, “trialtype”).
Finally, when the 4 trials of a block have been run, we implement a short break (i.e., 3000 ms) and present some text notifying the participant.
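A sketch of these two steps (the break text is an assumption):

            if key == trial.get_factor("correctresp"):
                accuracy = 1
            else:
                accuracy = 0
            exp.data.add([block.name, trial.get_factor("correctresp"),
                          key, trial.id, rt, accuracy,
                          trial.get_factor("trialtype")])
        # break text is a guess; the original wording may differ
        expyriment.stimuli.TextScreen("Break", "Take a short break.").present()
        exp.clock.wait(3000)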

The experiment ends by thanking the participants for their contribution:
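For example (a sketch; the thank-you text is an assumption):

    expyriment.stimuli.TextScreen("The end",
                                  "Thank you for participating!").present()
    exp.clock.wait(3000)
    expyriment.control.end()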

A recording of the task can be seen in this video:

That was how to create a Flanker task using Expyriment. For a better overview of the script as a whole see this GitHub gist. Documentation of Expyriment can be found here: Expyriment docs. To run a Python script you can open up the command prompt and change to the directory where the script is (using the command cd):
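For example (the folder and script name are placeholders):

    rem change to wherever you saved the script
    cd C:\Users\YourName\flanker
    python flanker_task.py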

Running command prompt to execute the Flanker Script.

Data processing and analysis

Assume that we have collected data using the Flanker task and now we want to analyse our data. Expyriment saves the data of each subject in files with the file ending “.xpd”. Conveniently, the library also comes packed with methods that enable us to preprocess our data.

We are going to create a comma-separated values file (.csv) that we will later use to visualise and analyse our data. Let’s create a script called “data_processing.py”. First, we import a module called os, which lets us find the current directory (os.getcwd()); by using os.sep we make our script compatible with Windows, Linux, and OS X. The variable datafolder stores the path to the data. In the last line, we use data_preprocessing to write a .csv file (“flanker_data.csv”) from the files starting with the name “flanker” in our data folder. Note, the Python script needs to be run in the same directory as the folder ‘data’. Another option is to change the datafolder variable (e.g., datafolder = ‘path_to_where_the_data_is’).
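A sketch of data_processing.py (the ‘data’ folder layout follows the description above; details may differ):

    import os
    from expyriment.misc import data_preprocessing

    # the data files are assumed to live in a subfolder called 'data'
    datafolder = os.getcwd() + os.sep + "data"
    data_preprocessing.write_concatenated_data(datafolder, "flanker",
                                               output_file="flanker_data.csv")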

Descriptive statistics and visualising

Each subject’s data files are now put together in “flanker_data.csv”, and we can start our analyses. Here we are going to use the libraries Pandas and Seaborn. Pandas is very handy for creating data structures; it makes working with our data much easier. In the code block below, we import Pandas as pd and Seaborn as sns, which makes using them a bit easier. The third line is going to make our plots white and without a grid.
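That is:

    import pandas as pd
    import seaborn as sns
    sns.set_style("white")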

Now we can read our csv-file (‘flanker_data.csv’). When reading in our data we need to skip the first row (“# -- coding: UTF-8 --” is of no use to us!):

Concatenated data file (.csv)

Reading in data from the data file and skipping the first row:
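For example:

    data_frame = pd.read_csv("flanker_data.csv", skiprows=1)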

Pandas makes descriptive statistics quite easy as well. Since we are interested in the two types of trials, we group them. For this example, we are only going to look at the RTs:
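Something along these lines (a sketch; depending on your pandas version, .unstack() may or may not be needed to get the wide table below):

    data_frame.groupby("trialtype")["RT"].describe().unstack()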

                 count        mean        std    min     25%    50%    75%    max
    trialtype
    congruent      360  560.525000  36.765310  451.0  534.75  561.0  584.0  658.0
    incongruent    360  642.088889  55.847114  488.0  606.75  639.5  680.0  820.0

One way to obtain quite a lot of information on our two trial types and RTs is a violin plot:
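For example (a sketch; plt.show() is added so the plot window opens when running as a script):

    import matplotlib.pyplot as plt

    sns.violinplot(x="trialtype", y="RT", data=data_frame)
    plt.show()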

Violin plot of RT in the incongruent and congruent trials.

Testing our hypothesis

Just a brief reminder: we are interested here in whether people can suppress irrelevant information (i.e., the flankers pointing in another direction than the target). We use the paired sample t-test to see if the difference in RT between incongruent and congruent trials is different from zero.

First, we need to aggregate the data, and we start by grouping our data by trial type and subject number. We can then get the mean RT for the two trial types:
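A sketch (assuming the subject column in the .csv is named subject_id):

    agg = data_frame.groupby(["trialtype", "subject_id"])["RT"].mean()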

Next, we are going to take the RT (values, in the script) and assign them to x and y. Remember, the t-test function we started off with takes two lists containing data. The last code in the code block below calls the function, which returns the statistics needed (i.e., t-value, p-value, and degrees of freedom).
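For example (a sketch; paired_ttest is assumed to be defined or imported from the script written earlier):

    x = agg["incongruent"].values.tolist()
    y = agg["congruent"].values.tolist()
    t_value = paired_ttest(x, y)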

Finally, before printing the results we may want to round the values. We use a for loop that goes over each key and value in our dictionary (i.e., t_value) and rounds the numbers.
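A condensed sketch:

    for key, value in t_value.items():
        t_value[key] = round(value, 3)
    print(t_value)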

Printing the variable t_value renders the following output:

We can conclude that there was a significant difference in the RT for incongruent (M = 642.09, SD = 55.85) and congruent (M = 560.53, SD = 36.77) trials; t(29) = 27.358, p < .001.

That was how to use Python from data collection to analysing data. If you want to play around with the scripts, data files for 30 simulated subjects can be downloaded here: data_flanker_expy.zip. All the scripts described above, as well as the script to simulate the subjects (i.e., run the task automatically), can be found in this GitHub Gist. Feel free to use the Flanker task above. If you do, I would suggest that you add a couple of practice trials.

Resources

As previously mentioned, the Python community is large and helpful. There are so many resources to turn to, both for learning Python and for finding help, that it can be hard to know where to start. Therefore, the end of this post contains a few of the Python resources I have found useful or interesting. All resources I list below are free.

Learning Python

Python in Psychology:

Python distributions

If you think that installing Python packages seems complicated and time consuming, there are a number of distributions. These distributions aim to simplify package management. That is, when you install one of them, you will get many of the packages that you would otherwise have to install one by one. There are many distributions (see here), but I have personally used Anaconda and Python(x, y).

Data Collection

  • PsychoPy (Peirce, 2007) – offers both a GUI and an API for programming your experiments. You will find some learning/teaching resources on the homepage.
  • Expyriment – the library used in the tutorial above
  • OpenSesame (Mathôt, Schreij, & Theeuwes, 2012) – offers both Python scripting (mainly inline scripts) and a GUI for building your experiments. You will find examples and tutorials on OpenSesame’s homepage.
  • PyGaze (Dalmaijer, Mathôt, & der Stigchel, 2014) – a toolbox for eye-tracking data and experiments.

Statistics

  • Pandas – Python data analysis (descriptive, mainly) toolkit
  • Statsmodels – Python library enabling many common statistical methods
  • pypsignifit – Python toolbox for fitting psychometric functions (Psychophysics)
  • MNE – For processing and analysis of electroencephalography (EEG) and magnetoencephalography (MEG) data

Getting help

  • Stackoverflow – On Stackoverflow you can ask and answer questions concerning almost every programming language. Questions are tagged with the programming language. Also, some of the developers of PsychoPy are active, and you can tag your questions with PsychoPy.
  • User groups for PsychoPy and Expyriment can be found on Google Groups.
  • OpenSesame Forum – e.g., the subforums for PyGaze and, most importantly, Expyriment.

That was it; I hope you have found my post valuable. If you have any questions, you can leave a comment here or on my homepage, or email me.

References

Colcombe, S. J., Kramer, A. F., Erickson, K. I., & Scalf, P. (2005). The implications of cortical recruitment and brain morphology for individual differences in inhibitory function in aging humans. Psychology and Aging, 20(3), 363–375. http://doi.org/10.1037/0882-7974.20.3.363

Dalmaijer, E. S., Mathôt, S., & der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods, 46(4), 913–921. doi:10.3758/s13428-013-0422-2

Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16(1), 143–149. doi:10.3758/BF03203267

Krause, F., & Lindemann, O. (2014). Expyriment: A Python library for cognitive and neuroscientific experiments. Behavior Research Methods, 46(2), 416-428. http://doi.org/10.3758/s13428-013-0390-6

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314–324. http://doi.org/10.3758/s13428-011-0168-7

Peirce, J. W. (2007). PsychoPy-Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8–13. http://doi.org/10.1016/j.jneumeth.2006.11.017

Erik Marsja

Erik Marsja is a Ph.D. student at the Department of Psychology, Umeå University, Sweden. In his dissertation work, he examines attention and distraction from a cross-modal and multisensory perspective (i.e., using auditory, visual, and tactile stimuli). Erik teaches both qualitative and quantitative research methods, applied cognitive psychology, cognitive psychology, and perception. In the lab group he has been part of since his Bachelor's thesis, he has been responsible for programming his own experiments, as well as those of some other members and collaborators. Programming skills have been, and will be, valuable for his research and his career. Some of the code he has used can be found on his GitHub page.


Meet the Authors

Do you wish to publish your work but don’t know how to get started? We asked some of our student authors, Janne Hellerup Nielsen, Dimitar Karadzhov, and Noelle Sammon, to share their experience of getting published.

Janne Hellerup Nielsen is a psychology graduate from Copenhagen University. Currently, she works in the field of selection and recruitment within the Danish Defence. She is the first author of the research article “Posttraumatic Stress Disorder among Danish Soldiers 2.5 Years after Military Deployment in Afghanistan: The Role of Personality Traits as Predisposing Risk Factors”. Prior to this publication, she had no experience with publishing or peer review but she decided to submit her research to JEPS because “it is a peer reviewed journal and the staff at JEPS are very helpful, which was a great help during the editing and publishing process.”

Dimitar Karadzhov moved to Glasgow, United Kingdom to study psychology (bachelor of science) at the University of Glasgow. He completed his undergraduate degree in 2014 and he is currently completing a part-time master of science in global mental health at the University of Glasgow. He is the author of “Assessing Resilience in War-Affected Children and Adolescents: A Critical Review”. Prior to this publication, he had no experience with publishing or peer review. Now having gone through the publication process, he recommends fellow students to submit their work because “it is a great research and networking experience.”

Noelle Sammon has an honors degree in business studies. She returned to study in university in 2010 and completed a higher diploma in psychology in the National University of Ireland, Galway. She is currently completing a master’s degree in applied psychology at the University of Ulster, Northern Ireland. She plans to pursue a career in clinical psychology. She is the first author of the research article “The Impact of Attention on Eyewitness Identification and Change Blindness”. Noelle had some experience with the publication process while previously working as a research assistant. She describes her experience with JEPS as follows: “[It was] very professional and a nice introduction to publishing research. I found the editors that I was in contact with to be really helpful in offering guidance and support. Overall, the publication process took approximately 10 months from start to finish but having had the opportunity to experience this process, I would encourage other students to publish their research.”

How did the research you published come about?

Janne: “During my psychology studies, I had an internship at a research center in the Danish Defence. Here I was a part of a big prospective study regarding deployed soldiers and their psychological well-being after homecoming. I was so lucky to get to use the data from the research project to conduct my own studies regarding personality traits and the development of PTSD. I’ve always been interested in differential psychology—for example, why people manage the same traumatic experiences differently. Therefore, it was a great opportunity to do research within the field of personality traits and the development of PTSD, and even to do so with some greatly experienced supervisors, Annie and Søren.”

Dimitar: “In my final year of the bachelor of science degree in psychology, I undertook a critical review module. My assigned supervisor was liberal enough and gave me complete freedom to choose the topic I would like to write about. I then browsed a few The Psychologist editions I had for inspiration and was particularly interested in the area of resilience from a social justice perspective. Resilience is a controversial and fluid concept, and it is key to recovery from traumatic events such as natural disasters, personal trauma, war, terrorism, etc. It originates from biomedical sciences and it was fascinating to explore how such a concept had been adopted and researched by the social and humanitarian sciences. I was intrigued to research the similarities between biological resilience of human and non-human animals and psychological resilience in the face of extremely traumatic experiences such as war. To add an extra layer of complexity, I was fascinated by how the most vulnerable of all, children and adolescents, conceptualize, build, maintain, and experience resilience. From a researcher’s perspective, one of the biggest challenges is to devise and apply methods of inquiry in order to investigate the concept of resilience in the most valid, reliable, and culturally appropriate manner. The quantitative–qualitative dyad was a useful organizing framework for my work and it was interesting to see how it would fit within the resilience discourse.”

Noelle: “The research piece was my thesis project for the higher diploma (HDIP). I have always had an interest in forensic psychology. Moreover, while attending the National University of Ireland, Galway as part of my HDIP, I studied forensic psychology. This got me really interested in eyewitness testimony and the overwhelming amount of research highlighting the problematic reliability with it.”

What did you enjoy most in your research and what did you find difficult?

Janne: “There is a lot of editing and so forth when you publish your research, but then again it really makes sense because you have to be able to communicate the results of your research out to the public. To me, that is one of the main purposes of research: to be able to share the knowledge that comes out of it.”

Dimitar: “[I enjoyed] my familiarization with conflicting models of resilience (including biological models), with the origins and evolution of the concept, and with the qualitative framework for investigation of coping mechanisms in vulnerable, deprived populations. In the research process, the most difficult part was creating a coherent piece of work that was very informative and also interesting and readable, and relevant to current affairs and sociopolitical processes in low- and middle-income countries. In the publication process, the most difficult bit was ensuring my work adhered to the publication standards of the journal and addressing the feedback provided at each stage of the review process within the time scale requested.”

Noelle: “I enjoyed developing the methodology to test the research hypothesis and then getting the opportunity to test it. [What I found difficult was] ensuring the methodology would manipulate the variables required.”

How did you overcome these difficulties?

Janne: “[By] staying focused on the goal of publishing my research.”

Dimitar: “With persistence, motivation, belief, and a love for science! And, of course, with the fantastic support from the JEPS publication staff.”

Noelle: “I conducted a pilot using a sample of students asking them to identify any problems with materials or methodology that may need to be altered.”

What did you find helpful when you were doing your research and writing your paper?

Janne: “It was very important for me to get competent feedback from experienced supervisors.”

Dimitar: “Particularly helpful was reading systematic reviews, meta-analyses, conceptual papers, and methodological critique.”

Noelle: “I found my supervisor to be very helpful when conducting my research. In relation to the write-up of the paper, I found that having peers and non-psychology friends read and review my paper helped ensure that it was understandable, especially for lay people.”

Finally, here are some words of wisdom from our authors.

Janne: “Don’t think you can’t do it. It requires some hard work, but the effort is worth it when you see your research published in a journal.”

Dimitar: “Choose a topic you are truly passionate about and be prepared to explore the problem from multiple perspectives, and don’t forget about the ethical dimension of every scientific inquiry. Do not be afraid to share your work with others, look for feedback, and be ready to receive feedback constructively.”

Noelle: “When conducting research it is important to pick an area of research that you are interested in and really refine the research question being asked. Also, if you are able to get a colleague or peer to review it for you, do so.”

We hope our authors have inspired you to go ahead and make that first step towards publishing your research. We welcome your submissions anytime! Our publication guidelines can be viewed here. We also prepared a manual for authors that we hope will make your life easier. If you do have questions, feel free to get in touch at journal@efpsa.org.

This post was edited by Altan Orhon.

Leonor Agan

Leonor is a postgraduate student at the Centre for Clinical Brain Sciences (University of Edinburgh), pursuing an MSc in Neuroimaging for Research. She holds a BSc in Psychology from the Ateneo de Manila University in the Philippines and a BA in Psychology from Maynooth University in Ireland. She has worked as a Research Assistant at the Trinity College Institute of Neuroscience, the Complex and Adaptive Systems Laboratory (University College Dublin), and the Psychology Department (University College Dublin). Her research interests include cognition, memory, and neuroimaging techniques, specifically diffusion MRI and its applications in disease. She is also an Editor of the Journal of European Psychology Students. Find her on Twitter @leonoragan and link in with her.


Bayesian Statistics: Why and How

Bayesian statistics is what all the cool kids are talking about these days. Upon closer inspection, this does not come as a surprise. In contrast to classical statistics, Bayesian inference is principled, coherent, unbiased, and addresses an important question in science: which of my hypotheses should I believe in, and how strongly, given the collected data?

Fabian Dablander

Fabian Dablander is currently doing his master's in cognitive science at the University of Tübingen. You can find him on Twitter @fdabl.


How not to worry about APA style

If you have gone through the trouble of picking up a copy of the Publication Manual of the American Psychological Association (APA, 2010), I’m sure your first reaction was similar to mine: “Ugh! 272 pages of boredom.” Do people actually read this monster? I don’t know. I don’t think so. I know I haven’t read every last bit of it. You may be relieved to hear that your reaction resonates with some of the critique that has been voiced by senior researchers in Psychology, such as Henry L. Roediger III (2004). But let’s face it: APA style is not going anywhere. It is one of the major style regimes in academia and is used in many fields other than Psychology, including medical and other public health journals. And to be fair, standardizing academic documents is not a bad idea. It helps readers to efficiently access the desired information. It helps authors by making the journal’s expectations regarding style explicit, and it helps reviewers to concentrate on the content of a manuscript. Most importantly, the guidelines set a standard that is accepted by a large number of outlets. Imagine a world in which you had to familiarize yourself with a different style every time you chose a new outlet for your scholarly work.

Frederik Aust

Frederik Aust is pursuing a PhD in cognitive psychology at the University of Cologne. He is interested in mathematical models of memory and cognition, open science, and R programming.


Make the Most of Your Summer: Summer Schools in Europe


Why should you attend Summer Schools?

To put it simply: there is no better way to learn about psychology (and related disciplines), to travel, and to meet new people, all at the same time! Summer schools offer the opportunity to explore areas of psychology that might not be taught at your university, or to really explore a subject, since this format allows you to focus your work on one topic in the company of students who are enthusiastic about the same subject. Last year, I attended a summer school on Law, Criminology and Psychology – coming from Germany, where Criminology is in the Law faculty, that was my opportunity to learn more about eye-witness accounts, lie detection, psychopathy, and how to interrogate children. Aside from classic lectures, summer schools often include seminars and group work.

Katharina Brecht

Aside from her role as Editor-in-Chief of the Journal of European Psychology Students, Katharina is currently pursuing her PhD at the University of Cambridge. Her research interests revolve around the evolution and development of social cognition.


Most frequent APA mistakes at a glance

APA guidelines, don’t we all love them? As an example, take one simple black line used to separate words – the hyphen: not only do you have to check whether a term needs a hyphen or whether a blank space will suffice, you also have to think about the different types of dashes (em-dash, en-dash, minus, and hyphen). Yes, it is not that much fun. And at JEPS we often get the question: why do we even have to adhere to these guidelines?

Common APA Errors; Infographic taken from the EndNote Blog http://bit.ly/1uWDqnO

The answer is rather simple: the formatting constraints imposed by journals allow the emphasis to be placed on the manuscript’s content during the review process. The fact that all submitted manuscripts share the same format allows reviewers to concentrate on the content without being distracted by unfamiliar and irregular formatting and reporting styles.

The Publication Manual runs to an impressive 286 pages and causes quite some confusion. At JEPS, we have counted the most frequent mistakes in manuscripts submitted to us – data that the EndNote blog has translated into this nice little graphic.

Here you can find some suggestions on how to avoid these mistakes in the first place.

References

American Psychological Association. (2009). Publication Manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.

Vainre, M. (2011). Common mistakes made in APA style. JEPS Bulletin, retrieved from http://blog.efpsa.org/2011/11/20/common-mistakes-made-in-apa-style/

Katharina Brecht

Aside from her role as Editor-in-Chief of the Journal of European Psychology Students, Katharina is currently pursuing her PhD at the University of Cambridge. Her research interests revolve around the evolution and development of social cognition.


Bayesian Statistics: What is it and Why do we Need it?

There is a revolution in statistics happening: the Bayesian revolution. Psychology students who are interested in research methods (which I hope everyone is!) should know what this revolution is about. Gaining this knowledge now instead of later might spare you lots of misconceptions about statistics as it is usually taught in psychology, and it might help you gain a deeper understanding of the foundations of statistics. To make sure that you can try out everything you learn immediately, I conducted the analyses in the free statistics software R (www.r-project.org; click HERE for a tutorial on how to get started with R, and install RStudio for an enhanced R experience) and I provide the syntax for the analyses directly in the article so you can easily try them out. So let’s jump in: What is “Bayesian statistics”, and why do we need it?

Peter Edelsbrunner

Peter is currently a doctoral student at the section for learning and instruction research of ETH Zurich in Switzerland. He graduated in Psychology from the University of Graz in Austria. Peter is interested in conceptual knowledge development and the application of flexible mixture models to developmental research. Since 2011 he has been active in the EFPSA European Summer School and related activities.
