Welcome!

Welcome to the home page for the PsyPag-MSCP-Section Simulation Summer School, which ran from the 4th to the 30th of June 2021. The schedule and embedded videos are below.

We were overwhelmed by the amount of interest in both the live sessions and the videos, and would definitely consider running another school (not necessarily in the summer). If you would be interested in delivering a session, please do get in contact at SimSummerSchool[a][t]gmail.com. We would be particularly keen for attendees to deliver sessions, extending what they have learned this year!

Pre-Requisites

A general pre-requisite for the summer school is a basic familiarity with R and RStudio. Fortunately, there are numerous videos on YouTube to get you started - see below for a playlist by Andy Field and a video by Dorothy Bishop. The individual sessions may have their own pre-requisites, so check those too!

The Sessions


Friday 4th June

Simulation for factorial designs with {faux} with Lisa DeBruine

Date and Time

Friday 4th June, Time (UTC+1) 13:00, Duration 2 hours

Summary

In this workshop I will introduce the {faux} R package for simulating factorial designs from existing data or data parameters. I will also cover adding continuous predictors to existing data and simulating multiple datasets for simulation studies.
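
To give a flavour of the kind of simulation covered here, below is a minimal, hedged sketch using {faux}'s sim_design() function. The factor names, cell means, and correlation are illustrative assumptions, not the workshop's own materials.

```r
# Illustrative sketch (assumed design and parameters, not the workshop's code):
# simulate a 2 (between: group) x 2 (within: time) mixed factorial design.
library(faux)
set.seed(1)

dat <- sim_design(
  within  = list(time  = c("pre", "post")),
  between = list(group = c("control", "treat")),
  n  = 50,                                   # participants per between-subjects cell
  mu = list(control = c(pre = 0, post = 0),  # cell means by group
            treat   = c(pre = 0, post = 0.5)),
  sd = 1,                                    # cell standard deviations
  r  = 0.5                                   # correlation between repeated measures
)
head(dat)
```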

Video


Monday 7th June

Intro to simulation-based power analysis with Oli Clark

Date and Time

Monday 7th June, Time (UTC+1) 13:00, Duration 1.5 hours

Summary

This will be a really basic introduction to random number generation, loops, and functions, which will stand you in good stead for the remainder of the workshops. We will look at simulating independent samples and running t-tests on them thousands of times to get an idea of the distribution of p-values when there is a known effect, and when there is not.
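
As a taster of the approach described above, here is a hedged sketch of the core idea; the sample size, effect size, and number of simulations are assumptions for illustration only.

```r
# Minimal sketch (not the workshop's own code): simulate two independent
# samples many times and look at the distribution of t-test p-values.
set.seed(123)

sim_p <- function(n = 30, d = 0.5) {
  g1 <- rnorm(n, mean = 0, sd = 1)   # control group
  g2 <- rnorm(n, mean = d, sd = 1)   # group with a true effect of size d
  t.test(g1, g2)$p.value
}

n_sims   <- 5000
p_effect <- replicate(n_sims, sim_p(d = 0.5))  # known effect
p_null   <- replicate(n_sims, sim_p(d = 0))    # no effect

mean(p_effect < .05)  # estimated power
mean(p_null   < .05)  # close to the alpha level of .05
hist(p_null)          # p-values are roughly uniform when there is no effect
```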

Video


Wednesday 9th June

Simulation-based power analysis for regression with Andrew Hales 

Date and Time

Wednesday 9th June, Time (UTC+1) 16:00, Duration 1.5 hours

Summary

This workshop will provide a basic introduction to power analyses for regression using simulation in R.
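
For a sense of what a simulation-based power analysis for regression can look like, here is a minimal, hedged sketch; the slope, sample size, and number of simulations are illustrative assumptions.

```r
# Illustrative sketch (assumed data-generating model, not the workshop's code):
# estimate power for a single regression slope by repeated simulation.
set.seed(42)

power_sim <- function(n = 100, beta = 0.3, n_sims = 2000) {
  p <- replicate(n_sims, {
    x <- rnorm(n)
    y <- beta * x + rnorm(n)       # data-generating model
    summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"]
  })
  mean(p < .05)                    # proportion of significant slopes = power
}

power_sim(n = 100, beta = 0.3)
```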

Video


Friday 11th June

Simulating response accuracy data at the item-level using a logistic multilevel model with Sarah Chadwick

Date and Time

Friday 11th June, Time (UTC+1) 13:00, Duration 2 hours

Summary

In this session we will simulate participants’ item-level response data, where the observed response is binary (0 = incorrect, 1 = correct). We will look at this in the context of a multilevel design, where the dataset includes multiple observations of the same participants and the same items. To do this, we will use a logistic multilevel model to describe the response-generating process. Building from a simple model of the response-generating process, we will look at including single and multiple predictors which change the log odds of observing a correct response (i.e., effects). Random intercept and effect variances, and the covariance between these, will also be explored. We will also look at how we could estimate the statistical power to detect an effect of interest according to our defined model, for a given experimental design. The freely available programs R and RStudio will be used to code and run the simulations. All code will be available for the simulations in this session.
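
To illustrate the kind of data-generating process described above, here is a hedged sketch using crossed random intercepts for participants and items. All parameter values and names are assumptions, and the session's own model covers features (e.g., random effect variances and covariances beyond intercepts) not shown here.

```r
# Illustrative sketch (assumed parameters, not the session materials):
# item-level binary responses from a logistic multilevel model with
# crossed random intercepts for participants and items.
library(lme4)
set.seed(1)

n_subj  <- 50
n_items <- 40
b0   <- 0.5   # intercept (log odds of a correct response)
b1   <- 0.4   # effect of a condition predictor (log odds)
sd_s <- 0.8   # participant random-intercept SD
sd_i <- 0.5   # item random-intercept SD

dat <- expand.grid(subj = 1:n_subj, item = 1:n_items)
dat$cond <- ifelse(dat$item <= n_items / 2, -0.5, 0.5)    # deviation-coded condition
u_s <- rnorm(n_subj, 0, sd_s)
u_i <- rnorm(n_items, 0, sd_i)
eta <- b0 + b1 * dat$cond + u_s[dat$subj] + u_i[dat$item] # linear predictor (log odds)
dat$acc <- rbinom(nrow(dat), 1, plogis(eta))              # 0 = incorrect, 1 = correct

mod <- glmer(acc ~ cond + (1 | subj) + (1 | item), data = dat, family = binomial)
summary(mod)
```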

Video


Monday 14th June

Simulation-based inference with R for ANOVA design experiments with Vimal Rao

Date and Time

Monday 14th June, Time (UTC+1) 16:00, Duration 1.5 hours

Summary

A common design for psychological experiments is a comparison of measures between groups of participants based on their experimental condition. This design is analogous to the ANOVA statistical model. Using the mathematically defined F-distribution is an opaque way of operationalizing ANOVA designs. Instead, this workshop focuses on simulation-based approaches and will teach attendees how to use R and simulation to compare experimental data to the expectations based on candidate hypotheses. Topics covered will include (1) specifying a detailed design model, (2) identifying random sources of variation, (3) identifying and quantifying potential experimental sources of variation, (4) simulating results under candidate models, and (5) drawing conclusions.
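
As an illustration of the general idea, here is a hedged sketch that builds a simulated reference distribution for the F statistic under a no-difference model and compares an observed data set against it. The group structure and effect size are assumptions for illustration only.

```r
# Illustrative sketch (assumed design, not the workshop's own code):
# simulate the F statistic under a null data-generating model and compare
# an observed F value to that simulated reference distribution.
set.seed(2021)

n_per_group <- 20
groups <- factor(rep(c("A", "B", "C"), each = n_per_group))

sim_F <- function() {
  y <- rnorm(length(groups), mean = 0, sd = 1)   # candidate model: no group differences
  summary(aov(y ~ groups))[[1]]$`F value`[1]
}
null_F <- replicate(5000, sim_F())

# "Observed" data with a true difference in the third group (for illustration).
y_obs <- rnorm(length(groups), mean = rep(c(0, 0, 0.6), each = n_per_group))
F_obs <- summary(aov(y_obs ~ groups))[[1]]$`F value`[1]

mean(null_F >= F_obs)   # simulation-based p-value
```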

Video


Wednesday 16th June

Advancing Quantitative Science with Monte Carlo Simulation with Mark Lai

Date and Time

Wednesday 16th June, Time (UTC+1) 17:00, Duration 1.5 hours

Summary

In this workshop, we will introduce the basic statistical foundation of the Monte Carlo method and discuss how it is used to evaluate properties of estimators (e.g., unbiasedness, efficiency, robustness). An illustrative example will be used to compare the efficiency of the median relative to the mean. I will also discuss the concept of Monte Carlo simulation error, and touch on simulating multivariate normal data for structural equation modeling.
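
The median-versus-mean comparison mentioned above can be sketched in a few lines; the sample size, number of replications, and normal population are assumptions for illustration.

```r
# Minimal sketch (assumed settings, not the workshop's own code): compare the
# sampling variability of the mean and the median by Monte Carlo simulation.
set.seed(7)

n_sims <- 10000
n <- 30
samples <- matrix(rnorm(n_sims * n), nrow = n_sims)   # n_sims samples of size n

means   <- apply(samples, 1, mean)
medians <- apply(samples, 1, median)

var(means)                 # Monte Carlo estimate of the mean's sampling variance
var(medians)               # the median's sampling variance (larger under normality)
var(means) / var(medians)  # relative efficiency of the median to the mean
```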

Video


Friday 18th June

Using simulation in pre-registration with James Bartlett

Date and Time

Friday 18th June, Time (UTC+1) 13:00, Duration 2 hours

Summary

In this workshop, I will cover a more general approach to demonstrate how you can take advantage of simulation to make a more effective pre-registration. I will use an R Markdown template that makes it easy to include a simulation-based power analysis (covered by other workshops) and planned analysis steps using simulated data. The first hour will be dedicated to a walk-through of creating a pre-registration using simulated data. The second hour will be dedicated to working on your own pre-registration, with the opportunity to troubleshoot.
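
To make the idea concrete, here is a hedged sketch of the core pattern: simulate a data set with the planned structure so that the pre-registered analysis code can be written and tested before any data are collected. The variable names, group sizes, and assumed effect are illustrative only, and the workshop's R Markdown template is not reproduced here.

```r
# Illustrative sketch (assumed design and effect, not the workshop's template):
# simulate data with the planned structure, then run the planned analysis on it.
set.seed(100)

n <- 60
sim_dat <- data.frame(
  id    = 1:n,
  group = rep(c("control", "intervention"), each = n / 2),
  score = c(rnorm(n / 2, mean = 50, sd = 10),   # control
            rnorm(n / 2, mean = 55, sd = 10))   # intervention (assumed 5-point effect)
)

# The planned analysis, written exactly as it will be run on the real data:
t.test(score ~ group, data = sim_dat)
```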

Video


Monday 21st June

Using simulation before implementing policies in education with Dan Wright

Date and Time

Monday 21st June, Time (UTC+1) 17:00, Duration

Summary

I will discuss two sets of simulations that evaluated algorithms that were in use to give grades to schools. For both, participants will construct the code for the simulations (with guidance from the instructor) and run them. We will discuss whether governments should use simulation to evaluate algorithms prior to implementation when it is unclear how those algorithms will perform.

Video


Wednesday 23rd June

Simulated Data to Introduce Causal Inference with Karsten Luebke 

Date and Time

Wednesday 23rd June, Time (UTC+1) 13:00, Duration 2 hours

Summary

The conclusions we can draw from data depend on our knowledge about the data-generating process. We will simulate data according to an assumed model and discuss the conclusions based on linear regression. With that, we will gain further insight into Simpson's and Berkson's paradoxes.
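
As a hedged illustration of this approach, the sketch below simulates data from an assumed model in which a confounder produces a Simpson's-paradox-style sign reversal; the coefficients and variable names are illustrative assumptions, not the workshop's example.

```r
# Illustrative sketch (assumed data-generating model, not the workshop's code):
# a confounder z reverses the apparent sign of the effect of x on y.
set.seed(11)

n <- 1000
z <- rbinom(n, 1, 0.5)             # confounder (e.g., group membership)
x <- rnorm(n, mean = 2 * z)        # exposure depends on the confounder
y <- -1 * x + 6 * z + rnorm(n)     # outcome: x has a negative causal effect

coef(lm(y ~ x))       # unadjusted slope for x is positive (sign reversal)
coef(lm(y ~ x + z))   # adjusting for z recovers the negative effect of x
```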

Video


Friday 25th June

Simulation of multivariate, non-normal data with Oscar L. Olvera Astivia

Date and Time

Friday 25th June, Time (UTC+1) 17:00, Duration 2 hours

Summary

This workshop introduces the NORmal To Anything (NORTA) method as a general technique to simulate a variety of non-normal distributions where the researcher controls the correlation structure of the data (i.e., the population correlation matrix) as well as the non-normality of the univariate, marginal distributions (both continuous and discrete). It begins with a quick overview of the multivariate normal distribution and how it can be altered to allow the user to control either the skewness and excess kurtosis (for continuous data) or the number of categories and their frequencies (for discrete data). It aims to connect the NORTA approach to popular techniques such as the third-order polynomial transformation and other algorithms commonly used in simulation studies in psychology, education, and the social sciences.
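
The core NORTA idea can be sketched in a few lines: generate correlated multivariate normal data, transform each margin to a uniform, and then apply the inverse CDF of the target marginal. The marginals and correlation below are illustrative assumptions, not the workshop's own example.

```r
# Hedged sketch of the basic NORTA recipe (not the workshop's own code).
library(MASS)
set.seed(3)

n   <- 10000
rho <- 0.5
Z <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), nrow = 2))

U  <- pnorm(Z)                    # probability integral transform to uniforms
x1 <- qgamma(U[, 1], shape = 2)   # skewed continuous marginal
x2 <- qpois(U[, 2], lambda = 3)   # discrete marginal

# The output correlation is close to, but not exactly, rho; the full NORTA
# method adjusts the input correlation so the target correlation is matched.
cor(x1, x2)
```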

Video


Monday 28th June

Posterior/prior predictive checks in Bayesian modelling with Mark Andrews

Date and Time

Monday 28th June, Time (UTC+1) 13:00, Duration

Summary

Posterior predictive checks are used to evaluate model fit and model assumptions in Bayesian (MCMC-based) analyses. They involve drawing samples from the posterior distribution to generate hypothetical data sets, and then comparing these data sets with the observed data to identify discrepancies between the model’s predictions and the reality of the data. Posterior predictive checks are particularly useful in complex models where the assumptions of the model are not easy to check directly. In contrast to posterior predictive checks, prior predictive checks are very useful for clarifying the assumptions implied by choices of priors. Again, hypothetical data sets are generated, and these make it clear what kinds of data the prior distribution does and does not assume.
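
As a hedged illustration of the prior predictive idea, the sketch below draws parameters from assumed priors for a simple normal model, generates hypothetical data sets, and summarises what those priors imply. The model and prior choices are illustrative assumptions, not the workshop's example.

```r
# Minimal sketch of a prior predictive check (assumed model and priors):
# draw parameters from the priors, simulate data sets, and inspect whether
# the implied data look at all plausible.
set.seed(8)

n_draws <- 1000
n_obs   <- 50

prior_mu    <- rnorm(n_draws, mean = 0, sd = 10)     # prior on the mean
prior_sigma <- abs(rnorm(n_draws, mean = 0, sd = 5)) # half-normal prior on the SD

y_rep <- sapply(seq_len(n_draws), function(i) {
  rnorm(n_obs, prior_mu[i], prior_sigma[i])          # one hypothetical data set per draw
})

hist(apply(y_rep, 2, mean))   # distribution of simulated data means implied by the priors
```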

Video


Wednesday 30th June

So, you want to be a SimDesign(er)? with Mark Adkins

Date and Time

Wednesday 30th June, Time (UTC+1) 14:00, Duration 2 hours

Summary

The purpose of this workshop is to demonstrate how to write safe, effective, and intuitive R code for Monte Carlo simulation experiments containing one or more simulation factors. A few of the attractive Monte Carlo simulation coding strategies we will cover are: 1) How to write code which is intuitive to read, write, and debug; 2) How to take advantage of SimDesign’s built-in features for creating flexible and extensible simulations; 3) Computational efficiency; 4) Reproducibility at the macro and micro level; 5) Safe and reliable code execution.
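
For context, SimDesign organises a simulation around a design grid plus generate, analyse, and summarise functions. Below is a hedged, toy sketch of that workflow, with assumed factor names and a simple t-test example that is not taken from the workshop materials.

```r
# Illustrative sketch of the SimDesign generate-analyse-summarise workflow
# (toy example with assumed factors, not the workshop's materials).
library(SimDesign)

Design <- createDesign(n = c(30, 60),    # sample size per group
                       d = c(0, 0.5))    # true standardised mean difference

Generate <- function(condition, fixed_objects = NULL) {
  with(condition, data.frame(
    group = rep(c("a", "b"), each = n),
    y     = c(rnorm(n, mean = 0), rnorm(n, mean = d))
  ))
}

Analyse <- function(condition, dat, fixed_objects = NULL) {
  c(p = t.test(y ~ group, data = dat)$p.value)
}

Summarise <- function(condition, results, fixed_objects = NULL) {
  c(rejection_rate = mean(results$p < .05))
}

res <- runSimulation(design = Design, replications = 1000,
                     generate = Generate, analyse = Analyse,
                     summarise = Summarise)
res
```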

Video