Speaker: Mark Thornton

Title: Detecting and visualizing human body pose in naturalistic video

Bio: Mark Thornton is an Assistant Professor in the Department of Psychological and Brain Sciences at Dartmouth College, where he directs the Social Computation Representation And Prediction Laboratory (SCRAP Lab). His research focuses on social prediction: how people anticipate the thoughts, feelings, and actions of others. He studies the principles that the mind and brain use to organize social knowledge and generate accurate, efficient social predictions. His research draws on a broad range of techniques, including functional magnetic resonance imaging (fMRI), computational modeling, machine learning, and behavioral experiments. Thornton received his A.B. from Princeton University and his Ph.D. in Psychology from Harvard University, and completed his postdoctoral training at Princeton University before starting his current position in 2020.

Tutorial Description: Humans have an incredibly rich behavioral repertoire, composed of thousands of distinct actions, gestures, and involuntary movements. By examining which parts of this repertoire individuals tend to favor, and by studying how their brains respond to observing others perform different behaviors, we can gain deep insights into the mind and brain. However, manually annotating people’s body pose in naturalistic video is impractically time-consuming. In recent years, researchers studying non-human animals have made major strides using markerless methods to automatically annotate the poses and movements of model organisms such as zebrafish, fruit flies, and rodents. Detecting human body pose presents its own unique challenges, but machine learning is beginning to deliver practical solutions. In this tutorial, we will learn to use state-of-the-art computer vision tools to measure human body pose in naturalistic videos. We will generate dynamic, interactive visualizations of these pose data that allow us to see people in action and explore their movements from every angle. We will also use face identification and interactive visualizations to track and annotate identities over time. These tools can be used both for extracting important features from naturalistic fMRI stimuli and for rapid, automatic phenotyping of freely behaving participants.
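
To give a concrete flavor of the kind of pipeline the tutorial covers, here is a minimal sketch of markerless pose extraction from video. The specific tools taught in the session are not named above, so this uses OpenCV and Google's MediaPipe Pose purely as illustrative stand-ins; the file name `naturalistic_clip.mp4` is a placeholder.

```python
import cv2
import mediapipe as mp

# One illustrative option for markerless human pose estimation:
# MediaPipe's Pose solution returns 33 body landmarks per frame.
mp_pose = mp.solutions.pose

landmarks_per_frame = []  # one list of (x, y, z, visibility) tuples per frame

cap = cv2.VideoCapture("naturalistic_clip.mp4")  # placeholder path
with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes frames as BGR; MediaPipe expects RGB.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            landmarks_per_frame.append(
                [(lm.x, lm.y, lm.z, lm.visibility)
                 for lm in results.pose_landmarks.landmark]
            )
cap.release()
```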
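
The interactive, explore-from-every-angle visualizations mentioned above could be approached along these lines. Plotly is an assumption here, not necessarily the tool used in the tutorial; the sketch renders one frame's landmarks from the previous example as a rotatable 3D scatter plot.

```python
import plotly.graph_objects as go

# Assumes `landmarks_per_frame` from the sketch above; plot the first frame.
frame = landmarks_per_frame[0]
xs = [p[0] for p in frame]
ys = [p[1] for p in frame]
zs = [p[2] for p in frame]

fig = go.Figure(
    data=[go.Scatter3d(x=xs, y=ys, z=zs, mode="markers",
                       marker=dict(size=4))]
)
# Image coordinates grow downward, so flip the y axis to show the body upright.
fig.update_scenes(yaxis_autorange="reversed")
fig.show()  # opens an interactive figure that can be rotated to any angle
```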
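
Likewise, tracking identities over time could be bootstrapped with a face-embedding library; `face_recognition` is used below only as one illustrative choice, and the image file names are placeholders. The idea is to encode a reference face once, then compare each face detected in a frame against that encoding.

```python
import face_recognition

# Reference image of a known person (placeholder file name).
known_image = face_recognition.load_image_file("person_a.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame from the video, saved as an image for simplicity.
frame_image = face_recognition.load_image_file("frame_0001.jpg")
for encoding in face_recognition.face_encodings(frame_image):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    if match:
        print("Person A appears in this frame")
```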