Episode 2: Embarrassment | The Feelings Lab

Published on Oct 4, 2021

In this week's episode of The Feelings Lab, we're discussing embarrassment! Returning hosts Dr. Alan Cowen, Dr. Dacher Keltner, and Matt Forte will be joined by guest host Dr. Jessica Tracy (Director of the Emotion and Self Lab at the University of British Columbia) and special guest, comedian Ali Kolbert (as seen on The Tonight Show).

Begin by hearing Dr. Dacher Keltner describe the concept of emotional contagion and how the feeling of embarrassment helps strengthen our collective identity.

Next, hear about this in practice as guest Ali Kolbert explains how emotions seem to spread across comedy club crowds—and how stepping over the line can cause backlash.

Later in the episode, Dr. Alan Cowen, Hume's Chief Scientist, comments on how experiences of embarrassment have changed over time and across ages. In particular, social media may make the embarrassment we encounter in everyday life feel worse than it once did.

Finally, psychologist Dr. Jessica Tracy explains that while animals show recognizable displays of submission, your dog's apparent expression of remorse may not be one of them.

All this and more can be found in our full episode, available on Apple Podcasts and Spotify.

Subscribe, and tell a friend to subscribe!
