Podcast

Episode 11: Compassion and Robots | The Feelings Lab

Published on Feb 1, 2022

In the Season 2 premiere of The Feelings Lab, join Hume AI CEO Dr. Alan Cowen and Embodied CEO Dr. Paolo Pirjanian, with host Matt Forte, as they discuss "Compassion and Robots."

What will it take to assuage some people's fear of robots? Can robots empathize? Can they deliver therapies, aid in child development, and give us deeper insight into ourselves? We discuss what it will take to make robots compassionate, and how the future of AI may hinge on this central challenge.

Dr. Paolo Pirjanian, CEO of Embodied, starts us off by noting how curious robots can help humans think through our own questions and even reflect on our feelings.

Next, hear Dr. Alan Cowen, CEO of Hume AI, discuss how giving robots the empathic abilities needed to care for human well-being will help us avoid the outcomes that people fear most.

From R2-D2 to Her, Dr. Cowen and Dr. Pirjanian reflect on what sci-fi has gotten right and wrong about the future of robots.

All this and more can be found in our full episode, available on Apple Podcasts and Spotify.

Subscribe, and tell a friend to subscribe!

