Episode 24: Trust & Safety in Online Conversations | The Feelings Lab
Published on Jul 19, 2022
Join Hume AI CEO Dr. Alan Cowen, Brand Bastion CEO Jenny Wolfram, and host Matt Forte as they discuss Trust and Safety in Online Conversations. How can we give people the trust in one another, and the safety from threats and harassment, that they need to speak freely online? How can we ensure that people's messages are heard by organizations that can make a difference? Amid the rise of online communities, we consider the essential role that AI will play in empowering authentic voices, from classifying harmful content to delivering care to the vulnerable. Conversational AI promises to enable organizations to engage with communities in a way that is authentic, receptive, and fluid, supporting a democracy of ideas.
We begin with Brand Bastion CEO and founder Jenny Wolfram describing how humans can work collaboratively with AI to analyze thousands of social media comments and help organizations respond in a way that is safe, authentic, and receptive.
Hume AI's CEO Dr. Alan Cowen describes the promises and pitfalls of conversational AI, which is just beginning to understand expressive language like sarcasm, but will soon be capable of incorporating personalization and context to engage in productive dialogues.
Hume AI's CEO Dr. Alan Cowen and Brand Bastion CEO Jenny Wolfram share how AI helps organizations to engage in meaningful customer conversations online that humanize and personalize brands, providing them with critical feedback for making important decisions.
Hume AI's CEO Dr. Alan Cowen shares how online platforms often unintentionally reward controversial and provocative statements but seldom surface more careful and reasoned responses, making it particularly challenging to respond to unfair criticism.