EVI Web Search Demo: The First Interactive Voice AI Podcast
Published on May 15, 2024
Hume’s Empathic Voice Interface (EVI) is now the first voice AI API capable of native web search.
The first conversational voice AI podcast
To showcase the new ultra-fast web search capability of our Empathic Voice Interface (EVI), we're introducing Chatter, the first interactive voice AI podcast. Chatter uses real-time web search to deliver daily news updates; users can interrupt the conversational AI host to switch topics or dig deeper into their favorite stories.
Speak with voice AI
Experience an early window into the future of interactive media here: https://chatter.hume.ai/
Imagine what you can build with empathic voice AI and web search:
- Smart shopping assistants: seamlessly search for product reviews, compare prices, and find the best deals, all through voice commands.
- Dynamic educational tools: create interactive learning experiences that use web search to find educational content tailored to each student's unique needs.
- On-demand travel advisors: develop voice assistants that deliver real-time travel tips, from restaurant reviews to local attractions, offering users up-to-date recommendations with ease.
Chatter is just one exciting example of what's possible with web search; the potential for innovative voice AI applications is limitless. Developers can start building today: platform.hume.ai
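For a rough sense of what getting started could look like, here is a minimal Python sketch of a text-only EVI session over WebSocket. The endpoint URL, query parameter, and message types (`user_input`, `assistant_message`, `assistant_end`) are assumptions for illustration only; the authoritative protocol and the web search tool configuration are documented at platform.hume.ai.

```python
# Minimal sketch of a text-only EVI session. The endpoint and message
# shapes below are assumptions; consult the docs at platform.hume.ai.
import asyncio
import json
import os

import websockets  # pip install websockets

# Assumed WebSocket endpoint and API-key query parameter.
EVI_URL = "wss://api.hume.ai/v0/evi/chat?api_key=" + os.environ["HUME_API_KEY"]


async def ask(question: str) -> None:
    async with websockets.connect(EVI_URL) as ws:
        # Assumed message type for sending text input to the assistant.
        await ws.send(json.dumps({"type": "user_input", "text": question}))

        # Print assistant replies as they stream back; with web search
        # enabled in the EVI config, answers can draw on fresh results.
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "assistant_message":
                print(event["message"]["content"])
            elif event.get("type") == "assistant_end":
                break


if __name__ == "__main__":
    asyncio.run(ask("What are today's top tech headlines?"))
```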