EVI Web Search Demo: The First Interactive Voice AI Podcast
Published on May 15, 2024
![Chatter demo screenshot](https://hume-website.directus.app/assets/a866c1d7-b16c-4a6f-9722-06165f83f1da/Screen%20Shot%202024-05-15%20at%2010.41.21%20AM.png?width=1920&height=1800&quality=75&format=webp&fit=inside)
Hume’s Empathic Voice Interface (EVI) is now the first voice API capable of native web search.
To showcase EVI’s new ultra-fast web search functionalities, we’re introducing Chatter, the first interactive voice AI podcast. Chatter uses real-time web search to provide daily news updates — users can interrupt the AI host to switch topics, or dig deeper into their favorite stories. Experience an early window into the future of interactive media here: https://chatter.hume.ai/
Imagine what you can build with empathic voice AI and web search:
- Smart shopping assistants: seamlessly search for product reviews, compare prices, and find the best deals, all through voice commands.
- Dynamic educational tools: create interactive learning experiences that use web search to find educational content tailored to students' unique needs.
- On-demand travel advisors: develop voice assistants that provide real-time travel tips, from restaurant reviews to local attractions, making it easy to offer users up-to-date recommendations.
Chatter is just one exciting example of what’s possible with web search; the potential for innovative voice AI applications is limitless. Developers can start building today: beta.hume.ai
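For a sense of what getting started looks like, here is a minimal sketch of opening an EVI chat session over WebSocket and sending a query that can trigger web search. The endpoint path, the `X-Hume-Api-Key` header, the `user_input` and `assistant_message` message types, and the idea of enabling web search through an EVI config are assumptions based on Hume’s public documentation at the time of writing; check the current API reference before building on this.

```typescript
// Minimal sketch: connect to an EVI chat session whose config has the
// built-in web search tool enabled, ask a question, and print replies.
// Endpoint, header name, and message shapes are assumptions; verify
// against the current EVI API reference.
import WebSocket from "ws";

const HUME_API_KEY = process.env.HUME_API_KEY!; // your Hume API key
const CONFIG_ID = process.env.EVI_CONFIG_ID!;   // an EVI config with web search enabled

// Open a realtime chat session against the (assumed) EVI endpoint.
const socket = new WebSocket(
  `wss://api.hume.ai/v0/evi/chat?config_id=${CONFIG_ID}`,
  { headers: { "X-Hume-Api-Key": HUME_API_KEY } }
);

socket.on("open", () => {
  // Send a text message; with web search enabled, EVI can decide on
  // its own to search the web before answering.
  socket.send(
    JSON.stringify({
      type: "user_input",
      text: "What are today's top tech headlines?",
    })
  );
});

socket.on("message", (data) => {
  const msg = JSON.parse(data.toString());
  // assistant_message events carry EVI's text responses; other event
  // types (e.g. audio output chunks) are ignored in this sketch.
  if (msg.type === "assistant_message") {
    console.log("EVI:", msg.message?.content);
  }
});
```

In a production app you would stream microphone audio to the socket and play back EVI’s audio responses; this text-only sketch is just the smallest loop that exercises the web search capability.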