OpenAI Introduces ChatGPT's Advanced Voice Mode and Realtime API for Natural Voice Interactions
Oct 1, 2024, 09:27 PM
OpenAI has introduced a new "advanced voice mode" for ChatGPT, enabling users to interact with the AI chatbot using natural spoken language. The feature lets users ask questions and receive responses that mimic human conversation, marking a significant step toward more natural and intuitive human-computer interaction. Advanced voice mode is powered by OpenAI's Realtime API, which handles audio inputs and outputs directly, streamlining the creation of voice assistants. The API connects to GPT-4 via WebSocket and supports function calling, enabling faster, more natural conversations with automatic interruption handling. Developers can also pass text inputs and control exactly when interruptions occur, giving them finer control over the interaction. OpenAI CEO Sam Altman remarked that the voice mode was the first time he felt tricked into thinking an AI was a person, noting that it taps into neural circuitry evolved for human social interaction.
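The WebSocket-based flow described above can be illustrated with a short client sketch. The Python example below (using the `websockets` library) opens a Realtime session, sends a text input, and prints the streamed reply; the endpoint URL, model name, `OpenAI-Beta` header, and event names (`session.update`, `conversation.item.create`, `response.create`, `response.text.delta`, `response.done`) follow OpenAI's launch-time documentation and should be treated as assumptions rather than a definitive client, and the same socket would carry base64-encoded audio chunks for voice interaction.

```python
# Minimal sketch of a Realtime API session over WebSocket.
# Endpoint, headers, and event names are assumptions based on OpenAI's
# launch-time documentation; check the current docs before relying on them.
import asyncio
import json
import os

import websockets  # pip install websockets

REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

async def main():
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # beta header required at launch (assumption)
    }
    # Note: newer websockets versions use `additional_headers` instead of `extra_headers`.
    async with websockets.connect(REALTIME_URL, extra_headers=headers) as ws:
        # Restrict this smoke test to text output; voice apps would keep audio enabled.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"modalities": ["text"]},
        }))
        # Add a user message to the conversation (text here; audio chunks also allowed).
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Hello, can you hear me?"}],
            },
        }))
        # Ask the model to generate a response.
        await ws.send(json.dumps({"type": "response.create"}))

        # Read server events until the response finishes.
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event["type"] == "response.done":
                print()
                break

asyncio.run(main())
```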
Markets
- Yes • 50% / No • 50%
  Resolution source: Official announcements from major tech companies such as Google, Apple, or Amazon
- Yes • 50% / No • 50%
  Resolution source: Surveys or studies published by reputable research firms or academic institutions
- Yes • 50% / No • 50%
  Resolution source: Official OpenAI blog or press releases
- Less than 5 • 25% / 5 to 10 • 25% / 11 to 20 • 25% / More than 20 • 25%
- 50-60 • 25% / 61-70 • 25% / 71-80 • 25% / More than 80 • 25%
  Resolution source: Official OpenAI documentation or press releases
- Very Satisfied • 25% / Satisfied • 25% / Neutral • 25% / Dissatisfied • 25%
  Resolution source: User satisfaction surveys conducted by OpenAI or independent research firms
- OpenAI ChatGPT • 25% / Apple Siri • 25% / Google Assistant • 25% / Amazon Alexa • 25%
  Resolution source: Market analysis reports from firms like Gartner or IDC