Will a study report that over 50% of users feel OpenAI's ChatGPT voice mode mimics human conversation by December 31, 2024?
Yes • 50%
No • 50%
Surveys or studies published by reputable research firms or academic institutions
OpenAI Introduces ChatGPT's Advanced Voice Mode and Realtime API for Natural Voice Interactions
Oct 1, 2024, 09:27 PM
OpenAI has introduced a new "advanced voice mode" for ChatGPT, letting users interact with the chatbot in natural spoken language. The feature allows users to ask questions aloud and receive responses that mimic human conversation, a significant step toward more natural and intuitive human-computer interaction. Advanced voice mode is powered by OpenAI's Realtime API, which handles audio input and output directly, streamlining the creation of voice assistants. The API connects to GPT-4 over a WebSocket connection and supports function calling, enabling faster, more natural conversations with automatic interruption handling. Developers can also pass text inputs and control exactly when interruptions are handled, allowing interactions to be customized. OpenAI CEO Sam Altman remarked that the voice mode was the first time he felt tricked into thinking an AI was a person, noting that it taps into neural circuitry that evolved for human social interaction.
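The sketch below illustrates the event flow described above for a Realtime API session: open an authenticated WebSocket connection, send a response.create event, and stream the server's reply. It is a minimal sketch, not a verified client; the endpoint URL, model name, OpenAI-Beta header, and event type names (response.create, response.text.delta, response.done) are assumptions based on OpenAI's launch materials, and the text modality is used here only to keep the example short.

```python
# Minimal sketch of a Realtime API text turn over WebSocket.
# The endpoint, model name, headers, and event types below are assumptions
# taken from OpenAI's launch materials; consult the current API reference.
import asyncio
import json
import os

import websockets  # pip install websockets

REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"


async def demo_text_turn() -> None:
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # beta header used at launch (assumption)
    }
    # Older releases of the websockets package name this parameter extra_headers.
    async with websockets.connect(REALTIME_URL, additional_headers=headers) as ws:
        # Ask the model for a response; the same event flow carries audio,
        # but the text modality keeps this example short.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Reply with one short sentence.",
            },
        }))
        # Read streamed server events until the response is complete.
        async for message in ws:
            event = json.loads(message)
            if event.get("type") == "response.text.delta":
                print(event.get("delta", ""), end="", flush=True)
            elif event.get("type") == "response.done":
                print()
                break


if __name__ == "__main__":
    asyncio.run(demo_text_turn())
```

A real voice assistant would additionally stream microphone audio and cancel the in-progress response when the user interrupts; those steps are omitted from this sketch.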
High accuracy and naturalness of voice • 25%
Ease of integration with existing systems • 25%
Privacy and security concerns • 25%
Cost and pricing concerns • 25%
Very Satisfied • 25%
Satisfied • 25%
Neutral • 25%
Dissatisfied • 25%
Less than 5 • 25%
5 to 10 • 25%
11 to 20 • 25%
More than 20 • 25%
50-60 • 25%
61-70 • 25%
71-80 • 25%
More than 80 • 25%
Very Satisfied • 25%
Satisfied • 25%
Neutral • 25%
Dissatisfied • 25%