Now You Can Talk with AI: OpenAI's Advanced Voice Mode
🛑 OpenAI has announced the launch of Advanced Voice Mode, which offers more natural, human-like conversation than the previous alpha version. The new mode can say the phrase "sorry for the delay" in over 50 languages, hinting at broad language support to come, or perhaps serving as an amusing apology for the delayed launch.
OpenAI has introduced several exciting features in this mode:
- ✅ Custom Instructions: You can now customize your experience by specifying preferences, such as avoiding certain topics or focusing on others.
- ✅ Memory Feature: This allows ChatGPT to remember previous conversations for future reference, making interactions more personalized.
- ✅ Five New Voices: A total of nine voices are now available, each with its own dialect and personality.
- ✅ Improved Dialect Understanding: It can now better recognize various dialects, pick up non-verbal cues, and respond with emotional expression, and you can interrupt it mid-response when needed.
Notably, the video and screen-sharing feature demonstrated four months ago is absent. Advanced Voice Mode will be available to ChatGPT Plus and Team plan users.
However, many developers have complained about the update's limitations and missing features.
Potential uses of this technology include virtual assistants, customer service bots, language learning tools, and even emotional support systems, where natural, empathetic interactions could significantly enhance user experience.
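For developers, these voice capabilities are exposed through the Realtime API listed in the resources below. As a rough sketch, a client configures a session and requests a spoken response by sending JSON events over a WebSocket. The event names (`session.update`, `response.create`) follow OpenAI's announcement, but the field details here are illustrative assumptions, not a verified schema:

```python
import json

def session_update(voice="alloy", instructions="You are a helpful assistant."):
    """Build a session-configuration event (voice choice, system instructions)."""
    return json.dumps({
        "type": "session.update",
        "session": {"voice": voice, "instructions": instructions},
    })

def response_create(prompt):
    """Build an event asking the model for a spoken (audio) response."""
    return json.dumps({
        "type": "response.create",
        "response": {"modalities": ["audio", "text"], "instructions": prompt},
    })

# In a real client, these strings would be sent over a WebSocket connection
# to the Realtime API endpoint with an Authorization header; here we only
# construct and print the payloads.
print(session_update())
print(response_create("Say 'sorry for the delay' in French."))
```

This only builds the payloads locally; an actual integration would also handle the server's streamed audio events, which are beyond this sketch.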
Resources
- OpenAI just launched advanced voice mode
- Advanced Voice Mode released (09/25/2024)
- Introducing the Realtime API