OpenAI has expressed concern that users may develop an unhealthy dependence on ChatGPT for companionship, particularly with the introduction of its new human-like voice feature. The feature, which began rolling out to paying customers last week, closely replicates natural human speech: it responds in real time, accommodates interruptions and even produces conversational sounds like laughter and “hmms.” The AI can also interpret a speaker’s emotional state from their tone of voice.
According to a safety review OpenAI released on Thursday, the company is concerned that users may begin to form emotional attachments to the AI, diminishing their need for human interaction. While this might provide some solace to lonely individuals, it could also undermine healthy relationships. The report further warns that users might place too much trust in the AI because its voice sounds human, even though the tool remains prone to errors.
These concerns arise amid broader apprehension about the rapid deployment of AI technologies, which are reshaping how we live, work and interact before their long-term effects are fully understood. And because users often find novel ways to engage with new technology, unintended consequences can emerge.
There are already instances of users forming what they describe as romantic relationships with AI chatbots, a trend that has alarmed experts such as Liesel Sharabi, a professor at Arizona State University. She cautions that these deep emotional connections with evolving technology may not be sustainable and could cause emotional harm.
The report from OpenAI also notes that ChatGPT’s voice feature could influence social norms: users can interrupt the AI at any time, behavior that would be considered impolite in a human conversation and that might reshape what users expect from their interactions. Despite these concerns, OpenAI says it remains committed to developing AI safely and intends to continue researching the emotional effects of its tools on users.