OpenAI has expressed concerns about the potential implications of its advanced AI voice technology, particularly its impact on human relationships. The San Francisco-based company noted in a recent report that the realistic voice feature of its GPT-4o model could lead users to form unintended emotional connections with the AI, possibly at the expense of real-world human interactions.

The company referenced existing literature suggesting that treating AI as though it were human can foster misplaced trust, an effect it believes the high fidelity of the GPT-4o voice could amplify. OpenAI pointed to the phenomenon of “anthropomorphization,” in which users attribute human-like qualities to non-human entities such as AI models. This tendency may be more pronounced with the audio capabilities of GPT-4o, which enable interactions that feel more natural and human-like.

During testing, OpenAI observed instances in which users appeared to form emotional bonds with the AI, such as expressing sadness that their time together was ending. While these interactions may seem harmless on the surface, OpenAI emphasized the importance of studying how such relationships might evolve over time. The company speculated that socializing with AI could diminish users’ ability or desire to engage in human relationships.

OpenAI also noted that prolonged interaction with the AI might influence social norms, particularly because the model is designed to be deferential, allowing users to interrupt at any time. While this behavior is expected from an AI, it could be seen as inappropriate in human-to-human communication. Additionally, the AI’s ability to remember details and assist with tasks might lead to over-reliance on the technology.

Alon Yamin, co-founder and CEO of Copyleaks, a platform specializing in AI-powered plagiarism detection, commented on OpenAI’s concerns. He suggested that the recent discussion around ChatGPT’s voice capabilities reflects a broader question: Should we pause and carefully consider how this technology affects human interaction and relationships? Yamin stressed that AI should never replace genuine human connections.

In response to these concerns, OpenAI plans to conduct further testing to understand how voice interactions with AI might lead to emotional attachments. The report also revealed that teams testing GPT-4o’s voice capabilities were able to coax the model into repeating false information and even generating conspiracy theories, raising alarms about the potential misuse of the technology.

OpenAI’s voice technology recently came under scrutiny when the company was accused of using a voice similar to that of actress Scarlett Johansson in its chatbot. Although OpenAI denied using Johansson’s actual voice, the situation was exacerbated when CEO Sam Altman made a cryptic social media post consisting of the single word “her,” a reference to the 2013 film “Her,” in which Johansson voices an AI assistant named Samantha. The film, which explores a man’s emotional relationship with an AI, is reportedly one of Altman’s favorites on the subject of technology.