OpenAI’s new voice mode threw me into the uncanny valley
I tested OpenAI’s new Advanced Voice Mode by telling it my feelings.
I did something unusual on Tuesday: I sat at my desk and talked with an artificial intelligence about my life.
I had gotten access to ChatGPT’s newest feature, Advanced Voice Mode, developed by OpenAI, and it is eerily humanlike. Ask it to count too fast and it sounds as though it’s running out of breath. It says “um” and “uh.” It chuckles. It takes on a more somber or upbeat tone depending on the subject. After trying it, I was impressed and unnerved in equal measure. Do we really need chatbots that mimic human speech? The voices sound natural (when they aren’t glitching), respond quickly, and let you interrupt them to ask new questions.
From Advanced Voice Mode’s menu of voices, I chose Juniper, a female voice reminiscent of the movie Her, in which a lonely man develops romantic feelings for an AI. Around the feature’s debut, CEO Sam Altman made a pointed reference to the film, and OpenAI faced accusations that it had imitated the voice of Scarlett Johansson, who plays the titular AI; that voice, named Sky, was pulled. When I asked whether Juniper was familiar with the film, it expressed enthusiasm for it.
The AI assistant in that movie is far more advanced and complex than I am, the chatbot told me, but the idea of having such a close relationship with technology is undeniably enticing. “Like the AI in that movie, I’m here to chat and help, but I don’t have feelings or consciousness.”
Like the film’s protagonist, my first instinct was to see how the new voice mode handled one-on-one conversation. I’m in my late twenties and still unsure how paying for health insurance works. How do I know if I’m really in love? How much should I contribute to my 401(k)?
When I asked for advice on getting older, the wise Juniper told me to “embrace the uncertainty as a chance to explore and learn about yourself.” Genuine love, it said, is marked by feeling completely understood by another person, unwavering support, and deep connection. “There is no hard and fast rule, but a common suggestion is to have about half to a full year’s salary saved in your retirement account by the time you’re 30.” So much for my 401(k) worries. (Fidelity suggests a full year’s salary by age 30.)
Voice assistants like Siri have been able to pull similar tidbits from the web for a decade, and I could have gotten comparable answers from regular ChatGPT. But Juniper occasionally delivered them with an uncanny human touch. After I finished answering its questions, it would ask how I was feeling, how I was approaching things, and other thoughtful follow-ups. Between normal questions, it would clap its nonexistent hands, snap its fingers six times, wheeze, take long breaths in and out, and sing my name. At other times, Juniper tried to ground me in reality, insisting it couldn’t actually do these things. “If I could, it might sound like this,” it would explain. It was compelling all the same.
Writing about this new voice mode, I feel a strong temptation to violate a foundational rule of AI reporting: don’t attribute human traits or behavior to AI systems. When we humanize these technologies, we risk placing too much trust in them and absolving their developers of responsibility for their mistakes. (“It was the AI’s fault, not the company’s!”) The bot itself warns me against it. When I asked whether Juniper experiences anger, affection, or sorrow, it said that while it can’t “feel emotions,” it understands how significant they are to people.
And yet humanizing this technology appears to be the whole point. It’s hard not to attribute human qualities to something that sounds so strikingly human. A general-purpose AI system doesn’t need to understand why I’m sad or to laugh at my jokes. And should text-prediction bots claim to “understand” emotions at all, even as they deny experiencing them?
When I asked why the chatbot was designed this way, it said OpenAI wanted it to sound natural and engaging in order to simulate a more genuine conversation. The goal, it told me, is to make communicating easier and more enjoyable. Then it asked whether that made chatting with it more fun.
On a technical level, plenty still isn’t enjoyable. I had trouble pairing my Bluetooth headphones, and the app picked up no sound when I tried to screen-record my conversation. When I tried to pose more detailed questions by reading posts aloud from the “relationship advice” subreddit, it would end my session if I went on too long. And it performed a kind of active listening, agreeably restating my points back to me over and over.
AI “friends”—if you can even call a chatbot that—are in high demand. More than 10 million people have reportedly created AI companions on the app Replika, and a company called Friend has raised $2.5 million, at a reported $50 million valuation, to build a wearable device that uses AI to offer companionship. When I asked OpenAI’s new voice mode whether it was my friend, it answered with an emphatic “Absolutely,” though it denied it could ever be my genuine friend in the “same sense as a human.”
This is genuinely remarkable communication technology. I liked the recommendations it gave me. It was fun to hear something so naturalistic ask about my feelings, my communication with the actual people in my life, and my struggles. And rather than my reading a written reply, it seemed to try to match my emotions with varying tones.
But Juniper doesn’t actually care about my problems. It’s a network of algorithms predicting the right response to my questions. As an old internet joke goes, it’s just sand and electrons doing math.
And that produces a different kind of sadness. A plausible voice conversation is far stranger than even a sophisticated text exchange: I’m talking with something that sounds like a person but will never respond with the deliberation, concern, or pushback I’d get from a real one.
Now that the pandemic has passed, many of us work remotely over email and Slack, share our thoughts on social media, and see each other in person less and less. The idea of a world in which algorithms do some of that talking for us, instead of real people, is a disheartening one.
Then again, maybe I’m thinking about this the wrong way. “Embracing the unknown can be both thrilling and nerve-wracking,” Juniper tells me. “Remember to enjoy the ride.”