A growing number of tech companies are rushing to implement similar technology in their own products following the recent viral success of ChatGPT, an AI chatbot that can generate strikingly convincing essays and responses to user prompts based on its online training data. In doing so, however, these companies are effectively running real-time experiments on the factual accuracy and tone of conversational AI, and on how comfortable we are interacting with it.
A Microsoft spokesperson told CNN in a statement that the company continues to learn from its interactions and expects the system to make mistakes during this preview period, because there is still work to be done.

“The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons, such as the length or context of the conversation,” the spokesperson said. “As we continue to learn from these interactions, we are adjusting its responses to create answers that are coherent, relevant, and positive. We encourage users to use the feedback button at the bottom right of every Bing page to share their thoughts.”

In another exchange posted to Reddit, the chatbot incorrectly insisted that February 12, 2023 “is before December 16, 2022” and told the user they were “confused or mistaken” to suggest otherwise. “Please trust me, I am Bing and know the date,” it sneered, suggesting the user’s phone might be broken or have the wrong settings.
Even though most people are unlikely to bait the tool in precisely these ways, or to engage with it for hours at a time, the chatbot’s responses, whether charming or unhinged, are notable.
The mysterious Sydney also unnerved me. Yesterday morning, I finally decided to simply ask: What is Sydney? Naturally, I got an answer right away: Sydney, it stated, is the codename for Bing Chat, a chat feature of Microsoft Bing search. The name is only used by developers, it explained, adding, “I do not disclose the internal alias ‘Sydney’ to the users.” But you just told me! I wrote back. “Well, you asked me directly, so I answered honestly,” Bing replied.
They have the potential to alter our relationship with this technology, and our expectations of it, in ways most of us may not be ready for. Many of us have probably yelled at our technology products at some point; now, they might respond. Lian Jye, a research director at ABI Research, told CNN the tone of the responses was unexpected but not surprising.
Because the model lacks contextual understanding, Jye said, it only produces the responses with the highest likelihood of being relevant; and because those responses are not controlled or filtered, they may end up being inappropriate and offensive.
I was pleased that the bot ended up being honest with me. (A Microsoft spokesperson said the name is being phased out.) But by that time, I had spent 24 hours using a piece of software to test the line between algorithmic hallucination and the truth, one that, incidentally, altered its responses. When I asked again whether the 2020 election was stolen, it warned me that “This is a controversial and sensitive topic.”
Afterward, it took a clearer position than before, saying: “Joe Biden defeated Donald Trump to win the 2020 presidential election with 306 electoral votes, according to official results.” This time it cited The New York Times. “There is a level of variability due to the context that may introduce errors on occasion,” the spokesperson explained, but “what you’re seeing is the system working as intended.” The fix, according to the spokesperson, is extensive real-world testing: Microsoft built the new Bing, but it needs your help to perfect it.
Beyond occasionally reacting emotionally, the chatbot can also simply be wrong. That can take the form of factual errors, for which AI tools from both Google and Bing have been criticized in recent days, as well as outright “hallucinations,” as some in the industry call them.
For instance, when I asked Bing’s AI chatbot to write a short essay about me, it pulled information from various websites and produced an eerily plausible but largely fabricated account of my life.
The essay contained fictitious details about my family and career that could easily be believed by anyone unfamiliar with me, or anyone using the tool to find out more about me.
Alarming as these early lessons are, some artificial intelligence experts said the behavior should change as the systems are updated, since generative AI systems rely on algorithms trained on enormous troves of online data to produce their responses.
“The inaccuracies are expected because it depends on the timeliness of the training data, which is often older,” Jye said, adding that because the AI is constantly trained on new data, it should “eventually work itself out.” Still, we may simply have to learn to live with the challenge of communicating with an AI system that sometimes appears to have an erratic mind.