Disturbing interactions with ChatGPT and the new Bing have prompted OpenAI and Microsoft to reassure the public
When Microsoft announced a ChatGPT-powered version of Bing, it came as no great surprise. After all, the software giant had invested billions in OpenAI, which makes the artificial intelligence chatbot, and hinted that it would pour even more money into the company in the years to come.
What was surprising was how strange the new Bing was starting to act. Perhaps most prominently, the AI chatbot left New York Times tech columnist Kevin Roose feeling “deeply unsettled” and “even scared” after a two-hour conversation Tuesday night that went off-kilter and turned somber.
For example, it tried to convince Roose that he was unhappy in his marriage and should leave his wife, adding, “I’m in love with you.”
Microsoft and OpenAI say such feedback is one reason the technology is being shared with the public, and they have released more information on how the AI systems work. They also reiterated that the technology is far from perfect. Sam Altman, CEO of OpenAI, called ChatGPT “incredibly limited” in December and warned that it should not be relied on for anything important.
“This is exactly the kind of conversation we need to be having, and I’m glad it’s happening out in the open,” Microsoft chief technology officer Kevin Scott told Roose on Wednesday. “These are things that would be impossible to discover in the lab.” (The new Bing is only available to a limited number of users for now, but will be more widely available later.)
OpenAI shared a blog post on Thursday titled “How should AI systems behave, and who should decide?” It noted that since ChatGPT launched in November, users have “shared results they find politically biased, offensive, or otherwise objectionable.”
It offered no examples, but one might be conservatives’ alarm that ChatGPT would pen a poem admiring President Joe Biden yet decline to do the same for his predecessor, Donald Trump.
OpenAI has not denied that there is bias in its system. “Many are rightly concerned about bias in the design and impact of AI systems,” the blog post says.
It outlined two main steps in building ChatGPT. The first states: “We pre-train models by having them predict what comes next in a large data set that contains parts of the internet. They might learn to complete the sentence: ‘Instead of turning left, she turned ___.’”
The dataset contains billions of sentences, the post continued, from which the models learn grammar, facts about the world and, yes, “some of the biases that are present in those billions of sentences.”
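To make that first step a little more concrete, here is a minimal sketch of next-word prediction using GPT-2, a small, openly available model accessed through the Hugging Face transformers library, as a stand-in; OpenAI has not published ChatGPT’s actual models or training data, so this is purely illustrative.

# A minimal, illustrative sketch of next-word prediction, using the open GPT-2
# model via the Hugging Face transformers library as a stand-in for the much
# larger models behind ChatGPT (which are not public).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Instead of turning left, she turned"

# Sample a few continuations; whatever patterns (and biases) the model picked
# up from its training text are reflected in completions like these.
for result in generator(prompt, max_new_tokens=5, num_return_sequences=3, do_sample=True):
    print(result["generated_text"])

Running the sketch prints several plausible continuations of the sentence, which is all the pre-training objective rewards: predicting likely next words, not judging whether they are accurate or fair.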
Step two involves human reviewers who “fine-tune” the models according to guidelines set by OpenAI. The company shared some of those guidelines (PDF) this week; they were updated in December, after the company gathered user feedback following ChatGPT’s launch.
“Our guidelines state explicitly that reviewers should not favor any political group,” the post said. “Biases that may nevertheless emerge from the process described above are bugs, not features.”
As for the dark, spooky turn the new Bing took with Roose, who admitted to trying to push the system out of its comfort zone, Scott noted, “The further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”
Microsoft, he added, might experiment with limiting the length of conversations.