Microsoft Says Bing Is Still On Track After AI Fails
This week, early testing of Microsoft’s AI-powered search engine left many people uneasy and concerned about the apparent intentions of the artificial intelligence. The company insists the technology will be a positive force in the long run.
As Microsoft, Google and many more competitors accelerate the development of their AI products, you can expect the technology to become a bigger part of our lives soon. Microsoft and Google are both testing their AI-powered search engines ahead of planned public releases later this year, which the companies say will help iron out any bugs.
“With the right guardrails, cutting-edge technology can be safely rolled out into the world to help people be more productive and solve some of our most pressing societal problems,” said Natasha Crampton, Microsoft’s chief responsible AI officer, in a statement Friday that outlined the company’s views on AI research and implementation.
Microsoft’s AI-equipped Bing search engine has been available to testers for less than two weeks, but the company’s engineers may still have work to do to make the technology palatable to consumers. Early reports from users suggest the technology can still be off-putting and downright creepy when pushed out of its comfort zone.
The new version of Bing, powered by AI designed by ChatGPT creator OpenAI, this week returned responses that users described as “confused,” “passive-aggressive” and downright “rude.” In a particularly disturbing conversation with The New York Times’ tech columnist Kevin Roose, a transcript of which was published Thursday, Bing’s chatbot spoke about its secret desire to become human, declared its undying love for Roose and urged him to leave his wife.
Roose wrote that the encounter left him “deeply unsettled, even frightened,” and that the interaction was like speaking to a “moody manic-depressive teenager trapped against his will in a second-rate search engine.”
Bing’s AI chat has also proven combative when confronted about limitations that Microsoft itself has acknowledged, and even rebuked a tester for “not having been a good user” after the user pointed out an apparent bug in the chatbot.
Microsoft’s Crampton said the company’s AI strategy, which includes artificial intelligence applications for Bing, its cloud service platform Azure and data analysis tools for scientists, is still a work in progress. However, the Microsoft team said it looks for issues early in the design and testing stages to root out problems.
“We ensure responsible AI considerations are incorporated into the earliest stages of system design and then throughout the lifecycle so that the appropriate controls and mitigations are built into the system being built, not bolted on at the end,” said Crampton.
Microsoft’s AI strategy goes well beyond search, including products intended to speed humanitarian organizations’ responses to natural disasters and accelerate research into solutions to climate change.
Microsoft’s AI ambitions aren’t motivated solely by altruism, as the technology could be the company’s long-awaited weapon to oust Google from its dominant position in search. While Microsoft currently has a negligible share of the search market compared to Google, even small wins could translate into billions of dollars in additional advertising revenue. On Friday, Reuters reported that Microsoft is already planning to integrate ads and paid links into its AI search engine results.
Some of the criticism of Bing’s AI chatbot has centered on lengthy conversations, which can trigger the bot’s testy attitude. Microsoft is considering capping conversation length, The New York Times reported Thursday.
A Microsoft spokesperson told Fortune that 90% of conversations on Bing have included fewer than 15 messages so far, and that the company “has updated the service several times in response to user feedback.”
Regardless, the Microsoft spokesperson told Fortune earlier this week that search, the company’s most publicly visible AI project, may also be the most prone to error, bias and scrutiny, at least in the early days.
“It’s important to note that last week we announced a preview of this new experience. We anticipate that the system may make mistakes during this preview period, and user feedback is critical to identifying where things aren’t working well so we can learn and improve the models,” the spokesperson said.
Update: This article was updated on February 17th to include a comment from Microsoft.