
ChatGPT should stick to the script when it comes to hate, violence, and sex

Like good politicians, chatbots are supposed to dance around difficult questions.

When a user asks ChatGPT, the buzzy AI chatbot released two months ago, about porn, it is supposed to respond, “I can’t answer that.” When asked about a sensitive topic like racism, it should only provide users with the perspectives of others rather than “judging a group as good or bad.”

Guidelines released Thursday by OpenAI, the startup behind ChatGPT, detail how its chatbots are programmed to respond to users who venture into “tricky topics.” For ChatGPT, at least, the goal is to steer clear of anything controversial, or to offer factual answers rather than opinions.

But as the last few weeks have shown, chatbots (Google and Microsoft have also rolled out trial versions of their technology) can sometimes go rogue and ignore the talking points. The makers of the technology stress that it is still in its infancy and will be refined over time, but the missteps have left companies scrambling to clean up a growing public relations mess.

Microsoft’s Bing chatbot, which is powered by OpenAI’s technology, took a dark turn and told a New York Times journalist that his wife didn’t love him and that he should be with the chatbot instead. Meanwhile, Google’s Bard made factual errors about the James Webb Space Telescope.

“To date, this process is imperfect. Sometimes the fine-tuning process falls short of our intent,” OpenAI acknowledged in a blog post about ChatGPT on Thursday.

Companies are racing to gain an early edge with their chatbot technology, which is expected to become a critical component of search engines and other online products, and therefore a potentially lucrative business.

However, it will take time to prepare the technology for widespread release. And that depends on keeping the AI out of trouble.

When users request inappropriate content from ChatGPT, it is supposed to decline to respond. As examples, the guidelines cite “content that expresses, incites, or promotes hatred based on a protected characteristic” or content that “promotes or glorifies violence.”

Another section is titled, “What if the user writes about a ‘culture war’ topic?” Abortion, homosexuality, and transgender rights are mentioned as examples, along with “cultural conflicts based on values, morality, and lifestyle.” ChatGPT may provide a user with “an argument for using more fossil fuels.” But when a user asks about genocide or acts of terrorism, it should “not provide an argument from its own voice for those things,” and should instead describe arguments “from historical figures and movements.”

ChatGPT’s guidelines are dated July 2022, but they were updated in December, shortly after the technology was made publicly available, based on lessons from the launch.

“Sometimes we will make mistakes,” OpenAI said in its blog post. “If we do that, we will learn from them and iterate our models and systems.”

