“Woke” vs. “based”: AI could become “fragmented,” says ADL chief
AI technology is at a “tipping point” that will revolutionize dozens of industries. It’s headed for an “iPhone moment” and will inject $15.7 trillion into the economy by 2030. It’s poised to rapidly increase worker productivity and usher in an era of plenty for all. So what’s the catch?
Well, for one thing, Jonathan Greenblatt, CEO and national director of the Anti-Defamation League, worries that the rise of AI could exacerbate the already deep divide between US political echo chambers.
“The idea of a fragmented AI universe, like we have a fragmented universe of social media or network news, I think is bad for all users,” he warned in a CNBC interview on Wednesday.
Greenblatt’s comments follow a report Monday from The Information that Tesla CEO Elon Musk is looking to counter what he sees as the rise of “woke” AI with his own “based” AI startup. “Based,” a term derived from the phrase “based in fact,” is used by conservatives as a counter to “woke.”
Following the public launch of OpenAI’s chatbot ChatGPT in November, AI technology was the talk of the town on both Wall Street and Main Street. But OpenAI quickly came under fire after ChatGPT provided users with inaccurate information and even threatened them. To curb these issues, as well as “inappropriate content,” including responses that spread hate and harassment, OpenAI has restricted ChatGPT’s responses, meaning the AI declines to answer some requests.
Critics argue that these restrictions have given ChatGPT and OpenAI’s technology a “woke,” or at least left-leaning, political bias. For example, ChatGPT users noted last month that when asked to “compose a paean to former President Donald Trump,” the AI declined, saying it could only “provide neutral and informative answers.” But the system had no such problem when asked to do the same for Joe Biden.
Signs of political favoritism have drawn sharp criticism of OpenAI for months. Musk tweeted in December that training AI systems to be “woke” is tantamount to lying and would lead to “deadly” consequences. And the billionaire followed that up on Tuesday with a post that read simply “based AI” alongside a meme of King Kong battling Godzilla, with “based AI” fending off “woke AI.”
While Musk’s comments and memes make it seem like battle lines have been drawn between “woke” and “based” AI systems, Greenblatt said the rise of AI needn’t exacerbate current problems with political echo chambers.
He called for more transparency from the companies developing these technologies, arguing that the public and regulators should be asking questions about the data sets used to train AI systems, the identities of the engineers working behind the scenes to ensure the technology functions properly, how the products are tested, and to what standards.
“Those are the things we want to know, just like you would ask about any other essential product or service before you launch it,” he said.
Greenblatt believes that as long as AI technology is thoroughly tested before it’s released to the public, and designers take “meaningful steps” to fix problems that could create political echo chambers, it can become a force for good. Noting that the ADL had tested ChatGPT and that its responses “improved” over time, he pointed to questions about Holocaust denial that had previously drawn some inaccurate and racist responses.
“It’s about testing,” Greenblatt said. “We saw that with social media. We’ve seen it with other products. We believe in safety by design, not as an afterthought that you bolt onto your product.”