Google battles hot AI technology ChatGPT

Before the birth of the ChatGPT AI tool, novelist Robin Sloan tested a similar AI writing assistant developed by researchers at Google.

It wasn’t long before Sloan, author of the best-selling “Mr. Penumbra’s 24-Hour Bookstore,” realized that the technology was of little use to him.

“A lot of the cutting-edge AI right now is impressive enough to really raise your expectations and make you think, ‘Wow, I’m dealing with something really, really powerful,'” Sloan said. “But then it ends up in a thousand little ways, a million little ways, kind of disappointing you and betraying the fact that it really has no idea what’s going on.”

Another company might have released the experiment into the wild anyway, like startup OpenAI did with its ChatGPT tool late last year. But Google has been more cautious about who gets to play with its AI advances, despite growing pressure on the internet giant to compete more aggressively with rival Microsoft, which is pouring billions of dollars into OpenAI and inserting its technology into Microsoft products.

That pressure is beginning to take its toll, as Google has asked one of its AI teams to prioritize work on an answer to ChatGPT, according to an internal memo reported by CNBC this week. Google declined to confirm whether a public chatbot is in the works, but spokeswoman Lily Lin said the company will continue to “test our AI technology internally to ensure it’s helpful and safe,” and that it looks forward to sharing more experiences externally soon.

Some of the technological breakthroughs driving the red-hot field of generative AI – which can produce paragraphs of readable text and new images, as well as music and videos – have been developed in Google’s extensive research arm.

“So we have an important interest in this area, but we also have an important interest in not only being able to generate things, but also in being able to handle the information quality,” said Zoubin Ghahramani, vice president of research at Google, in a November interview with The Associated Press.

Ghahramani said the company also wants to be judged by what it releases and how: “Do we want to make it accessible enough for people to mass-produce without any scrutiny? The answer to that is no, not at this stage. I don’t think it would be our responsibility to be the people driving that.”

And they weren’t. Four weeks after the AP interview, OpenAI released its ChatGPT for free to anyone with an internet connection. Millions of people around the world have now tried it, sparking heated debates in schools and corporate offices about the future of education and work.

OpenAI declined to comment on comparisons to Google. But when announcing their expanded partnership in January, Microsoft and OpenAI said they are committed to “developing AI systems and products that are trusted and secure.”

As a literary assistant, neither ChatGPT nor Google’s creative writing version comes close to what a human can do, Sloan said.

A fictional Google was at the center of the plot of Sloan’s popular 2012 novel about a mysterious bookstore in San Francisco. That’s probably one of the reasons the company invited him, along with several other writers, to try out its experimental Wordcraft Writers Workshop, which was derived from a powerful AI system called LaMDA.

Like other large language models, including the GPT line created by OpenAI, Google’s LaMDA can generate compelling snippets of text and converse with people based on what it processes from a trove of online writings and digitized books. Facebook parent Meta and Amazon have also built their own large models that can improve voice assistants like Alexa, predict the next sentence in an email, or translate languages in real time.

When first announcing its LaMDA model in 2021, Google emphasized its versatility, but also pointed to the risks of harmful misuse and the possibility that it could mimic and amplify biased, hateful, or misleading information.

Some of the Wordcraft authors found it useful as a research tool – like a faster and more definitive version of a Google search – when asking for a list of “rabbit breeds and their magical properties,” “a verb for what fireflies do,” or “Tell me about Venice in 1700,” according to Google’s paper about the project. But it was less effective as a writer or paraphraser, producing boring sentences that were riddled with clichés and exhibited some gender bias.

“I believe them — that they’re thoughtful and careful,” Sloan said of Google. “It’s just not the model of a ruthless technologist in a hurry to get this out there anyway.”

Google’s development of these models has not been without internal friction. The company pushed out some prominent researchers who were studying the technology’s risks. And last year it fired an engineer who publicly posted a conversation with LaMDA in which the model falsely claimed it had a human-like consciousness with a “variety of feelings and emotions.”

While ChatGPT and its competitors may never produce acclaimed works of literature, they are expected to soon begin transforming other professional tasks – from helping debug computer code to writing marketing pitches to speeding up the production of slide presentations.

That’s why Microsoft, as a workplace software vendor, is keen to expand its product suite with the latest OpenAI tools. The benefits are less clear for Google, which largely depends on the advertising money it gets when people search for information online.

“If you ask the question and get the wrong answer, that’s not good for a search engine,” said Dexter Thillien, a technology analyst with the London-based Economist Intelligence Unit.

Microsoft also has a search engine — Bing — but ChatGPT’s answers are too imprecise and outdated, and the cost of running its queries is too high for the technology to pose a serious risk to Google’s dominant search business, Thillien said.

Google has said its earlier large language model, called BERT, is already playing a role in answering online searches. Such models can help generate the fact boxes that increasingly appear alongside Google’s ranking of web links.

Asked in November about the hype around AI applications like OpenAI’s DALL-E image generator, Ghahramani conceded in a playful tone that “it’s a bit annoying sometimes because we know we’ve developed a lot of these technologies.”

“We’re not in it to get the likes and the clicks, right?” he said, noting that Google is a leader in publishing AI research that others can build on.
