ChatGPT comes for crypto
If you are not convinced of the power of artificial intelligence, look at the figure above. I generated it in a few seconds by typing the prompt “Vaporwave robot carrying a briefcase full of cryptocurrency in a dark alley” into DALL-E 2, a free tool from OpenAI, the Elon Musk-funded AI company.
OpenAI’s latest AI model is even more powerful. It’s an uncannily realistic chatbot called ChatGPT that can churn out reams of insightful text on almost anything you throw at it. And unlike other text generation models, it remembers what you’ve told it, allowing for conversations that convey a compelling impression of a working mind.
Impressively, the chatbot can also convert human prompts into lines of code: at my command, ChatGPT wrote a smart contract in Solidity, Ethereum’s programming language, that turned the DALL-E image I generated into an NFT.
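ChatGPT’s actual output isn’t reproduced here, but a minimal contract of the kind described might look like the sketch below. To be clear, this is an illustration, not the chatbot’s code: the contract name, token symbol, and the use of OpenZeppelin’s ERC721URIStorage base are assumptions, and the image itself would live off-chain at a metadata URI passed in at mint time.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the kind of contract ChatGPT can generate:
// a minimal ERC-721 that mints a token pointing at an off-chain image.
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";

contract VaporwaveRobot is ERC721URIStorage {
    uint256 private _nextId;

    constructor() ERC721("Vaporwave Robot", "VAPOR") {}

    // Mint a new token whose metadata URI points at the image
    // (e.g. a JSON file whose "image" field holds the DALL-E picture).
    function mint(address to, string memory tokenURI_) external returns (uint256) {
        uint256 tokenId = _nextId++;
        _safeMint(to, tokenId);
        _setTokenURI(tokenId, tokenURI_);
        return tokenId;
    }
}
```

In practice the URI would point to a JSON metadata file (often on IPFS) that in turn references the image, which is the convention marketplaces expect.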
Although ChatGPT is just a free research preview, it has already captured the imagination of the tech world, reaching one million users just five days after launching late last month. By comparison, GitHub’s AI coding assistant took six months to cross the same threshold.
The prospect of outsourcing mental work to an AI assistant has also drawn the crypto crowd. The space offers plenty of room for people operating well beyond their actual skill level, which makes an ultra-confident chatbot both exciting and dangerous. While earnest developers can use the technology to improve their coding or overcome language barriers, ChatGPT also makes it easier than ever to produce malicious code or build a honeypot plausible enough to fool investors.
Some crypto professionals are already making good use of the technology. Hugh Brooks, security chief at smart contract testing firm CertiK, said the chatbot isn’t bad at finding bugs in code and is invaluable when it comes to summarizing complicated code and dense academic articles.
And Stephen Tong, founder of a small blockchain security company called Zellic, said his firm is already using it for sales and customer support. “It makes everyone on those teams more efficient,” he said, allowing cosplaying tech bros to deliver a “super buttoned-up, professional experience” without breaking a sweat.
Also at the forefront of crypto’s techno-optimists is Tomiwa Ademidun, a young Canadian software engineer who used ChatGPT to program a cryptocurrency wallet from scratch and then wrote a detailed guide, complete with diagrams, teaching people how to do it themselves.
“It’s quite impressive, to be honest,” he said. ChatGPT taught Ademidun complex cryptography concepts with the avuncular charm of a kindly high school computer science teacher, then generated what he described as nearly bug-free code. And when Ademidun spotted an error, the chatbot politely apologized and corrected itself. That triggered a minor career crisis for the young software engineer: after ChatGPT, “what else do you need me for?”
Plenty, as it turns out. The technology is far from perfect and frequently spews hot garbage with supreme confidence when faced with impossible questions. Stranded on a desert island with no arms or legs? “Use your arms to crawl or slide,” then “build a makeshift wheelchair,” the chatbot advised. Need help delivering Chinese groceries to a spaceship bound for Mars? “Many space agencies offer meal delivery services to astronauts,” it claimed.
Even programmers have to be smart enough to fight their way through ChatGPT’s unwavering belief in its own ramblings. When Outlier Ventures’ lead blockchain engineer Lorenzo Sicilia experimented with the technology, he found it useless for more advanced smart contract work. “Once you try it, you discover all the little details that don’t work,” he said.
ChatGPT’s code, limited by training data that cuts off in 2021, threw errors when run against the latest virtual machines. And as a blustering conversational AI, it has no way to formally verify its own code.
While some crypto developers have found in ChatGPT a tireless debugging assistant, others are already trying to use the technology to make a quick buck. Earlier this month, stablecoin engineer Daniel Von Fange rebuffed a submission for a lucrative “bug bounty” that he believes was generated by ChatGPT.
“It had taken things from my answer with simulation code (written in one programming language), mixed it with test code (in another language), and invented a third problem that was as wrong as the other two,” he told Fortune. “It’s like someone with all the swagger and sponsor-covered Nomex of a NASCAR driver who can’t find the steering wheel in a pickup truck,” he told the cybersecurity blog The Daily Sip.
Artificial intelligence that can write persuasively about nonsense is also perfect for generating phishing campaigns that steer people toward GPT-made malware, or for coordinated harassment campaigns waged by annoyingly lifelike Twitter bots. It can also serve opportunists hunting for a fresh round of gullible investors.
Equally harmful is so-called educational material that may bear no relation to the truth. Similarly, those who can’t read code could lose money to botched AI-generated trading bots whose inner workings they don’t understand.
And yet, despite the risks, Ademidun remains on the optimistic side of technological determinism. Like any tool, he said, ChatGPT can be used for better or for worse — the more important point being that it could be very powerful, especially if OpenAI feeds it more data, including real-time information.
In fact, if ChatGPT had succeeded in finding a bug in Von Fange’s code, the engineer said he would have happily paid out $250,000. The chatbot is proof that the train of progress has already left the station. “The question is, will you jump aboard or just watch it pass you by?” said Ademidun.
Outside of crypto, people are certainly using it in everyday life. One consultant confided that he threw cursory recommendations about a factory he’d visited into a prompt, then sent the AI-generated report to his clueless boss, who made only minor changes before sending it to the client. A dazed solicitor from a leading London law firm told me he was generating an endless supply of bedtime stories for his children.
However, in its current, limited incarnation, ChatGPT might be better understood as a fascinating scientific experiment. Sam Altman, one of the founders of OpenAI, said on Twitter that the technology gives “a misleading impression of greatness.” It’s just a “preview of progress,” he said. “It’s a mistake to be relying on it for anything important right now.”