As revolutionary as Web3 might be, artificial intelligence (AI) has been unapologetically stealing its thunder. And rightly so. The likes of ChatGPT have seamlessly placed next-gen technology right into the palms of the everyday user. None of the wallet creation, gas fees, and general rabbit-hole venturing that come with stepping into Web3.
While AI technology dates back to the 1950s, recent advancements have enabled AI to demonstrate its value to even the most tech-phobic user. Once seen as a dystopian dream, AI has proven to be a capable, efficient, and innovative disruptor of industries.
Player in the Game
AI can sharpen human decisions and actions. Buterin argues that AI-driven arbitrage bots on decentralized exchanges can exploit opportunities more efficiently than humans.
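As a rough illustration of the kind of opportunity such a bot hunts for, here is a minimal Python sketch comparing prices across two constant-product (x · y = k) AMM pools. The pool reserves, the 1% edge threshold, and the function names are all hypothetical, not from Buterin's post.

```python
# Toy sketch of the price-divergence check an AI-driven arbitrage bot
# might run across two constant-product AMM pools. All numbers are
# hypothetical and purely for illustration.

def spot_price(reserve_token: float, reserve_usd: float) -> float:
    """Implied token price in a constant-product (x * y = k) pool."""
    return reserve_usd / reserve_token

def find_arbitrage(pool_a: tuple, pool_b: tuple, min_edge: float = 0.01):
    """Return a trade direction if prices diverge by more than min_edge."""
    price_a = spot_price(*pool_a)
    price_b = spot_price(*pool_b)
    edge = abs(price_a - price_b) / min(price_a, price_b)
    if edge < min_edge:
        return None  # spread too thin to beat fees and gas
    return ("buy_a_sell_b", edge) if price_a < price_b else ("buy_b_sell_a", edge)

# Hypothetical reserves: (token reserve, USD reserve) per pool
print(find_arbitrage((1_000, 2_000_000), (1_000, 2_100_000)))
```

A real bot would also model slippage, fees, and gas before acting; this sketch only captures the detection step.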
Timing the market has long been a near-impossible task for the average trader. As Buterin highlights, market participants are often "irrational," and people with the "right knowledge" are not willing to "take the time and bet unless a lot of money is involved."
Citing AIOmen, a demonstration of AI-powered prediction markets, Buterin says AI is already being used to predict markets. These AI bots are cost-effective too, working for less than $1 per hour while armed with an encyclopedic knowledge base.
"If you make a market, and put up a liquidity subsidy of $50, humans will not care enough to bid, but thousands of AIs will easily swarm all over the question and make the best guess they can," Buterin explains.
He adds that once prediction markets are perfected, the technology could be applied to a broad range of questions, including whether certain dApps are scams and whether wallets and accounts are legitimate.
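Buterin's point about thousands of AIs swarming a $50 subsidy can be sketched as a simple expected-value rule: a bot bids whenever its probability estimate gives it an edge over the market price that clears its tiny operating cost. All numbers and function names below are hypothetical.

```python
# Toy sketch of an AI bot deciding whether a subsidized prediction
# market is worth bidding on. Stake sizes, costs, and probabilities
# are made-up illustrations.

def expected_profit(estimate: float, market_price: float, stake: float) -> float:
    """EV of buying YES shares at market_price if the true probability is `estimate`."""
    # Each YES share pays 1 unit if the event resolves true; costs market_price now.
    shares = stake / market_price
    return shares * estimate - stake

def should_bid(estimate: float, market_price: float,
               stake: float = 10.0, operating_cost: float = 0.01) -> bool:
    """Bid only if the expected edge clears the bot's (tiny) cost of acting."""
    return expected_profit(estimate, market_price, stake) > operating_cost

# A bot that thinks the event is 70% likely, facing a market priced at 55%:
print(should_bid(0.70, 0.55))
```

Because `operating_cost` is cents rather than dollars, even small mispricings are worth acting on, which is exactly why a modest subsidy can attract a swarm of bots where no human would bother.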
Interface to the Game
Security is crucial to digital assets, yet it remains an area the industry struggles with. Buterin highlights a couple of mechanisms already deployed by the industry, including MetaMask's scam-detection feature and Rabby's transaction-simulation feature.
He suggests that these security tools "could be super-charged with AI," offering a "much richer human-friendly explanation of what kind of dapp you are participating in." AI can help the user understand what exactly they are signing and whether the project is genuine.
However, he also warns that "pure AI interfaces are probably too risky at the moment as it increases the risk of other kinds of errors." For example, scammers would have access to the same AI assistants as users, and could manipulate them into bypassing scam detection.
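One way to read this hybrid design is that deterministic rules make the security decision while an AI layer only phrases the result for the user. The sketch below is purely illustrative: the blocklist entry, the flag rules, and the thresholds are all made up, and the AI explanation step is left as a comment rather than a real model call.

```python
# Illustrative sketch of a wallet security check where rule-based logic
# decides and an AI would only explain. All names and rules are hypothetical.

KNOWN_SCAM_DOMAINS = {"examp1e-airdrop.xyz"}  # made-up blocklist entry

def rule_based_flags(tx: dict) -> list:
    """Deterministic checks a wallet might run before signing."""
    flags = []
    if tx.get("domain") in KNOWN_SCAM_DOMAINS:
        flags.append("domain is on a community scam blocklist")
    if tx.get("method") == "approve" and tx.get("amount") == 2**256 - 1:
        flags.append("requests an unlimited token approval")
    return flags

def review_transaction(tx: dict) -> str:
    flags = rule_based_flags(tx)  # the deterministic checks decide
    if not flags:
        return "No rule-based warnings."
    # An AI model could rephrase these flags in richer, human-friendly
    # language here -- but it would explain the verdict, not override it.
    return "WARNING: " + "; ".join(flags)

print(review_transaction({"domain": "examp1e-airdrop.xyz",
                          "method": "approve", "amount": 2**256 - 1}))
```

Keeping the decision in deterministic code is one way to limit the manipulation risk Buterin raises: a scammer can phrase prompts however they like, but cannot talk a blocklist out of matching.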
Rules of the Game
Starting with a disclaimer that this application of AI is the "most risky," Buterin warns that "we need to tread the most carefully" when implementing AI in rule-making.
Buterin explains that there is already excitement about AI judges in the real world, and blockchain developers are looking to bring similar capabilities to smart contracts and DAOs.
He warns that such machine learning is going to present a tough challenge because: "if an AI model that plays a key role in a mechanism is closed, you can't verify its inner workings, and so it's no better than a centralized application. If the AI model is open, then an attacker can download and simulate it locally, and design heavily optimized attacks to trick the model, which they can then replay on the live network."
Buterin believes that zero-knowledge proofs and cryptography cannot fully solve this issue either. AI is already computationally intensive, so Buterin questions whether pushing AI inside cryptographic black boxes is even viable. Additionally, optimized attacks against AI models can be crafted without in-depth knowledge of the model's internal workings, yet if too much is hidden, "you risk making it too easy for whoever chooses the training data to corrupt the model with poisoning attacks."
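The open-model attack Buterin describes can be illustrated with a toy example: once the model is downloadable, an attacker can query it locally without limit and hill-climb toward an input that flips its verdict, then replay that input on the live network. The "model" below is a made-up linear scorer, not a real ML system.

```python
# Toy illustration of attacking an open (downloadable) model offline.
# The attacker needs no insight into the weights -- only the ability
# to query the model repeatedly, which openness guarantees.

import random

def open_model_score(features: list) -> float:
    """Stand-in for a published model that flags inputs scoring above 0.5."""
    weights = [0.8, -0.3, 0.5]  # hypothetical published weights
    return sum(w * f for w, f in zip(weights, features))

def optimize_attack(start: list, steps: int = 5000) -> list:
    """Locally perturb the input until the model no longer flags it."""
    rng = random.Random(0)  # fixed seed so the run is reproducible
    x = list(start)
    for _ in range(steps):
        if open_model_score(x) <= 0.5:
            break  # verdict flipped: evading input found
        i = rng.randrange(len(x))
        candidate = list(x)
        candidate[i] += rng.uniform(-0.1, 0.1)
        if open_model_score(candidate) < open_model_score(x):
            x = candidate  # keep perturbations that lower the score
    return x

malicious = [1.0, 0.0, 1.0]            # scores 1.3: flagged by the model
evasion = optimize_attack(malicious)   # locally optimized to evade
print(open_model_score(evasion) <= 0.5)
```

Run offline, this search costs the attacker nothing on-chain; only the final, already-optimized input is ever replayed against the live mechanism, which is precisely the asymmetry Buterin warns about.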
Objective of the Game
AI utility can extend beyond blockchains. Buterin points towards NEAR, which aims to become "a fully sovereign operating system that is equipped with a personal AI assistant that optimizes for users’ needs without revealing private information about the user’s data or assets," according to NEAR co-founder, Illia Polosukhin.
Buterin says that "trustworthy black-box AIs" can be used to iron out system bias or cheating, working towards a democratic governance for systemically-important AIs. "Cryptographic and blockchain-based techniques could be a path toward doing that," he writes.
Decentralized AI with a natural kill switch would also address concerns about bad actors using AI for malicious purposes.
Overall, Buterin is optimistic about the relationship between blockchain and AI. In particular, he believes the most promising applications are those where individual players become AIs while the mechanism itself continues to operate at a micro scale.
The biggest challenge is using blockchains and cryptographic techniques to create a single decentralized trusted AI. "These applications have promise, both for functionality and for improving AI safety in a way that avoids the centralization risks associated with more mainstream approaches to that problem," Buterin writes, adding that there are many ways the underlying assumptions could fail.