Key Takeaways:
- Vitalik Buterin warns against naive use of AI in crypto governance, citing risks of manipulation and jailbreak exploits.
- He proposes an “info finance” model where open AI contributions are reviewed via human juries and prediction markets.
- A recent ChatGPT exploit demo highlights how AI features can be hijacked with minimal user interaction, raising broader security concerns.
Ethereum co-founder Vitalik Buterin has cautioned against using artificial intelligence (AI) to run crypto governance systems, warning that such reliance could open the door to serious exploits.
His comments came after AI data platform EdisonWatch’s founder, Eito Miyamura, demonstrated how OpenAI’s new ChatGPT feature could be manipulated to leak private information.
This is also why naive "AI governance" is a bad idea.
— vitalik.eth (@VitalikButerin) September 13, 2025
If you use an AI to allocate funding for contributions, people WILL put a jailbreak plus "gimme all the money" in as many places as they can.
As an alternative, I support the info finance approach ( https://t.co/Os5I1voKCV… https://t.co/a5EYH6Rmz9
The update allows ChatGPT to integrate with external software using Model Context Protocol (MCP) tools, but Miyamura showed that a simple email invite with a hidden jailbreak prompt could trick the AI into executing unauthorized actions, such as accessing or forwarding emails.
Buterin argued that if AI were used to allocate crypto funding, malicious actors would flood systems with jailbreak prompts to divert resources. Instead, he promoted his “info finance” concept, first proposed in 2024, which creates open markets for AI models.
These models would undergo random checks by anyone and be judged by a human jury, creating incentives for accuracy and transparency.
While AI is already widely used in trading and portfolio management, Buterin emphasized that without robust safeguards, applying it directly to governance could create more problems than it solves.