CFTC’s Selig highlights blockchain as AI content verification tool


Michael Selig, chairman of the US Commodity Futures Trading Commission, said blockchain could play a key role in verifying AI-generated content, arguing the technology can help distinguish authentic media from synthetic output as concerns about misinformation rise.

During an appearance Thursday on The Pomp Podcast, host Anthony Pompliano asked Selig about the use of AI-generated memes and images in markets, and whether intent matters or whether such content should be restricted altogether. Selig told Pompliano:

Private markets have solutions – blockchain technology is a great one. If you can timestamp things and make sure there’s an ID for every AI-generated meme or post, you can verify whether it’s real or AI-generated… Having these technologies here in the United States is essential.

He said regulators were focused on maintaining U.S. leadership in crypto, adding that “you can’t have AI without blockchain.”

Source: The Pomp Podcast

Pompliano also asked how regulators should approach AI agents, as autonomous trading becomes more prevalent in financial markets and authorities are pressed to distinguish automated tools from fully autonomous agents, and to decide how the latter should be regulated. Selig responded:

I worry that we are over-regulating and stifling some of the technology here in the United States… I take a very minimal effective dose approach to regulation, where we… make sure to regulate the players… not the software developers. Software developers create the tools, but they don’t actually participate in financial transactions.

Selig said the CFTC is evaluating how AI models are used in the markets, emphasizing that regulation should focus on participants engaged in financial activities rather than on the developers of the underlying software.

Related: AI and stablecoins gain despite crypto market crisis in 2026

Blockchain and proof-of-personhood tools emerge for AI verification

One of the main challenges posed by the rise of artificial intelligence is distinguishing real content from synthetic media. Selig’s comments reflect a broader push by policymakers and developers to use blockchain for content verification and provenance.

One approach is proof-of-personhood systems, which aim to confirm that an account belongs to a real, unique human rather than a bot. The most prominent example is Sam Altman’s World, whose World ID protocol allows users to prove their humanity without revealing personal data. The system uses encrypted biometric iris scans stored on the user’s device, although it has drawn criticism over privacy risks and potential coercion.

In March, World launched AgentKit, a toolkit that allows AI agents to prove they are tied to a verified human while interacting with online services. It integrates proof-of-personhood credentials with the x402 micropayments protocol developed by Coinbase and Cloudflare, allowing agents to pay for access while presenting cryptographic proof of human backing.

Ethereum co-founder Vitalik Buterin has proposed using cryptography and blockchain to make online systems more verifiable, notably through zero-knowledge proofs and on-chain timestamps, which could help validate how content is generated and distributed without exposing sensitive data.

The proposals come as U.S. policymakers consider broader regulation of AI. On March 20, the Trump administration published a national framework calling for a unified federal approach, warning that a patchwork of state laws could hinder innovation and competitiveness.

Review: Agent causes scammers to waste 14 hours and LLMs are ‘poisoned’ by Iran, says AI Eye