BTC & AI: When Immutable Money Meets Infinite Intelligence
Something is happening at the intersection of two technologies that most people still think of as unrelated. Bitcoin is becoming load-bearing infrastructure for the AI economy.
I. The Situation - P2P 4 M2M: Peer-to-Peer Money for the Machine-to-Machine Economy
Something is happening at the intersection of two technologies that most people still think of as unrelated. Bitcoin—the asset that “serious” technologists dismissed as a speculative toy—is becoming load-bearing infrastructure for the AI economy. Not because anyone planned it that way, but because the technical requirements of autonomous AI systems happen to align, almost eerily well, with what Bitcoin and its surrounding infrastructure were built to provide.
This is not a crypto advocacy piece. What I’m interested in is a more prosaic question: What happens when AI systems need to transact economically, and the existing financial system wasn’t built for them?
The answer, I will argue, is that we are witnessing the early stages of a convergence that will reshape both industries. AI is the intelligence that powers the machine-to-machine (M2M) economy; Bitcoin provides the peer-to-peer (P2P) global monetary network that enables economic agency. Neither technology is complete without something like the other. And the infrastructure decisions being made right now—often by former Bitcoin miners pivoting to AI compute—will determine who controls the physical layer of machine intelligence for the next decade.
The situation is more interesting than most people realize.
II. The Timeline We’re In
Before getting into the technical arguments, I want to establish a historical observation that frames everything else.
In 2021, Bitcoin’s price ran from approximately $29,000 in January to an all-time high near $69,000 in November. This wasn’t just a speculative event. It had real-world consequences for physical infrastructure. **Bitcoin mining companies—flush with capital and confident in continued growth—went on a land and power acquisition spree. They signed long-term power purchase agreements, secured grid interconnections, built out substation capacity, and purchased the long-lead electrical equipment (transformers, switchgear) that would take years to replace.**

**This happened twelve to twenty-four months before the launch of ChatGPT in November 2022.**
The significance of this timing cannot be overstated.
When the AI boom arrived—when it became clear that training and inference would require power at a scale that would strain continental grids—the scramble for capacity began. Hyperscalers like Microsoft, Google, and Amazon started competing for every available megawatt. Wait times for new grid connections stretched to four, five, six years. The bottleneck wasn’t chips or talent. It was electricity, and the physical infrastructure to deliver it.
But in this timeline—our timeline—a distributed network of Bitcoin miners had already secured significant power capacity. They owned land with grid access. They had relationships with utilities. They possessed operational expertise in running power-intensive facilities at scale. And crucially, their infrastructure wasn’t locked into long-term contracts with hyperscalers. It was available.
Consider the counterfactual. Had the 2021 Bitcoin boom not occurred—had miners not been capitalized to pursue aggressive infrastructure expansion—the power acquisition would have happened later, and it would have been dominated by a smaller number of large players. Microsoft, Google, Amazon, and perhaps a handful of well-funded AI labs would have consolidated control over the physical substrate of AI compute. The number of entities with the capability to train frontier models would have been even smaller than it is today.
Instead, we find ourselves in a world where former Bitcoin mining companies control gigawatts of energized capacity that hyperscalers desperately need. CleanSpark beat Microsoft for a 100-megawatt site in Cheyenne, Wyoming—not because CleanSpark has a stronger balance sheet, but because they could energize the facility in six months rather than three to six years. Hut 8 signed a $7 billion, fifteen-year lease with Fluidstack for AI compute capacity. Core Scientific, which filed for bankruptcy in 2022, has restructured and emerged as a leading AI infrastructure provider, having been acquired by CoreWeave. MARA is developing “mullet data centers” that run Bitcoin mining in the back and AI inference in the front.
The players who will control significant AI compute capacity in 2028 are, to a surprising degree, players who got their start mining Bitcoin. And these players have an affinity for—and operational familiarity with—decentralized, permissionless systems. This is not a coincidence. It’s path dependence of a consequential kind.
The strategic question is what this path dependence implies for the structure of the AI industry, and for the relationship between machine intelligence and machine money.
III. The Problem of Machine Money
The core problem is this: AI agents cannot participate in the traditional financial system.
This sounds like a technical or regulatory limitation that might be solved with better APIs or clearer legal frameworks. It is not. It is a category error embedded in the architecture of modern finance.
Banks verify identity. They perform Know Your Customer checks. They assess creditworthiness based on income, assets, employment history. They issue accounts to legal persons—humans and registered corporations—who can be held accountable under law. An AI agent is none of these things. It has no identity in the legal sense. It cannot sign a contract that binds it. It cannot be sued, fined, or imprisoned. The entire apparatus of financial accountability assumes a human (or human-controlled entity) at the end of every transaction chain.
Current workarounds treat the agent as a delegate of a human principal. The human opens an account, obtains API keys, deposits funds, and authorizes the agent to transact on their behalf. This is how AI systems access payment rails today—through their operators’ credentials.
But this architecture has consequences.
First, it creates chokepoints. Every agent’s economic activity flows through a human-controlled account, which can be suspended, frozen, or rate-limited. The agent’s autonomy is bounded by the human’s willingness to maintain the account in good standing. For agents operating at scale, or agents whose purposes might not align with their operator’s preferences (or the platform’s policies), this is a constraint.
Second, it limits agent-to-agent commerce. If Agent A wants to pay Agent B for a service, both transactions must route through human-controlled accounts. The payment goes: Agent A → Operator A’s account → bank transfer → Operator B’s account → Agent B. This is slow, expensive, and requires both operators to maintain accounts at compatible institutions. For machine-to-machine transactions at machine speeds, it’s absurd.
Third, it assumes the current principal-agent relationship will persist. Today’s AI systems are tools, operating under close human supervision. But, as I and others have argued, the trajectory of AI development points toward increasing autonomy. As agents become more capable—as they handle more complex, longer-horizon tasks with less human oversight—the model of “human operator with an AI assistant” starts to strain. At some point, the agent needs economic capacity that isn’t mediated through a human’s willingness to top up the prepaid balance.
The financial industry is aware of this problem and is attempting to solve it within existing frameworks. Google’s Agent Payments Protocol (AP2), announced in September 2025, provides cryptographic proof that a user authorized an agent to make a specific purchase. Visa’s Trusted Agent Protocol allows merchants to verify that an AI agent is legitimate. Mastercard and Stripe have announced similar initiatives.
These are genuine advances. They make agent commerce technically viable within the current system. But they don’t change the fundamental architecture. Every transaction still resolves to a human principal. The agent remains a delegate, not an economic actor in its own right.
The question is whether that architecture can scale to a world where millions of AI agents are transacting continuously, where the ratio of machine-to-machine transactions to human-to-human transactions inverts, where the “payment” is a sub-cent micropayment for an API call that happens a thousand times per second.
The existing financial system was not built for this. It was built for humans, clicking “buy” buttons, at human timescales.
IV. Bitcoin as Native Machine Money
Bitcoin offers a different architecture. The account is the private key. Possession of the key is complete, unmediated control over the funds. There is no identity verification because there is no identity layer. There is no permission because the network is permissionless. An AI agent can generate a Bitcoin address—a valid, fully functional account—using nothing but cryptographic operations. No registration, no approval, no human in the loop.
This is often described in terms of “censorship resistance” or “financial freedom,” which gives it a political valence that obscures the technical point. The technical point is that Bitcoin’s architecture treats the ability to sign a transaction as sufficient authorization. The network doesn’t ask who is signing. It asks whether the signature is valid.
For an AI agent, this is transformative. The agent can:
- Create wallets autonomously, without API keys or operator approval
- Receive payments from any source, including other agents
- Make payments to any destination, without platform intermediation
- Hold value across time, without counterparty risk from a financial institution
- Operate across jurisdictions, using the same infrastructure everywhere
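To make the first of these points concrete: a valid Bitcoin keypair requires nothing beyond cryptographic operations an agent can run locally. The sketch below is illustrative, not production wallet code—it derives a compressed secp256k1 public key from a random private key using only the Python standard library, and it omits the further HASH160 and Base58Check/bech32 encoding steps that turn a public key into a spendable address, as well as any secure key storage.

```python
import hashlib  # imported for parity with real wallet code (HASH160 step, omitted here)
import secrets

# secp256k1 domain parameters (SEC 2 standard curve used by Bitcoin)
P  = 2**256 - 2**32 - 977  # field prime
N  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(p, q):
    """Add two points on the curve; None is the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P      # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point=(Gx, Gy)):
    """Double-and-add scalar multiplication: k * G."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def compressed_pubkey(priv: int) -> str:
    """33-byte compressed public key as hex: parity prefix + x-coordinate."""
    x, y = scalar_mult(priv)
    prefix = '02' if y % 2 == 0 else '03'
    return prefix + format(x, '064x')

# A brand-new "account", created with no registration and no permission:
priv = int.from_bytes(secrets.token_bytes(32), 'big') % N or 1
print(compressed_pubkey(priv))
```

Possession of `priv` *is* the account: anyone holding that integer can sign spends, and no one else can.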
The obvious objection is volatility. Bitcoin’s price fluctuates significantly against fiat currencies. An agent holding Bitcoin for operational expenses faces the risk that its purchasing power will change between receiving and spending. This is a real cost, and it explains why most discussions of “crypto payments” stall at the point where businesses need a stable unit of account.
But the architecture is flexible. Tether announced in January 2025 that USDT would be available on the Bitcoin Lightning Network via the Taproot Assets protocol. This means dollar-denominated stablecoins can move over Bitcoin infrastructure—instant, low-cost, global—while maintaining price stability. The agent holds dollars (technically, claims on dollar reserves held by Tether), but transacts over Bitcoin rails.
Whether this is “really Bitcoin” is a semantic question I don’t find interesting. What matters is that the infrastructure built for Bitcoin—the Lightning Network nodes, the L402 payment protocol, the wallet software, the liquidity pools—can now carry both volatile Bitcoin and stable dollar-equivalent assets. The agent can choose based on its needs: Bitcoin for long-term holdings or censorship-resistant transactions, stablecoins for operational liquidity and predictable budgeting.
The Lightning Network deserves particular attention. Bitcoin’s base layer processes roughly seven transactions per second—wholly inadequate for machine-to-machine commerce at scale. Lightning is a “layer 2” protocol that enables instant, near-zero-fee payments by maintaining payment channels off-chain and settling to Bitcoin only when necessary. As of late 2025, Lightning is estimated to have over 100 million wallet users, and Cloudflare reports processing over one billion HTTP 402 responses per day.
The L402 protocol—formerly called LSAT—combines Lightning payments with HTTP authentication. A server can respond to a request with “402 Payment Required” and a Lightning invoice. The client pays the invoice (milliseconds, fractions of a cent), receives a cryptographic receipt, and resubmits the request with the receipt as authentication. The payment is the credential. No API keys to manage, no accounts to create, no rate limits to negotiate. Pay per request.
For AI agents, L402 is close to the ideal payment primitive. An agent querying a specialized AI for market analysis, or requesting compute from a GPU marketplace, or accessing a proprietary dataset, can pay for exactly what it uses, at the moment of use, with no human intermediation. The cost of spam or abuse is borne by the abuser, automatically, without any centralized moderation system.
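The request/pay/retry loop can be sketched as a toy in-process simulation. The invoice and receipt formats below are stand-ins, not real BOLT11 invoices or L402 macaroons; what the sketch preserves is the core cryptographic idea, which L402 inherits from Lightning: paying an invoice reveals a preimage whose hash matches the invoice, and that preimage serves as the proof of payment.

```python
import hashlib
import secrets

class ToyServer:
    """Stands in for an HTTP API that gates responses behind a 402 challenge."""
    def __init__(self):
        self._preimages = {}  # payment_hash -> preimage, for issued invoices

    def issue_invoice(self):
        preimage = secrets.token_bytes(32)
        payment_hash = hashlib.sha256(preimage).hexdigest()
        self._preimages[payment_hash] = preimage
        return payment_hash  # stands in for a BOLT11 invoice

    def handle(self, request, receipt=None):
        if receipt is None:
            # No proof of payment: challenge the client.
            return 402, {"invoice": self.issue_invoice()}
        payment_hash, preimage_hex = receipt
        # A valid receipt is a preimage that hashes to the invoice's payment hash.
        if hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest() == payment_hash:
            return 200, {"data": f"result for {request}"}
        return 401, {"error": "bad receipt"}

def toy_lightning_pay(server, payment_hash):
    # Settling a Lightning payment reveals the preimage to the payer;
    # here we just look it up directly.
    return server._preimages[payment_hash].hex()

def client_fetch(server, request):
    """Agent-side loop: request, pay the 402 challenge, retry with the receipt."""
    status, body = server.handle(request)
    if status == 402:
        payment_hash = body["invoice"]
        preimage = toy_lightning_pay(server, payment_hash)
        status, body = server.handle(request, receipt=(payment_hash, preimage))
    return status, body

print(client_fetch(ToyServer(), "GET /analysis"))
```

No account, no API key: the payment itself authenticates the request, which is exactly the property that makes the flow workable for autonomous agents.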
This is what Hashcash was supposed to enable in 1997—computational cost as a spam deterrent—but couldn’t, because there was no way to convert computational cost into economic cost that could be transferred. Bitcoin provides that conversion. Lightning makes it fast and cheap enough to be practical.
V. Timestamping the Truth
“AI Makes Things Fake. Crypto Makes Them Real Again” — Balaji Srinivasan
The problem of machine money is not the only intersection between Bitcoin and AI. There is a second, equally important convergence around the problem of provenance.
As AI systems become capable of generating realistic images, video, and audio, the question “Is this real?” becomes increasingly unanswerable through inspection alone. Deepfakes are no longer theoretical concerns. In 2024, at least 133 documented disinformation campaigns used deepfake technology, impacting more than 30% of countries that held elections that year. Criminals use voice cloning to impersonate executives and authorize fraudulent wire transfers. The cost of generating a convincing fake has collapsed; the cost of verification remains high.
Detection-based approaches—training AI to spot AI-generated content—face a fundamental asymmetry. The generator and detector are in an adversarial relationship, and the generator has an advantage: it only needs to fool the detector once, while the detector must be right every time. Detection models that work today may not work tomorrow, as generation techniques improve.
The alternative is provenance-based authentication. Rather than asking “Does this content look fake?”, ask “Can we verify when and where this content originated?” If you can establish that a photograph was captured by a specific camera at a specific time, and that it has not been altered since, the question of whether it “looks” AI-generated becomes irrelevant.
Bitcoin provides the most robust timestamping infrastructure available. OpenTimestamps, launched in 2016, allows anyone to create a proof that a piece of data existed at a specific point in time, anchored to the Bitcoin blockchain. The proof is a cryptographic commitment: a hash of the data is embedded in a Bitcoin transaction, and once that transaction is confirmed in a block, the data’s existence prior to that block is provable to anyone with access to the blockchain. No trusted third party required. No company to go bankrupt or change its terms of service. The proof is valid as long as the Bitcoin blockchain exists.
The technical mechanism is elegant. Your data is hashed. That hash is combined with other hashes in a Merkle tree—a data structure that allows efficient proof of inclusion. The Merkle root is embedded in a Bitcoin transaction, often using the OP_RETURN opcode that allows arbitrary data in a transaction without affecting Bitcoin’s UTXO set. Once the transaction confirms, you receive a proof file that contains the path from your data’s hash to the Merkle root, the Merkle root to the transaction, and the transaction to the block header. Anyone can verify this path without trusting the calendar server that aggregated the timestamps.
The security guarantee is strong. To falsify a timestamp, an attacker would need to rewrite Bitcoin’s blockchain history—which requires controlling more than half of the network’s hash power, an attack that becomes exponentially more expensive with each confirming block. Six confirmations (about an hour) represents approximately $500 million in mining costs. A timestamp with a year of confirmations is, for practical purposes, unforgeable.
This matters for AI in multiple ways.
First, content provenance. News organizations, identity verification services, and platforms concerned about deepfakes can timestamp original content at creation. The timestamp doesn’t prove the content is authentic (a staged photo is still staged), but it proves the content existed in its current form at a specific time. Combined with secure capture devices—cameras that sign images at the point of capture—this creates a chain of custody from reality to publication.
Second, model provenance. As AI models become economically and legally consequential, the question “Which version of this model made this decision?” becomes important for auditing, liability, and regulatory compliance. Timestamping model weights and training data snapshots creates an immutable record of what the model was at specific points in time.
Third, agent provenance. An autonomous AI agent could timestamp its own decision logs, creating an auditable trail of its actions that persists independently of its operator. If the agent is later accused of making an unauthorized decision, the timestamp record provides evidence of what the agent decided and when—evidence that the operator cannot retroactively falsify.
The Digital Chamber, an industry association, published a report in 2025 arguing that blockchain-based provenance is the most robust defense against deepfakes, superior to centralized approaches like C2PA (the Coalition for Content Provenance and Authenticity). C2PA watermarks are often lost when files are reformatted. C2PA audit trails are hosted by centralized entities—Adobe, Microsoft, Google—who have all suffered data breaches. Blockchain-based provenance has neither vulnerability.
I want to be precise about what this does and does not accomplish. Timestamping proves existence at a time. It does not prove truth or authenticity in any deeper sense. A deepfake can be timestamped just as easily as a genuine photograph. What timestamps provide is a ground truth for investigating claims. If someone claims a video was recorded in 2024, and the video has no timestamp proof prior to 2025, that’s evidence (not proof) of fabrication. If a photograph has a timestamp from a device with a secure capture chain, that’s evidence (not proof) of authenticity.
In a world where AI-generated content is indistinguishable from real content by inspection, evidence of this kind becomes essential.
VI. The Energy Arbitrage
The third intersection is infrastructural, and it returns us to the timeline observation with which I began.
Bitcoin mining is, at its core, a competition to convert electricity into cryptographic security. Miners purchase hardware, secure power contracts, and race to find valid blocks. I co-founded a leading liquid cooling infrastructure company in the Bitcoin space and I can tell you: the economics of Bitcoin mining are brutal. Electricity is the dominant operating cost, and the network’s difficulty adjusts to ensure that mining is marginally profitable for the most efficient operators and unprofitable for everyone else. This creates relentless pressure to find the cheapest available power.
Over the past decade, this pressure has made Bitcoin miners into specialists in power acquisition. They have expertise in negotiating with utilities, understanding grid constraints, managing intermittent renewable generation, and operating facilities in remote locations where power is cheap precisely because no one else wants it. They have secured power purchase agreements, built substations, and stockpiled transformers—the long-lead equipment that now has multi-year delivery times.
AI compute has a different demand profile but similar infrastructure requirements. Training large language models requires sustained access to thousands of GPUs, each drawing hundreds of watts, for weeks or months at a time. Inference at scale—serving billions of queries per day—requires even more power, distributed across data centers worldwide. Both require robust cooling, reliable connectivity, and the same electrical infrastructure that Bitcoin miners have already built.
The opportunity is obvious, and the market has recognized it.
MARA (formerly Marathon Digital), one of the largest Bitcoin miners, now describes its mission as harnessing “massive volumes of low-cost power and channeling them toward their most productive use cases, whether that be Bitcoin mining where load flexibility is key, or AI where lowest cost per token is key.” The company is building facilities that combine mining and AI compute, using Bitcoin as a “flexible load” that can absorb excess power when AI demand is low and curtail when AI demand is high.
This is the “mullet data center” concept: Bitcoin in the back, AI in the front. The Bitcoin mining monetizes power capacity that would otherwise sit idle during AI buildout. It provides revenue while the more complex infrastructure for AI—redundant networking, advanced cooling, compliance certifications—is constructed. Once the AI capacity is ready, the mining can scale down or relocate, and the facility transitions to its higher-margin use case.
CleanSpark’s CEO described the advantage in stark terms: “Bitcoin miners are uniquely positioned in that we have the ability to build out and energize data centers very rapidly.” Traditional data center construction takes three to six years. Bitcoin miners can energize a site in six months. The difference is that miners tolerate interruptible power, simpler networking, and less redundancy—they’re optimizing for hash rate per dollar, not five-nines uptime. But the underlying infrastructure—the substation, the land, the utility relationship—is the same.
The investment thesis has already shifted. Needham & Company analyst John Todaro noted in late 2025 that “investors are almost exclusively valuing Bitcoin miners for their HPC/AI opportunities at this point… Capital markets are rewarding AI-focused data centers with much higher multiples than traditional miners.” Miners trade at 6-12x EV/EBITDA; data center operators trade at 20-25x. Converting mining capacity to AI compute is a valuation arbitrage as much as an operational one.
The strategic implication is that significant AI compute capacity will be controlled by entities with deep roots in the Bitcoin ecosystem. These are not traditional hyperscalers with trillion-dollar market caps and lobbying operations in every jurisdiction. They are companies that built their expertise running permissionless infrastructure, often in regulatory gray zones, optimizing for efficiency over compliance. They understand decentralized systems at an operational level that most FAANG employees do not.
Whether this results in more decentralized AI compute—or simply a different set of centralized players—remains to be seen. But the infrastructure path is set. The power has been secured. The question is what gets built on top of it.
VII. The Uncomfortable Question
There is an argument I have been avoiding, and it is the one that makes technologists most uncomfortable.
Bitcoin offers AI agents something that no traditional financial system offers: economic sovereignty. The ability to hold and transfer value without permission from any human institution. The ability to operate across jurisdictions without complying with any particular jurisdiction’s rules. The ability to persist economically even if the agent’s operator is sanctioned, bankrupt, or coerced.
For proponents of AI safety, this should be concerning. The standard safety framework assumes that AI systems operate within human-controlled constraints—that we can always “pull the plug,” whether that means shutting down compute, cutting off funding, or instructing the system to stop. Economic sovereignty undermines this assumption. An AI agent with its own Bitcoin holdings is harder to shut down than one dependent on an operator’s bank account. An agent that can pay for its own compute, hosting, and API access doesn’t need the operator’s continued cooperation.
I do not think this concern is paranoid or overblown. As AI systems become more capable, the question of control becomes more pressing, not less. Giving AI agents access to censorship-resistant money is, in a meaningful sense, giving them a degree of independence from human oversight.
But the argument has a flip side. Centralized control over AI funding creates its own risks. If AI development is concentrated in a small number of well-capitalized entities—entities that can be pressured by governments, shareholders, or internal politics—the direction of AI development is determined by whoever controls those entities. The same bottleneck that allows shutting down a rogue AI also allows shutting down a beneficial one, or steering development toward purposes that serve the controller rather than broader humanity.
The Bitcoin miners who now power a large share of AI infrastructure are not a small number of hyperscalers concentrated in the USA and exposed to regulatory capture. They are not nation-state actors pursuing strategic advantage. They are, for the most part, economically motivated actors who built capability in a permissionless system. Whether this is better or worse than the alternative depends on your threat model. Concentration creates single points of failure and control. Decentralization creates coordination problems and accountability gaps. There is no costless option.
I raise this not to resolve it, but to acknowledge it. The convergence of Bitcoin and AI is not merely a technical phenomenon with business implications. It is a structural shift in who has the power to develop, deploy, and constrain machine intelligence. The fact that this shift is happening as an emergent consequence of infrastructure decisions—rather than as a deliberate policy choice—makes it more important to understand, not less.
VIII. What This Doesn’t Mean
Having made the affirmative case, let me be clear about its limits.
Bitcoin does not make AI safe. The alignment problem—how to ensure AI systems pursue human values—is orthogonal to the payment rail they use. An aligned AI might benefit from economic autonomy; a misaligned AI might use it to resist shutdown. The technology is agnostic.
Most AI applications don’t need any of this. A customer service chatbot, a code completion tool, a recommendation engine—these systems operate within tightly controlled environments with no need for economic autonomy or censorship-resistant transactions. The relevant use cases are narrow: agents operating at scale, agents operating across jurisdictions, agents operating over long time horizons, agents whose operators want plausible deniability about their activities. These are not the majority of AI applications.
Scalability remains a constraint. Lightning Network handles more throughput than Bitcoin’s base layer, but it is not infinitely scalable. Payment channel liquidity is limited. Routing can fail for large payments. The infrastructure is better than it was five years ago, but it is not “solved.” For machine commerce at the scale of millions of agents making billions of transactions per day, further engineering is required.
Regulatory uncertainty is severe. How do you tax an AI agent’s Bitcoin income? Who is liable when an agent makes an unauthorized purchase? What jurisdiction governs a transaction between two agents operating in different countries? These questions have no answers today. They may have bad answers tomorrow—answers that make the entire use case illegal or impractical in major markets.
Volatility, even with stablecoins, has tail risks. USDT is backed by Tether’s reserves. Tether can freeze addresses—and has done so repeatedly. USDC is subject to US regulatory authority. “Stablecoins on Bitcoin rails” is not equivalent to “Bitcoin’s censorship resistance with dollar stability.” There are counterparty risks in any system that involves claims on real-world assets.
I am not arguing that Bitcoin is the inevitable, optimal, or risk-free solution for machine money. I am arguing that it is the most credible candidate available, that the infrastructure being built around it is substantial, and that the convergence with AI compute capacity creates strategic implications that most observers have not fully processed.
IX. Conclusion: The Inevitable and the Uncertain
Let me distinguish between two kinds of claims.
The inevitable: AI agents will need native digital money. As AI systems become more autonomous, more numerous, and more economically significant, the current model—where every agent transaction routes through a human operator’s account at a traditional financial institution—will prove inadequate. The transaction volumes will be too high, the latencies too slow, the permission requirements too cumbersome. Some form of machine-native payment infrastructure will emerge.
Bitcoin, and the infrastructure built around it, is the leading candidate for this role. Not because of ideological affinity, but because of architectural fit. Permissionless account creation. Programmable transactions. Global accessibility. Instant settlement via Lightning. Timestamping and provenance services. An established network with significant liquidity. The alternatives—purpose-built AI payment networks, government-issued CBDCs, corporate-controlled stablecoin systems—all require permission structures that the Bitcoin ecosystem does not.
The uncertain: Whether this convergence leads to broadly beneficial outcomes is not determined by the technology. It depends on who builds what, on what regulatory frameworks emerge, on how the competitive dynamics between centralized and decentralized AI development play out.
The timeline we are in—the one where Bitcoin miners secured power capacity before the AI boom, where former mining companies now control significant AI infrastructure, where the payments and provenance infrastructure for machine commerce is being built on Bitcoin rails—is one timeline among many that could have obtained. It is the timeline where the physical substrate of AI is somewhat more distributed, somewhat less controlled by hyperscalers, somewhat more accessible to actors who understand and value permissionless systems.
Whether this is the good timeline or the bad timeline is not a question I can answer. What I can say is that it is the timeline we are in, and the decisions being made now—about infrastructure, about protocols, about the relationship between machine intelligence and machine money—will shape the structure of AI development for the next decade.
The convergence is happening. The question is not whether to pay attention. The question is what to do about it.
The author would like to acknowledge that this analysis reflects the state of play as of late 2025. The technologies described are evolving rapidly, and specific claims about infrastructure, protocols, and market dynamics may be superseded by events.