The Useful Life of AI Capital
- RCD
The hidden collision between static accounting, rapid capital deployment, and the AI innovation cycle.
TL;DR
The Accounting Divergence: Hyperscalers have quietly doubled server "useful life" estimates (from three to six years) to boost reported earnings, creating a dangerous disconnect from the reality of the accelerating two-year AI innovation cycle.
Economic Obsolescence: While older chips remain physically functional, they face a "structural margin collapse" against newer, more efficient architectures (like Blackwell). They become "zombie assets" that generate less revenue than their operating costs long before they are fully depreciated.
The Valuation Risk: Current financials rely on straight-line depreciation that ignores the "shark fin" economic lifecycle of AI hardware. Realigning depreciation to a realistic three-year timeline would likely contract hyperscaler operating margins but remove the fog.
Supply Chain Implication: Component durability must be recalibrated to relevance. There is no premium for engineering parts to survive a decade in a machine that becomes economically obsolete in three years. Instead, suppliers must shift focus to ensuring uncompromised reliability during the brief, critical "Tech Frontier Window."
Introduction
The change began quietly. It started not on a factory floor, but in the footnotes of a quarterly report. In January 2020, Amazon extended the useful life of its servers from three years to four. It was the beginning of a divergence between how long hardware is assumed to create value on paper and how quickly it is displaced in reality.
By July 2022, Microsoft had adjusted its estimate from four years to six. Google followed suit shortly after, locking in a six-year standard by 2023. Over a relatively short period, the industry consensus for server longevity doubled. This significantly lowered depreciation expenses and benefited the bottom line across the sector.
But the pendulum has swung too far. We are now seeing the first signs of a 'depreciation retreat,' where the aggressive extensions of the past three years are colliding with the hard physics of the AI frontier. The question is no longer how long a server can last, but how quickly it must be replaced.

Skeptics [most notably here, but also here and here] have flagged how these extended timelines polish the income statement, creating a widening gap between reported earnings and cash reality. Unsurprisingly, most sell-side analysts have remained deferential to hyperscaler management. However, a structural disconnect exists between the velocity of AI capital deployment, the innovation cycle, and the static, mechanistic application of standard accounting logic. This leaves the rationale for lengthening depreciation lives opaque. The failure to capture the true half-life of capital acts as a fog, obscuring what should be the single most crystalline economic opportunity of our time.
We uncovered this topic while performing a deep analysis for a client on the value created from extended capacitor lifetime in AI-related power supplies. That granular work revealed a macro truth: hardware lifetimes are shorter than the accountants are recognizing.
The Innovation Displacement
In the ruthlessly competitive domain of AI compute, innovation is an act of displacement. New architectures from Nvidia arrive roughly every 24 months, rendering previous generations less efficient and materially less relevant.
Jensen Huang, the CEO of Nvidia, recently revealed at CES 2026 that the upcoming Rubin architecture will slash inference costs by 10x and cut the GPU requirement for training by 4x. With that leap in performance, the Blackwell platform, itself a marvel just a year ago, was exposed as merely a basecamp on an ascent with no summit. The irony is brutal. Blackwell racks have not yet been installed long enough to produce their first commercial AI models. While the ledgers now anticipate a six-year lifespan, the AI innovation cycle is proving to be much shorter.

The main counterargument to this inconsistency is that older chips will simply migrate to less demanding inference tasks. However, inference is rapidly riding down the price elasticity curve and becoming a commodity market sensitive to power efficiency. An older AI processor, such as Ampere, may technically run a model, but it faces a margin collapse. As the market clearing price for inference is increasingly set by the most efficient architecture (Blackwell), the revenue an older chip generates often collapses below its own operating costs. It becomes a "zombie asset" because it is uncompetitive, effectively turning electricity into financial losses.
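To make that collapse tangible, here is a minimal sketch in Python. Every number in it (electricity price, throughput, the market-clearing token price, hourly overhead) is a hypothetical assumption for illustration, not a measured figure:

```python
# Illustrative sketch of inference margin collapse for an older GPU generation.
# All figures below are hypothetical assumptions, not measured data.

ELECTRICITY_PRICE = 0.08           # $/kWh, assumed all-in data-center rate
MARKET_PRICE_PER_M_TOKENS = 0.10   # $ per million tokens, assumed to be set
                                   # by the most efficient (newest) architecture

def hourly_margin(tokens_per_sec: float, power_kw: float, overhead_per_hour: float) -> float:
    """Revenue minus operating cost for one GPU-hour of inference."""
    tokens_per_hour = tokens_per_sec * 3600
    revenue = (tokens_per_hour / 1e6) * MARKET_PRICE_PER_M_TOKENS
    cost = power_kw * ELECTRICITY_PRICE + overhead_per_hour
    return revenue - cost

# Hypothetical profiles: a current-generation part vs. a two-generation-old part.
new_gen = hourly_margin(tokens_per_sec=8000, power_kw=1.2, overhead_per_hour=0.60)
old_gen = hourly_margin(tokens_per_sec=900, power_kw=0.7, overhead_per_hour=0.40)

print(f"new-generation margin per GPU-hour: ${new_gen:+.2f}")  # positive
print(f"old-generation margin per GPU-hour: ${old_gen:+.2f}")  # negative: a zombie asset
```

The specific numbers do not matter; the structure does. Once the market price tracks the newest chip’s efficiency, the older chip’s revenue line falls while its power and overhead lines do not.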
The Replacement-Intent Test
We appreciate that hyperscaler management teams are making complex judgments about the economic durability of their servers in a rapidly changing environment. Yet, relying on that second “inferencing act” obscures a critical danger: In the AI era, obsolescence often arrives long before the depreciation tables allow.
Accounting standards have long made a distinction between physical operability and economic usefulness. Assets are not depreciated based on how long they can physically operate, but rather on how long they are expected to contribute meaningfully to the business. In slower-moving industries, that judgment can be expressed through forecasts and impairment tests. In fast-moving technology environments, the same judgment is revealed more simply through capital allocation behavior. In every business, capital allocation follows perceived value.
Rational capital allocators do not begin a "useful life" discussion by asking how long a machine can physically run. They start by asking where value is actually created. Assets age, not when they stop working, but when they stop mattering. The most revealing question is deceptively simple: If this asset disappeared tomorrow, would you pay its current book value to acquire it strictly for the job it is doing today?
This "replacement-intent test" is not a standard accounting procedure. Rather, it is a forensic adjustment lens, analogous to capitalizing R&D. The test cuts through forecasts and spreadsheets by revealing the implicit decision management teams attempt to model. If management would replace the asset with the same technology at that specific “book” price, it remains economically useful. If they would upgrade, discontinue the activity, or quietly let it go, the asset’s useful life has already ended.
Back to the AI Frontier
When we apply the replacement-intent test to the AI factories of today, the six-year assumption begins to strain against reality. If a superpod of Ampere DGX servers were lost to a fire inside an Azure data center tomorrow, would Microsoft actually deploy capital to replace it with an identical unit? The answer is very likely no. That capital would bypass the past entirely and flow directly toward Rubin.
This is the calculus of the AI innovation cycle: It does not matter how many hours CoreWeave can bill for a Hopper GPU or whether its capacity utilization remains high. Nor does it matter if Nvidia is allowed to sell Hopper GPUs to China. In this environment, once the bulk of economic profit for the hyperscaler carrying the depreciation shifts to Blackwell, the older chips begin their immediate drift from the core to the margins.
Note that while our focus is on the Nvidia ecosystem for clarity, this logic applies equally to the custom silicon, or ASICs, developed by hyperscalers. Whether it is a GPU or a TPU, the physics of performance-per-dollar improvement dictates the economic lifecycle.
The Shark Fin
The innovation cycle and rapid CapEx growth expose a tension in current accounting for AI factories: specifically, the reliance on straight-line depreciation. This convention assumes economic utility is consumed at a fixed rate, mirroring physical wear and tear. It implies a server generates as much value in its sixth year as in its first. While some neoclouds holding long-term, fixed-price contracts might justify this stability, for the vast majority of AI factories, this assumption breaks down. GPU spot pricing offers a useful proxy for marginal economic value, revealing how quickly the market discounts older generations once superior architectures become available.

The economic lifecycle of an AI server follows a distinct "shark fin" curve. It begins with a sharp spike in value during the exclusive Tech Frontier Window, serving as a high-margin cash cow for approximately two years. The arrival of next-generation silicon then triggers obsolescence and a steep drop in economic value. Historically, that value would have crashed toward zero, but booming demand for inference has raised the asset’s price floor, transforming abrupt obsolescence into a long, but diminished, tail of utility.


When useful life estimates were three years, straight-line accounting was a tolerable proxy. Extending this to six years exposes a stark gap between accounting stability and the reality of AI technology cycles. The extension effectively pulls future profitability into the present. Hyperscalers capture the bulk of the economic benefit upfront while deferring recognition of some of the cost. The result is a mismatch that accumulates over time. By years 5 and 6, when the hardware generates much lower-quality revenue and has been relegated to the strategic margins, it will still carry significant depreciation charges, increasing the likelihood of paper losses.
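A minimal sketch makes the mismatch visible. The yearly "shark fin" shares below are illustrative assumptions about how fast economic value is consumed, not measured data:

```python
# Straight-line depreciation vs. a hypothetical "shark fin" value curve.
# The value-consumption shares are illustrative assumptions, not measured data.

COST = 100.0  # normalized acquisition cost of an AI server

def straight_line(cost: float, useful_life: int, horizon: int = 6) -> list[float]:
    """Annual depreciation expense under a straight-line schedule."""
    annual = cost / useful_life
    return [round(annual, 1) if year < useful_life else 0.0 for year in range(horizon)]

# Assumed share of lifetime economic value consumed each year: most of it during
# the ~2-year Tech Frontier Window, then a long, diminished inference tail.
shark_fin_shares = [0.45, 0.30, 0.12, 0.07, 0.04, 0.02]
value_consumed = [COST * share for share in shark_fin_shares]

for label, expense in (("6-year", straight_line(COST, 6)), ("3-year", straight_line(COST, 3))):
    gap = [round(v - e, 1) for v, e in zip(value_consumed, expense)]
    print(f"{label} schedule -> expense: {expense} | value consumed minus expense: {gap}")
```

Under the six-year schedule, years five and six still carry full depreciation charges against a sliver of remaining value; the three-year schedule concentrates the cost in the window where the value is actually extracted.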
For now, this drag is obscured by the sheer velocity of the AI build-out. As long as hyperscalers continue to deploy new clusters and stack fresh shark fins on top of one another, the margin contribution from new hardware mathematically overwhelms the growing weight of the old. But the stakes are rising. The massive performance multiples we see in training and inference are no longer coming from Moore's Law, but from complex system-level gains in power delivery, cooling, and interconnects. As a result, each successive generation is becoming structurally more expensive to build. With every generation, the bets get bigger.
If the pace of CapEx decelerates, hyperscalers could be saddled with significant book value, and the income statement would have to absorb billions of dollars in depreciation expenses for older GPUs that are no longer driving the topline.
Notably, in February 2025, Amazon shortened the useful life estimate for a portion of its servers, citing the "increased pace of technology development, particularly in the area of AI". While the company absorbed a $700M charge, the move partially reconciled the discrepancy between accounting and reality.
In a recent interview, Microsoft CEO Satya Nadella all but confirmed this fear. He admitted the company had to retreat from certain datacenter projects, citing concerns over being saddled with the depreciation weight of a single generation of AI processors. If we cut through the fog, his hesitation wasn't truly about depreciation. Depreciation, after all, is an adjustable accounting construct. It was about the ROI. He was acknowledging that the economic useful life of the hardware simply couldn't justify the capital outlay.
Conclusion: The Valuation Delta
A precise financial framework would match value extraction with depreciation, thereby aligning expenses with this volatility. Ideally, this demands an accelerated schedule mirroring the consumption of the economic value. Yet, even a return to a three-year timeline would vastly improve upon the current six-year standard. This adjustment compresses costs into the period of relevance, ensuring that when hardware enters its long tail, it survives not as a burden, but as a fully depreciated, pure profit pool.
Methodical investors should strip away the administrative assumptions embedded in 10-K filings and perform a forensic adjustment on discounted cash flow models. Follow the rhythm of the innovation cycle rather than the financial engineering of a spreadsheet. Reset the useful life to three years and observe the difference. That delta is one part of the AI bubble risk that should see the light of day.
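One hedged way to see that delta: hold revenue and other operating costs fixed and swap only the depreciation schedule. Every input below is hypothetical, not drawn from any filing:

```python
# A sketch of the "reset the useful life to three years" adjustment.
# Revenue, capex, and cost figures are hypothetical, not drawn from any filing.

def depreciation_in_year(capex_by_year: list[float], useful_life: int, year: int) -> float:
    """Straight-line depreciation recognized in `year` across all prior capex vintages."""
    return sum(capex / useful_life
               for vintage, capex in enumerate(capex_by_year[: year + 1])
               if year - vintage < useful_life)

capex_by_year = [40.0, 60.0, 90.0]  # $B of AI capex deployed in years 0, 1, 2 (hypothetical ramp)
revenue, other_opex = 300.0, 180.0  # $B in year 2 (hypothetical)

for life in (6, 3):
    dep = depreciation_in_year(capex_by_year, life, year=2)
    margin = (revenue - other_opex - dep) / revenue
    print(f"{life}-year useful life: depreciation ${dep:.0f}B, operating margin {margin:.1%}")
```

The size of the gap depends entirely on the capex ramp and revenue assumptions, but the direction is the point: the steeper the recent capex curve, the more a shortened useful life compresses reported margins.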
J.P. Morgan has provided a directional first pass at this difficult calculation. They project that aligning depreciation with reality would trigger operating margin contractions of 6% to 8% for most hyperscalers, with Oracle facing an even sharper contraction.
For the electronic supply chain, durability must be calibrated to relevance. As long as the innovation cycle maintains this velocity, there is no premium for engineering component technologies to survive a decade in a machine that becomes economically obsolete in three. Yet, the intensity of a compressed useful life demands perfection. The priority must shift from long-term endurance to uncompromised reliability during the Tech Frontier Window. The industry learned this lesson the hard way in early 2025, when cable backplane quality issues plagued the initial Blackwell ramp. That stumble didn’t just cause delays; it triggered an architectural pivot. The result is visible in Rubin, where the specific interconnect schemes responsible for the fragility are being minimized or designed out entirely.
We are fortunate to live in an extraordinary era of hardware technology where capital collides with physics. Our clients in the supply chain are being asked to make colossal bets to support an infrastructure build-out that rhymes with the great tech bubbles of the past. We do not envy the weight of those decisions. By illuminating the downstream useful life distortion, we hope to help our upstream clients make clearer decisions about their own technology strategy.
In the current AI age, a server’s economic prime, its true useful life, is confined to the Tech Frontier Window. Let’s ensure that we value the AI future correctly by accurately pricing the cost to get there.
Happy New Year!
