
Planning the Off-Ramp During the AI Boom

  • RCD
  • Oct 12
  • 9 min read

The AI super-cycle demands total focus and resource commitment. It is now being powered by a second wave of demand ignited by the sheer volume of AI video and storage needs. But the history of technology cycles shows that the promise of new technology is distinct from the investment opportunity. We are now witnessing a confidence game amplified by creative vendor and supplier financing, creating a dangerous entanglement of trade and equity transfers that multiplies risk. The contrarian defense is mandatory: plan for the collapse during the peak. This post details the analytical tools, starting with Strategic Diversification as an off-ramp and the CoWoS Sanity Check for measuring rational demand. It then outlines why the organizational firewall is the only sustainable defense to isolate AI risk and preserve long-term growth. You must ride this wave, but prudent planning requires implementing the off-ramp now.

The true costs of technology cycles are best understood by examining the wreckage they leave behind. By the end of 2003, Corning had laid off nearly half its workforce, roughly 20,000 people. The layoffs came in waves, creating a climate of fear and uncertainty. The company, one of three main suppliers of fiber optic cable during the dot-com boom, saw demand halt abruptly. To survive, Corning closed 11 factories globally. None of these closures were as symbolic as the shuttering of the Fall Brook Plant in Corning, NY. Operating for 72 years, the plant was a visible reminder that Corning was a factory town, and its closure signaled a massive structural decline, echoing the fate of old steel mills across the Rust Belt.

Today, a quarter-century later, the Tech hardware industry is being redefined by AI. The disparity in growth driven by AI is so vast and consequential that almost every other part of the industry has faded into the background. The technology is also changing rapidly, so much so that we are all in an AI time warp.

Over the last month, 2026 Capex projections at major hyperscalers vaulted from "similar to 2025" to double-digit growth. What initially appeared to be a standard memory down-cycle has abruptly corrected, sparking a new super-cycle and driving a radical re-evaluation of 2026 semiconductor forecasts.

The New Era of AI Slop

The catalyst for this sudden acceleration became apparent at the end of September, when OpenAI announced the release of its updated video app, Sora 2 (Meta released its own version in the same period). But you can’t appreciate the moment from a press release. You have to witness the first five seconds of a Sora 2 rendered output to feel the unsettling experience of having fast-forwarded through a decade of technological progress. It also fundamentally changed the AI growth story, signaling the immediate arrival of the "AI Slop" era, where the sheer volume of video generated by every consumer on the planet will become the driving force behind demand for cloud storage and the flash memory that supports it.


A quick Sora 2 video that took less than 30 seconds to create.


This impending explosion in data and memory demand is being amplified by three distinct trends:

  1. Non-Traditional Cloud Build-Out: The on-again, off-again gating of processor exports to sovereign AI nation-states will juice demand as the rapid cloud build-out now includes non-traditional hyperscalers.

  2. ASIC and Merchant Alternatives: Technical and competitive necessity is now driving the adoption of specialized ASICs and merchant alternatives to Nvidia.

  3. Inference Disaggregation: By separating the inferencing process into multiple specialized steps, each with its own optimized processor and memory interface, the architecture radically improves memory efficiency while simultaneously driving up overall silicon usage.

All of these trends point to continued double-digit growth of AI hardware through 2026. Memory supply will be tight across the whole industry because suppliers focused their investments on the lucrative High Bandwidth Memory (HBM) market, leaving traditional DRAM and NAND capacity fixed.

It is inevitable that a large portion of the increased hyperscaler capex spending will begin to flow into storage. And simply because of supply diversification, Nvidia’s share of total capex will have to fall even if the overall spending continues to rise. Our retainer clients have access to these estimates and our overall updated forecasts.


The growth trajectory of AI may still hold a few unexpected booster stages past 2026. While it’s not yet part of the mainstream dialogue, the persistent work on edge AI is clearly a layer of development that could trigger the next wave of expansion.

Systemic Risk

We remain extremely bullish on AI, driven by a profound belief in the technology’s sheer utility. However, this is where the narrative must confront a reckoning. The promise of a new technology is fundamentally separate from the actual investment opportunity, whether for a stockholder or a supply chain participant allocating capex.

Everything now feels levered to AI. Our internal pattern recognition engine is flashing red, warning of a potential bubble-bursting crisis reminiscent of the dot-com bust. If that earlier era taught us anything, it is that the upstream supply chain is uniquely vulnerable.


Is AI the next dot-com? We don’t know. When advising clients, we always include a "systemic risk" slide—the image of a car driving down a cloud-covered winding path with limited visibility. The only consolation is that every organization in the supply chain is on the exact same road. Indeed, the risk of missing out may be even greater than the systemic danger of over-investment.

But the experience of Corning during the dot-com crash should be making every decision-maker in the upstream AI component supply chain twitch. Bubbles are not a function of technology, or industry, or time. Rather, they are a function of human behavior, and as far as we know, humans haven’t changed.

A bubble is ultimately a confidence game. It deflates when the rate of profit growth cannot sustain investor willingness to continue making bets.  A single unexpected announcement (like DeepSeek earlier this year), a sudden geopolitical crisis (Rare Earth export controls?), or one key initiative that fails to pay off (OpenAI Triton?) could be the push that sends the confidence bucket tumbling over the cliff. The recent circuitous partnership announcements have dangerously sharpened the cliff’s edge.

A voracious reader of AI news will note that several sophisticated graphics illustrating the complex financial arrangements are currently circulating the web. See here, here, or here. Our modest diagram makes a singular point: these creative vendor and supplier financing arrangements are weaving a tangled web of trade and equity transfers.

One key conclusion from every post-mortem of previous bubbles is how financial leverage amplified the underlying risks. Though current AI deals are not precisely debt or securitization, they are a form of financial engineering. It aligns incentives, but it also multiplies risk. Value is transferred outside of normal trade, and it doesn't take much imagination to see how these arrangements cross into leverage-like territory once equity is used as collateral for further financing. It is an entanglement that already suggests a Shakespearean tragedy in the making, where intricate alliances inevitably lead to ruin.

The Contrarian Defense

The only sustainable defense for the upstream supplier is the contrarian strategy, enacted before the market's collective conviction begins to waver. The key here is to “hedge” the AI systemic risk. There are generally five ways to do that:

  1. Contractual Hedging

  2. Operational Hedging

  3. Strategic Diversification

  4. Informational Hedging

  5. Organizational Hedging

We can dismiss the utility of contractual and operational hedges for most component suppliers. Contractual hedges (like take-or-pay agreements or customer financing) are only viable if you have a privileged moat, which, unless you are TSMC, most organizations in the ultra-competitive electronics supply chain lack. Operational hedges (agile manufacturing) are pertinent to EMS and ODM partners, which can shift skilled labor to the product assembly lines where customer demand peaks. But they are not easily applicable to component fabrication, which requires high capital investment to increase capacity.

The viable hedges for survival and prosperity are found in diversification and informational hedges.

Strategic Diversification

Most business case studies look at this type of hedging as a conglomerate corporate strategy. The better approach is to find ways to leverage AI-specific technology into other sectors. For many component and material technologies, this is relatively straightforward, as AI hardware makers have often ridden the coattails of developments in other industries to reduce their own development time.

We see off-ramps in areas like power delivery, thermal management, and interconnects. For instance, the new high-voltage power bus for next-generation servers was specifically designed to utilize component technologies developed in the EV market, such as Silicon Carbide (SiC). Similarly, the advancement of mSAP PCB technology for GPU cards (originally for smartphones) will eventually open up the process for widespread use outside of high-end AI platforms.

Although diversification is a necessary defense, it has inherent limitations. Though sectors like EVs, 6G, IoT, and foldable smartphones chart a path of future growth, their momentum pales in comparison to the rapid innovation cycles and economic expansion of AI. This creates the true tension: any strategic move toward diversification is a necessary hedge, but it will likely subtract from the company's overall growth if crucial resources are siphoned away from the monumental AI opportunity.

Informational Hedging (The CoWoS Sanity Check)

The goal of informational hedging is to gain a proper view of downstream end demand and adjust capacity expansion based on those inputs. While often cited as a cure for the bullwhip effect, it only works if the demand is measured at the true point of value exchange. The true value exchange in AI happens far downstream between the service/model supplier and the end customer. And even though hyperscalers and AI providers are becoming increasingly confident in their business models, it is impossible to tell how much of the value generated is given away as consumer surplus.

Even back in 2000, Corning was trying to gather feedback from downstream customers, including WorldCom, MCI, and Global Crossing, about the true demand for fiber optic bandwidth. But internet usage was growing rapidly, and the whole downstream supply chain had projections that overestimated the actual demand.

However, the current AI boom differs in a way that may allow a proxy for measuring actual demand. Unlike previous cycles, the AI boom is inherently supply constrained by one major supplier and its advanced packaging technology: TSMC and CoWoS (and its variants). Although TSMC doesn’t disclose specifics, its commanding moat allows it to charge “temporary receipts” (capacity reservation prepayments) to customers as contractual hedges.

Therefore, an easy way to sanity check the rationality of capex expansion for other AI-related components is to derive the ratio of that component's consumption to CoWoS wafer consumption.

If TSMC has a capacity limit, then by extension, other support components have a demand limit. If the capacity of a component technology (e.g., SiC wafers, HBM, PCB area, 800Gbps OSFP) exceeds the total available CoWoS limit, it is likely that its suppliers have overshot real demand.
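
To make this sanity check concrete, here is a minimal Python sketch. Every figure in it (CoWoS wafer capacity, HBM stacks per wafer, planned HBM output) is a hypothetical placeholder, not an estimate; the point is the mechanics of the ratio.

```python
# Minimal sketch of the CoWoS sanity check (all figures are hypothetical placeholders).
# Idea: the advanced-packaging ceiling implies a ceiling on demand for supporting
# components; planned component capacity well above that ceiling suggests overshoot.

COWOS_WAFERS_PER_MONTH = 80_000       # assumed CoWoS-class capacity (TSMC plus OSATs)
HBM_STACKS_PER_COWOS_WAFER = 250      # assumed average HBM stacks consumed per wafer

implied_hbm_ceiling = COWOS_WAFERS_PER_MONTH * HBM_STACKS_PER_COWOS_WAFER

planned_hbm_capacity = 25_000_000     # assumed industry-wide HBM stack output per month

ratio = planned_hbm_capacity / implied_hbm_ceiling
print(f"Planned HBM capacity is {ratio:.2f}x the CoWoS-implied demand ceiling")
if ratio > 1.0:
    print("Warning: component capacity may be overshooting real demand")
```

The same ratio can be run for any supporting component (SiC wafers, PCB area, optical modules) by swapping in that component's per-wafer usage assumption.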

There are a few caveats. First, TSMC is offloading some CoWoS demand to OSATs, so it is worth accounting for that available capacity as well.

Second, there is an inherent risk that using the foundry as a proxy ignores bottlenecks external to the silicon supply chain, such as the physical limitations of power generation feeding into data centers. The model assumes power constraints are already factored into TSMC's demand signal, which is likely a safe assumption but admittedly may not be the case.

Third, the usage per CoWoS wafer could change with new designs, so it is important to understand the impact of design changes and the relative volume mix on overall demand. This analytical rigor is worth the effort when facing huge capital expenditure decisions.
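
That third caveat is easy to fold into the same check. Here is a minimal sketch of a mix-weighted usage assumption; the designs, per-wafer usage figures, and mix shares are all hypothetical.

```python
# Minimal sketch of a mix-weighted usage assumption (designs, per-wafer usage,
# and mix shares are all hypothetical).

design_mix = {
    # design name: (share of CoWoS wafers, HBM stacks consumed per wafer)
    "current_gpu":  (0.6, 240),
    "next_gen_gpu": (0.3, 300),
    "custom_asic":  (0.1, 160),
}

weighted_usage = sum(share * usage for share, usage in design_mix.values())
print(f"Mix-weighted HBM stacks per CoWoS wafer: {weighted_usage:.0f}")
# Re-running the sanity check with this weighted figure shows how quickly the
# implied demand ceiling moves as the volume mix shifts toward newer designs.
```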

Finally,  no matter how much information a supplier has, the capacity expansion decision is inherently a prisoner’s dilemma. As any seasoned industry executive knows, even if your information model clearly signals an impending glut, you are still bound by competitive logic. If a supplier doesn't invest in new capacity, they risk losing market share to a competitor who chooses to aggressively invest. But if all suppliers in an industry invest, the overall capacity overshoots demand, and price erosion begins.
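
The incentive structure can be made explicit with a small payoff sketch. The margin figures below are hypothetical and only illustrate why "invest" dominates for each supplier individually.

```python
# Minimal sketch of the capacity-expansion prisoner's dilemma (payoffs are
# hypothetical illustrative margins, not forecasts). Each supplier chooses
# "invest" or "hold"; payoffs are (supplier A, supplier B).

payoffs = {
    ("invest", "invest"): (-2, -2),   # everyone adds capacity -> glut and price erosion
    ("invest", "hold"):   ( 5, -4),   # investor takes share from the supplier that held
    ("hold",   "invest"): (-4,  5),
    ("hold",   "hold"):   ( 3,  3),   # collectively rational, but individually unstable
}

# "Invest" pays A more no matter what B does (5 > 3 and -2 > -4), so both
# suppliers invest and industry capacity overshoots demand.
for b_choice in ("invest", "hold"):
    a_invest = payoffs[("invest", b_choice)][0]
    a_hold = payoffs[("hold", b_choice)][0]
    print(f"If B chooses {b_choice}: A gets {a_invest} by investing vs {a_hold} by holding")
```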

Unlike TSMC, most of the electronics supply chain lacks the competitive advantage to withstand severe price erosion during an industry glut. Whether they like it or not, suppliers are in the same jailhouse, and there is no real way to break free of this collective self-destruction. This is the ultimate limitation of the informational hedge: knowing the risk does not grant the freedom to avoid it.

Organizational Hedging (The Internal Firewall)

Ultimately, the best hedge for a technology bubble is within the organization, where all dependencies are internal. This is the necessary firewall to isolate the inherent AI risk from the rest of the organization. Organizational hedges include:

  • Creating separate internal P&L statements for AI-related products, where investment decisions are made with higher hurdle rates to specifically account for the added AI bubble risk (see the sketch after this list).

  • Creating contingency funds to weather a possible AI storm, which can be used to cover potential write-downs or fund near-term diversification.

  • Tying executive compensation to metrics like cash flow or liquidity, which focuses decision-makers on making sober risk/reward bets.
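
To illustrate the hurdle-rate firewall from the first bullet, here is a minimal sketch. The cash flows and rates are hypothetical; the point is that a project which clears the normal corporate hurdle can still fail the AI-adjusted one.

```python
# Minimal sketch of a hurdle-rate firewall (all cash flows and rates are hypothetical).
# The same AI project is evaluated at the corporate hurdle rate and at a higher
# AI-specific rate that prices in bubble risk.

def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[0] is the upfront outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

ai_project = [-100, 30, 40, 45, 50]   # hypothetical capex and yearly returns ($M)

corporate_hurdle = 0.10
ai_adjusted_hurdle = 0.25             # assumed premium for AI cycle risk

print(f"NPV at corporate hurdle:   {npv(corporate_hurdle, ai_project):6.1f}")
print(f"NPV at AI-adjusted hurdle: {npv(ai_adjusted_hurdle, ai_project):6.1f}")
# A project that clears 10% but fails at 25% is exactly the kind of bet the
# firewall is designed to flag before capital is committed.
```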

It's Not Different

As Sir John Templeton said, "The four most dangerous words in investing are: 'This time is different.'" While Templeton's advice was aimed at financial investors, the quote is equally applicable to strategic industry players. Whether you are a supplier in the AI supply chain or looking to enter it, by all means, you must ride the wave. AI is a once-in-a-century technology shift. It should be the singular focus of every organization in the hardware supply chain. But plan for the off-ramp while you still can.

We can help. If you find these posts insightful, subscribe above to receive them by email. If you would like to learn more about our consulting practice and how we assist organizations in the Tech hardware supply chain, please get in touch with us at info@rcdadvisors.com.
