The man who turned 225 million dollars into 5.5 billion dollars explained on camera exactly why he made his biggest bet. This is Leopold Aschenbrenner, the same person whose Bloom Energy position is now worth close to 2 billion dollars after Oracle's 2.8 gigawatt fuel cell deal, laying out the power math that drove every investment decision his fund has made. In 2022, the GPT-4 training cluster consumed roughly 10 megawatts of power and cost about 500 million dollars. AI compute has been scaling at roughly half an order of magnitude per year, meaning the largest training cluster's power requirement grows roughly tenfold every two years, without stopping. By 2024, the largest cluster was approximately 100 megawatts, the equivalent of 100,000 high-end GPUs, and cost in the billions. By 2026, right now, the leading training cluster requires a full gigawatt of continuous power, the output of a large nuclear reactor. By 2028, the projection reaches 10 gigawatts, more electricity than most US states generate in total. By 2030, the trillion-dollar cluster: 100 gigawatts, over 20 percent of everything the United States currently produces in electricity, consumed by a single AI training installation. And that is just the training cluster. Inference, the continuous compute required to actually run AI products for hundreds of millions of users, requires multiples of that on top. Meanwhile, total US electricity production has barely grown five percent over the last decade, and the grid was not built for this. The transformer shortage, the switchgear backorders, and the canceled data center projects making headlines right now are the first visible symptoms of a power system hitting a wall that Aschenbrenner saw coming years before the rest of the market.
This is exactly why he built an 875 million dollar position in Bloom Energy, a company that generates electricity directly at the data center site using fuel cells, completely bypassing the grid bottleneck that is already stopping half of all planned US data centers from opening on schedule. The thesis was never complicated. The bottleneck in AI is not the models, not the chips, and not the software. The bottleneck is whether civilization can generate enough electricity to run the machines fast enough to matter.
A 2025 infographic with charts of projected U.S. data center electricity demand (2023–2030, TWh), U.S. electricity sales projections, and regional comparisons, illustrating how AI data center loads can reach state-scale power requirements and create the grid bottlenecks described in the tweet.
Source: S&P Global Commodity Insights (S&P Global)
Research Brief
What our analysis found
Leopold Aschenbrenner, a former OpenAI researcher who founded the investment fund Situational Awareness LP, has become one of the most closely watched figures in AI-adjacent investing after his fund's Q4 2025 13F filing revealed roughly $5.5 billion in U.S. equity exposure — reportedly grown from an initial $225 million. Among the fund's largest disclosed positions was an estimated $875 million to $911 million stake in Bloom Energy, the fuel-cell manufacturer that announced an expanded partnership with Oracle on April 13, 2026 to supply up to 2.8 gigawatts of on-site power capacity for AI data centers. The deal validated a core element of Aschenbrenner's investment thesis: that grid constraints, not model architecture or chip supply, represent the binding bottleneck on AI scaling.
The power projections underpinning the thesis originate from Aschenbrenner's widely circulated Situational Awareness report, which extrapolates AI compute scaling at roughly half an order of magnitude per year. Under that model, the largest training cluster moves from an estimated ~10 megawatts for GPT-4 (completed circa August 2022) to ~100 MW by 2024, ~1 GW by 2026, ~10 GW by 2028, and a theoretical ~100 GW by 2030 — a figure that would represent over 20 percent of current total U.S. electricity generation consumed by a single installation. A joint EPRI and Epoch AI report published in August 2025 lends partial support, projecting total U.S. AI power demand could rise from ~5 GW in the mid-2020s to 50+ GW by 2030 and acknowledging that a single frontier training run could require multiple gigawatts by the end of the decade.
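The trajectory above is a simple geometric extrapolation, and it can be checked directly. The sketch below reproduces the report's numbers under its stated assumptions; the ~10 MW 2022 baseline is a community estimate, and the ~480 GW figure for average U.S. generation (roughly 4,200 TWh per year) is an approximation used only to express the 2030 scenario as a share of national output.

```python
# Sketch of the Situational Awareness scaling model: the largest training
# cluster's power draw grows by half an order of magnitude (10^0.5x) per year.
# The 2022 baseline (~10 MW for GPT-4) is a community estimate, not an
# official OpenAI figure.

BASE_YEAR = 2022
BASE_MW = 10.0        # estimated GPT-4 training cluster power draw
OOM_PER_YEAR = 0.5    # half an order of magnitude per year

def projected_cluster_mw(year: int) -> float:
    """Projected power draw (MW) of the largest training cluster."""
    return BASE_MW * 10 ** (OOM_PER_YEAR * (year - BASE_YEAR))

# ~4,200 TWh/yr of total U.S. generation, averaged over 8,766 hours.
US_AVG_GENERATION_GW = 480

for year in (2022, 2024, 2026, 2028, 2030):
    mw = projected_cluster_mw(year)
    share = (mw / 1000) / US_AVG_GENERATION_GW
    print(f"{year}: {mw:>9,.0f} MW  (~{share:.1%} of US avg generation)")
```

Run as written, the model lands on the report's milestones exactly: 100 MW in 2024, 1 GW in 2026, 10 GW in 2028, and 100 GW in 2030, which is roughly 21 percent of average U.S. generation under the assumed baseline.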
However, the tweet's framing omits important nuance. The power and cost figures attributed to GPT-4's training cluster — ~10 MW and ~$500 million — are community estimates, not official OpenAI disclosures, and other analysts have produced different numbers. Furthermore, claims that 1 GW clusters are already operational in 2026 have been disputed: independent satellite imagery analysis of Elon Musk's xAI Colossus 2 facility suggested only about 350 MW of cooling capacity at a site publicly claimed to be 1 GW. Meanwhile, reporting by Bloomberg and Tom's Hardware confirms that roughly half of planned U.S. data center builds have been delayed or canceled due to transformer shortages, switchgear backorders, and grid interconnection bottlenecks — real supply-chain friction that supports the broad thesis even as the specific timeline projections remain speculative extrapolations rather than observed facts.
Fact Check
Evidence from both sides
Supporting Evidence
EPRI and Epoch AI jointly project multi-GW AI training runs by decade's end
A joint report published in August 2025 projects U.S. AI power demand rising from approximately 5 GW in the mid-2020s to over 50 GW by 2030, and explicitly models scenarios in which a single frontier training run could require multiple gigawatts — broadly consistent with the tweet's scaling trajectory (source: EPRI/Epoch AI press release, prnewswire.com).
Bloom Energy's Oracle deal confirms GW-scale on-site generation demand
On April 13, 2026, Bloom Energy announced an expanded partnership with Oracle to deploy up to 2.8 GW of fuel-cell capacity at data center sites, with an initial 1.2 GW contracted. This deal directly supports the tweet's claim that hyperscalers are bypassing the grid by purchasing on-site power at gigawatt scale (source: Bloom Energy press release, bloomenergy.com).
Aschenbrenner's fund 13F filing confirms the scale of the Bloom Energy position
The Q4 2025 13F filing for Situational Awareness LP showed approximately $5.5 billion in U.S. equity long positions, with Bloom Energy as one of the largest holdings at an estimated $875 million to $911 million, corroborating the tweet's investment figures (source: Forbes coverage of 13F, forbes.com).
Half of planned U.S. data center builds delayed or canceled
Reporting from Bloomberg and Tom's Hardware in 2026 confirms that roughly 50 percent of planned U.S. data center projects have been delayed or canceled due to shortages of transformers, switchgear, and other electrical infrastructure — validating the tweet's claim that the grid and supply chain are already hitting a wall (source: Tom's Hardware, tomshardware.com).
Epoch AI data tracks accelerating global AI data center power capacity
Epoch AI's publicly available datasets estimate total global AI data-center power capacity reached approximately 30 GW by Q4 2025, and their analyses discuss multi-GW single training runs as plausible later this decade, supporting the broader exponential-scaling narrative (source: Epoch AI data insights, epoch.ai).
Contradicting Evidence
The GPT-4 power and cost figures are estimates, not official disclosures
The claim that the GPT-4 training cluster consumed roughly 10 MW and cost about $500 million originates from back-of-envelope reconstructions by Aschenbrenner and other analysts who combined assumed GPU counts with rental and ownership cost models. OpenAI has never publicly confirmed these figures, and other analysts have produced meaningfully different estimates. They should be treated as order-of-magnitude approximations only (source: Situational Awareness report, situational-awareness.ai).
Claims of 1 GW operational clusters in 2026 are disputed by independent analysis
The tweet states that the leading training cluster now requires a full gigawatt, but independent satellite imagery analysis of xAI's Colossus 2 facility — one of the sites publicly claimed to operate at 1 GW — found only approximately 350 MW of cooling capacity as of January 2026, suggesting the actual operational power draw was well below the claimed figure (source: Tom's Hardware analysis, tomshardware.com).
The scaling trajectory is a modeled extrapolation, not an observed trend line
The half-order-of-magnitude-per-year scaling projection is a forward-looking model from the Situational Awareness report, not a confirmed empirical trend. Real-world deployment faces compounding constraints including capital availability, permitting, workforce shortages, and efficiency improvements in model training that could slow or alter the trajectory. EPRI and Epoch's own 50+ GW U.S. projection by 2030 is substantially below the 100 GW single-cluster scenario the tweet describes.
Efficiency gains and algorithmic improvements could reduce power requirements
The projections assume that compute scaling continues without significant efficiency offsets. In practice, advances in model architecture, training algorithms, sparsity, quantization, and hardware efficiency have historically reduced the compute required to reach a given performance level, potentially flattening the power demand curve relative to the extrapolation presented.
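The sensitivity of the extrapolation to efficiency gains is easy to illustrate. In the sketch below, both rates are illustrative assumptions rather than sourced figures: compute demand grows at the report's half order of magnitude per year, while a hypothetical 30 percent annual reduction in power per unit of compute (from hardware and algorithmic improvements combined) is applied on top.

```python
# Illustrative sensitivity check: how an assumed annual efficiency gain
# flattens the raw half-order-of-magnitude-per-year power extrapolation.
# The 30%/year efficiency figure is a hypothetical, not a sourced number.

COMPUTE_GROWTH = 10 ** 0.5   # half an order of magnitude per year

def power_mw(year: int, base_mw: float = 10.0, base_year: int = 2022,
             efficiency_gain: float = 0.0) -> float:
    """Projected cluster power (MW), optionally offset by a constant
    annual reduction in power required per unit of compute."""
    years = year - base_year
    raw = base_mw * COMPUTE_GROWTH ** years
    return raw * (1 - efficiency_gain) ** years

for year in (2026, 2028, 2030):
    naive = power_mw(year) / 1000
    offset = power_mw(year, efficiency_gain=0.30) / 1000
    print(f"{year}: naive {naive:,.1f} GW vs. with 30%/yr efficiency {offset:,.1f} GW")
```

Under these assumptions the 2030 figure falls from 100 GW to under 6 GW, which is why the efficiency question dominates the plausibility of the single-cluster scenario.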
The $225 million to $5.5 billion growth figure lacks full context
The 13F filing reports long equity positions at a single snapshot in time and does not necessarily reflect the fund's total returns, leverage structure, or unrealized versus realized gains. The $5.5 billion figure represents reported 13F market value of holdings, not confirmed net asset value or audited fund performance from a $225 million starting base, making the implied return figure potentially misleading without additional disclosure.