.@romeovdean and I wrote a blog post to teach ourselves about the AI buildout. We were surprised by some of the things we learned:

1. There's a huge fab CapEx overhang: with a single year of earnings in 2025, Nvidia could cover the last 3 years of TSMC's ENTIRE CapEx. In 2025, Nvidia will turn roughly $6B of depreciated TSMC CapEx value into about $200B in revenue. Further up the supply chain, a single year of Nvidia's revenue almost matched the past 25 years of total R&D and CapEx from the five largest semiconductor equipment companies combined, including ASML, Applied Materials, and Tokyo Electron. If AI demand continues, let alone grows, there's more than enough money to build more fabs. (Rough math in the first sketch at the end of this post.)

2. We forecasted two scenarios out to 2040: explosive growth and AI winter. In the explosive-growth scenario (which reaches $2T of yearly AI CapEx by 2030), @sama's vision of 1 GW a week for the leading company comes true in 2036. But in that world, global AI power draw would be twice the US's current electricity generation. (Second sketch below.)

3. For the last two decades, datacenter construction basically co-opted the power infrastructure left over from US deindustrialization. For AI CapEx to keep growing on its current trajectory, everyone upstream in the supply chain (from the people making copper wire to turbines, transformers, and switchgear) will need to expand production capacity. The key issue is that these companies have 10-30 year depreciation cycles for their factories (compare that to 3 years for chips). Given their usual low margins, they need steady profits for decades, and they've been burned by bubbles before. If there's a financial overhang not just for fabs but also for other datacenter components, could hyperscalers simply pay higher margins to accelerate capacity expansion? Especially given that chips are an overwhelming 60+% of the cost of a datacenter. Our back-of-the-envelope math on gas turbine manufacturers suggests that hyperscalers could pay to have turbine capacity expanded for a relatively small share of total datacenter cost (third sketch below). As @tylercowen says, do not underrate the elasticity of supply.

4. We think China is likely to be ahead in AI in the long-timelines world (i.e., no 2028 software singularity). Every 3 years the chips depreciate and the race starts anew. Once China catches up to leading-edge process nodes, AI just becomes a massive industrial race across the entire supply chain, and that plays to China's differential advantages.

5. Lead times are the overwhelming consideration for which energy source powers the datacenter, whether you plug the datacenter into the grid or not, and how you design it. Every month the shell isn't up is a month that the chips (the overwhelming majority of your cost) aren't being used. So you can see why natural gas, for example, is much preferred over current nuclear reactors: nuclear has extremely low OpEx but extremely long lead times and high CapEx, while with natural gas you can just set up a couple dozen gas turbines next to the datacenter and get your chips whirring ASAP. Solar and wind also have short construction lead times, and you can smooth out their power with batteries, but you'd have to hire thousands of people to lay out a Manhattan-sized solar farm to reliably power a single 1 GW datacenter. (Fourth sketch below.)

Much, much more in the full blog post. Link below.
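A few back-of-the-envelope sketches in Python for the numbers above. Every input is a round-number assumption for illustration, not a precise reported figure.

Sketch 1, on the fab CapEx overhang: does one year of Nvidia earnings really cover 3 years of TSMC's CapEx? Assuming roughly $100B of Nvidia net income in calendar 2025 and roughly $30-36B of TSMC CapEx per year:

# Assumed figures in $B; round numbers, not exact reported values.
nvidia_2025_net_income = 100                     # assumed calendar-2025 net income
tsmc_capex = {2022: 36, 2023: 30, 2024: 30}      # assumed annual TSMC CapEx
total_3yr = sum(tsmc_capex.values())             # ~96
print(f"Coverage: {nvidia_2025_net_income / total_3yr:.2f}x")  # ~1.04x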
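Sketch 2, on the explosive-growth scenario: what growth rate takes AI CapEx from an assumed ~$400B in 2025 to the scenario's $2T by 2030, and when does the leading company hit 1 GW a week? The 2025 baseline and the leader's 2025 buildout are assumptions.

capex_2025, capex_2030_target = 400, 2000            # $B/yr; 2025 figure assumed
g = (capex_2030_target / capex_2025) ** (1 / 5) - 1  # compound growth over 5 years
print(f"Implied CapEx growth: {g:.0%}/yr")           # ~38%/yr

leader_adds_2025 = 2                                 # assumed GW the leader adds in 2025
adds_2036 = leader_adds_2025 * 1.35 ** 11            # assumed ~35%/yr buildout growth
print(f"Leader adds in 2036: ~{adds_2036:.0f} GW/yr")  # ~54 GW/yr, i.e. ~1 GW/week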
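Sketch 3, on paying gas-turbine makers to expand: how much would doubling what you pay for turbines add to total datacenter cost? Both unit costs here are assumptions.

dc_cost_per_gw = 35.0        # assumed all-in $B per GW of AI datacenter
turbines_per_gw = 1.0        # assumed $B per GW of gas turbine capacity
extra = turbines_per_gw / dc_cost_per_gw   # paying a 100% premium on turbines
print(f"A 100% turbine premium adds ~{extra:.1%} of total cost")  # ~2.9%

Small next to the chips at 60+% of the total, which is why supply elasticity can be bought.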
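Sketch 4, on lead times and solar: what a month of idle shell costs, and how big a 1 GW solar farm gets. The 60% chip share and 3-year chip depreciation are from the post; the $35B/GW cost, 25% capacity factor, and ~100 MW/km^2 panel density are assumptions.

chips = 0.6 * 35.0           # chip cost at 60% of an assumed $35B-per-GW datacenter
burn = chips / 36            # 3-year (36-month) chip depreciation
print(f"Each idle month burns ~${burn:.2f}B of chip value per GW")  # ~$0.58B

nameplate_gw = 1 / 0.25      # 1 GW continuous at an assumed 25% capacity factor
area_km2 = nameplate_gw * 1000 / 100   # assumed ~100 MW nameplate per km^2
print(f"~{area_km2:.0f} km^2 of panels (Manhattan is ~59 km^2)")    # ~40 km^2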