More compute, more intelligence

26 July 2025

The AI race between companies is in full swing. xAI is planning to buy millions of Nvidia chips and has already built a 200,000-chip data center named Colossus. Another big player is Oracle, which plans to buy 40 billion dollars' worth of Nvidia chips for OpenAI's US data center. Intriguingly, a billion dollars' worth of these chips are also being smuggled into China, which benefits from acquiring them to stay in the AI race, highlighting the international stakes of staying ahead. These GPUs efficiently handle the massive matrix computations at the heart of AI training.
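Why are GPUs such a good fit? The core workload of training is the matrix multiply, where every output element is an independent dot product, so thousands of them can be computed in parallel. A minimal sketch (using NumPy on the CPU as a stand-in for the batched linear algebra a GPU accelerates; the dimensions are illustrative, not taken from any real model):

```python
import numpy as np

# A transformer-style layer is dominated by matrix multiplies:
# activations (batch x d_in) times a weight matrix (d_in x d_out).
batch, d_in, d_out = 64, 1024, 4096
x = np.random.randn(batch, d_in).astype(np.float32)   # activations
w = np.random.randn(d_in, d_out).astype(np.float32)   # weights

# Each of the batch * d_out output elements is an independent
# dot product over d_in terms -- embarrassingly parallel work,
# which is exactly the shape of computation GPUs are built for.
y = x @ w

print(y.shape)  # (64, 4096)
```

Swap NumPy for a GPU array library and the same one-line `x @ w` runs across thousands of cores at once, which is why training throughput scales so directly with chip count.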

Companies believe they can buy intelligence by throwing more computing power at the problem. This is due to scaling laws: larger models, more training data, and more compute predictably lead to better performance. However, Moore's Law, the observation that computing power doubles roughly every two years, appears to have stalled since around 2016. The slowdown is largely caused by physical constraints such as quantum tunneling, and even ASML states that transistor shrink is slowing. Nvidia's CEO shared this sentiment in 2019, declaring Moore's Law "dead", though in 2025 he claimed AI chips are improving faster than Moore's Law. One major implication is that we will not achieve higher intelligence by simply increasing compute. ASML's VP of technology Jos Benschop warns: "If we do not improve the power efficiency of our AI chips over time, the training of the models could consume the entire worldwide energy and that could happen around 2035."
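The "predictable" part of scaling laws is literal: empirically, loss falls as a power law in parameter count and token count. A hedged sketch of that shape (the functional form and constants roughly follow the published Chinchilla fit, but treat them as illustrative, not as an exact reproduction):

```python
# Scaling-law sketch: loss = E + A / N^alpha + B / D^beta,
# where N is parameter count and D is training-token count.
# Constants approximate the Chinchilla paper's fit (illustrative only).
def loss(n_params: float, n_tokens: float) -> float:
    E, A, alpha, B, beta = 1.69, 406.4, 0.34, 410.7, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Growing the model 10x (with ~20 tokens per parameter, a common
# rule of thumb) predictably lowers the loss every time:
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {loss(n, 20 * n):.3f}")
```

The curve never turns upward, which is exactly why labs keep buying chips: each extra order of magnitude of compute buys a reliable, if diminishing, improvement. The catch the article raises is that the power law is in compute, while chips themselves are no longer getting cheaper per operation at the old Moore's Law rate.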

So while scaling laws promise predictable improvements in model performance, the sheer energy required to train these models runs into physical limits that may prove unsustainable.