On the thermodynamic limits to AI-powered economic growth

The Economist recently published an interesting article summarising current debates and visions regarding the impact of AI on economic growth. One school of thought, popular among the titans of Silicon Valley, holds that AI ushers in an era of explosive growth. This perspective is rooted in the Western conviction, maintained since the Enlightenment, that the growth of knowledge is an unbounded source of prosperity. This idea, however, assumes there are no inherent limits to knowledge growth. In contrast, decades ago, the philosopher Nicholas Rescher published a book on the limits of science, including its economic aspects. He argued that the production of knowledge is not without costs: there are diminishing returns, and ever-greater investment is needed in the material infrastructure necessary for knowledge production. The visionaries in Silicon Valley acknowledge this challenge but believe that it is a human problem, rather than an issue directly related to AI. They believe that AI can overcome the limits to the growth of knowledge, and therefore to economic growth.

Economists have often argued that knowledge is available for free. In Neoclassical growth theory, it is said to “fall like manna from heaven” and is deemed the sole means by which continuous growth can be sustained. In New Growth Theories, one essential factor supporting endogenous growth is the positive externalities associated with the formation of capital, including human capital. This suggests that knowledge is also free, since externalities, by definition, are not priced in the market. Interestingly, during the early stages of developing new growth theory, one concern was that the models appeared to imply that economic growth could be explosive, with output becoming infinite in finite time. Obviously, economists concluded that there must be something wrong here.

The visions of Silicon Valley can be examined from two perspectives: the inherent limits to the growth of AI, and the limitations on AI-driven economic change, particularly in terms of technological progress. The first perspective is informed by the thermodynamics of computation. I argued a decade ago that, owing to the thermodynamics of information processing, information technology cannot serve as a panacea for overcoming the limits to growth. This is widely recognised today by the informed public, especially as we confront the rapidly increasing energy demands of expanding data centres. However, optimists believe that technological creativity can resolve the energy bottleneck, with the hope that AI itself will contribute to this solution. What emerges from this perspective is the idea of an AI perpetual motion machine, in which the knowledge produced by AI transcends the energetic constraints on the growth of AI.

Such visions fundamentally misunderstand the thermodynamics of computation (which is, of course, well understood by the engineers of AI infrastructures). Decades ago, Rolf Landauer formulated his classic relation linking a unit of information to energy. What generates the energy demand of information processing is the erasure of information, which is unavoidable unless memory capacity is infinite and available at no cost. Erasure makes computation a thermodynamically irreversible process, whereas retaining all information indefinitely would allow computation to be designed as a reversible process. The simplest argument against the viability of the Silicon Valley vision is therefore that current AI models are extremely wasteful: they rely on the brute force of statistical learning, which necessarily leads both to explosive growth in erasures and to expanding memory capacities. Crucially, this erasure should not be misunderstood as a kind of dematerialisation of information. The energy required for erasure represents the cost of freeing up computational and memory capacities, ultimately resulting in entropy production, namely heat, as anyone who works with computers knows first-hand.
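Landauer’s relation can be made concrete with a back-of-the-envelope calculation. The sketch below (assuming an illustrative operating temperature of 300 K) computes the minimum heat that must be dissipated per erased bit:

```python
import math

# Landauer's principle: erasing one bit of information dissipates at
# least k_B * T * ln(2) joules of heat into the environment.
k_B = 1.380649e-23   # Boltzmann constant in J/K (exact SI value)
T = 300.0            # assumed ambient temperature in kelvin (illustrative)

E_bit = k_B * T * math.log(2)   # minimum energy per erased bit, in joules
print(f"Landauer bound at {T:.0f} K: {E_bit:.3e} J per bit")
```

The bound is tiny per bit (roughly 3 × 10⁻²¹ J), but it scales linearly with the number of erasures, and real hardware dissipates many orders of magnitude more per bit, which is why the erasure-heavy, memory-hungry regime of statistical learning matters at scale.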

Recent developments in stochastic thermodynamics have shown that Landauer’s principle provides only a lower bound on the energetic efficiency of computation, so the simple erasure argument is not the decisive one. Landauer’s bound was derived within the framework of equilibrium thermodynamics, whereas real-world computation is a non-equilibrium process. The stochastic thermodynamics of computation indicates that earlier ideas about making computation reversible, and therefore thermodynamically costless, were indeed correct in principle; the reasoning is analogous to the analysis of reversible adiabatic processes in an ideal gas. However, this also implies that the structural properties of real-world computers (a “real gas”) are significant for their thermodynamic performance. Consequently, real-world computers cannot even theoretically achieve the Landauer bound. On the contrary, my hunch is that the specific construction of Large Language Models creates structural constraints that are particularly prone to maximising entropy production, because they employ extensive methods of data collection, analysis and storage (physically manifest in the exorbitant size of newly built data centres).
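One concrete result behind the claim that structure matters is the “mismatch cost” of stochastic thermodynamics; the formula below is a hedged paraphrase of a result due to Kolchinsky and Wolpert, not a statement from the article. A machine whose protocol is optimised for an input distribution \(q\), but which is actually driven with inputs distributed as \(p\), dissipates extra work beyond the Landauer term:

```latex
W_{\text{mismatch}} \;=\; k_B T \,\Big[\, D\big(p \,\|\, q\big) \;-\; D\big(\Lambda p \,\|\, \Lambda q\big) \,\Big] \;\ge\; 0
```

Here \(D(\cdot\,\|\,\cdot)\) is the Kullback–Leibler divergence and \(\Lambda\) is the map the computation implements. Because this cost depends on how well a machine’s fixed design matches its actual workload, no physical computer attains the Landauer bound across all inputs.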

The new perspective of the stochastic thermodynamics of computation thus presents a principled argument against the notion that the thermodynamic costs of computation can be reduced to a negligible level, which would preserve an optimistic view of growth, even explosive growth. One intriguing additional point is that while AI could in principle be tasked with this optimisation problem, it encounters a fundamental issue: the optimisation is probably a non-computable problem. Moreover, reflexivity leads to logical paradoxes similar to those discussed in earlier analyses of self-referential machines. Consider an AI that conducts a large number of statistical tests on various alternative AI systems. This meta-AI must be more complex than the AI systems it is testing, yet must also include itself in the analysis: the Gödelian writing appears on the wall! Furthermore, as with other general-purpose technologies, my earlier argument regarding rebound effects applies: improved efficiency will lead to increased usage, resulting in absolute growth in energy consumption and entropy production. In summary, the explosive growth of AI cannot trigger explosive economic growth, because of rapidly tightening thermodynamic constraints.
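The rebound-effect arithmetic can be illustrated with a minimal sketch (all numbers hypothetical): efficiency doubles, but usage grows even faster, so absolute energy consumption still rises.

```python
# Rebound (Jevons) effect, illustrated with hypothetical numbers:
# efficiency doubles, but usage grows 2.5x, so total energy use rises.
def energy_consumption(usage, efficiency):
    """Total energy drawn = demanded work / work delivered per joule."""
    return usage / efficiency

before = energy_consumption(usage=100.0, efficiency=1.0)  # 100.0 energy units
after = energy_consumption(usage=250.0, efficiency=2.0)   # 125.0 energy units
print(f"before: {before}, after: {after}, rebound: {after > before}")
```

Absolute consumption falls only when usage grows more slowly than efficiency improves; the argument above is that general-purpose technologies historically sit in the opposite regime.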

The second issue with the visions coming from Silicon Valley is straightforward: What does it mean for the economy’s productivity to grow substantially? It means, for example, that product cycles speed up, with new and more powerful smartphones released at an ever-increasing rate. Clearly, this also means that the material throughput of the economy will rise, further tightening the traditional limits to growth imposed by planetary boundaries. There is a stark tension between explosive economic growth and the rapid obsolescence of products it implies. If we consider the entire non-equilibrium system of AI and the material economy, their interplay will accelerate the production of entropy against these planetary limits.

Finally, there is a seemingly naïve question: Do we really need explosive growth? Productivity is always relative to the needs of the ultimate consumers. Here, we encounter a dystopian perspective that we already see with many internet businesses, namely that the software itself creates the needs that consumers perceive. Thus, the brave new world of AI-driven explosive growth implicitly assumes that humans are delegating the determination of their needs to AI.

In conclusion, dreams of explosive growth are misleading and should be viewed as another form of “fake news” during these troubling times.

One Reply

  1. This is an interesting comment; it is in line with a concept I published about a year ago in Germany (and translated into English), “Entropy as a Criterion for Sustainability”, which can be found here. It is based on non-equilibrium thermodynamics, to which I gave an easy-to-understand introduction in a popular-science book. In the above-mentioned publication, I (semi-)quantitatively analyzed the (non-)sustainability of DAC, CCS and CCU.

    It is important to note that entropy (in the case of AI) does not just get manifested as waste heat, but in many more tangible material ways: as radioactive waste (AI data centers have booked energy supply from nuclear power plants), landscape destruction and waste from raw-material mining, loss of groundwater due to mining, all kinds of waste connected with the production of computer and data center hardware, and many more – not least the loss of biodiversity and fertile soil. (Regarding the latter, I would like to point to a more recent article, available as a preprint here: Sustainability through Bioagriculture: Carbon Dioxide Reduction plus Biodiversity Recovery.)

    This also underlines the conclusion of Carsten Herrmann-Pillath’s comments above: energy supply and raw-material limits plus the destruction of biodiversity (= species decline) are natural and hence thermodynamic limits to growth; entropy is ultimately the binding limit. Not to forget that AI will not *create* new knowledge; it can only deliver some kind of digestion of previously created knowledge, knowledge rooted in human research. AI will not be able to do its own research.

    The more and the longer AI swallows publicly available human knowledge (whether legally available or illegally obtained through breach of copyright does not matter for my argument), the more AI output will be just a reworded (and maybe easier-to-digest) copy of already existing knowledge.

