In Andrew Jarvis’s previous post I read that, on the one hand, we might just observe evolutions that are “most likely”, and on the other hand that the economy is a “low-probability” structure. How can a low-probability structure be most likely? This apparent contradiction applies to all living systems. The Maximum Entropy approach to evolution resolves it, because it distinguishes neatly between a system, its environment, and the meta-system comprising both: a system that assumes states of higher complexity and order (hence ‘low probability’) exports entropy to its environment, so that the entropy of the meta-system increases. It is the state of the meta-system that is the most likely one. Thus the economy is an ‘unlikely’ structure, but it exports disorder to the global environment, and therefore the combined state is ‘most likely’.
Of course, this argument is very coarse and over-simplified. But I believe it deserves to be explored when thinking about agency in the technosphere. To start the discussion, let me focus on one specific aspect. The Maximum Entropy approach comes in two variants, as it has been employed in the Earth system sciences and the life sciences. The first variant is strictly statistical and is a way to explain and analyse complex systems (following Jaynes’ theory of probability); it is also well known to econometricians. Maximum Entropy reasoning explains and predicts a system’s behaviour by means of the constraints that govern its evolution. Regarding the behaviour of the system’s constituents, i.e. the ‘agents’, and even the system’s architecture, you do not need to introduce any more detailed information, because you just assume that the system will move to the most likely state, given the constraints. The second variant then asks: does a system that behaves in this way also physically maximize entropy production? You can follow the first variant without accepting the second. But the fact is that the Maximum Entropy approach is a powerful theory for explaining the emergence of order as an expression of the Second Law of thermodynamics in the evolution of living systems.
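To make the statistical variant concrete, consider Jaynes’ classic dice example: among all probability distributions compatible with a constraint (here, an observed mean), Maximum Entropy picks the one that is ‘most likely’ in the combinatorial sense, without any further assumptions about individual throws. A minimal sketch in Python (the face values, the target mean of 4.5 and the function name are illustrative assumptions, not anything from the post):

```python
import numpy as np

def maxent_dice(target_mean, faces=np.arange(1, 7), tol=1e-10):
    """Maximum-entropy distribution over the faces of a die, constrained
    only to have a given mean. The solution is an exponential family
    p_i proportional to exp(lam * x_i); solve for lam by bisection,
    since the constrained mean increases monotonically in lam."""
    def mean(lam):
        w = np.exp(lam * faces)
        p = w / w.sum()
        return p @ faces

    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = np.exp(lam * faces)
    return w / w.sum()

p = maxent_dice(4.5)
print(np.round(p, 4))  # probabilities tilt smoothly toward the high faces
```

Nothing here models the ‘agents’ (the individual throws) at all: the constraint alone determines the aggregate prediction, which is exactly the point of the first variant.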
In this post, I only want to reflect on the first variant, against the backdrop of our topic ‘agency in the technosphere’. Applied to the economy, it would just mean that you analyse the constraints under which the economy operates at the aggregate level (not the individual level) and then assume that, no matter what agents think and do individually, in the aggregate they will move to the state which is the most likely one. In other words, individual agency would not matter at all for economic explanations! In fact, this idea is not unfamiliar to macroeconomists, especially in the Keynesian tradition. Keynes’s famous paradox of savings tells a similar story: individual agents might wish to save more, but they end up with less savings than intended, because the evolution of the system is governed by fundamental accounting interdependencies in a monetary economy (savings, investment and income). Given the constraints, this is the most likely outcome.
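The accounting logic behind the paradox can be sketched with a textbook Keynesian-cross model (all numbers are hypothetical, chosen only to illustrate the constraint): with investment fixed, a higher propensity to save lowers equilibrium income, while aggregate saving stays pinned to investment, whatever individuals intend.

```python
def equilibrium(c0, c, I):
    """Textbook Keynesian cross: consumption C = c0 + c*Y, investment I
    fixed, so equilibrium income solves Y = C + I. Returns income and
    aggregate saving S = Y - C (which the accounting forces to equal I)."""
    Y = (c0 + I) / (1 - c)
    S = Y - (c0 + c * Y)
    return Y, S

Y1, S1 = equilibrium(c0=50, c=0.8, I=100)  # baseline
Y2, S2 = equilibrium(c0=50, c=0.7, I=100)  # households try to save more
print(Y1, S1)  # income 750, saving 100
print(Y2, S2)  # income falls to 500, saving still 100
```

The individual intention (save more) is simply swallowed by the constraint: saving ends up equal to investment in both runs, and only income adjusts.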
Interestingly, for decades macroeconomists have tried to combine this insight with the so-called ‘micro-foundations’ program. A Nobel award was given to Robert Lucas for introducing the notion of rational expectations and the analytical construct of the representative agent. The aggregate movements of the macro-economy are explained by introducing a ‘rational agent’ who represents a kind of statistical average of the population. This saves the deep conviction, upheld by many economists, that only individual agency matters in explanations (‘methodological individualism’). But at what price! Today, many economists are frustrated with the state of macroeconomics. The trouble is, as Keynes wanted to show, that any explanation starting out from ‘real’ individual agents would probably always end in the conclusion that market failure is endemic, because of information externalities, collective action problems, miscommunication, you name it. The ghost of the ‘representative agent’ was created to rescue normative beliefs about the optimality of markets.
Maximum Entropy thinking would neutralize all these messy methodological and normative issues by treating all individual-level phenomena as random events and focusing exclusively on the constraints. What are the implications for our understanding of the technosphere?
Let me give just one example: there is the Whiggish account of the ‘rise of Europe’ and of industrialization as reflecting unique Western values, Enlightenment intellectual achievements and entrepreneurial spirit, for example in comparisons with Imperial China. But you can also explain the ‘Great Divergence’ simply as reflecting the constraints that governed the evolution of the Chinese and the European economies between the 17th and the 20th century, especially the energy system, land resources and population. From that point of view, European industrialization was indeed the ‘most likely’ trajectory (and certainly not a ‘miracle’), once certain technological innovations were randomly generated that released ‘hang-ups’ (Peter Haff’s term), i.e. constraints that governed the activation of fossil fuels for economic uses. China’s ‘failure’ was not due to deficient values, beliefs and institutions, but was a ‘most likely’ outcome. No matter what individuals might have pursued and wished for, the aggregate trajectory was shaped by the constraints.
Therefore, the question is: what is the nature of the constraints that govern the evolution of the technosphere and the economy today? Perhaps this is more important to know than pondering what human agents can achieve, individually or collectively. They throw the dice, and the most likely result will obtain.