The technosphere as a computer: What are the thermodynamic implications?

In modern economics, information is arguably the core notion. Indeed, many economists would agree that the market is a giant distributed computer that processes information about the scarcity of resources and generates prices that guide the actions of economic agents accordingly. However, economists rarely consider the physical side of information, even though in practice it matters a great deal, for example in deciding where to locate servers or how to manage their energy costs.

What always comes as a surprise to me is that physicists and engineers also tend to sideline the topic of information when considering the physical side of the economy. I recently attended the conference ‘Thermodynamics 2.0’, where this was salient once again. I know many researchers who explicitly exclude this topic from their research (for example, Axel Kleidon, a contributor to this blog, always shrugs his shoulders: too messy, no data…). Of course, one reason is simply that it is complex and difficult. In comparison, investigating the energetic aspects of the economy is a clear-cut task, and data are readily at hand. But for me, this is ‘Hamlet’ without the Prince.

In his seminal 1994 book, Robert Ayres, one of the leaders of research into the physical aspects of the economy, already elaborated on the role of information in much detail. However, even he retreated from the issue in his subsequent prolific work. In that book, the central problem was already salient. Physics refers to the Shannon notion of information as an established concept that allows for its quantification, such as in computation. In economics, as in Ayres’ book, this is mostly used as a measure of complexity. That fits well with the concept of entropy, and since there is a direct formal correspondence with Boltzmann’s definition of entropy, that seems to be the end of the story. But as Ayres already realized, what is entirely lost is the meaning of the information (which Shannon deliberately kept out of his original work on the transmission of information). In his book, he therefore distinguished between two types of information, one relating to complexity and the other to functions in the context of evolutionary processes: the physical meaning of information would be its contribution to the maintenance and survival of a system. This strand was rarely pursued again. Take, for example, Hidalgo’s celebrated 2015 book, which also starts out by opening up new vistas on thermodynamics and economics, but gets stuck in the complexity cul-de-sac again.
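
To make the formal correspondence concrete (standard textbook notation, not Ayres’ own):

H = − Σ_i p_i log₂ p_i (Shannon entropy, in bits)
S = k_B ln W (Boltzmann entropy, with W the number of microstates)

For W equally probable microstates, H = log₂ W, so S = (k_B ln 2) · H: up to the constant k_B ln 2, thermodynamic entropy is simply the Shannon information needed to single out one microstate.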

In my 2013 book ‘Foundations of Economic Evolution’, I therefore suggested and elaborated an entirely new approach ‘from scratch’, based on Peirce’s semiotics. Yet this move seems extremely difficult to communicate and understand, since it is unfamiliar to most other scholars (and after all, what could a 19th-century philosopher contribute?). In my contribution to the ‘Thermodynamics 2.0’ conference, I presented an introduction based on a reinterpretation of Maxwell’s demon. Recently, one of the participants, Carey King, drew my attention to an important paper by Artemy Kolchinsky and David Wolpert on ‘semantic information’ that readily provides a formal foundation for my view, without being related to it in any way. This paper starts out from the same basic insight that motivated my work: Landauer’s principle, which establishes the energetic equivalent of a bit of information. Landauer’s equivalence can almost be seen as a physical constant and implies that there is no computation without energetic costs. Kolchinsky and Wolpert connect this with Stuart Kauffman’s concept of the autonomous agent (as I do in my book): a self-sustaining system that feeds on energetic throughput. One can then show that semantic information is that information which contributes to the thermodynamic sustainability of the system (‘survival’ in Ayres’ approach). The crucial corollary is that information processing exports entropy to the environment, i.e. is subject to the Second Law. Indeed, similar ideas have long been ventilated in biology, especially in the context of origin-of-life theories. Life is evolving information, and if we want to explain its origins physically, this close relationship between energy and information must be essential.
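
As a rough quantitative anchor (my own back-of-the-envelope illustration, not drawn from Kolchinsky and Wolpert): Landauer’s principle puts the minimum dissipation for erasing one bit of information at

E_min = k_B T ln 2 ≈ (1.38 × 10⁻²³ J/K) × (300 K) × 0.693 ≈ 2.9 × 10⁻²¹ J per bit at room temperature.

A tiny amount per bit, but a floor that no computation, biological or technological, can undercut; actual hardware dissipates many orders of magnitude more per bit.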

I refer the reader to the sources cited above for more detail. For understanding the technosphere, the theory suggested by Kolchinsky and Wolpert is crucial for specifying its thermodynamic role in the Earth System. The technosphere is the hardware of the economy, conceived as a computer. Thus, information processing in the economy always comes with physical costs, i.e. with exporting entropy to the environment. The big difference from the standard thermodynamic approaches to the economy, seminally launched by Georgescu-Roegen in his 1971 book, is that this entropy production is not merely seen as reflecting the consumption of energy, which could then be contained simply by adapting the energetic needs of the economy. In my view, these energetic needs result from the more fundamental process of information processing.

The catch is obvious: most economists think that more information is the solution to our current climate quandary. But they still treat information as ‘manna from heaven’, without any costs. There is growing awareness of the issue in the context of the energetic requirements of ‘big data’ and AI. But again, the solution is envisaged as ‘more information’.

The ‘technosphere computer’ is devouring energy to expand its embodied information, and the meaning of that information is to perpetuate and expand this pattern. We must develop a research program that explores this aspect of the technosphere. We live under a grand delusion: that more information is always good for us. But what is the function of the flood of information that is overwhelming us and that we can no longer possibly comprehend? Is there perhaps a ‘Second Law of Information’, implying that in generating more information there is always an irreversible loss of information?

4 Replies to “The technosphere as a computer: What are the thermodynamic implications?”

  1. The Holy Trinity

    Yes, the flow of information is important, and energy is used and entropy produced, but we are also interested in how this information accumulates and defines the structures making up the ‘technosphere’. After all, it is these structures that differentiate the technosphere from its background and, more importantly, define its ability to do productive, useful work for any given input of primary energy. We might even go so far as to say that it is the final creation of productive structure that is the definition of what is ‘useful’, a view supported by the growing appreciation that GDP comes close to measuring this. If this is so, then the energy-supported flow of information articulated by Carsten flows into the stock of information encoded in the material fabric of the technosphere. This view underscores the importance of seeing and accounting for all three states of the system simultaneously – Matter, Energy and Information – the Holy Trinity. Our failure to understand both human and natural systems derives largely from over-emphasising the role of one or other element of this trinity. We started with a uniquely material view, and slowly broadened this view to include energy. Not surprisingly, we left information ’til last, given its ethereal spirit. Given that we are struggling to wrestle information into the picture, perhaps we should focus on how we observe it, and then use these ‘measurements’ to paint an inductive landscape?

    1. The notion of ‘observation’ is really essential, but what does it mean to ‘observe’ information? The Peircean view would suggest that we should avoid reifying information by reintroducing a subject-object duality. Information is what information does, and in this sense we observe information by observing how we act on it, or what it causes in the environment in which our action is inextricably embedded. There is no information ‘out there’ independent of the observer. I think that John Searle introduced important ideas here in distinguishing between ‘observer-dependent’ and ‘observer-independent’ facts, both having the same ontological status. But once we approach physical reality as information, all reality becomes observer-dependent. That does not mean ‘subjective’, though, as ‘observers’ are not single human beings but evolutionary processes, such as the tree of life on Earth, or the technosphere as the embodiment of technologically mediated human ‘observations’ of physical mechanisms that become productive in the economy.

  2. I often think that the relation between information and entropy is itself non-ergodic: the total future of the universe will not be long enough for anyone to work it out properly, let alone my own lifetime long enough for me to understand it.

    I liked Hidalgo’s book Why Information Grows, and it helped me to think of the Earth’s surface as an information-rich object. But it is frustrating for any reader who wants a lot of rigour and depth in the concept of information. Thanks for the other references – ideally, I would read them properly before commenting on your post (I have started on the very technical Kolchinsky and Wolpert 2018, but it will need a few passes for me to grasp it all!). But your post and Kolchinsky and Wolpert’s paper reminded me of an earlier paper of yours – your 2011 paper in Biosystems with Stanley Salthe, ‘Triadic conceptual structure of the maximum entropy approach to evolution’ (https://doi.org/10.1016/j.biosystems.2010.10.014) – so I revisited it.

    I always thought this paper, which tried to build an intellectual bridge between statistical mechanics and the semiotics of CS Peirce, made a great contribution towards a satisfactory physical concept of information. As you know, but readers may not, in that paper you point out that there are two ways to understand entropy and information – one is ‘internal’, as if they could be measured objectively, and the other ‘external’, which makes them inextricably linked to the knowledge or ignorance of an observer. You explore this mainly in relation to biological evolution, but as you point out, it can be applied to the economy too. Both are statistical processes, in which the behaviour of the population (e.g. the economy as a whole) is relatively insulated from the behaviour of its individual components (e.g. firms, investors, consumers) – and certainly from the ‘final causes’ (intentions, purposes, functions) involved in both.

    I increasingly think that any understanding of systems exhibiting emergent complexity and self-organisation requires a deeper grasp not just of what Kolchinsky and Wolpert call ‘semantic information’ – information that an entity has about its environment that contributes to its enduring viability – but also of what might be called ‘semantic ignorance’. (Peter Haff has done some important thinking about this in terms of entities at different strata within systems: https://doi.org/10.1177/2053019614530575.) If all living things understood ecology and natural selection, there would be no evolution. If all economic actors had a perfect model of the economy, it would not be an economy. (Repeat with suitable variations until the end of the universe.)

  3. There is the common adage that the more we know, the more we know what we do not know. I think that this idea has been well formalized by Brooks and Wiley in their theory of entropy and evolution (https://press.uchicago.edu/ucp/books/book/chicago/E/bo5970569.html) and in other contributions, such as Walter Fontana’s. The basic idea is simple. Consider chemical evolution, or chemistry as practiced by humans. There is a space of all possible combinations of atoms and molecules, of which only a part is manifest. There are physical constraints on that space, hence we can exclude some combinations as impossible. But we still do not know which combinations will become possible in the future, for the simple reason that once a new molecule is formed, it further expands the space of possible states, and since we do not yet know this molecule, we cannot know which further combinations it will enable. We only explore what Stuart Kauffman has called the ‘adjacent possible’. There is an important implication: if we consider the realized states and their information content in terms of Shannon information, a growing space of possible states means that the realized states become less probable, hence embody more information (see the sketch below). If the space of possible states expands, however, that implies that our ignorance grows, even though what we know grows in tandem, in terms of information content. That is a very general argument, but it has practical and even ethical implications, also related to Peter Haff’s work: we suffer from an illusion of control precisely because we experience the growth of our knowledge, while in fact the space of possible trajectories of technological evolution grows even faster. In economic terms, the risk accumulates that we do not correctly estimate the opportunity costs of going for one particular trajectory. That favours strategies for designing technosphere evolution that would maximize variety, slack, and even craziness, because this enables us to explore the rapidly multiplying unknown possibilities.
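
    A toy numerical sketch of that implication (my own illustration, assuming for simplicity a uniform distribution over N possible combinations, so that each realized state carries log2(N) bits of Shannon surprisal):

    ```python
    import math

    # Under a uniform distribution over N possible combinations, the Shannon
    # surprisal of any single realized combination is log2(N) bits: as N grows,
    # each realized state embodies more information.
    # Meanwhile the number of combinations not yet explored grows roughly like N,
    # i.e. far faster than the information embodied in what has been realized.
    for n_possible in (10, 1_000, 100_000, 10_000_000):
        bits_per_realized_state = math.log2(n_possible)
        print(f"N = {n_possible:>10,}  ->  {bits_per_realized_state:6.2f} bits per realized state")
    ```

    The information embodied in the realized states grows only logarithmically in N, while the unexplored ‘adjacent possible’ grows roughly linearly – one way to read the claim that our ignorance outpaces our knowledge.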
