Climate, Complexity & Randomness


After publishing my last article on the usefulness of climate models, I received extensive criticism noting that several definitions and formulations were rather vague. Reason enough, then, to devote a little more attention to the topic of climate and complex systems.

Complex systems appear in many forms: in the engineering sciences, which model complex processes such as traffic flow; in the information and communication sciences, which deal with the flow and interaction of information within networks; or in the political and economic sciences and their respective fields of study. And, of course, the climate sciences are part of this as well.
Complex systems are characterized by the fact that they consist of innumerable components that may interact with each other. The characteristics of these interactions include non-linearity (B does not necessarily follow from A; effects are not proportional to their causes), adaptivity (the ability to respond to change), emergence (new, higher-level properties not found in the individual components), feedback loops, and a few others that are not essential for a basic understanding[1].

These are a lot of buzzwords that can cause some confusion on first reading, so an example may help.
Complex systems often show a long-term, inherent robustness against minor disturbances. Even a large number of smaller problems will not destroy the entire system (note, of course, that there is a certain limit and at some point every system collapses, but more on this later). Take the Internet. Even if individual providers have network issues, which is very unpleasant for their users, the Internet as a global network will not be threatened, let alone collapse. The infrastructure itself is secured by large nodes, so-called Internet Exchange Points, the most important of which is currently located in Frankfurt. Should the latter run into technical difficulties, the entire system is still unlikely to fail, but considerably more serious consequences can be expected. If several of these nodes are affected at the same time, however, we face a serious problem. The probability of such an event may be low, but it should not be underestimated[2].

It is therefore important to be aware of those dynamics within complex systems that are often referred to as “fat tails”. The term describes the statistical property underlying Black Swan events: rare but highly influential occurrences with far-reaching consequences. What most students encounter in their basic statistical education is the normal distribution and the occurrence probabilities it implies for rare events. Such events are also referred to as five-sigma events, i.e. occurrences whose distance from the mean is five or more standard deviations. In a normally distributed world, these events would be extremely unlikely. Statisticians such as Mandelbrot[3] and Taleb[4], on the other hand, argue that in a complex world fat-tailed distributions are particularly relevant and that normal distributions often underestimate the risk of rare events.
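To make the contrast concrete, here is a minimal sketch comparing the tail of a normal distribution with that of a Pareto distribution, a textbook example of a fat-tailed distribution (the Pareto parameters are illustrative, not calibrated to anything):

```python
import math

def normal_tail(k):
    """P(X > k) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto distribution, a simple fat-tailed model."""
    return (x_min / x) ** alpha if x > x_min else 1.0

k = 5  # a "five-sigma" event
print(f"Normal tail beyond {k} sigma: {normal_tail(k):.2e}")  # 2.87e-07
print(f"Pareto tail beyond x = {k}:   {pareto_tail(k):.2e}")  # 4.00e-02
```

Under normality, a five-sigma event is roughly a one-in-3.5-million occurrence; under the fat-tailed alternative it happens about once in 25 trials. That gap is the whole point of Mandelbrot’s and Taleb’s warning.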

However, not only extreme events are relevant, but also the cumulative influence of smaller variables. The inherent non-linearity of complex systems can lead to processes that appear less than intuitive at first glance. Scott E. Page illustrated this with a very nice example – the algal development of a pond:
Algae growth is often a consequence of increased phosphate concentrations in a body of water. However, it takes a while for a clear pool to become an algae-choked pond. The remarkable thing is that this is not a gradual change, but a very rapid tipping of the entire system. For a while the pond copes quite well with the increased phosphate concentration, but at some point this is no longer the case and the system undergoes a fundamental change into a state in which the algae thrive as never before. The gradual increase in the phosphate concentration ultimately leads to a fat-tail event that completely throws the original system off course[5].
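The jump can be reproduced with a toy version of the shallow-lake phosphorus model of Carpenter, Ludwig and Brock; all parameter values below are illustrative, chosen only to show the tipping behaviour:

```python
# Toy lake model: dP/dt = inflow - loss * P + recycling * P^q / (1 + P^q)
# The last term models phosphorus recycled from the sediment once algae dominate.

def equilibrium_p(inflow, loss=1.0, recycling=1.0, q=8, p0=0.0, dt=0.01, steps=20000):
    """Integrate the phosphorus level P to (near-)equilibrium with simple Euler steps."""
    p = p0
    for _ in range(steps):
        p += dt * (inflow - loss * p + recycling * p**q / (1 + p**q))
    return p

# Raising the inflow slowly changes almost nothing... until it does.
for inflow in (0.2, 0.4, 0.6, 0.7):
    print(f"inflow = {inflow:.1f}  ->  equilibrium P = {equilibrium_p(inflow):.2f}")
```

The first three steps barely move the pond (P roughly tracks the inflow), but between 0.6 and 0.7 the low, clear-water equilibrium disappears and P jumps to a far higher, algae-dominated state: a small extra push produces a disproportionate response.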
Now, this example is a very simple one and should only serve as an analogy, because here we can explain quite well why the change occurs. In far more complex domains this becomes much more difficult, not to say impossible. Which closes the loop and brings us back to the original topic.

The earth’s climate obviously belongs to the group of complex systems. This, however, poses the problem that the behaviour of such systems cannot be predicted exactly, since no model can include all components in its calculations. At this point the climate skeptic feels completely vindicated, because he always knew that these climate scientists and their prognoses cannot be trusted. Without noticing it, however, he falls victim to one of the oldest problems of truth-finding: the problem of induction, which I have already discussed here.
The failure of past forecasts is not a reliable indicator that it must always remain that way. In a complex environment it is impossible to establish unambiguous causal relationships, but as more potential stressors are added, the risk of devastating events can increase.

Climate models face the problem that they are inherently probabilistic: they operate on past data and are de facto unable to provide fully accurate predictions[6]. However, this is not so much an argument against using the models as against the communication strategies derived from them. Media and activists often suggest a certainty of prediction that simply does not exist – and does not have to – in order to demand more sensible environmental protection strategies.
In the past there have repeatedly been warnings of possible catastrophes that then did not occur. Take, for example, the warning of a population explosion in the 1960s, represented by Paul Ehrlich[7]. He spoke of the extinction of all relevant marine animals around 1980. Obviously this did not happen, because countless factors developed in a direction that he and other followers of this idea did not see coming. How could they? Such predictions are doomed from the start.

But to conclude that, because there have been various climate episodes in the past, humanity will, even if the situation worsens, manage to come up with a new invention that prevents the worst, is dangerously naive. In a complex world it is impossible to make reliable predictions from past data alone; rare events simply cannot be calculated. High risk aversion, and thus the protection of the environment, is the truly rational decision. People tend to forget that in complex systems one plus one does not always equal two, but often much more (keyword: non-linearity). Stressors can act super-additively and cause enormous damage.
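A toy illustration of super-additivity (the quadratic damage function is purely hypothetical, chosen only to make the arithmetic visible):

```python
def damage(stress):
    """A hypothetical convex damage function: harm grows with the square of total stress."""
    return stress ** 2

a, b = 3.0, 4.0
print(damage(a) + damage(b))  # 25.0: the two stressors evaluated separately
print(damage(a + b))          # 49.0: the same stressors acting together
```

For any such convex damage function, damage(a + b) exceeds damage(a) + damage(b) for positive stressors, which is exactly the sense in which one plus one can be much more than two.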

Climate research itself is very well aware of the difficulty of prognoses within complex systems and articulates it accordingly. Above all, there is the groundwork by Edward Lorenz and the Lorenz system named after him, known in popular culture as the butterfly effect (ironically, this is often misinterpreted as a call to pay more attention to small details because their influence could be so great – even though countless such details exist and one does not know which are the relevant ones anyway)[8].
The core statement of the Lorenz system is that it is impossible to know all the initial variables of a physical system exactly, from which it follows that the prediction of its long-term behaviour must inevitably fail, even if the system is fully deterministic and quantum effects are ignored.
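This can be demonstrated numerically. The sketch below integrates Lorenz’s 1963 equations with his classic parameters using plain Euler steps (adequate for illustration, not for precision work) and tracks how an initial difference of one part in a million grows:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-1963 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)  # a "butterfly-sized" perturbation
gap = 0.0
for step in range(1, 10001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2500 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {step * 0.005:5.1f}   separation = {gap:.6f}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor itself; from then on the perturbed and unperturbed runs are effectively unrelated, which is why point forecasts of such systems have a finite horizon.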
On the basis of this impossibility, Snyder et al. draw a very meaningful conclusion along the lines of the arguments of Mandelbrot and Taleb. Precisely because making exact predictions is not a realistic option, it is all the more important to develop human systems in a way that ensures their survival even when fat-tail events occur. Above all, this means reducing the number of possible stressors[9].
From this it follows that one should trust the most pessimistic of all models, knowing very well that things can sometimes get even worse, since every model is probably wrong to some degree. One can doubt the reliability of exact forecasts, but this is precisely why one should position oneself extremely conservatively and look for better measures than before. Especially in public discourse, this would have the advantage that one could acknowledge the impossibility of exact forecasts and still demand better environmental protection measures.
Of course, it is easy to say that not everything will be as bad as some models predict. After all, we have been doing quite well in the past. But this time it is different, because the worst-case scenario is not just a few hundred million deaths, but the complete uninhabitability of the earth for the human race. If not even existential threats force us to change our behaviour, then we do not deserve to survive.



[1] Boeing, G. (2016). “Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction”. Systems. 4 (4): 37.


[3] Mandelbrot, B. (1997). Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. Springer

[4] Taleb, N. N. (2007). The Black Swan. Random House and Penguin.

[5] Page, Scott E. Understanding Complexity. The Great Courses.


[7] Ehrlich, Paul R. (1968). The Population Bomb. Ballantine Books.

[8] Lorenz, Edward Norton (1963). “Deterministic nonperiodic flow”. Journal of the Atmospheric Sciences. 20 (2): 130–141.

[9] Snyder, Carolyn W.; Mastrandrea, Michael D.; Schneider, Stephen H. (2011). “The Complex Dynamics of the Climate System: Constraints on our Knowledge, Policy Implications and the Necessity of Systems Thinking”. Philosophy of Complex Systems. Handbook of the Philosophy of Science, Volume 10. Pages 467–505.

Climate Models Are Useless – So What?


The climate debate is still in full swing, and a satisfying conclusion in which all parties happily shake hands is far away. An excellent opportunity to add a little more fuel to the fire.

The deliberately chosen title of this article may sound a little confusing at first. After all, as a veteran climate activist one knows that the majority of scientists agree on the existence of anthropogenic climate change – even if there is uncertainty about the extent of its influence. The inclined climate sceptic, meanwhile, tirelessly emphasizes that past predictions of doom were wrong with reliable regularity: no massive forest dieback, no islands disappearing by the dozen, and even the ozone hole is expected to have closed again by about 2075.

According to this logic, it is only reasonable to be sceptical about the alarmism of Greta Thunberg and the movement Fridays for Future inspired by her. That’s not because the argument behind it is so powerful, but because we humans often don’t understand the world around us.

Evidence and Absence

The human mind has an inherent need to identify causal relationships everywhere in order to explain the world. We see the earth getting warmer and warmer since the Industrial Revolution, so it seems quite clear that humankind is to blame. Or is the climate merely in another warm period, human influence negligibly small, and business as usual the way to go?

Neither position recognizes that it does not really matter who is actually to blame, because any prediction based on the assumption of linear relationships is completely useless. However, before the climate skeptics triumphantly throw their arms in the air, a short detour into the philosophy of science and complex systems is necessary.

Such a system is characterized by the fact that it consists of innumerable components that may interact with each other. Among the properties of this interaction are non-linearity (B does not necessarily follow from A; proportionality is not given), adaptivity (the ability to react to changes), emergence (new, higher-level properties that cannot be found in the individual components) and a few others that are not too important for basic understanding.

The Earth’s climate obviously belongs to the group of complex systems. However, this also poses the problem that the behaviour of such systems is impossible to predict accurately, since no model can include all components in its calculations. At this point the climate skeptic feels completely vindicated, because he always knew that one cannot trust these climate scientists and their forecasts. Without noticing it, however, he falls victim to one of the oldest problems of the search for truth: the problem of induction. As I wrote in a previous post, the idea is often attributed to the Scottish philosopher David Hume, who stated in A Treatise of Human Nature:

“There can be no demonstrative arguments to prove that those cases of which we have had no experience are similar to those of which we have had experience.”

In philosophy of science, this process is called induction. This means that, on the basis of certain premises, a possible general conclusion is derived. Note the use of the term “possible”, because the conclusion does not have to be logically compelling.

The consideration that no general laws can be derived on the basis of incomplete information is, however, already many centuries old. The Pyrrhonian skeptic Sextus Empiricus wrote about this already in the second century:

“When they propose to establish the universal from the particulars by means of induction, they will effect this by a review either of all or of some of the particulars. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite.”

The most popular version of this problem is about the often mentioned black swan. If you go around the world and every swan you see is a white swan, it makes sense to conclude that all swans are white. However, a single black swan is enough to show that the general theory of white swans is not as universal as originally assumed. If these considerations are brought to a common conclusion, the following guiding principle emerges:

“The absence of evidence is not the same as evidence of absence.”

The failure of past predictions is not a reliable indicator that it must always remain so. In a complex environment it is impossible to establish unambiguous causal relationships, but as more and more potential stressors are added, the risk of devastating events may increase.

It is dangerously naive to assume that, because there have always been different climate episodes in the past, humanity will, even if the situation worsens, be able to develop a new invention that prevents the worst from happening. In a complex world it is by no means possible to make reliable predictions from past data alone; rare events simply cannot be calculated. High risk aversion, and thus the protection of the environment, is the most rational decision. People tend to forget that in complex systems one plus one does not always equal two, but often much more. Stressors can act super-additively and cause enormous damage.

Nobody knows what will happen, no model is able to predict the future.
The thing is: none of this is necessary to realize that influencing systems you do not understand can have unintended, negative consequences. By removing, or at least slowing down, some stressors, the risk of extreme events may be reduced. It does not require linear evidence or apocalyptic predictions to be aware of the potential damage one’s actions could cause.

Protecting the environment is humane

Climate skeptics must be credited with one observation: correct predictions are in fact not among the praiseworthy characteristics of human behaviour. However, to conclude from this that everything will somehow work out is not the answer to the problem. As climate change is a very abstract phenomenon for many people, it helps to transfer the argument just made to a more familiar event: the 2008 financial crisis.

The majority of economists did not see such a crash coming, let alone consider it possible. The mathematical models of economics at the time did not foresee such catastrophes, although it was by no means the first global economic crisis. It is in the nature of rare events that they cannot be predicted. The global economy is no less complex than the earth’s climate, so forecasters face the same problems here. Even then, there were a handful of skeptics who warned that the financial system could eventually face a huge collapse. They were right. Does this automatically mean that today’s climate skeptics are on an equally good track? Perhaps. Perhaps not. Nobody knows. It would be desirable, but the risk that it is not so can hardly be dismissed.

The public debate is facing a strange paradox:
Even if anthropogenic climate change does not exist (or not to the extent it appears to) and the world remains in perfect order as usual, what is the harm in at least trying to live more sustainably? Even if one is not completely convinced by the apocalyptic narrative, there are undeniable environmental problems that adversely affect the quality of human life. The earth itself will persist either way. It simply exists. It does not care whether some intelligent monkeys inhabit it or not. The protection of our environment is not so much about the planet as it is about ourselves. It is something deeply humanistic.

Changing the narrative – how to talk about climate change


Considering the renewed debate surrounding the relevance of climate change, it seems almost tragicomic that few actors are willing to allow some degree of dispassionate distance in order to develop strategies that are actually helpful. After all: this is possible.

In an increasingly complex world which makes it difficult for the human mind to find its way around, simple proposals for solving complicated problems become increasingly attractive. This comes as no surprise to anyone who has ever been concerned with how people perceive the world and base their decisions on this perception.

In the 1970s, psychologists Daniel Kahneman and Amos Tversky developed a theory that tries to explain how people decide in situations that involve a certain risk for themselves and for which the outcome is not obvious. In 1979, these considerations finally led to the paper “Prospect Theory: An Analysis of Decision under Risk”[1] – a work that to this day is one of the most influential in economics and which has provided the foundation for the new research field of behavioural economics.

Before Kahneman and Tversky, economic models were dominated by the view that people were always able to make rational decisions to their advantage. This kind of person is usually referred to as homo economicus. While scientific psychology at this time had long been aware that people were anything but brutally calculating, always rationally deciding actors, this insight did not yet seem to have reached the economists of that time. But the pioneering work carried out by Tversky and Kahneman and later continued by Richard Thaler, Cass Sunstein and others was soon undeniable. The credo of the always rationally deciding individual began to crumble faster and faster. Consequently, Kahneman received the Nobel Prize in Economics in 2002 and Thaler in 2017 for their research. The work on decision-making processes led to two central aspects that are important for understanding the public perception of climate change.

Less is not always more

One of these basic assumptions is known as “loss aversion” – the tendency to be more anxious to avoid losses than to achieve gains of comparable value. A concrete example: the loss of satisfaction from losing €100 weighs more heavily than the gain in satisfaction when the same amount arrives as an unexpected windfall. The emotional evaluation of losses and gains is thus asymmetrical, in favour of avoidance behaviour. Closely connected with loss aversion is the so-called “endowment effect”, according to which people attribute a higher value to things they perceive as their property than to comparable objects they do not own. Kahneman, Knetsch and Thaler (1990)[2] argued that loss aversion offers a possible explanation for the frequently observed endowment effect.
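This asymmetry has a standard functional form. The sketch below uses the prospect-theory value function with the median parameter estimates Tversky and Kahneman reported in their 1992 follow-up study (curvature alpha = 0.88, loss-aversion coefficient lambda = 2.25):

```python
ALPHA, LAMBDA = 0.88, 2.25  # median estimates from Tversky & Kahneman (1992)

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0) relative to the status quo."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

print(f"Felt value of gaining 100: {value(100):+.1f}")  # +57.5
print(f"Felt value of losing 100:  {value(-100):+.1f}")  # -129.5
```

Losing €100 feels about 2.25 times as bad as gaining €100 feels good – exactly the asymmetry that makes demanded sacrifices loom so much larger than abstract future benefits.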

If one takes this idea as a starting point, it comes as little surprise that many people are repelled by the often communicated restrictions in their personal lifestyles that are demanded for the prevention of climate change. While climate change is an abstract phenomenon that is emotionally difficult to comprehend, the required renunciation of meat, air travel or children reveals itself as a concretely perceived loss of quality of life. Apparently objective projections that calculate price adjustments for many amenities – measures that are supposed to be necessary to reduce consumption and thus its impact on climate – understandably cause rejection and anger among many people. It is among the great tragedies of political ideologies that they have to work with the people they have, not those they seek.

The widespread call for more renunciation, more taxes and more restrictions will therefore only be met by those who are already willing to accept personal losses in favour of a superior idea. In many cases this is due to the fact that they do not perceive the recommended restrictions as such at all, as they already follow the proposals themselves voluntarily. However, assuming that all other people are just as willing to change their own lives is too short-sighted.

Strategies are needed that recognize that different people have diverse needs and that a “one-size-fits-all solution” will only work in the manifestos of revolutionary ideologists. Much would already have been done for public perception if there were no longer so much talk about what we must sacrifice, but what we can actively do without compromising the perceived quality of life.

In Germany, for example, one piece of good news in recent history has been the initiative by the state of Schleswig-Holstein and the cartoonist Ralph Ruthe to plant trees or donate money for the Day of German Unity – without any coercion or scaremongering. The positive response was accordingly high. Presumably things would have looked different if a “tree tax” for the same purpose had been introduced instead, without giving the public any choice in the matter.

The idea behind this is not new. Richard Thaler and Cass Sunstein popularized the concept of the “nudge”[3], which they also refer to as “libertarian paternalism”, with their book of the same name in 2008. One of the basic assumptions is that it often makes sense to increase the number of available options and to present certain positive alternatives in a way that makes them more salient. For example, instead of banning fast food in canteens, healthier offers are placed in a spot perceived as more accessible. Every customer still has the freedom to decide whether or not a burger with fries ends up on his plate, but he may now consider more often that a vegetarian alternative is occasionally not a bad idea. Private companies have been familiar with this type of customer influence for years and exploit it wherever possible.

The question therefore arises why political actors so often resort to fear as a motivator instead of looking for ways to give people more options to act, or at least to make existing ones more visible. In many cases what is needed is not the intensive search for a panacea but a focus on possibilities that already exist. Instead of pointing out on Earth Day that the resources for the current year have theoretically been used up and that we all, once again, have to renounce, abstain and abandon, it would be much smarter in terms of communication to show how much a private household can save on electricity through simple measures. The goal and the result are the same, but the path is completely different.

Climate change, shmimate change

Many people are annoyed with the debate about climate change. They think it’s all just a big media hysteria and Greta, Fridays for Future and others have no idea what they’re talking about. And anyway, there have always been warm and cold phases on earth.

These thoughts exist, and it is important to take them seriously. A sweeping condemnation of such people as just another bunch of right-wing conservative conspiracy types is not very effective.
Instead, it is worth considering the second basic assumption of behavioural economics: a dichotomy that Kahneman describes as “System 1” and “System 2”[4].

System 1 is the unconscious, intuitive mechanism that reacts quickly, automatically and emotionally and provides the basis for most everyday decisions. It would be extremely impractical for people to consciously think through every decision; life as we know it would probably be impossible. However, we should not mistake System 2 for the actual decision maker. This second, logically calculating, consciously thinking system gives us the illusion that we are always in control and, of course, always able to make rational decisions to our own benefit. It is not surprising that the idea of homo economicus survived for so long. We often perceive it as our lived reality, simply because we lack knowledge of the unconscious processes that shape our own existence.

The reason why nudging can often be a useful approach to promoting desired behaviour is that System 1 makes many of the decisions nudges can influence. Climate change, however – an abstract phenomenon perceived as distant, somewhere beyond our everyday reality – is accessible only to the more cumbersome System 2, and only if the arguments presented are convincing and do not clash too strongly with the perceptions of System 1. Who has the time and motivation to deal with technical discussions of climate theory? This is not meant as criticism, but as a simple description of the status quo as it presents itself to many people. It is completely normal that we avoid taxing ourselves when it does not seem necessary.

So should we just throw our hands up in despair, because nothing helps anyway and nobody really knows what they are talking about? Not quite.

Black swans and complex systems

In 2007 the statistician Nassim Nicholas Taleb published the book The Black Swan: The Impact of the Highly Improbable[5]. One of its key messages is that human life and the ecosystem surrounding it form an unbelievably complex system of countless variables that we cannot possibly all include in our assessment of the world. Hence: events may occur that contradict all statistical predictions yet sometimes have catastrophic consequences – the eponymous Black Swans. It is part of the irony of history that one year later an unexpected world economic crisis had disastrous consequences in many countries; hardly anyone had anticipated such a far-reaching event, and the forecasting models in use gave no indication of it. This does not mean that statistical methods are useless, quite the opposite. There are many areas in which statistical observations can be very helpful. However, it is no less important to occasionally consider the limitations of these methods and to understand what can reasonably be predicted and what cannot.

According to Taleb, climate models belong in the latter category. Since it is impossible to know all relevant factors, derived forecasts based on incomplete information are also relatively worthless. Interestingly, this consideration does not lead him to the same conclusion as many critical commentators writing below the articles of numerous climate change reports.

Following his reasoning, it is absolutely necessary to be as conservative (in the literal sense of the word, meaning “preserving”) as possible with regard to environmental aspects. The possibility of a catastrophic Black Swan event exists and its effects can be so devastating that we will never recover. So even if all the models and predictions are useless (a position about which there is likely to be a good deal of controversy), it is precisely this uncertainty that makes risk-avoiding behaviour very reasonable.

One does not have to share Taleb’s rigorous rejection of statistical forecasts in complex systems to recognize the attractiveness of the argument. It does not take complicated mathematical procedures to admit that one does not understand most things, and that it may therefore be a good idea to reduce the risk of a devastating event.

Hopefully, public discourse will shift in favour of a positive, less fear-centered debate. Instead of insisting on ever more prohibitions, restrictions and taxes, thinking about more attractive alternatives would be a very welcome change. Nobody will benefit if the major global issues of our time are rejected because we failed to illustrate their significance adequately. Most people do not like to feel that their freedom of choice is being restricted. That is normal, that is human. We therefore need more good options, not fewer.


[1] Kahneman, D. & Tversky, A. (1979). “Prospect Theory: An Analysis of Decision under Risk”. Econometrica. 47 (4): 263–291.

[2] Kahneman, D.; Knetsch, J.; Thaler, R. (1990). “Experimental Tests of the Endowment Effect and the Coase Theorem”. Journal of Political Economy. 98 (6): 1325–1348.

[3] Thaler, Richard H.; Sunstein, Cass R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press

[4] Kahneman, Daniel (2011). Thinking, Fast and Slow. Macmillan.

[5] Taleb, Nassim Nicholas (2007), The Black Swan: The Impact of the Highly Improbable, Random House