Bad rational decisions - (ENG)

A few years ago, when I was finishing my philosophy degree via distance learning and had little left to complete, I decided to take on a massive course load in my final year without telling anyone. Only the head of studies at UNED (the Spanish distance-learning university) and I knew.

Since I have a healthy enough social circle, nobody noticed my relative absence from social plans (some thought I was hanging out with others, and vice versa).

During those months, I read fascinating things that changed my life, and above all, I read extensively on a topic that has always obsessed me: rationality. I wrote my final degree project on rationality in the 21st century (a topic most people around me found extremely boring).

2 Lessons on Rationality

As I said, I learned plenty of fascinating things, and I would like to summarize the two most important lessons:

1) The first lesson is that practically nobody has any idea what rationality means, because they haven't stopped to think about it for more than two seconds. People continuously do things in the name of a supposed "rationality," and it is an absolute farce. This isn't just done by "stupid" people but by entire philosophical schools that uncritically accept rationality as a central element. For example, it is absurd to oppose it to "emotions," and in fact, over-relying on a blind and "neutral" rationality can lead to catastrophic consequences (like the famous thesis on the "dark side" of reason during World War II¹).

2) The second (closely linked to the first) is that there is no absolute "Rationality" with a capital R. The other day I saw a Silicon Valley CEO on social media defining himself as "rational 100% of my time". Rationality is always instrumental: it is always a rationality for something and for someone. Rationality is moving toward a goal efficiently and effectively. When someone claims to be doing something rational, we must always ask: rational toward what objective (or under which values)?

And so far, so good: rationality is not very problematic when it is descriptive, when we aim to understand something better.

From "Being" to "Doing"

The problem comes when we set out to act rationally. That is where the trouble begins, particularly the ethical² problems. If reason doesn't tell us what we should do, how do we choose? Since "rationality" (reason) cannot give us the final answer, the only way out left to us is pure action.

There is a kind of basic unit of rationality: "the decision". Although we are not aware of the vast majority of decisions we make (some stemming from ignorance, omission, or indifference), the decision is a first cousin of action. Decisions are the universe in which actions operate. There is a lot of decision-making theory out there, but most of it is stupid, especially when it seeks credibility through equations and sophisticated models. Making decisions is managing different kinds of ignorance.

People are quite good at making small "local" decisions, routine day-to-day choices, and in that sense, there is little we are doing wrong. The drama comes with important decisions: we are blind to everything we don't know. The table below is an idea I come back to constantly; it represents the different domains of knowledge (or the relationship between decisions and ignorance):

(1) What we know we know            (3) What we know we don't know
(2) What we don't know we know      (4) What we don't know we don't know

Quadrant number 4 (what we don't know we don't know) is where surprises live. It is the territory of events we cannot foresee precisely because we don't know they exist. And paradoxically, it is these unpredictable events that have the most impact on our lives, which is the central argument of Taleb's black swans.

And this is precisely the problem: intelligent people (especially technical ones) tend to overvalue quadrants 1 and 3 (what they think they know and what they know they don't know) and dramatically undervalue quadrant 4. It is the expert bias: the more you know about a subject, the more you trust that your mental map coincides with the real territory. But maps are always simplifications, and simplifications always omit details that can be crucial.

And here appears one of the cruelest ironies of human cognition: we are absolutely brilliant at explaining why things happened the way they did, but really bad at predicting what will happen. In retrospect, everything seems inevitable and logical. "Of course Trump would win," "obviously the pandemic had to progress like this," "it was evident that Catalonia wouldn't become independent". But none of these things were obvious before they happened. Our mind is a self-justification machine, creating coherent narratives after the fact.

That is why the most catastrophic decisions are not made by the ignorant, but by experts who are too sure of themselves. The ignorant person is at least afraid; the expert believes they control variables they don't even know exist.

And here lies the great misunderstanding about rationality: we believe that being rational means maximizing information and calculation, when in reality it means recognizing the limits of calculation. The decisions that mark us most—choosing a career, who to share our life with, or where to live—rarely follow any cost-benefit analysis in an Excel sheet. They are bets on things we don't know will work.

In the end, good decisions are not those that follow the rules of rationality, but those that survive contact with reality. I don't pretend to know what the best option is for you, because the quadrant of the unpredictable is personal and non-transferable. But if we understand why decisions are bad, we are already closer... We only need to know how to make them good, but that will be a topic for another day :)


Author's notes ☝🤓

1. Here I am referring to Adorno and Horkheimer's critique of instrumental rationality in Dialectic of Enlightenment, where they argue that reason, when it becomes a technical and neutral tool, can facilitate horror on a large scale.
2. Hume already showed, with his famous is–ought gap (often discussed alongside the 'naturalistic fallacy'), that the jump from "is" to "ought" is impossible to make or justify "rationally". In other words, "how things are" is not a compelling reason to justify "how things should be," and this is possibly one of the most radically anti-conservative reflections ever uttered.