Nassim Nicholas Taleb’s The Black Swan explores the nature of what we perceive as random events, as well as the logical pitfalls that cause us to miss out on the bigger picture. He calls these seemingly random events, which often have profound consequences for the individual and even for societies as a whole, “Black Swans.”
By giving us a better understanding of our own shortcomings when it comes to making predictions, Taleb offers advice that can help us to recognize when our judgment is clouded by the desire to fit information into neat, easy-to-understand narratives.
In these blinks, you’ll find out how to avoid mistaking noise for knowledge, and how to make better use of your ignorance.
You’ll learn why thinking like a turkey could be bad for your health.
You’ll also find out why the greatest threat to a casino might actually have nothing to do with gambling.
Finally, you’ll discover why “knowing what you don’t know” can save you from losing your life savings.
“Black Swans” are events thought to lie outside the realm of possibility, and yet happen anyway.
As human beings, we are particularly good at turning all of the stimuli from our environment into meaningful information. This is a talent that has allowed us to create the scientific method, philosophize about the nature of being, and invent complex mathematical models.
But just because we’re able to reflect on and order the world around us doesn’t necessarily mean we’re very good at it.
For one thing, we’re inclined to be narrow-minded in our beliefs about the world. Once we have an idea about how the world functions, we tend to cling to it.
But because human knowledge is constantly growing and evolving, this dogmatic approach makes no sense. Just two hundred years ago, for example, doctors and scientists were supremely confident in their knowledge of medicine, yet today their confidence seems ludicrous: just imagine going to your doctor complaining of a common cold, and being given a prescription for snakes and leeches!
Being dogmatic about our beliefs makes us blind to those concepts that fall outside the paradigms we’ve already accepted as true. How, for example, is it possible to understand medicine if you’re not aware that germs exist? You might come up with a sensible explanation for illness, but it will be flawed by a lack of crucial information.
This kind of dogmatic thinking can result in huge surprises. We’re sometimes surprised by events not because they’re random, but because our outlook is too narrow. Such surprises are known as “Black Swans,” and they can prompt us to fundamentally reconsider our worldview:
Before anyone had ever seen a black swan, people assumed that all swans were white. Every depiction and every imagined swan was therefore white; whiteness seemed to be an essential part of “swanness.” So when people encountered their first black swan, it fundamentally transformed their understanding of what a swan could be.
As you’ll see, Black Swans can be as trivial as learning that not all swans are white, or as life-changing as losing everything because of a stock market crash.
Black Swan events can have earth-shattering consequences for those who are blind to them.
The effect of a Black Swan isn’t the same for us all. Some will be hugely affected by them, others hardly at all. The power of their effect is largely determined by your access to relevant information: the more information you have, the less likely you are to be hit by a Black Swan; and the more ignorant you are, the more you are at risk.
This can be seen in the following scenario:
Imagine making a bet on your favorite horse, Rocket. Because of Rocket’s build, her track record, the skill of the jockey, and the poor competition, you believe that Rocket is the safest bet and gamble everything you own on the horse winning.
Now imagine your surprise when the starting pistol is fired and Rocket not only doesn’t leave the gate but instead simply lies down on the track.
This would be a Black Swan event. Given the information you’d gathered, Rocket winning was a safe bet, yet you lost everything the instant the race began.
But this event will not be a tragedy for everyone. For example, Rocket’s owner made a fortune by betting against his own horse. Unlike you, he had additional information, knowing that Rocket was going to go on strike to protest animal cruelty. Just that small amount of information saved him from having to suffer a Black Swan event.
The impact of Black Swans can also differ widely in scale. Rather than affecting only individuals, a Black Swan can sometimes strike an entire society. When this happens, it can transform how the world works, impacting many areas of society, like philosophy, theology and physics.
For example, when Copernicus proposed that the Earth is not the center of the universe, the consequences were immense, as his discovery challenged both the authority of the ruling Catholics and the historical authority of the Bible itself.
In the end, this particular Black Swan helped to establish a new beginning for all of European society.
We are very easily fooled by even the most basic of logical fallacies.
Although humans seem to be the most intelligent animals on the planet, there’s still a long way to go before we’ll have outgrown all of our bad habits.
One such habit is creating narratives based on what we know of the past. While we tend to believe that the past is a good indication of the future, this is often a fallacy. It leaves us prone to mistakes because there are simply too many unknown factors which could go against our narratives.
For example, imagine you’re a turkey living on a farm. Over the years the farmer has fed you, let you roam freely, and provided a place to live. Using the past as your guide, there is no reason to expect that tomorrow should be any different.
Alas, tomorrow is Thanksgiving, and you are decapitated, filled with spices, thrown in an oven, and devoured by those who had housed and fed you.
As this example shows, believing that we can base predictions about the future on knowledge of the past is a fallacy with potentially dire consequences.
A similar fallacy is confirmation bias: we often search for evidence only for those beliefs we’ve formed already, even to the extent that we ignore any evidence that contradicts them.
When we encounter information that goes against what we already believe, we’re unlikely to accept it and even less likely to investigate further. If we do investigate, we’ll probably look for sources that undermine this information.
For example, if you strongly believe that “climate change” is a conspiracy but then happen to see a documentary called “The Undeniable Evidence for a Changing Climate,” it’s likely that you’ll be upset.
If, after this, you did a web search for information about climate change, the search terms you’d use would more probably be “climate change hoax” than “evidence for and against climate change.”
While both of these fallacies are anti-scientific, it turns out that we can’t do much to avoid such bad reasoning: it’s simply in our nature.
The way that our brains categorize information makes accurate predictions extremely difficult.
During our evolution, the human brain developed certain ways to categorize information. While these served us well in the wild, where we needed to learn and adapt quickly to dangerous surroundings, they are poorly suited to today’s complex environments.
For instance, one way we incorrectly categorize information is the so-called narrative fallacy, where we create linear narratives to describe our current situation.
This is due to the massive amount of information we’re faced with every day. To make sense of it all, our brains select only the information they consider important. For example, while you probably remember what you ate for breakfast this morning, it’s doubtful you remember the color of everyone’s shoes on the subway.
In order to give meaning to these unconnected bits of information, we turn them into a coherent narrative. For example, when you reflect on your own life, you probably select only certain events as meaningful, and you order those events into a narrative that explains how you became who you are: you love music, say, because your mom used to sing Beatles songs to you every night.
However, creating such narratives is a poor way to gain any meaningful understanding of the world. This is because the process works only by looking back on the past, and doesn’t take into account the near-infinite explanations that are possible for any one event.
The fact is that tiny, apparently insignificant events can have unpredictable, major consequences.
Imagine, for example, that a butterfly flapping her wings in India causes a hurricane one month later in New York City.
If we catalogue each stage of cause and effect in this process as they occur, then we’d be able to see a clear, causal relationship between events. But since we only see the outcome – in this case, the hurricane – then all we can do is guess at which of the simultaneously occurring events actually influenced that outcome.
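The butterfly image has a simple mathematical analogue: in a chaotic system, a microscopic change in the starting point eventually produces a completely different outcome. Here is a minimal sketch using the logistic map; the map, the parameter r = 4, and the size of the perturbation are illustrative choices, not from the book:

```python
# Chaotic logistic map: x -> r * x * (1 - x) with r = 4.
# The "butterfly" is a perturbation of one part in ten billion.
def trajectory(x, r=4.0, steps=60):
    points = []
    for _ in range(steps):
        x = r * x * (1 - x)
        points.append(x)
    return points

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)

# At first the two runs are indistinguishable; within a few dozen
# steps they have nothing to do with each other.
print(abs(a[0] - b[0]))                        # still microscopic
print(max(abs(p - q) for p, q in zip(a, b)))   # grows to order 1
```

Knowing the rule, we could trace every step of the divergence; seeing only the final values, we could never guess that a difference of one ten-billionth was the cause.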
We don’t easily distinguish between scalable and non-scalable information.
We humans have developed many methods and models for categorizing information and making sense of the world. Unfortunately, however, we’re not very good at distinguishing between the different types of information – most crucially between “scalable” and “non-scalable” information.
But the difference between those types is fundamental.
Non-scalable information – such as body weight and height – has a definite, statistical upper and lower limit.
Body weight is non-scalable because there are physical limitations on how much a person can weigh: while it is possible for someone to weigh 1000 lbs, it is physically impossible for anyone’s weight to reach 10,000 lbs. Because the properties of such non-scalable information are clearly limited, it’s possible for us to make meaningful predictions about averages.
On the other hand, non-physical or fundamentally abstract things, like the distribution of wealth or album sales, are scalable. For example, if you sell your album in digital form through iTunes, there’s no limit to how many sales you might make, because distribution is not limited by the number of physical copies you could manufacture. Furthermore, because the transactions take place online, there is no shortage of physical currency to prevent you from selling a trillion albums.
This difference between scalable and non-scalable information is crucial if you want to have an accurate picture of the world. And trying to apply those rules that are effective with non-scalable information to scalable data will only lead to mistakes.
For example, say that you want to measure the wealth of the population of England. The simplest way to do this is to work out the per capita wealth: add up the total wealth and divide that figure by the number of citizens.
However, wealth is actually scalable: it’s possible for a tiny percentage of the population to own an incredibly large percentage of the wealth.
By merely calculating per capita wealth, you end up with an average that probably doesn’t accurately reflect how wealth is actually distributed among the citizens of England.
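A toy calculation makes the point: with a scalable quantity like wealth, a single extreme value drags the per capita figure far away from what is typical. All the numbers below are invented purely for illustration:

```python
# Hypothetical net worths: 99 citizens at 30,000 each, plus one billionaire.
wealth = [30_000] * 99 + [1_000_000_000]

per_capita = sum(wealth) / len(wealth)       # the "average citizen"
typical = sorted(wealth)[len(wealth) // 2]   # the median citizen

print(per_capita)  # 10029700.0 -- over 300x what almost everyone owns
print(typical)     # 30000
```

With a non-scalable quantity like height, no single person could distort the average this way; that is exactly why rules that work for non-scalable data mislead us when applied to scalable data.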
We are far too confident in what we believe we know.
We all like to keep ourselves safe from harm, and one of the ways we do this is by assessing and managing risk. This is why we buy things like accident insurance, and why we try not to “put all our eggs in one basket.”
Most of us try our best to measure risks as accurately as possible to ensure that we don’t miss out on opportunities, while also ensuring that we don’t do something we may later regret.
To achieve this we have to evaluate any possible risks and then measure the probability that these risks will materialize.
For example, imagine you are in the market to purchase insurance. You want to buy the kind of policy that will protect you against the worst-case scenario, but also will not be a waste of money. In this case, you’d have to measure the threat of disease or accident against the consequences of those events transpiring, and then make an informed decision.
Unfortunately, we are far too confident that we know all the possible risks we need to protect ourselves against. This overconfidence is called the ludic fallacy: the tendency to handle risk as we would a game, with a set of rules and probabilities we can determine before we play.
Yet treating risk like a game is itself risky business. For example, casinos want to make as much money as possible, which is why they have elaborate security systems and ban players who win too much, too frequently.
But their approach is based on the ludic fallacy. The major threats to casinos may not be lucky gamblers or thieves, but rather, for example, a kidnapper who takes the owner’s child hostage, or an employee failing to file the casino’s earnings with the IRS. The casino’s greatest threats might be completely unpredictable.
As this shows, no matter how hard we try, we’ll never be able to accurately calculate every risk.
Next, we’ll find out how being aware of our ignorance is far better than remaining unaware of it.
Taking an inventory of what you don’t know will help you to assess risks better.
We’ve all heard the phrase “knowledge is power.” However, sometimes we’re constrained by what we know, and at these times recognizing what we don’t know is far more advantageous.
Indeed, by focusing only on what you know, you limit your perception of all the possible outcomes of a given event, and create fertile ground for the occurrence of Black Swan events.
For example, say you want to purchase stocks in a company, but your knowledge of stock statistics is limited to the period 1920-28 – one year before the greatest stock market crash in US history. In that case, you’d observe a few small dips and peaks, but in general you’d notice that the trend is upward. So, thinking that this trend must continue, you spend your life savings on stocks. The next day, however, the market crashes and you lose everything you have.
If you’d studied the market a little more, you would have observed the numerous booms and busts throughout history. By focusing only on what we know, we open ourselves to great and unmeasured risks.
On the other hand, if you can at least establish what it is that you don’t know, then you’d be able to greatly reduce your risk.
Good poker players understand this principle well, as it’s crucial to their success at the game.
While they know the rules of the game, and the probabilities that their opponents have better cards than they do, they are also aware that there is certain relevant information they don’t know – such as their opponent’s strategy, and how much their opponent can stand to lose.
Their knowledge of these unknowns contributes to a strategy that doesn’t focus solely on their own cards, thus enabling them to make a far more informed assessment of the risk.
Having a good understanding of our limitations as human beings can help us to make better choices.
Perhaps the best defense against falling into the cognitive traps we’ve seen is a good understanding of the tools that we use to make predictions, and their limitations.
While knowing our own limitations certainly won’t save us from every blunder we’ll ever make, it can at least help us to reduce our bad decision-making.
For instance, if you’re aware that you are subject to cognitive bias, like everyone else, then it’s much easier to recognize when you’re only looking for information that confirms what you already believe to be true.
Likewise, if you know that we humans like to organize everything into neat, causal narratives, and that this kind of approach simplifies the complexity of the world, then you’ll be more likely to search for further information to gain a better view of the “whole picture.”
Just this small amount of critical self-analysis can help you gain a competitive advantage over others in your field.
It’s certainly preferable to be aware of your shortcomings. For example, if you know that there will always be unforeseeable risks in pursuing any opportunity, despite how promising that opportunity seems, you’ll probably be less inclined to invest heavily in it.
While we cannot triumph over randomness or our limited capacity for understanding the vast complexity of our world, we can at least mitigate the damage inflicted by our ignorance.
The key message in this book:
Even though we’re constantly making predictions about the future, we’re actually terrible at it. We put far too much confidence in our knowledge and underestimate our ignorance. Our over-reliance on methods that seem to make sense, our basic inability to understand and define randomness, and even our biology, all contribute to poor decision making, and sometimes to “Black Swans” – events we believe to be impossible but which end up redefining our understanding of the world.
Be suspicious of “because.”
Although it is in our nature to look for linear, causal relationships between events in order to make sense of this complex world, the reality is that we are pitiful both at predicting the future and at establishing causes for the present. Rather than feeding our desire to see events in clear-cut cause and effect, it’s better to consider a number of possibilities without being married to any single one.
Know what you don’t know.
If you want to make meaningful predictions about the future – and if you are buying insurance, making investments, attending college, changing jobs, conducting research, or just being a human, you certainly do – then it’s simply not enough to take all of the “knowns” into consideration. That leaves you with only a partial understanding of the risks involved in your prediction. Instead, you should also be consciously aware of what you don’t know, so that you don’t unnecessarily limit the information you are working with.