This episode is the sixth of eight master classes, showing that no single event ever determines customer behavior on its own; multiple influences can be at work at any one time.
Today’s master class discusses risk and how we, as human beings, deal with it. How people perceive risk and determine probability is an essential part of judgment and decision-making, and research has identified some important biases in this process.
The Gambler’s Fallacy
Let’s begin by discussing a common bias, the Gambler’s Fallacy, which concerns how we assess probability or the likelihood of an outcome. We do this all the time.
For example, if you want to invest, you assess the probability that the investment will pay off and have a good outcome. Or, if you’re going to start a new marketing initiative, you evaluate the likelihood that it will succeed.
However, we are terrible at assessing probability and risk because of the biases we will discuss. Each of these biases affects us differently.
The Gambler’s Fallacy is the assumption that we can predict future random outcomes based on past random outcomes. It’s called the Gambler’s Fallacy because predictions based on this kind of information are no better than chance, and in a casino that means you will likely lose.
For example, Roulette, a wheel game with black and red numbers, offers various ways of placing a bet. You can bet on a number and color, a number only, or a color only, among other things. Many casinos will (helpfully) display the history of the last ten spins to guide your decisions. If you see that the previous four were red, you (naturally) think it’s about time for a black one, so you bet that way.
The problem is that the outcome is random. The marble has no memory and no awareness. There is no “rule” that says it’s time for black; it’s random. The history is meaningless, and your bet is exactly that: a bet. The Gambler’s Fallacy describes how we convince ourselves that the history isn’t useless, that “it’s time” for a black, and that our bet on a random outcome carries some certainty.
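If you want to see why the history board doesn’t help, a short simulation makes the point. The sketch below is purely illustrative and not from the episode; it assumes a European-style wheel with 18 red, 18 black, and one green pocket, and the function name is my own. It compares how often red comes up overall with how often it comes up immediately after four reds in a row.

```python
import random

def spin_after_streak(trials=1_000_000, streak=4):
    """Simulate a European roulette wheel (18 red, 18 black, 1 green pocket)
    and compare P(red) overall with P(red) right after four reds in a row."""
    colors = ["red"] * 18 + ["black"] * 18 + ["green"]
    history = []
    after_streak_reds = after_streak_total = total_reds = 0

    for _ in range(trials):
        result = random.choice(colors)
        total_reds += result == "red"
        # Was this spin preceded by four reds in a row?
        if len(history) >= streak and all(c == "red" for c in history[-streak:]):
            after_streak_total += 1
            after_streak_reds += result == "red"
        history.append(result)

    print(f"P(red) overall:              {total_reds / trials:.4f}")
    if after_streak_total:
        print(f"P(red) after {streak} reds in a row: {after_streak_reds / after_streak_total:.4f}")

spin_after_streak()
```

Both numbers come out at roughly 18/37, or about 48.6 percent. The streak tells you nothing about the next spin.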
The Hot Hand Fallacy
The Hot Hand Fallacy is a bias that follows this same pattern. It applies to sports and probably business, too. It’s trickier, but it still strongly influences people’s behavior.
In the context of sports, the idea behind the Hot Hand Fallacy is that a player on a streak is performing better than their usual stats would predict. The fallacy is assuming that they will keep performing better than usual.
However, statisticians have repeatedly shown that this variation (aka, the hot streak) is natural fluctuation around the player’s mean. From a numbers standpoint, a hot streak can exist, but it is not sustainable. So, betting on the continuation of higher-than-usual success is still a guess. There is no guarantee that the hot streak will continue, and betting on it is no better than a random guess.
The complication of comparing the Hot Hand Fallacy to the Gambler’s Fallacy is that there is skill involved in the Hot Hand Fallacy. Unlike a bet on a roulette wheel, which has a pure chance outcome, athletes do have skills that evolve over time. Moreover, externalities, like competition and rivalry relationships, can affect someone’s level of play in a given period.
The player’s career stats are the ultimate calculation. Sure, there will be hot streaks where they play well above their typical level, and vice versa. Those highs and lows will even out over a career into a mean performance, showing that the peaks and valleys are part of the player’s journey.
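As a rough illustration, the sketch below (my own, not from the episode; the shooting percentage and function name are assumptions) simulates a shooter whose true skill never changes. Streaks of makes still appear naturally, yet the make rate immediately after a streak is about the same as the overall rate, which is the statisticians’ point about variation around the mean.

```python
import random

def hot_hand_check(shots=100_000, skill=0.5, streak=3):
    """Simulate a shooter with a constant make probability and ask whether
    the shot right after a streak of makes is any more likely to go in."""
    results = [random.random() < skill for _ in range(shots)]

    # Collect the outcome of every shot that follows `streak` consecutive makes.
    after_streak = [results[i] for i in range(streak, shots)
                    if all(results[i - streak:i])]

    print(f"Make rate overall:               {sum(results) / shots:.3f}")
    if after_streak:
        print(f"Make rate after {streak} straight makes: "
              f"{sum(after_streak) / len(after_streak):.3f}")

hot_hand_check()
```

With a large number of shots, both rates land close to the shooter’s true skill: the streak itself predicts nothing.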
The Overconfidence Bias
An easier bias to explain is the Overconfidence Bias. It describes how we feel more confident in our probability estimates than we should. Studies have shown this repeatedly by asking people to estimate the likelihood that the stock market will rise by three percent the next day, or that it will rain tomorrow, and then asking how confident they are that their estimate is right. Comparing those confidence ratings with what actually happens makes it clear that people are overconfident in their ability to predict the direction of the stock market or the weather.
I have this bias, too. I bought an SLR camera and was overconfident in my abilities as a photographer and in my ability to learn how to be a better one. The camera had many features I didn’t know how to use (and still don’t). My solution was to use the automatic settings.
Another caveat to the Overconfidence Bias is the Dunning-Kruger effect, named after the two psychologists who identified it. Often, overconfidence depends on how much you objectively know about something. So, if you know very little, you recognize that you know very little and are not very confident. Therefore, your overconfidence bias is reduced.
For example, if I asked you to make a prediction about particle physics, you would probably admit that you don’t know much about it and wouldn’t feel confident in your accuracy (unless you do know about it). However, if you learned even a little about it, your confidence in the accuracy of your prediction would climb, or at least be higher than before you learned anything.
Another example of Overconfidence Bias is related to Customer Experience. Many organizations think they understand the whole area of Customer Experience because they have customers.
As a Customer Experience consultant, I had to learn to be diplomatic as I exposed how little many organizations knew about experiences. I accomplished this diplomacy by asking questions or getting them to think about examples of their own experiences as customers. Then, these so-called experts realized the extent of what they didn’t know, without getting slapped in the face with their overconfidence. Slapping people in the face rarely leads to good business outcomes.
The Hindsight Bias
Finally, we have the Hindsight Bias. This one is straightforward: it describes how, when we look back on an event, we can’t understand how people didn’t see the outcome coming. Looking back, it was so obvious, after all.
Hindsight Bias happens because there is an outcome. Before the event, there were many possible futures. After the event, there isn’t a future; there is only what happened. That’s the only possible outcome, and we lose our ability to see those other potentialities.
This state quickly shifts to judgment. How could you not see that coming?
So What Should You Do About This?
The most important thing to remember about all these biases is that we use them to cope with the challenge of dealing with risk. They give us a way to decide about the probability of an outcome, where we stand to lose something if we are wrong, that feels more rational, even if it isn’t. These biases help us move forward in the face of uncertainty.
I find these biases fascinating and think they significantly affect people’s behavior. Understanding how people perceive (and misperceive) probability is essential in business today, particularly regarding experience design.
Recognizing these biases in customer behavior also sheds light on the moments where customers face risk in your experience and how they might respond. You can then improve those moments, helping customers make decisions that are good for them and also good for your bottom line.
This review sums it up: “The dynamic between the two hosts absolutely makes this podcast. Each brings a unique take on the topic, their own perspective, and plays off each other’s sense of humor. I come away after each episode with a feeling of joy and feeling a bit smarter”.
Listen to the podcast: https://beyondphilosophy.com/podcasts