Key Points from Think Twice by Michael Mauboussin

According to research by Stanovich and others, if you explain to intelligent people how they might go wrong with a problem before they decide, they do much better than if they solve the problem with no guidance.

The second experiment showed the failure of pure rationality. Here, Richard Thaler, one of the world’s foremost behavioral economists, asked us to write down a whole number from zero to one hundred, with the prize going to the person whose guess was closest to two-thirds of the group’s average guess. In a purely rational world, all participants would coolly carry out as many levels of deduction as necessary to get to the experiment’s logical solution—zero. But the game’s real challenge involves considering the behavior of the other participants. You may score intellectual points by going with naught, but if anyone selects a number greater than zero, you win no prize.
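The iterated deduction Thaler's game demands can be sketched in a few lines of code (my own illustration, not from the book): each level of reasoning assumes everyone else reasons one level less and takes two-thirds of the previous level's guess.

```python
# k-level reasoning in the guess-two-thirds game (illustrative sketch).
# Level 0 naively guesses the midpoint of 0-100; each deeper level
# best-responds by taking two-thirds of the previous level's guess.
def k_level_guesses(start=50.0, levels=10):
    guesses = [start]
    for _ in range(levels):
        guesses.append(guesses[-1] * 2 / 3)
    return guesses
```

Ten levels of deduction already push the guess below 1; infinitely many levels reach the fully rational answer, zero. Winning, though, means guessing how many levels of reasoning the average participant will actually apply.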

The beauty-contest dynamic also operates in the stock market and amplifies bubbles: each participant tries to guess which stocks the other participants will find most desirable. This is why prices overshoot; bidders overbid and inflate the bubble.

Most spend their time gathering information, which feels like progress and appears diligent to superiors. But information without context is falsely empowering. If you do not properly understand the challenges involved in your decision, this data will offer nothing to improve the accuracy of the decision and actually may create misplaced confidence.

In a probabilistic environment, you are better served by focusing on the process by which you make a decision than on the outcome.

These contrasting points of view reveal our first mistake, a tendency to favor the inside view over the outside view.6 An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs.

The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened.

Remarkably, the least capable people often have the largest gaps between what they think they can do and what they actually achieve.

The second is the illusion of optimism. Most people see their future as brighter than that of others.

Finally, there is the illusion of control. People behave as if chance events are subject to their control.

Study the distribution and note the average outcome, the most common outcome, and extreme successes or failures.

If the properties of the system change, drawing inference from past data can be misleading.

Failure to entertain options or possibilities can lead to dire consequences, from a missed medical diagnosis to unwarranted confidence in a financial model.

Last, a mental model is an internal representation of an external reality, an incomplete representation that trades detail for speed.5 Once formed, mental models replace more cumbersome reasoning processes, but are only as good as their ability to match reality. An ill-suited mental model will lead to a decision-making fiasco.6

we start with an anchor and then move toward the right answer. But most of us stop adjusting once we reach a value we deem plausible or acceptable.

The availability heuristic, judging the frequency or probability of an event based on what is readily available in memory, poses a related challenge. We tend to give too much weight to the probability of something if we have seen it recently or if it is vivid in our mind.

Failure to reflect reversion to the mean is the result of extrapolating earlier performance into the future without giving proper weight to the role of chance. Models based on past results forecast in the belief that the future will be characteristically similar to history. In each case, our minds—or the models our minds construct—anticipate without giving suitable consideration to other possibilities.

The confirmation bias occurs when an individual seeks information that confirms a prior belief or view and disregards, or discounts, evidence that counters it.19 Robert Cialdini, a social psychologist at Arizona State University, notes that consistency offers two benefits. First, it permits us to stop thinking about an issue, giving us a mental break. Second, consistency frees us from the consequence of reason—namely, changing our behavior. The first allows us to avoid thinking; the second to avoid acting.

Let’s face it: we all have finite attention bandwidths. If you dedicate all that bandwidth to one task, none is left over for anything else. So people should be alert to striking a balance between nitty-gritty problem solving and a broader context.

Stressed people struggle to think about the long term. The manager about to lose her job tomorrow has little interest in making a decision that will make her better off in three years. Psychological stress creates a sense of immediacy that inhibits consideration of options with distant payoffs.

The subprime mess revealed that what may appear to be optimal for the individual agents in a complex system may be suboptimal for the system as a whole.

A decision-making journal is a cheap and easy routine to offset hindsight bias and encourage a fuller view of possibilities.

Stress, anger, fear, anxiety, greed, and euphoria are all mental states antithetical to quality decisions.

“It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones.”

Yet in reality, half of all people must be below average, and so you should sort out when you are likely to be one of them.

With the diversity prediction theorem in hand, we can flesh out when crowds predict well. Three conditions must be in place: diversity, aggregation, and incentives. Each condition clicks into the equation. Diversity reduces the collective error. Aggregation assures that the market considers everyone’s information. Incentives help reduce individual errors by encouraging people to participate only when they think they have an insight.
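Scott Page's diversity prediction theorem is an exact identity for squared error: the crowd's error equals the average individual error minus the diversity of the predictions. A quick numerical check, using made-up guesses of my own rather than any data from the book:

```python
# Diversity prediction theorem:
#   collective error = average individual error - prediction diversity
# Hypothetical guesses of some quantity whose true value is 49.
predictions = [48.0, 55.0, 60.0, 42.0, 50.0]
truth = 49.0

mean_pred = sum(predictions) / len(predictions)
# Squared error of the crowd's average guess.
collective_error = (mean_pred - truth) ** 2
# Average squared error of the individual guesses.
avg_individual_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
# Variance of the guesses around their own mean.
diversity = sum((p - mean_pred) ** 2 for p in predictions) / len(predictions)
```

The identity holds for any numbers, which is why diversity mechanically reduces the collective error: the crowd can never be worse than its average member.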

In other words, what comes in through our senses influences how we make decisions, even when it seems completely irrelevant in a logical sense. Priming is by no means limited to music. Researchers have manipulated behavior through exposure to words, smells, and visual backgrounds.

Immediately after being exposed to words associated with the elderly, primed subjects walked 13 percent slower than subjects seeing neutral words.13 Exposure to the scent of an all-purpose cleaner prompted study participants to keep their environment tidier while eating a crumbly biscuit.14 Subjects reviewing Web pages describing two sofa models preferred the more comfortable model when they saw a background with puffy clouds, and favored the cheaper sofa when they saw a background with coins.15

For priming to work, the association must be sufficiently strong and the individual must be in a situation where the association sparks behavior.

In reality, many people simply go with default options.

Affective responses occur quickly and automatically, are difficult to manage, and remain beyond our awareness. As Robert Zajonc, a social psychologist, said, “In many decisions affect plays a more important role than we are willing to admit. We sometimes delude ourselves that we proceed in a rational manner and weigh all the pros and cons of the various alternatives. But this is probably seldom the case. Quite often ‘I decided in favor of X’ is no more than ‘I liked X.’ ”19 Affect is situational because it often follows vivid outcomes or a specific individual experience.

Zimbardo explains the factors that make the situation so forceful. First, situational power is most likely in novel settings, where there are no previous behavioral guidelines. Second, rules—which may emerge through interaction or be predetermined—can create a means to dominate and suppress others because people justify their behavior as only conforming to the rules. Third, when people are asked to play a certain role for a prolonged period, they risk becoming actors who can’t break from character. Roles shut people off from their normal lives and accommodate behaviors they would generally avoid. Finally, in situations that lead to negative behavior, there is often an enemy—an outside group. This is especially pronounced when both the in-group and out-group stop focusing on individuals.

To overcome inertia, Peter Drucker, the legendary consultant, suggested asking the seemingly naïve question, “If we did not do this already, would we, knowing what we now know, go into it?”

Yet the tendency to interpret the behavior of a complex system from its components is as common as it is wrong.

“The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of the simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear.”6 If you want to understand an ant colony, don’t ask an ant. It doesn’t know what’s going on. Study the colony.

A star’s performance relies to some degree on the people, structure, and norms around him—the system. Analyzing results requires sorting the relative contributions of the individual versus the system, something we are not particularly good at. When we err, we tend to overstate the role of the individual.

“Understanding how well intentioned, intelligent people can create an outcome that no one expected and no one wants is one of the profound lessons of the game.”

Children—indeed people of all ages—do not behave the same under all conditions. They adjust their behavior to reflect their social circumstances.

Theories improve when researchers test predictions against real-world data, identify anomalies, and subsequently reshape the theory. Two crucial improvements occur during this refining process. In the classification stage, researchers evolve the categories to reflect circumstances, not just attributes. In other words, the categories go beyond what works to when it works. In the definition stage, the theory advances beyond simple correlations and sharpens to define causes—why it works.

Outsourcing is not universally good. For example, outsourcing does not make sense for products that require the complex integration of disparate subcomponents.

You must be very alert to the correlation-causality mistake. The fact that we like to make explicit cause-and-effect connections only adds to the challenge. When you hear of a causal connection, step carefully through the three conditions to see if the claim holds up. You will most likely be surprised at how rarely you can firmly establish causation.

Changing the decision-making process as circumstances dictate is a fundamental challenge and can be psychologically taxing.

People have an innate desire to link cause and effect and are not beyond making up a cause for the effects they see. This creates the risk of observing a correlation—often the result of chance—and assuming causation. When you hear of a correlation, be sure to consider the three conditions: time precedence, relationship, and that no additional factor is causing the other two to correlate.

The focus of this chapter is phase transitions, where small incremental changes in causes lead to large-scale effects. Philip Ball, a physicist and writer, calls it the grand ah-whoom.4 Put a tray of water into your freezer and the temperature drops to the threshold of freezing. The water remains a liquid until—ah-whoom—it becomes ice. Just a small incremental change in temperature leads to a change from liquid to solid.

This shows why critical points are so important for proper counterfactual thinking: considering what might have been.7 For every phase transition you do see, how many close calls were there?

Repeated, good outcomes provide us with confirming evidence that our strategy is good and everything is fine. This illusion lulls us into an unwarranted sense of confidence and sets us up for a (usually negative) surprise. The fact that phase transitions come with sudden change only adds to the confusion.

When asked to decide about a system that’s complex and nonlinear, a person will often revert to thinking about a system that is simple and linear. Our minds naturally offer an answer to a related but easier question, often with costly consequences.

For the most part, people are scorched not by black swans, the unknown unknowns, but rather by their failure to prepare for gray swans.

In dealing with systems of collectives, the ideal is to get cost-effective exposure to positive events and to insure against negative events. While we are getting more sophisticated, the financial instruments that we see in the market that are tied to extreme events are often mispriced.28 In the end, the admonishment of investment legend Peter Bernstein should carry the day: “Consequences are more important than probabilities.” This does not mean you should focus on outcomes instead of process; it means you should consider all possible outcomes in your process.29

The idea is that for many types of systems, an outcome that is not average will be followed by an outcome that has an expected value closer to the average.

For example, consider how a golfer may score on two rounds on different days. If the golfer scores well below his handicap for the first round, how would you expect him to do for the second one? The answer is not as well. The exceptional score on the first round resulted from his being skillful but also very lucky. Even if he is just as skillful while playing the second round, you would not expect the same good luck.6 Any system that combines skill and luck will revert to the mean over time. Daniel Kahneman neatly captured this idea when he was asked to offer a formula for the twenty-first century. He actually provided two. Here’s what he submitted:7 Success = Some talent + luck Great success = Some talent + a lot of luck
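The golfer's fate can be replicated with a minimal simulation (my own sketch, not from the book) of Kahneman's formula, outcome = skill + luck, where skill persists across rounds and luck is redrawn:

```python
import random

# outcome = skill + luck; skill is fixed per player, luck is redrawn
# each round. Both are standard normal draws (an arbitrary choice).
random.seed(1)
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
round1 = [s + random.gauss(0, 1) for s in skill]
round2 = [s + random.gauss(0, 1) for s in skill]

# Take the players with the best 10% of round-1 outcomes.
cutoff = sorted(round1)[-n // 10]
top = [i for i in range(n) if round1[i] >= cutoff]
avg_round1 = sum(round1[i] for i in top) / len(top)
avg_round2 = sum(round2[i] for i in top) / len(top)
```

The round-1 leaders were both skilled and lucky; in round 2 they keep the skill but not the luck, so their average falls back toward the population mean of zero while remaining well above it. Reversion to the mean, without anyone's skill changing at all.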

In my research, I found that analysts on Wall Street ignore the effects of reversion to the mean when they build their models of a company’s future financial results. Analysts regularly neglect the evidence for reversion to the mean in considering essential drivers like company sales growth rates and levels of economic profitability.

spread between return on invested capital (ROIC) and cost of capital reverts to the mean for a sample of more than a thousand companies, broken into quintiles, over a decade (the figure tracks the median ROIC for each quintile).

over time, luck reshuffles the same companies and places them in different spots on the distribution. Naturally, companies that had enjoyed extreme good or bad luck will likely revert to the mean, but the overall system looks very similar through time.

What if you ran the analysis of reversion to the mean from the present to the past instead of from the past to the present? Are the parents of tall children more or less likely to be taller than their children? A counterintuitive implication of mean reversion is that you get the same result whether you run the data forward or backward. So the parents of tall children tend to be tall, but not as tall as their children. Companies with high returns today had high returns in the past, but not as high as the present.

In reality, their performance was simply reverting to the mean. If a pilot had an unusually great flight, the instructor would be more likely to pay him a compliment. Then, as the pilot’s next flight reverted to the mean, the instructor would see a more normal performance and conclude praise is bad for pilots.

While Tomlinson and Hjelt got it right, the media often perpetuates the halo effect. Successful individuals and companies adorn magazine covers, along with glowing stories explaining the secrets of their success. The halo effect also works in reverse, as the press points out the shortcomings in poor-performing companies. The press’s tendency to focus on extreme performance is so predictable that it has become a reliable counter-indicator.

In Moneyball, Michael Lewis, an author who frequently provides fresh views on issues, points out, “In a five-game series, the worst team in baseball will beat the best about 15 percent of the time.”25 You do not see this in chess or tennis matches, games in which the best player almost always beats the worst, regardless of time frame.
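Lewis's 15 percent figure is what binomial arithmetic produces if the worst team has roughly a 29 percent chance of winning any single game (my assumed number, not one from the book). A short sketch:

```python
from math import comb

# Probability that the weaker team wins a best-of-n series, given a
# hypothetical per-game win probability p (n must be odd).
def series_win_prob(p, games=5):
    need = games // 2 + 1  # wins required, e.g. 3 of 5
    # Sum over possible series lengths: the weaker team wins the
    # clinching game plus need-1 of the preceding n-1 games.
    return sum(comb(n - 1, need - 1) * p**need * (1 - p)**(n - need)
               for n in range(need, games + 1))
```

A longer series shrinks luck's role: the same per-game edge yields the weaker team only about 11 percent in a best-of-seven, which is one reason short series feel so random.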

Streaks, continuous success in a particular activity, require large doses of skill and luck. In fact, a streak is one of the best indicators of skill in a field. Luck alone can’t carry a streak. My analysis of various sports streaks in basketball and baseball clearly suggests streak holders are among the most skilled in their fields.

Imagine trying a new restaurant with two possible outcomes. In the first case, the restaurant is at its best. You have a wonderful meal with attentive service at a reasonable price. Would you go back? In the second case, the restaurant has an off day. You have a so-so dinner with indifferent service at the high end of what you had hoped to pay. Would you go back? Most people would go back in the first case but not in the second. Given reversion to the mean, what’s likely to happen the second time you go to the restaurant? Chances are the meal won’t be quite as good, or the service will slip a bit. But in this case you have gathered a more accurate view of the restaurant, even if it’s less flattering. On the other hand, if you never return to the restaurant because of a bad experience, you are assured you will gather no additional information, even if that information—as reversion to the mean suggests—would be more favorable.

But Gary Klein, a psychologist, suggests what he calls a premortem, a process that occurs before a decision is made. You assume you are in the future and the decision you made has failed. You then provide plausible reasons for that failure. In effect, you try to identify why your decision might lead to a poor outcome before you make the decision. Klein’s research shows that premortems help people identify a greater number of potential problems than other techniques and encourage more open exchange, because no one individual or group has invested in a decision yet.

About Kevin

Kevin is a global macro analyst with over four years' experience in financial markets. He began his career as an equity analyst before transitioning to macro research on Emerging Markets at a well-known independent research firm. He reads voraciously, spending most of his free time with The Economist and books on finance and self-improvement. When off duty, he works part-time for Getty Images, taking pictures all over the globe. To date, he has more than 1,200 pictures from over 35 countries sold through the company.

1 Response to Key Points from Think Twice by Michael Mauboussin

  1. andi2563 says:

    Try to write shorter articles or divide them into several parts in each email.

    Regards Krisna Suswandi

