New World New Mind

The following excerpt is from a book published in 1995; I found it in the archives of Jay Hanson’s website.


Robert Ornstein and Paul Ehrlich

IT ALL SEEMS to be happening at once. A small group of terrorists murder a few Americans far away–and fear of getting murdered changes the traveling habits of millions. But Americans continue to slaughter more people each day with handguns than the terrorists have killed in total up to the writing of this book. No one does anything about it.

People swamp AIDS testing centers, desperate and anxious to know if they are carrying the virus. If they have it, it will likely kill them. Can society even care for AIDS victims?

Meanwhile populations explode, stockpiles of nuclear weapons grow, budget deficits mount, our education becomes more and more obsolete, and the environment–on which our very existence depends–deteriorates. But most people’s attention is fixed upon eye-catching “images,” such as the taking of the Iran hostages, horrible murders, airplane crashes, changes in stock prices, and football scores. Cancer terrifies us, yet we keep on smoking. Oliver North testifies that he lied–yet his good looks and smooth talk lead many people to propose that he run for President.

And the President operates the same way. Ronald Reagan, by his own admission, perverted an important U.S. global policy because his mind was similarly fixed on another set of hostages. He said, “I let my preoccupation with the hostages intrude into areas where it didn’t belong. The image, the reality of Americans in chains, deprived of their freedom and families so far from home, burdened my thoughts. And this was a mistake.” [italics ours]

Why does the growing budget deficit attract relatively little attention while the comparatively meaningless stock market “crash” makes headlines? Why do many popular writers yearn for a return to an education suitable for Oxford men before World War I, when the world has changed in critical ways to a greater extent since World War II than it changed between the birth of Christ and that war? Why do the numbers of nuclear weapons expand astronomically but largely unheralded, while a small girl trapped in a well commands the front pages? Why do we collectively spend billions on medical care while neglecting the simple preventative actions that, if we took them, would save many times as many lives?

We believe it is no accident.

All these things are happening now, and are happening all at once, in part because the human mental system is failing to comprehend the modern world. So events will, in our opinion, continue to be out of control until people realize how selectively the environment impresses the human mind and how our comprehension is determined by the biological and cultural history of humanity. These unnoticed yet fundamental connections to our past, and how we can retrain ourselves for a “new world” of the future, one filled with unprecedented threats, are what this book is all about. [pp. 1-3]

Let’s look more closely at the nature of these caricatures of thought.

Suppose you have decided that your new car will be an Italian “Provolone.” You have read the automobile magazines; kept track of frequency-of-repair, recall, and resale-value statistics; and looked up the latest Consumer Reports, whose statistics show that the Provolone is a safe, reliable, good-handling, and adequately powered car. On these well-researched facts, you are on your way out the door to the Provolone dealership when your next-door neighbor drops by. He tells you in vivid detail about his new Provolone, saying, “It’s a lemon!” You immediately change your mind about the Provolone and buy another car instead. Have you made a reasonable decision? Clearly not: your decision is not based on the real evidence. You have accepted an immediate, vivid caricature of reality over reports based on data from a large sample (the frequency-of-repair and other statistics).

This caricature of mind violates any reasonable approach to decision making. Since one default is to focus on the small world around us, we ignore the fact that large amounts of information summarized in statistics are more reliable than single personal experiences. The neighbor’s story is the most recent and most available one in memory, and it is more emphatic than a published report, but this is not a good reason to give it more weight.

We often encounter problems where we must make decisions under conditions of uncertainty. We do not have full information, and there may be no single correct answer, but the information that we have is probabilistic. How good are we at making such decisions? Not very, because the same kinds of illusions that fool the visual system also fool the judgment system.

Psychologists Daniel Kahneman and Amos Tversky were among the first to study these “cognitive illusions” that demonstrate how easily we are misled. These effects can be particularly pronounced when decisions involve serious risks. One of their problems is this:

Imagine that the government is preparing for an outbreak of a rare disease that is expected to kill six hundred people. Two programs are available. If Program A is adopted, then two hundred people will be saved. If Program B is adopted, then there is a one-third chance that six hundred people will be saved, and a two-thirds chance that nobody will be saved. Which program should be adopted?

When the issue is framed this way, most people prefer Program A over Program B. They want to avoid the two-thirds risk that nobody will be saved. Now consider a similar problem involving the same disease and the same expectation that if nothing is done, six hundred people will die:

Two programs are available. If Program C is adopted, then four hundred people will die. If Program D is adopted, then there is a one-third chance that nobody will die, and a two-thirds chance that six hundred people will die.

When the issue is framed this way, most people prefer Program D over Program C, avoiding the certain consequence that four hundred people will die. This might seem reasonable to you until you realize that Programs A and C have the same outcomes. In both, of the six hundred people at risk, two hundred will live and four hundred will die. Programs B and D are also precisely the same; a one-third chance of six hundred people being saved is the same as a one-third chance of nobody dying. Illusions originate from just the wording of problems!
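
A quick computation makes the equivalence concrete. The sketch below is our illustration, not the authors’; it tabulates the expected number of survivors under each program, using only the figures given in the problem.

```python
# Framing problem: 600 lives are at stake under every program.
AT_RISK = 600

def expected_saved(p_all_saved):
    """Expected survivors for a gamble that saves all 600 people
    with probability p_all_saved and saves nobody otherwise."""
    return p_all_saved * AT_RISK

program_a = 200                    # 200 saved for certain
program_b = expected_saved(1 / 3)  # 1/3 chance all saved -> 200 expected
program_c = AT_RISK - 400          # 400 die for certain -> 200 saved
program_d = expected_saved(1 / 3)  # 1/3 chance nobody dies -> 200 expected

print(program_a, program_b, program_c, program_d)  # 200 200.0 200 200.0
```

All four programs have the same expected outcome; only the wording of the certain option (“saved” versus “die”) differs.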

Decisions are often based on our beliefs about the relative likelihood of things happening–the price of real estate next year, the availability of jobs for engineers in the year you expect to graduate, whether the romance of the moment will last through marriage, and so on. People estimate the likelihood of such things by relying on oversimplified caricatures of reality.

Every student needs to be taught how we all cut mental corners to make decisions. These “shortcuts” probably result in more efficient decision making overall, but they also lead to systematic caricatures that prevent us from being objective in certain kinds of judgments. Knowing about these common biases may help students keep those biases from distorting their judgments.

They should learn that when people are asked to judge the relative frequency of different causes of death, they overestimate the frequency of well-publicized causes such as homicide, tornadoes, and cancer, and they underestimate the frequency of less exceptional causes such as diabetes, asthma, and emphysema. And they should know that this tendency sends disproportionate funds to the dramatic, highly visible causes while the search for solutions to chronic problems is relatively neglected.

In an important experiment that should be demonstrated in the curriculum, Tversky and Kahneman read to students lists of names of well-known people of both sexes. In each list the people of one sex were more famous than those of the other sex. When the subjects were asked to estimate the proportion of men and women on the lists, they overestimated the proportion of the sex with more famous people on the list. For example, if the list contained very famous women (such as Elizabeth Taylor) and only moderately well-known men (such as Alan Ladd), then subjects overestimated the proportion of women on the list. In both these cases, people’s judgments were biased by how easily they could recall specific examples.

Students can learn how easily they caricature other people. Kahneman and Tversky told subjects to read the following passage. “This description of Tom W. was written by a psychologist when Tom was in his senior year of high school”:

Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.

The subjects were then told that Tom W. is now a graduate student and were asked to rank the following categories in order of the likelihood that each is Tom’s area of graduate specialization:

  • Business administration
  • Computer science
  • Engineering
  • Humanities and education
  • Law and library science
  • Medicine and physical and life sciences
  • Social science and social work

Most students probably will choose computer science or engineering as Tom’s most likely area of specialization, as did the subjects in the experiment, who thought that humanities and education and social science and social work were least likely. The character description probably fits the caricature of what “typical” computer science or engineering students are like. Using this kind of caricature to simplify judgments leads you to think these are likely categories for his field of study. But there are many more graduate students in humanities, education, social science, and social work than there are in computer science or engineering. Even people who know this fact, and who have very little faith in the predictive value of character sketches, disregard such proportions in making their predictions. Everybody, including experts, is strongly influenced by his or her caricatures.
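
The base-rate point lends itself to a small worked example. The numbers below are entirely hypothetical, since the excerpt gives no enrollment figures; they only show how Bayes’ rule weighs how well a sketch “fits” a field against how many students that field actually has.

```python
# Hypothetical illustration of base rates vs. representativeness.
# Made-up enrollment shares for two fields:
base_rates = {"computer science": 0.05, "social science & social work": 0.30}
# Made-up chance that a student in each field matches Tom's sketch:
fit = {"computer science": 0.50, "social science & social work": 0.10}

# Bayes' rule: P(field | sketch) is proportional to P(sketch | field) * P(field).
unnormalized = {field: fit[field] * base_rates[field] for field in base_rates}
total = sum(unnormalized.values())
for field, weight in unnormalized.items():
    print(f"{field}: {weight / total:.2f}")
# computer science: 0.45
# social science & social work: 0.55
```

Even a sketch that fits computer science five times as well does not overcome a six-fold difference in base rates.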

An imaginary coin is all that is needed to teach students that caricaturing things on the basis of their representativeness is also responsible for another common type of error. Ask them which of the following sequences would be more likely to occur if they tossed a coin six times in a row, A or B:

A: Head, Head, Head, Tail, Tail, Tail.
B: Head, Tail, Tail, Head, Tail, Head.

Most will choose B because it looks more like our caricature of a random sequence than does A. They can easily be shown, however, that A and B are equally likely. With any given proportion of heads and tails, any one sequence is just as likely as any other.
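
This is easy to verify by brute force. The sketch below is our illustration rather than the book’s; it enumerates all 2^6 = 64 equally likely six-toss sequences, and each specific ordering, including A and B above, occurs exactly once, so each has probability 1/64.

```python
from itertools import product

# All equally likely six-toss sequences: 2**6 = 64 of them.
sequences = ["".join(s) for s in product("HT", repeat=6)]
assert len(sequences) == 64

# Each specific ordering appears exactly once.
for target in ("HHHTTT", "HTTHTH"):  # sequences A and B
    print(target, sequences.count(target) / len(sequences))  # 0.015625 each
```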

This caricature is partly responsible for another common error. Suppose that you and a friend are tossing coins and you can bet on each coin toss. Assume that the coin is fair, and so over the long run will come down 50 percent heads, 50 percent tails. During one run of tosses, the coin has come down heads twenty times in a row. What’s your bet on the twenty-first toss: Is it more likely to come down heads or tails? Most people feel strongly that tails are far more likely because of the “law of averages”: Because the chances of heads and tails in the long run are equal, we expect a shift after a long run so that things will balance out. But the long-run averages have nothing to do with any one particular event. Even though heads came up twenty times in a row, the twenty-first toss is still an independent event, with a fifty-fifty chance of being heads or tails. This error is known as the gambler’s fallacy–the belief that a forthcoming event will turn out a particular way because it’s “past due.”
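
The independence claim can also be checked by simulation. The sketch below is our addition: it tosses a simulated fair coin a million times and measures how often heads follows a run of heads. Runs of twenty heads are too rare to sample reliably in a million tosses, so shorter runs stand in; the principle is the same.

```python
import random

random.seed(0)  # reproducible run
tosses = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Frequency of heads on the toss immediately following a run of `streak` heads.
for streak in (1, 5, 10):
    after_run = [tosses[i] for i in range(streak, len(tosses))
                 if all(tosses[i - streak:i])]
    print(streak, round(sum(after_run) / len(after_run), 3))  # each ~0.5
```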

Teachers can demonstrate another critical feature of the mind’s caricatures–insensitivity to sample size. Students can be presented with a problem like this one:

A town has two hospitals, a large one and a small one. About fifty babies are born every day in the large hospital, and ten babies every day in the small hospital. As everyone knows, about 50 percent of all babies born are boys, but of course the exact percentage will fluctuate from day to day, sometimes being higher than 50 percent, sometimes lower. In one particular year both hospitals kept a record of the days on which more than 60 percent of the babies born were boys. Which hospital was more likely to record such days?

About half of the college students who were actually given this problem by Tversky and Kahneman judged that the likelihood was the same for both hospitals, while the other half split evenly between choosing the larger and the smaller hospitals. The correct judgment is that the smaller hospital is much more likely to have such deviant days. This becomes explicit in the extreme case of a really small hospital that records just one birth a day. If half the babies born in a year are boys, then on half of the days of the year this hospital would record 100 percent boys! If a hospital had two babies born a day, then (assuming the odds on each birth were exactly fifty-fifty) on one quarter of the days of a year it would record 100 percent boys (on half of the days of the year, on average, one boy and one girl would be born; on the other half, two boys or two girls). [pp. 209-215]
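
The exact figures follow from the binomial distribution. The sketch below is our addition, using only the numbers stated in the problem; it computes the daily probability that strictly more than 60 percent of births are boys at each hospital.

```python
from math import comb

def p_more_than_60pct_boys(n_births, p_boy=0.5):
    """Probability that strictly more than 60% of n_births are boys,
    treating each birth as an independent fair coin (binomial model)."""
    min_boys = int(0.6 * n_births) + 1  # smallest count exceeding 60%
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(min_boys, n_births + 1))

print(p_more_than_60pct_boys(10))  # small hospital: ~0.172
print(p_more_than_60pct_boys(50))  # large hospital: ~0.059
```

On this strict reading of “more than 60 percent,” the ten-birth hospital records such a day roughly three times as often as the fifty-birth one.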

Excerpted from New World New Mind, by Robert Ornstein and Paul Ehrlich

See also: The Moral Animal


Reposted from Jay Hanson’s website DieOff.org