Daniel Kahneman
Kahneman & Tversky
Two books, one from this winter and the other from 2011, celebrate a partnership that helped reshape, to some extent, the way we think about thinking and decision-making. Some of the work of Amos Tversky and Daniel Kahneman is good for parlor tricks. Several years ago, a friend of my son’s introduced me to the Problem of Linda, and I’ve been annoying people with it ever since. The two Israeli psychologists demonstrated that we aren’t very good at thinking statistically (as most people’s answers to the Linda question illustrate), but they did much more than that. They explored the useful yet perilous role of heuristics in our thought, identifying numerous biases that tilt judgment toward faulty conclusions and then reinforce them: the availability of information, the resemblance of one thing to another, the ease of substituting a simple question for a difficult one, susceptibility to a narrative fallacy, the influence of anchoring, the power of framing, the deceptions in small samples, deference to theory, and our overriding desire for coherence. Their work was so fertile that it reached outside the field of psychology to influence theories of decision-making in economics, for which Kahneman was awarded the Nobel Memorial Prize in Economic Sciences in 2002; Amos Tversky had died in 1996. Their work has affected fields from military training to sports management, from medicine to stock market speculation.
Both books are readable, charming and suffused with the puzzled joy of discovery.
Michael Lewis is a deft popularizer. His books on Wall Street, including The Big Short and Flash Boys, provided splendid descriptions of the 2008 mortgage-securities crisis and the Street’s fine art of front-running customer orders. His Moneyball (2003), which I haven’t read, explores the use of statistics in sports management. The first 50 pages of The Undoing Project approach the problem of uncertainty through a sports lens once again, but the detour is unnecessary: Kahneman and Tversky needn’t come in through the back door. When Lewis gets to the two men, and the zig-zag careers that brought them together, Amos Tversky emerges as an almost mythical figure, an Israeli-born geek who became a paratrooper, the smartest, most confident, and happiest man in any room. What an event when Danny Kahneman, the self-doubter, invites Tversky to address a seminar at Hebrew University in Jerusalem in 1969 and then declares that he disagrees with everything Tversky said.
What developed over fifteen years was a partnership in which each man preferred the other’s company to anyone else’s. As they devised tests to determine how people actually make decisions, passers-by outside their offices heard rollicking laughter. The partners were fascinated by the willingness of rational people, including statisticians, to contradict themselves depending on how a set of options was presented. In the simplest form, telling someone there’s a 90 percent chance of success in a life-saving medical procedure induces a different response from telling him there is a 10 percent chance of failure. How is it that people who are risk-averse when offered one gamble become risk-seekers when offered an inverted version that is statistically identical? The venerable model of utility theory, which addressed how people viewed gains and losses, was wanting. By 1975 the two had presented a draft of one of their seminal papers: a discussion of decision-making under uncertainty. The work, developed as Prospect Theory, made them famous.
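For readers who want the flip spelled out, here is a minimal sketch in Python (my own illustration with invented stakes, not an example from either book) of gamble pairs that are statistically identical yet framed as gain and loss:

```python
# The reflection effect, with illustrative numbers in the spirit of
# the Kahneman-Tversky gambles (not their actual experiment).

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Gain frame: most subjects take the sure thing (risk-averse).
sure_gain = [(1.0, 500)]
gamble_gain = [(0.5, 1000), (0.5, 0)]

# Loss frame: the same amounts negated; most subjects now gamble.
sure_loss = [(1.0, -500)]
gamble_loss = [(0.5, -1000), (0.5, 0)]

print(expected_value(sure_gain), expected_value(gamble_gain))    # 500.0 500.0
print(expected_value(sure_loss), expected_value(gamble_loss))    # -500.0 -500.0
# Each pair has identical expected value, yet preferences flip
# between the gain frame and the loss frame.
```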
The partnership became strained. Tversky accepted a lifetime appointment at Stanford. Kahneman headed for a less illustrious post at the University of British Columbia. By the end of the decade, reflecting on people’s reaction to the death of a nephew in a fighter-plane accident, and on his own reactions to upheavals in his private life, Kahneman was looking at the use of counter-factual stories as a way of dealing with regret. The partnership had defined three heuristics that shaped thought: availability, anchoring and representativeness. Now Kahneman was thinking of a fourth heuristic, a “simulation heuristic,” described by Lewis as “all about the power of unrealized possibilities to contaminate people’s minds.”
As Tversky’s fame grew, so did Kahneman’s insecurity. They were collaborating at a distance when Tversky proposed a counterattack on a German critic. Reluctantly Kahneman joined in the research, and the Problem of Linda emerged as a test of subjects’ rationality. The results were dumbfounding. It didn’t matter whether the test was given to undergraduates, graduate students, or professors. “People were blind to logic,” Lewis reports, “when it was embedded in a story.” Kahneman fed the test to a dozen students, and all of them fell for it. They tightened the test to an essential alternative—shoving “their subjects’ noses right up against logic,” Lewis recounts. Which statement is more probable: “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement”? Eighty-five percent chose the latter statement, defying logic. It was a stunning discovery. They tried other versions of the same problem. Their conclusion, presented in a 1983 paper, was that even well-educated, mathematically literate people do not instinctively think statistically or logically when presented with a story.
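The logic the subjects trample is elementary: a conjunction can never be more probable than either of its parts. A few lines of Python, with probabilities I have invented purely for illustration, make the point:

```python
# The conjunction rule: P(A and B) <= P(A), whatever A and B are.
# Probabilities below are invented for the Linda problem, not data.

p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.80   # generously assume most such tellers are feminists

p_teller_and_feminist = p_teller * p_feminist_given_teller

# However representative the story feels, the conjunction cannot win.
assert p_teller_and_feminist <= p_teller
print(f"{p_teller:.2f} vs {p_teller_and_feminist:.2f}")  # 0.05 vs 0.04
```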
The partnership’s insights are broadly applicable, seemingly reaching into any theory-driven non-mathematical work as well as into day-to-day practical decision-making. Regarding the narrative fallacy, Tversky commented, “In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way.” Equity investors who “buy the story” will know what he meant. Almost in passing, Tversky gave a talk to historians that cut the ground out from under their causal narratives: “All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This ‘ability’ to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe there is a less uncertain world than there actually is. . . .”
Lewis’s book benefits from access to memos the partners wrote, the recollections of people who knew one or both of them, and Kahneman’s descriptions of their experiences working separately and together. The final chapter’s triumphal tone, as assorted acolytes such as Cass Sunstein use Kahneman and Tversky’s work as a basis for public policy decisions, is laughably counter-factual and a pretty good example of what the partners noted as a tendency to be blinded by theory. But it’s a minor blemish, as is the absence of an index, in an excellent introduction to two important thinkers.
In Thinking, Fast and Slow, published in 2011, Kahneman walks us through the partnership’s work (and other researchers’) like a tour guide wearing a straw hat and waving a black cane. “Here,” he says engagingly, “is this absurdity. Here is another. Can you believe that we don’t notice how often our conclusions defy logic?” Revisiting this book after Lewis’s adds to the pleasure. Kahneman and Tversky worked primarily from noticing their own mistakes and wondering: If we believed that, how many other people do—surely it’s not that many?—and in any case why?
What, for example, do you make of the fact that a tiny county in West Virginia has the highest bladder cancer rate in the nation? Or the fact that a year later it has the lowest? How is it that we perform less well in a state of cognitive ease, and that our focus and performance can be improved by the simple act of frowning? (For years we’ve reminded our son heading into an exam: Scowl at the damned thing!) Why do we succumb to the simplest forms of emphasis in messages, favoring a statement in bold-face type? Why do we find banal aphorisms truer if they rhyme? How can we be primed into falling for the gimmick of repeating “shop” a dozen times and then answering “Stop” to the question, “What do you do at a green light?” These aren’t, as it happens, trivial questions. People who understand how we think are adept at manipulating our opinions and decisions. Imagine the television cameras descending on the small county in West Virginia with the highest bladder cancer rate in the U.S. The reporters look around and discover a paper mill upstream. A professor of environmental science is found to declare causation. The story of corporate greed goes viral. We’re outraged at the callousness. A year later, the mill is still operating, and the county’s bladder cancer rate is the lowest in the nation. The reporters do not return to discuss the “law of small numbers.” Kahneman does, and his reader will take a deep breath before accepting casually reported statistics in the future.
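A quick simulation of my own (not an example from the book) shows why the extremes, highest and lowest alike, belong to the tiny counties:

```python
# Law of small numbers: identical underlying rates, but small samples
# produce the extreme observed rates. My own simulation, not Kahneman's.
import numpy as np

rng = np.random.default_rng(1)
TRUE_RATE = 0.001  # the same underlying rate everywhere

small_pop, large_pop = 2_000, 2_000_000
small = rng.binomial(small_pop, TRUE_RATE, size=50) / small_pop
large = rng.binomial(large_pop, TRUE_RATE, size=50) / large_pop

print(f"small counties: min {small.min():.5f}  max {small.max():.5f}")
print(f"large counties: min {large.min():.5f}  max {large.max():.5f}")
# The small counties swing from zero to several times the true rate;
# the large ones hug 0.001. No paper mill required.
```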
Or you’re thinking of buying a house. You find one you love. Comparable sales in the area suggest the house is worth $200,000. Ideally, you would like to get it for a bit less than it’s worth; offering seven percent below value is a normal buyer’s strategy. But the seller has listed the house at $275,000. You know that’s too much. What do you offer? A bid of $186,000 looks like you’re trying to steal the place when set against the $275,000 listing price. The listing price is an example of “anchoring,” and many people’s opening bid will be north of $200,000. Auditors try to avoid being anchored by previous years’ results as they review corporate records. Other professionals try to avoid being anchored by prior information. Does a patient have a recurrence of last year’s inner ear disturbance, or is there a brain tumor this time?
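The arithmetic of the anchor, sketched with the figures above (my own illustration):

```python
# Anchoring in the house example above (illustrative figures).

fair_value = 200_000  # what comparable sales say the house is worth
list_price = 275_000  # the seller's anchor
discount = 0.07       # a normal buyer aims about 7% below fair value

rational_bid = fair_value * (1 - discount)
print(f"unanchored bid: ${rational_bid:,.0f}")  # $186,000

# Measured against the anchor rather than fair value, the same bid
# looks like a 32% lowball, which is why so many opening bids drift
# north of $200,000.
print(f"discount vs anchor: {1 - rational_bid / list_price:.0%}")
```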
Much of the charm in Kahneman’s book is in his admission of error—and the persistence of belief in the face of that admission. His account of testing soldiers for leadership roles in the Israeli army is a delight; the assessments prove worthless, but his team presses on anyway. His discovery of senior Israeli Air Force officers’ neglect of regression to the mean in assessing pilot performance—and in determining a proper reaction to pilot underperformance—could be applied across many disciplines. His encounters with prominent American money managers who reject evidence that their results are random will surprise few investors.
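The regression trap is easy to reproduce. Here is a simulation of my own devising (not the Air Force’s data): the worst performers on one flight improve on the next with no intervention at all.

```python
# Regression to the mean: why punishment after a bad flight "works".
# My own simulation, not Israeli Air Force data.
import numpy as np

rng = np.random.default_rng(7)
N = 10_000

skill = rng.normal(0, 1, N)            # stable pilot ability
flight1 = skill + rng.normal(0, 1, N)  # ability plus luck
flight2 = skill + rng.normal(0, 1, N)  # same ability, fresh luck

worst = flight1 < np.percentile(flight1, 10)  # the "screamed-at" decile
print(f"worst decile, flight 1: {flight1[worst].mean():.2f}")
print(f"worst decile, flight 2: {flight2[worst].mean():.2f}")
# The bottom decile improves on the next flight untouched: their bad
# luck simply fails to repeat, whatever the instructor shouted.
```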
A reader will find that he wants to be tried by a judge who has had a good lunch, or graded by a professor who admires the first exam answer and proceeds directly to the others. Discussions of the sometimes overlapping phenomena of priming, cognitive ease, halo effects, focus, endowment effects, associative memory, representativeness, information availability and intensity, framing, and the classic gambles offered in studies of decision-making under uncertainty could lead even a confident reader to view his or her own decisions warily. Kahneman doesn’t exclude himself. After discussing Bernoulli’s errors and the Kahneman-Tversky breakthrough paper, “Prospect Theory: An Analysis of Decision Under Risk,” he notes that “theory-induced blindness” led scholars to overlook “some absurd consequences” in certain Prospect Theory assumptions. He suggests that Prospect Theory gives too little weight to the gambler’s anticipation of regret in explaining the rejection of statistically sound risks.
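For the curious, the value function at the heart of Prospect Theory takes only a few lines to sketch. The curvature and loss-aversion numbers below are the estimates from Tversky and Kahneman’s 1992 follow-up paper, used here purely for illustration:

```python
# The Prospect Theory value function: concave for gains, convex and
# steeper for losses. Parameters are the Tversky-Kahneman (1992)
# estimates (alpha = beta = 0.88, lambda = 2.25), for illustration.

ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def value(x: float) -> float:
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

print(f"{value(100):.1f}")   # 57.5: a $100 gain
print(f"{value(-100):.1f}")  # -129.5: the same $100 hurts far more as a loss
```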
Thinking, Fast and Slow pays its due to the partnership with Tversky. If Amos had survived, Kahneman says more than once, he would clearly have shared the Nobel. But Thinking, Fast and Slow also encompasses work Kahneman has done subsequently, and with different research partners. And it plays with a metaphor, that we operate with two mental systems. System One is fast and uses information at hand, which is quite often adequate but sometimes isn’t. System Two is slow and methodical, far more critical and less apt to jump to conclusions, and it comes to work only when we demand its attention. The body of the Kahneman-Tversky partnership’s work demonstrates the value of this plodding and reluctant worker.
February 5, 2017
An afterword: Some time ago I saw a comment by Kahneman addressing the quality of some of the pair's research. The sample sizes were small when he and Tversky were quizzing students. Too small for meaningful results? Well, the work was replicated many times, so . . . but then. . . . We thought, Kahneman said (I'm paraphrasing), that the replication pointed to the robustness of our results. It hadn't occurred to him and Tversky that they could be getting support from theory-blinded researchers. It's hard not to admire someone who looks for holes in the work that made him famous.
But Keep Your Elbows to Yourself
I gave too little critical weight in the preceding review to a problem. Suppose we accept the Kahneman-Tversky observations that we’re pretty poor at making statistically sound decisions. Couldn’t the faulty decision-making be improved by a narrowing of our choices?
That theme was already in play when Kahneman’s book was published. In 2008 Richard Thaler and Cass Sunstein had published Nudge, a call for what they disingenuously labeled “libertarian paternalism”: a kind of gentle regulation that presents the unwashed with purportedly better choices as defaults, attaching an option that allows us to opt out (of medical insurance, for example) if we insist.
It was an interesting book that evaded the underlying epistemic problems of decision-making. In the concluding chapter of Thinking, Fast and Slow, Kahneman notes this derivative of his behavioral-economics work with approval. “The appeal of libertarian paternalism has been recognized in many countries,” he attests. “. . . Britain’s government has created a new small unit whose mission is to apply the principles of behavioral science to help the government better accomplish its goals.” His disciple Thaler advised this group, known informally as the Nudge Unit. Meanwhile Sunstein had bagged himself a similar job with the Obama administration.
Leaving aside the question of whether behavioral science qualifies as a science or as mere data-collection, the idea that there would be anything libertarian in a government program should have evoked spasms of hilarity. Because I liked the rest of the book, I simply viewed Kahneman’s endorsement as inconsistent with his findings. In any case, the year after Kahneman’s book appeared, Sarah Conly brought out Against Autonomy: Justifying Coercive Paternalism (Cambridge), to a fawning review by Sunstein in The New York Review of Books. Now Sunstein is back, joined by Olivier Sibony and by an 87-year-old Kahneman credited as lead author, with Noise: A Flaw in Human Judgment (Little, Brown). It makes no case against human judgment, only a case that much decision-making across a wide range of human pursuits can’t be programmed to the authors’ satisfaction: nuances intrude, and varied decision-makers give different weight to some nuances while ignoring others. A shame the world is so messy, and human knowledge so imperfect. The book is stuffed with examples of noise (human disagreement in the face of identical data) without approaching the insights of Nate Silver’s The Signal and the Noise (Penguin, 2012). It feels like an attempt to cash in on Kahneman’s reputation.
Sarah Conly was also busy. In 2016 she delivered One Child: Do We Have a Right to More? (Oxford). Merely asking the question implies the answer from this Bowdoin College philosophy prof and one-time fellow at the Edmond J. Safra Center for Ethics at Harvard. Terrible environmental perils lie ahead, you know? For the concise version, see Conly at The Philosophical Salon.
None of this diminishes the value of the Kahneman-Tversky work. Parts of it will probably endure (logical-minded people still falter on the Problem of Linda), parts will probably be replaced by new observations. What’s remarkable, but shouldn’t surprise me, is the eagerness of "scientists" to write social prescriptions despite all the evidence that they’re no more immune from common error and bias than the proles they would guide. That’s point one. That any pretense they make to libertarian impulses is a sham is point two. Both Conly in Against Autonomy and Sunstein in his review face up squarely to Mill’s injunction against the state saving us from ourselves (and thus from the lessons of experience), then walk past it—because their belief in their own insight is so compelling.
All decision-making occurs in the face of uncertainty. Nobody has overcome the argument that rather than narrowing choices at the outset—at the behest of assorted self-interested policy-setters—what works is to submit decisions to the test of reality and see how they turn out: that is, to let the feedback that arrives automatically govern what stays and what goes. It’s a centerpiece of liberal thought. If I were pushing books this evening, I would suggest Sowell’s Knowledge and Decisions (Basic Books, 1980), which deals with the issue more than adequately.
November 27, 2021