When Uncertainty Can Be Helpful

 

Intelligence
Faculty Researcher: Richard Zeckhauser, Frank Plumpton Ramsey Professor of Political Economy, Harvard Kennedy School
Paper Title: Assessing Uncertainty in Intelligence
Coauthor: Jeffrey Friedman, Harvard Kennedy School

Good intelligence can often prevent bad things from happening, but only if the dots are connected in the right ways.

In a new paper, Richard Zeckhauser, Frank Plumpton Ramsey Professor of Political Economy, and Jeffrey Friedman, a doctoral candidate in public policy, analyze the ways in which “estimative intelligence”— the kind used to help policymakers deal with uncertain and complex situations— can be produced more accurately by assessing uncertainty rather than by seeking to eliminate it altogether.

Many of us are familiar with the adage about making assumptions, but the authors argue that assumptions about the likelihood and potential consequences of various scenarios and challenges facing the United States and its allies are vital to policymakers. “A critical function of the intelligence community is to help policymakers form these assumptions,” they write — and that involves managing uncertainty.

Rarely do analysts have definitive evidence to work with: The dots can usually be connected in a number of ways and will almost always include a bit of uncertainty. “This is the kind of challenge where estimative intelligence can play an important role in helping policymakers to form and revise their assumptions,” the authors write.

Drawing on hundreds of declassified National Intelligence Estimates (NIEs), they find that current tradecraft methods attempt to eliminate uncertainty in ways that can impede the accuracy, clarity, and utility of estimative intelligence.

Zeckhauser and Friedman argue that trying to eliminate uncertainty altogether fosters two significant problems — “consequence neglect” and “probability neglect”— that do not occur in the process of simply attempting to assess uncertainty.

Consequence neglect arises when collectors, analysts, and consumers of intelligence focus too much on the probability of each possible scenario and too little on the magnitude of its potential consequences. The authors cite the example of Pearl Harbor. U.S. intelligence had received an explicit warning that Japan planned to attack Hawaii in 1941. Upon further investigation, the Navy determined that the Peruvian ambassador had originally received the information from his chef.

“There is no doubt, even in hindsight, that the U.S. intelligence community should not have grounded strategic warning on the basis of a report from a Peruvian chef,” the authors write. “However, it is hardly clear that the right move was to ‘discard and forget’ the information. The report had a low degree of reliability but its potential consequences were enormous.”

Probability neglect is the reverse problem. It arises when intelligence focuses too heavily on the potential consequences of various possibilities while giving too little attention to their likelihoods.

“When an estimate simply says some event is ‘unlikely,’ it is difficult for readers to weigh the prediction properly,” Zeckhauser and Friedman write. “Conversely, policymakers often have trouble thinking about what small probabilities mean, and sometimes effectively treat them as if they were zero. When likelihoods and consequences are not identified separately and then considered together, estimative intelligence will be incomplete, unclear and subject to misinterpretation.” It is important to separate likelihood and confidence and to be explicit about each, they explain, because the analysis provides information about how predictions might change if new information emerges.

In the case of Pearl Harbor, had several independent reports containing similar information been received, Zeckhauser and Friedman argue, there would have been substantial grounds for taking the threat seriously.

“Consider instead a situation where analysts see no reason to highlight any particular prediction relative to the alternatives,” they write. The 1990 NIE “The Deepening Crisis in the USSR” lays out four “scenarios for the next year.” The most likely one is presented first, but none is given more attention than the others, thus avoiding both probability neglect and consequence neglect. This particular report allows readers to decide for themselves which possibilities to prioritize. And although it contains a wide range of information, the NIE remains clear and concise.

This delineation of multiple possibilities with attached likelihoods is called a probability distribution. The authors write, “Probability distributions avoid the difficulty of reconciling opposing viewpoints; reduce the challenge of judging which hypotheses are ‘better’ than others; and help to obviate debates about what constitutes a ‘significant’ threat.”
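The logic behind such a distribution can be sketched numerically. The following Python example (using entirely hypothetical scenario names, probabilities, and consequence scores, not figures from the paper or any NIE) shows how pairing each scenario with both a likelihood and a consequence score lets a reader weigh them together, so that an "unlikely" scenario with severe consequences is not simply discarded:

```python
# Illustrative sketch with hypothetical numbers: an estimate expressed as a
# probability distribution over scenarios, each paired with a consequence
# score, so that neither likelihood nor consequence is neglected.

scenarios = {
    # scenario: (probability, consequence score on an arbitrary 0-100 scale)
    "status quo persists":   (0.50, 10),
    "gradual deterioration": (0.30, 40),
    "sudden collapse":       (0.15, 90),
    "violent fragmentation": (0.05, 95),
}

# Probabilities over mutually exclusive, exhaustive scenarios must sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# Weighing likelihood and consequence together: a scenario judged "unlikely"
# (15 percent here) can still top the expected-consequence ranking.
expected = {name: p * c for name, (p, c) in scenarios.items()}
for name, value in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} expected consequence = {value:.2f}")
```

In this toy ranking, "sudden collapse" outweighs the far more probable "status quo persists," which is exactly the kind of trade-off that consequence neglect and probability neglect each obscure when only one side of the product is reported.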

Yet of the 379 declassified NIEs surveyed for this paper, 200 (53 percent) examine a single possibility without offering alternatives, and only 112 (30 percent) explicitly consider three or more possible judgments.

“Estimative intelligence will always be as much art as science, and the result will always be imperfect,” say Zeckhauser and Friedman. “Yet uncertainty is bound to persist on critical issues, and when it does, the ultimate goal should be to assess this uncertainty in a clear and accurate manner.”


— by Jenny Li Fowler

