Unit 4
Instructions
Based on your reading, discuss the problem caused by imperfect intelligence about a potential adversary’s capabilities and intentions in obtaining and using a WMD. Review the three views given in the textbook of when it is appropriate to take military action in self-defense. Which camp do you fall in? Why?
Your journal entry must be at least 200 words. No references or citations are necessary.
The three schools discussed in the textbook are the strict constructionist school, the unilateralist school, and the reasonable necessity school.
Unit 6
Instructions
Your textbook describes the challenges and mass hysteria that would accompany a widespread biological attack. It also discusses the different ways that people might react. Some will take themselves to the hospital because of psychosomatic symptoms, clogging hospitals with unnecessary walk-in patients. On the other hand, others will refuse to be vaccinated and will risk spreading the biological agent.
In addition, watch this short video segment on Responsible Reporting of a Bioterrorism Attack (Segment 10 of 15) to get better insight into the response to a hypothetical bioterrorism attack on a U.S. city.
Click here to view the video.
Click here to view the video transcript.
In this environment, what would you do if you were the decision-maker following a widespread biological attack? Contrast the two models of public health response discussed in the textbook (coercive and cooperative) to a bioterrorism attack, and describe how they might be employed in a bioterrorism attack response. Explain the process of decontamination and how it impacts a bioterrorism event and the importance of tracking a bioterrorism attack on the population.
Your assignment must be at least two pages in length and in APA style. You are required to use at least one outside source besides the textbook. All sources used, including the textbook, must be referenced; paraphrased and quoted material must have accompanying APA citations. Ensure that your assignment begins with an introduction.
Unit 8
Instructions
In one of this unit’s case studies from your textbook, “Predicting Peril or the Peril of Prediction? Assessing the Risk of CBRN Terrorism,” Gregory Koblentz discusses three different schools of thought regarding the likelihood of a CBRN terrorist attack: optimism, pessimism, and pragmatism. He further asserts that heuristics and biases play a role in which school of thought a layperson or an expert might fall into.
Which school of thought do you fall into? What heuristics and biases do you think have influenced your opinion? Is there anything that you found in this reading that has made you question or rethink your opinion? Would a quantitative risk assessment be helpful in determining the likelihood of a terrorist chemical, biological, radiological, nuclear, or high-yield explosive (CBRNE) attack in the United States?
In this paper, consider the political, philosophical, and religious perspectives of the various actors in the war on terror and how those perspectives affect their perceptions of risk. Also include a discussion of how future technologies may affect the potential for mass casualties and the calculations behind current nonproliferation treaties and agreements, and document your reasoning.
This reflection paper should be a minimum of two pages in length and is an opportunity for you to express your thoughts about the assignment. Reflection writing is a great way to study because it increases your ability to remember the course material.
Terrorism and Political Violence, 23:501–520, 2011
Copyright © Taylor & Francis Group, LLC
ISSN: 0954-6553 print/1556-1836 online
DOI: 10.1080/09546553.2011.575487
Predicting Peril or the Peril of Prediction? Assessing
the Risk of CBRN Terrorism
GREGORY D. KOBLENTZ
Department of Public and International Affairs, George Mason
University, Fairfax, Virginia, USA
Since the mid-1990s, academic and policy communities have debated the risk posed
by terrorist use of chemical, biological, radiological, or nuclear (CBRN) weapons.
Three major schools of thought in the debate have emerged: the optimists, the
pessimists, and the pragmatists. Although these three schools of thought draw on
the same limited universe of data on CBRN terrorism, they arrive at strikingly different conclusions. Given the highly subjective process of CBRN terrorism risk assessment, this article analyzes the influence of mental shortcuts (called heuristics) and
the systemic errors they create (called biases) on the risk assessment process. This
article identifies and provides illustrative examples of a range of heuristics and biases
that lead to the underestimation of risks, the overestimation of risks and, most importantly, those that degrade the quality of the debate about the level of risk. While these
types of biases are commonly seen as affecting the public’s perception of risk, such
biases can also be found in risk assessments by experts. The article concludes with
recommendations for improving the CBRN risk assessment process.
Keywords biological weapon, homeland security, nuclear weapon, risk
assessment, risk perception, terrorism, weapons of mass destruction
In December 2008, the bipartisan Commission on the Prevention of Weapons of
Mass Destruction Proliferation and Terrorism (also known as the Graham-Talent
Commission) predicted that ‘‘it is more likely than not that a weapon of mass
destruction will be used in a terrorist attack somewhere in the world by the end of
2013.’’1 This was only the most recent warning of the peril posed by terrorists armed
with chemical, biological, radiological, or nuclear (CBRN) weapons, also called
weapons of mass destruction (WMD). Indeed, several other analyses of the likelihood of CBRN terrorism have arrived at remarkably similar conclusions to those
of the Graham-Talent Commission. In a 2005 survey of national security experts
conducted by Senator Richard Lugar, the risk of a CBRN attack somewhere in
the world during the next five years was estimated to be 50 percent.2 A 2007 survey
found that 51% of biologists believed that there would be a bioterrorism incident somewhere in the world within the next five years.3 The Graham-Talent Commission also
Gregory D. Koblentz is an assistant professor in the Department of Public and
International Affairs at George Mason University.
Address correspondence to Gregory D. Koblentz, Department of Public and
International Affairs, George Mason University, 4400 University Drive, MS3F4, Fairfax,
VA 22030, USA. E-mail: gkoblent@gmu.edu
concluded that ‘‘terrorists are more likely to be able to obtain and use a biological
weapon than a nuclear weapon.’’4 This assessment echoes the result of a 2006 survey
conducted by the Center for Strategic and International Studies in which a majority
of the experts surveyed ranked the threat of biological terrorism higher than nuclear
terrorism.5 Thus, there appears to be a consensus among experts on the nature of the
CBRN terrorist threat.
This consensus, however, is misleading. Since the mid-1990s there has been a
lively debate within academic and policy communities about the urgency and severity of the threat of CBRN terrorism. The reports and studies mentioned above reflect
only part of the broader debate on this topic. To fully appreciate the risks posed by
CBRN terrorism it is necessary to first understand the full spectrum of opinion in the
debate and how and why its participants disagree.
This article is composed of four sections. The first section of this article outlines
the three major schools of thought on the risk of CBRN terrorism: optimists, pessimists, and pragmatists. Although these three schools of thought draw on the same
limited universe of data on CBRN terrorism, they arrive at strikingly different conclusions. Given the inherently subjective process of CBRN terrorism risk assessment,
the second section examines the role of cognitive biases in the risk assessment process.
The third section addresses some potential criticisms of this approach. The final section offers some recommendations for improving CBRN terrorism risk assessments.
The Debate on Terrorist Intentions and Capabilities to Acquire and
Use CBRN Weapons
The threat of CBRN terrorism received sporadic attention among academics and
policy-makers until the mid-1990s. In 1995, three high-profile incidents propelled
the issue of CBRN terrorism to the forefront of the national security agenda. On
March 20, 1995, the Japanese cult Aum Shinrikyo released the nerve agent sarin
in the Tokyo subway system. The attack, which killed eleven and injured thousands,
was the first overt case of CBRN terrorism. Subsequent investigations of the cult by
Japanese authorities revealed the group’s interest in nuclear weapons and failed
attempts to acquire and use biological weapons.6 Less than one month later, on
April 19, Timothy McVeigh, a domestic extremist, destroyed the Alfred P. Murrah
Federal Building in Oklahoma City with a car bomb. This attack, which killed 168,
was the worst terrorist attack in U.S. history. In May 1995, a white supremacist in
Ohio was arrested for fraudulently ordering samples of Yersinia pestis (the bacteria
that causes plague) through the mail.7
Although these three events were completely unrelated to one another, their
occurrence in a relatively short span of time contributed to their power as focusing
events that propelled the issue of CBRN terrorism to the forefront of the national
security agenda. A focusing event is a rare, sudden event that inflicts harm on a
large-scale, surprises the general public and the policy elite, and serves as a powerful
catalyst for policy-making by providing a highly symbolic event that interest groups
can mobilize around.8 The confluence of these three events spawned a lively debate
in academic and policy circles about the risk posed by terrorists armed with CBRN
weapons. The terrorist attacks on September 11 and the anthrax letter attack served
as another set of focusing events which had a major impact on this debate.
Risk assessments are probabilistic judgments about the likelihood of a particular
type of event occurring. In the study of terrorism, risk is commonly conceived of as
the function of the threat posed by a terrorist group, a chosen target’s vulnerability to
attack, and the consequences of a successful attack on the target.9 The threat component can further be characterized as the combination of a terrorist group’s intent to
attack a specific target or use a certain weapon and the likelihood that the group will
be able to develop the capability of doing so. Vulnerability can be measured as the
probability that an attack succeeds once it is launched. Consequences are the
expected magnitude and type of damage resulting from a successful terrorist attack.
Countermeasures that can mitigate the effects of an attack should also be factored
into an assessment of consequences.
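A minimal sketch of how these components are often combined in practice follows (a simple multiplicative model in the spirit of the threat-vulnerability-consequence decomposition above; the function, variable names, and example values are illustrative assumptions, not the article's own formulation):

```python
def expected_loss(intent, capability, vulnerability, consequence, mitigation=0.0):
    """Expected damage from one attack scenario.

    intent        -- probability the group attempts the attack (threat component)
    capability    -- probability the group can mount the attack (threat component)
    vulnerability -- probability the attack succeeds once launched
    consequence   -- expected damage of a successful attack (e.g., fatalities)
    mitigation    -- fraction of consequences averted by countermeasures
    """
    threat = intent * capability
    return threat * vulnerability * consequence * (1.0 - mitigation)


# Hypothetical numbers purely to show how the pieces combine.
print(expected_loss(intent=0.1, capability=0.05, vulnerability=0.5,
                    consequence=10_000, mitigation=0.3))  # 17.5
```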
A comprehensive CBRN terrorism risk assessment requires the analysis of all
three components: threat, vulnerability, and consequence. The debate over CBRN
terrorism has been driven not only by different assessments of the values assigned
to each component of risk, but also the relative weight that should be assigned to
each component. A comprehensive review of the literature on CBRN terrorism is
beyond the scope of this article.10 This section summarizes the key assumptions,
arguments, evidence, and conclusions of the three schools of thought on the risk
posed by CBRN terrorism.11
According to the optimists, CBRN terrorism is a ‘‘very low probability, very low
consequence’’ threat.12 They believe that CBRN terrorism has been rare for good
reasons: that terrorist groups lack the motivation and capability to acquire and
use these weapons. Terrorist groups simply don’t want or need CBRN to achieve
their objectives. Guns and bombs have been the preferred tools of terrorists and will
remain so. Optimists also emphasize the formidable technical hurdles to developing
or acquiring a CBRN weapon capable of causing mass casualties. Optimists also
believe that there is an inverse relationship between an organization’s degree of interest in CBRN weapons and the capability of the group to use such weapons to cause
mass casualties, i.e., fanatics and psychotics are unlikely to have the organizational,
technical, and logistical skills necessary to obtain such weapons. Optimists expect
threats and hoaxes to be the most common type of CBRN incident followed by
infrequent low-tech attacks with crude materials. The likelihood of a mass casualty
CBRN terrorism incident is viewed as very low. Optimists argue
that focusing on exotic ‘‘low-probability, high-consequence’’ threats like CBRN terrorism distracts us from more likely types of attacks such as car bombs, hijackings,
and suicide bombers.
According to pessimists, CBRN terrorism is a ‘‘low (but growing) probability,
high consequence’’ threat.13 Pessimists believe that the likelihood of CBRN terrorism
is growing due to worrisome changes in the capabilities and intentions of terrorist
groups. The technical capabilities of non-state actors to acquire or develop CBRN
have improved due to globalization, advances in science and technology, and the greater availability of CBRN material, technology, and knowledge from the former Soviet
Union. In addition, pessimists point to trends such as the increasing lethality of terrorist attacks and the rise of religiously motivated terrorist groups as evidence that the
constraints that have historically deterred secular terrorist groups from causing mass
casualties and seeking CBRN weapons are eroding. In addition, the pessimists emphasize the horrific consequences that a CBRN attack could cause and the great difficulty
in reducing our vulnerability to such attacks. Pessimists advocate for stronger
measures to prevent, detect, prepare for, and respond to CBRN terrorist threats.
According to pragmatists, CBRN terrorism is a ‘‘low probability, low consequence’’ threat.14 Like the pessimists, pragmatists worry about the emergence of
terrorist groups with a greater interest in causing mass casualties and acquiring CBRN
weapons, but they view these groups as being rare. Pragmatists tend to pay less attention to the vulnerability and consequence components of risk and more to understanding how and why terrorist groups develop both the intent and capability to
pursue CBRN weapons. Instead of relying on historical data to predict future terrorist
behavior as the optimists do or trying to discern the significance of various long-term
trends as pessimists do, pragmatists focus on identifying the variables or conditions
that influence the decision of terrorist groups to pursue CBRN weapons. Pragmatists
share the pessimists’ view that the material and technology to develop CBRN weapons
are growing increasingly available, but they are also more sensitive to the challenges in
transforming CBRN-related materials into operational weapons. As a result, pragmatists share the optimists’ skepticism about the capabilities of such groups to successfully conduct large-scale attacks with these weapons, although pragmatists believe
that small-scale attacks are feasible. Pragmatists are worried about the increased level
of interest expressed by terrorist groups in acquiring CBRN weapons, but view their
aspirations as exceeding their capability. From the pragmatist perspective, CBRN terrorism is not as unthinkable as optimists say it is or as technologically predetermined
or inevitable as pessimists fear it is. Pragmatists favor policies that are calibrated to the
limited scale of the threat, provide protection against a broad spectrum of deliberate
and natural hazards, and are geared towards addressing the conditions that enable
terrorists to pursue CBRN weapons.
Heuristics and Biases in CBRN Terrorism Risk Assessments
When evaluating the arguments and evidence of these different schools of thought it
is important to keep in mind that these are subjective judgments on a complex and
dynamic problem. Over the past thirty years a burgeoning literature on heuristics
and biases has emerged that demonstrates that individuals are not very good at
assessing risks. Heuristics are methods of thought that humans use as information
processing shortcuts. Heuristics such as imaginability, memorability, similarity,
and affect serve as cues for probability judgments. The systemic errors that these
heuristics introduce into risk assessments are called biases.
These biases influence not only the perceptions of the layperson, but also those
of experts. These biases can even infect an entire organization.15 Heuristics and
biases can influence risk assessments in several ways. Hindsight bias, the availability
heuristic, and Black Swan bias can lead to the underestimation of threats. The representativeness heuristic, salience bias, and dreaded risk bias can lead to the overestimation of threats. Perhaps even more damaging, however, is the impact of biases on
the debate over the risk of CBRN terrorism, not on the assessments themselves. Cognitive biases such as confirmation bias and disconfirmation bias, anchoring and
adjustment, and overconfidence can prevent scholars and policy-makers from adjusting their assessments over time with new information. The tendency for participants
in the debate to become wedded to their perspectives is perhaps the greatest problem
with the CBRN terrorism debate.
Biases Leading to the Underestimation of Risks
Hindsight bias, also called the I-knew-it-all-along effect, is the belief that an event
was easily predictable before it happened. Once people know the outcome of an
event, they believe that the outcome should have been predicted by those without
advance knowledge.16 As a result, the cost of preventing the risk is underestimated.
This bias underlies the urge to label every terrorist attack as an intelligence failure: if
only the analysts had ‘‘connected the dots’’ beforehand, the attack could have been
averted. The September 11 commission identified ten missed opportunities to prevent some of the hijackers from entering the United States or to intercept them once
they were in the country.17 As Roberta Wohlstetter found studying the attack on
Pearl Harbor, it was ‘‘much easier after the event to sort the relevant from the irrelevant signals. After the event, of course, a signal is always crystal clear; we can now
see what disaster it was signaling since the disaster has occurred. But before the event
it is obscure and pregnant with conflicting meanings.’’18 Trying to connect too many
dots the wrong way can produce false alarms. An analyst or agency that runs afoul
of the ‘‘cry wolf’’ problem risks becoming irrelevant.19
Due to the availability heuristic, people judge the frequency or probability of an
event by the ease with which examples come to mind.20 Therefore, ‘‘hypothetical
events that defy easy explanation and for which images are difficult to construct
should appear as improbable.’’21 As Thomas Schelling has observed, ‘‘the tendency
in our planning is to confuse the unfamiliar with the improbable. The contingency
we have not considered seriously looks strange; what looks strange is thought
improbable; what is improbable need not be considered seriously.’’22 The combination of the availability bias and hindsight bias leads to what Nassim Taleb calls
Black Swans: rare events that have a huge impact and can only be predicted retrospectively.23 Taleb argues that the unforeseen nature of Black Swans leads to the
underestimation of such events despite their huge potential impact. For example,
collateralized debt obligations and sub-prime mortgages looked like great investments
based on past performance. However, this high rate of return and low risk was
dependent on the housing bubble and the continued rise of real estate prices. Once
the bubble burst and real estate prices dropped, homeowners began defaulting at
unprecedented levels, turning these investments into toxic assets which destabilized
the entire financial system.
The terrorist attacks on September 11 and the anthrax letter attacks were Black
Swans. Before September 11, a handful of government officials had imagined the
possibility of a hijacked airplane being used as a weapon, but this scenario was
not taken seriously and integrated into homeland security or counterterrorism planning. As a result, neither the Federal Aviation Administration nor the military had
plans or procedures in place to deal with this type of scenario.24 Likewise, virtually
no one in the biodefense community paid attention to using the mail as a delivery
system for a biological weapon before the anthrax letter attacks.25 The novelty of
this method of dissemination caught the United States Postal Service and Centers
for Disease Control and Prevention off-guard. Both agencies underestimated the risk
of anthrax spores escaping from the letters at mail processing facilities and infecting
postal workers and contaminating other letters.26
In the wake of September 11, the Black Swan became the pessimists’ most potent
argument against the optimists. Although those attacks did not involve the use of
CBRN weapons, al-Qaeda’s successful surprise attack and ability to inflict an unprecedented level of casualties provided powerful ammunition for pessimists who
argued that a CBRN attack was a matter of ‘‘when, not if.’’ According to Bill Keller,
‘‘The best reason for thinking it [nuclear terrorism] won’t happen is that it hasn’t
happened yet, and that is terrible logic.’’27 The Bush Administration’s fear of being
surprised by another Black Swan led it to adopt what has been called the One
Percent Doctrine. After receiving a briefing from the CIA on possible ties between
Pakistani nuclear scientists and al-Qaeda, Vice President Richard Cheney stated,
‘‘If there’s a 1% chance that Pakistani scientists are helping al-Qaeda build or
develop a nuclear weapon, we have to treat it as a certainty in terms of our response.
It’s not about our analysis . . . It’s about our response.’’28 This logic provided the
foundation for the dramatic increase in funding for programs designed to prevent,
deter, detect, prepare for, and respond to CBRN terrorism.
Biases Leading to the Overestimation of Risks
Due to the salience bias, ‘‘colorful, dynamic or otherwise distinctive stimuli disproportionately engage attention and accordingly disproportionately affect judgments.’’29
The salience bias, in conjunction with the availability heuristic, can lead to the overestimation of risks when these risks become socially amplified by media coverage that
devotes disproportionate attention to sensational topics compared to mundane
ones.30 As a result, people overestimate the frequency of rare events (such as being
the victim of homicide) and underestimate the frequency of common events (dying
from cancer).31 When people were asked by a February 2007 Zogby-UPI poll to rate
the greatest global health threat, 34% responded bioterrorism, 18% avian flu, and 11%
HIV/AIDS.32 These results are inversely proportional to the annual mortality of these
threats.33 The media can influence perceptions of not only the probability of a risk,
but also its severity. Diseases that occur frequently in the media are viewed as being
more serious than equally severe diseases that are underreported by the media.34
The intelligence community may have played a similar role in enhancing the
salience of terrorist threats after September 11 by providing a constant stream of
raw intelligence on such threats to senior policymakers. After September 11 the
CIA and FBI created a daily ‘‘threat matrix’’ that summarized all available raw intelligence on terrorist threats to the United States to help senior Bush Administration
officials ‘‘visualize the range of possible plots we were tracking.’’35 The matrix listed
on average 400 threats a month and regularly included intelligence on terrorist interest in CBRN weapons. Former President George W. Bush reports that for months
after September 11, he would ‘‘wake up in the middle of the night worried about what
I had read’’ in the threat matrix.36 According to Roger Cressey, who served as a
counterterrorism expert on the National Security Council at the time, ‘‘There was
no filter. Most of it was garbage. None of it had been corroborated or screened.’’37
According to former CIA director George Tenet, ‘‘many, perhaps most of the threats
contained in it were bogus. We just didn’t know which ones.’’38 As a result, ‘‘You
could drive yourself crazy believing all or even half of what was in it.’’39 Other senior
officials who received the daily report called it ‘‘sensory overload’’ and reported that
it made one ‘‘paranoid’’ and turned threat assessment into ‘‘an obsession.’’40
According to Eliezer Yudkowsky, ‘‘There is a saying in heuristics and biases that
people do not evaluate events, they evaluate descriptions of events.’’41 As a result,
the description of events can be manipulated intentionally or inadvertently to skew
estimates of their probability. Due to the simulation heuristic, ‘‘the elaboration of a
single plausible scenario that leads from realistic initial conditions to a specified end
state is often used to support the judgment that the probability of the end state is
high.’’42 In effect, the way a story is told can have an impact on judgments about
the plausibility of the story. From a statistical perspective, however, more elaborate
scenarios are also less likely. Unfortunately, the conjunction fallacy is a common
bias. Tversky and Kahneman found that subjects assigned a higher probability to
a scenario that includes an outcome and an explanation than to a scenario that
contained just the basic outcome.43 As Yudkowsky notes, ‘‘According to probability
theory, adding a detail to a hypothesis must render the hypothesis less probable…Yet
human psychology seems to follow the rule that adding a detail can make the story
more plausible.’’44 Assessments of CBRN terrorism frequently begin with detailed
descriptions of a CBRN terrorist plot or depiction of the aftermath of a CBRN
attack. These vignettes are not just literary ploys to get readers interested in a book,
but also subtle attempts to enhance the plausibility of the phenomenon being
described.
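A small numeric illustration of the conjunction rule at work (the probabilities below are invented for the example and are not drawn from the article):

```python
p_attack = 0.05               # P(some CBRN attack occurs) -- assumed
p_detail_given_attack = 0.2   # P(the scripted details also hold, given an attack) -- assumed

p_detailed_scenario = p_attack * p_detail_given_attack
assert p_detailed_scenario <= p_attack   # adding detail can never raise the probability
print(p_attack, p_detailed_scenario)     # 0.05 vs roughly 0.01
```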
A highly problematic variation of this bias is what Yudkowsky calls the ‘‘logical
fallacy of generalization from fictional evidence’’ in which movies, books, and TV
shows can be ‘‘recalled (is available) as if it were an illustrative historical case.’’45
The clearest case of this fallacy in the field of CBRN terrorism is the impact of
the novel Cobra Event on U.S. biodefense policy. In early 1998, at the urging of gene
sequencing expert J. Craig Venter, President Bill Clinton read the novel Cobra Event
by Richard Preston.46 The book’s plot revolves around a scientist’s creation and
release of a genetically engineered smallpox virus, dubbed brainpox, in New York
City. The virus inflicts a horrific death on its victims, including seizures, hemorrhaging, and auto-cannibalism, a death graphically described by the author in the
opening pages of the book. Clinton was reportedly impressed by ‘‘the book’s grim
narrative and apparent authenticity.’’47 In the epilogue, Preston claims that the
science behind the brainpox virus ‘‘is real or based on what is possible’’ and that
the virus ‘‘should be taken as one example of a wide range of possibilities that actually exist for the construction of advanced bio-weapons.’’48 The long list of experts
cited in his acknowledgements, including officials from the FBI, CDC, and military,
no doubt lent weight to this assertion. Clinton urged colleagues, even political rival
House Leader Newt Gingrich, to read the book. After a meeting at the White House,
Clinton pulled aside Deputy Secretary of Defense John Hamre and asked him if he
thought the novel’s plot was plausible. After consulting with experts in the military,
Hamre reported to Clinton that the scenario was theoretically possible. In March
1998, the White House held a bioterrorism tabletop exercise which featured a smallpox virus genetically spliced with the Marburg hemorrhagic fever virus. The exercise
ended only after thousands died and it revealed huge gaps in the capability of the
United States public health and medical communities to respond to such an event.
In a follow-up meeting in April with seven distinguished experts in biology and
public health to discuss the threat of bioterrorism, Clinton asked for their assessment
of Cobra Event. Was the bioterrorism plot in the novel ‘‘a forecast of what’s in our
future?’’ Several of the scientists replied that it was plausible.49 Shortly thereafter, the
White House unveiled a new presidential directive to enhance preparedness against
biological terrorism and requested $294 million in emergency funding to establish a
pharmaceutical stockpile, increase research and development, and enhance public
health surveillance.50
According to the representativeness heuristic, ‘‘the likelihood of an event is evaluated by the degree to which it is representative of the major characteristics of the
process or population from which it originated.’’51 This heuristic may account for
the different assessments of the significance of Aum Shinrikyo within the debate
on CBRN terrorism. For American analysts who focused on Aum’s status as a
non-state actor and its illegitimate use of violence as the group’s most salient
characteristics, it was easy to categorize Aum as a terrorist group. Thus, Aum was
seen as being representative of the larger class of terrorist groups and a harbinger of other terrorist groups interested in acquiring CBRN weapons and causing
mass casualties. The view from Japan, however, was different. During a 2001
U.S.-Japan workshop on terrorism, the Japanese participants had to explain to their
American counterparts that there was no consensus in Japan on whether Aum
Shinrikyo should be considered a terrorist group and thus whether the cult’s attack
on the Tokyo subway was an act of terrorism. Instead, the Japanese public ‘‘saw
Aum as a bizarre religious cult without any clear ideological or political objectives.
For that reason, the Japanese do not think future WMD terrorist attacks are
likely.’’52 For the Japanese, Aum was representative of the broader category of
otherwise benign New Age religious groups so its actions were not perceived as
having dire implications for the threat posed by terrorist groups.
Another cause of overestimating risks is the tendency to overestimate conjunctive probabilities and underestimate disjunctive probabilities. For example, people
overestimate the probability that seven events each with a 90% probability will all occur and underestimate the probability that at least one of seven events each with a 10% probability will
occur.53 Terrorists have used this bias to their advantage to maximize the impact
of even failed attacks. After an attack in 1984 which barely missed killing Prime
Minister Margaret Thatcher, the Irish Republican Army warned, ‘‘Today we were
unlucky, but remember we only have to be lucky once. You have to be lucky
always.’’54 Michael Levi’s analysis of nuclear terrorism avoids this pitfall by highlighting the importance of having a layered defense. If nuclear terrorism requires
10 steps, each with a 90% chance of success, then the terrorists have roughly a 35% chance
of success. If the odds of failure can be doubled at each step, the overall success rate
drops to 10%. Levi observes, ‘‘This perspective turns a cliché about terrorism on its
head. It has often been said that defense against terrorism must succeed every time,
but that terrorists must succeed only once. This is true from plot to plot, but within
each plot, the logic is reversed. Terrorists must succeed at every stage, but the defense
needs to succeed only once.’’55
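The arithmetic behind the figures cited in this paragraph can be reproduced directly (a sketch under the stated step counts and per-step probabilities, not the authors' own calculations):

```python
# Conjunctive vs. disjunctive events:
print(0.9 ** 7)             # ~0.48: chance that seven 90%-likely events all occur
print(1 - (1 - 0.1) ** 7)   # ~0.52: chance that at least one of seven 10%-likely events occurs

# Levi's layered-defense logic for a plot with 10 sequential steps:
print(0.9 ** 10)            # ~0.35: success if the terrorists clear each step 90% of the time
print(0.8 ** 10)            # ~0.11: success if the odds of failure at each step are doubled
```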
Due to the affect heuristic, subjective impressions of ‘‘goodness’’ or ‘‘badness’’
can influence risk perception. A key finding of research on the affect heuristic is that
judgments of risk and benefit are negatively correlated: the greater the perceived
benefit the lower the perceived risk and vice versa. In one experiment, subjects provided with information that changed the perceived benefit or risk of a technology (in
this case nuclear power) resulted in an affectively congruent but inverse effect on the
non-manipulated variable: information that increased the perceived benefit of
nuclear power also led to a reduction in the perceived risks of this technology and
vice versa.56 The advertising industry has taken advantage of the affect heuristic
to brand and label products with words (new, improved, all natural) or images
(babies, puppies, beautiful models) that evoke positive feelings. At the same time,
some technologies, such as nuclear power and pesticides, have become stigmatized,
meaning ‘‘something that is to be shunned or avoided not just because it is dangerous but because it overturns or destroys a positive condition; what was or should be
something good is now marked as blemished or tainted.’’57
The affect heuristic plays an important role in assessments of the dual-use
potential of emerging technologies. As Yudkowsky warns, ‘‘analysts with scanty
information may rate technologies affectively, so that information about perceived
benefit seem to mitigate the force of perceived risk.’’58 Assessments of the dual-use
potential of biotechnology authored by scientific organizations tend to begin by extolling the past, present, and future benefits of biotechnology before examining the risks it poses.59 The debate over synthetic biology, for example, has been
characterized by polemicists on both sides who seek to capitalize on the affect
heuristic to sway the public’s assessment of the risks and benefits of this new field.
Proponents portray synthetic biology as a revolutionary new source of solutions
for problems ranging from malaria to global warming while opponents describe
the technology as a Pandora’s Box that could be opened by a modern-day
Dr. Frankenstein working in his garage.60
A special form of the affect heuristic is the dreaded risk bias. Dreaded risks are
risks that are viewed as being invisible and involuntary with large-scale, lethal, and
long-term effects and indiscriminate, inequitable, and uncontrollable consequences.61 Dreaded risks provoke a disproportionate psychological response: they
are viewed as being riskier and more worthy of regulation than statistically more
likely and severe risks. Jessica Stern usefully employed the concept of dreaded risk
to analyze the U.S. government’s post-2001 policy responses to the threat of bioterrorism.62 The dreaded risk bias, however, applies to the full range of CBRN threats.
The risks in the upper right quadrant of Figure 1 are those most closely associated
with the characteristics of dreaded risks. This quadrant includes nuclear power,
radioactive waste, nuclear fallout, DNA technology, and pesticides—all of which
are analogous to different forms of CBRN terrorist threats. The dreaded risk bias
helps explain why even CBRN terrorism hoaxes and false alarms produce such fear.
Brian Jenkins makes a useful distinction between nuclear terrorism, the detonation
of a nuclear weapon by terrorists, and nuclear terror, the fear of a nuclear terrorist
attack. ‘‘Nuclear terrorism is about events. Nuclear terror is about imagination,
about what might be.’’63 Due to the dreaded risk bias, ‘‘the mere proximity of the
words nuclear and terror elicits a shiver of terror.’’64
Impact of Biases on the CBRN Terrorism Risk Debate
A final category of heuristics and biases with important implications for CBRN
terrorism risk assessment are those that influence not just specific assessments, but
the overall debate about this risk. Ideally, scholars and policymakers would adjust
their risk assessments based on new information in line with Bayesian analysis.
Unfortunately, there are three biases that confound this approach.
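A minimal sketch of the kind of Bayesian updating the paragraph has in mind, assuming hypothetical prior and likelihood values:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability after observing one piece of evidence (Bayes' rule)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)


prior = 0.05                          # assumed prior belief that the threat is real
posterior = update(prior, 0.6, 0.2)   # evidence three times likelier if the threat is real
print(round(posterior, 3))            # ~0.136: the assessment should move with the evidence
```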
A pair of biases, known as confirmation bias and disconfirmation bias, makes it
more difficult for people to incorporate new information and therefore revise assessments of risk. The confirmation bias leads individuals to seek information that confirms, not falsifies, their beliefs. Likewise, disconfirmation bias occurs ‘‘when people
subject disagreeable evidence to more scrutiny than agreeable evidence.’’65 The intelligence community’s analysis of Iraq’s biological weapons program prior to the 2003
invasion of Iraq exemplifies the pernicious effects of both of these biases. Based on
new intelligence from an Iraqi defector, codenamed Curveball, CIA analysts wrote
in 2000 that Iraq had a mobile biological agent production capability. After questions
were raised about the defector’s reliability, the analysts exhibited a strong confirmation bias to interpret ambiguous or fragmentary information, or even the lack
of information, as supporting their position and ignored contradictory information.66
In particular, the analysts used Iraq’s history of successful denial and deception
measures as a crutch to explain the presence of contradictory information and the
absence of confirmatory evidence. In addition, analysts cherry-picked reports from
other defectors and portrayed them (inaccurately) as corroborating Curveball's
account.67
[Figure 1. Dreaded risks. Source: M. Granger Morgan, ‘‘Risk Analysis and Management,’’ Scientific American (July 1993), p. 41.]
Slovic observes, ‘‘A particularly pernicious aspect of heuristics is that people
typically have great confidence in judgments based on them.’’68 This overconfidence
is due, in part, to a lack of calibration. The extent to which subjective probabilities
match the actual outcomes reflects how well an individual is calibrated. In a set of
experiments by Alpert and Raiffa, multiple groups of test subjects were asked to
answer 1,000 general knowledge questions and provide confidence intervals for their
answers. The subjects were ‘‘surprised’’ (i.e., the correct answer fell outside of their
confidence interval) roughly 40% of the time.69 This experiment should provide a cautionary tale for similar exercises in CBRN terrorism risk assessment. As part of
the Department of Homeland Security’s Bioterrorism Risk Assessment (BTRA),
experts were asked to estimate the probability that terrorists would select different
biological threat agents in the form of a mean and 90% confidence intervals.70 A
National Academies of Science committee charged with reviewing BTRA’s methodology heavily criticized its reliance on subject matter expertise.71
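A toy calibration check along the lines of the Alpert and Raiffa exercise, with invented intervals and answers, showing how a surprise rate is computed:

```python
intervals = [(10, 50), (0, 5), (100, 300), (2, 4), (30, 40)]   # stated 90% confidence intervals
truths    = [42,        7,      150,        3,      90]        # actual answers

hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
surprise_rate = 1 - hits / len(truths)
print(surprise_rate)   # 0.4 here; a well-calibrated forecaster's rate would be near 0.1
```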
Overconfidence may also help explain the polarized nature of the debate on
CBRN terrorism and the relatively rare defection of analysts from one camp to
the other. The polarized nature of the debate on CBRN terrorism has been noted
for decades. In 1977, Brian Jenkins described the opposing viewpoints on nuclear
terrorism (‘‘Apocalypticians’’ and disbelievers) in theological terms since in the
absence of strong evidence, positions on this topic were adopted more often on
the basis of faith than analysis.72 This divide, between what Todd Masse calls the
conventionalists and skeptics, continues today.73 The broader debate about CBRN
terrorism has been characterized as being divided between the minimalists and
alarmists74 and the optimists and pessimists.75 The authors who believe that CBRN
terrorism is low probability also believe it is low consequence while those who believe
it is a higher probability threat also believe the consequences would be severe. In
contrast, there are fewer authors who hold a more centrist position. Furthermore,
there has been little movement of analysts from one school to another.76 This state
of affairs led two longtime researchers in the field to lament that, ‘‘The WMD
terrorism literature has reached a kind of plateau in which the same speculative,
unsubstantiated interpretations are constantly recycled…Whatever the reason, the
fact remains that scholars and policymakers have reached an interpretive impasse.’’77
The problem is not simply a lack of data, but the natural tendency of experts to make
bold predictions and then get locked into their original position.
Another contributor to overconfidence is the anchoring and adjustment heuristic. Individuals use prior information as a starting point, or anchor, in their analysis
and then adjust their estimates to an answer that seems plausible.78 Two problems
frequently creep into this process, however. First, the information used as the starting point may be inaccurate or even logically or factually irrelevant, known as contamination effects. Second, individuals tend to under adjust from their initial
estimate.79 The anchoring and adjustment heuristic may account for why so many
assessments of the risk of CBRN terrorism cited in the introduction have resulted
in such similar predictions. If a 50% likelihood of CBRN terrorism somewhere in
the world is now the ‘‘new normal,’’ subsequent analyses are likely to vary only
marginally from this estimate.
Critiques of the Role of Heuristics and Biases in Risk Assessment
This description of the role of heuristics and biases in CBRN terrorism risk assessments is open to two criticisms. The first criticism revolves around the applicability
of the heuristics and biases literature to expert judgments, like those involved in
CBRN terrorism risk assessments. The second criticism is that the influence of
heuristics and biases on subjective probability judgments at the heart of most CBRN
terrorism risk assessments can be avoided by replacing expert judgment with quantitative risk assessments.
Layperson Versus Expert Estimates of Risk
Since the theories and evidence that underlie the field of heuristics and biases were
derived primarily from artificial experiments conducted on laypeople, either
students or members of the general public, in a laboratory-type setting, one could
argue that the biases that afflict laypeople in assessing complex and dynamic risks
such as CBRN terrorism are not relevant to experts who have specialized knowledge and have studied the issue for an extended time. Indeed, Slovic found that in
contrast to laypeople, expert assessments of the risks posed by a variety of hazardous technologies and activities more closely correlated with their expected annual
mortality.80 A significant difference between the hazards studied by Slovic and
CBRN terrorism is that there is a sound empirical basis for estimating annual
mortality for the former and not the latter. Therefore, without reliable data to base
their assessments on, CBRN terrorism experts may be as susceptible to the dreaded
risk bias and other biases as anyone else. When analysis takes place in an environment characterized by a high degree of uncertainty and the available data is fragmentary and conflicting, ‘‘ambiguity abets instinct and allows intuition to drive
analysis.’’81
In addition, one could argue that the findings of this field are not applicable to
expert judgment in the real world. However, researchers in the field of heuristics and
biases have tested their theories on expert populations and found similar effects as in
the laboratory with laypeople. For example, toxicologists and financial analysts used
the affect heuristic to assess the risks of drugs and stocks, respectively.82 Physicians
and analysts who specialize in forecasting were found susceptible to the conjunction
fallacy.83 As Tversky and Kahneman observe, ‘‘substantive expertise does not displace representativeness and does not prevent conjunction errors.’’84 Experts ranging
from auto mechanics to nuclear engineers have also been shown to suffer from
systemic overconfidence in their judgments.85 The intelligence community has also
recognized the role of cognitive biases in influencing analysts and intelligence
estimates.86
Perhaps the most comprehensive assessment of the accuracy and reliability of
expert judgment has been conducted by Philip Tetlock. Based on more than
27,000 short-term and long-term forecasts by hundreds of individuals in dozens
of countries on domestic political, economic, and national security issues,
Tetlock’s research revealed a poor ability to predict the future.87 Furthermore,
experts did not perform better than non-experts. According to Tetlock, ‘‘People
who devoted years of arduous study to a topic were as hardpressed as colleagues
casually dropping in from other fields to affix realistic probabilities to possible
futures.’’88
The False Promise of Quantitative Risk Assessment
In the quest to avoid subjectivity, some experts have turned to quantitative methods
to assess the risk of CBRN terrorism. Matthew Bunn has constructed a simple mathematical formula to estimate the risk of nuclear terrorism based on eight key variables: the number of terrorist groups interested in nuclear weapons, the number of
attempted acquisitions of nuclear materials or weapons per year, the probability
of success for each of four acquisition routes, weaponization, and the delivery and
detonation of a weapon. Based on the values that Bunn assigns to each variable,
he found that there is a 3% chance of nuclear terrorism every year.89 This type of
analysis is useful for several reasons. First, it forces the analyst to lay out every step
of a nuclear terrorism plot which reinforces the notion that CBRN terrorism is a
dynamic process, not an event. This technique also avoids the problem of the author
overestimating conjunctive probabilities and underestimating disjunctive probabilities. Second, this technique forces the analyst to quantify their assessment by assigning hard numbers to the probability values. This is in some ways artificial, but it
improves the transparency of the assessment and provides a solid foundation for a
meaningful debate. Third, this method provides a mechanism for evaluating the
efficacy of different risk reduction strategies.
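A rough sketch of this style of multi-stage model follows (the stages, probabilities, and attempt rate are placeholders, not Bunn's actual variables or values):

```python
attempts_per_year = 0.3             # expected acquisition attempts per year (placeholder)
stage_probs = [0.4, 0.5, 0.5, 0.8]  # acquisition, weaponization, delivery, detonation (placeholders)

p_plot_succeeds = 1.0
for p in stage_probs:
    p_plot_succeeds *= p            # conjunctive: every stage must succeed

annual_risk = attempts_per_year * p_plot_succeeds
print(round(annual_risk, 3))        # 0.024, i.e., roughly a 2-3% annual risk in this toy setup
```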
This approach is not without its critics. John Mueller has performed his own
nuclear terrorism risk assessment using the same basic approach as Bunn.90 Mueller,
however, lists twenty steps in a nuclear terrorism plot. In addition, he accuses Bunn
of using probability estimates that are ‘‘wildly favorable’’ to the terrorists. Even
granting terrorists what he sees as a ‘‘generous’’ 50% probability of surmounting
each barrier without detection by intelligence or law enforcement agencies, Mueller
calculates the odds of a successful nuclear terrorist attack at one in a million. By
changing the odds of success at each stage to one in three, then the likelihood of success falls to one in three billion. If multiple groups make multiple attempts over time,
the probability that one of a hundred attempts would be successful would be less
than one one-hundredth of one percent.91
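The arithmetic behind Mueller's illustrative figures can be checked directly (a reproduction of the numbers cited above, not his full model):

```python
p_20_steps_at_half = 0.5 ** 20        # ~9.5e-07, roughly one in a million
p_20_steps_at_third = (1 / 3) ** 20   # ~2.9e-10, roughly one in three billion

# Chance that at least one of 100 independent attempts succeeds:
p_any_of_100 = 1 - (1 - p_20_steps_at_half) ** 100
print(p_20_steps_at_half, p_20_steps_at_third, p_any_of_100)   # last value ~9.5e-05, under 0.01%
```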
The most ambitious, and controversial, attempt to apply quantitative risk
assessment methodology to CBRN terrorism is the Department of Homeland
Security’s BTRA. The BTRA uses a 17-step computer-based model to estimate
the likelihood and consequence of the intentional release of 28 biological threat
agents. In 2008, a National Academies of Science committee issued a scathing report
that concluded that the ‘‘BTRA in its present form should not be used to assess the
risk of bioterrorism threats.’’92 One of the committee’s key critiques of the BTRA
was its reliance on subjective probabilities and the unreliability of subject matter
experts due to imperfect information and the potential for cognitive biases. The committee also felt that the event-tree model used by DHS was insufficient for assessing
the risk posed by a dynamic, thinking adversary and would underestimate the risks
posed by bioterrorism.93 As an alternative, the committee proposed substituting the
subjective probabilities generated by subject matter experts with a decision analysis
model based on the terrorists’ objectives. The assumed objective of the terrorists is to
inflict as many fatalities as possible.94 Defenders of the BTRA methodology point
out, however, that this alternative model may overestimate the bioterrorism risk
by assuming ‘‘the defender knows the utility of the attacker, and that the attacker
knows the defensive architectures of the defender with certainty—both naive and
demonstrably false assumptions.’’95 A terrorist group, for example, may be more
interested in causing mass disruption than mass casualties (such as the Rajneeshees'
use of Salmonella to poison salad bars in The Dalles, Oregon in 1984). Even
proponents of the decision analysis model admit that ‘‘it may be difficult to determine
an objective function for the attacker.’’96 Presumably, such a determination would be
made by individuals with extensive knowledge of the history, ideology, organization
and motivation of specific terrorist groups: in other words, by experts. The core
disagreement between proponents and critics of the BTRA model is less about statistics and more about the proper role of expertise, no matter how flawed, in risk
assessment.
The use of quantitative methods is not a panacea for solving the problem of
heuristics and biases in CBRN terrorism risk assessment. Different experts using
the same model can come up with radically different estimates of the threat. Despite
the use of mathematical formulas and statistical analyses, these types of quantitative
risk assessments remain reliant on the judgment of experts. As a result, they remain
susceptible to the biases discussed above.
Conclusion
The perils of making predictions about CBRN terrorism have long been recognized.
In 1975, Brian Jenkins observed, ‘‘This type of forecasting is hazardous. The resultant predictions must be viewed as highly conjectural, tentative, and quite possibly
dead wrong.’’97 Evaluating the likelihood of a perpetual risk as opposed to a discrete
event further complicates matters. As David Rapoport noted, ‘‘We are dealing with
a frightening and very remote possibility, but one which, alas, can neither be demonstrated nor disproved. Just as there is no logical way to show religious believers they
are in error in thinking the world will come to an end, so likewise no way exists to
demonstrate terrorists will never use apocalyptic weapons.’’98 In 2009, JASON, an
independent scientific advisory group that provides consulting services to the U.S.
government on defense science and technology issues, was asked to evaluate current
and proposed models for anticipating rare, catastrophic events such as CBRN
terrorism. Their conclusion was that ‘‘it is simply not possible to validate (evaluate)
predictive models of rare events that have not occurred, and unvalidated models
cannot be relied upon.’’99
Despite these pitfalls, improving our assessments of the risk of CBRN terrorism
is imperative. Since 2001, the United States has spent over $60 billion on defenses
against biological weapons.100 In addition to the direct cost of such programs is
the opportunity cost associated with them. In 2005, biologists complained that the
increase in the National Institutes of Health's budget for biodefense research was
crowding out research on diseases that pose immediate public health threats.101
The growth in the number of laboratories and scientists conducting research on
dangerous pathogens has also increased the risk posed by insiders with malevolent
intentions.102 Jessica Stern has noted that policymakers may be ‘‘more prone to
choose remedies that substitute new risks for old ones in the same population, transfer risks to new populations, or transform risks by creating new risks in new populations.’’103 There is also the question of priorities. Is the focus on CBRN terrorism
misdirecting the allocation of resources away from higher-probability but lower
consequence threats? In 2009, the homeland security budget included $9 billion to
defend against catastrophic terrorism such as CBRN terrorism, but only $1.3 billion
to counter the threat of improvised explosive devices (IEDs).104
On the other hand, pessimists argue that not only could a CBRN terrorist attack
have catastrophic consequences, but we also have the means of preventing this outcome. As
Graham Allison points out, nuclear terrorism is the ultimate preventable catastrophe. The United States and its allies can prevent nuclear terrorism by making fissile material and nuclear weapons as secure as the gold in Fort Knox.105 The former
chairmen of the Commission on the Prevention of Weapons of Mass Destruction
Proliferation and Terrorism argue that while preventing a biological terrorist attack
is too difficult, improving our public health and medical systems to prevent such an
attack from causing mass casualties is feasible.106 Given our vulnerability to attack
and susceptibility to surprise, it would be irresponsible not to invest in the full range
of countermeasures against these threats. Pessimists frequently compare investments
in homeland security to buying insurance: it is better to spend now and hope that
you don’t need it than be left unprepared in the event of a disaster. Proponents of
biodefense also argue that investing in medical and public health preparedness for
bioterrorism will pay dividends for dealing with pandemics and natural disasters
even if another bioterrorism attack does not occur.107
It is unlikely that either optimists or pessimists would object to improving the
quality of CBRN terrorism risk assessments. The prospects of doing so, however,
are daunting. To the extent that heuristics are hard-wired into the human brain, there
would appear to be little hope of avoiding the biases that they give rise to. Nonetheless, there is some evidence that educating individuals about the nature of biases helps
them to avoid them.108 In the wake of September 11 and the Iraq WMD intelligence
failure, the intelligence community has placed a greater emphasis on techniques to
impose greater rigor on analytical judgments and on mechanisms that force analysts
to question their assumptions and take into account alternative outcomes.109 Evaluations of the effectiveness of these new tools, however, are lacking.110
To paraphrase Niels Bohr, even without biases clouding the picture, prediction is
very difficult, especially if it’s about the future. If, as Tetlock found, even experts with
years of specialized training, access to large quantities of data, and sophisticated analytical tools are incapable of producing reliable forecasts in their area of specialization, then
the value of the entire risk assessment exercise is called into question. There is a silver
lining, however, to Tetlock’s research. Tetlock found that experts could be roughly divided into two categories, foxes and hedgehogs, and that foxes had better forecasting track
records in their domain of expertise than hedgehogs.111 Hedgehogs know ‘‘one big thing’’
that they believe has wide explanatory power, are less tolerant of dissent, and are highly
confident in their ability to forecast future events. Foxes, on the other hand, are ‘‘thinkers
who know many small things (tricks of the trade), are skeptical of grand schemes, see
explanation and prediction not as deductive exercises but rather exercises in flexible ‘ad
hocery’ that require stitching together diverse sources of information, and are rather
diffident about their own forecasting prowess.’’112 Tetlock further explains that,
The foxes’ self-critical, point-counterpoint style of thinking prevented
them from building up the sorts of excessive enthusiasm for their predictions that hedgehogs, especially well-informed ones, displayed for theirs.
Foxes were more sensitive to how contradictory forces can yield stable
equilibria and, as a result, ‘‘overpredicted’’ fewer departures, good or
bad, from the status quo. But foxes did not mindlessly predict the past.
They recognized the precariousness of many equilibria and hedged their
bets by rarely ruling out anything as impossible.113
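To make the idea of a forecasting ‘‘track record’’ concrete, the sketch below scores two hypothetical forecasters with the Brier score (the mean squared error of probability forecasts against what actually happened), a standard calibration metric of the kind Tetlock relied on; lower is better. All probabilities and outcomes are invented for illustration and are not drawn from Tetlock's data.

def brier_score(forecasts, outcomes):
    # Mean squared error between probability forecasts and 0/1 outcomes.
    # 0.0 is perfect; always answering 0.5 earns 0.25.
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented forecasts: the "hedgehog" issues confident, extreme probabilities;
# the "fox" hedges and rarely rules anything out.
hedgehog = [0.95, 0.90, 0.05, 0.99, 0.10]
fox = [0.70, 0.60, 0.30, 0.75, 0.40]
outcomes = [1, 0, 0, 1, 1]  # what actually happened (also invented)

print("hedgehog Brier score:", round(brier_score(hedgehog, outcomes), 3))  # ~0.33
print("fox Brier score:", round(brier_score(fox, outcomes), 3))            # ~0.19

In this contrived example the fox's diffident forecasts earn the better (lower) score, mirroring the pattern Tetlock reports.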
Tetlock, unfortunately, provides no insights into how to train yourself or others to
think more like a fox and less like a hedgehog. Instead he promotes transparency and
accountability measures to improve the quality of debate among ‘‘self-correcting
epistemic communities.’’114 Establishing such communities, however, is a long-term
process. A more pragmatic approach would be for decision-makers to diversify their
sources of information on risks by talking to four foxes, one pessimistic hedgehog,
and one optimistic hedgehog.115 Improving CBRN terrorism risk assessments and,
more importantly, the broader debate over the nature of this threat, will require
concerted efforts by both the producers and consumers of such assessments.
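As a purely illustrative sketch of that ‘‘four foxes, one pessimistic hedgehog, one optimistic hedgehog’’ suggestion, the snippet below pools invented probability estimates from such a panel with a simple unweighted average and reports the spread between the highest and lowest estimate as a crude indicator of expert disagreement; none of the numbers or labels comes from the article.

# Invented subjective probability estimates for a single hypothetical event.
panel = {
    "fox_1": 0.10,
    "fox_2": 0.15,
    "fox_3": 0.08,
    "fox_4": 0.12,
    "pessimistic_hedgehog": 0.60,
    "optimistic_hedgehog": 0.01,
}

# Simple unweighted linear pool of the six estimates.
pooled = sum(panel.values()) / len(panel)

# Range between the most and least alarmed member: a rough signal of how
# much the panel disagrees, which is itself useful information.
spread = max(panel.values()) - min(panel.values())

print(f"pooled estimate: {pooled:.2f}")    # ~0.18
print(f"panel disagreement: {spread:.2f}") # 0.59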
Notes
1. Commission on the Prevention of Weapons of Mass Destruction Proliferation and
Terrorism, World at Risk (New York: Vintage, 2008), xv.
2. Richard G. Lugar, The Lugar Survey on Proliferation Threats and Responses
(Washington, DC: Richard G. Lugar, 2005).
3. National Research Council, A Survey of Attitudes and Actions on Dual Use Research
in the Life Sciences (Washington, DC: National Academies Press, 2009), 74.
4. Commission on the Prevention of Weapons of Mass Destruction Proliferation and
Terrorism, World at Risk, xv.
5. Amy Smithson, The Biological Weapons Threat and Nonproliferation Options: A
Survey of Senior U.S. Decision Makers and Policy Shapers (Washington, DC: Center for
Strategic and International Studies, 2006), 12.
6. David E. Kaplan, ‘‘Aum Shinrikyo (1995),’’ in Jonathan B. Tucker, ed., Toxic
Terror: Assessing Terrorist Use of Chemical and Biological Weapons (Cambridge: MIT Press,
2000), 207–226.
7. Jessica E. Stern, ‘‘Larry Wayne Harris (1998),’’ in Tucker, Toxic Terror (see note 6
above), 227–246.
8. Thomas A. Birkland, After Disaster: Agenda Setting, Public Policy, and Focusing
Events (Washington, DC: Georgetown University Press, 2007), 23–26, 131–143.
9. Henry H. Willis, Andrew R. Morral, Terrence K. Kelly, and Jamison Jo Medby,
Estimating Terrorist Risk (Santa Monica, CA: RAND, 2005), 6–11.
10. For useful reviews of the literature on this topic, see Chris Dishman, ‘‘Understanding Perspectives on WMD and Why They Are Important,’’ Terrorism and Political Violence
24, no. 4 (2001): 303–313; Adam Dolnik, ‘‘13 Years since Tokyo: Re-Visiting the ‘Superterrorism’ Debate,’’ Perspectives on Terrorism 2, no. 2 (2008): 3–11; and Todd Masse, ‘‘Nuclear
Terrorism Redux: Conventionalists, Skeptics, and the Margin of Safety,’’ Orbis 52, no. 2
(2010): 302–319.
11. These schools of thought are ideal types that do not correspond strictly to the writings
or policy preferences of any single scholar or policy-maker. Indeed, the same academic or
official might make arguments consistent with different schools of thought at different times.
12. Examples of optimists include Brian Michael Jenkins, Ehud Sprinzak, Milton
Leitenberg, John Mueller, and Robin Frost.
13. Examples of pessimists include Richard Falkenrath, Ashton Carter, Richard
Danzig, Tara O’Toole, and Graham Allison.
14. Examples of pragmatists include Jessica Stern, John Parachini, Jonathan Tucker,
Jean Pascal Zanders, the Gilmore Commission, and Bruce Hoffman.
15. Lynn Eden, Whole World on Fire: Organizations, Knowledge and Nuclear Weapons
Destruction (Ithaca, NY: Cornell University Press, 2004), 37–60.
16. Baruch Fischoff, ‘‘For Those Condemned to Study the Past: Heuristics and Biases in
Hindsight,’’ in Daniel Kahneman, Paul Slovic, and Amos Tversky, eds., Judgment Under
Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press, 1982), 335–351.
17. National Commission on Terrorist Attacks Upon the United States, The 9/11 Commission Report (New York: W.W. Norton, 2004), 355–356.
18. Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Stanford, CA: Stanford
University Press, 1962), 387.
19. Richard Betts, ‘‘Analysis, War and Decision: Why Intelligence Failures are
Inevitable,’’ World Politics 31, no. 1 (1978): 61–89.
20. Amos Tversky and Daniel Kahneman, ‘‘Availability: A Heuristic For Judging
Frequency and Probability,’’ in Kahneman, Slovic and Tversky, Judgment Under Uncertainty
(see note 16 above), 163–178.
21. Steven J. Sherman, Robert B. Cialdini, Donna F. Schwartzman, and Kim D.
Reynolds, ‘‘Imagining Can Heighten or Lower the Perceived Likelihood of Contracting a
Disease: The Mediating Effect of Ease of Imagery,’’ in Thomas Gilovich, Dale Griffin
and Daniel Kahneman, eds., Heuristics and Biases: The Psychology of Intuitive Judgment
(Cambridge: Cambridge University Press, 2002), 98.
22. Thomas Schelling, foreword to Wohlstetter, Pearl Harbor (see note 18 above), vi.
23. Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable
(New York: Random House, 2007), xvii–xviii.
24. National Commission on Terrorist Attacks Upon the United States, The 9/11
Commission Report, 344–348.
25. Two exceptions are a study by SAIC in 1999 and an experiment conducted by the
Canadian military in 2001. See Guy Gugliotta and Dan Eggen, ‘‘Biological Warfare Experts
Questioned in Anthrax Probe,’’ Washington Post, June 28, 2002; and B. Kournikakis, S.J.
Armour, C.A. Boulet, M. Spence, and B. Parsons, Risk Assessment of Anthrax Threat Letters
(Suffield, Canada: Defence Research Establishment Suffield, 2001).
26. Kristen Lundberg, ‘‘The Anthrax Crisis and the U.S. Postal Service (A): Charting a Course in a Storm,’’ in Arnold M. Howitt and Herman B. Leonard, eds., Managing
Crises: Responses to Large-Scale Emergencies (Washington, DC: CQ Press, 2009),
337–356.
27. Bill Keller, ‘‘Nuclear Nightmares,’’ New York Times, May 26, 2002.
28. Ron Suskind, The One Percent Doctrine (New York: Simon and Schuster, 2006), 30.
29. Shelley E. Taylor, ‘‘The Availability Bias in Social Perception and Interaction,’’ in
Kahneman, Slovic, and Tversky, Judgment Under Uncertainty (see note 16 above), 192; and
Sherman, et al., ‘‘Imagining Can Heighten or Lower the Perceived Likelihood of Contracting
a Disease’’ (see note 21 above), 98–102.
30. Nick Pidgeon, Roger E. Kasperson, and Paul Slovic, eds., The Social Amplification
of Risk (Cambridge: Cambridge University Press, 2003).
31. Karen Frost, Erica Frank, and Edward Maibach, ‘‘Relative Risk in the News
Media: A Quantification of Misrepresentation,’’ American Journal of Public Health 87, no. 5
(1997): 842–845; and Barbara Coombs and Paul Slovic, ‘‘Newspaper Coverage of Causes of
Death,’’ Journalism Quarterly 56, no. 4 (1979): 837–843.
32. Charles Piddock, Outbreak: Science Seeking Safeguards for Global Health (New
York: National Geographic Books, 2008), 49.
33. In 2007, 2 million died from HIV/AIDS, 59 died from avian influenza, and none
died from biological terrorism. Joint United Nations Programme on HIV/AIDS, Report on
the Global AIDS Epidemic (New York: United Nations, 2008), 30; and World Health Organization, ‘‘Cumulative Number of Confirmed Human Cases of Avian Influenza A/(H5N1)
Reported to WHO,’’ 6 May 2010, accessed at http://www.who.int/csr/disease/avian_influenza/country/cases_table_2010_05_06/en/index.html
34. Meredith E. Young, Geoffrey R. Norman, and Karin R. Humphreys, ‘‘Medicine in
the Popular Press: The Influence of the Media on Perceptions of Disease,’’ PLoS One 3, no. 10
(2008): 1–7.
35. George Tenet with Bill Harlow, At the Center of the Storm: My Years at the CIA
(New York: HarperCollins, 2007), 231.
36. George W. Bush, Decision Points (New York: Crown, 2010), 153.
37. Jane Mayer, The Dark Side: The Inside Story of How the War on Terror Turned Into
a War on American Ideals (New York: Anchor, 2009), 5.
38. Tenet, At the Center of the Storm (see note 35 above), 232.
39. Ibid.
40. Mayer, The Dark Side (see note 37 above), 5.
41. Eliezer Yudkowsky, ‘‘Cognitive Biases Potentially Affecting Judgment of Global
Risks,’’ in Nick Bostrom and Milan M. Cirkovic, eds., Global Catastrophic Risks (Oxford:
Oxford University Press, 2008), 114.
42. Daniel Kahneman and Amos Tversky, ‘‘The Simulation Heuristic,’’ in Kahneman,
Slovic, and Tversky, Judgment Under Uncertainty (see note 16 above), 207.
43. Amos Tversky and Daniel Kahneman, ‘‘Extensional Versus Intuitive Reasoning:
The Conjunction Fallacy in Probability Judgment,’’ in Gilovich, Griffin, and Kahneman,
Heuristics and Biases (see note 21 above), 39–41.
44. Yudkowsky, ‘‘Cognitive Biases Potentially Affecting Judgment of Global Risks’’
(see note 41 above), 97 (emphasis in the original).
45. Ibid., 103.
46. William Jefferson Clinton, My Life (New York: Knopf, 2004), 788.
47. Judith Miller, Stephen Engelberg, and William Broad, Germs: Biological Weapons
and America’s Secret War (New York: Simon and Schuster, 2001), 225.
48. Richard Preston, The Cobra Event (New York: Ballantine, 1997), 419.
49. Miller, Engelberg, and Broad, Germs (see note 47 above), 226, 232–238.
50. Gregory D. Koblentz, ‘‘Biological Terrorism: Understanding the Threat and
America’s Response,’’ in Arnold Howitt and Robyn Pangi, eds., Countering Terrorism:
Dimensions of Preparedness (Cambridge: MIT Press, 2003), 118.
51. Daniel Kahneman and Amos Tversky, ‘‘Subjective Probability: A Judgment of
Representativeness,’’ in Kahneman, Slovic and Tversky, Judgment Under Uncertainty (see
note 16 above), 47.
52. Michael Green, Terrorism Prevention and Preparedness: New Approaches to U.S.-Japan Security Cooperation (New York: Japan Society, 2001), 20–21.
53. Amos Tversky and Daniel Kahneman, ‘‘Judgment Under Uncertainty: Heuristics
and Biases,’’ in Kahneman, Slovic, and Tversky, Judgment Under Uncertainty (see note 16
above), 15.
54. Bruce Hoffman, Inside Terrorism (New York: Columbia University Press, 2006), 254.
55. Michael Levi, On Nuclear Terrorism (Cambridge: Harvard University Press, 2007), 7.
56. Paul Slovic, Melissa Finucane, Ellen Peters, and Donald G. MacGregor, ‘‘The
Affect Heuristic,’’ in Gilovich, Griffin, and Kahneman, Heuristics and Biases (see note 21
above), 410–418.
57. Robin Gregory, James Flynn, and Paul Slovic, ‘‘Technological Stigma,’’ in James
Flynn, Paul Slovic and Howard Kunreuther, eds., Risk, Media and Stigma: Understanding
Public Challenges to Modern Science and Technology (London: Earthscan, 2001), 3.
58. Yudkowsky, ‘‘Cognitive Biases Potentially Affecting Judgment of Global Risks’’
(see note 41 above), 105.
59. See, for example, Institute of Medicine and National Research Council, Globalization, Biosecurity and the Future of the Life Sciences (Washington, DC: National Academies
Press, 2006).
60. Gaymon Bennett, Nils Gilman, Anthony Stavrianakis and Paul Rabinow, ‘‘From
Synthetic Biology to Biohacking: Are We Prepared?’’ Nature Biotechnology 27, no. 12
(2009): 1109–1111.
61. Paul Slovic, ‘‘Perception of Risk,’’ Science 236 (17 April 1987): 282–283.
62. Jessica Stern, ‘‘Dreaded Risks and the Control of Biological Weapons,’’ International
Security 27, no. 3 (2002/03): 89–123.
63. Brian Michael Jenkins, Will Terrorists Go Nuclear? (Amherst, NY: Prometheus
Books, 2008), 25–26.
64. Ibid., 30.
65. Yudkowsky, ‘‘Cognitive Biases Potentially Affecting Judgment of Global Risks’’
(see note 41 above), 99.
66. Robert Jervis, ‘‘Reports, Politics, and Intelligence Failures: The Case of Iraq,’’
Journal of Strategic Studies 29, no. 1 (2006): 20–27.
67. Gregory D. Koblentz, Living Weapons: Biological Warfare and International
Security (Ithaca, NY: Cornell University Press, 2009), 186–187.
68. Paul Slovic, Baruch Fischoff, and Sarah Lichenstein, ‘‘Facts Versus Fears: Understanding Perceived Risk,’’ in Kahneman, Slovic and Tversky, Judgment Under Uncertainty
(see note 16 above), 472.
69. Marc Alpert and Howard Raiffa, ‘‘A Progress Report on the Training of Probability
Assessors,’’ in Kahneman, Slovic and Tversky, Judgment Under Uncertainty (see note 16
above), 294–305.
70. National Research Council, Department of Homeland Security Bioterrorism Risk
Assessment: A Call for Change (Washington, DC: National Academies Press, 2008), 26, 124.
71. Ibid.
72. Brian Michael Jenkins, The Potential for Nuclear Terrorism (Santa Monica, CA:
RAND, 1977), 7–8.
73. Masse, ‘‘Nuclear Terrorism Redux’’ (see note 10 above), 302–319.
74. Dishman, ‘‘Understanding Perspectives on WMD and Why They Are Important’’
(see note 10 above), 303–313.
75. Jessica Stern, ‘‘Terrorist Motivations and Unconventional Weapons,’’ in Peter
Lavoy, Scott Sagan, and James Wirtz, eds., Planning the Unthinkable: How New Powers Will
Use Nuclear, Biological and Chemical Weapons (Ithaca, NY: Cornell University Press, 2000),
202–229.
76. Noted exceptions are Bruce Hoffman who moved from the optimist to pragmatist
camp and Jessica Stern who moved from the pessimist to pragmatist camp.
77. Jeffrey M. Bale and Gary A. Ackerman, ‘‘Profiling the WMD Terrorist Threat,’’ in
Stephen Maurer, ed., WMD Terrorism: Science and Policy Choices (Cambridge: MIT Press,
2009), 38.
78. Tversky and Kahneman, ‘‘Judgment Under Uncertainty’’ (see note 53 above),
14–16.
79. Yudkowsky, ‘‘Cognitive Biases Potentially Affecting Judgment of Global Risks’’
(see note 41 above), 101–102.
80. Slovic, ‘‘Perception of Risk’’ (see note 61 above), 283.
81. Richard K. Betts, Enemies of Intelligence: Knowledge and Power in American
National Security (New York: Columbia University Press, 2007), 31.
82. Slovic, Finucane, Peters, and MacGregor, ‘‘The Affect Heuristic’’ (see note 56
above), 412–413.
83. Amos Tversky and Daniel Kahneman, ‘‘Extensional Versus Intuitive Reasoning:
The Conjunction Fallacy in Probability Judgment,’’ in Gilovich, Griffin, and Kahneman,
Heuristics and Biases, 29–30, 39–40.
84. Ibid., 30.
85. Slovic, Fischoff and Lichenstein, ‘‘Facts Versus Fears’’ (see note 68 above), 472–477;
Derek J. Koehler, Lyle Brenner, and Dale Griffin, ‘‘The Calibration of Expert Judgment:
Heuristics and Biases Beyond the Laboratory,’’ in Gilovich, Griffin, and Kahneman, Heuristics and Biases, 686–715; and Werner F.M. De Bondt and Richard H. Thaler, ‘‘Do Analysts
Overreact?’’ in Gilovich, Griffin, and Kahneman, Heuristics and Biases, 678–685.
86. Richard J. Heuer, Jr., Psychology of Intelligence Analysis (Washington, DC:
Government Printing Office, 2003).
87. Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know?
(Princeton: Princeton University Press, 2005), 20, 49–59.
88. Tetlock, Expert Political Judgment (see note 87 above), 54.
89. Matthew Bunn, ‘‘A Mathematical Model of the Risk of Nuclear Terrorism,’’ Annals
of the American Academy of Political and Social Science 607, no. 1 (2006): 103–120.
90. John Mueller, ‘‘The Atomic Terrorist: Assessing the Likelihood,’’ prepared for presentation at the Program on International Security Policy, University of Chicago, 15 January
2008, accessed at http://polisci.osu.edu/faculty/jmueller/APSACHGO.PDF
91. Ibid., 13–14.
92. National Research Council, Department of Homeland Security Bioterrorism Risk
Assessment, 5.
93. Ibid., 3.
94. Gregory S. Parnell, Christopher M. Smith, and Frederick I. Moxley, ‘‘Intelligent
Adversary Risk Analysis: A Bioterrorism Risk Management Model,’’ Risk Analysis 30, no.
1 (2010): 33–34.
95. Barry C. Ezell and Andrew J. Collins, ‘‘Response to Parnell, Smith, and Moxley,
Intelligent Adversary Risk Analysis: A Bioterrorism Risk Management Model,’’ Risk Analysis
30, no. 1, (2010): 1.
96. Parnell, Smith, and Moxley, ‘‘Intelligent Adversary Risk Analysis’’ (see note 94
above), 44.
97. Brian Michael Jenkins, Will Terrorists Go Nuclear? (Santa Monica, CA: RAND, 1975), 3.
98. David C. Rapoport, ‘‘Terrorism and Weapons of the Apocalypse,’’ National
Security Studies Quarterly 5, no. 3 (1999): 50.
99. JASON, Rare Events (McLean, VA: MITRE Corporation, 2007), 7.
100. Crystal Franco and Tara Kirk Sell, ‘‘Federal Agency Biodefense Funding,
FY2010-FY2011,’’ Biosecurity and Bioterrorism 8, no. 2 (2010): 129–149.
101. Sydney Altman, et al., ‘‘An Open Letter to Elias Zerhouni,’’ Science (4 March 2005):
1409–1410.
102. Lynn C. Klotz and Edward J. Sylvester, Breeding Bio Insecurity: How U.S. Biodefense Is Exporting Fear, Globalizing Risk, and Making Us All Less Secure (Chicago: University
of Chicago Press, 2009).
103. Stern, ‘‘Dreaded Risks and the Control of Biological Weapons’’ (see note 62 above),
91–92.
104. Office of Management and Budget, Budget of the U.S. Government, Fiscal Year
2009: Analytical Perspectives (Washington, DC: Office of Management and Budget, 2008),
28–29; and Office of the Press Secretary, ‘‘Department of Homeland Security Announces
6.8 Percent Increase in Fiscal Year 2009 Budget Request,’’ Department of Homeland Security
Fact Sheet, 4 February 2008, accessed at http://www.dhs.gov/xnews/releases/pr_
1202151112290.shtm
105. Graham Allison, Nuclear Terrorism: The Ultimate Preventable Catastrophe
(New York: Times Books, 2004), 15.
106. Bob Graham and Jim Talent, ‘‘Bioterrorism: Redefining Prevention,’’ Biosecurity
and Bioterrorism 7, no. 2 (2009): 1–2.
107. Franco and Sell, ‘‘Federal Agency Biodefense Funding’’ (see note 100 above), 134.
108. This has been shown true of miscalibration and overconfidence, but not of hindsight
bias. Baruch Fischoff, ‘‘Debiasing,’’ in Kahneman, Slovic and Tversky, Judgment Under
Uncertainty, 429–430, 437.
109. Roger Z. George and James B. Bruce, eds., Analyzing Intelligence: Origins, Obstacles
and Innovations (Washington, DC: Georgetown University Press, 2008).
110. National Research Council, Field Evaluation in the Intelligence and Counterintelligence Context: Workshop Summary (Washington, DC: National Academies Press, 2010),
18–20.
111. The advantage held by foxes was broadly generalizable across regions, topics, and
time. Tetlock, Expert Political Judgment (see note 87 above), 75–76.
112. Ibid., 73.
113. Ibid., 21.
114. Ibid., 237–238.
115. Adrian E. Tschoegl and J. Scott Armstrong, ‘‘Review of Philip E. Tetlock, Expert
Political Judgment: How Good Is It? How Can We Know?’’ International Journal of Forecasting
23, no. 2 (2007): 339–342.
Let me tell you what's happening meanwhile. We've all been worried about this vacuum of information. What do we know? Of course, as soon as this was reported to Dr. Toomey's office, samples were taken out to state labs. There was a sample flown down to the CDC, and the CDC and the state labs are going to confirm whatever the agent is pretty soon. But before that happens, you've got a reliable source at the CDC who tells you, “They're going to tell you it's plague.” You going to run with that?

It would be strongly contingent on how reliable that source was. How well connected was that source? What first-hand knowledge did the source have of the information? It's the vetting process that you go through when evaluating the credibility of a source and how much you want to hang on it. Secondly, when the stakes get higher, the effort is to corroborate what the source tells you with yet another source, to triangulate on the information so that you feel more confident about it.

Is there any reason why you want them to hold that until you've made an official announcement?

Absolutely. First of all, I want to know who the person is you get the information from. (Laughter) If they're a really crack scientist, we'll hang onto him but isolate them somewhere. Yeah, we would want to hang onto it. If we have a definitive diagnosis that this is, say, pneumonic plague, we have got to make sure that our state and local colleagues can endure that information being released, because now you're gonna have a whole variety of people in Metroville and around the country who are going to think that they were exposed to terrorism, and that terrorists can do this anywhere. Then everyone is going to make a run on the hospitals. They're going to want to know where they can get antibiotics, if they even know that antibiotics are one of the treatments for pneumonic plague. It's going to create a lot of pandemonium. So we're going to want to plan: How do we roll this out? How do we continue to do an investigation and find out who were the individuals that were exposed, so we can treat them to save their lives?

That's where the very sharp conflict comes between what our perceived obligations are to our public. We believe that while you're getting your ducks in a row and doing precisely the thing you're supposed to do by the book, and having things rolled out just perfectly, we believe that time is ticking and people are in danger. And if we have that information, if we have a reasonable certainty that that information is correct, we feel that we have a moral obligation to share it with that broader public. And let me take you into the process for just a moment, because it's happening in every one of these newsrooms. As you're sharing that information so you can roll it out, you are, by definition, sharing it with more people. By definition. Which is why you almost always learn about it on CNN before you can roll it out.

But Frank, just to be fair…

I'm not criticizing the process, I'm just explaining the reality of it.

Right. And I'm not trying to be too defensive around the situation, but things are different. When we say a “roll out” strategy, the first thing we think of is having a press conference.

I know that. You have to be responsible. You have to do that. So do we. We want to be responsible. None of us wants to be wrong with this or go to air early. But just understanding the dynamic of this situation: as you're informing people, all of our folks are on red alert to be calling and talking to everybody they can. They have your cell phone numbers.