
Ebola, 1995/2014

Nicholas B. King looks back at the dialectics of confidence and paranoia in the Ebola outbreaks of 1995 and 2014

Infectious disease in poor African nations rarely generates the kind of sustained attention that the 2014 Ebola outbreak has. Lassa fever, a viral hemorrhagic illness estimated to infect roughly 300,000 people and kill 5,000 every year in West Africa, hardly receives any attention at all.[1] Nevertheless, in a recent Gallup poll, Americans ranked Ebola third when asked to name “the most urgent health problem facing the country at the current time,” just behind access to health care, and ahead of cancer and obesity.

The disjuncture between the actual threat posed by Ebola in North America, and the apparent fear it has generated, has itself become an object of intense scrutiny. As media coverage of cases in West Africa and North America has grown, so too has the proliferation of contrarian voices taking North Americans to task for being unnecessarily afraid of the virus. Social media is awash with listicles—including Salon’s “6 things Americans should fear more than Ebola” (Schwartz 2014), Humanosphere’s “5 diseases Americans should fear way more than Ebola” (Murphy 2014), and Cracked’s “5 Reasons America Can Calm the F#@% Down About Ebola” (Bell and Tashjian 2014)—admonishing readers for worrying about Ebola rather than comparatively more prevalent threats to health.

The discourse of disjuncture is not limited to popular media. Public health law expert Lawrence O. Gostin (2014) argues that the United States and Europe have “grossly overreacted” with “panicked responses” that ultimately divert attention from the correct response: improving basic health care infrastructure in West Africa. Similarly, in a widely distributed London Review of Books essay, anthropologist Paul Farmer laments that “the cycle of fear and stigma, amped up by the media, will continue to spiral, even though there’s little doubt that the epidemic will be contained in the US, which has the staff, stuff, space and systems” that are lacking in the countries hardest-hit by the Ebola outbreak (Farmer 2014). Under the headline “Canada’s response to Ebola driven by fear, not evidence,” a trio of Canadian physicians calls Canadian travel restrictions “illogical and anti-public health…likely to cause more harm than good” (Sharma et al. 2014).

While the substance of these critiques is likely correct—Ebola poses little threat to the healthy and wealthy citizens of North America—in this essay I am interested in their form, which illustrates what we might call a dialectic of confidence and paranoia. This dialectic plays out at the level of both lay and expert discourse, alternating between reporting that amplifies the threat of Ebola and critical commentary claiming a more accurate and level-headed risk assessment. This reflexive approach to risk, simultaneously producing knowledge about Ebola and critiquing the conditions of that knowledge’s production, circulation, and consumption, is a hallmark of modern risk communication. With respect to Ebola, its roots stretch back at least 20 years.

Ebola 1995

 

Ebola first came to the widespread attention of North American audiences in September 1994 with the publication of Richard Preston’s The Hot Zone, which was based on a 1992 New Yorker article. In riveting prose, Preston described an outbreak of Ebola hemorrhagic fever among a shipment of laboratory monkeys at a primate quarantine unit maintained by Hazelton Research Products in late 1989, which resulted in the euthanization of several hundred monkeys and four subclinical infections among humans. A multiweek national bestseller, the book garnered Preston a reported $3 million advance for his next book, numerous awards, and a mention on American Scientist’s list of “100 or so Books that Shaped a Century of Science” (Morrison and Morrison 1999). Preston’s work also inspired intense interest in the culture industries—Preston has claimed that “within two months of the publication of my piece, 20 unauthorized screenplays thudded onto the desks of producers all over Hollywood” (Fine 1995:4D)—resulting in several films and bestselling books on Ebola-like viruses.

As Preston’s book was making its way down the bestseller list, its alarmist speculations appeared to find justification in real-world events. For three weeks in May 1995, news media issued daily reports on an outbreak of Ebola in Kikwit, Zaire (now Democratic Republic of Congo). Major magazines, including Newsweek, Time, and The Economist, published cover stories on the “Killer Virus”; network news programs such as ABC’s Nightline devoted special episodes to the outbreak; and CNN aired a special report on “The Apocalypse Bug.”

The Kikwit outbreak eventually killed fewer than 300 people, and no cases were ever reported in North America. While coverage ebbed quickly, the combination of Preston’s fictional account and a real-world outbreak fixed Ebola as an emblematic disease. A Google n-gram shows mentions of Ebola increasing eightfold after 1994 and subsequently flattening out at that higher level (Figure 1). Five years after the events in Kikwit, a U.S. News and World Report poll asked which presidential candidate would better respond to nine national crises, including “a stock market crash,” “the US is attacked by another country,” and “Ebola virus spread across the country” (voters preferred Al Gore over George W. Bush by a 42% to 31% margin in the last case) (Whitman 2000).

Coverage of the Kikwit outbreak drew a backlash comparable to the listicles of 2014. The July 1995 issue of The New Republic featured a critical article by Malcolm Gladwell, trumpeted on the cover as “Paranoia Strikes Deep. Ebola, Outbreak, The Hot Zone and the new panic about plagues” (Figure 2). Declaring that Americans were “in the grip of paranoia about viruses and diseases,” Gladwell argued that “it is because of the success of The Hot Zone that Outbreak was made, that the Ebola outbreak in Zaire was covered as feverishly as it was, that the idea of killer viruses has achieved such sudden prominence. In the epidemic of virus paranoia, The Hot Zone is patient zero” (Gladwell 1995:39).

Four years later, journalism scholar Susan Moeller devoted a quarter of her book Compassion Fatigue: How the Media Sell Disease, Famine, War, and Death to a critique of Ebola coverage. Diagnosing the American public with an inability to sustain concern about specific, long-term, or low-intensity crises or social problems, a malady she called “compassion fatigue,” Moeller argued that “it’s the media that are at fault. How they typically cover crises helps us to feel overstimulated and bored all at once.” She saved her harshest criticism for “the late-20th-century phenomenon of the melding of news and entertainment, the vanishing boundaries between news-worthy events and celebrity spectacle” (Moeller 1999:34).

Two years later, when 32-year-old Colette Matshimoseka fell ill after arriving in Canada from the Democratic Republic of Congo, suspicions that she might have Ebola sparked widespread media coverage. In response, Toronto Star science correspondent Leslie Papp presented a now-familiar critique of the “outbreak of hype”:

Among mass killers it’s more Mickey Mouse than Hannibal Lecter, but the Ebola virus still sends shivers through North Americans—thanks to Hollywood. Pop culture, rather than lab cultures, is at the root of an Ebola scare that rippled across the continent this week from the unlikely epicentre of Henderson General Hospital in Hamilton…. Ebola is different, not because it’s more dangerous than other viruses. It’s the one that’s gone Hollywood…. The virus has been the subject of scores of sensational articles, books, and, above all, movies. Filtered through Hollywood’s carnival lens, it looks disturbingly apocalyptic—a mass killer—as easy to catch as the common cold and capable of rapidly spreading across the continent (Papp 2001:NE03).

These analyses were at best oversimplifications; concern over new infectious diseases during the 1990s owed a great deal to a calculated public campaign about “emerging diseases” by scientists and policymakers (King 2004). Appearing in the same year as Preston’s original article, the 1992 Institute of Medicine report Emerging Infections: Microbial Threats to Health in the United States (Lederberg et al. 1992) argued that Americans should be far less sanguine about the threat posed by novel infections, including Ebola. Nevertheless, between 1995 and 2001, critical reflections on the dialectic of confidence and paranoia presented a stark opposition: confident, measured scientific understanding of the true threat of Ebola on one side; paranoid fears stoked by mass media and the culture industries on the other.

Confidence, Paranoia, and the Risk Communication Industry

While superficially similar, the 1995 and 2014 versions of the dialectic of confidence and paranoia differ in one key way. In the 1990s, critics were most concerned with the blurring of boundaries between fact and fiction in coverage of Ebola and other emerging diseases. Nostalgic for an age in which clear firewalls separated journalism from the culture industries, critics lashed out at both the structural consolidation of entertainment and news media, and the practical intermixing of fact and fiction in newspapers, TV, film, and especially the nascent World Wide Web. According to the critics, American consumers who had come to depend on these firewalls to filter their understandings of risk were now threatened with a confusing and corrupting merger of fantasy and reality.

In 2014, the dialectic of confidence and paranoia looks different to the “risk experts” seeking to explain public irrationality. Gone is the faith in a rational individual threatened by confusing or manipulative reporting. In its place are decisionmakers hampered not by the media but by their own brains. Far from rational consumers, human beings (not just North Americans) are instead statistically illiterate, prone to irrational misjudgment of the relevance and magnitude of risks, subject to cognitive biases and framing effects, and dependent on premodern heuristics ill-suited to the complexities of twenty-first-century life. Whereas in 1995 experts were afraid that otherwise rational citizens were led astray by media-driven paranoia, in 2014 experts warned against misplaced confidence in an illusory human rationality.

What explains this shift? In the 20 years since Kikwit, cognitive psychology and behavioral economics have called into question humans’ ability to reliably interpret, predict, and respond to risk (Ariely 2008, Kahneman 2011). The dominant narrative now is not one of rational humans corrupted by inaccurate reporting, but rather of “predictably irrational” humans whose corruption is innate, hardwired into brains produced by millions of years of evolution that have yet to catch up with our complex, modern risk environment. Inherent human fallibility about risk is the root cause of everything from vaccine refusal to low organ donation rates, from low participation in 401(k)s to ignorance of the “black swans” responsible for economic crises.

Tracking the contours of human fallibility, a cottage industry of journalists and academic experts has set itself the task of explaining just how consistently wrong we are about nearly every risk, as reflected in the titles of two popularizations: Dan Gardner’s Risk: Why We Fear the Things We Shouldn’t—and Put Ourselves in Greater Danger and Barry Glassner’s The Culture of Fear: Why Americans Are Afraid of the Wrong Things: Crime, Drugs, Minorities, Teen Moms, Killer Kids, Mutant Microbes, Plane Crashes, Road Rage, & So Much More. The common thread running through this work is that imperfect humans cannot be relied upon to make good decisions, and must either be supplemented by carefully designed choice architecture that guides us or supplanted entirely by expert systems that do the deciding for us.

In 1995, the arbiters of the distinction between rational and irrational risk perception criticized the manipulation of an essentially rational human subject by nefarious outside forces. Gladwell, Moeller, and Papp criticized media for exaggeration and for blurring fact/fiction boundaries, but left intact the possibility that responsible media, disseminating objective science to rational individuals, could produce good decisions. They thus called for reform of existing communication infrastructure, to ensure that confident rational humans were not duped into paranoia.

In 2014, a new set of arbiters preaches management rather than structural reform, advising us to look to outside forces to manipulate us into better decisions. In doing so, twenty-first-century experts in risk communication, behavioral economics, and cognitive psychology carve out a novel managerial space. If individuals cannot be relied upon to be correctly confident or paranoid, then they require constant expert supervision. The ultimate source of rationality is thus located not in individual humans, but rather in the distributed architecture of risk management, endlessly channeling our atavistic human brains into productive decisions.