Chapter 3

RISK, HAZARD, AND NARRATIVE

The proposed radioactive waste dump at Ward Valley is an example of the government and its constituents on the brink of making a decision about a controversial risk. Making decisions about risks and hazards has long been the subject of extensive study from various theoretical and practical approaches. It seems to me that this research has failed to illuminate the motivations of actors involved in conflicts over taking risks. This failure stems from a tendency to reduce the factors considered to the point that the explanatory framework becomes a caricature. In this chapter I will first review the research on risk and hazard, tracing four general areas of interest: engineering-influenced quantitative risk assessment, the cultural construction of risk, the more cognitive risk perception school, and the human ecology approach to hazards. Second, I will present what I believe is a more inclusive way to understand how people and institutions evaluate hazards: by looking at the stories that comprise their narrative matrices, and at how the roles that they attempt to impose on others fit into those matrices. I use this view to suggest what may be a productive way to look at the conflict over Ward Valley, which I then implement in the next chapter.

Research on Risk and Hazard

Risks and hazards seem to be inevitable components of living in modern society (Giddens 1991), but the meanings of the two terms differ between popular and academic usage. In common parlance the idea of risk involves exposing oneself to a danger in return for some known payoff. Risks are "taken" and often "calculated," suggesting the appeal to a mental calculus using consistent rules. Hazards, on the other hand, are things to be avoided and feared (Winner 1986). They may be natural or technological, but certainly undesirable. Also in common usage, natural hazards are seen as unavoidable, while technological hazards are not.

Scholarly usage is not so value-laden and is somewhat less clear. Cutter (1993) defines risk as the "measure of likelihood of occurrence of [a] hazard," and hazard as a "much broader concept that incorporates the probability of the event happening, but also includes the impact or magnitude of the event on society and the environment, as well as the sociopolitical contexts within which these take place. Hazards are the threats to people and the things they value, whereas risks are the measures of the threats of the hazards" (Cutter 1993:2). But like their commonsense counterparts, these two definitions also lead to different approaches: people who study hazards tend to concentrate on how to mitigate and minimize them, while people looking at risks tend to emphasize quantification to determine how to balance risks with benefits.

In terms of intellectual history, there is a distinction between risk and hazard research. The two started with different foci, but have come to share much common ground in recent years. Risk studies can be seen as one tradition starting with quantitative risk analysis and giving rise to a school of more interpretive cognitive and cultural studies of risk. Throughout, risk analysis has been used to study technological risks. Hazards studies started as an approach to natural hazards and have gradually moved to include technological hazards, coming to overlap considerably with the cognitive strand of risk assessment. What follows is a brief survey of the development of these three strands of research. By no means is it exhaustive or comprehensive—I have chosen to omit the more sociologically-minded school of disaster studies because it does not address risk-taking—but it should provide an indication of general trends (for more complete descriptions of the development of these fields, see Covello 1983; Douglas 1985; Mitchell 1990; Misa & ElBaz 1991; Cutter 1993).

Quantitative Risk Assessment

Quantitative risk assessment was developed in the 1970s by academics, but with industry support, as an attempt to roll back tough environmental legislation enacted in the 1960s (Misa & ElBaz 1991). The seminal article in this area is Chauncey Starr’s ‘Social Benefit versus Technological Risk’ (1969). Starr suggested an approach for establishing a quantitative measure of benefit relative to cost for accidental deaths arising from technological developments. He based his method on two explicit assumptions. First, historical records of accidental deaths are adequate for revealing patterns of fatality arising from technological developments. Second, "historically revealed social preferences and costs are sufficiently enduring to permit their use for predictive purposes." Proceeding from these assumptions, Starr then "converted into a dollar equivalent" the "social benefit derived from each activity." With this information he plotted the benefit per person compared to the fatalities per person-hour of exposure for voluntary and involuntary technological hazards. Using this method he drew some conclusions about willingness to take risks in America.
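
Starr's calculus can be sketched in a few lines of code. The activity names and figures below are hypothetical stand-ins chosen for illustration, not Starr's published data; only the form of the comparison follows his method.

```python
# Illustrative sketch of Starr's (1969) risk-benefit comparison.
# All activity names and numbers here are hypothetical, not Starr's data.

def benefit_risk_ratio(benefit_dollars_per_person, fatalities_per_person_hour):
    """Compare the dollar-equivalent social benefit of an activity with
    its fatality rate per person-hour of exposure, as in Starr's method."""
    return benefit_dollars_per_person / fatalities_per_person_hour

activities = [
    # (name, voluntary?, $ benefit/person/year, fatalities/person-hour)
    ("private flying", True,  1_000, 1e-6),
    ("motor vehicles", True,  4_000, 1e-7),
    ("electric power", False,   500, 1e-9),
]

for name, voluntary, benefit, risk in activities:
    kind = "voluntary" if voluntary else "involuntary"
    ratio = benefit_risk_ratio(benefit, risk)
    print(f"{name:14s} ({kind}): benefit/risk = {ratio:.1e}")
```

Starr plotted such pairs on log-log axes, separating voluntary from involuntary exposures, and concluded among other things that the public tolerates far higher fatality rates from voluntary activities than from involuntary ones.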

These conclusions confirm the suspicion that quantitative risk analysis was designed to help justify certain technologies. Starr later elaborated on his proposal (Starr et al. 1976), but the essence of the approach had already been set. Quantitative risk analysis has at its core the belief that decisions about technology can be made according to a monetary calculus in which costs, determined by quantifying loss of life and productivity, are balanced with total "social benefits." Two studies about nuclear power using quantitative risk assessment gained much public attention (U. S. Nuclear Regulatory Commission 1975; Inhaber 1978; see Shrader-Frechette 1980 for a review). The Rasmussen Report (U. S. Nuclear Regulatory Commission 1975) was an attempt to tabulate all of the deaths that occurred during the course of nuclear power production, from uranium mining and processing to power plant operation. On the basis of these tabulations some scientists, mostly engineers, have argued that it is hazardous not to build nuclear power plants (e.g., Beckmann 1976) because fewer deaths have been attributed to nuclear power than to coal power.

Quantitative risk assessment presents a curious picture of how people make decisions. This approach assumes that everyone is constantly and rationally comparing their risk of death in an activity with the cash value of the benefit of that activity. This is surely incorrect (see Shrader-Frechette 1985a, 1985b; Hanke 1981; Kelman 1982 for critiques of cost-benefit analysis); the reactions of people to danger, and their choices to subject themselves to danger, seem to be much more complex than this reduction to dollars and cents. And indeed, when it became clear that people were not acting according to the sort of linear functionality described by Starr, researchers began to investigate the more qualitative and cognitive aspects of human perception of and reaction to risk. However, quantitative risk assessment endures as a policy tool. For example, a bill called the Risk Assessment Improvement Act of 1994 was introduced in the 103rd Congress asserting that "Risk assessment is a scientific procedure for evaluating and quantifying the magnitude and severity of environmental hazards which may threaten human health and ecological resources," and proposing the expansion of the risk assessment program in the Environmental Protection Agency (U. S. House of Representatives 1994). Adding risk assessment to virtually all environmental regulation is an explicit goal of the newly Republican Congress; at least two other risk assessment bills have been introduced in the 104th Congress.

Cultural Constructions of Risk

Cultural and cognitive approaches to studying risk were developed in the 1980s as a response to the simple fact that public perceptions and decisions did not coincide with conclusions drawn using quantitative risk assessment (Misa & ElBaz 1991). The central tenet of these studies was that human risk-taking is inseparable from the values held by an individual and the groups of which he or she is a member. While the ideas used to develop this thesis had been expressed earlier (Douglas 1976; Douglas 1976/1982), they were articulated strongly by Douglas and Wildavsky in Risk and Culture (1982).

Douglas and Wildavsky try to explain why fears about hazards are not necessarily linked to solid, statistical evidence and why people emphasize some risks and not others. Their explanation is based on the assertion that "the perception of risk is a social process," leading to the claim that people who live in different kinds of social organizations are inclined to accept and avoid different sets of risks. This "cultural theory of risk perception" suggests that people's complaints about hazards and risks should never be taken at face value, but instead should be interpreted in light of the form of social organization being threatened or preserved. For example, if a person is worried about radiation from a nuclear power plant, their concern must be based on an aversion to centralized government. In this way, Douglas and Wildavsky use their insights to attack environmentalists by portraying them as "sectarians" at the "border" of society, without turning equal distrust on other groups in American society. Noting statements such as "The political views of the border are predictably on the left," critic Langdon Winner observes that "[O]ne fascinating accomplishment of Risk and Culture is to redefine the political spectrum so that the center now includes the far right" (Winner 1982). Indeed, the book does suggest a way of investigating the underlying values in debates about risk, but its execution "ends up an ill-conceived polemic" (Winner 1982). Still, despite the failings of Risk and Culture, the idea that human proclivities to accept or ignore risks are grounded in broader cultural beliefs has gained some credence and remains an avenue of research (e.g., Cvetkovich & Earle 1992; Dake 1992).

Cognitive Approaches to Risk

A slightly different tactic for investigating human reactions to risk is to concentrate on personal perceptions. Developing out of roots in psychology and behavioral science, this approach used survey research to understand "risk perception". Covello (1983) provides an extensive review of the emergence of this field through the early 1980s, categorizing the conclusions of risk perception studies as concentrating on three areas: human intellectual limitations, overconfidence, and expert-lay disagreements about risk.

Human Intellectual Limitations. Risk perception research suggests that people do not cope well with risk decisions. To simplify risk problems, people use a number of inferential judgment rules, called heuristics (see Bostrom et al. 1992). Two of the most basic heuristics are information availability, "the tendency for people to judge an event more frequent if instances of it are easy to imagine or recall," and representativeness, "the tendency of people to assume that roughly similar activities and events…have the same characteristics and risks" (Covello 1983:287). These tendencies are seen to lead to systematic biases, such as underestimating the risks of high-frequency events (automobile crashes, falling) and overestimating the risks of low-frequency events (airplane crashes, fires).

Overconfidence. In addition to having systematic biases, risk perception researchers characterize the population as being overconfident about their risk assessments. Most people believe their estimates of risks to be very reliable, when often they are not. As a result, people tend to believe that they are immune to common hazards—they rate themselves among the most skillful and safe drivers, believe they can avoid common accidents, and greatly underestimate their chances of having a heart attack. In general, people are overconfident in their low estimates of the risks of activities they perceive to be familiar and under control (Covello 1983:288).

Expert and Lay Disagreements. A final area addressed by risk perception studies is that of expert and lay estimates of risk (e.g., Kraus et al. 1992; Maharik & Fischhoff 1992, 1993a, 1993b; Flynn et al. 1994). Risk estimates of experts are found to be well correlated with annual fatalities, while non-expert estimates are only poorly related. To explain these differences researchers have identified factors other than annual fatality rates that may affect public perceptions of risk. They have found that risks are perceived to be greater if the activity is "involuntary, catastrophic, not personally controllable, inequitable in the distribution of its risk and benefits, unfamiliar, and highly complex." Other factors include whether the hazard is immediate or delayed, whether exposure is continuous or intermittent, whether effects are known or uncertain, and whether effects are always fatal. In short, experts use methods in which a death is a death, while the general public tends to see some ways of dying as worse than others.

Slovic (1987) reduced these factors to two scales, one measuring the amount of dread, the other whether the effects were known or unknown. He concludes that classification of risks according to these two factors allows for efficient prediction of public responses to technological hazards, but maintains that experts should respect the opinions of the public. Others are not so generous; Wilson and Crouch (1987) see the benefit of risk assessment as indicating areas in which the public needs to be educated. They propose to do this by comparing different risks to show the irrationality of public perception.
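
Slovic's two-factor scheme can be sketched as a simple classification. The scores assigned below are illustrative guesses on an arbitrary signed scale, not results from Slovic's surveys; only the two-axis structure follows his analysis.

```python
# Sketch of Slovic's (1987) two-factor risk space: each hazard is
# placed by a "dread" score and an "unknown" score. The scores below
# are illustrative guesses, not data from Slovic's surveys.

def quadrant(dread, unknown):
    """Classify a hazard by the signs of its two factor scores."""
    d = "high-dread" if dread > 0 else "low-dread"
    u = "unknown-risk" if unknown > 0 else "known-risk"
    return f"{d}/{u}"

hazards = {
    "nuclear power":  (0.9, 0.7),    # dreaded, effects poorly understood
    "motor vehicles": (-0.3, -0.6),  # familiar, well understood
    "pesticides":     (0.4, 0.5),
}

for name, (dread, unknown) in hazards.items():
    print(f"{name:14s} -> {quadrant(dread, unknown)}")
```

On Slovic's account, hazards falling in the high-dread/unknown quadrant (such as nuclear technologies) are the ones that provoke the strongest public responses, which is what makes the two factors efficient predictors.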

Viewing the public as irrational is a typical way for experts to explain the differences between expert and lay risk assessments. Freudenburg and Pastor (1992) identify three explanations for expert/lay disagreements in the risk perception literature: judging the public to be ignorant/irrational, selfish, or prudent. The first response is perhaps the most common reaction by proponents of technology (and in legal writing, e.g., Contreras 1992). For example, Drottz-Sjöberg and Persson (1993) recommend that "persons who appear unduly fearful of radiation [be] handled with care and [be] recommended professional medical treatment." They also suggest educating people about radiation, and "respect" for people's fears in order to increase trust in authorities. Others (e.g., Keeney & von Winterfeldt 1986) set up the straw man that the public wants a "zero risk" society and claim that because zero risk is impossible the public is irrational (Freudenburg & Pastor 1992; Winner 1986).

Related to the model of the public as ignorant or irrational is the model of social amplification of risk (Kasperson et al. 1988; Renn et al. 1992; Burns et al. 1994). While the explicit thesis is that "events pertaining to hazards interact with psychological, social, institutional, and cultural processes in ways that can heighten or attenuate individual and social perceptions of risk," the implicit assertion is that social actors selectively report and emphasize certain events and thereby cause the perception that a certain risk is larger or smaller than it really is. This leads to the conclusion that the public is not aware of what the "real" risks are, and that somehow everything would be better if the media (and others) would not make things sound worse than they are. Critics of this view (Rip 1988) wonder if the social amplification of risk should be counteracted.

There is an opposing view that the opinions of the general public should be considered prudent. This view sees citizen evaluations of risk as augmenting those of experts, providing attention to a big picture which may be ignored by scientific specialists "who have been hired to look after the technical details" (Freudenburg & Pastor 1992:44). Here, it is asserted that the differences between experts and lay persons arise because each group asks questions which are of little concern to the other. Adherents to this view also find considerable fault with that portion of the risk perception literature that leads to the conclusion that the public is irrational (see Shrader-Frechette 1990). For instance, it becomes clear that experts as well as the public have biases, use faulty heuristics, and show great variation between fields of expertise (Perrow 1984; Freudenburg 1988; Barke & Jenkins-Smith 1993). Scientists too are guilty of "overconfidence, insensitivity to erroneous assumptions, erroneous beliefs about the precision of estimates, failure to understand ways in which apparently solid results have been influenced by methodological choices, and the tendency for estimates of extremely low probabilities to be effectively nonfalsifiable" (Freudenburg & Pastor 1992).

There is another way in which public views of risks are understood as rational and justified. In the related area of "risk communication," some scholars have concentrated on trust as a key element in explaining public reactions to risk (e.g., Bullard 1992; Flynn et al. 1992, 1993; Freudenburg 1993; Frey 1993; Kasperson et al. 1992; Slovic 1993; Slovic, Flynn & Layman 1991). These authors point to the lack of trust between professional risk assessors and the public and suggest a few different reasons for it. Kasperson et al. (1992) point to a "broad loss of trust in leaders and in major social institutions in the U. S. since the 1960s," and recommend that risk communication attempt to establish trust by listening to the public and addressing the reasons for mistrust. Slovic (1993) presents the idea that there are "technological and social changes that systematically destroy trust"—for instance, the media only report bad news—and calls for "ways to work constructively in situations where we cannot assume that trust is attainable." Freudenburg (1993) offers a more sociological explanation for lack of trust in situations concerning technological risk. He singles out the increased division of labor in modern society and the concomitant levels of interdependence, which create the opportunity for recreancy—"the failure of institutional actors to carry out their responsibilities with the degree of vigor necessary to merit the societal trust they enjoy" (Freudenburg 1993:910).

Current research on risk perception and the cultural construction of risk, as described above, has much in common with research in the hazards tradition. I now present a brief review of the human ecology tradition of hazards research before summarizing what I see as the weaknesses of research on both risks and hazards.

Human Ecology and Hazards Research

Human ecology, based on the assertion that the natural world, rather than being an independent variable, is in constant and reciprocal interaction with humans, had a long history among Chicago intellectuals (e.g., Barrows 1923). In the 1950s, under the direction of Gilbert White, a school of natural hazards research was developed at the University of Chicago, concentrating first on floods (White 1952). Mitchell (1990:138) asserts that the attention to natural hazards in geography was stimulated by attempts to explain the continuing failure of U. S. flood control policies to successfully curb losses despite extensive physical science research. These researchers mapped human uses of hazard-prone areas to examine adjustments made to the hazards and to see if similar patterns of risk, response, and adjustment arose (Cutter 1993). As the field progressed, technological hazards were added to the analysis (Cutter 1993; e.g., Burton et al. 1978).

As with many other disciplines, there were radical critiques of the natural hazards paradigm during the 1970s. Many of these critiques argued that the perception of and adjustments to natural hazards were the product of cultural, economic, political and social forces, asking where the ‘human’ was in human ecology (e.g., Torrey 1979). This critique took some of the ‘natural’ out of natural hazards. Rather than being seen as physical processes, natural disasters were described as avoidable outcomes of certain forms of social organization, a view that became known as the political ecology view of hazards (e.g., Watts 1983; Hewitt 1983; Morren 1983).

Another result of the radical critiques of the human-ecological approach was work that tried to put hazards "in context." These studies add the cognitive and behavioral aspects of risk analysis to the radical political ecology critiques to focus on the social and political contexts of hazards (e.g., Mitchell et al. 1989; Palm 1990). As Cutter explains it, "the hazards-in-context approach seeks to reinterpret the nature-society interaction as a dialogue between the physical setting, political-economic context and the role of and influence of individuals or agents in affecting change" (Cutter 1993:179).

The result of these new directions was a further blurring of the distinction between natural and technological hazards: the human-ecological paradigm already considered natural hazards a result of the human-nature interaction, and the concentration on social and political context allows the hazards-in-context approach to be used to address both technological and natural hazards. Morren (1983:284) states that "In my view, the traditional distinction between ‘natural’ and ‘man-made’ disasters largely disappears." Palm (1990) contends that although technological hazards are "produced largely by human activity rather than by geophysical processes," technological and natural hazards are similar in many ways—people overestimate rare technological and natural hazards, both create widespread externalities, and both are compensated for by ex post and ex ante methods. She also indicates two areas of difference—the de minimis method of regulation for technological hazards, and different notions of equity and compensation (Palm 1990:15). Because of the similarities, but recognizing the differences, Palm borrowed from research on technological risk and hazards for her study of natural hazards.

The elision of natural and technological in the human ecology model of hazards research seems to be complete with Cutter (1993). Cutter writes about technological risk drawing on both the human ecology tradition of hazards research and cognitive studies of risk perception. Even though the fields of risk assessment and hazards research seem to be converging, there are a number of issues which are either ignored or addressed only peripherally.

Lacunae in Risk and Hazards Research

Despite attempts to understand public reactions to risk as valid and justifiable, most of the cognitive and cultural work on technological risk tends to muster support for the promoters of new technology. Conclusions from this field, as exemplified by its flagship journal, Risk Analysis, look for ways to increase trust of authorities and to decrease public opposition to new technological risks. The overwhelming assumption also seems to be that decisions about risk are made in a logical and consistent way, so that every aspect of the social relations surrounding risk decisions could be explained by a complete model. If the risk professionals can just find that model, all decisions about risk will be easily analyzed and efficiently made according to it. The purpose of risk assessment and risk communication studies seems to be to learn how to convince the public that expert assessments are right, rather than to evaluate the danger of a technology.

The risk assessment research discussed here leaves some questions unanswered. Why do expert and lay evaluations continue to diverge? If risk decisions are not made according to a cost-benefit analysis, how are they made? The hazards school also leaves some untidy questions. If natural and technological hazards are really similar, why do people continue to make an impassioned distinction between living in a flood plain and living near an incinerator? Similarly, the cultural construction approach to risk fails to account for diversity within recognizable cultural groups. And virtually none of the work addresses the question of why people make what seem to be contradictory decisions about risk.

Research on risk and hazards ignores the concept of place. While the more Newtonian concept of space has been applied to note the effect of distance on environmental concerns (e.g., Kunreuther et al. 1991; van der Pligt; Payne et al. 1988), the power of people’s attachments to place is overlooked, at least in any explicit form. Some studies do take proxy measures which could be construed as representing a sense of place—measuring real estate values near toxic waste dumps, predicting the effect on tourism of a nuclear waste repository in Nevada (Slovic et al. 1991)—but none of them integrate work in this area in any serious way. Rather, any opposition to danger that comes from a local area tends to be pejoratively dismissed as the NIMBY syndrome (e.g., Greenberger 1991).

Toward a Narrative Understanding of Risk and Hazard

There is some work that holds promise for developing a more sophisticated understanding of the relations between individuals, society, and both technological and natural risks and hazards. From political science, Langdon Winner (1986) offers an interesting starting point by showing the political character of technology (see also Street 1992 and other philosophers of technology for an elaboration and history of this view). Artifacts, he argues, have politics, and their uses have political consequences. The decision to use a new technology therefore has more than just economic results, contrary to what the proponents of risk assessment would claim. Winner's second major point refers to risk more directly. Discourse about dangers need not be made in the language of "risk," he claims—to do so is akin to "hitting the tar-baby" of the famous children's tale. Rather than refer to something as a risk, he suggests discussing it as a danger, peril or threat, because "a number of important social and political issues are badly misdefined by identifying them as matters of ‘risk’" (Winner 1986:152). His conclusion is that the introduction and use of technology requires a different sort of thinking from that of the logical application of rules—it is a moral issue.

From geography, Andrew Kirby presents some interesting ideas that shed light on the seemingly irrational behavior of the public with respect to risks. Kirby maintains that speaking of rationality means referring to public acts, not private evaluations of reality. He cites the philosopher Rom Harré, who contends that,

Grappling with risks, Kirby concludes, is "A story of the attempt to deal with the gulf between the ways in which people say they view the world and the ways in which they act" (Kirby 1990:26). There is a dissonance between what we say and what we do, and this dissonance, Kirby contends, is evident in the relationship between the public and the private. To get along in the world people sometimes ignore their own fears and listen to public rationality, and at other times ignore the public rationality. Kirby here points to the difference between commonsense—the instincts and evaluations of the individual—and common sense—the collective judgments and assertions (Kirby 1990:33).

From sociology, Kai Erikson points to differences between human reactions to technological and natural hazards and suggests a way to explain those differences (Erikson 1994). He admits that the line between natural and technological disasters is difficult to draw theoretically. To victims, however, the line is clearer.

In drawing the distinction between technological and natural disasters, Erikson concentrates on a subset of technological hazards, which he calls "toxic" disasters. These are the "new species of trouble" that is the subject of his book. Toxic disasters are different, he claims, because they contaminate rather than destroy, and that contamination and other effects are not bounded. Toxic disasters "violate all rules of plot" (Erikson 1994:148) because as contamination rather than destruction they never end, and, being largely imperceptible to the senses, have no clear beginning. The experience of a hurricane compared to a toxic waste leak is fundamentally different. A hurricane can be seen approaching, announces its arrival with force and is gone, leaving the survivors to pick up the pieces. A dump leak—like Love Canal—gives no indication of impending danger, only becomes perceptible through insidious effects, and leaves the place contaminated with no real hope of purification.

Also from sociology is a body of literature that, while not directly concerned with risk and hazards, seems to have clear application to conflicts surrounding them. A number of scholars working on the sociology of science have developed a way of analyzing the production of scientific knowledge known as the "sociology of translation" (Callon 1986). The sociology of translation refers to the process by which scientists put forward and support a particular observation that they would like to be accepted as fact. The process of translation described by Callon refers to the way in which one actor in a situation imposes roles on the other actors, thereby "translating" their interests and actions. The first step in this process is "problematization," where the scientist attempts to define a situation in such a manner that the others in the situation accept that the scientist is indispensable to the solution of the problem. Problematization includes the creation of an "obligatory passage point" required for the solution of the problem. This passage point may be anything, but it is usually dependent on the scientist. Callon's next step, called enrollment, concentrates on the scientist bringing the other actors into the situation in a way consistent with the problematization. Here Callon considers anything to be an actor, both animate and inanimate, human and non-human. The enrollment of actors means attempting to make them accept a certain role, as defined by the scientist. Callon identifies two more steps, but the gist of the analysis is evident. The creation and application of knowledge by a scientist depends not on the nature of his research but on how well he can get others to behave according to a set of roles that he has set out. Put another way, the implementation of a new technology depends on actors accepting and acting according to a plot set out by the proponent of the technology.

The work of these scholars points toward the direction that I am suggesting for discussions of risk and hazard. The common thread in each is that of narrative. Erikson is most explicit in appealing to narrative for explanation—toxic disasters are different because it is hard to tell compelling stories about them. Kirby appeals to narrative at another level—his concentration on the difference between public and private discourse could also be seen as the conflict between public and personal narratives. Winner's work is less explicitly related to narrative, but by speaking of morals he implicitly appeals to stories. Stories are, after all, things that have morals. Callon sees the outcome of conflict over science and technology as dependent on the success or failure of one party in defining roles for others. Callon does not refer to narrative in his model of translation; however, it seems to me that this is exactly what his understanding relies upon. The idea that a conflict has "actors" who attempt to define "roles" implicitly relies upon an appeal to narratives. Actors and roles are meaningless without fitting them into the plot of a narrative. And while Callon does not turn his attention explicitly to conflicts over technological risk, the approach seems well-suited to just that purpose.

My contention is that the only way to incorporate the truly diverse factors that influence risk decisions is to realize that personal, local, cultural and national narratives about society, technology, place, land, and many other subjects form the mental backdrop against which decisions about risk are made. For purposes of description and reference I refer to the diverse and overlapping sets of narratives appealed to by an individual as the narrative matrix. I use the word "matrix" to communicate that there are many narratives that a person may appeal to, and that these narratives are not necessarily consistent with each other. The narrative matrix includes many of the factors discussed in risk and hazard research, but allows for much greater flexibility to incorporate multiple, often contradictory factors. In practice, the centrality of the narrative matrix to risk decisions is acknowledged by the strategies used to influence public opinion—attempts to force actors in a situation into roles indicate an effort to appeal to what are perceived to be shared elements of the narrative matrix.

The concept of the narrative matrix allows for a deeper consideration of place in efforts to understand risk and hazard conflicts. Because personal and cultural narratives are often about places, and because narratives are perhaps the best way to get a sense of a place (Entrikin 1991), they are ideal for investigating the role of place in risk and hazard decisions. However, it would be premature to proceed directly to a discussion of the relationship of narrative and the narrative matrix to risks and hazards. I must first turn more attention to the issue of narrative, to acknowledge theoretical work in that area and to present a description of language and narrative that can be applied to an understanding of the interplay between people, places, and the discourse about risks and hazards.

What is Narrative?

Much of the academic discussion of narrative centers around its appropriateness for use in historical explanation. Hayden White offers this succinct summary of the schools of thought in this ongoing debate.

What is common among these four strands is a suspicion that narrative is somehow not an accurate description of "reality" (e.g., White 1987). This suspicion rests on a belief that under some set of circumstances language can map perfectly onto the world. However, this does not seem to be the case—language, in fact, is learned ad hoc and is always the product of our experience of the world. Moreover, narratives, or stories—I use the terms interchangeably—are constantly used and appealed to in all aspects of life. The act of learning language is based on telling stories. The way the concept of "pride" or "justice" is explained is usually by telling a story. While we may have an image of some universally applicable, logical definition of "pride," any real understanding is based on the accretion of exemplary stories. The use of language, then, constantly appeals to stories.

We can take this idea of stories and language and apply it to texts. Whether or not a text is explicitly in narrative form (a scientific report versus a novel), it appeals to stories for its understanding. A scientific report makes no sense without the potential of its being fit into a story of what a scientist does and how a scientist acts. Even the explicit narrative appeals to a set of stories about what a novelist does and who a reader is. By no means am I denying that there is more of a plot in a novel than in a chemistry report; I wish only to make the point that both count on appealing to other stories in order to make sense in a society.

It is this observation that virtually everything appeals to narrative that I apply to risks and hazards. Erikson (1994) suggested that the reason toxic hazards are different from other dangers is that one cannot tell a complete story about them—that their very nature defies emplotment. However, I would suggest that Erikson is wrong in a very commonsense way; we can and do tell stories about nuclear waste leaks, dioxin dumps and the like. But what I would take from Erikson is that narratives explaining and justifying the existence of toxic hazards come into conflict with other powerful narratives, among them stories about the self, place, and nation—the sum of which I am calling the narrative matrix. Therefore, it would be useful to identify and investigate the narratives which influence, contradict, and intersect with narratives about risk. This means identifying the motivating factors for human decisions, which I suggest take the form of the narrative matrix. The stories that create the narrative matrix are used to make sense of the world, and they range from private to public. For the purpose of simplification I want to concentrate on three scales: narratives about the self, narratives about local places, and narratives about communities and nations, usually associated with larger, perhaps politically defined places. The continuum between these three levels, collectively taken to be the narrative matrix, contributes to personal identity to varying degrees, and makes up the psychological backdrop against which all manner of decisions are made, including decisions about risk and hazard.

Narrative and Self Identity

The connection between narrative and self identity is intuitive; Ricoeur noted that "we speak of a life story to characterize the interval between birth and death" (Ricoeur 1991:20). There is considerable debate over whether we experience life as narrative or whether we impose the form of narrative on life by "emplotment" of the maelstrom of unconnected sense perceptions. Ricoeur asserts that "stories are recounted and not lived; life is lived and not recounted" (Ricoeur 1991:20). White (1987) also holds this view, suggesting that we force events into narrative form. Carr opposes this view, claiming that "Narrative is not merely a possibly successful way of describing events; its structure inheres in the events themselves" (Carr 1986:1). I introduced this disagreement before and suggested that it was moot, because all language, not just narrative, is funneling "reality" through human interpretation. However, this discourse is still interesting because despite the disagreement over whether "reality" happens as a narrative, there is widespread acknowledgment that narrative plays an important role in identity.

There are a number of philosophical ideas that address why narrative is important to identity (e.g., Ricoeur 1991). The point which I need to make is the much simpler observation that narrative is important to identity. It does not require extensive philosophical training to realize that most people have a sort of running narrative of their lives. That narrative may be constantly changing, and the person may not always act according to it, but most, when asked who they are, would tell a story about their past, their present, and how they expect their future to be. In fact, it would be difficult to explain who someone is without appealing to narrative, either implicitly or explicitly.

These personal narratives that contribute to establishing and maintaining identity are important for understanding decisions about risk. When Kirby makes the distinction between commonsense and common sense he is pointing to the difference between personal narrative and collective narratives. The choice to fly in an airplane is an example of this. Some people refuse to use air transportation because nowhere in their idea of who they are—their story of their lives—is a plot twist that involves the possibility of dying in an airplane crash. The football commentator John Madden is the most public example of this. Refusing to become an airline fatality is so much a part of his life story that his tour bus has become a recognized part of his identity.

Narrative and Place

There are two points to be made about the concept of place in the context of risk and hazards. First is that one of the important ways in which places are created is through telling stories about them. Second is that people tend to derive identity from connection to a place and that places can therefore provide strong motivations for human actions. These are important features to recognize because they represent a different way of making decisions than that which is normally supposed by risk and hazard studies.

Tuan (1980; 1989; 1991) stresses the importance of language in cultivating and maintaining a sense of place. "Words have great power in creating place," he writes, citing the stories and rituals used by Australian aborigines to center themselves in the world (Tuan 1980:6). The process of naming, examined in depth by Carter (1987) with reference to Cook’s charting of Australia, has the power to create places. This does not mean that places do not exist before they are named, but that names bring them into focus relative to the set of people naming them. Children often develop their own names for places with friends, giving them a significance that is not accessible to the adult world. Cook’s naming created places for the entire Western world.

One step beyond naming is storytelling, which plays an important role in the creation and maintenance of place. Telling stories seems to be one of the ways in which people form connections to places. Myths of origin are one kind, expressing a primordial bond to a place. Myths that a people was born out of a mountain or dropped from the sky onto an island are good examples. Stories of families connect them to places; the genealogy in the Bible is still used two thousand years later as the claim of a people to a place. On a smaller time scale, family histories and anecdotes serve to give people roots in places. There are many ideas about why narratives serve this function—for example, Entrikin (1991) suggests that they capture the "wholistic" quality of a place.

This aspect of narrative connecting people to places can also be applied to communities of people. Not only does narrative contribute to personal identity, it is constitutive of community identity, and as such transmits the values of a community. White (1987) uses this as a criticism of narrative, claiming that its moralizing nature negates its ability to describe the world as it is. As I mentioned before, this view is blind to the fact that language itself is based on shared morality (see Winch 1970). Johnstone (1990) investigates this connection between place, community, and narrative. Her examples, taken from Fort Wayne, Indiana, show the important role of narrative in developing a sense of community in a place. The creation of "public narratives" about reactions to a flood gives the community a sense of purpose and cohesion as the product of its place.

Perhaps it is almost too obvious to note that places are important to identity. But consider a few examples that support this assertion. Living on the "West Side" or in "The Valley" has implications for identity. Being from Glendale means something different than being from Compton. Where I grew up in Maine, being from Orono meant being associated with a university and a more educated population than being from the mill town of Old Town. Being from a farm carries a different meaning than being from a city. Associations with places are incorporated in identity and influence both what one does in that place and how one relates to other places. Tuan (1980) distinguished between two sorts of attachments to place: "rootedness" and "sense of place". Rootedness is characterized by an unexamined knowledge of a place through long habitation. Unfortunately he uses only examples from traditional societies, but the concept, I believe, applies to modern situations. A sense of place is a more self-conscious awareness of the history of a place—a knowing about rather than a knowing. But both of these attachments to place may influence reactions to environmental threats.

A subset of having a sense of place may be what Leopold has called the "land ethic" (Leopold 1949). In contrast with a sense of place, the idea of a land ethic is predicated on a connection to a place not so much as a human place, but as a natural one. Awareness and understanding of places as non-human may contribute as much as anything to opposition to technological hazards. In one of the few glimmers of common sense shown by the risk professionals, Clark Bullard, Chairman of the Central Midwest Interstate Compact for Low Level Radioactive Waste Management, notes that "An alternative explanation of the NIMBY syndrome, one that is more consistent with the data, recognizes that local opposition groups are composed mainly of individuals who have a strong attachment to the land" (Bullard 1992:718). This "attachment to the land" is a sense of place, but one more tied to understanding places as more than merely the home of humans.

Narrative and Community/Nation

Narratives are used to create community in contexts other than those explicitly related to local places. Somers (1992) has investigated the role of narrative in explaining class formation. The English working class has not behaved as would be predicted by standard class theory, yet they were the class from which the theory was developed. To explain this paradox Somers notes that what we "recognize as nineteenth-century working-class formation developed from patterns of protest almost exclusively among northern villages—the inheritors of those strong, popular legal cultures of early pastoral and rural-industrial settings. Working families carried with them into the nineteenth century a robust narrative identity based on a long culture of practical rights" (Somers 1992:616). The message of this analysis is that any sort of class formation resulted from a collective narrative of identity rather than as the result of any sort of structural forces created by the mode of production.

Narrative is also used to create national communities (Bhabha 1990; Morley & Robins 1993; Berdoulay 1994). The emergence of the modern nation state was also marked by the development of national histories of golden ages that inevitably point toward a glorious future (Tuan 1980; Taylor 1993). Taylor (1993) details the outright fabrication of Scottish national history, complete with an invented history of clan tartans. American history, at least as it is taught in elementary and secondary school, is often transmitted in the form of stories, all contributing to the collective idea of the country. These stories, like the apocryphal tale about George Washington cutting down the cherry tree, establish what it means to be American and illuminate the values upon which the country is supposedly founded. Whether these stories are true is of little import; their purpose is to perpetuate the community. Even modern "politically correct" versions of history, though possibly more complete histories, are also attempts to redefine the character of the nation.

The examples which I have used here to illustrate the use of and appeal to narrative to create and maintain identities are by no means comprehensive. But what I have tried to suggest is that human actions can often be better understood in terms of the sum of these narratives—the narrative matrix. The narratives of the narrative matrix may be incomplete or contradictory, but nevertheless they are important to investigate in order to understand human behavior. This claim supplements many of the methods used to analyze decisions about risk and hazard—the premises of revealed preferences and cost-benefit analysis, for example, may reflect only one narrative to which individuals may appeal. I suggest that a better way to understand reactions to risks and hazards is to investigate the variety of personal and public narratives that are either supported or refuted by the danger in question, and to realize that they need not be consistent, complete, or economically rational. To a certain extent, this argument is similar to Douglas and Wildavsky’s cultural construction of risk argument if they were to acknowledge the role of narrative in creating and perpetuating culture, except that my premise is that there are many levels—personal, local, cultural, national—that influence the evaluation of risk.

Some Narratives About Technological and Natural Hazards

The importance of the diverse nature of the narrative matrix in evaluating risk and hazard is best illustrated with examples. The first is the experience of Utah residents living downwind from the Nevada Test Site during and after U. S. testing of nuclear weapons both above and below ground. Carole Gallagher interviewed and photographed hundreds of test site workers and downwinders for her book, American Ground Zero, showing the effects of U. S. nuclear weapons testing on its own citizens (Gallagher 1993).

Reading through the interviews in American Ground Zero it becomes clear that people certainly can tell stories about toxic hazards, and that those stories are very compelling. Consider these stories from Ben Levy, a worker at the Nevada Test Site who was responsible for collecting experiments left at ground zero.

Every story in American Ground Zero has features similar to Ben Levy’s. For the downwinders the place of contamination was the home and farm, places of refuge rather than of dangerous work, but their stories are full of the same sense of betrayal and bitterness. What is clear is that people can and do tell stories about toxic hazards. The reason that these stories were kept secret for so long, like the DOE medical tests using radiation, is that they run counter to a national story. The U. S. is a place to pursue happiness, and the government, like George Washington, is supposed to be honest and to protect its citizens. That government is also charged with defending the country from foreign powers. Both of these stories about the government hold powerful sway in most parts of the country, so much so that people would deny that the downwinders were being exposed to anything harmful because the government would not let that happen, or because the government had to do it for national security. Throughout the stories in American Ground Zero the tellers of the stories themselves are struggling with going against this national myth. One downwinder says,

It is this dissonance with a national story that would cause some to ignore or discount the danger posed by the government, but this does not mean that they "choose to take the risk" as the proponents of revealed preferences would assert. They (especially the Mormons, who are taught in church to be patriotic) do not want to believe that the U. S. government could put them in harm’s way. What we see is not a logical risk assessment, but the playing out of conflict between different narratives about a place, the United States.

Radioactive contamination also comes into conflict with narratives about Utah and the West more generally. Many of the downwinders were ranchers or farmers (called, incidentally, "a low-use segment of the population" by the Department of Defense; Gallagher 1993) who made their livings by working outside. Part of the Western experience is to be exposed to the elements, which, although they may make life difficult, presumably build character and ultimately provide support. The idea that the very land that defines the Western experience could become deadly runs counter to the story of life "home on the range." One Utah rancher’s wife rather plaintively laments that, "We didn’t know that our own milk was poisoning us" (Gallagher 1993). Radiation contamination had turned the tough yet nurturing land into a deadly menace, but it was easier for the downwinders to believe the government’s propaganda that the atomic testing posed no danger. For many, the goodness of the land and a life outside near it was too powerful a story to abandon.

A third theme of the stories of the downwinders is the utter unnaturalness of what happened. There are many accounts of hair falling out, lambs with two heads, babies with no lungs, stillbirths, tumors, cancers, miscarriages, and other decidedly unnatural problems. Even though some of these problems may arise in the normal course of events, there is no question that those affected know that what happened to them was not natural. Radioactivity in this context is clearly an unnatural, technological hazard, and therefore being subjected to it provokes feelings of bitterness and resentment. The distinction between natural and technological hazards is not problematic for victims (Erikson 1994).

While I will return to the question of why humans continue to make this delineation later, I would first present the example of a story about a natural hazard. William Least Heat-Moon incorporated many interviews into his book about Chase County, Kansas (Heat-Moon 1991). One of them tells the story of surviving a tornado.

She found the wedding band, but lost the other rings. This is a vastly different story from those about technological hazards. First, the moral of the story, if there is one, is that it is futile to try to save things in the face of a natural disaster. She lost the rings to the dogs even though she saved them from the tornado. The message is that there is no use in trying to escape the forces of nature. The story has no bitterness, no blame to place. Rather than conflicting with narratives about the place, the story about the tornado serves to reinforce them. To live in Kansas is to live in tornado country. It coincides with the American story of the frontier farmer, clinging stubbornly to the land in the face of the vagaries of wind and rain. This experience of a natural disaster, like most natural disasters, tends to resonate with, rather than conflict with, stories about places. Natural hazards contribute to rather than contradict the identity of places: California without earthquakes or Kansas without tornadoes would be unthinkable.

So, if each of us appeals to a narrative matrix, made up of conflicting narratives about identity, place, nation, technology, and society, in the course of everyday life, how does this address the issue of Ward Valley? How can the knowledge that narrative is important be used to better understand conflicts over noxious facilities? My suggestion is that by being aware that the conflict is not simply over the rational application of a set of rules, but over the complex interaction of elements of the narrative matrix, events become much less mysterious. And, if this approach is correct, it allows more realistic prediction of the outcome of conflicts involving risk judgments. The ultimate outcome depends not on the arguments adduced, but to what degree the various actors can control the way in which the problem is described, what roles the other actors are allowed to play, and what widely-held elements of the public’s narrative matrices they can ally themselves with. With this in mind, I return to the proposed Ward Valley nuclear dump.

 

Chapter 4