
Questions For Precautionary Thinking - June 2002

The Networker
By Nancy Myers

This Networker is part of an occasional series on the precautionary principle, which has been a major focus of the work of the Science and Environmental Health Network since 1998. The recurring question is how we implement the precautionary principle. On the simplest level, the emerging principle of international law known as the precautionary principle is "invoked" to justify taking protective action when serious harm might occur. Used this way, the precautionary principle is a kind of emergency measure: a way to slow the development of a new technology, such as genetically engineered crops, until more is known about its side effects, or to bar imports of substances over which there is scientific controversy, such as beef containing hormone residues.

But these cases are far from simple. They involve judgment calls about what is dangerous (since science cannot provide definitive answers) and about what risks a society is or is not willing to take. When opposing interests are involved, or different societies with different values and standards, any decision will be challenged. The result is often stalemate.

This narrow invocation of the precautionary principle in certain limited circumstances is not enough. We need to back up these "simple" judgment calls with a logic and ethic powerful enough to expose and counter prevailing myths and errors in current ways of acting. We must clarify and act upon the values that cause us to make such judgments. And we must supplement these judgments with a comprehensive and consistent approach to making decisions about how we develop and use technologies and products, and how we treat the Earth and its most vulnerable species and inhabitants.

This does not translate into a one-size-fits-all method, however. Rather than suggesting comprehensive procedures for "implementing the precautionary principle" (although these will be appropriate in some cases), we might think instead of the kinds of questions that invoke our precautionary intelligence in different situations and in the face of many kinds of decisions. These questions, like the precautionary principle, are deceptively simple. They do not have prescribed answers. Rather, they are designed to make us think better and act more wisely. They remind us of what we forget when we are stuck in current ways of thinking or caught up in futile arguments. They are a checklist for sanity and wisdom.

A version of this list of questions will appear in a forthcoming book (Myers and Raffensperger, editors), so please let us know what you think. Do you find them useful? Is anything missing?

Many of the examples will be more fully explained in that book. On the nearer horizon is an important book to be published this fall by Island Press, edited by Joel Tickner, on precautionary science. Some chapters of that book are cited here.

1. What do we care about? This is by its nature a first question. Sometimes it will be the first one we ask, for instance, when we are deciding where to direct our energies. More often, though, it is a question we will ask ourselves repeatedly, and answer, as we work toward change or as we identify what is wrong with the way things are being done. Whenever we ask it, this question brings us back to basics: the values that drive us. It helps keep us working in the direction we have chosen and helps us gather allies among those who share our values. (See Blue Mountain Lake Statement of Essential Values.)

In a society where value systems compete, it is especially important to name the values to which we aspire. When we do, we may find allies in surprising places. On the other hand, each of us holds conflicting values and follows no single set of values with absolute consistency. This often shows up as a contradiction between what we say and what we do, and it is important to expose those contradictions. But taking a positive approach—inviting people to act on what they believe and value—may be more effective in the long run. Guilt can be paralyzing, and what we want is change, not paralysis.

Example: By emphasizing the shared ethic summed up as "first, do no harm," the Health Care Without Harm campaign has had considerable success persuading medical institutions to reduce the amount and toxicity of medical waste and to find safer substitutes for problematic materials such as phthalate-containing plastic IV bags.

The questions that follow are all aspects of the next logical question: If we know what we care about, how do we exercise that care?

2. What is our goal?  Goals are powerful instruments. Naming the goal of an existing or proposed activity will expose its purpose. Often, however, a stated goal, such as feeding the world or meeting consumer demand, is not the real or primary purpose behind the activity; the primary goal is to make money for a corporation. We should get in the habit of naming real goals, especially primary ones.

Ex.: In marine fisheries, the precautionary principle has been applied to the limited goal of protecting fisheries, species by species, rather than protecting marine ecosystems. This limitation, and the failure to articulate the goal, have interfered with achieving it.

U.S. society is built on the premise that freedom is more important than goals; that is, that nearly any activity is justified, regardless of its purpose, unless it harms others. Even if we do not challenge this basic assumption, it is possible to introduce purposeful activity that changes its terms. When we set a goal, such as improving children's environmental health or restoring biodiversity, we decide to act in certain ways to reach that goal, often in concert with others, rather than being left entirely to our own devices.

Ex.: The restoration of mine sites described in the January 2002 Networker is an example of how a positive goal spurred purposeful activity that brought a community into a new, harmonious relationship with its environment.

a. Whose goal is it?  By naming primary goals of problematic activities, we may expose the fact that the goals belong to a few at the expense of many and that affected parties have not been consulted. This is a major flaw of modern industrial society. It is important to work toward inserting those voices into decisions.

Ex.: The POPs treaty is an effort to insert the voice of the affected global population into decisions about persistent organic pollutants.

Likewise, in beginning purposeful activity with precautionary goals, it is important to gather allies and establish as robust a consensus as possible. Opposition to change is inevitable, but working toward a shared goal, including all affected parties, may be more effective in some cases than simple, direct confrontation.

b. Does the goal reflect precautionary values? Goals are the practical expressions of our values, just as our behavior and methods reflect our ethics. It is good to remind ourselves that it is not contradictory to be both idealistic and practical; rather, these modes represent complementary aspects of being human: thinking and acting. A goal that embodies our values will help us act on what we believe.

3. What choices do we have?  Introducing more choice hardly seems necessary in a culture of overchoice. And yet, too few of our choices are made at a meaningful level. A new habit of the precautionary mindset is to ask whether, in the light of precautionary goals, we are able to make precautionary choices at any given stage: from how we direct our research to how we make products, build homes, grow food, make purchases, treat our illnesses, and relate in other ways to the natural environment. These choices can be influenced by goals. One way to think of this is "backcasting"—thinking in terms of a goal and working back to see how it will be accomplished. Here are some backcasting questions:

a. What is feasible and likely to move us toward the goal?  This question will lead to an assessment of the benefits of various ways of working toward the goal. It is a classic question for brainstorming sessions. Countless courses of action may qualify as precautionary, depending on the circumstances.

b. How do choices compare and rank?  This question points to the need to consider what is known and not known about harms and disadvantages of various alternatives (see question 5), to focus on best practices, and set priorities. But the approaches to the goal may be multiple, with different parties taking on different actions.

c. How do we find even better solutions?  This question applies to working toward a positive goal, but it also applies to stopping a harmful or potentially harmful activity. In the former case more solutions, and better ones, are always helpful. The search for them should lead to wider consultations—more democratic and interdisciplinary participation—and more and better science (see question 5). In the latter case, the problem may be that few alternatives exist to potentially harmful activities, but the more likely problem is that there is little incentive to adopt better alternatives.

d. How do we adopt better solutions? The environmental movement has focused on the need for regulation, while industry chafes under all regulation, or, at best, promises "voluntary compliance." This resistance to change, especially forced change, is inevitable and the conflicts to which it leads may be unresolvable. In most situations, both carrot and stick are needed to overcome the status quo. On the "carrot" side it is important to emphasize the long-term economic as well as health advantages of sustainable practices. Establishing wide consensus on positive goals is another way to help circumvent this conflict. But (as any parent will testify) even highly contested goals, once adopted, can have beneficial effects for those who contested them.

Ex.: 1) The treaty banning chlorofluorocarbons in order to protect the ozone layer gave rise to the rapid development of safer substitutes, which had been available but not widely used at that point. 2) Raising gasoline mileage standards helped make American automobiles more competitive on the world market.

4. What is the bigger picture?  This question should become reflexive as we learn to think in a precautionary way. It has many aspects—temporal, spatial, philosophical, social, and scientific. It demands that we think of preventing problems rather than only treating them; that we look at wider, long-term effects of our activities; and that we invest in farsighted research. This question falls into two subcategories of questions, having to do with problems and solutions.

a. What are the "upstream" problems? What are the downstream repercussions? What is the broader context? Just as we often accept stated goals rather than examining the real ones, we often focus on immediate or narrow problems without considering origins, repercussions, and context.

Ex.: A classic example is a cluster of seemingly separate issues around beef cattle that includes "mad cow disease," animal welfare, hormones, bacterial contamination, and human dietary issues. The upstream view would look at the industrial beef system based on large slaughtering operations, feedlots, massive corn production, and fossil fuel consumption. The downstream view considers the sanitation problems, disease, and other human and ecological health threats that flow from that system. The broader context is the consumer demand for cheap meat, which, in turn, is fueled by the fast food industry. (Michael Pollan, "Power Steer," NY Times Magazine, March 31, 2002)

b. What are the earlier solutions? The most elegant? The most comprehensive? What are the system solutions? Where can we intervene in the system to set in motion the best solutions?

Ex.: Continuing the beef story, an example of well-targeted intervention is the animal welfare campaign focusing on McDonald's Corporation. The campaign's boycott exerted leverage, but it was not enough in itself to bring about change in the corporation's standards for animal treatment. The corporation began cooperating when the campaign engaged an expert to recommend better ways of rearing and slaughtering animals. These improvements benefited suppliers, who were happier about how they treated the animals, and the corporation, which was able to use its new standards as a selling point. The change on the part of McDonald's has created a ripple effect throughout the food industry.

Often the bigger picture will reveal potential solutions that combine comprehensive, early, and/or carefully targeted intervention with a kind of aesthetic and emotional, as well as scientific and practical, "rightness." Increasingly, such systems are being identified, described, and developed. These "integrated systems" are more than technical improvements on the current ways of doing things; they are based on sound scientific and ethical principles that support sustainability. Further development of these systems and many others should be a priority for science, especially publicly funded research, in the 21st century.

Examples include Biomimicry, Green Chemistry/Building/Manufacturing, Sustainable Agriculture, Clean Production, and Ecological Medicine.

5. What do we know and how do we know it?  A number of precautionary questions are knowledge questions. They have to do with science, but they also address how we gain scientific knowledge and what we do with it. Until now, the major knowledge question has been "What do we know?" While this is important, it is not the only question, and it must be asked in the context of others. Science is closely linked to policy, social choices, and ethics. The following questions should help nonscientists as well as scientists focus on those intersections and on science in the decision-making context.

a. How would we know if harm was occurring or about to occur? This important question addresses our capacity to observe, predict, and monitor. If systems are not in place to do this (for example, if public funding for science is aimed at product development rather than at monitoring public and environmental health or predicting the repercussions of our activities), we have little chance of avoiding harm. No one is "minding the store."

Ex.: A state department of health is setting up an Emerging Issues Advisory Group to evaluate emerging environmental health issues and recommend policy that safeguards public health. The committee includes specialists in different disciplines as well as expert observers such as practicing physicians.

An important priority for publicly funded science should be to establish comprehensive inventories and databases that will help us track changes in health and the environment over time and, therefore, might give early indication of harm (see Carl Cranor in Tickner, forthcoming). Until more such systems are in place, however, we must rely on what early warnings we do receive from various monitoring programs and studies. As bearers of bad news, the scientists who do these studies are often attacked, and critics emphasize the gaps in their knowledge. Cutting-edge science indeed involves great uncertainty. But these early warnings are often the only sign we have of even greater danger to come. They have been right far more often than they have been wrong. (See Late Lessons from Early Warnings: The Precautionary Principle 1896-2000, European Environment Agency, Copenhagen 2001.)

Ex.: Environmental groups were the first to pay attention to early warnings about the endocrine-disrupting effects of DDT. However, the science of endocrine disruption was not taken seriously until twenty years later. Endocrine disruption is now linked to hundreds of substances.

b. What do we know about harmful effects? This question seems simple to those of us who are not scientists. However, scientists are rightly reluctant to pronounce any finding as final knowledge, or "proven." Science is based on testing, refining, discarding, and developing new hypotheses and thus relies on a constant state of uncertainty and open-minded observation. Scientists' reluctance to announce certainty and the shifting nature of scientific knowledge have been exploited by those with an economic interest in ignoring or concealing harmful side effects of enterprises, products, and technologies.

Deciding "what we know," therefore, often means looking at the weight of evidence. Does the preponderance of evidence point to a certain conclusion? Is it suggestive? We do indeed "know" a great deal.

But the discussion should not stop here—or get bogged down, as it often does, in what one scientist has called "adversary statistics"— duels over competing interpretations of data (Richard Levins in Tickner, forthcoming). The way out of such dead ends is to combine the "what do we know" question with the other questions in this list, and in the whole grid.

Ex.: The raging controversy over studies that suggest artificially modified corn genes have found their way into native Mexican maize varieties illustrates how adversary statistics are used to discredit scientists who uncover evidence that a technology might be harmful.

c. Where does our knowledge come from? This question refers back to the first knowledge question: Have we done our best to gather knowledge from all possible sources? Have we given due attention to all plausible forms of evidence, including observations by the alert public? The more numerous the sources of information—both scientific and nonscientific—the more robust the information is likely to be. Lay observers such as farmers, hikers, parents, and practicing physicians are sometimes more likely than other experts to observe suspect patterns and changes in health and the environment.

This question also prompts us to examine sources of knowledge for bias and conflicts of interest. Bias (the lens through which each of us acts and sees the world) is inevitable in both scientific and lay observation. No science is pure, objective, and value-free. A scientist may have a strong commitment (bias) toward protecting the environment and public health. However, the effect of this bias is not the same as the pressure of financial obligations, or conflict of interest, which Black's Law Dictionary defines as the clash between the public interest and the "private pecuniary interest" of the individual concerned. The difference is money.

Often, more attention is paid to bias than to conflict of interest. It is customary for the media to cover an environmental story, like many others, by presenting opposing viewpoints (biases). But the questions that may reveal conflict of interest are not always asked.

Ex.: In the Mexican maize case, the scientific debate was strongly influenced by accusations from "concerned citizens" that the scientists who had uncovered evidence of gene transfer into native species were biased because of their ideological leanings. Months later, a Guardian article traced those early accusations to a public relations firm employed by the Monsanto Corporation, a leading developer of genetically modified seeds. (George Monbiot, "The Fake Persuaders," The Guardian, May 14, 2002)

d. How can we predict from what we know already? In order to prevent harm, we must rely on a certain amount of extrapolation from current knowledge. That means, for example, accepting the implications of animal studies, even though humans are somewhat different; taking small, seemingly isolated disasters seriously as possible indicators of broader effects; working to develop and continually improve models; calling whole classes of chemicals into question if a number of them exhibit danger signs; and so forth. There are many ways to make the best possible use of what we already know. Precautionary thinking looks at what is known and tries to make reasonable predictions about what is still unknown in order to prevent harm.

Ex.: The exact effects of global climate change will not be known until the shift is well advanced and irreversible. But scientists are able to make a number of intelligent guesses about effects, based on models.

e. Do we know enough to act? The public, which relies on scientific knowledge and predictions to protect itself, its descendants, and the environment, must support action on the basis of what we do know, even if that knowledge is partial. This key precautionary question is one that scientists are often willing to answer, even if they are unwilling to declare certainty about their conclusions. The public and their representatives have both a right and a duty to ask this question.

f. Do we know so little that we must act with caution? The other side of knowing enough to act is acknowledging our ignorance in all its forms and acting accordingly. This is often seen as the sole purpose of the precautionary principle—to prevent development of a technology that may prove harmful. But it is only one aspect of a precautionary treatment of scientific knowledge and uncertainty.

Both this question and the preceding one call for examination of the stakes—the nature of the possible or inevitable harm—as well as the availability of alternatives. These are social and political questions as well as scientific ones.

g. How will we learn?  Precautionary science involves an unceasing quest for relevant information. What new information do we need and how will we get it? How will we monitor the effects of our action or inaction? How will we measure progress toward goals? How will we incorporate new information in decisions? A precautionary approach requires a strong, focused research agenda to bring in new information as well as feedback loops to inform public policy, which may, in turn, drive further research.

Scientists, in turn, must consider how to communicate what they know. If citizens have knowledge of evidence that might lead to solving a crime, they are expected to convey that information to those who are in a position to act on that evidence. Likewise, scientists have an obligation to convey the information they have to those who will act on it, such as the public, decision makers, or nongovernmental organizations.

Ex.: In the early years of the atomic age, the fallout from nuclear weapons tests brought widespread public attention for the first time to the secondary dangers of these weapons. Public fear and outrage drove scientists to learn more about these dangers. The new scientific knowledge reinforced the campaign for a ban on nuclear testing in the open air (Barry Commoner, in Tickner, forthcoming).

6. Who is responsible; who and what are affected?  The final set of questions has to do with accountability, justice, and fairness. Of all the precautionary issues, these are perhaps the most human-centered and ethical. However, they intersect with questions of science—knowledge and uncertainty—as well as with humans' relationship with other species. It is important to make these intersections clear rather than isolate these issues as purely ethical or political ones.

These questions are based on what we know about interdependence and connections—humans with other species; humans with other humans; human activities with their consequences, both desired and undesired—and about the difficulty (uncertainty) of tracking and controlling cause and effect in this complex web of relationships. In such conditions we must act as wisely as we can, given the knowledge we have, and exercise as much care as possible, given our ignorance.

a. Are those responsible accountable?  When our activities affect the commons, we should be accountable to those with whom we share the commons and who are affected: other people, other species. Responsible behavior is encouraged by public accountability, which depends, among other things, on the free flow of information. Those who develop and manufacture products are likely to have the most information about their effects. They should have incentives to gain more information and make it public. However, the current system, backed up by the courts, encourages a "don't ask, don't tell" policy. Research into the harmful effects of products is often begun only when a lawsuit has been brought against a company.

Ex.: In almost every one of the major toxic tort class action suits, such as those over Agent Orange, asbestos, and the Dalkon Shield, a company had some information but failed to pursue it or to disclose the problem until the suit was initiated.

b. Do the burdens reflect precautionary values? "Burden" is a legal term with two kinds of meanings, and both are value laden: who is responsible, and where are the lines drawn? In a system based on precaution, proponents and perpetrators of possibly harmful technologies bear the burden of proof, or responsibility, for reasons described above. And in such a system, the lines are drawn on the side of caution, to protect human health and the environment.

Science theorists often describe the drawing of lines in terms of Type I and Type II errors. A Type I error is a false positive: finding an effect when there is no effect. Scientists try hard to avoid Type I errors. In so doing, they are more likely to make Type II errors: not finding an effect when there is one (false negatives). But if errors are to be made, a precautionary mindset (one geared toward policy) leans toward risking Type I errors rather than Type II errors. This is a complicated but precise way of saying that those who suffer harm and those who are vulnerable should get the benefit of the doubt.
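
As a rough illustration (ours, not from the original article), this trade-off can be simulated. The Python sketch below runs many hypothetical small studies and counts how often each error type occurs at different evidence thresholds; the effect size, sample size, and thresholds are illustrative assumptions, not data from any real study.

    # Minimal sketch: raising the evidence threshold trades Type II errors
    # (missed real effects) for fewer Type I errors (false alarms).
    # All numbers below are illustrative assumptions.
    import random
    import statistics

    random.seed(42)

    def z_score(true_effect, n=30):
        """One simulated study: n noisy measurements against a baseline
        of zero; returns a crude z-like statistic (mean / standard error)."""
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        se = statistics.stdev(sample) / n ** 0.5
        return statistics.mean(sample) / se

    def detection_rate(true_effect, threshold, trials=2000):
        """Fraction of simulated studies that 'find an effect'."""
        found = sum(z_score(true_effect) > threshold for _ in range(trials))
        return found / trials

    for z in (1.0, 1.645, 2.576):  # lenient, conventional 5%, strict 0.5%
        type1 = detection_rate(0.0, z)      # effect "found" where none exists
        type2 = 1 - detection_rate(0.4, z)  # real effect missed
        print(f"threshold z>{z}: Type I ~{type1:.3f}, Type II ~{type2:.3f}")

Under these assumed numbers, the strictest threshold all but eliminates false positives while missing a large share of real effects. The precautionary argument is that, when human or ecological harm is at stake, that asymmetry deserves to be rebalanced.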

Ex.: Congress directed the scientific committee examining veterans' claims of injury from the herbicide Agent Orange during the Vietnam War to use a liberal standard of evidence. In effect, Congress and the committee gave veterans the benefit of the doubt when there was some evidence that their claims were justified, even though the evidence fell short of proof. (Joel A. Tickner, Precaution in Practice: A Framework for Implementing the Precautionary Principle, doctoral dissertation, University of Massachusetts, Lowell, Dec. 2000)

c. How can we distribute power, costs, benefits, and responsibilities more justly? These final questions also pose the biggest challenge. They point out the painful gap between power and powerlessness, perpetrator and victim; between precautionary values and our society's current way of operating. Some will choose to address these contradictions head-on in campaigns against egregious abusers. Others will consistently speak for the voiceless—vulnerable humans, other species—or speak out against the destructive norms and contradictions that pervade our culture. In all of this, it is important to look for leverage points for change in systems.

Ex.: Environmental justice campaigns have created a new kind of popular heroine—the unsophisticated Erin Brockoviches of the world who see harm being done, educate themselves, expose the abusers, and fight tooth and nail for justice. While these campaigns exact a high toll on their leaders and often end in defeat, they are key ingredients of eventual cultural change.

An example of a leverage-point campaign is the Science and Environmental Health Network's focus on the misuse of science in toxic tort cases. About a decade ago, several legal refinements gave judges considerable power to bar evidence from trials, and this has tipped the balance even further against plaintiffs claiming injury. The Network believes that changing or reinterpreting these rules will lead to necessary reforms and greater justice.