Heather E. Douglas

When considering the importance of science in policymaking, common wisdom contends that keeping science as far as possible from social and political concerns would be the best way to ensure science’s reliability. This intuition is captured in the value-free ideal for science—that social, ethical, and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values. Contrary to this intuition, I will argue in this book that the value-free ideal must be rejected precisely because of the importance of science in policymaking. In place of the value-free ideal, I articulate a new ideal for science, one that accepts a pervasive role for social and ethical values in scientific reasoning, but one that still protects the integrity of science.

Science, Policy, and the Value-Free Ideal (PDF file)

2 thoughts on “Heather E. Douglas”

  1. shinichi Post author

    Science, Policy, and the Value-Free Ideal

    by Heather E. Douglas

    (2009)

    Chapter 1
    Introduction
    Science Wars and Policy Wars

    When considering the importance of science in policymaking, common wisdom contends that keeping science as far as possible from social and political concerns would be the best way to ensure science’s reliability. This intuition is captured in the value-free ideal for science—that social, ethical, and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values. Contrary to this intuition, I will argue in this book that the value-free ideal must be rejected precisely because of the importance of science in policymaking. In place of the value-free ideal, I articulate a new ideal for science, one that accepts a pervasive role for social and ethical values in scientific reasoning, but one that still protects the integrity of science.

    Central to the concerns over the use of science in policymaking is the degree of reliability we can expect for scientific claims. In general, we have no better way of producing knowledge about the natural world than doing science. The basic idea of science—to generate hypotheses about the world and to gather evidence from the world to test those hypotheses—has been unparalleled in producing complex and robust knowledge, knowledge that can often reliably guide decisions. From an understanding of inertia and gravity that allows one to predict tides and the paths of cannonballs, to an understanding of quantum mechanics that underlies the solid state components of computers, to an understanding of physiology that helps to guide new medical breakthroughs, science has been remarkably successful in developing theories that make reliable predictions.

    Yet this does not mean that science provides certainty. The process of hypothesis testing is inductive, which means there is always a gap between the evidence and the theory developed from the hypothesis. When a scientist makes a hypothesis, she is making a conjecture of which she is not certain. When the gathered evidence supports the hypothesis, she is still not certain. The evidence may support the theory or hypothesis under examination, but there still may be some other theory that is also supported by the available evidence, and more evidence is needed to differentiate between the two. The hypothesis concerns a great many more instances than those for which we will carefully collect data. When we collect more data, we may find that seemingly well-confirmed hypotheses and theories were false. For example, in the late nineteenth century, it was widely accepted that chemical elements could not transform into other elements. Elements seemed to be stable in the face of any efforts at transmutation. The discovery of radioactivity in the early twentieth century overturned this widespread belief. Or consider the theory of ether, a medium in which it was once commonly believed light traveled. Despite near universal acceptance in the late nineteenth century, the theory of ether was rejected by most physicists by 1920. Going even further back in history, for over 1,500 years it seemed a well-supported theory that the sun revolved around the Earth, as did the fixed stars. But evidence arose in the early seventeenth century to suggest otherwise and, along with changes in the theories of mechanics, overturned one of the longest standing and best supported scientific theories of the time. After all, how many times had humans seen the sun rise and set? And yet, the theory was ultimately incorrect. Data can provide evidential support for a theory, but can never prove a scientific theory with certainty. Aspects of the world that were once thought to be essential parts of scientific theory can be rejected wholesale with the development of new theories or the gathering of new evidence.

    Because of the chronic, albeit often small, uncertainty in scientific work, there is always the chance that a specific scientific claim is wrong. And we may come to know that it is wrong, overturning the theory and the predictions that follow from it. The constant threat of revision is also the promise of science, that new evidence can overturn previous thought, that scientific ideas respond to and change in light of new evidence. We could perhaps have certainty about events that have already been observed (although this too could be disputed—our descriptions could prove inaccurate), but a science that is only about already observed events is of no predictive value. The generality that opens scientific claims to future refutation is the source of uncertainty in science, and the source of its utility. Without this generality, we could not use scientific theories to make predictions about what will happen in the next case we encounter. If we want useful knowledge that includes predictions, we have to accept the latent uncertainty endemic in that knowledge.

    The chronic incompleteness of evidential support for scientific theory is no threat to the general reliability of science. Although we can claim no certainty for science, and thus no perfect reliability, science has been stunningly successful as the most reliable source for knowledge about the world. Indeed, the willingness to revise theories in light of new evidence, the very quality that makes science changeable, is one key source for the reliability and thus the authority of science. That it is not dogmatic in its understanding of the natural world, that it recognizes the inherent incompleteness of empirical evidence and is willing to change when new evidence arises, is one of the reasons we should grant science a prima facie authority.

    It is this authority and reliability that makes science so important for policy. And it seems at first that the best way to preserve the reliability of science is to keep it as far from policy as possible. Indeed, the realm of science and the realm of policy seem incompatible. In the ideal image of science, scientists work in a world detached from our daily political squabbles, seeking enduring empirical knowledge. Scientists are interested in timeless truths about the natural world rather than current affairs. Policy, on the other hand, is that messy realm of conflicting interests, where our temporal (and often temporary) laws are implemented, and where we craft the necessary compromises between political ideals and practical limits. This is no place for discovering truth.

    Without reliable knowledge about the natural world, however, we would be unable to achieve the agreed upon goals of a public policy decision. We may all agree that we want to reduce the health effects of air pollution, for example, or that we want safe, drinkable water, but without reliable information about which pollutants are a danger to human health, any policy decision would be stymied in its effectiveness. Any implementation of our policy would fail to achieve its stated goals. Science is essential to policymaking if we want our policies concerning the natural world to work.

    This importance of science in achieving policy goals has increased steadily throughout the past century in the United States, both as the issues encompassed by public policy have expanded and as the decisions to be made require an increasingly technical base. As science has become more important for policy, the relationship between science and policy has become more entangled. This entanglement exists in both directions: science for policy and policy for science. In the arena of policy for science, public funds allocated for doing science have grown dramatically, and these funds require some policy decisions for which projects get funded and how those funds will be administered. In the arena of science for policy, increasing numbers of laws require technically accurate bases for the promulgation of regulations to implement those laws. These arenas in practice overlap: which studies one chooses to pursue influences the evidence one has on hand with which to make decisions. In this book, however, my focus will be largely on science for policy.

    While the entanglement between science and policy has been noted, the importance of this entanglement for the norms of science has not been recognized. As science plays a more authoritative role in public decisionmaking, its responsibility for the implications of research, particularly the implications of potential inductive error, increases. Failure to recognize the implications of this responsibility for science, combined with the desire to keep science and policy as distinct as possible, has generated deep tensions for our understanding of science in society.

    These tensions are evident in the increased stress science has been under, particularly with respect to its public role. Some commentators note an increasing strain on the “social contract” between science and society (see, for example, Guston and Keniston 1994). This strain was made manifest in the 1990s when two public debates erupted over science: the “Science Wars” and the sound science–junk science dispute. Both can be taken as emblematic of science under stress in our society.

    The Science Wars, as they are often called, centered on the authority of science. They were about whether or not science should be believed when it tells us what the nature of the world is, about whether or not science should have more public authority than other approaches to knowledge or belief. For those outside the world of science studies, these are astonishing questions to raise. If one wants to know something about the natural world, it seems obvious that one should ask scientists. While few in science studies would actually dispute this, the claim has been made that the knowledge produced by science has no special authority above and beyond any other approach. In other words, the claim is that science and its methods have no special hold on the ability to uncover and speak truth; they simply have more funding and attention.

    The sound science–junk science war, in contrast, does not question the special epistemic authority given to science in general, or the overall reliability of science for answering empirical questions. Instead, this dispute is about which particular piece(s) of science should shape policy. When is a particular body of scientific work adequately “sound” to serve as the basis for policy? Debates in this arena center on how much evidence is sufficient or when a particular study is sufficiently reliable. The arguments focus on such questions as: How much of an understanding of biochemical mechanisms do we need to have before we regulate a chemical? How much evidence of causation is needed before a court case should be won? How much of an understanding of complex biological or geological systems do we need before regulatory frameworks intervene in the market to prevent potential harm? The idea that science is the authoritative body to which one should turn is not questioned; what is questioned is which science is adequate for the job, or which scientific experts are to be believed by policymakers, Congress, and the public.

    While both of these disputes are symptomatic of deep concerns surrounding the public role of science, neither has been able to produce a satisfactory approach to understanding the role of science in society or what that role might mean for the norms of scientific reasoning. This is, in part, because both disputes began with the presupposition that science is a distinct and autonomous enterprise developed by a community of scientists largely in isolation from public questions and concerns. Such an understanding of science and scientists inhibits a clear view of how science should function in society. Both in the academic arena of the Science Wars and in the policy arena of the sound science–junk science dispute, the discussions shed little light on the deep questions at issue, even as the existence of the debates indicated the need for a more careful examination of the role of science in society and its implications.

  2. shinichi Post author

    Why science should “not” be value-neutral
    (Book review: “Science, Policy, and the Value-Free Ideal”)

    by rmaruy

    http://rmaruy.blog.fc2.com/blog-entry-46.html

    Epilogue

    The “value-free ideal” has caused needless confusion. Scientists have long believed that giving any consideration to ethical and social values, and especially admitting those values into the evaluation of scientific evidence, would undermine the integrity and authority of science. In reality, however, it is precisely ethical and social values that determine whether evidence is sufficient, above all in cases where science has a large social impact. And although values are the chief reason experts disagree with one another, they have been left unexamined and unacknowledged. As a result, experts who disagree end up branding each other’s work “junk science,” which is how the sound science–junk science dispute broke out. In a situation where “junk science” is wielded as a term of abuse, the integrity and authority of science have no room to function, and science-based policymaking breaks down.

    It is time to abandon the value-free ideal and articulate a different, better one. The value-free ideal in its current form emerged in the 1950s, and it was, ironically, philosophers of science who established it. This happened just as the social role of scientists was beginning to grow and the isolationist image of the scientist (the very image the value-free ideal presupposes) was ceasing to fit. Once we drop this isolationist premise, the reason for holding value-freedom as an ideal disappears as well. Acknowledging the social role of science both creates room for a new ideal and makes one necessary.

    The first step toward a new ideal is to think carefully about the moral responsibilities of scientists. Because science carries social authority, stating a scientific claim carries real responsibility, and the consequences of stating it too strongly (or too weakly) cannot be taken lightly. Scientists may lighten this burden with the help of others, but they can never be completely freed of the responsibility. We always need someone to interpret the data, to make the uncertainties explicit, and to tell us what consequences our decisions will have. To play this role, scientists must make judgments, and making judgments carries responsibility.

    Values are indispensable to scientific judgment, but their function must be subject to certain constraints. Even if we abandon the value-free ideal, it would be a mistake to grant values the same role that evidence plays in the scientific process. Doing so would undermine the very worth of science, along with its integrity and legitimacy. To avoid that inversion, we must distinguish among the several roles values play in science, and restrict values to an “indirect” role when important scientific judgments are made (in particular, when assessing whether the evidence for a scientific claim is sufficient).

    What preserves the integrity of science is not keeping values out, but distinguishing and limiting the roles they play. Drawing this distinction clearly also bears on securing the “objectivity” of science, since at least one aspect of scientific objectivity consists in whether the roles of values in reasoning are clearly distinguished. Objectivity has other aspects as well. Each criterion of objectivity can strengthen the reliability of a claim, yet none is complete on its own, so scientific claims are judged against multiple criteria of objectivity (this practical usefulness is part of why the concept of objectivity is so complex). What is essential for preserving the integrity of science, however, is confining the occasions on which values play a “direct role” to the appropriate stages, such as choosing what to study, where values can legitimately operate.

    This becomes the new key to protecting the integrity of science, and it will change our understanding of how science should be used in policymaking. When scientific evidence is evaluated, social and ethical values, even those specific to a particular situation, must not be ignored; on the contrary, they will come to be seen as essential. Values, however, are to be used only in the limited role of judging how serious the remaining uncertainties are. And in a democracy, it is equally important to make explicit (in policy documents, research papers, and the like) what indirect roles values have played.

    Requiring that values be made explicit places some burden on scientists and policymakers. It may therefore be useful to involve citizens in the process and have them help make the necessary value judgments. Research on social mechanisms that would enable such public participation has already begun, and one hopes it will be pursued all the more vigorously. Experiments in citizen participation of this kind will complicate the policymaking process, moving it away from the linear model of “facts first, value judgments afterward.” But not shying away from this is what it means to assess the nature of scientific knowledge honestly. Science is our most reliable source of information about the world, but it can never be complete, and it must always remain open to being overturned by new evidence. We should always be ready to be surprised by the world; the moment we stop being open to surprise, we step into “junk science.”

    We must make political decisions on the basis of this ever-changing science. But even if the soundness of a decision is later overturned by subsequent evidence, then so long as the roles values played were legitimate and fully open, we will be able to look back and assess whether it was the best decision given the evidence and the value commitments we had at the time. That is the best we can do, living in this uncertain and complex world.

