I’m glad to see Ryan Myer blogging again after a long break. His recent post does a typically good job dissecting the Obama administration’s decision to ratchet up the value of a human life used in cost-benefit analyses and regulatory decisions. As Ryan notes, despite their quantitative facade, such calculations ultimately hinge on subjective, moral judgments. But as often happens in these situations, the Obama administration insists its actions “utilize the best available science in assessing the benefits and costs of any potential regulation.” Ryan asks for greater honesty:
It would be great to see the Administration taking an open, straightforward approach to this. They could just come out and make a very reasonable argument that, based on their values, they feel that human life should be more important than it was under Bush. Instead, they’ve disowned the decision entirely, hiding behind a scientific-seeming method.
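For concreteness, here is a sketch of the mechanics being debated. Every number below is invented purely for illustration; none comes from either post. The point is simply that raising the assumed “value of a statistical life” (VSL) can flip a regulation from failing a cost-benefit test to passing it:

```python
# Illustrative only: how the assumed "value of a statistical life" (VSL)
# drives a cost-benefit test. All numbers are made up for this example.
def passes_cost_benefit(cost, lives_saved, vsl):
    """A regulation 'passes' if monetized benefits exceed its cost."""
    return lives_saved * vsl > cost

cost = 500e6        # hypothetical compliance cost: $500 million per year
lives_saved = 60    # hypothetical lives saved per year

# With a lower VSL the rule fails: 60 * $6.9M = $414M < $500M.
print(passes_cost_benefit(cost, lives_saved, vsl=6.9e6))  # False
# With a higher VSL the same rule passes: 60 * $9.1M = $546M > $500M.
print(passes_cost_benefit(cost, lives_saved, vsl=9.1e6))  # True
```

Nothing in the data dictates which VSL to plug in; that choice is exactly the value judgment Ryan is pointing at.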
I sympathize with this request, and I’ve repeatedly called for simpler, more honest arguments. But methinks Ryan is a bit harsh here. Obama has not “disowned the decision entirely.” Everyone is aware that these decisions flow from Obama himself. I’m also pretty sure that everyone already knows that with respect to environmental regulation, Obama values life more than Bush did. Moreover, greater regulatory oversight is a longstanding liberal policy goal and it’s no surprise that Obama has been following through on it. The values are already widely known and so I’m not sure how much benefit there would be to Obama’s publicly vocalizing them.
As much as I would like to see more nuanced public discourse about science in decision-making, it’s a bit much to expect that from elected officials. Obama is simply doing his job: pushing his agenda with the best tools available. For better and for worse, science is often viewed as one of the best tools, so it’s not surprising to see Obama deploy it here. Especially since hiding behind science doesn’t seem to be hurting him (and may in fact be helping), I don’t see how he can do otherwise. Sure, there’s a bit of grandstanding and political theater here. But are we really surprised? We are, in fact, talking about the President. Grandstanding and political theater come with the territory. Careful arguments based on logical premises are ultimately the realm of philosophers, not politicians.
Unless its power is diminished, we really have no choice but to accept that science will be used as a cover for politics and values.
Julian Sanchez recently discussed why classifying homosexuality as a disorder hinges on both science and values:
I’m glad, of course, that we’ve dispensed with a lot of bogus science that served to rationalize homophobia—that’s a pure scientific victory. And I’m glad that we no longer classify homosexuality as a disorder—but that’s a choice and, above all, a moral victory. It ultimately stems from the more general recognition that we shouldn’t stigmatize dispositions and behaviors that are neither intrinsically distressing to the subject nor harmful, in the Millian sense, to the rest of us…The change in the psychiatric establishment’s bible, the DSM, was partly a function of new scientific information, but it was equally a moral and a political choice. [Emphasis added--PK]
Sanchez’s great example highlights what I’ve argued previously: some scientific judgments involve values while some do not. We can safely say that measuring the acceleration due to gravity is a purely scientific judgment. But we can also safely say that classifying homosexuality is not. It remains a mystery to me why some resist this idea.
Consider William’s comment on Sanchez’s post:
Well how about the mental condition called depression? Are you saying that it is a moral rather than scientific question whether depression is an illness/disorder? I’m talking about can’t get out of bed, too weak to commit suicide depression here, not a bout of the blues. How about Post Traumatic Stress Disorder? You’re saying that diagnosis is a moral rather than medical (scientific) question?
Well, no, William. Neither I nor (I suspect) Sanchez is saying any such thing. We simply accept that mental health contains both value-free and normative science. A belief in objectivity with respect to PTSD does not conflict with a belief in subjectivity with respect to homosexuality. There is no universal standard or set of rules that we can blindly apply in all cases. Believing otherwise is analogous to playing the game without watching game film.
A couple things come to mind. First, as I’ve argued before, using the single word “science” undermines rational discourse on topics like these. Ultimately, Sanchez is trying to argue that stigmatizing homosexuality involves a different kind of science than what we’re used to. And this kind of science necessarily involves moral judgments. But since all we have is “science” and its associated baggage of supreme and perpetual objectivity, this subtlety gets lost.
Second: why did Sanchez have to explain what should be common knowledge? We figured out no later than 1972 when Alvin Weinberg wrote Science and Trans-Science that some areas of science cannot be separated from values. We figured it out again in 1985 when The National Academies wrote a report on risk assessment, yet again when Funtowicz and Ravetz introduced post-normal science in 1991, and once more in Sheila Jasanoff’s book-length treatment on regulatory science. Scholars from fields as diverse as nuclear physics, philosophy, history, and sociology have all independently determined that science is not a monolith and that, yes, sometimes values play a role. In the end, Sanchez’s thesis is impressively mundane and uncontroversial. In an ideal world it wouldn’t merit a shout-out from arguably the most influential political blogger alive.
None of this undermines Sanchez’s eloquence and brilliance. I am always impressed by his writing, and he does a particularly good job here explaining a complicated topic. But if we had dispensed with the false notion of one science that follows “the” scientific method, maybe he wouldn’t have had to.
Let me toss in a few thoughts on a debate that has been beaten to death, most famously during the science wars of the mid-1990s. Do scientific judgments depend solely on data? Or do external considerations enter the decision? There are clearly instances (the history of eugenics and fraud in drug trials come to mind) when biases affect scientists’ work. But we acknowledge those cases as deviations from an ideal. The question is whether science must involve values.
I’ve found that scientists usually get quite emotional and heated during this discussion. The mere suggestion that our work is not absolutely, perfectly objective really gets our blood boiling. The response is partially understandable. After all, much of our credibility depends on the fact that we produce, more or less, objective information. The public counts on the fact that my beliefs about conservation don’t affect my research on climate change. So I do understand some of our indignation.
Nevertheless, I’ve found that we’re often blinded to some basic facts. Consider first the immensity of science. From 1992 to 2002, almost 3 million papers were published in the U.S. alone. Federal R&D in 2009 stood at almost $145 billion. Corporations added another $290 billion. As scientists routinely brag, the results of modern science have permeated every facet of our existence. They are impossible to avoid.
So when we ask whether “science” is value-free, we’re really asking whether those millions upon millions upon millions of research problems are value-free. Given both how vast and how diverse science is, it’s inevitable that at least some of science is not value-free. Surely some questions are so caught up in the fabric of society that it’s impossible to completely separate science from values. There’s nothing scary about it. Asking an abstract question about “science” goes down the wrong path. An enterprise as vast as science cannot be easily generalized.
None of this negates the idea that much of science is value-free. We can safely say that quantum mechanics and molecular biology fall in this category. But a lot of regulatory science (e.g. determining the effect of a pesticide) has to be performed under time constraints and with limited data. In these cases it’s generally accepted that subjective value judgments play a big role. Heck, the National Academy of Sciences even wrote a report about it in 1985. Before that, physicist Alvin Weinberg’s “Science and Trans-Science” made the same argument. Sheila Jasanoff’s The Fifth Branch also addresses this topic. There’s actually a fairly large body of empirical evidence showing quite conclusively that regulatory science involves value judgments.
A couple things come to mind as I look back on these debates. First, most scientists (myself included) ironically ignored evidence. All those studies and data on the subject were irrelevant to us. Whenever the subject came up, one of us could have suggested we look at the relevant research. But no one did. Which brings me to my second point.
There’s something about scientists’ training that makes us believe we’re all qualified to speak for “science.” I would never consider speaking authoritatively on condensed matter physics even though I’ve taken a few classes in the subject. Yet in the past I have waxed eloquent about “science.” And the funny thing is that people (especially non-scientists) take me seriously. But if I’m not qualified to speak about all areas of physics, how on earth am I qualified to speak for science?
Kevin Drum makes some of my previous arguments in this great post, albeit more briefly and eloquently. Check out an opposing view here, and some mediation between the two sides and more insight here. (h/t Andrew Sullivan)
Right before New Year’s Eve, Andrew Sullivan’s blog saw some chatter about the tradeoffs between dying in a terrorist attack versus a car accident. Apparently 113 Boeing 777s must be exploded before terrorism can kill as many people as car accidents do. Yet people don’t scream for protection from their Buicks! Bill Maher makes a similar argument in this entertaining video.
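That 113-planes figure is just arithmetic, and a rough check bears it out. The two inputs below are my own ballpark assumptions, not numbers from either post:

```python
# Rough sanity check of the "113 exploded 777s" comparison.
# Assumed figures (mine, not from the original posts): a Boeing 777
# carries roughly 300 passengers, and U.S. road deaths in a typical
# late-2000s year were around 33,800.
seats_per_777 = 300
annual_road_deaths = 33_800

planes_per_year = annual_road_deaths / seats_per_777
print(round(planes_per_year))  # roughly 113 planes per year
```

In other words, terrorists would have to bring down more than two fully loaded 777s every week to match what cars do routinely.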
And why not? A death is a death is a death….right? If our goal is overall safety, then surely the public should clamor for safer roads as much as they do for airports…right? Well, not quite. As much as I agree that we spend too much on terrorism, both Sullivan and Bill Maher gloss over the important fact that deaths cannot always be treated equally.
Social science research has shown quite conclusively that these calculations inevitably involve a subjective value judgment on how to treat human life. To quote Paul Slovic’s excellent paper: “Simply counting fatalities treats the deaths of the old and young as equivalent; it also treats as equivalent deaths that come from immediately after mishaps and deaths that follow painful and debilitating disease…”
Slovic goes on to explain that distributional impacts (affecting black rather than white, poor rather than rich) and degree of control also affect risk perceptions. People may be more forgiving of risks they knowingly take on than of risks imposed on an activity they were assured was safe. We would, I hope, be very upset if a chemical plant’s discharge solely affected a community of poor, uneducated blacks, even if only a couple dozen people died every year. Calibrating our response to nothing more than total deaths elides these subtleties.
Along these lines, it doesn’t seem that unreasonable for people to demand strong government action on terrorism rather than automobile safety. Perhaps they accept a certain risk of driving a car but don’t do so when flying a plane. Perhaps they think getting blown up is somehow worse than a car crash. Or perhaps they think we have done all we can to improve car safety but haven’t done nearly enough in other areas. Again…accounting just for total deaths misses all this.
Maher et al. are of course free to say that we should treat car crashes and exploding planes equally. They’re entitled to that belief. But it represents their own subjective preference rather than a uniquely rational calculation. Scientists’ acting otherwise is a main reason the public often sharply disagrees with risk assessments. Pretending our risk models are wholly rational also contributes to poor communication and mutual distrust.
None of this undermines the idea that we overemphasize the threat from terrorism. I largely agree with the sentiment. But those arguments shouldn’t rest on a faulty analysis that simply sums total deaths across various activities. While all lives should be treated equally, all deaths should not.
UPDATE: I meant to add this reference the first time. Email me if you want a PDF copy.
Paul Slovic, “The risk game,” Journal of Hazardous Materials 86 (2001): 17–24.