I’m sure that Matt Yglesias has forgotten more economics in the past hour than I will ever know. And yet, he believes that “if spending on military robotics declines then our most talented roboticists will focus more of their time and attention on civilian applications.” Really? Military spending doesn’t affect the overall demand for engineers and scientists? It’s just as likely that if spending on military robotics declines our most talented roboticists will leave robotics and science altogether. If Lockheed Martin, Raytheon, etc. weren’t hiring, many of my friends would be out of a job, not making snazzy commercial gadgets.
Several liberal bloggers protested the Times suggestion that cutting the Defense budget will reduce innovation. While some of their points are well-taken (the DOD budget is almost certainly bloated and wasteful), they all unfortunately make two big mistakes: they equate defense research with weapons research, and they neglect the role of deployment in bringing technology to scale.
Here is Robert Wright’s flawed analysis, typical among the group:
Defense department research, in contrast, focuses on services that people are more ambivalent about–like getting blown up. If more benign services get developed in the process–like if blowing people up involves technologies that help them play digital music–that’s a happy accident.
Wright’s simplistic link between DOD research and weapons ignores the synergies between civilian and military technologies. At some point the DARPA-funded optical interconnects that my girlfriend studies may improve weapons. But in the short run, they have a much better chance of reducing energy use.
At least in universities, DOD complements rather than competes with civilian agencies, and they all fund similar work. Everyone in my lab did the same sort of space physics research. Some of us were funded by the Air Force, some by the Office of Naval Research, and some by NSF. While the emphases may have differed slightly, there was a lot of overlap. That’s why we all had the same advisor. I’m sure there’s a similar dynamic in quantum computing funded by DARPA, the NSF, and DOE.
The existence of multiple funding agencies is one of the main strengths of U.S. science. They foster diverse approaches and ensure that a single paradigm doesn’t dominate. It wouldn’t necessarily be a good thing if all of DOD quantum computing money were transferred to the NSF. We want many groups attacking the same problem and we should be happy DOD is part of the mix.
Now if all we care about is research production, we may be fine with just two or three agencies funding science. Especially if DOD is as inefficient as they suggest, we may be better off transferring half the DOD research budget to NSF and DOE.
But we don’t care about research for the sake of research. We want to drive innovation, which depends on much more than government funding. Which brings me to the second mistake Wright et al. make: ignoring the importance of deployment.
As David Roberts noted, technology deployment is itself a form of research. It’s one thing to make a neat device in your lab. It’s quite another to scale the product, align it with customer needs, navigate regulatory hurdles, and market it successfully. I can’t tell you how routine it is for a company to fail at these stages even when it has the science locked down. As great as NSF research is, it’s only a small part of the picture.
Computers are commonplace not only because smart physicists figured out quantum mechanics. It’s also because we learned how to make lots of computer chips cheaply and quickly. The DOD role in this development has been crucial. Precisely because they are so massive and relatively price-insensitive, they enabled large-scale deployment and the learning that goes along with it.
Cliff Bob shows that he doesn’t understand any of this:
Nowhere in the article is there anything but assumption that only the military, as some kind of beneficent and far-seeing midwife of invention, could have fostered these and other innovations. Nowhere are there convincing arguments that most if not all of these developments wouldn’t have been made either through some other government R & D agency or through the market itself.
Nowhere in Bob’s article is there anything but the wrong assumption that “these developments” occurred primarily because of an R&D agency rather than procurement and deployment. In some cases DOD was the only market in existence because no one else could afford the technology. Only after DOD brought down the price of semiconductors did we all benefit.
DOD may very well be wasteful and inefficient. Maybe in 2012 it’s not the best way to drive innovation, and perhaps the negatives now outweigh the positives. Those are fair arguments. But to debate the point intelligently, we have to first rid ourselves of the myopic view that money is all that matters. DOD funding is associated with scale and deployment, key components of innovation and commercialization. (See Roger for more along these lines.)
The most depressing part about all this is how otherwise brilliant writers make bafflingly simplistic arguments when it comes to innovation policy. Is it really so hard to understand that innovation requires more than government funding of R&D?
To continue musing on why some people don’t accept climate science, I wonder if part of the blame lies with those of us who want legislative action. We’ve convinced ourselves that climate science definitively, inexorably, and without any doubt leads to climate policy. I get the logic, and have a fair deal of sympathy for it. But since our argument often takes the form “science says x, thus implement cap-and-trade,” it’s not too surprising when the science is attacked. Can we really expect otherwise? There are people out there who oppose regulation, taxation, and environmental stewardship. Given the way the debate plays out, asking everyone to agree with the science is just a sneaky way of asking those people to agree with regulation, taxation, and environmental stewardship in the first place.
As I speculated ages ago, it’s possible climate science denial has increased precisely because wait-and-see isn’t viewed as a legitimate response. Perhaps more people would accept climate science if it were decoupled from climate policy, and if our argument instead took the form: the science says x, which means very little for anything else. It’s possible that maintaining the purity of the science means compromising our core argument. (If it’s any consolation, at least one climate policy expert believes wait-and-see can be a reasonable stance.)
Now I’m not arguing (as Roger and Breakthrough do) that we should decouple the science and policy because it will advance the policy. I have no clue what will move climate policy, and I actually agree with David Roberts’ critiques of the Breakthrough approach. But for people like me who aren’t that emotionally invested in the issue, separating the science from policy can bring its own rewards. Despite some of my caveats, I really do want better public science literacy. If muting the link between science and action also mutes some of the intrinsic opposition to climate science, that’s a win in my book.
Julian Sanchez recently discussed why classifying homosexuality as a disorder hinges on both science and values:
I’m glad, of course, that we’ve dispensed with a lot of bogus science that served to rationalize homophobia—that’s a pure scientific victory. And I’m glad that we no longer classify homosexuality as a disorder—but that’s a choice and, above all, a moral victory. It ultimately stems from the more general recognition that we shouldn’t stigmatize dispositions and behaviors that are neither intrinsically distressing to the subject nor harmful, in the Millian sense, to the rest of us…The change in the psychiatric establishment’s bible, the DSM, was partly a function of new scientific information, but it was equally a moral and a political choice. [Emphasis added--PK]
Sanchez’s great example highlights what I’ve argued previously: some scientific judgments involve values while some do not. We can safely say that measuring the acceleration due to gravity is a purely scientific judgment. But we can also safely say that classifying homosexuality as a disorder is not. It remains a mystery to me why some resist this idea.
Consider William’s comment on Sanchez’s post:
Well how about the mental condition called depression? Are you saying that it is a moral rather than scientific question whether depression is an illness/disorder? I’m talking about can’t get out of bed, too weak to commit suicide depression here, not a bout of the blues. How about Post Traumatic Stress Disorder? You’re saying that diagnosis is a moral rather than medical (scientific) question?
Well, no William. Neither I nor (I suspect) Sanchez is saying any such thing. We simply accept that mental health contains both value-free and normative science. A belief in objectivity with respect to PTSD does not conflict with a belief in subjectivity with respect to homosexuality. There is no universal standard or set of rules that we can blindly apply in all cases. Believing otherwise is analogous to playing the game without watching game film.
A couple things come to mind. First, as I’ve argued before, simply using the single word science undermines rational discourse on topics like these. Ultimately, Sanchez is trying to argue that stigmatizing homosexuality involves a different kind of science than what we’re used to. And this kind of science necessarily involves moral judgments. But since all we have is “science” and its associated baggage of supreme and perpetual objectivity, this subtlety gets lost.
Second: why did Sanchez have to explain what should be common knowledge? We figured out no later than 1972 when Alvin Weinberg wrote Science and Trans-Science that some areas of science cannot be separated from values. We figured it out again in 1985 when The National Academies wrote a report on risk assessment, yet again when Funtowicz and Ravetz introduced post-normal science in 1991, and once more in Sheila Jasanoff’s book-length treatment on regulatory science. Scholars from fields as diverse as nuclear physics, philosophy, history, and sociology have all independently determined that science is not a monolith and that, yes, sometimes values play a role. In the end, Sanchez’s thesis is impressively mundane and uncontroversial. In an ideal world it wouldn’t merit a shout-out from arguably the most influential political blogger alive.
None of this undermines Sanchez’s eloquence and brilliance. I am always impressed by his writing, and he does a particularly good job here explaining a complicated topic. But if we had dispensed with the false notion of one science that follows “the” scientific method, maybe he wouldn’t have had to.
I’ll respond yet again to Paul Newall at the Galilean Library. Paul asks: “What, then, is the difference between someone who seizes on doubts to develop a new theory and someone who is merely a contrarian or else actively opposes a theory because of its perceived consequences?”
If by “perceived consequences” we mean the standard solutions offered for climate change (cap-and-trade, carbon tax, efficiency standards, etc.), then we’ve made an implicit and somewhat problematic assumption here. We’ve assumed, as does everyone else in this debate, that everything hinges on the presence or absence of scientific doubt. On the one hand, environmentalists insist that cap-and-trade inevitably follows from sound science. On the other, denialists argue that doubtful science undermines any possible action. It’s important to note that both sides make essentially identical arguments by placing science at the center of the decision-making. They simply conceptualize the science differently.
What’s missing is the idea that climate change manifests different types and degrees of doubt that aren’t necessarily coupled. There’s very little doubt over the basic concept of anthropogenic climate change (ACC), somewhat more doubt over the climate sensitivity, and a high degree of doubt about economic projections and the rate of technological innovation. These questions accompany another layer of doubt about the appropriate policy response. There’s significant doubt about the effectiveness of cap-and-trade and carbon tax, doubt about the use of offsets, and even doubt whether we should respond at all.
As Dan Sarewitz and Roger Pielke Jr. have argued repeatedly, eliminating doubt in the first set of issues doesn’t help us move forward on the second. Improved global climate models will not automatically bring us intelligent climate policy. Some would even say that focusing on them distracts us from more important problems.
I know I haven’t at all responded to Paul’s question. But perhaps his question would never be asked if we confronted and decoupled the many dimensions of doubt that climate change presents us. It should be possible to reject any action on climate change even if you agree with the IPCC. As I’ve just said, wait-and-see may be a perfectly rational response. While this approach will surely not end debate, at the very least it might undermine the need to emphasize and distort scientific doubt.
Let me toss in a few thoughts on a debate that has been beaten to death, most famously during the science wars in the mid-1990s. Do scientific judgments depend solely on data? Or do external considerations enter the decision? There are clearly instances (the history of eugenics and fraud in drug trials come to mind) when biases affect scientists’ work. But we acknowledge those cases as deviations from an ideal. The question is whether science must involve values.
I’ve found that scientists usually get quite emotional and heated during this discussion. The mere suggestion that our work is not absolutely, perfectly objective really gets our blood boiling. The response is partially understandable. After all, much of our credibility depends on the fact that we produce, more or less, objective information. The public counts on the fact that my beliefs about conservation don’t affect my research on climate change. So I do understand some of our indignation.
Nevertheless, I’ve found that we’re often blinded to some basic facts. Consider first the immensity of science. From 1992 to 2002, almost 3 million papers were published in the U.S. alone. Federal R&D in 2009 stood at almost $145 billion. Corporations added another $290 billion. As scientists routinely brag, the results of modern science have permeated every facet of our existence. They are impossible to avoid.
So when we ask whether “science” is value-free, we’re really asking whether those millions upon millions upon millions of research problems are value-free. Given both how vast and how diverse science is, it’s inevitable that at least some of science is not value-free. Surely some questions are so caught up in the fabric of society that it’s impossible to completely separate science from values. There’s nothing scary about it. Asking an abstract question about “science” goes down the wrong path. An enterprise as vast as science cannot be easily generalized.
None of this negates the idea that much of science is value-free. We can safely say that quantum mechanics and molecular biology fall in this category. But a lot of regulatory science (e.g. determining the effect of a pesticide) has to be performed under time constraints and with limited data. In these cases it’s generally accepted that subjective value judgments play a big role. Heck, the National Academy of Sciences even wrote a report about it in 1985. Before that, physicist Alvin Weinberg’s “Science and Trans-Science” made the same argument. Sheila Jasanoff’s The Fifth Branch also addresses this topic. There’s actually a fairly large body of empirical evidence proving quite conclusively that regulatory science involves value judgments.
A couple things come to mind as I look back on these debates. First, most scientists (myself included) ironically ignored evidence. All those studies and data on the subject were irrelevant to us. Whenever the subject came up, one of us could have suggested we look at the relevant research. But no one did. Which brings me to my second point.
There’s something about scientists’ training that makes us believe we’re all qualified to speak for “science.” I would never consider speaking authoritatively on condensed matter physics even though I’ve taken a few classes in the subject. Yet in the past I have waxed eloquent about “science.” And the funny thing is that people (especially non-scientists) take me seriously. But if I’m not qualified to speak about all areas of physics, how on earth am I qualified to speak for science?
Beryl Benderly’s recent piece about the alleged overproduction of PhDs is getting some attention. She’s blogged about this before, and Nature also ran something a couple years ago. The back-and-forth with my PhD friends highlights a few salient points. First, PhDs themselves cannot agree if there are too many of us. Those of us who left academia tend to side with Benderly: the U.S. does produce too many PhDs. Those still in academia usually disagree, and sometimes very strongly. A faculty member I know insists that the U.S. can never produce enough PhDs; it’s just a question of finding productive work for all of them outside academia.
It appears that people can disagree on facts–whether the U.S. produces too many scientists–while agreeing on a broader policy of improving graduate education. So clearly agreeing on science is not a prerequisite for agreeing on policy. Science is clearly not the foundation of policy. But I digress. And besides, I’ve already beaten that meme to death.
Going back to the PhD question, I’m struck by how old this debate is. All the way back in 1995, the National Academies wrote a report on the necessity of expanding career paths for graduate students. Nothing changed in graduate education then, and I suspect nothing will happen now.
Sorry for the light posting recently. I’ve been at the AAAS Annual Meeting. It was easily the best conference I’ve ever been to. It was great not to worry about presenting, or about what collaborators/competitors were up to. I only attended the talks that I found interesting. There were actually too many talks I wanted to see, and I often had to pick and choose. Again, this situation was quite different in grad school!
One of the more interesting science policy talks I attended was on “Value and Limits of Scientific Research” (link). Kei Koizumi and David Goldston always have great things to say. It was refreshing to see people wrestle with the tough policy questions around research funding rather than insist that all problems will be solved by more money for science.
A lot of the discussion focused on the effect of the stimulus bill. Because the stimulus bill was designed to have a near-term effect, much attention and energy focus on the impact of stimulus dollars spent. And so, perhaps for the first time, we’re really trying to undertake a very detailed evaluation of how R&D dollars move through the economy.
We also discussed the potential opportunities and pitfalls of framing science as a job-creating engine. Historically, science was justified as the source of long-term economic growth. Things could get a lot more complicated if we get into the short-term jobs market. There will need to be real evidence that we meet that mandate rather than nebulous statements about basic research leading to technology. And that’s when we can possibly get into trouble.
My recent post on science and race should have had some caveats on the utility of cost-benefit analysis (CBA). I’ll be the first to admit that it can be abused and has its limits, especially with respect to environmental policy. Although I haven’t actually read the book, Frank Ackerman and Lisa Heinzerling make that point in Priceless.
Nevertheless, I think that on some level we have to use CBA. In the end it is a useful tool, and I mostly agree with Sunstein’s critique of Ackerman and Heinzerling. Yes I know it’s unfair to read the criticism and not the original work. Sue me.
So whatever caveats we attach to CBA, my larger thesis is unchanged: CBA plus values*, not science, should be used to analyze social policy like government-funded pre-school. And it goes without saying that when I say “my” thesis, I really mean Nobel-prize-winning James Heckman’s thesis. But I’ve already admitted that most of my work isn’t original.
*I think the “plus values” part is important because you can make all sorts of principled, theoretical arguments for or against these types of policies. Go read some Nozick or Mansfield for the anti-view, and Rawls or Galston for the pro-view.
fyi, I list all these philosophers to impress my non-existent readers with my erudition. I also like to believe my (imaginary) readers don’t even know what that word means and are looking it up right now. For what it’s worth, I’ve actually read two of Galston’s books. My fictitious readers are now even more impressed, and even more so by the creative ways I complain about my lack of readership. 13 page views over 10 days isn’t bad…right?
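To make the CBA framing concrete, here is a minimal sketch of the arithmetic behind a cost-benefit analysis of a hypothetical pre-school program. Every number here (the $10,000 up-front cost, the $1,500 annual benefit, the discount rates) is invented purely for illustration and is not drawn from Heckman’s actual estimates:

```python
# Toy cost-benefit analysis of a hypothetical pre-school program.
# All figures are made up for illustration only.

def npv(cashflows, rate):
    """Net present value of (year, amount) cashflows at a given discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

# Hypothetical program: $10,000 per child up front, then diffuse benefits
# (higher earnings, lower crime/remediation costs) of $1,500/year for 30 years.
costs = [(0, -10_000)]
benefits = [(year, 1_500) for year in range(1, 31)]

# The discount rate is where values sneak in: it encodes how much we weigh
# benefits that accrue decades from now.
for rate in (0.03, 0.07):
    value = npv(costs + benefits, rate)
    print(f"discount rate {rate:.0%}: NPV = ${value:,.0f}")
```

Note how even this toy version can’t stay value-free: the choice of discount rate, which determines how heavily we count benefits to future people, is itself a normative judgment. That’s one reason the “plus values” qualifier matters.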
In the dozens of science policy talks I’ve attended over the years (yes I know my life is interesting), I’ve noticed two very mixed messages that continually appear. Somewhere in the introductory slides, the speaker inevitably mentions Vannevar Bush’s seminal report “Science: The Endless Frontier.” This sentence often comes up:
“Science can be effective in the national welfare only as a member of a team, whether the conditions be peace or war.”
If not this direct quote, I hear something like “science is only one input to decision-making.” Or perhaps “science by itself does not dictate a specific policy.”
I’ll also hear quite often that “science is the foundation of decision-making.” Sometimes linchpin or basis replaces foundation. Both “good policy requires good science” and “science underlies policy” are also standard. The funny thing is that these two positions might appear in the same talk given by the same speaker. They might even appear on two consecutive slides.
I’ve never heard anyone point out that these two positions are not entirely compatible. What, exactly, does it mean for science to be “the foundation” of policy? Does it mean science has to come first? Or that it’s the most important? It’s not clear to me. The upshot of all this is more than mere semantics, as important as that may be. Viewing science as teamwork can lead to different actions than viewing it as foundational.
I can’t shake the feeling that we scientists simply pay lip service to the former idea, but we really internalize the latter. Thinking this way has, I believe, very real consequences for how we interact and communicate. I’ll try to expand more in the coming weeks.