During my blogging hiatus, Andrew Sullivan waded into the race and IQ debate yet again. While throwing his usual tantrum on the issue (ably refuted here, here, and here), Sullivan stunningly claims that “research is not about helping people; it’s about finding out stuff.”
Hey, I studied numerical relativity and space plasma physics. I get why some research is not about helping people. But, to continue with a hobbyhorse of mine, broad statements on a $1 trillion enterprise don’t mean much. Some research is for helping people and some for discovery. Sullivan does not have the authority to speak for all research, and he shouldn’t pretend he does.
The current issue of the Atlantic has a fantastic profile of Greek physician John Ioannidis, author of the most downloaded paper in the history of PLoS Medicine. (h/t The Bubble Chamber). Ioannidis has apparently determined that many of the most heralded findings of modern medicine are based on sloppy, careless research:
[Ioannidis’s] model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
The article delves deeper into some of the reasons for the outcome, with the drive for publication as one of the chief culprits. I recommend reading the entire piece.
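Ioannidis’s model boils down to a positive-predictive-value calculation: how likely a “statistically significant” finding is to be true given the pre-study odds, statistical power, and significance threshold. Here is a minimal sketch of that arithmetic; the function is my own illustration, and the parameter values are assumptions chosen to echo, not reproduce, the figures quoted above:

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value of a 'significant' finding.

    prior_odds: pre-study odds that a tested relationship is true (R)
    power:      probability of detecting a true relationship (1 - beta)
    alpha:      significance threshold (type I error rate)
    """
    true_positives = power * prior_odds
    false_positives = alpha  # per unit odds of false relationships tested
    return true_positives / (true_positives + false_positives)

# Underpowered exploratory study chasing long-shot hypotheses:
# most "findings" are wrong.
print(1 - ppv(prior_odds=0.05, power=0.2, alpha=0.05))  # ≈ 0.83

# Well-powered trial of a plausible hypothesis: far fewer are wrong.
print(1 - ppv(prior_odds=1.0, power=0.9, alpha=0.05))   # ≈ 0.05
```

Long-shot hypotheses tested with weak power yield mostly false “findings,” while well-powered tests of plausible hypotheses do far better; that is the intuition behind the 80 percent versus 10 percent spread.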
Longtime readers will not be surprised to hear that I found this passage particularly appealing (emphasis added):
In fact, the question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community. Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding. Ioannidis dismisses these concerns. “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.
Modest and restrained arguments more accurately represent the promise of science while avoiding the intellectual costs of bluster and exaggeration.
ClimateScienceWatch has an interview with Steve Schneider, one of the authors on the PNAS paper we just discussed (h/t Joe Romm). I recommend the entire interview (video at the end), but I’ll highlight these passages:
It really matters what your credentials are. If you have a heart arrhythmia as I do, and I also have a cardiologist, and you also have an oncological problem as I do, I’m not going to my cancer doc to ask him about my heart medicine and my cardiologist to ask about my chemo, I’m going to the experts. Who’s an expert really matters. People with no expertise, their opinion frankly does not matter on complex issues. And in my opinion shouldn’t even be quoted when we’re talking about the details of the science.
Scientists are really stuck. It’s exactly the same thing in medicine, it’s the same thing with pilot’s licenses and driver’s licenses: We don’t let just anyone go out there and make any claim that they’re an expert, do anything they want, without checking their credibility. Is it elitist to license pilots and doctors? Is it elitist to have pilots tested every year by the FAA to make sure that their skills are maintained? Is it elitist to have board certification on specialities in various health professions? I don’t think so.
In light of many of my previous posts, it should be obvious that I think Steve has a point. Cardiologists should be trusted over oncologists for an arrhythmia, and I’m quite happy that pilots are licensed.
But Steve’s analysis elides a key difference between scientists and licensed professionals. Namely, scientists aren’t licensed! Heck, much of our authority comes from our self-proclaimed ability to tackle any problem whether or not we’re formally trained, a theme we’ve just discussed. The idea that scientists actually have a fairly limited range of expertise counters what we’ve been saying for several hundred years now. At this point, I think that scientists themselves have internalized the message. I can’t count the number of times I’ve heard a random physicist (myself included!) wax eloquent about “the” scientific method or make a tendentious claim about all of science. Some would even call this attitude arrogant. As I said about Eugene Robinson’s op-ed, I’m happy people are rebelling against a mindless acceptance of scientific expertise. I just don’t know how successful it will be when practically every science organization out there promotes the opposite.
I’ll make one final, brief comment (complaint?) about the interview. Towards the end, Steve responds that it is “very difficult to disentangle” the policy prescription from the science expertise. While I think he may be factually correct, that attitude has also played a non-trivial role in hyper-politicizing the field. We need greater efforts to highlight that science is not “the” basis of policy, and that there are non-climate reasons to pursue mitigation and adaptation. But again, such a message would contradict what we’ve been arguing for years, and I bet there’s no interest.
Over the past year or so the paleoconservative blogger Daniel Larison has taken aim at American exceptionalism, a term he finds sloppy and poorly defined (see here, here, here, and here). The third post in particular makes an insightful point:
Confidence in America and respect for our actual, genuinely considerable accomplishments as a people are natural and worthy attitudes to have. Understanding the full scope of our history, neither airbrushing out the crimes nor dishonoring and forgetting our heroes, is the proper tribute we owe to our country and our ancestors. Exaggeration and bluster betray a lack of confidence in America, and strangely this lack of confidence seems concentrated among those most certain that mostly imaginary “declinists” are ruining everything.
While Larison leveled his critique at the American right, scientists are guilty of similar behavior. As we just discussed, exaggeration and bluster are typical behavior. And like some on the American right, scientists perpetually harp on an imaginary decline despite evidence to the contrary. Ironically, mostly liberal scientists mirror the extreme right in their rhetoric.
I wonder if more of Larison’s analysis can be applied to scientists. Any STS scholars out there with some papers on this? I bet that a lack of confidence and an outsider mentality (along with the fact we’re just another special interest group!) contribute to our routine exaggerations. Methinks the topic cries out for more research.
This Eugene Robinson column garnered some attention on my Facebook wall. Here’s the offending passage:
We can all applaud Chu’s accomplishment. But here’s the thing: Chu is a physicist, not an engineer or a biologist. His Nobel was awarded for the work he did in trapping individual atoms with lasers. He’s absurdly smart. But there’s nothing in his background to suggest he knows any more about capping an out-of-control deep-sea well, or containing a gargantuan oil spill, than, say, columnist Paul Krugman, who won the Nobel in economics. Or novelist Toni Morrison, who won the Nobel in literature.
In fact, Chu surely knows less about blowout preventers than the average oil-rig worker and less about delicate coastal marshes than the average shrimp-boat captain.
Strong words indeed. A couple of my friends naturally pointed out that Chu must have exceptional analytical and problem-solving skills that he can apply to the situation. This argument is all too typical and at this point is almost a truism. Of course scientists have spectacular analytical and problem-solving skills. And of course those skills carry over from their very narrow fields to other problems. Surely this much is true, right?
One of the many problems with these assertions is the almost complete lack of supporting evidence. Has anyone actually studied how well scientists think and problem-solve outside of their field? Is your average space physicist more adept at analyzing economics, politics and policy merely on account of being a physicist? How do we separate the scientific component of Chu’s analytical skills from the fact that he’s really smart and driven? As far as I know there’s no data either way.
What I do know is that a search for “domain specific” on the PsycInfo database yields a few thousand results. And I also know that at least some research privileges content knowledge over analytical skills. That finding especially undermines the idea of an amorphous scientific thinking that magically transfers to every problem.
None of this means that scientific thinking does not exist. It very well might. But before drawing any firm conclusions, we should first gather and analyze the available data. Doing otherwise would be pretty unscientific.
Julian Sanchez recently discussed why classifying homosexuality as a disorder hinges on both science and values:
I’m glad, of course, that we’ve dispensed with a lot of bogus science that served to rationalize homophobia—that’s a pure scientific victory. And I’m glad that we no longer classify homosexuality as a disorder—but that’s a choice and, above all, a moral victory. It ultimately stems from the more general recognition that we shouldn’t stigmatize dispositions and behaviors that are neither intrinsically distressing to the subject nor harmful, in the Millian sense, to the rest of us…The change in the psychiatric establishment’s bible, the DSM, was partly a function of new scientific information, but it was equally a moral and a political choice. [Emphasis added--PK]
Sanchez’s great example highlights what I’ve argued previously: some scientific judgments involve values while some do not. We can safely say that measuring the acceleration due to gravity is a purely scientific judgment. But we can also safely say that classifying homosexuality is not. It remains a mystery to me why some resist this idea.
Consider William’s comment on Sanchez’s post:
Well how about the mental condition called depression? Are you saying that it is a moral rather than scientific question whether depression is an illness/disorder? I’m talking about can’t get out of bed, too weak to commit suicide depression here, not a bout of the blues. How about Post Traumatic Stress Disorder? You’re saying that diagnosis is a moral rather than medical (scientific) question?
Well, no, William. Neither I nor (I suspect) Sanchez is saying any such thing. We simply accept that mental health contains both value-free and normative science. A belief in objectivity with respect to PTSD does not conflict with a belief in subjectivity with respect to homosexuality. There is no universal standard or set of rules that we can blindly apply in all cases. Believing otherwise is analogous to playing the game without watching game film.
A couple of things come to mind. First, as I’ve argued before, simply using the single word science undermines rational discourse on topics like these. Ultimately, Sanchez is trying to argue that stigmatizing homosexuality involves a different kind of science than what we’re used to. And this kind of science necessarily involves moral judgments. But since all we have is “science” and its associated baggage of supreme and perpetual objectivity, this subtlety gets lost.
Second: why did Sanchez have to explain what should be common knowledge? We figured out no later than 1972 when Alvin Weinberg wrote Science and Trans-Science that some areas of science cannot be separated from values. We figured it out again in 1985 when The National Academies wrote a report on risk assessment, yet again when Funtowicz and Ravetz introduced post-normal science in 1991, and once more in Sheila Jasanoff’s book-length treatment on regulatory science. Scholars from fields as diverse as nuclear physics, philosophy, history, and sociology have all independently determined that science is not a monolith and that, yes, sometimes values play a role. In the end, Sanchez’s thesis is impressively mundane and uncontroversial. In an ideal world it wouldn’t merit a shout-out from arguably the most influential political blogger alive.
None of this undermines Sanchez’s eloquence and brilliance. I am always impressed by his writing, and he does a particularly good job here explaining a complicated topic. But if we had dispensed with the false notion of one science that follows “the” scientific method, maybe he wouldn’t have had to.
Philosopher Philip Kitcher just reviewed several books on climate change in Science magazine. I meant to get to this earlier, but Ben Hale got there first and stole some of my thunder. He even has a snappier title than mine. Alas! I won’t repeat what Hale said, and I recommend you go over there and read his post. Needless to say, you should also read the (pretty long) Kitcher piece. I’ll have more to say soon, but for now I’ll highlight this:
Captured by a naive and oversimplified image of what “objective science” is like, it is easy for citizens to reject claims of scientific authority when they discover that scientific work is carried out by human beings.
While expanding would have diverted from the main analysis, I wish Kitcher had dwelt on this a bit more. Why exactly is the public captured by naive and oversimplified images? Surely the scientific community has played no small role. We’re nothing if not advocates for an overly simplistic view of science. Though I’ve sharply criticized a monolithic view of both science and scientists, this is one instance where it’s warranted. Pretty much all scientists are perfectly happy uttering crudely simple phrases like “replication is the ultimate test of truth in science” when speaking to the public. Encouraging naivety and oversimplification is par for the course in these situations.
There is something mildly (deeply?) hypocritical about such messaging. We never stop hyperventilating about the importance of science and scientific reasoning: Be rational! Look at evidence! Use the scientific method!
And yet, properly applying these principles conflicts with the account of science promoted by scientists themselves! If people actually looked at evidence and used “the scientific method”, there’s no way they’d believe some of the bullshit we say. You can either be rational or you can accept scientists’ description of science. But you can’t really be both at the same time. We welcome rationality and evidence-based reasoning except, ironically enough, when talking about science. Here it seems we want nothing more than mindless, uncritical adulation.
Now there are much worse sins than hypocrisy. For the most part it doesn’t really kill anyone. But Kitcher suggests that global warming deniers succeed partly because the public adopts an oversimplified view of science. Given that scientists themselves promote such views, and also given some of the dire predictions of a warming world, hypocrisy might be a bit more costly in this case.
I realize now that my last post sloppily blends two distinct points. I noted first that insisting on “the” rightful place of science is analogous to a football coach following the same game plan for every opponent. Towards the end of the post, I continued my long-running complaint against science-as-foundation. I neglected to emphasize that any a priori role for science is a bad idea. Permanently removing science from its pedestal is no better than permanently keeping it there. Sometimes science needs to be on a pedestal, and sometimes it doesn’t. Sometimes the game will be won by handing us the ball and getting out of the way, and sometimes we need to sit on the bench. But I’ve said this before.
I’ve suggested here that there may be real-world consequences for adopting any fixed role for science in policy, whether that role is one of deification or demonization. In contrast, my last post emphasized principled reasons to oppose scientific exceptionalism. From the final paragraph:
Now all this can seem hopelessly academic and pointless. Surely nothing much will change if scientists adopt a different vocabulary. Carbon emissions will continue to rise, the oceans will continue to acidify, and rain forests will continue to be razed. New words alone will not solve these knotty problems. Nevertheless, there’s something to be said for honesty in public discourse, and something to be said against exaggerating one’s virtues and abilities. If nothing else, minimizing the science-as-foundation rhetoric may foster a more honest debate.
At some point I’ll have to detail some specific negatives of an inflexible view of science in decision-making. But this is enough for now.
For the second time in a row I’ll simply restate what I’ve said in an earlier post. While some will take this as an impressive lack of originality, I prefer to think that my blog is on the verge of a membership explosion and so I must introduce new readers to my earlier work. Hey, we all need our fantasies. Or perhaps the recent conference I attended connects to what I’ve been saying for a while and I feel the need to blog about it. At any rate, here goes.
As I’ve noted before, science in decision-making is highly contextual. To use Jamey Wetmore’s examples, science necessarily plays different roles in abortion and climate change. The upshot is that simply asking about “the” rightful place of science sends us down the wrong path. As any sports fan will tell you, each opponent demands a new game plan. You spend hours upon hours dissecting and documenting all strengths and weaknesses, accounting for injuries or suspensions, and mapping out hundreds of scenarios. Simply put, you have to really spend some time watching game film. It’d be lunacy to play a game otherwise.
We take the exact opposite approach with science. It’s predetermined as the foundation of policy, and we’re always searching for the rightful place. Even the supremely enlightened denizens of CSPO apparently believe that we should be engaged in this quest. I fail to see how this attitude differs from a football coach using the same game plan every time because he knows the rightful place for the running game. Why do we want the same for science?
Now all this can seem hopelessly academic and pointless. Surely nothing much will change if scientists adopt a different vocabulary. Carbon emissions will continue to rise, the oceans will continue to acidify, and rain forests will continue to be razed. New words alone will not solve these knotty problems. Nevertheless, there’s something to be said for honesty in public discourse, and something to be said against exaggerating one’s virtues and abilities. If nothing else, minimizing the science-as-foundation rhetoric may foster a more honest debate. That’s reason enough for me to make the switch.
Several months ago I suggested we need a new vocabulary to discuss science. From my post:
We need an intellectual framework which articulates there are instances when science is crucially important, instances when it is somewhat important, and instances when it is relatively unimportant. Some decisions heavily rely on science while others do not. Certain disputes are better resolved with politics rather than science and vice versa.
As I’ve said repeatedly, the very words we use impede a productive dialogue. Merely claiming science is the foundation and the basis of policy stacks the deck. Along those lines, I’m curious how “The Rightful Place of Science?” was chosen as the title of the conference I just attended. Given what I know of Dan Sarewitz’s views, I’m pretty sure he opposes the idea that science has a single, predetermined rightful place. So why use the definite article? Doing so only cements the idea that science does have a privileged and rightful place, something I’m sure Dan would like to avoid.
Trying to find the rightful place precludes the possibility of rightful places. As Jamey Wetmore noted, the rightful place of science depends intimately on the specific context. Indigenous farmers in Central America, contestants in the abortion debates, and climate-change policy makers all use science in different ways (Jamey’s examples). Contorting these different uses into a single “the rightful place” is pointless and distracting at best. It would be best to do away with such a monolithic caricature completely. But unfortunately our public vocabulary precludes this from happening.
Jason Delborne raised this point towards the end, when panelists were debating whether science belongs on a pedestal. Delborne wondered whether simply using the word pedestal undermines some of our goals. The 3 or 4 longtime readers of this blog know that I answer in the affirmative. As usual, I found Wetmore’s response to be particularly insightful. He said that it’s perfectly fine to have science on a pedestal as long as we don’t put scientists there as well. I really liked that point, and I’ll have to think about it some more.