
Archive for the ‘Public Voice of Science’ Category

Andrew Sullivan doesn’t speak for research

November 29, 2011

Daniel Kammen: Research for helping people

During my blogging hiatus, Andrew Sullivan waded into the race and IQ debate yet again. While throwing his usual tantrum on the issue (ably refuted here, here, and here), Sullivan stunningly claims that “research is not about helping people; it’s about finding out stuff.”

Hey, I studied numerical relativity and space plasma physics. I get why some research is not about helping people. But, to continue with a hobbyhorse of mine, broad statements about a $1 trillion enterprise don’t mean much. Some research is for helping people, and some is for discovery. Sullivan does not have the authority to speak for all research, and he shouldn’t pretend he does.

The case for modesty

October 24, 2010

The current issue of the Atlantic has a fantastic profile of Greek physician John Ioannidis, author of the most downloaded paper in the history of PLoS Medicine (h/t The Bubble Chamber). Ioannidis has apparently determined that many of the most heralded findings of modern medicine are based on sloppy, careless research:

[Ioannidis’s] model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.

The article delves deeper into some of the reasons for the outcome, with the drive for publication as one of the chief culprits.  I recommend reading the entire piece.
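For readers who want a feel for where numbers like these come from, here is a minimal sketch of the positive-predictive-value calculation at the heart of Ioannidis’s model (from his 2005 PLoS Medicine paper), leaving out his bias and multiple-teams terms. The scenario names and parameter values below are my own illustrative assumptions, not figures from the Atlantic piece:

```python
# Sketch of the positive-predictive-value (PPV) calculation underlying
# Ioannidis's model, without his bias term. Scenario parameters are
# illustrative assumptions, not numbers from the article.

def ppv(pre_study_odds, power, alpha):
    """Probability that a 'positive' finding is true, given:
    pre_study_odds -- R, the odds that a tested relationship is real
    power          -- 1 - beta, the chance of detecting a real effect
    alpha          -- type I error rate (conventionally 0.05)
    """
    true_positives = power * pre_study_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# Hypothetical scenarios (my assumptions) showing how quickly PPV falls
# as pre-study odds and statistical power drop:
scenarios = {
    "well-powered trial of a plausible effect": ppv(1.0, 0.8, 0.05),   # ~0.94
    "typical exploratory study":                ppv(0.1, 0.8, 0.05),   # ~0.62
    "underpowered study of a long-shot effect": ppv(0.01, 0.2, 0.05),  # ~0.04
}

for name, value in scenarios.items():
    print(f"{name}: PPV is roughly {value:.2f}")
```

Even without any bias in the model, a positive result from an underpowered study of an unlikely hypothesis is far more likely to be wrong than right.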

Longtime readers will not be surprised to hear that I found this passage particularly appealing (emphasis added):

In fact, the question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community. Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding. Ioannidis dismisses these concerns. “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”

We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.

“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

Modest and restrained arguments more accurately represent the promise of science while avoiding the intellectual costs of bluster and exaggeration.

Scientists, pilots and doctors

July 16, 2010

ClimateScienceWatch has an interview with Steve Schneider, one of the authors of the PNAS paper we just discussed (h/t Joe Romm). I recommend the entire interview (video at the end), but I’ll highlight these passages:

It really matters what your credentials are. If you have a heart arrhythmia as I do, and I also have a cardiologist, and you also have an oncological problem as I do, I’m not going to my cancer doc to ask him about my heart medicine and my cardiologist to ask about my chemo, I’m going to the experts. Who’s an expert really matters. People with no expertise, their opinion frankly does not matter on complex issues. And in my opinion shouldn’t even be quoted when we’re talking about the details of the science.

….

Scientists are really stuck. It’s exactly the same thing in medicine, it’s the same thing with pilot’s licenses and driver’s licenses: We don’t let just anyone go out there and make any claim that they’re an expert, do anything they want, without checking their credibility. Is it elitist to license pilots and doctors? Is it elitist to have pilots tested every year by the FAA to make sure that their skills are maintained? Is it elitist to have board certification on specialities in various health professions? I don’t think so.

In light of many of my previous posts, it should be obvious that I think Steve has a point.  Cardiologists should be trusted over oncologists for an arrhythmia, and I’m quite happy that pilots are licensed.

But Steve’s analysis elides a key difference between scientists and licensed professionals. Namely, scientists aren’t licensed! Heck, much of our authority comes from our self-proclaimed ability to tackle any problem whether or not we’re formally trained, a theme we’ve just discussed. The idea that scientists actually have a fairly limited range of expertise counters what we’ve been saying for several hundred years now. At this point, I think that scientists themselves have internalized the message. I can’t count the number of times I’ve heard a random physicist (myself included!) wax eloquent about “the” scientific method or make a tendentious claim about all of science. Some would even call this attitude arrogant. As I said about Eugene Robinson’s op-ed, I’m happy people are rebelling against a mindless acceptance of scientific expertise. I just don’t know how successful that rebellion will be when practically every science organization out there promotes the opposite.

I’ll make one final, brief comment (complaint?) about the interview. Towards the end, Steve responds that it is “very difficult to disentangle” the policy prescription from the science expertise. While I think he may be factually correct, that attitude has also played a non-trivial role in making the field hyper-politicized. We need greater efforts to highlight that science is not “the” basis of policy and that there are non-climate reasons to pursue mitigation and adaptation. But again, such a message would contradict what we’ve been arguing for years, and I bet there’s no interest.

The myth of scientific exceptionalism

July 6, 2010

Over the past year or so the paleoconservative blogger Daniel Larison has taken aim at American exceptionalism, a term he finds sloppy and poorly defined (see here, here, here, and here).  The third post in particular makes an insightful point:

Confidence in America and respect for our actual, genuinely considerable accomplishments as a people are natural and worthy attitudes to have. Understanding the full scope of our history, neither airbrushing out the crimes nor dishonoring and forgetting our heroes, is the proper tribute we owe to our country and our ancestors. Exaggeration and bluster betray a lack of confidence in America, and strangely this lack of confidence seems concentrated among those most certain that mostly imaginary “declinists” are ruining everything.

While Larison leveled his critique at the American right, scientists are guilty of similar behavior. As we just discussed, exaggeration and bluster are typical. And like some on the American right, scientists perpetually harp on an imaginary decline despite evidence to the contrary. Ironically, mostly liberal scientists mirror the extreme right in their rhetoric.

I wonder if more of Larison’s analysis can be applied to scientists. Any STS scholars out there with some papers on this? I bet that a lack of confidence and an outsider mentality (along with the fact that we’re just another special interest group!) contribute to our routine exaggerations. Methinks the topic cries out for more research.

No evidence for scientific thinking

June 20, 2010

This Eugene Robinson column garnered some attention on my Facebook wall.  Here’s the offending passage:

We can all applaud Chu’s accomplishment. But here’s the thing: Chu is a physicist, not an engineer or a biologist. His Nobel was awarded for the work he did in trapping individual atoms with lasers. He’s absurdly smart. But there’s nothing in his background to suggest he knows any more about capping an out-of-control deep-sea well, or containing a gargantuan oil spill, than, say, columnist Paul Krugman, who won the Nobel in economics. Or novelist Toni Morrison, who won the Nobel in literature.
In fact, Chu surely knows less about blowout preventers than the average oil-rig worker and less about delicate coastal marshes than the average shrimp-boat captain.

Strong words indeed. A couple of my friends naturally pointed out that Chu must have exceptional analytical and problem-solving skills that he can apply to the situation. This argument is all too typical and at this point is almost a truism. Of course scientists have spectacular analytical and problem-solving skills. And of course those skills carry over from their very narrow fields to other problems. Surely this much is true, right?

One of the many problems with these assertions is the almost complete lack of supporting evidence.  Has anyone actually studied how well scientists think and problem-solve outside of their field? Is your average space physicist more adept at analyzing economics, politics and policy merely on account of being a physicist?  How do we separate the scientific component of Chu’s analytical skills from the fact that he’s really smart and driven?  As far as I know there’s no data either way.

What I do know is that a search for “domain specific” on the PsycInfo database yields a few thousand results.  And I also know that at least some research privileges content knowledge over analytical skills.  The latter thesis especially undermines the idea of an amorphous scientific thinking that magically transfers to every problem.

None of this means that scientific thinking does not exist.  It very well might.  But before drawing any firm conclusions,  we should first gather and analyze the available data.  Doing otherwise would be pretty unscientific.

Science and trans-science

June 9, 2010

Julian Sanchez recently discussed why classifying homosexuality as a disorder hinges on both science and values:

I’m glad, of course, that we’ve dispensed with a lot of bogus science that served to rationalize homophobia—that’s a pure scientific victory.  And I’m glad that we no longer classify homosexuality as a disorder—but that’s a choice and, above all, a moral victory. It ultimately stems from the more general recognition that we shouldn’t stigmatize dispositions and behaviors that are neither intrinsically distressing to the subject nor harmful, in the Millian sense, to the rest of us…The change in the psychiatric establishment’s bible, the DSM, was partly a function of new scientific information, but it was equally a moral and a political choice. [Emphasis added–PK]

Sanchez’s great example highlights what I’ve argued previously: some scientific judgments involve values while some do not. We can safely say that measuring the acceleration due to gravity is a purely scientific judgment. But we can also safely say that classifying homosexuality is not. It remains a mystery to me why some resist this idea.

Consider William’s comment on Sanchez’s post:

Well how about the mental condition called depression? Are you saying that it is a moral rather than scientific question whether depression is an illness/disorder? I’m talking about can’t get out of bed, too weak to commit suicide depression here, not a bout of the blues. How about Post Traumatic Stress Disorder? You’re saying that diagnosis is a moral rather than medical (scientific) question?

Well, no, William. Neither I nor (I suspect) Sanchez is saying any such thing. We simply accept that mental health contains both value-free and normative science. A belief in objectivity with respect to PTSD does not conflict with a belief in subjectivity with respect to homosexuality. There is no universal standard or set of rules that we can blindly apply in all cases. Believing otherwise is analogous to playing the game without watching game film.

A couple of things come to mind. First, as I’ve argued before, simply using the single word “science” undermines rational discourse on topics like these. Ultimately, Sanchez is trying to argue that stigmatizing homosexuality involves a different kind of science than what we’re used to. And this kind of science necessarily involves moral judgments. But since all we have is “science” and its associated baggage of supreme and perpetual objectivity, this subtlety gets lost.

Second: why did Sanchez have to explain what should be common knowledge?  We figured out no later than 1972 when Alvin Weinberg wrote Science and Trans-Science that some areas of science cannot be separated from values.  We figured it out again in 1985 when The National Academies wrote a report on risk assessment, yet again when Funtowicz and Ravetz introduced post-normal science in 1991, and once more in Sheila Jasanoff’s book-length treatment on regulatory science.  Scholars from fields as diverse as nuclear physics, philosophy, history, and sociology have all independently determined that science is not a monolith and that, yes, sometimes values play a role. In the end, Sanchez’s thesis is impressively mundane and uncontroversial.  In an ideal world it wouldn’t merit a shout-out from arguably the most influential political blogger alive.

None of this undermines Sanchez’s eloquence and brilliance.  I am always impressed by his writing, and he does a particularly good job here explaining a complicated topic.  But if we had dispensed with the false notion of one science that follows “the” scientific method, maybe he wouldn’t have had to.

Kitcher on global warming

June 3, 2010

Philosopher Philip Kitcher just reviewed several books on climate change in Science magazine. I meant to get to this earlier, but Ben Hale got there first and stole some of my thunder. He even has a snappier title than mine. Alas! I won’t repeat what Hale said, and I recommend you go over there and read his post. Needless to say, you should also read the (pretty long) Kitcher piece. I’ll have more to say soon, but for now I’ll highlight this:

Captured by a naive and oversimplified image of what “objective science” is like, it is easy for citizens to reject claims of scientific authority when they discover that scientific work is carried out by human beings.

While expanding on this point would have diverted from his main analysis, I wish Kitcher had dwelt on it a bit more. Why exactly is the public captured by naive and oversimplified images? Surely the scientific community has played no small role. We’re nothing if not advocates for an overly simplistic view of science. Though I’ve sharply criticized a monolithic view of both science and scientists, this is one instance where it’s warranted. Pretty much all scientists are perfectly happy uttering crudely simple phrases like “replication is the ultimate test of truth in science” when speaking to the public. Encouraging naivety and oversimplification is par for the course in these situations.

There is something mildly (deeply?) hypocritical about such messaging. We never stop hyperventilating about the importance of science and scientific reasoning: Be rational! Look at evidence! Use the scientific method!

And yet, properly applying these principles conflicts with the account of science promoted by scientists themselves! If people actually looked at evidence and used “the scientific method,” there’s no way they’d believe some of the bullshit we say. You can either be rational or you can accept scientists’ description of science. But you can’t really do both at the same time. We welcome rationality and evidence-based reasoning except, ironically enough, when talking about science. Here it seems we want nothing more than mindless, uncritical adulation.

Now there are much worse sins than hypocrisy.  For the most part it doesn’t really kill anyone.  But Kitcher suggests that global warming deniers succeed partly because the public adopts an oversimplified view of science.  Given that scientists themselves promote such views, and also given some of the dire predictions of a warming world, hypocrisy might be a bit more costly in this case.