Archive for June, 2010

Climate change dust-up

June 28, 2010

A recent paper in The Proceedings of the National Academy of Sciences has caused quite a stir.  The authors use citation counts to try to prove that “the relative climate expertise and scientific prominence of the researchers unconvinced of [anthropogenic climate change] are substantially below that of the convinced researchers.”  I haven’t closely checked the methodology so I can’t comment in great detail.  Roger Pielke Jr. unsurprisingly offers some sharp criticism.  See Michael Levi in Slate and David Bruggeman for more along those lines.  Check out this post at RealClimate and Michael Tobis for more supportive views.

I always find Jonathan Gilligan to be very insightful, so I’ll highlight his tempered responses to Roger’s first post, which is a bit overwrought in my view.  Until I read the paper, I’ll tentatively agree with Gilligan’s assessment that “the PNAS paper seems to me pointless and banal, but innocuous.”  I know Steve Schneider, a co-author on the PNAS paper, pretty well.  I’d be shocked if he actually were trying to intimidate researchers or create a blacklist.  I suspect he’s simply trying to highlight that not every scientist’s opinion counts on climate change, something I’ve been arguing for a while now.  While this paper may not be the best way to make that point, it does need to be made.

Scientific thinking yet again

June 24, 2010

My friend I-Chant had a sharp response to my last post:

Geez, Praj, you have never been trained as a psychologist, yet you know about going to PsycInfo, using some relevant search terms to pull up research that you are looking for, and marginally interpret it. Well, you shouldn’t know how to do that, should you? But yet, you did it, and I bet you didn’t think of the irony. ;) [It would have been easier to just call me a hypocrite:)–PK]

There is actually a lot of cognitive psychology that backs up my (and others’) point earlier. The first question here is whether problem solving skills transfer at all. We in the field call it positive transfer. Transfer to a new context is called far transfer, and there is evidence for that as well. The answer from many years of research (Chen and Klahr, 2008, and heck, that is for kids!; Barnett and Ceci, 2002) is yes, these things definitely happen. I will send you the PDFs.

The second question here is whether Chu, or what I think we may agree on is an expert in his field, can do it better than a novice (I think a novice could be a non-scientist, i.e., politicians, Paul Krugman, and Toni Morrison). Again, cognitive psychology research says yes, experts have the advantage. They approach problems completely differently, do not have the learning curve that novices would in building up that skill set (and in this oil crisis, I think we probably agree that not wasting time learning is important), and they also have more space for creativity (a couple of landmark studies are Larkin, McDermott, Simon, Simon, 1980 and Sweller, 1988, lots of more recent ones). There are surprisingly a lot of studies that look specifically at physics problems and physicists as well so I don’t know how you can claim that there is no evidence or data for this! We’ve already agreed that there are probably people who are more expert in this problem than Chu, but Chu can quickly acquire the knowledge that he needs and his job as Secretary of Energy has given him some practice at that. There are probably lots of other qualities that Chu has that may make him more desirable in this situation, including a broader, more creative perspective by not being entrenched in this problem, lots of connections to get him the right expertise, and political backing. Again, I’m not arguing he is the best person in the world for this job, but he is a pretty good one and certainly better than Robinson gives him credit for.

I’ll point out that in my first post I explicitly said “As far as I know, there’s no data either way.”  Now I do know and duly stand corrected.  I am interested, however, in what specifically we mean by “problem-solving skills.”  I’m sure that some things do transfer.  But from my personal experience, theorists often make clumsy experimentalists and vice versa.  So even within a single field there isn’t always strong transfer, and I suspect that when it does occur the problems have a large degree of overlap.

Nevertheless I will grant that Chu is better suited for this role than I gave him credit for, though I’m still not sure if his scientific thinking or management experience helps more.  Perhaps I’ll be convinced after reading the articles or having I-Chant lecture me some more!

But there are a couple of deeper issues here.  First, as Ryan pointed out, the question isn’t expert versus non-expert.  It is the specific type of expertise needed, and whether we should assume someone with a physics Nobel Prize is the right type.  Robinson’s refusal to blindly accept the latter proposition is what I most appreciated about his column.  We need more skepticism along these lines, and more pushback against the conventional wisdom that scientists’ analytical powers qualify them to discuss everything.  So while Robinson’s analysis may have gone overboard here, his desire to challenge the mainstream view is commendable.

As I’ve said before, climate skeptics succeed partially because more people do not adopt such a critical stance (see disunity and climate change).  Scientists are viewed as a single authoritative, undifferentiated mass, and there’s no recognition of the immense diversity that exists.  This attitude allows Freeman Dyson to attack global warming on the cover of the New York Times Magazine even though he has no credibility as a climate scientist.  Of course if you believe that all it takes is arbitrary expertise and exceptional analytical skills, there’s no problem here.  We can just assume that near, far and medium transfer makes Dyson qualified to discuss global warming.

But quantum field theory is not climate change and Freeman Dyson is no Stephen Schneider.  All scientists are not equal on every issue.  Even if cognitive psychologists can prove the existence of scientific thinking, I suspect the extent of transfer depends critically on the particular situation.  So in the end we must decide whether the default is trust or skepticism.  While there is a happy medium, the mere existence of people like Dyson and Frederick Seitz is proof enough that we’ve swung too far in one direction.  You can even read entire books about the damage caused by scientists speaking outside their domain.  If more people thought like Eugene Robinson this might be less of a problem.

Categories: Literacy

No evidence for scientific thinking

June 20, 2010

This Eugene Robinson column garnered some attention on my Facebook wall.  Here’s the offending passage:

We can all applaud Chu’s accomplishment. But here’s the thing: Chu is a physicist, not an engineer or a biologist. His Nobel was awarded for the work he did in trapping individual atoms with lasers. He’s absurdly smart. But there’s nothing in his background to suggest he knows any more about capping an out-of-control deep-sea well, or containing a gargantuan oil spill, than, say, columnist Paul Krugman, who won the Nobel in economics. Or novelist Toni Morrison, who won the Nobel in literature.
In fact, Chu surely knows less about blowout preventers than the average oil-rig worker and less about delicate coastal marshes than the average shrimp-boat captain.

Strong words indeed.  A couple of my friends naturally pointed out that Chu must have exceptional analytical and problem-solving skills that he can apply to the situation.  This argument is all too typical and at this point is almost a truism.  Of course scientists have spectacular analytical and problem-solving skills.  And of course those skills carry over from their very narrow field to other problems.  Surely this much is true, right?

One of the many problems with these assertions is the almost complete lack of supporting evidence.  Has anyone actually studied how well scientists think and problem-solve outside of their field? Is your average space physicist more adept at analyzing economics, politics and policy merely on account of being a physicist?  How do we separate the scientific component of Chu’s analytical skills from the fact that he’s really smart and driven?  As far as I know there’s no data either way.

What I do know is that a search for “domain specific” on the PsycInfo database yields a few thousand results.  And I also know that at least some research privileges content knowledge over analytical skills.  The latter thesis especially undermines the idea of an amorphous scientific thinking that magically transfers to every problem.

None of this means that scientific thinking does not exist.  It very well might.  But before drawing any firm conclusions,  we should first gather and analyze the available data.  Doing otherwise would be pretty unscientific.

Exaggerating the benefits of science

June 15, 2010

Over at Adapt Already Ryan Meyer highlights a recent Times article about the disappointing output from the Human Genome Project.  In typical fashion, scientists promised more than they could deliver.  Instead of a medical revolution we got a few more Nature papers.  Ryan asks at the end of his post:

The science policy question is this: are we, the public, ok with this pattern? In a democracy where the squeaky wheel seems to get the grease, does science have to make its living on empty promises? It’s a tough one to argue on either side.

As I started to say in the comments, I think the answers are pretty clear.  Scientists have been exaggerating their work forever and yet the public still adores us.  Just consider the public attitudes surveys in the recent Science and Engineering Indicators.  Results like this go back decades.  As to whether scientists have to make unreasonable promises, I’d say that all special interest groups must do so.  Scientists are no different.

It was quickly pointed out that this would be all well and good as long as everyone realized scientists were just another special interest group, which is not the case right now.  So how do we reconcile the discrepancy?  It seems there are two ways.  Either scientists stop exaggerating our promise, or the public stops viewing us as special.  So I would recast Ryan’s question as:  would science still be funded as much as it is now under either of these hypothetical scenarios?

Although there’s no real way to know, I’d guess the answer would be no in both cases.  Surely our false promises and an uncritical citizenry play some role in our robust funding.  But given how much scientists depend on public funds, I’d bet we’d fight any change on either front.  Just imagine the uproar if anyone pointed out the similarities between science organizations and teachers’ unions.

In the end, we’re probably stuck with this imperfect situation.  For what it’s worth, in my view it’s not as big of a deal as many–myself included!–sometimes make it out to be.  Consider Ryan’s comment:

If no one recognizes that scientists are an interest group, but everyone’s fine with the current pattern of unfulfilled promises, is that ok? By analogy, what if we all thought that Wall Street firms were just selfless engines of economic growth, and that the main job of the SEC was to make sure the firms did as well as possible? That might make us more likely to accept their behavior, but it probably wouldn’t be a better situation. Institutions like the SEC are (supposed to be) managing these self-interested groups in order to protect the public.

Well, yes.  We definitely do not want the SEC as the spokesperson for Wall Street.  But it’s important to note that dishonesty at the SEC can lead to a global economic meltdown and the suffering of millions.  Dishonesty at NSF leads to a few more random professors studying some random problem and publishing in some random journal that no one (including their colleagues) will read.

So the flawed science-society relationship is tolerated not because the public is particularly stupid or scientists particularly venal.  It’s because on some level everyone knows we’re just not that important.

Science, tolerance, and homosexuality

June 14, 2010

William raised some good points on my last post:

Now, as for why we gay people might be touchy about whether we are EMPIRICALLY mentally well, OR whether maybe it is just by the grace of genteel, liberal, enlightened psychologists that we can be judged morally (but perhaps not scientifically) well – you’d be touchy too! Obviously, mental illness cannot be measured in centimeters, but I believe psychologists have some objective standards concerning what mental illness is and isn’t…Homosexuality is a normal variant of human sexuality. It has always existed, and barring genetic engineering to eliminate it, it always will exist. The fact that homosexuality is not a mental illness is just that: a fact. It is not a moral judgment that allows us to politely TREAT gay people as normal when in fact, we believe that the question of their mental health is just unknowable.

I think there are a few issues getting conflated here.  First, to what extent is classifying homosexuality as a mental illness an objective scientific judgment?  And second, to what extent did a greater empirical and scientific understanding provide the catalyst for greater tolerance?  We could agree on the first point while still recognizing that moral values as well as science advanced the cause of tolerance.

I admit I may have overstepped the bounds of trans-science with this example, although the continued discussion at Sanchez’s post shows it’s far from settled.  I’ll punt on that issue for now to address the more interesting question of science and tolerance.

The empirical rigor provided by science may indeed help overthrow prejudice.  We can and should apply sound research methods to try to reach a conclusive answer.  But we should remember that these questions are not studied in an imaginary world by imaginary scientists.  They are studied by actual human beings, some of whom possess the very biases William tells us science can eliminate.  To believe science will set us free therefore requires us to believe that all-too-human scientists will both conduct sound experiments and interpret the data correctly.  But if the history of craniometry is any guide, we can’t be counted on to do so.  More often than not, sloppy science and faulty analysis have bolstered those in favor of discrimination.  Some would argue that the pattern continues today.

Which is why I’m a little perplexed that William so opposes the notion that moral values along with science can help advance gay equality.  Yes, it may be that homosexuality is objectively not an illness and gay mental health is an empirical, knowable fact.  But it’s important to remember that it wasn’t until at least 1973 that we recognized these facts.  Until at least 1973, the best available data indicated that homosexuality was, in fact, an illness.

So wouldn’t it have been a good thing if moral judgment forced us to treat gays equally regardless of empirical data? Wouldn’t it have been a good thing if genteel, liberal, enlightened psychologists insisted before 1973 that (in Sanchez’s words) “we shouldn’t stigmatize dispositions and behaviors that are neither intrinsically distressing to the subject nor harmful, in the Millian sense, to the rest of us”? And finally, wouldn’t it have been a good thing if we realized that tolerance is an intrinsic good that should not be held hostage to the vagaries of an ever-changing scientific consensus?

William is ultimately too eager to embrace science and too quick to dismiss morality in the service of gay equality.  Both can play a role, and if Sanchez is correct, both did play a role. Admitting this does not undermine the case for tolerance.  Rather, it recognizes that some things in life are too important to be left to science alone.  Opposing discrimination is one of them.

Categories: Values

Science and trans-science

June 9, 2010

Julian Sanchez recently discussed why classifying homosexuality as a disorder hinges on both science and values:

I’m glad, of course, that we’ve dispensed with a lot of bogus science that served to rationalize homophobia—that’s a pure scientific victory.  And I’m glad that we no longer classify homosexuality as a disorder—but that’s a choice and, above all, a moral victory. It ultimately stems from the more general recognition that we shouldn’t stigmatize dispositions and behaviors that are neither intrinsically distressing to the subject nor harmful, in the Millian sense, to the rest of us…The change in the psychiatric establishment’s bible, the DSM, was partly a function of new scientific information, but it was equally a moral and a political choice. [Emphasis added–PK]

Sanchez’s great example highlights what I’ve argued previously: some scientific judgments involve values while some do not.  We can safely say that measuring the acceleration due to gravity is a purely scientific judgment.  But we can also safely say that classifying homosexuality is not.  It remains a mystery to me why some resist this idea.

Consider William’s comment on Sanchez’s post:

Well how about the mental condition called depression? Are you saying that it is a moral rather than scientific question whether depression is an illness/disorder? I’m talking about can’t get out of bed, too weak to commit suicide depression here, not a bout of the blues. How about Post Traumatic Stress Disorder? You’re saying that diagnosis is a moral rather than medical (scientific) question?

Well, no, William.  Neither I nor (I suspect) Sanchez is saying any such thing.  We simply accept that mental health contains both value-free and normative science.  A belief in objectivity with respect to PTSD does not conflict with a belief in subjectivity with respect to homosexuality.  There is no universal standard or set of rules that we can blindly apply in all cases.  Believing otherwise is analogous to playing the game without watching game film.

A couple of things come to mind.  First, as I’ve argued before, simply using the single word science undermines rational discourse on topics like these.  Ultimately, Sanchez is trying to argue that stigmatizing homosexuality involves a different kind of science than what we’re used to.  And this kind of science necessarily involves moral judgments.  But since all we have is “science” and its associated baggage of supreme and perpetual objectivity, this subtlety gets lost.

Second: why did Sanchez have to explain what should be common knowledge?  We figured out no later than 1972, when Alvin Weinberg wrote Science and Trans-Science, that some areas of science cannot be separated from values.  We figured it out again in 1985 when The National Academies wrote a report on risk assessment, yet again when Funtowicz and Ravetz introduced post-normal science in 1991, and once more in Sheila Jasanoff’s book-length treatment on regulatory science.  Scholars from fields as diverse as nuclear physics, philosophy, history, and sociology have all independently determined that science is not a monolith and that, yes, sometimes values play a role.  In the end, Sanchez’s thesis is impressively mundane and uncontroversial.  In an ideal world it wouldn’t merit a shout-out from arguably the most influential political blogger alive.

None of this undermines Sanchez’s eloquence and brilliance.  I am always impressed by his writing, and he does a particularly good job here explaining a complicated topic.  But if we had dispensed with the false notion of one science that follows “the” scientific method, maybe he wouldn’t have had to.

Kitcher on global warming

June 3, 2010

Philosopher Philip Kitcher just reviewed several books on climate change in Science magazine.  I meant to get to this earlier, but Ben Hale got there first and stole some of my thunder.  He even has a snappier title than me. Alas! I won’t repeat what Hale said, and I recommend you go over there and read his post.  Needless to say, you should also read the (pretty long) Kitcher piece.  I’ll have more to say soon, but for now I’ll highlight this:

Captured by a naive and oversimplified image of what “objective science” is like, it is easy for citizens to reject claims of scientific authority when they discover that scientific work is carried out by human beings.

While expanding would have diverted from the main analysis, I wish Kitcher had dwelt on this a bit more.  Why exactly is the public captured by naive and oversimplified images?  Surely the scientific community has played no small role.  We’re nothing if not advocates for an overly simplistic view of science.  Though I’ve sharply criticized a monolithic view of both science and scientists, this is one instance where it’s warranted.  Pretty much all scientists are perfectly happy uttering crudely simple phrases like “replication is the ultimate test of truth in science” when speaking to the public.  Encouraging naivety and oversimplification is par for the course in these situations.

There is something mildly (deeply?) hypocritical about such messaging.  We never stop hyperventilating about the importance of science and scientific reasoning:  Be rational! Look at evidence! Use the scientific method!

And yet, properly applying these principles conflicts with the account of science promoted by scientists themselves! If people actually looked at evidence and used “the scientific method”, there’s no way they’d believe some of the bullshit we say.  You can either be rational or you can accept scientists’ description of science.  But you can’t really do both at the same time.  We welcome rationality and evidence-based reasoning except, ironically enough, when talking about science.  Here it seems we want nothing more than mindless, uncritical adulation.

Now there are much worse sins than hypocrisy.  For the most part it doesn’t really kill anyone.  But Kitcher suggests that global warming deniers succeed partly because the public adopts an oversimplified view of science.  Given that scientists themselves promote such views, and also given some of the dire predictions of a warming world, hypocrisy might be a bit more costly in this case.