Does the writer know what that sentence actually says? The answer is routinely no… Here’s another example: “Fixed-gear bikes are ridden exclusively on these tracks.” This sentence is almost proud of its perfect ambiguity. It means one of two things: “People ride fixed-gear bikes only on these tracks” or “On these tracks people ride only fixed-gear bikes.” Both are statements about exclusivity, but one is about bikes, the other about tracks. The sentence as written offers no way to choose between them. — Verlyn Klinkenborg
Klinkenborg’s advice should be heeded by science writers, this one included. Though clarifying the vocabulary of science has been a hobby-horse of mine for some time, I’m starting to wonder what that phrase means as it is written. My intentions, as Klinkenborg helpfully observes, are invisible to all of you. Am I speaking about the research vocabulary scientists use among themselves? Or the vocabulary of journalists popularizing science? And which scientists? What parts of science?
To try to start the process of clarification…
All of us use generalizations. We generalize about sports, about states both red and blue, about men, about women. Thankfully, we mostly recognize these generalizations as generalizations. We know they have limited value and we expect deviations. We get that Orange County can be conservative even though California is liberal.
We also know that complicated systems can be analyzed on different levels. Sports reporters focus on details of the game as well as the backroom negotiations. We routinely hear that sports is about greed and corruption as much as it is about teamwork and grit.
So the question is…do we know that sentences like “science is about testing hypotheses” are also generalizations? That not all scientists test hypotheses in the same way? That some fields are not amenable to hypothesis-testing? And do we know that science, like sports, can also be analyzed on different levels? That science is “about” writing grants as much as hypothesis-testing?
Helping us think about “science” in this way–the way we already think about many large categories–is a central goal of this blog. I hope that’s now clear.
If you are planning to engage in science outreach, here’s a word of advice: you must assert regularly and with great conviction your belief in Scientific Exceptionalism. This seems especially true if you are vying for leadership in a scientific society. The particular phrasing varies a bit, but the general message is the same: science is the greatest force in history; science has done more good for the world than anything else in history; science is the envy of the humanities and arts because it is unique. The flip side of these assertions is less often stated but present nonetheless: don’t question the goodness of science; being pro-science means supporting anything academic basic researchers want; criticism of anything they say is anti-science.
Who needs nuance when you’ve got exceptionalism? Why show humility if you’re convinced that science is a “means to freedom?” I hope scientists realize some grad students leave precisely because they can’t stand this attitude.
On a blog I’ve just started reading ardently, Venkat Rao explains why he doesn’t like the scientific method:
I don’t like or use the term scientific method. Instead, I prefer the phrase scientific sensibility. The idea of a “scientific method” suggests that a certain subtle approach to engaging the world can be reduced to a codified behavior. It confuses a model of justification for a model of discovery. It attempts to locate the reliability of a certain subjective approach to discovery in a specific technique…
…The scientific method is a sensibility crammed into the mold of a system. It is an attempt to externalize something subtle and internal into something legible and external. The only reason to do this is to scale it into an industrial mode of knowledge production, which can be powered by participants who actually lack the sensibility entirely. Such knowledge production has been characteristic of the bulk of twentieth century science (in terms of number of practitioners, not in terms of value). Hence the Hollywood stereotype of the scientist as a methodological bureaucrat; someone who worships at the altar of a specific method. Sadly, Hollywood gets it right. The typical scientist is a caricature of a human.
Though I’m nothing if not an opponent of vague terms like TSM (the scientific method), we need to be careful here. I’ve had countless discussions on this, and Venkat makes TSM seem more rigid and emotionless than its adherents intend it to be. More often than not, TSM is simply a synonym for the notion that problems should be studied rigorously and with care. It’s the recognition that there are better and worse ways to study certain questions even if we allow that there is no step-by-step blueprint. I can agree with Feyerabend’s methodological anarchy, disdain Hollywood’s mechanized portrayal of scientists, and still believe that the phrase TSM is useful.
Personally I would be happy to get rid of the term. But we must wrestle with how scientists actually think of and use TSM rather than its public caricature (which I feel most scientists disagree with already). We’re trying to have a semantic argument without engaging in the boring, messy work of semantics. I suspect that if we did so, there’d be much common ground between the scientific method and the scientific sensibility.
Writing in the Times, Cordelia Fine justifies confirmation bias as a tool that facilitates discovery:
Scientists are not immune. In another experiment, psychologists were asked to review a paper submitted for journal publication in their field. They rated the paper’s methodology, data presentation and scientific contribution significantly more favorably when the paper happened to offer results consistent with their own theoretical stance. Identical research methods prompted a very different response in those whose scientific opinion was challenged.
This is a worry. Doesn’t the ideal of scientific reasoning call for pure, dispassionate curiosity? Doesn’t it positively shun the ego-driven desire to prevail over our critics and the prejudicial urge to support our social values (like opposition to the death penalty)?
Perhaps not. Some academics have recently suggested that a scientist’s pigheadedness and social prejudices can peacefully coexist with — and may even facilitate — the pursuit of scientific knowledge.
Fine ends by calling for scientists to “embrace their humanity,” a request that readers of this blog know I support wholeheartedly.
My friend I-Chant had a sharp comment on my recent post on hypocrisy (emphasis added):
I think the main problem is that there is an assumption that humans have ever been/can ever be “rational” actors. I think the ancient Greeks planted that in society’s ideals but there is now a mountain of evidence that suggests humans act like humans and not robots. Big surprise! This behavior, which includes having emotions and sometimes letting them be more dominant than evidence, isn’t wrong. It just is how we are. It’s hard to have that as a standard when it is so against our very nature. I can understand the need for it because it is a level playing field where supposedly, everyone can engage with the debate. However, it’s probably easier to find a way to accommodate people’s actual behaviors rather than set an impossible standard. The question now is, how do we have intelligent public discourse that also takes into account stakeholders’ very real emotions?
As I’ve noted before, I think we have to accept a measure of futility here. Whether we like it or not, public discourse is heavily constrained. Since we’re not allowed to say “I really like basic research and I want money for it,” we’re forced to make dubious claims along the lines of: “Basic research is the primary driver of economic growth, and thus everyone on Earth benefits with more string theory!”
Granted, public funds should serve some public benefit. It’s reasonable to ask scientists (and everyone else) to couch their lobbying in terms of the public good. But until we open some space for simple, mundane arguments, we’re stuck with the distortions, fabrications, and lies that are the hallmark of democracies everywhere.
Allow me to arrogantly close by quoting myself:
[Cancer and global warming] need to be solved, and I’m glad everyone devotes so much energy to them. But it’s quite strange that all of our arguments have to be framed by such superlatives. There’s no space to make a simpler point that in the end is probably much closer to the truth. Physicists can’t say that we focus on numerical solutions not necessarily because we believe it’s the best or only path forward, but because we enjoy that type of work. Conversely, the science studies crowd does not (as far as I know) point out that the (mild!) hypocrisy of scientists’ exaggerations is intrinsically wrong. Rather, they string together a series of tendentious links that are very hard to prove. If the benefits of basic research weren’t exaggerated, then (maybe) we’d spend more on socially relevant research, and then (again maybe) we’d make better progress on solving cancer and global warming. There are too many hypotheticals here for my comfort, and this argument is no less convoluted than the ones often made in defense of basic research.
We really shouldn’t have to justify everything in terms of majestic solutions to big problems. Many scientists simply like basic research and they should be allowed to say that. And we can disagree with scientists’ exaggerations for the rather boring grounds that the arguments are bad on their own terms and undermine honest debate. Cancer shouldn’t have to be part of the picture.
Cogitating some more on “The Politics of Demarcation” by Paul Newall and Michael Pearl, I understand their dismay when scientists don’t stick to the values they promote. I get it. Paul and Michael want to right some wrongs, and they do so by highlighting scientists’ shortcomings. Shoddy analysis and cherry-picked data are bad and should be attacked. It’s especially hypocritical coming from the alleged paragons of reason. When we see those awful things happen and say nothing, we’re almost as guilty as if we did it ourselves. Irrationality and sloppy logic anywhere is a threat to rationality and sound logic everywhere. Again, I get it. I too have made similar arguments.
Paul sums up the attitude here (emphasis added):
Ultimately what this discussion suggests is that if the adoption and use of poor arguments is to be lamented when undertaken by those advocating intelligent design, surely those opposing it must hold themselves to a higher standard?
However, is the Creationism/ID issue the sort of circumstance that warrants the abandonment of the principle of philosophical rigor?
Noting that the setting is a legal/political one does not itself justify the abandonment by philosophers of the devotion to argumentative rigor to which they are presumed to be devoted. The Creationism/ID matter is anything but a harrowing circumstance; so, as exactly what are philosophers operating when they so willingly sacrifice the philosophical for the sake of the political? Are they anything more than window dressing?
Although I’ve done so myself, I’m starting to think this is a bad approach. Why exactly should scientists hold themselves to a higher standard? If the standard in question requires them to always make rigorous arguments, it’s clear that scientists never subscribed to it. In this context their highest standard is preventing ID from being taught in science classrooms.
The window dressing comment similarly misses the point. It’s not that scientists don’t value rigor. It’s that sometimes other things are more important. Like everyone else, scientists have context-sensitive desires and goals. At times these desires and goals conflict. Philosophical rigor is not the only, or even highest, principle.
We keep expecting scientists to be different from everyone else. We expect our public invocations of precision, evidence, and logic to be applied to everything we do, all the time. But why should this be so? When have scientists ever been uniformly consistent in this regard? Does anyone actually maintain an existence of strict, perpetual rationality? Perhaps the biggest change in my thinking over the past couple years has been my often grudging acceptance that I cannot do this. I don’t think anyone can.
I know we all want more intelligent, rational public discourse. Discourse that abides by some basic rules of logic and evidence. Unfortunately, this situation does not exist, and never has existed. In our frustration at those who violate these precious rules, who thwart our attempts to improve public debate, we take on a familiar role. We attack their arguments, emphasize their flaws, accuse them of duplicity. We keep fighting this fight even though we know there will always be too many fallacies, distortions and misconceptions to respond to. Those of us who care for public rationality know we’re in a losing battle.
Perhaps the futility of this battle is a sign it shouldn’t be fought in these terms to begin with. (I’m thinking as I write here, so bear with me.) Careless, bad arguments are an indelible feature of democracy. They will always be there. So perhaps the better way to improve public discourse is not only to criticize these bad arguments. We should also acknowledge that there will be times when we all have to argue for something we deeply care about. And in those instances, it’s likely that our arguments will not be completely rational or logical. We are human after all. The exigencies of fighting for our values will ultimately trump academic concerns for reason.
This painful process of accepting our own irrationality should, I hope, temper the outrage when we recognize it in others. Yes, we still should criticize bad arguments, and especially from those who should know better. And yes, we still should note when scientists don’t live up to their standards. When we make such accusations, however, we should do it with the knowledge that at times we too exhibit such hypocrisy. It happens to all of us.
It has been two months (an eternity in blogging years) since Paul Newall and Michael Pearl insisted that the issue of teaching intelligent design in schools should not be resolved via demarcation. While Paul is on solid ground when he deconstructs the sloppy philosophical arguments used by the anti-ID crowd, I take issue with his analysis of the implications (emphasis added):
The implication is thus that if arguments for demarcation criteria continue to fail, if these failures are seized upon by intelligent design advocates and if there are better reasons to dispense with this approach altogether, it is likely that objections to intelligent design on some other basis will be more successful at least in part because they are more philosophically rigorous. Criticising an insistence on demarcation, far from demonstrating a lack of political understanding, actually returns the issue to one of science instead of philosophy and provides a service to the debate rather than acting as an irrelevance or hindrance.
I’m not sure I follow. Scientists are already pretty successful in applying demarcation to intelligent design, however erroneously they do so. They do, after all, win the important cases. And so it’s not clear what they would gain by trying to make their arguments more philosophically rigorous. Their primary goal is to prevent ID from being taught in science class, not to get an A+ on a philosophy paper.
Along those lines, what does it mean to provide a “service to the debate?” Most scientists would say that nothing undermines “the debate” more than confusing ID for science, and thus we must employ any and all arguments–even philosophically suspect ones–to ensure said confusion does not persist. Paul, Michael and I are surely among the minority who so desperately believe that the current form of the debate serves as a hindrance. Mainstream scientists are happy to keep it in these terms, especially because they seem to be successful at it.
There’s an irreconcilable mismatch of goals here. Philosophers–and the former space physicists who have defected to their camp–think truth, sound arguments and civil discourse should hold greater sway. Scientists disagree. More than anything else, they care (not too unreasonably) about preventing ID from entering science classes. Even if most scientists understood the demarcation problem (I’m pretty sure most have never even heard of it), and even if they agreed with Paul that methodological naturalism cannot be used to demarcate ID (most passionately and honestly believe that alone suffices), I bet their approach wouldn’t change much. For better or worse, this issue has always been fought in terms of demarcation. Unless something drastically changes, that’s the way scientists will continue to fight.
Scientists and their advocates need to become more knowledgeable about how people come to their beliefs — who they rely on for scientific information, what they hear, and through which filters they hear it….
Nor are pleas for rationality and greater respect for science likely to win the day. Were hard data and cold logic all that mattered, any number of common personal behaviors would be long gone by now, from smoking to overeating. As any skilled public relations practitioner will attest, successful communication meets people on their own turf — by means that address emotions, fears and values.
While I’m deeply sympathetic to this argument, it only goes half-way. If scientists truly want to understand the public, they first have to accept that they are part of it, that we too need our emotions, fears and values addressed. For me at least, building self-awareness and humility has been a continuous, painful process, and one that I’m far from completing. Confronting and interrogating your most deeply held beliefs and concluding that you do not, and may never, really understand them is not easy. I now look back with more than a little embarrassment at my Richard Dawkins-worshipping days and wonder how I could have been so stupid.
As I’ve done so often, here’s Barbara Herrnstein-Smith articulating my thoughts better than I can in the context of (yet again!) science and religion:
Scientists share cognitive tendencies, achievements, and limits with nonscientists; religious believers share them with nonbelievers. Although each may put the world together and conduct his or her life in ways that are at odds with or opaque to the other, the cosmology and way of life of each deserves minimally respectful acknowledgement from the other. Such acknowledgment would not mean accepting ideas one finds fantastic or claims one knows are false. And of course it would not mean approving practices that one knows are confining, maiming, or murderous to oneself or to others. What it would mean is recognizing, as parallel to one’s own, the process by which those cosmologies and ways of life came to be formed.
Not me, says the self-vaunting evangelical atheist. Tu quoque–you too–says the defensive, resentful theist. Et ego–I, too–says the reflexive, reflective naturalist.
The best two closing paragraphs to a non-fiction book I’ve ever read.
A few months ago Alan Jacobs provided some needed restraint on the temptation to over-generalize, in this instance on the role of social media in the Middle East revolutions:
So when Clay Shirky says, “social media . . . helps [sic] angry people coordinate their actions,” I don’t know how we would even figure out whether a statement that broad is true. Which social media? Which actions? In which societies? Presumably when people connect with each other they won’t always agree, so how do we know that some social media, anyway, don’t exacerbate conflicts? Maybe some people in some societies would coordinate better if they met face to face. Maybe, though there are certainly dangers in meeting face to face, there may be just as many dangers in coordinating via social media, depending on how careful the users are and how technologically sophisticated the oppressors are.
It struck me how much Jacobs’ critique applies to the (all-too-frequent) platitudes often heard about science. I’m not at all sure what it means for science to “lie at the center of policy issues facing our nation and world.” Well, which issues? Do all nations face them? And what exactly do we mean by “the center?” It seems a bit strange to suggest that science is equally central to both climate policy and financial regulation. In discussing both social revolutions and science policy, more particularization can be helpful.
And while, as commenter PEG reminds us, ”Broad trends do in fact, exist. There is value in identifying them precisely, analyzing their impact on the present and plotting out their course in the future”…the sloppy, armchair observations about science hardly qualify as precise analysis. Surely we need something more than the exceptionally vague “science and technology continue to transform our lives.” Until we clarify what such statements actually mean, I’ll try my best to stick with the particulars.
In a recent exchange, Peter decries “overblown public assessments” of the benefits of science, and warns that “putting out hype that encourages unrealistic expectations is stupid and will eventually come back to bite the originator.” It’s a typical sentiment, and one that the science studies folks make often. You can find a recent iteration just after the State of the Union, when Matt Nisbet worries that Obama risks trust in “America’s most admired institution” by making science the center of his domestic policy.
This type of argument is quite common: If scientists don’t stop distorting the truth, then someday there will be a reckoning. The only problem is that there never has been and probably never will be any such reckoning. Scientists continue to insist that basic research is the source of applied research, that science is the center of decision-making, and that more science will solve all problems. Despite the possibility of impending doom over such claims, they (we!) appear willing to take the risk.
If we grant that overblown public assessments are intrinsically bad (and I’m not entirely convinced they’re that bad), those of us trying to change scientists’ behavior have to concede they face no consequences. It’s simply not very persuasive to argue that the very, very, very slight chance of backlash is reason enough for them to change. Suggesting otherwise is itself an overblown assessment and will rightly be ignored.