…first, check out this video that’s been making its way around Facebook.
I may be shooting in the dark here, but I wonder how much of the increase in college costs I just discussed can be attributed to the uniquely (?) American system of mingling research and education. As far as I know (and I may be wrong on this point), American universities play a bigger research role than do universities in other countries. It may not be stated explicitly, but most (if not all) U.S. universities see themselves as research institutions as much as educational ones. So to what extent is an increasingly expensive research apparatus to blame for an increasingly expensive college education? Given the prestige associated with research, I suspect that there’s even more pressure on faculty to publish and seek grants. That has to affect tuition…right?
I know I owe David Bruggeman a response, and I’m slowly working on it. Until I sort through my thoughts, I’ll highlight a recent WSJ piece on the growing trend of “pricing” university professors. Driven by budgetary pressures and poor test scores, universities are facing more pressure to prove that the money spent on professors is well spent. Texas has taken the lead here, and Texas A&M allegedly computed a profit-and-loss statement for each of its faculty based on students taught, tuition generated, and research grants won.
I’m of two minds on this trend. On one hand, inefficiencies in academia should be rooted out and external pressure is probably the only way to make that happen. On the other hand, it is a bit disconcerting to see education characterized in such crudely utilitarian terms. There are inherent, diffuse benefits to education that cannot be captured in simplistic cost-benefit analyses. Economic growth isn’t everything. And while more focus on the practical is needed, it can go too far.
I’m also bothered that “colleges typically earn points for pushing students to take science, engineering and math.” This approach exempts science from the critical self-examination that it so desperately needs (does society really benefit by producing more physicists?) while unnecessarily devaluing the humanities. Along those lines, this Martha Nussbaum book has been on my reading list for a while (h/t The Eduwonk).
I’ll close with a sad admission by an A&M history professor (emphasis added):
“Taxpayers of the state of Texas,” Mr. Peacock says, should decide whether “they should be spending two years paying the salary of an English professor so he can write a book of poetry simply to add to the prestige of the university or the body of literature out there.” When the choice is put that bluntly, Chester Dunning, a history professor at Texas A&M, wonders if he’d pass muster. Mr. Dunning teaches two classes a semester and has won several teaching awards. His salary of about $90,000 a year also covers the time he spends researching Russian literature and history. His most recent book argues that Alexander Pushkin’s drama “Boris Godunov” was a comedy, not a tragedy.
Mr. Dunning says his scholarly work animates his teaching and inspires his students. “But if you want me to explain why a grocery clerk in Texas should pay taxes for me to write those books, I can’t give you an answer,” he says.
His eyes sweep his cramped office, lined with books. Then Mr. Dunning finds his answer. “We’ve only got 5,000 years of recorded human history,” he says, “and I think we need every precious bit of it.”
The current issue of the Atlantic has a fantastic profile of Greek physician John Ioannidis, author of the most downloaded paper in the history of PLoS Medicine (h/t The Bubble Chamber). Ioannidis has apparently determined that many of the most heralded findings of modern medicine are based on sloppy, careless research:
[Ioannidis’s] model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
The article delves deeper into some of the reasons for the outcome, with the drive for publication as one of the chief culprits. I recommend reading the entire piece.
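The model behind those wrongness rates is essentially Bayesian: a “positive” finding is only as trustworthy as the pre-study odds that the tested relationship is real. Here’s a minimal sketch of that arithmetic; the power, significance, and prior-odds values are illustrative assumptions on my part, not numbers from the article.

```python
# Sketch of the pre-study-odds reasoning behind Ioannidis's PLoS
# Medicine paper. All parameter values below are illustrative
# assumptions, not figures from the Atlantic profile.

def positive_predictive_value(prior_odds, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is
    actually true, given the pre-study odds (true : false ratio)
    that the tested relationship is real."""
    true_positives = power * prior_odds   # real effects correctly detected
    false_positives = alpha               # null effects crossing p < alpha
    return true_positives / (true_positives + false_positives)

# Plausible exploratory research: 1 true relationship per 10 tested.
print(positive_predictive_value(prior_odds=0.10))   # ~0.62
# Long-shot screening: 1 true relationship per 100 tested.
print(positive_predictive_value(prior_odds=0.01))   # ~0.14
```

Even a well-powered study of a long-shot hypothesis produces mostly false positives, which is roughly why exploratory, non-randomized research fares so badly in Ioannidis’s figures.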
Longtime readers will not be surprised to hear that I found this passage particularly appealing (emphasis added):
In fact, the question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community. Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding. Ioannidis dismisses these concerns. “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
Modest and restrained arguments more accurately represent the promise of science while avoiding the intellectual costs of bluster and exaggeration.
David Bruggeman wants clarification on my desire to see more public engagement from historians and philosophers of science. I’ll pass on that for now because it’s hard to think during the baseball game.
For some light reading, check out Colin Macilwain’s column over at Nature. Macilwain is rapidly becoming one of my favorite science columnists, and here he discusses growing tension between scientists and engineers in Great Britain. I found this quote interesting: “there is a general attitude among the scientific community that science is superior to engineering.”
My research career has traversed from numerical relativity to radiation belt remediation in the department of electrical engineering. As such, I’ve gotten to know a fairly broad cross section of both scientists and engineers. In my experience there may be a small superiority complex amongst scientists, but it is very small. Perhaps it’s a generational thing, but many theoretical physicists I know thoroughly accept that their work may not be that important, and they’re often very grateful that society decides to fund their (not immediately useful) research. I suspect part of the bluster Macilwain detects is masking a degree of insecurity. But that’s a topic for a different post!
Over at The Bubble Chamber, they’re having a spirited discussion on the (possible!) social relevance of history and philosophy of science. I’ve supported this enterprise for quite some time now, and it’s refreshing to hear what practicing philosophers think.
I’d especially like to see more discussion on some of the “big issues” (public discourse anyone?). Many people already spend time on discrete topics such as climate change and nanotechnology. I think more people in HPS should try to help create a better narrative of what we call science.
Via Roger, Nature has an interesting article on the apparent muzzling of government scientists at various agencies. Some of the stories are a bit troubling, but for me the key sentence comes at the end of this paragraph:
Lane is concerned about the effect of these restrictions on scientists and their work. “It kills morale,” he says. “It makes scientists feel like their work is not valued, and it makes it harder for agencies to recruit and retain the best scientists.” Keeping information from the public could put the credibility of the agency at risk, and some scientists say it affects their careers. “The restrictions limit my overall stature in the research community,” says an ARS scientist who asked to remain anonymous.
Not everyone feels this strongly. Some ARS scientists say that the agency’s internal review process for their research papers is appropriate, and is just part of working for the government.
I suspect part of the problem (as alluded to in the article) is that scientists working in government simply don’t operate under the same rules as, say, academia. Conflict and disagreement will inevitably happen until the rules are clarified. I’d like to know exactly how many scientists feel muzzled, and how widespread this feeling is. What exactly does it mean that “some scientists” feel the restrictions are appropriate? Some people will always have problems with the rules, and I need a bit more context to evaluate these claims.
I also think it’s mildly funny that the alleged restrictions are what’s harming the scientist’s stature in the research community. I’d have thought that leaving academia in the first place is what hurts your status, restrictions or not.
Via David Bruggeman, I’ll highlight The Bubble Chamber, a wonderful new blog by historians and philosophers at the University of Toronto. As David says, it’s great to see them trying to wrestle with contemporary problems. You should also be reading Age of Engagement by science communication scholar Matt Nisbet.
I’ll try to balance my two glowing reviews of Natural Reflections with some mild criticism. I have two specific complaints. First, as I’ve experienced with much writing in this genre, Herrnstein-Smith cites a research paper or puts something in the footnotes where a concrete example would have clarified much. Throughout the book, she alludes to the fact that scientists exhibit some of the same cognitive limitations (such as confirmation bias) that other humans do. But other than an introductory anecdote, I wasn’t left with anything concrete. Rather than referencing a study by Mynatt, Doherty and Tweney (on p. 133), it would have been nice to see an example.
My only other (even more minor) gripe is that her discussion on the relationship between basic research and technology towards the end of the last chapter would have benefited from an economics perspective. But she managed to effectively make her point anyway, so it wasn’t a big deal.
I’m taking a break from my book review to briefly address something that’s always bothered me about how science spending is discussed. As I often do, allow me to analogize poorly from a political debate. Via Andrew Sullivan, check out this skewering of a sloppy, careless article on defense spending in the Post. In criticizing the Post article, Gordon Adams notes (emphasis in original):
They substitute the economic “burden” of defense for what we actually spend on defense, trying to make the case that Eisenhower spent more on defense than we do today. Not true. Eisenhower’s defense budgets were a larger share of GDP than they are today. But the share of GDP consumed by the defense budget measures how much it consumes of US overall product, not what we spend. If the GDP grows, unless the defense budget grows at the same rate it will consume a smaller share of GDP. Doesn’t tell you much, except whether the economy can handle it. The proper measure of what we spend is what we spend, not how much it takes of the economy. Spending is measured in constant dollars, not in GDP shares. In FY 2011 constant dollars, the average Eisenhower defense budget was just over $400 billion; the comparable number for FY 2010 is over $699 billion. That’s more, a lot more, in anybody’s book.
A similar point can be made about science spending, and I’ve never understood why we use portion of GDP as a relevant metric. It’s always struck me as kind of a phony metric that conveys no information. Actual science spending has increased inexorably since WWII, whether it’s 2.35% or 2.54% of GDP. We can argue for more science funding without promoting the false notion that there is some “correct” share of GDP that should be spent on R&D. But of course, all we get is meaningless applause when presidents make equally meaningless promises.
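To see why the GDP-share metric can mislead, here’s the underlying arithmetic using Adams’s constant-dollar defense figures. The GDP numbers are rough, round values I’m assuming for illustration, not official data.

```python
# Illustrative arithmetic only: the defense figures come from Gordon
# Adams's comparison; the GDP figures are assumed round numbers.

eisenhower_defense = 400   # avg Eisenhower budget, $B, FY2011 dollars
fy2010_defense = 699       # FY2010 budget, $B, FY2011 dollars

# Assumed real-GDP values in the same dollars (placeholders):
eisenhower_gdp = 3_000     # $B, illustrative
fy2010_gdp = 14_500        # $B, illustrative

share_then = eisenhower_defense / eisenhower_gdp   # ~13%
share_now = fy2010_defense / fy2010_gdp            # ~4.8%
real_growth = fy2010_defense / eisenhower_defense - 1

# The GDP share fell even though real spending rose substantially.
print(f"share then: {share_then:.1%}, share now: {share_now:.1%}")
print(f"real spending growth: {real_growth:.0%}")
```

With these numbers, spending grew by roughly 75 percent in constant dollars while its GDP share fell by almost two-thirds, which is exactly why the share tells you about the economy’s size, not about what we spend.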