Konstantin Kakaes tells us to chill out and stop worrying about Chinese science:
The bible of the competitiveness crowd is a National Academy of Sciences report called Rising Above the Gathering Storm. (In terms of melodramatic white paper titles, the United States is surely a world leader. The report was first issued in 2005; a 2010 revision was subtitled: Rapidly Approaching Category 5.) The 2010 report notes, “30 years ago the United States had 30 percent of the world’s college students. Today we are at 14 percent and falling.” This is cited as evidence of a decline in American competitiveness. But that’s like saying the United States has a smaller percentage of the world’s well-nourished people than it did 30 years ago. It is good for people around the world to go to college and be well-fed. Neither takes anything away from the United States.
The competition rhetoric is almost always linked with calls for increased investment in research. But as Argentino Pessoa of the University of Porto, among others, has pointed out, there is a slight negative correlation between R&D intensity and GDP growth—in other words, spending more on research doesn’t necessarily make you richer. Amar Bhide, in his book The Venturesome Economy, cites the example of Norway, which isn’t even in the top 20 countries ranked by share of scientific papers published, but has the highest labor productivity in the world.
Knowledge—of which technology is a kind—gets shared widely. A Dec. 7 New York Times article called “China Scrambles for High Tech Dominance” gets it exactly wrong. “If the future of the Internet is already in China, is the future of computing there as well?” The future of the Internet isn’t in China any more than the present of the Internet is in the U.S. Technonationalists (as Bhide calls the competitiveness caucus) like to trumpet the fact that Google is an American company. But the benefits of quartering Google’s corporate headquarters are dwarfed by the benefits of using Google (and its peers, like Baidu, a Chinese search engine) and other revolutionary technologies. And those benefits get spread widely. The Internet, for example, was invented in the United States—but that does not mean we get the most benefit from it.
I’m sure that Matt Yglesias has forgotten more economics in the past hour than I will ever know. And yet, he believes that “if spending on military robotics declines then our most talented roboticists will focus more of their time and attention on civilian applications.” Really? Military spending doesn’t affect the overall demand for engineers and scientists? It’s just as likely that if spending on military robotics declines our most talented roboticists will leave robotics and science altogether. If Lockheed Martin, Raytheon, etc. weren’t hiring, many of my friends would be out of a job, not making snazzy commercial gadgets.
Several liberal bloggers protested the Times suggestion that cutting the Defense budget will reduce innovation. While some of their points are well-taken (the DOD budget is almost certainly bloated and wasteful), they all unfortunately make two big mistakes: they equate defense research with weapons research, and they neglect the role of deployment in bringing technology to scale.
Here is Robert Wright’s flawed analysis, typical among the group:
Defense department research, in contrast, focuses on services that people are more ambivalent about–like getting blown up. If more benign services get developed in the process–like if blowing people up involves technologies that help them play digital music–that’s a happy accident.
Wright’s simplistic link between DOD research and weapons ignores the synergies between civilian and military technologies. At some point the DARPA-funded optical interconnects that my girlfriend studies may improve weapons. But in the short run, they have a much better chance of reducing energy use.
At least in universities, DOD complements rather than competes with civilian agencies, and they all fund similar work. Everyone in my lab did the same sort of space physics research. Some of us were funded by the Air Force, some by the Office of Naval Research, and some by NSF. While the emphases may have differed slightly, there was a lot of overlap. That’s how we could all share the same advisor. I’m sure there’s a similar dynamic in quantum computing funded by DARPA, the NSF, and DOE.
The existence of multiple funding agencies is one of the main strengths of U.S. science. They foster diverse approaches and ensure that a single paradigm doesn’t dominate. It wouldn’t necessarily be a good thing if all of DOD quantum computing money were transferred to the NSF. We want many groups attacking the same problem and we should be happy DOD is part of the mix.
Now if all we care about is research production, we may be fine with just two or three agencies funding science. Especially if DOD is as inefficient as they suggest, we may be better off transferring half the DOD research budget to NSF and DOE.
But we don’t care about research for the sake of research. We want to drive innovation, which depends on much more than government funding. Which brings me to the second mistake Wright et al. make: ignoring the importance of deployment.
As David Roberts noted, technology deployment is itself a form of research. It’s one thing to make a neat device in your lab. It’s quite another to scale the product, align it with customer needs, bypass regulatory hurdles, and market it successfully. I can’t tell you how routine it is for a company to fail for these reasons even if it has the science locked down. As great as NSF research is, it’s only a small part of the picture.
Computers are commonplace not only because smart physicists figured out quantum mechanics. It’s also because we learned how to make lots of computer chips cheaply and quickly. The DOD role in this development has been crucial. Precisely because DOD is so massive and relatively price-insensitive, it enabled large-scale deployment and the learning that goes along with it.
Cliff Bob shows that he doesn’t understand any of this:
Nowhere in the article is there anything but assumption that only the military, as some kind of beneficent and far-seeing midwife of invention, could have fostered these and other innovations. Nowhere are there convincing arguments that most if not all of these developments wouldn’t have been made either through some other government R & D agency or through the market itself.
Nowhere in Bob’s article is there anything but the wrong assumption that “these developments” occurred primarily because of an R&D agency rather than procurement and deployment. In some cases DOD was the only market in existence because no one else could afford the technology. Only after DOD brought down the price of semiconductors did we all benefit.
DOD may very well be wasteful and inefficient. Maybe in 2012 it’s not the best way to drive innovation, and perhaps the negatives now outweigh the positives. Those are fair arguments. But to debate the point intelligently, we have to first rid ourselves of the myopic view that money is all that matters. DOD funding is associated with scale and deployment, key components of innovation and commercialization. (See Roger for more along these lines.)
The most depressing part about all this is how otherwise brilliant writers make bafflingly simplistic arguments when it comes to innovation policy. Is it really so hard to understand that innovation requires more than government funding of R&D?
Another quick link to another blog. Check out Financial Times columnist Tim Harford calling for more experimentation and risk-taking:
I think our system for promoting innovation, which is funded by a combination of government grants and private enterprise, struggles with large and adventurous projects, such as clean energy. The private sector is terrific at producing lots of experiments (just think of Silicon Valley) but not at funding expensive, long-term projects. Government grants can do that but are often rather risk-averse. One promising approach to get the best of both is innovation prizes. Another is to use a far more risk-loving system of grants.
Recently there seem to be many articles along these lines. In principle, I’m all for more grants for risky research, more experimentation, and the expansion of innovation prizes. But I haven’t seen anyone detail how we would implement such a system in practice, how it would be structured, and perhaps most importantly, how to get buy-in from the senior faculty who sit on the grant committees!
Light blogging week this week. But check out Farhad Manjoo, who insists that Mark Zuckerberg, not anyone else, invented Facebook and deserves the credit. I’m not too familiar with the details, but this passage caught my eye (emphasis added):
I suspect we’re mainly interested in how Facebook got started because we want to know whom to credit for coming up with a brilliant idea. In America, we root for the guy with the great idea over the guy who didn’t sleep for a year making it happen. If Zuckerberg really did come up with the idea for a campuswide social network, he deserves all the billions that are coming to him. But if he stole the idea, why should he profit from something that someone else thought up first?
Easy answer: because Zuckerberg did it better. If you look at the early history of Facebook, you’ll see that almost nothing about it was a new idea. Even if it’s true that the Winklevosses came up with a plan for a Harvard social network first, they were obviously inspired by other sites. Social-networking sites—even ones focused on college students—had existed long before Facebook. The real value of Facebook wasn’t that it did something new, but that it did something old better—faster, prettier, more useful, and more addictive. This is a story we’ve heard before in the likes of the iPod, the iPhone, the iPad, Windows, and Google. None of these were new ideas, but we shouldn’t think any less of them because of it. Ideas are overrated. In technology, what really matters is execution.
Here’s David Rothkopf at Foreign Policy parroting the standard (and careless) meme that there is a simple, straightforward link between scientific production and economic growth, while also making some dubious historical claims:
The report reveals that whereas in 1996, the U.S. produced approximately 290,000 scientific papers and China produced just over 25,000, by 2008, the United States had crept forward to just over 316,000 whereas China had increased to about 184,000. While estimates as to the speed China is catching up vary, the report concludes that a simple straight-line projection would put the Chinese ahead of the United States … and every other country in the world … in output by 2013.
How did China do it? Simple: They made it a priority. They increased research and development spending 20 percent a year or more every year since 1999 and now invest over $100 billion annually on scientific innovation. It is estimated that five years ago, the Chinese were already producing over 1.5 million new science and engineering graduates a year.
This data resonates on many levels. It suggests a profound shift in the world’s intellectual balance of power. This shift is one that is historically linked to the economic vitality and consequent political and military clout of the countries that lead. It suggests a much better future for the people of the world’s most populous country and knock-on benefits for their neighbors and trading partners. It suggests a relative decline in influence for the U.S. And, for the people of the Arab world, currently struggling with their own revolutions, it suggests the only true path to real reform, opportunity and empowerment.
It is an axiom of history that the silent revolutions — like those that periodically come in science and technology — are far more important than the noisier, bloodier and more publicized political kinds. That’s why these subtle indicators of their progress can be even more momentous than the round-the-clock coverage of upheaval that seems to be dominating our attentions at the moment.
How on Earth can you possibly specify the relative importance of the American and French revolutions over some (unnamed) silent revolutions? What does “more important” even mean? At any rate, we’ve also known for at least five years that total graduates is a poor metric to use for both China and India. More recently, this article in the Wall Street Journal notes the generally poor quality of many Indian college graduates. There are broad trends in global science and they must be carefully examined and understood. This piece doesn’t help.
In any event, these numbers-focused discussions have a bad habit of devolving into narrow zero-sum conversations about whether leading countries are losing ground. Because collaboration is an important part of science, and because it tends to resist international tensions, having more publications, more scientists, and more quality researchers will help everyone.
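For what it’s worth, the “simple straight-line projection” Rothkopf leans on is easy to reproduce, and doing so illustrates why these metrics deserve scrutiny. A minimal sketch, using only the 1996 and 2008 figures quoted above (the report itself presumably fit steeper recent-year growth, which is exactly why the choice of window matters so much):

```python
# Linear extrapolation of annual paper counts from the two endpoints
# quoted in the Rothkopf excerpt. The crossover year this yields differs
# sharply from the report's 2013 estimate, showing how sensitive such
# projections are to the data window chosen.
us_1996, us_2008 = 290_000, 316_000
cn_1996, cn_2008 = 25_000, 184_000

years = 2008 - 1996
us_rate = (us_2008 - us_1996) / years   # ~2,200 papers/year
cn_rate = (cn_2008 - cn_1996) / years   # ~13,250 papers/year

# Year when China's fitted line overtakes the U.S. line
crossover = 2008 + (us_2008 - cn_2008) / (cn_rate - us_rate)
print(round(crossover, 1))
```

On this window the lines don’t cross until around 2020, not 2013 — a small demonstration that “a simple straight-line projection” is doing a lot of unexamined work in the quoted passage.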
A few days ago several bloggers felt it necessary to discuss the scientific status of economics (see Ryan Avent, Adam Ozimek, Matt Yglesias, and Jim Manzi). After reading through all those posts, a big part of me wanted to title this post: “Why do smart people discuss pointless questions?” But then I remembered that since I got a Ph.D. in applied physics and blog about science studies, it would be a tad hypocritical. Nevertheless, here is my take on why they’re all wasting their time.
In one sense science is a brand, a very powerful and coveted brand. So it is understandable that economists want to be associated with this brand. To do so, they go through the usual routine of showing that economists care about data, change their theories under new evidence, try to make testable predictions and so on. Here’s Ryan Avent:
Is economics a science? Let me first associate myself with Adam Ozimek’s comments here. If you want to say that economics isn’t a “hard science”, that might be all right, depending on just what you mean by it. If you mean that economists can’t run lab experiments and can’t predict outcomes as accurately as, say, chemists, then that’s acceptable to me. If you mean that economists have no experiments, or don’t use the scientific method, or something of that nature, then you’re dead wrong. The currency of the economics realm is evidence. When economists do research they form hypotheses, build models, gather data, test the models against the data, and publish their conclusions. If other economists try to get similar results and fail, the original result is called into question.
All of this is (I believe) true. Economists do attempt to gather data and test their conclusions. But then again, so do historians and plumbers! And while we clearly don’t consider them scientists, we often do find their findings meaningful. So it seems to me that the important question isn’t whether or not economics is science, but whether it can be useful. Here is where, I believe, we need a bit more particularization: What economics findings are most robust? What relevance do they have for policy? And which policies? Which findings are more tentative? How do we weigh them against political exigencies?
There are two main problems with these demarcation efforts. First, they encourage us to be lazy and take shortcuts. We should be doing the hard work of determining whether a certain piece of scholarship is relevant to our specific problem and whether there is expert consensus. Instead, the above authors seem to argue: “By my simple heuristics, economics is science and thus should be trusted.” But even physics, that most scientific of sciences, contains a long history of errors and frauds. Being branded as science is no guarantee of veracity.
Perhaps the bigger, more pernicious problem is that the mere act of demarcation closes our minds to research that may be useful. If we have it in our mind that science and only science belongs in policy debates, we’ve really hindered ourselves. Here’s a particularly baffling passage from Jim Manzi:
I’m not arguing that economics has produced nothing of value, but rather that its most useful outputs are more like those of historians than those of biologists. Draping the cloak of “science” over its findings can often be a rhetorical strategy designed to increase the leverage of economists in policy debates.
Well, there are instances when historians should be leveraged more than biologists in policy debates. As Sarah Mayeux showed in a wonderful post on trends in American incarceration rates, a historical perspective can be illuminating. What matters is not whether our policies exclusively make use of science, but whether they make use of sound evidence and research, whatever its form.
My last post discussed Lehrer’s column on the increased difficulty of making scientific discoveries. Lehrer should have stuck with that topic alone instead of pivoting off Tyler Cowen’s The Great Stagnation to ponder the relationship between scientific discoveries and standard of living:
I think it’s also worth contemplating the disturbing possibility that our cresting living standards might ultimately be rooted in the difficulty of making new scientific discoveries. After all, at a certain point the pursuit of reality is subject to diminishing returns – our asteroids will get so small that we’ll stop searching for them.
Lehrer often paints wonderfully nuanced pictures of science, so I’m a bit confused to see him write this. Living standards never have been and never will be “rooted” in new science. If they are rooted in anything, it is productivity increases related to innovation. The rule of law, tax structure, monetary policies, and capital investment all play a pretty big role here.
In fact, only since WWII has science been more than a minor player. Industrial revolution technologies were not strongly linked to the scientific revolution that preceded them, and may have depended more on a robust patent system than heroic scientists. Even if we grant that science has recently become critical, it may be more so in America than elsewhere. The Japanese economic miracle occurred despite a paltry level of science funding, and was spurred by careful industrial planning. Germany seems to have escaped the worst of the Great Recession despite spending relatively little on R&D. And while our standard of living may be “cresting”, the developing world is, well, developing quite well. So again, there is no straightforward link between science, innovation, and living standards.
The U.S. may indeed have a problem with economic stagnation, and it’s important to understand what exactly is going on. But casually assigning too much credit or blame to discovery is, for lack of a better term, pretty unscientific and doesn’t help.
Jonah Lehrer has again written a provocative piece, arguing that growing collaboration and teamwork among scientists is a response to “all the low-hanging facts [having] been found.” Today’s science is simply much too hard and complex to tackle alone. Lehrer quotes Samuel Arbesman:
If you look back on history, you get the sense that scientific discovery used to be easy. Galileo rolled objects down slopes. Robert Hooke played with a spring to learn about elasticity; Isaac Newton poked around his own eye with a darning needle to understand color perception. It took creativity and knowledge to ask the right questions, but the experiments themselves could be almost trivial.
Today, if you want to make a discovery in physics, it helps to be part of a 10,000 member team that runs a multibillion dollar atom smasher. It takes ever more money, more effort, and more people to find out new things.
Given that I wrote something similar just a month ago, I of course liked this passage. A few responders at Andrew Sullivan do critique the notions that science was ever easy or that we’ve reached the “end of discovery.” On the latter point, I agree that the meme is a bit overwrought. Over 1 million papers get published every year, and presumably most of them make a discovery of some form. Some might later be proven wrong, and some may be meaningless (Lehrer claims one-third of papers never get cited), but they are discoveries nonetheless.
Perhaps a better phrase is the slowdown of meaningful discovery. The fact that all the low-hanging fruit have been found doesn’t end the pursuit of knowledge. It just means we have to focus on smaller, more specialized problems that have less payoff (hence all those uncited papers), or focus on really, really hard problems that can’t be solved (hence all those corrections in medical research). The productivity slowdown in pharmaceutical R&D is one notable example of the former phenomenon. The graph is particularly striking (h/t Roger Pielke Jr.).
The rate of discovery ultimately matters because we think it affects our standard of living, a point Lehrer addresses towards the end of his post. Unfortunately, he really muddles the relationship between discovery and innovation. I’ll try to address this in my next post.
I should have linked to this earlier, but here’s the webcast and white papers from a 2-day NSF workshop on the science of science measurement. Rush Holt’s remarks on Friday are particularly good–they start at the 35-minute mark here and run for ~40 minutes. In the question and answer session, he refers to early research on the return on investment from science as “sketchy.” I found that quite amusing.