Science Philosophy and Practice: Research Funding and the Grant System
Most modern scientific research is expensive: the days when one could discover fundamental laws of physics using homemade tools, as Italian physicist Galileo Galilei (1564–1642) did at the dawn of the Scientific Revolution, are gone. Money for research comes primarily from three sources: private foundations such as the Rockefeller Foundation or the Gates Foundation; private corporations such as IBM; and U.S. government agencies such as the National Institutes of Health (NIH), the Pentagon, and the National Science Foundation (NSF).
A sum of money given to fund specific research is called a grant. Scientists wanting grants must apply for them, explaining in writing, in detail, why the proposed research is important and convincing potential funders that they are qualified and equipped to do it. The number of scientists seeking funding in any given field is always greater than the number that can be funded, so grant-seeking is competitive: Grant-givers must decide which research to fund and which not to fund. The need to persuade funders to give money shapes the growth of modern scientific knowledge in ways that are controversial. Some argue that the system works well; others contend that certain aspects and types of knowledge are neglected because funding institutions care less about them.
Today, science research, especially basic science research, is heavily dependent on grants from public and private institutions. By 2000, U.S. universities and colleges were receiving almost $20 billion annually for research programs. Most of this money was invested in science research. Grant-derived funding paid a range of costs associated with research, from test tubes to the salaries of technicians and professional investigators.
Historical Background and Scientific Foundations
Until recently, most scientific research was conducted by independently wealthy individuals in their spare time. Since the fundamental laws of mechanics, optics, electricity, and other aspects of science are discoverable using relatively simple, cheap equipment, this system worked well for centuries. By the early twentieth century, however, science was becoming much more complicated—and expensive. Increasingly, fundamental research in physics and other fields was carried out in laboratories funded by major universities. However, government funding of scientific research was still slight, and corporate funding of basic research was still unheard of.
World War II (1939–1945) changed this picture radically and permanently. Victory in that war was heavily influenced by technology: It became clear that governments needed weapons based on the newest science. Radar, for example, was secretly developed at the Massachusetts Institute of Technology (MIT) and in the United Kingdom, helping turn the tide of air war in Europe, while several governments raced to develop jet aircraft. The war project that depended most on basic scientific research was the drive to build an atomic bomb. The U.S. government began its massive, expensive, and intensely secret Manhattan Project in 1941 and exploded the world's first atomic bomb in 1945.
The scientists employed in the Manhattan Project were mostly not government scientists, but university professors and students hired into the program for the duration of the war. The Los Alamos Scientific Laboratory in New Mexico, formed to carry on the Manhattan Project, was managed by the University of California starting in 1943 (and still is, in part). The spectacular success of the project, along with other weapons made possible by scientific advances, made clear the relationship of national security to science, and the Manhattan Project created a precedent for government-university collaboration.
In the last year of the war, 1945, Vannevar Bush (1890–1974), Director of the Office of Scientific Research and Development, wrote a report for President Harry Truman (1884–1972) in which he urged strongly that the government fund all forms of scientific research, not only those with obvious, direct benefits for military technology. This was vital to long-term national survival, Bush argued: “We can no longer count on ravaged Europe as a source of fundamental knowledge.”
The government took Bush's advice, establishing the NSF in 1950. The NSF quickly began giving grants, and by 2008 was funding about a fifth of all federally supported science research in U.S. universities. Other federal funding comes from a patchwork of agencies including the National Aeronautics and Space Administration (NASA), the NIH, the Pentagon, and others. After World War II the NIH expanded its grant-giving for biological and medical research from $4 million in 1947 to $1 billion in 1974 to about $28 billion in 2007. Similar systems were created in other industrialized countries: Only governments could supply the large sums of money needed to conduct modern, fundamental research in physics, chemistry, and biology. Because of its large population and industrial sophistication, the U.S. government has been by far the largest single funder of scientific research in the world.
Efforts to balance the federal budget in the 1980s (e.g., the Gramm-Rudman-Hollings Act of 1985) and the end of the Cold War in the early 1990s led to cutbacks in U.S. science funding in some areas. Funding for science research derived from both private and federal grants increased throughout the 1990s, but competition for grant dollars grew much faster than the real funds available. Tight competition for federal funding caused universities to seek more funding from private companies, allowing industry to increasingly shape the course of research in the university system. In the early 2000s, the continued shifting of resources to military spending (by 2008, the U.S. military budget was almost half of total world military spending) and political pressure to reduce taxes led to renewed pressure on grant sources, increased competition in grant-seeking, and increasing control by industrial funders over some university research priorities. The cost of doing research continued to rise, so that a single grant dollar purchased less scientific knowledge with each passing year. From 2000 to 2006, although the NSF's grant budget grew by 44%, applications for funding grew even faster, resulting in greater competitiveness: 30% of grant applications to the NSF were funded in 2000, but only 21% in 2006.
While the grant system has funded most of the considerable scientific progress in the last half-century, it has also been criticized. Research that is likely to enhance technologies of destruction is favored by the military goals of much government grant funding, while research that is likely to yield patentable, profitable technologies is favored by industry. Forms of knowledge that do not yield profit, political payback, or weapons have been funded, but at much lower rates. In biology, for example, genetic engineering has been well-funded by industry because of its potential to produce patentable life-forms and high profits down the road: Ecology, paleontology, soil science, taxonomy, and other disciplines have been poorly funded.
Critics of the present grant system maintain that it has produced mediocre and misshapen science. Defenders argue that it has been effective. In the following section, some of these arguments are explored.
Modern Cultural Connections
Critics of the current grant process often focus on the difficulty of obtaining federal grants. They especially point out that grants are easier to obtain for applied research than for basic science research. Defenders counter that the existing system produces a steady output of scientific innovation, and that many grants awarded by public and private sources support good science at the most fundamental level. For example, some grants are designed solely to prepare undergraduate students for graduate education in science. Although certainly not basic-science research, such grants are important in the training of future scientists; in a sense, they are the most basic and fundamental investment in science.
The present system also contains checks and balances that are intended to promote good science by ensuring a distance between the research lab and the marketplace. Programs designed strictly for the marketplace are the antithesis of rigorous science, in which failure may be as informative as success: Recognizing this truth, many grant agencies (e.g., the NSF) explicitly refuse to grant money for the development of products for commercial markets.
It is true that over the last quarter century or so, there has been a shift away from basic science research to more applied science research within granting agencies such as the NIH. However, defenders of the grant system argue that this trend is balanced by the actions of other agencies to specifically encourage rigorous pursuit of basic science knowledge. For example, the NSF specifically discourages proposals involving particular medical goals (i.e., where the aim of the project may be the diagnosis or treatment for a particular disease or disease process).
The present grant system also seeks to provide special support for women, minority scientists, and scientists with disabilities. As with direct grants to students, grants to faculty at non-research colleges with primarily undergraduate students are designed solely to provide the most fundamental support of science: the development of the next generation of researchers. Grants can also be used to remedy a shortage of investigators in a particular area of research.
Most grant review processes seek to promote good science by allocating resources based upon the significance of the project (including potential impact on theory) and the capability and approach of the investigator or investigative team. Evaluating committees—especially when staffed with experts and functioning as designed—are able to help fine-tune research proposals so that methodologies are well-integrated and appropriate to the hypothesis advanced. In cases of research involving human subjects or potentially dangerous research (e.g., genetic alteration of microorganisms), the grant-review process also provides some oversight of procedures to assure that research projects are conducted with due regard to ethical, legal, and safety considerations: Federal agencies do not grant funding to proposals that do not explain how they will meet certain ethical and safety standards.
Some critics of the existing funding system argue that although grants are designed to promote good science, the process has become so cumbersome, clogged, and confused that despite noble intent, it increasingly encourages mediocre, “safe” science.
Increasing competition and dependence on grants to fund increasingly complex and expensive research programs have exacerbated pre-existing weaknesses in strained grant-evaluation systems. Moreover, the specific reforms designed to cope with increasing numbers of grant applications (e.g., triage and electronic submissions) are proving to have the unintended side effect of profoundly shaping the kinds of science research funded.
Grant awards, critics also argue, are rapidly becoming a contest of grantsmanship (the ability to write proposals and secure grants) rather than being decided on scientific merit. Emphasis on the form and procedures of the grant evaluation process, rather than on the substance of the science proposed, forces researchers away from the lab and into seminars on the craft of grant writing. Even more ominously for science, the investigators are forced, in many cases, to develop research proposals specially designed to please grant review committees. This impacts science research in several ways.
First, there is a loss of scientific diversity as proposals that have predictable outcomes are viewed as less risky investments of precious capital by grant evaluation committees. This drives research toward what critics of the current grant process term “safe science” and away from the types of risky research that are the likeliest path to more spectacular scientific insights and advances.
Second, as grantsmanship becomes increasingly important, new investigators fight an uphill battle to gain funding and build labs. Already several steps behind seasoned principal investigators who know how to craft strong proposals, new researchers often struggle along on smaller grants designed for new scientists. There is little funding of dissertation research: The NSF, for example, actively discourages graduate students from submitting grant proposals. Grants to scientists starting out on research programs are often insufficient. In fact, only about one out of four researchers seeking initial NIH funding actually applies for the easier-to-obtain grants designed for researchers making their first application for funding as a principal investigator (the leader of a research effort). Even more confining and debilitating to new researchers are early-development grants that carry restrictive clauses prohibiting researchers from seeking other types of funding.
Actual funding reflects an increasingly brutal reality for investigators at all levels. The grant process is extremely competitive. Most grant proposals are not funded, and the percentage of proposed projects funded has steadily declined since the mid-1980s to current levels at which only about 10% of proposals, overall, are ultimately funded. In this environment, some scientists and their sponsoring institutions become proposal mills—often putting out a shotgun pattern of many proposals in hope that one or two may get funded. The time cost is a staggering drain on scientists and scientific research. Many investigators spend more time on the grant application process than on actual research.
Models of evaluation often work against more open-ended, basic-science-oriented research proposals. Basic science proposals usually contain a wider range of possible outcomes than do more narrowly focused goal-oriented projects (e.g., projects regarding a specific clinical application); reviewers tend to regard this unpredictability as a negative and so give a lower project priority to the proposal under review.
This emphasis on predictable outcomes may distort the research process itself: Biasing research toward a particular goal may tilt interpretation of data. It is a well-known axiom of science that researchers, regardless of discipline, often find the results they are looking for because even the most intellectually honest researchers are prone to shade and interpret—quite unconsciously and unintentionally—data that correspond to expected results. Indeed, this is the whole rationale behind the double-blind study that is standard in so much medical research: People conducting the research must be kept from knowing which medications are “supposed” to work, lest they unintentionally feed their expectations back into the research process.
In sum, critics of the present grant system argue that it encourages mediocre science because it encourages predictability. Researchers who fail to predict all possible outcomes for a project in their grant applications receive worse priority scores and so are less likely to be funded. Under these conditions, research tends to become an exercise in producing predicted results. This fundamentally reshapes the intent of research and results, creating a weak foundation upon which to build future applied research.
Bibliography

Bush, Vannevar. “The Endless Frontier: A Report to the President by Vannevar Bush, Director of the Office of Scientific Research and Development.” National Science Foundation, July 1945. http://www.nsf.gov/about/history/vbush1945.htm#ch3.8 (accessed January 22, 2008).

Mervis, Jeffrey. “Grants Management: NSF Survey of Applicants Finds a System Teetering on the Brink.” Science 317 (2007): 880–881.

Rajan, T.V. “Would Harvey, Sulston, and Darwin Get Funded Today?” The Scientist 13 (1999): 12.
Brenda Wilmoth Lerner and K. Lee Lerner