Procurement: Overview

Military procurement necessarily stands apart from the mainstream of the American economy. This owes partly to the uniqueness of much military equipment, which requires special design, development, and production facilities, and partly to the military's mobilization needs, which often dictate maintenance of reserve production tooling and supplies. Uniqueness has grown more significant with the advance of technology, and especially with the American military's Cold War push for technological superiority. Military mobilization needs have grown less important with the invention of nuclear weapons, which made a long conventional war inconceivable, and with the development of conventional weapons too complex to be produced rapidly in any case.
Neither of these factors explains why the American military buys even simple things in complicated ways, however. While reports of $465 hammers, $10,000 coffeemakers, and sixteen‐page technical specifications for sugar cookies prompt cries of fraud or stupidity, more often they result from the fact that defense is a public good, financed by a large federal bureaucracy dispensing public money advanced by a pluralistic political process.
Historically, military procurement has involved three major sets of arrangements, in proportions that have changed over time. The simplest arrangement has involved the purchase from commercial vendors of commercial items, perhaps slightly modified for military use. This was the principal mode of military procurement in the early years of the republic, when military technology differed little from the muskets and saddles people normally owned. The procurement challenge then lay in meeting the military's need for large production quantities and interchangeable parts. As weaponry has grown more sophisticated, the military has moved to other modes of procurement. But today's military still buys office supplies, computers, and some motor vehicles from commercial vendors.
The development, production, and maintenance of military equipment has also been carried out by government‐owned laboratories, arsenals, and depots. The “arsenal system” originated early in the 1800s and grew over the following century into an elaborate array of specialized facilities. Arsenals contributed production techniques as well as weapons; work on mass‐production techniques at the Springfield and Harpers Ferry arsenals, for example, contributed to the nation's initial industrial development—an early example of so‐called spin‐off, wherein defense research produces items of commercial value.
But the arsenals also became famous for their stodginess and resistance to technologies they did not themselves invent—the “not‐invented‐here” syndrome. As technology became more important to military power, the arsenals came under increasing attack. Most of the original arsenals were closed in the twentieth century. Although some labs and depots still operated, these too were disappearing, or at least shrinking in size, as the defense budget fell and the military services “outsourced” such activities to private firms or operators. Although a new set of government‐owned facilities grew up during and after World War II to develop and build nuclear weapons, with the end of the Cold War the nuclear weapons facilities too were shrinking and scrambling for new missions.
The most pronounced break with the older arsenals came in the years just after World War I, as the services sought to explore the new aircraft technologies demonstrated during that war. Government arsenals were unable to keep up with these relatively fast‐moving technologies; in the time it took a government facility to draw up specifications for a new aircraft engine, for example, still newer models would render those specifications obsolete. The army and naval air arms thus turned to the era's aircraft entrepreneurs, who were eager for government contracts to help finance their fledgling companies. Contracting procedures were complex and never wholly satisfactory, since the exploratory nature of the work made it almost impossible to specify costs in advance, or to run formal competitions against established specifications. Thus, there was far more prototyping than actual production of aircraft in the interwar years. Still, almost all of the aircraft used during World War II were prototyped by private aircraft firms before the war began.
The Cold War saw a massive expansion of this mode of military procurement, stemming partly from the importance of aircraft and missile procurement and partly from the presumption that these firms were far more innovative than the arsenals. In the 1960s, for example, Secretary of Defense Robert S. McNamara forced the army to shift from what remained of its arsenals to private contractors for much of its procurement. During the Cold War, military procurement came to be highly concentrated in aerospace giants such as Boeing, Lockheed, and General Dynamics; normally, the top twenty‐five defense contractors won nearly half of all procurement dollars awarded annually. It was this form of military procurement that President Dwight D. Eisenhower referred to as the “military‐industrial complex.”
What worried Eisenhower was the political clout such firms seemed to wield. The pioneer aircraft entrepreneurs excelled at lobbying Congress as well as designing innovative aircraft, and thus managed to pull down more than half of all military research and development money allocated in the years between World Wars I and II. Lobbying activities grew in the 1950s. Although the arsenals had always maintained close ties to their local legislators, commercial firms could lobby more aggressively. They could also spread the award of subcontracts widely, at least partly to seek broader political support on Capitol Hill. Although the literature on defense contractors questions the effectiveness of these tactics—permanent installations like arsenals or depots seem always to have had more clout with legislators—there is no denying a political dimension to defense contractors' activity.
The political nature of their market slowly shaped these “private” firms into a form more properly labeled “quasi‐socialized.” Lacking the competition or price signals found in a real market, and facing steadily growing government regulation, defense firms generally acquired layers of bureaucracy that mirrored the military bureaucracy they served. Commercial firms owning defense facilities have tended to keep these divisions quite separate within the firm, for example, if only because defense accounting techniques differ substantially from those used commercially. More important, the military's drive for technological superiority slowly pushed many defense firms to levels of technical sophistication well beyond what could be marketed commercially. As defense spending fell in the wake of the Cold War, few defense firms were able to “convert” to commercial production except by simply buying commercial subsidiaries. Thus, the nation's defense giants sold out (General Dynamics), purchased their way into commercial sectors (Rockwell), or merged into huge defense conglomerates (Lockheed‐Martin, Northrop‐Grumman).
Americans want these firms to be both efficient and creative. But no one can measure their efficiency, since no one knows what the world's best fighter aircraft or tank “should” cost (although one credible study comparing the cost of U.S. and European military aircraft concluded that American weapons cost more but performed better). On the other hand, these firms have clearly been creative; in the nineteenth century, U.S. military technology often lagged behind Europe's; during the Cold War, it moved into first place worldwide in many categories. Overall, Americans seem to have paid a premium for premium technology.
Yet it remains to be seen whether the dominant Cold War mode of military procurement can handle the challenge posed by modern information technologies. Like aircraft technologies in the 1920s, today's electronics technologies have military utility but are advancing much faster than the established procurement apparatus can handle. Thus, the commercial world now leads the military in many electronics sectors, and defense procurement reform seeks to forge links between the military's own processes and commercial electronics firms. Procurement routines are deeply ingrained and difficult to change, of course, raising questions about whether such reform will succeed. If it does, it will shift the mode of military procurement back toward that of the early years of the republic: the purchase of military technologies from commercial vendors.
[See also Industry and War.]
Merton J. Peck and Frederic M. Scherer, The Weapons Acquisition Process: An Economic Analysis, 1962.
Thomas L. McNaugher, New Weapons, Old Politics: America's Military Procurement Muddle, 1989.
Kenneth R. Mayer, The Political Economy of Defense Contracting, 1991.
Paul A. C. Koistinen, Beating Plowshares into Swords: The Political Economy of American Warfare, 1606–1865, 1997.
Paul A. C. Koistinen, Mobilizing for Modern War: The Political Economy of American Warfare, 1865–1919, 1997.
Paul A. C. Koistinen, Planning War, Pursuing Peace: The Political Economy of American Warfare, 1920–1939, 1997.
Thomas L. McNaugher

Procurement: Aerospace Industry

The relationship between the U.S. aircraft industry and the military has always been close. Fixed‐wing piloted flight was technologically demanding and required large sums of capital. Early inventors turned to the military services for markets, and the U.S. Army Signal Corps ordered its first craft from the Wright brothers in 1908. Although American capabilities lagged behind those in Europe, the U.S. government spent $350 million during World War I to produce 14,000 military airplanes. It also created the National Advisory Committee for Aeronautics (NACA) to explore aircraft science and advise the military.
Military patronage produced an unusually cooperative structure in the early aircraft industry. The government engineered the formation of the Manufacturers Aircraft Association, which pooled patents and shared plane‐making methods, to moderate ferocious competition and avoid patent battles that might delay wartime production. After World War I, orders collapsed by over 80 percent. The association successfully pressed for government sponsorship of airmail services and an infrastructure of airports, weather reporting, and flight control, as well as continued military contracts to develop new aircraft. During World War II, President Franklin D. Roosevelt's call for 50,000 aircraft led to the expansion of small companies like Boeing, Douglas, North American, Consolidated (later General Dynamics), McDonnell, and Grumman by tenfold or more. Aircraft accounted for more than 12 percent of all U.S. wartime manufacturing output.
After World War II, when aircraft orders plunged from a peak of $16 billion to $1 billion, the aircraft industry campaigned vigorously with the newly independent U.S. Air Force for a public commitment to air defense systems, expansion of domestic and international air transport, and the preservation of a strong aircraft manufacturing industry. The industry relied on its close ties with the military to help shape military markets and government notions of defense necessities. Some scholars have argued that this “technology push” contributed to the development of the Cold War arms race.
Contracts ballooned in the Cold War and were welcomed by economists advocating “military Keynesianism” to achieve full employment via public spending. Competition between the piloted bomber and the ballistic missile groups within the air force, and between army and navy bids for helicopters and their own fighter aircraft, resulted in the production of a broad array of aviation weapons systems, which kept most of the major aircraft companies in business. In 1958, NACA became the National Aeronautics and Space Administration (NASA), with huge additional projects for the industry. The size, speed, and aggressiveness of the industry's development led President Dwight D. Eisenhower in 1961 to warn the nation about the dangers of a permanent “military‐industrial complex.”
The renamed “aerospace” industry has been favored by this de facto but unofficial industrial policy. The military acted as the underwriter of aerospace development after World War II, encouraging development of the jet engine and the communications satellite. In 1989, the Pentagon paid for 82 percent of the aerospace industry's research and development effort and purchased 65 percent of its output. As a result, aircraft remain the nation's strongest manufacturing export, and U.S. companies dominate the world market for commercial as well as military aircraft. Since the end of the Cold War, the industry has undergone deep retrenchment, consolidating into fewer and larger firms (Northrop‐Grumman, Lockheed‐Martin), and relying more heavily on arms exports.
[See also Industry and War.]
John Rae, Climb to Greatness: The American Aircraft Industry, 1920–1960, 1968.
G. R. Simonson, ed., The History of the American Aircraft Industry: An Anthology, 1968.
Martin van Creveld, Technology and War, 1989.
Ann Markusen, Peter Hall, Scott Campbell, and Sabina Deitrick, The Rise of the Gunbelt, 1992.
Ann Markusen

Procurement: Government Arsenals

As industrial products, weapons have unique requirements: production rates must radically increase in times of war and rapidly decrease in peacetime; cost, although a significant factor, matters less than uniformity, precision, and performance; manufacturing often involves a mix of mass and batch production uncommon in commercial markets. To meet these special demands, the U.S. government sometimes maintained its own production facilities—armories for small arms, and arsenals for guns, carriages, powder, and other equipment.
Why should government compete with private industry? Critics argued that lower production costs at arsenals represented unfair competition (for they need not make a profit), while conversely, higher costs represented inefficiency. Proponents of arsenals pointed to government production as a “yardstick” to gauge costs in private industry, and to arsenals' ability to nurture costly new technologies for long periods. Military arsenals, as state‐owned factories in a capitalist system, have historically raised problems over the government's relationship to technology.
Prior to 1794, the U.S. Army procured arms solely from private contractors, an arrangement that proved expensive and unreliable. Following the War of 1812, Congress placed the Army Ordnance Department in charge of production at five government‐owned arsenals of construction: Allegheny (Pittsburgh); Frankford (Philadelphia); Washington, D.C.; Watervliet (upstate New York); and Watertown (outside Boston); and at the two armories, Springfield (Massachusetts) and Harpers Ferry (Virginia). These establishments operated as a cross between industrial plants and military facilities: managers and executives were ordnance officers (usually trained as engineers), but the workforce was civilian. Officers had duties similar to those of industrial managers but with no worries about marketing, sales, or profits and losses. Thus, they could focus their efforts on production, efficiency, and technical management. Ordnance officers tended to rotate through their assignments every few years, but workers remained at the arsenals for much longer periods. Hence the core of technical and manufacturing expertise resided in highly skilled machinists and workmen.

This “armory practice,” developed along a difficult but steady path, laid the foundations of the so‐called American System of Manufacturing, which became key to the late nineteenth‐century era of large‐scale industrialization. By the 1850s, the armory system, characterized by highly mechanized precision production, manufactured rifles with genuinely interchangeable parts—a goal inventor Eli Whitney had promised forty years before but could not deliver. Government‐owned manufacturing facilities provided a stable institutional environment for technology to mature over several decades, despite uncertain economic returns.
This slow, expensive development process proved essential to meeting the Civil War's unprecedented demand for arms, especially with the loss of the Harpers Ferry Armory in 1861. Between 1860 and 1865, the Springfield Armory produced over 800,000 weapons, more than it had produced in all its previous 67‐year history. This accomplishment depended on extensive subcontracting to private firms, made possible by the armory's hard‐earned expertise with interchangeable parts. When the war ended, the entire system shrank substantially, but the private contractors had been seeded with the American System. Many failed for lack of government business, but others applied the new manufacturing techniques to sewing machines, typewriters, agricultural equipment, business machines, and even bicycles, thus spurring the great wave of American mass production.
Despite these feats, the armories and the arsenals were often subject to criticism. Because of their unique organization, they tended to focus innovative energies on production and not on design, hence the Ordnance Department's often‐remarked failure to introduce breechloaders or repeating rifles for the common soldier in the Civil War. This technical conservatism owed less to narrow‐mindedness than to the Ordnance Department's appreciation for the difficulty of producing new weapons in large numbers. Still, the austerity of the post–Civil War military budget induced stagnation. By 1900, the U.S. Army's small arms were at least a decade behind those of European militaries, which depended on private companies like Krupp and Vickers for new technology. Even in production, the arsenals could not keep up. In the early twentieth century, ordnance officers introduced new techniques, such as scientific management, to streamline operations, but they ran headlong into political and labor opposition. Arsenals, unlike private industry, were subject to congressional oversight, and arsenal workers, unlike their counterparts in private companies, could seek redress of their grievances by appealing to political patrons.
World War I caught the arsenal system unprepared, and only heavy reliance on weapons from the British and French saw the United States through the critical period of mobilization. While the Ordnance Department learned important lessons from the experience, peacetime budgets between the world wars meant that it could do little to implement improvements. Still, the army did accomplish some critical procurement planning during the 1930s, with the consequence that production ramped up more smoothly for World War II, although greatly aided by America's delayed entry into the war. By the 1940s, the increasing complexity and scientific sophistication of weapons tended to favor government laboratories and private companies instead of the older arsenals. Even with small arms, the armory had difficulty introducing new technology; the debacle over the M‐16 rifle resulted in the closing of the Springfield Armory in 1968. During the Cold War, the military gradually came to rely on large, diversified corporations, what some analysts have called “private arsenals.” These institutions have proven technically innovative, if expensive, the government having lost the arsenals' yardstick function. The end of the Cold War, however, highlighted one great advantage of the arsenal system, conspicuous in its absence: the ability to cut back rapidly in peacetime.
[See also Industry and War.]
Felicia Johnson Deyrup, Arms Makers of the Connecticut Valley, 1948.
Constance M. Green, Harry C. Thompson, and Peter C. Roots, The Ordnance Department: Planning Munitions for War, 1955.
Hugh Aitken, Scientific Management in Action: Taylorism at the Watertown Arsenal, 1960.
Merritt Roe Smith, Harpers Ferry Armory and the New Technology: The Challenge of Change, 1977.
Edward Clinton Ezell, The Great Rifle Controversy, 1984.
James J. Farley, Making Arms in the Machine Age: Philadelphia's Frankford Arsenal, 1816–1870, 1994.
David A. Mindell

Procurement: Influence on Industry

In the United States after World War II, a military‐industrial complex developed, quite unlike its counterparts in other advanced industrial countries. A distinctive set of firms in a select set of industries emerged as dominant suppliers to the Pentagon, and in turn became beneficiaries of a de facto industrial policy. During World War II, the Pentagon appropriated the strongly centralized and strategically planned New Deal state apparatus, creating a permanent security state that endured throughout and even beyond the Cold War. Traditional “hot war” suppliers such as the auto and machinery industries turned their sights back on commercial markets following the war, but the newly expanded aircraft, communications, and electronics (ACE) industries remained dependent upon military markets for both research monies and sales.
The ACE complex centered on a set of firms that subsequently climbed the ranks of the Fortune 500 list of the biggest corporations—aerospace companies like Grumman, Rockwell, Northrop, General Dynamics, and Lockheed, and communications/electronics firms like Hughes, TRW, and Raytheon. Boeing, successful in both commercial and military markets, was an exception. As commercial shipbuilding declined, shipyards like Newport News, Bath Iron Works, Litton, and Todd also became increasingly defense‐dedicated. In a market that operated as a bilateral monopoly (one buyer and one seller, each dominating its side of the market), these firms flourished under military patronage and were kept afloat by “follow‐on” procurement practices. Pentagon oversight practices generated a specialized business culture that stressed high performance and timeliness over cost‐consciousness, rendering military contractors increasingly ill‐equipped to compete for commercial sales.
During the early postwar period, advances in jet engines, navigation and guidance systems, and new forms of rocket propulsion yielded significant technologies for the commercial sector, giving American aircraft, communications, and electronics industries a head start in international competition. Through the end of the century, U.S. net exports remained dominated by these sectors plus arms and agricultural goods. Increasingly, however, the esoteric nature and exorbitant cost of military requirements curtailed spin‐off, while commercially oriented economies like Japan and Germany were able to capitalize on U.S. defense‐underwritten inventions in electronics, robots, and computers.
In the past few years, scholars have begun to question the contribution of the Cold War military‐industrial effort to the American economy. Consuming more than $4 trillion since the 1950s, on average between 5 and 7 percent of GNP annually, much of it deficit‐financed, the military‐industrial complex has siphoned off a large portion of the nation's scientific and engineering talent and its capital investment funds. The relatively poor postwar performance of the American auto, metals, machinery, and consumer electronics industries can be attributed in part to this relative starvation of resources and the absence of similar industrial incentives.
Its costliness has been exacerbated by the spatial segregation of much of the complex from the traditional industrial heartland, inhibiting cross‐fertilization and requiring new public infrastructure in “Gunbelt” cities and areas such as Los Angeles, San Diego, Silicon Valley, Seattle, Colorado Springs, Albuquerque, and Huntsville. The dependency of these firms, industries, and regions on the Pentagon budget has made it more difficult to adjust to post–Cold War realities, especially with associated geopolitical shifts in political representation.
[See also Consultants; Economy and War; Industry and War.]
Merton J. Peck and Frederic M. Scherer, The Weapons Acquisition Process: An Economic Analysis, 1962.
Seymour Melman, The Permanent War Economy: American Capitalism in Decline, 1974.
Gregory Hooks, Creating the Military‐Industrial Complex, 1992.
Ann Markusen and Joel Yudken, Dismantling the Cold War Economy, 1992.
Ann Markusen

Procurement: Military Vehicles and Durable Goods Industry

The maxim “In war the best is always the enemy of enough” describes the U.S. Army's experience with the wagon and truck industries. The army has relied historically on a mix of large public arsenals, armories, and depots, and a number of civilian producers to meet its needs. But procurement of wagons and trucks, its primary transport vehicles, has never followed that pattern. During the nineteenth century, American wagon makers constituted a mature industry—high‐volume manufacturers of quality goods—with sufficient political power to prevent the establishment of competing public production facilities. (American automobile manufacturers occupied a similar position in the twentieth century.) Following the War of 1812, the Quartermaster Bureau, which procured most of the army's general‐purpose vehicles, established standard specifications for wagons and bought them from large private wagon makers like Studebaker, Espenschied, and Murphy. After 1840, certain assemblies and parts like wheels and axles were interchangeable, but industry practice was to adapt off‐the‐rack commercial lines to military demands. They were not the best wagons to be had, but they were good enough, and could be procured in time and in sufficient numbers to meet military needs.
Between 1906, when the army began to experiment with motor transport, and 1937, there were two attempts to modify those traditional procedures. In 1913, the Quartermaster Bureau developed a working relationship with the new Society of Automotive Engineers (SAE), and in mid‐1916, a team of Quartermaster, Ordnance, and SAE specialists designed a fleet of standardized, noncommercial military trucks that the government attempted to place in production after entering the war in 1917. The idea was to contract for components throughout the industry and assemble the trucks at central locations. Only the 3‐ton Standard B “Liberty” truck reached production before the armistice. Resistance to an independent design was widespread in the automobile industry. Manufacturers like the Four Wheel Drive Company, Marmon, Reo, White, and Ford argued that their own commercial models were sufficient, and often refused Liberty B contracts. Parts and subassemblies from less experienced manufacturers would not interchange. Assembly of completed vehicles was slow to get underway, and as a result, the American Expeditionary Forces had to rely on much Allied equipment. In comparison with the British and French, the Americans were often short of truck transport.
After the war, the Quartermaster Bureau complained that it had not been able to get the kind of trucks it needed from private producers and spent over a decade designing its own Quartermaster Standard Fleet. The automobile industry insisted that its trucks were adequate and lobbied successfully to prevent the introduction of the Quartermaster designs. A compromise in 1937 brought a return to traditional practices. The army set general standards and specifications, and truck makers—General Motors, Dodge, Studebaker, Ford, and others—supplied “modified commercial” vehicles like the 2.5‐ton general‐purpose truck (“Deuce and a Half”) in quantities sufficient to meet wartime needs, while specialized producers like Mack, Diamond T, and Reo built 4‐ and 6‐ton trucks and semitractors. (Ironically, Willys‐Overland, according to many the original developer of the 1/4‐ton General Purpose Vehicle “Jeep,” built relatively few of these wartime vehicles itself, allegedly because of its modest engineering and production capability.) Ultimately, American industry produced approximately 3 million military trucks during the war, and Gen. George C. Marshall asserted in 1945 that American truck transport, especially the Deuce and a Half and the Jeep, was “the greatest advantage in equipment” the United States possessed.
Since 1945, the practice of building on industry strength to supply general‐purpose vehicles economically and in adequate quantity has remained most effective. But military‐industrial institutional memories have, on occasion, failed. Again, specially designed trucks like the complex low‐pressure‐tired, flex‐bodied, mid‐engined, deafening “GOER” (built by Caterpillar) have proved less successful than anticipated, and off‐the‐rack vehicles have not held up well. It remains to be seen whether the specially designed “Hummer”—a stocky, wide‐stanced, low‐profile, state‐of‐the‐art vehicle built of space‐age materials and intended to replace the Jeep—will secure a place in the civilian market sufficient to reduce its costs of production.
[See also Industry and War.]
Erna Risch, Quartermaster Support of the Army: A History of the Corps, 1775–1939, 1962.
Fred Crismon, U.S. Military Wheeled Vehicles, 1983.
Daniel R. Beaver

Procurement: Munitions and Chemical Industry

The chemical industry has been strategically important to the U.S. military since World War I. As late as the Spanish‐American War in 1898, the only military explosive was black powder, the ancient Chinese mixture of charcoal, saltpeter (potassium nitrate), and sulfur. Only a year later, the British employed two powerful new chemical‐based explosives, smokeless powder and picric acid, in the Boer War in South Africa. Smokeless powder, made from nitrocellulose obtained by reacting cotton fibers with nitric acid, was a powerful propellant that did not generate the smoke that previously had revealed the firer's position. The second new explosive, picric acid, was used as a high explosive in artillery shells; it was derived from chemicals found in coal and had been used as a yellow dye for textiles. By the beginning of World War I, the Germans had developed another high‐explosive compound, trinitrotoluene (TNT), which soon became the most widely used high explosive.
The American military, especially the navy with its large‐gunned battleships, had been experimenting with smokeless powder and high explosives since the 1890s. The Dupont Company, the nation's leading producer of black powder and dynamite, had worked with the army and navy on smokeless powder. Dupont hoped to transfer the skills it had acquired in nitrating glycerine to make dynamite to nitrating cotton to make smokeless powder. When World War I began in 1914, Dupont was the only company in the United States that manufactured smokeless powder. Over the next several years, Dupont and a few other American companies—most notably, Hercules, split off from Dupont in an antitrust suit settlement—built large new plants to supply the Allies with smokeless powder. In two years Dupont sales increased from $25 million to $318 million and profits soared from $5.6 million to $82 million. Dupont used these profits to diversify its business into dyestuffs, plastics, and paints.
When the United States entered the war in April 1917, the government made contracts with Dupont and other companies on terms much more favorable to the purchasing agency than the desperate Europeans had received. Dupont even built two huge smokeless powder plants for the government in Tennessee and Virginia. Many other smaller chemical companies, such as Dow Chemical and Allied Chemical, grew and prospered by producing chemicals used to make high explosives and poison gases during the war.
The mutual dependence of the American munitions industry and the Allies in World War I led some critics in the mid‐1930s to attribute American participation in the war to the influence of the munitions industry on the U.S. government. After Senate hearings chaired by Gerald Nye of North Dakota, Congress passed a series of neutrality acts prohibiting the sale of munitions to belligerent nations.
When the United States entered World War II, the now mature American chemical industry became a key component of the arsenal of democracy. It turned out explosives in much greater quantities than in World War I; contributed new materials such as nylon and synthetic rubber; and played a critical role in building atomic bombs. The synthetic rubber project was critical to the war effort because the Japanese had cut off the supply of natural rubber from Asia. Within two years, a massive government‐sponsored cooperative program, including oil, chemical, and rubber companies, established a new synthetic rubber industry. In the Manhattan Project, companies such as Dupont, Union Carbide, and Tennessee Eastman helped construct and operate the nuclear materials plants at Oak Ridge, Tennessee, and Hanford, Washington. In the 1950s, the government contracted with Dupont and Dow to build and operate nuclear facilities at Savannah River, South Carolina, and Rocky Flats, Colorado.
[See also Industry and War; Nuclear Weapons.]
John Kenly Smith

Procurement: Nuclear Weapons Industry The nuclear weapons industry developed after the end of World War II at facilities built for the Manhattan Project. The industry soon spread to seventeen isolated sites across the United States. These sites became the main economic support for their host regions, and this in turn created continuous political pressure for nuclear weapons spending.
The early U.S. lead in nuclear weapons began to disappear in the 1950s as the Soviet Union built a nuclear force of its own, patterning its research laboratories and early delivery systems directly on U.S. sites and models. For the next thirty years, the United States and the USSR engaged in a massive nuclear arms race that pumped huge sums of money into the U.S. nuclear weapons complex. The total amount paid by the United States for nuclear weapons from 1940 through 1996 was almost $5 trillion in 1996 dollars. This spending made nuclear weapons one of the two most expensive government projects in the history of the United States (the other being Social Security).
After START I was ratified and nuclear testing stopped, the weapons production complex shrank to four sites by 1998: warhead pits are developed and produced at Los Alamos National Laboratory in New Mexico and Livermore National Laboratory in California; the remaining warhead parts are produced at Sandia National Laboratory in New Mexico; and warheads are assembled at the Pantex plant in Texas. This put significant economic pressure on the remaining sites in the nuclear production network, and a number of those sites—Oak Ridge (Tenn.), Savannah River (S.C.), the Idaho National Engineering Laboratory (Idaho), and Hanford (Wash.)—attempted to salvage a nuclear mission by using or reprocessing nuclear materials for other applications such as energy production. None of these ventures proved economical, and each would require major government subsidies to survive.
With the continued ban on nuclear testing and likely future cuts in nuclear warheads, the remaining nuclear weapons facilities and the regions in which they reside are threatened with large job losses. In response to these threats, the weaponeers in the national laboratories, in conjunction with their political representatives, proposed a new program to manage existing warheads and to design and computer‐test new ones. This “Science‐Based Stockpile Stewardship” program was funded in 1998 at an annual level equal to the cost of two Manhattan Projects.
Environmental and safety pressures from federal and local sources, as well as loss of mission for the nuclear weapons industry, have caused the remaining sites in the nuclear weapons complex to concentrate on cleaning up the massive amounts of nuclear waste produced during the arms race. As a result, about $4.5 billion was spent in 1998 to clean up contaminated sites. Massive amounts are also dedicated to building storage sites in New Mexico and Nevada for nuclear waste. These cleanup and storage programs will eventually consume hundreds of billions of dollars and are expected to continue for forty years. At many sites they will provide as much employment and economic stimulus as the original weapons programs that created the waste.
[See also Consultants; Economy and War; Industry and War; Nuclear Weapons.]
William J. Weida, Regaining Security: A Guide to the Costs of Disposing of Plutonium and Highly Enriched Uranium, 1997.
Stephen I. Schwartz, ed., Atomic Audit, 1998.
William J. Weida

Procurement: Ordnance and Arms Industry Since the 1790s, the American army has procured ordnance through a mixed system of government and private manufacturers. Anxious to have a domestic source of weapons, the government established early arsenals to turn out muskets. These arsenals had produced only a small quantity of weapons when the nation faced possible war with France in 1798. As a result, the army contracted for additional firearms from private entrepreneurs. Only a few of the manufacturers completed their contracts, but a precedent had been established for using the private sector to supplement government production.
In the first part of the nineteenth century, the army adopted a policy of expanding its arsenals and retaining private firms on a long‐term basis. In both arsenals and private firms, the Ordnance Department evolved an ideology of uniformity in arms manufacture that developed and spread the principles of the so‐called “American System of Manufacturing,” characterized by mass production of standardized interchangeable parts and by tighter management control and supervision.
In the Civil War, because of the rapid buildup of the Union army, government arsenals and private contractors were unable to meet initial goals, forcing the army to purchase firearms in Europe. By 1863, however, the combination of profitable contracts for private firms and increased production at arsenals enabled domestic production to exceed demand. After the Civil War, government contracts for weapons were practically suspended and the army depended upon its arsenals.
In the two world wars, the army relied heavily on private firms for its weapons because its arsenals lacked the capacity to meet the demands of modern war and it was not deemed wise to build huge, expensive arsenals for war production that would largely stand idle in peacetime. During World War II, private arms firms like the Winchester Company and the Remington Arms Company were major suppliers of weapons, as were firms not usually involved in arms production like the Chrysler Corporation, the General Electric Company, the General Motors Corporation, and the Singer Sewing Machine Company.
During the 1960s, the Department of Defense, in an effort to end the long‐standing rivalry between combat soldiers and military technicians by separating design and doctrine development from production, drastically reduced the army's own production capacity. Since then, the army has relied primarily on a group of quasi‐public industrial suppliers for weapons (the army still produces weapons today at the Rock Island, Illinois, and Watervliet, New York, arsenals). These suppliers, such as the General Dynamics Corporation and the United Defense Company, while private corporations, often use government‐owned equipment and depend heavily on government contracts.
The mixed system has generally worked well in ordnance procurement. Government arsenals set production standards, improved production methods, trained technicians, and provided data on costs, while private firms contributed improved designs and production methods and the industrial base for large‐scale production in wartime. But in recent years the expanded reliance on private firms has prompted concern that undue pressure can be exercised in favor of special economic interests in the selection of weapons.
[See also Industry and War; Military‐Industrial Complex; Weaponry.]
James A. Huston, The Sinews of War: Army Logistics, 1775–1953, 1966.
Merritt Roe Smith, “Military Arsenals and Industry Before World War I,” in B. Franklin Cooling, ed., War, Business, and American Society: Historical Perspectives on the Military‐Industrial Complex, 1977.
John Kennedy Ohl

Procurement: Shipbuilding Industry The shipbuilding industry includes new construction as well as modernization, overhaul, and repair of existing ships. Naval warships average a thirty‐year life, while commercial ships may last longer. Most ships require several major overhauls or modernizations during this lifetime.
The first commercial ship built in America was a 30‐ton bark in 1607, and the first oceangoing vessel was launched in 1631. Mercantilist England saw New England as a source of naval stores and fish. Because the colonies had all the natural resources required to build ships, they soon became England's major provider. By 1750, there were more than 125 shipyards in America producing faster ships at costs 30 to 50 percent less than in England.
The indigenous shipbuilding industry was also fueled by local demand for fishing boats, water transport between colonies, and delivery of American raw materials and produce to England and the Caribbean to exchange for needed manufactured goods. Tobacco, cotton, molasses, and then the slave trade all helped sustain the industry, as did the China trade after 1783.
Robert Fulton produced the first commercially viable steamboat in 1807, and by 1820 steamships were crossing the Atlantic. The first iron hull was floated in 1825, but American shipping and shipbuilding peaked in 1855 and then began a decline broken only by wartime spending programs during the Civil War and the Spanish‐American War.
As shipbuilding turned from the graceful clipper ships of the 1850s to steel vessels, the competitiveness of U.S. shipyards declined because the Europeans took the new technology more seriously than the Americans did. U.S. iron works put their energy and innovation into building railroads. Consequently, the price of American steel never became internationally competitive, and shipyards languished. Shipbuilding grew spectacularly in the 1890s and early 1900s because of large navy orders for the new steel‐hulled “Great White Fleet,” coupled with commercial fleet replacement. The decline set in again rather quickly, however.
World War I caused a major rush to build the “bridge of ships” to Europe. Established firms were booked solid with warships, while new yards were started to undertake a crash merchant fleet building program. Bureaucratic delays were such that most of the ships (80 percent) were completed after the war was over. Postwar depression dropped prices, and shipbuilding stagnated as idle commercial ships became common and warships were limited by the Washington Naval Arms Limitation Treaty of 1922.
The almost sixty years following the Civil War offered minimal hope for a sustained revival in shipbuilding. It was obvious that political leadership would not consider expending massive public funds to support an American‐flag merchant fleet. This required shipbuilders to fall back on naval warship production.
Spasmodic congressional intervention with subsidies beginning in 1924 was required to build even a few commercial ships. In preparation for World War II, a second “emergency” shipbuilding program, the Liberty ships, was begun in 1939. Some 4,732 easy‐to‐build and simple‐to‐operate maritime ships were built between 1942 and 1945. A major naval shipbuilding program lasted from 1938 to 1945. A massive movement of labor to shipyards on the East, West, and Gulf Coasts was undertaken to complete this effort successfully.
At the end of World War II, the United States owned 60 percent of the world's tonnage, yet decline of the merchant marine began immediately. By 1948, the sale of 1,746 ships to U.S. and international operators had been completed. During the Cold War, a 1970 law authorized subsidies for building 300 new merchant vessels over the next 10 years, but a world economic slump driven by rising oil prices hindered this program, and only 83 ships were delivered.
Even new technologies and designs pioneered in the United States could not make American yards competitive for commercial ships because of high material and labor costs and outmoded shipyard processes in construction, as well as exorbitant operating costs. South Korea, Japan, and Taiwan developed highly automated shipyards that U.S. industry simply could not match. Nevertheless, American shipyards were busy during the 1980s, as President Ronald Reagan presided over one of the largest peacetime expansions of the navy in U.S. history.
[See also Navy, U.S.]
Clinton H. Whitehurst, Jr., The U.S. Shipbuilding Industry, 1986.
K. Jack Bauer, A Maritime History of the United States, 1988.
William D. Smith

Procurement: Steel and Armor Plate Industry (1865–1918). The battleship era between the Civil War and World War I brought about an intensification of business‐government relationships, the origins of what some historians term the “military‐industrial complex” or “command economy.” Intense interaction between naval officials and leading steelmakers was necessary to procure the latest and most effective armor, possession of which was a necessity for major warships. Certainly, no private‐sector market existed for huge steel plates up to 22 inches thick, costing fifteen to twenty times more than steel rails. America's building of a “Great Power” navy, then, required mastering this technology. However, government incentives given to private steelmakers fanned Populist and Progressive criticism of big business. In these ways, armor procurement became entwined with debates about the nation's foreign and domestic policies.
The armor trade resulted from successive waves of technical change in shipbuilding and steel manufacture. While Britain began building large iron ships in the 1840s, and there were iron gunboats in the Civil War, most American warships were constructed of wood through the 1860s. In the 1870s, the U.S. Navy's Ordnance Bureau found itself unable to obtain the heavy, rifled, breech‐loading steel guns then finding favor in Europe. At the navy's behest, Midvale Steel became the country's leading manufacturer of ordnance steel. In the next decade, large appropriations for steel warships extended the navy's scope of interaction with private industry.
Multi‐million‐dollar contracts for steel armor plate began with Bethlehem Iron in 1887 and Carnegie Steel in 1890. Initially, the navy helped these two private firms transfer the necessary armor‐making technology from France and heavy forging technology from England. The two steel companies soon found it indispensable to hire ex‐naval officers (and at least one sitting U.S. senator) to deal effectively with the U.S. government. Finally, the two companies formed a pact to split contracts and maintain high prices, extended to Midvale after its entry into armor plate production in 1903. Proponents of the early “military‐industrial complex” thesis such as Benjamin Franklin Cooling also cite a series of congressional inquiries about high armor prices and recurrent public scandals about low armor quality. Furthermore, there was a massive procurement following the Spanish‐American War (armor contracts increased fivefold between 1898 and 1900), which could be seen as an instance of a command economy.
The already byzantine politics of armor received an outlandish international twist after 1895, just as Anglo‐German antagonism entered a critical phase. Until this time, the great power navies simply chose one of three types of armor; none could be proven definitively superior. But in the mid‐1890s, a clearly superior armor was developed whose glass‐hard face shattered incoming shells. This armor was invented in America by Hayward A. Harvey, improved in Germany by the Krupp concern, and came to be controlled by an international patent pool based in London (1895–1912). Precisely during the peak years of the global naval arms race, then, warships of all the great powers used the same armor.
From 1887 to 1915, the U.S. Navy purchased from the Bethlehem, Carnegie, and Midvale steel companies a total of 233,400 tons of armor plate (85% of which came after 1898) costing $102 million, and from 1916 to 1920 an additional 121,000 tons of armor plate costing about $65 million. The 1916 Naval Expansion Act authorized a government armor plant to limit private profit. It was built at Charleston, West Virginia, but its first 60‐ton armor ingot was not cast until 1921, and it was closed by the Republican administration of Warren Harding. The proper significance of armor plate is to be found not in the battleship itself (which was largely rendered obsolete by the submarine, aerial warfare, and naval disarmament treaties of the 1920s), but in the characteristic entanglement of public and private entities concerning the promotion and procurement of new military technologies.
[See also Battleships; Industry and War; Navy, U.S.: 1866–98.]
Benjamin Franklin Cooling, Gray Steel and Blue Water Navy: The Formative Years of America's Military Industrial Complex, 1881–1917, 1979.
Thomas J. Misa, A Nation of Steel: The Making of Modern America, 1865–1925, 1995.
Thomas J. Misa
"Procurement." The Oxford Companion to American Military History. Encyclopedia.com (May 24, 2018). http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/procurement