In one form or another, deterrence is a motivational force in many everyday relationships: a child learns not to misbehave for fear of being scolded by its parents; a potential criminal might decide against committing a crime for fear of being caught and punished; a nation may choose one foreign policy course over another out of fear of military or economic retaliation; or an international alliance may threaten war if any one of its members is attacked. In each case, one party has influenced the choice of another by threatening consequences that outweigh gains.
In the field of foreign policy, the threat and fear of retaliation have been a powerful force. Imperial powers found that making an example of an enemy or lawbreaker ultimately provided a cheaper and easier method of controlling peoples than maintaining large standing police or armed forces, conforming to the common adage that prevention is usually better than cure. To that end, Roman legions deliberately cultivated a reputation for ruthlessness to promote stability, and European imperial powers did everything they could to impress their technological and military prowess upon the peoples they commanded. But it was in the nuclear age that deterrence assumed a special significance and was refined from a mostly instinctive practice into a deliberate, if still inexact, science. Throughout the nuclear age, deterrence has been a dynamic concept propelled by technological and historical developments. In turn, the concept of deterrence came to influence those technological and historical developments.
THE THEORY OF DETERRENCE
In its most basic theoretical form, deterrence is an equation involving two parties, in which one party weighs the gain it may make by pursuing a course of action against the price it may pay through the retaliation of the other party. By way of illustration, it is useful to consider two nations, Nation A and Nation B, whose interests collide on a particular issue. If Nation A is known to be considering a course of action contrary to the interests of Nation B, Nation B may signal its intent to retaliate if Nation A does indeed choose that course of action. This leaves Nation A with a decision. First, it must judge whether Nation B has the capability to carry out its threat; second, it must judge whether Nation B has the will to do so. The deciding factor in both judgments is the credibility of Nation B's capability and will. If, as a result of this cost-benefit calculation, Nation A decides not to go ahead with the course of action, then deterrence has succeeded; but if Nation A decides that the gains outweigh the risk or price, and goes ahead regardless of the threat, then deterrence has failed. This example, though, provides only the barest outline sketch of how deterrence operates in the foreign policy arena.
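The cost-benefit calculation described above can be expressed as a toy model. This sketch is purely illustrative: the function name, the sample numbers, and the simple linear weighting of cost by credibility are assumptions introduced here, not part of deterrence theory itself.

```python
# Toy model of the deterrence calculus: Nation A weighs its expected
# gain against Nation B's threatened retaliation, discounted by how
# credible it judges Nation B's capability and will to carry it out.
# All names and values are illustrative assumptions, not doctrine.

def nation_a_proceeds(expected_gain: float,
                      threatened_cost: float,
                      credibility: float) -> bool:
    """Return True if Nation A goes ahead despite the threat.

    credibility is Nation A's estimate, in [0, 1], that Nation B
    has both the capability and the will to retaliate.
    """
    expected_cost = credibility * threatened_cost
    return expected_gain > expected_cost

# Deterrence succeeds: a credible, costly threat outweighs the gain.
assert not nation_a_proceeds(expected_gain=10, threatened_cost=50, credibility=0.8)

# Deterrence fails: the identical threat, judged a probable bluff.
assert nation_a_proceeds(expected_gain=10, threatened_cost=50, credibility=0.1)
```

The second assertion illustrates the point made throughout this section: deterrence depends less on actual capability than on the other side's estimate of it.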
The simplicity of the theoretical formula belies the complexity of international diplomacy in practice, and critics of deterrence have focused on the relative rigidity of the formula compared to the fluidity and unpredictability of international diplomacy. Furthermore, deterrence rests heavily on certain assumptions about the parties involved and the way they will react. First, it assumes that rationality will prevail throughout the process. Not only must the various parties behave in fundamentally rational ways, but each party must perceive the other as behaving in a fundamentally rational way. In practice, this leads to a complicated process of perception and counterperception as policymakers and intelligence bodies from each nation estimate and try to influence what the other is thinking.
Critics of deterrence rightly point out that the decisions of political leaders often do not fall into neatly explainable categories and that however carefully the deterrence situation may be controlled, ultimately the decision is a subjective one. In foreign affairs as in everyday life, behavior in deterrence situations is based on instinct as well as reason and this instinctive element can never be confidently discounted. Moreover, the way both parties view the situation can be influenced qualitatively by any number of factors, including irrationality, misperception, poor judgment, or even just wishful thinking. In short, what may seem an unacceptable price to one person or under some circumstances could well be judged by another under different circumstances to be acceptable. Moreover, since history is filled with events that were quite simply unpredictable, it is clearly difficult to account with any certainty for the myriad of accidents, mistakes, and emotions that may play a part in a decision. At best, employing deterrence is an approximation, but in the thermonuclear age, with weapons capable of global annihilation, the stakes are higher than ever before.
Because deterrence is very much in the eye of the beholder, and actual intentions and capabilities are less important than the other side's estimate of those intentions and capabilities, the process of communicating and interpreting information is crucial. Known as signaling, this two-way discourse is open to a wide range of possible scenarios. One involves explicit communications, including public speeches and classified diplomatic correspondence. Another is implicit measures such as partial mobilization or the placing of forces on alert or more subtle measures designed to be detected by foreign intelligence bodies. Ideally, each method is designed to exploit the complex interplay of perceptions and counterperceptions.
Sometimes deterrence fails. Often in the history of international affairs the cost-benefit calculation has led to an unexpected result. In 1904 Russia ignored Japanese warnings that its policies would lead to retaliation, resulting ultimately in the Russo-Japanese War. In 1914 the Central Powers recognized that their policies risked drawing the Entente into war in Europe, but continued regardless. During World War II, Nazi Germany persisted with U-boat attacks on American transatlantic merchant shipping despite President Franklin D. Roosevelt's explicit warnings that it would likely provoke U.S. retaliation. In December 1941, despite U.S. military strength, Japan calculated that its interests were best served by a surprise attack on the U.S. fleet at Pearl Harbor. In mid-1962 Soviet premier Nikita Khrushchev installed nuclear missiles in Cuba despite President John F. Kennedy's explicit warnings that doing so would provoke U.S. retaliation. And in 1990 Saddam Hussein ignored the threat of U.S. intervention should Iraq invade Kuwait, and the Middle East erupted in violence the following year.
In diplomacy, threats that act as a deterrent have most often come in military form and have therefore implied the capability to project military power. Possessing a powerful navy gave Britain such a capability, but for much of its early history, the United States was not able to project military power and therefore made threats rarely and with questionable success. One notable deterrent effort was President James Monroe's unilateral declaration on 2 December 1823 of the independence of the Western Hemisphere, issued in order to deter Spanish intervention in Latin America and Russian expansion on to the American continent. Monroe's threat was twofold. First, he implied local military resistance if Spain tried to reestablish its colonies in Latin America or Russia expanded onto the American Northwest Coast. Second, he implied that if the European powers chose to interfere in the affairs of the Western Hemisphere then the United States would be forced to revoke its longstanding tradition of non-interference in European affairs. Ultimately, Spain did not try to reestablish its colonies, although this probably had less to do with Monroe's threats than it did with similar threats issued by Britain.
The advent of strategic bombing in the early twentieth century vastly increased the ability to project military force, and consequently led to the transformation of deterrence into its modern form. The twin technological developments of high explosives and aircraft that could deliver them to their targets made strategic bombing a decisive factor in modern warfare. It was first used in limited form in World War I in Germany's zeppelin (airship) attacks against Britain. German scientists had not only developed a way to incorporate poison gas into a bomb, thereby creating the first type of the weapons now classed as weapons of mass destruction, but had also developed a way to float zeppelins over enemy lines and drop their payloads on British cities. Although not widely recognized at the time, this ability to move beyond the confines of the battlefield and to attack an enemy's cities directly revolutionized modern warfare as offensive strategic bombing threatened to break the defensive deadlock that had evolved from tanks, machine guns, and trench warfare.
At the same time it introduced a new factor that, although intangible, was no less powerful. Like the German V-1 and V-2 rocket strikes during World War II, the German zeppelin of World War I resulted in few deaths, but the potential that the technology seemed to hold for extending the battlefront from the trenches to civilian homes captured the British public's imagination to an extent that far outweighed objective casualty counts. For the first time a military weapon was used not only for the tactical calculations of policymakers, but also to strike terror into the home front, which had become an increasingly vital component of modern warfare. The fluid variable of civilian morale suddenly became as important as military morale as the civilian population reacted to the new threat to their homes and lives. Moreover, during World War II, as strategic bombing became vastly more effective (and deadly), the industrial and economic hearts of an enemy became additional viable targets. Allied long-range bombers all but destroyed the German industrial city of Dresden, while the American firebombing of Tokyo started fires that raged out of control for days at a time. With the development of the atomic bomb, a weapon blatantly unable to discriminate between military and civilian targets, strategic bombing was taken to its extreme.
The development of the atomic bomb was the culmination of the top secret Manhattan Project, an extraordinary collaboration of international scientists, centered on the weapons laboratory at Los Alamos headed by J. Robert Oppenheimer and backed by vast resources provided by the U.S. government. Stringent security precluded public debate about what role the new weapon would have, but among those few who had information about the project's overall objective and progress, there was growing awareness that the new weapon would be unlike anything that had come before; it would perhaps even create "a new relationship of man to the universe," as a committee chaired by Secretary of War Henry L. Stimson put it. Such an unconventional weapon clearly required unconventional thinking. As Oppenheimer recognized, the elements of surprise and terror were as intrinsic to it as fissionable nuclei. In a top-secret report submitted to the War Department on 11 June 1945, a small committee chaired by physicist James Franck suggested that the psychological impact of the explosion might be more valuable to U.S. military objectives than the immediate physical destruction. In the hope that a demonstration of the destructive potential of the atomic bomb might be enough to compel the Japanese to surrender, the Franck Committee proposed a public demonstration in an uninhabited region.
After considering the various proposals, President Harry S. Truman concluded that a demonstration in an uninhabited region would likely be ineffective, and therefore ordered that the bomb be used against Japanese cities, a decision that has been passionately debated ever since. Some have argued that Truman's motivation was less the war in the Pacific than the impending contest with the Soviet Union and that as such it represented the opening gambit of so-called "atomic diplomacy," while others argue that the decision was not only militarily sound but necessary, and that it ultimately saved hundreds of thousands of American and Japanese lives. But whatever Truman's motives, at 8:15 a.m. on 6 August 1945, an American B-29 Superfortress long-range bomber named the Enola Gay delivered its single atomic bomb to the target of Hiroshima, the second most important military-industrial center in Japan. Upward of seventy thousand people were killed instantly in the blast. Three days later another bomb was dropped on Nagasaki, killing at least twenty thousand. In the following weeks the death counts in both cities rose as the populations succumbed to radiation-related illnesses.
As the wire services flashed the story around the globe, journalists who had witnessed the Trinity test blast at Alamogordo, New Mexico, three weeks earlier on 16 July, were now free to write about what they had seen and help a startled world comprehend what had happened. The American public's reaction was a mixture of relief that the end of the war was in sight, satisfaction that revenge had been exacted upon the perpetrators of the Pearl Harbor attack, and a sober recognition of the responsibility the new weapon carried with it. In strategic terms, there was not yet such a thing as a U.S. atomic stockpile, despite President Truman's implication in his press statement announcing the Hiroshima bombing that atomic bombs were rolling off the production line. Had the first two atomic bombs failed to bring a Japanese surrender, some time would have passed before more were ready. Within days of the Nagasaki bombing, however, the Japanese leaders finally succumbed to the inevitable and formally surrendered to General Douglas MacArthur's forces on 2 September 1945. From that moment, the priority for U.S. military forces was not building more bombs, but going home.
As the United States demobilized in the postwar period, relations with the Soviet Union deteriorated. The coincidence of the beginning of the Cold War and the dawn of the nuclear age ensured that the history of the two would become inextricably entwined. During the early Cold War, the primary strategic contest was for Europe, and Germany in particular, but throughout the continent evidence mounted of a clash of interests and ideology. Within a few short years of the end of World War II, the U.S. government had publicly identified the Soviet Union as its primary strategic threat. If war did break out in Europe, the Soviets had vastly more conventional forces and the geographical advantage as well. The challenge for U.S. defense planners, therefore, was to find a way to project the U.S. atomic force. Nevertheless, the planners moved slowly to devise a coherent nuclear strategy to serve foreign policy interests. Although fully recognizing that the Soviets would sooner or later develop the atomic bomb, American policymakers struggled to find a way to take advantage of the atomic monopoly. The American population was still weary from World War II and military budgets were shrinking, so atomic development was a low priority. Yet having recently witnessed how much sacrifice Soviet leader Joseph Stalin was willing to impose on his countrymen in the defense of the USSR, U.S. policymakers understood that if war should break out between the two superpowers, the small stockpile of American atomic bombs built up since Nagasaki would not guarantee victory. By the end of the Truman administration, defense budgets were growing rapidly and the administration had made some effort to bring military policy into the atomic age, but it was not until Dwight D. Eisenhower became president that a coherent deterrent role was found for the U.S. nuclear arsenal.
In use, the atomic bomb was an offensive weapon. For the American atomic monopoly to be cast in a defensive role, that role had to be to prevent war altogether through the very threat of retaliation. Bernard Brodie, one of the first defense intellectuals to engage publicly with the implications of the atomic bomb, succinctly summarized the momentous shift in military affairs that the bomb had sparked. "Thus far the chief purpose of our military establishment has been to win wars," Brodie commented in The Absolute Weapon (1946). "From now on its chief purpose must be to avert them." To that end, and recognizing that atomic weapons did not fall easily under the existing military force structure, the Truman administration in March 1946 created the Strategic Air Command (SAC), headed by General George C. Kenney. Adopting the motto "Peace is our profession," SAC's mission was to give the United States a long-term capability to project U.S. nuclear force anywhere on the globe. SAC's existence exemplified the paradox of deterrence strategies as summarized in the Latin adage Qui desiderat pacem, praeparet bellum (Let him who desires peace prepare for war). Unlike the case with conventional military forces, for the remainder of its existence SAC's success would be measured not by its performance in battle, but by its never having actually to engage in combat.
The Cold War contest was ultimately a strategic one, but it was more often manifested in a series of short-term political contests. Committed to a policy of containing communist expansion, the Truman administration found itself having to rethink its assumption that atomic weapons would deter simply because they existed. By the end of 1948 there was mounting evidence that the American atomic monopoly was having little success in deterring communist political expansion through Europe; the threat of communist subversion in Greece and Turkey in 1947, the communist coup in Czechoslovakia in February 1948, and the strong communist presence during the Italian election of April 1948 all seemed to provide evidence to that effect. And in the first truly nuclear crisis of the Cold War, the Berlin blockade of 1948–1949, the Truman administration could manage only a half-hearted atomic threat that, if Stalin had pushed the matter, would likely have been revealed as a bluff. In a move with the twin objectives of temporarily bolstering the flagging British strategic bombing force and sending an atomic threat to Stalin, American B-29s were deployed in Britain at the height of the Berlin blockade crisis. But this early attempt at nuclear coercion was unconvincing. The B-29s initially sent were not modified to carry atomic bombs; furthermore, it was public knowledge that there was no procedure in place to store atomic warheads overseas. By mid-1948 even Secretary of Defense James Forrestal had to admit that American military planning, including its nuclear strategy, was "patchwork" at best. If America's national security was to realize the full potential of the atomic bomb for deterrence, a thorough rethinking would be needed.
One solution preferred by many was for the United States to exploit the window of opportunity by launching a preemptive strike against the Soviet Union. Former British prime minister Winston Churchill suggested sending Stalin an ultimatum stating that if he did not desist from his expansionist policies, U.S. planes would use atomic bombs against Soviet cities. The U.S. commander in Germany, General Lucius Clay, agreed. Other military voices in Washington lamented the wasting of an opportunity. The calls became more urgent as that window of opportunity seemed to be closing. When the Soviet Union detonated its first atomic device in August 1949, it caught the West by surprise. The mastering of the atomic process by Soviet scientists was not unexpected in a general sense, but outside the inner sanctum of intelligence officials and defense planners, the Soviet achievement had never been seriously anticipated on more than the vaguest of timetables. To make matters more alarming, two months later, in October 1949, Mao Zedong's Communist Party emerged victorious in China's civil war, a development seen in Washington as proof that Moscow's ambitions were not confined to Europe but were global.
THE THERMONUCLEAR REVOLUTION
The twin developments of thermonuclear weapons and long-range missiles ushered in a new phase of nuclear deterrence. Now, thermonuclear warheads hundreds or even thousands of times more powerful than atomic bombs could be attached to missiles capable of reaching other continents and destroying cities in minutes. In these circumstances, deterrence became not just a policy imperative but a necessity for the survival of the human race. It was clear that a new stage had been reached not only in the history of international affairs, but also in the history of humankind. J. Robert Oppenheimer famously likened the situation to "two scorpions in a bottle, each capable of killing the other, but only at the risk of his own life." In delivering a comprehensive evaluation of British and NATO nuclear forces to the British House of Commons in March 1955, Prime Minister Winston Churchill observed that a paradox was likely to define international affairs in the future: "After a certain point has been passed it may be said, 'The worse things get the better.' … Then it may well be that we shall by a process of sublime irony have reached a stage in this story where safety will be the sturdy child of terror, and survival the twin brother of annihilation." Begun by technological innovation, the thermonuclear revolution aroused the most basic human fears and instincts.
In view of the apparently intensifying communist threat, and particularly in light of the recent development of Soviet atomic power, Truman in late January 1950 ordered a reexamination of U.S. strategic policy. Secretary of State Dean Acheson delegated the task to the new director of the State Department's Policy Planning Staff, Paul Nitze. The result, a document known as NSC 68, was a lengthy and dramatic call to arms. Before long, the report concluded, the Soviet Union would have the capability to launch a surprise atomic attack on the United States. To deter such an attack, NSC 68 recommended a massive buildup of conventional and atomic forces and, moreover, that the United States commit resources to developing a new type of weapon, a "super" bomb harnessing the power generated by fusing hydrogen atoms rather than splitting them. Preliminary research into such a weapon had been undertaken within the Manhattan Project by a team of scientists headed by physicist Edward Teller, but with no hope of immediate success and with military budgets shrinking in the postwar economic environment, the research was halted. Based on theoretical data, Teller predicted that a hydrogen bomb would be several hundred times more powerful than the Hiroshima bomb, capable of devastating an area of hundreds of square miles, with radiation traveling much farther. NSC 68 now proposed to resume H-bomb research at a greatly accelerated rate. After heated debate over the feasibility and morality of such a weapon, Truman ordered the project to proceed.
At the height of the debate about whether to proceed with the H-bomb project, communist North Korea launched an attack on pro-Western South Korea on 25 June 1950. Truman reacted by committing U.S. troops under the auspices of the United Nations. Since the deterrent had clearly lapsed or failed, General Douglas MacArthur, commander of UN forces in the theater, and General Curtis LeMay, head of SAC, both recommended revitalizing it by using atomic weapons against Communist China. Truman, however, refused to expand the war and committed the United States to a limited conflict with limited objectives, a novel mission for American military forces.
For both political parties, Korea confirmed beyond a doubt that communist forces were on the offensive and that existing U.S. strategy was inadequate to stop them. The Democratic administration reacted by embracing NSC 68 and the massive military buildup it entailed, including the development of the H-bomb. For Republicans, Korea seemed to provide ample evidence for their charge that the Truman administration's approach to national security policy was based too heavily on reaction rather than prevention and therefore was putting the United States on track for financial bankruptcy. They argued that the primary failure lay not in the logistical difficulties of projecting U.S. military force to the distant shores of the Korean Peninsula, but in the administration's failure to prevent the war in the first place. The debate reached a crescendo in the presidential election campaign of 1952, as General Dwight D. Eisenhower, the Republican candidate, and his foreign policy adviser, John Foster Dulles, launched a sustained political attack on the Truman administration's foreign policy record.
Eisenhower and Dulles promised to take a new look at American national security policy and formulate a better plan for what was clearly going to be a long struggle with the Soviet Union. Dulles declared that the United States could not afford to keep fighting expensive "brushfire" wars like Korea and that what was required instead was an economically sustainable military posture designed to deter communist aggression over the long term. Since the United States could not in all likelihood compete with the Soviet Union in amassing conventional forces, Eisenhower and Dulles contended that the best use of American resources would be to invest in the next generation of nuclear weaponry and declare the intention to react to communist aggression "where it hurts, by means of our choosing." To illustrate Eisenhower's proposed deterrent strategy to the voters, Dulles called on the analogy of municipal police forces: "We do not station armed guards at every house to stop aggressors—that would be economic suicide—but we deter potential aggressors by making it probable that if they aggress, they will lose in punishment more than they can gain by aggression." Massive retaliation, as Eisenhower's deterrent strategy came to be called, was designed to impose upon would-be aggressors a blunt choice: either to desist, or to persist with the risk of nuclear annihilation. The primary challenge for U.S. policymakers, therefore, was to make the other side believe that aggression carried a high risk of nuclear retaliation. To that end, Dulles declared that the administration would be prepared to engage in diplomatic "brinkmanship," a diplomatic policy some observers likened to the youthful, and often deadly, test of nerves known as "chicken." Eisenhower and Dulles argued that only by being ready to push the crisis to a point where the opponent backed down first would the United States be able to protect its interests. And in order to give communist leaders reason to pause, Eisenhower employed calculated ambiguity in responding to the question of whether a nuclear response would be automatic.
Once in office, Eisenhower and Dulles were confronted with the challenge of implementing the results of their promised New Look. Some of it was clearly impractical, even dangerous. Campaign promises of the political "liberation" of Eastern Europe were quickly abandoned after the June 1953 uprisings in East Germany. The deterrence strategy proved somewhat easier, although perhaps even more dangerous. Increasing the relative emphasis on nuclear technology as opposed to maintaining large standing conventional forces allowed the administration to cut defense expenditures by about 25 percent compared to the late Truman years, leading Secretary of Defense Charles Wilson to declare proudly that the Pentagon now had "more bang for the buck."
Having adopted massive retaliation as a long-term Cold War strategy, Eisenhower and Dulles found that strategy tested in the short term by a series of crises. Nevertheless, Eisenhower threatened massive retaliation in times of crisis sparingly and deliberately. In a series of confrontations with Communist China ranging from bringing an end to the Korean War in 1953 to the Taiwan Straits crisis of 1958, Eisenhower several times threatened nuclear attack. The primary focus of U.S. foreign policy, however, remained Europe. The struggle for Germany continued to manifest itself in crises over Berlin, which in turn presented U.S. deterrence strategies with perhaps their most serious test. Given the superior strength of Soviet conventional forces in Europe and the location of West Berlin deep inside communist territory, that city was militarily indefensible. The situation was, as Senate Foreign Relations Committee chairman J. William Fulbright put it, "a strategic nightmare." The only viable option available to the United States short of thermonuclear war was to deter the Soviets from moving against the city.
Simultaneous to these crises, the strategic balance was in a state of flux, which in turn affected thinking about deterrence. Although the Soviets had successfully tested their first atomic device as early as 1949, thoughtful observers recognized that one successful test did not make a deployable arsenal. By 1955, however, Soviet scientists had largely overcome the initial four-year lag and, particularly in the field of thermonuclear weapons, were on a par with the West. The Soviets still lagged far behind the United States in both quantity and quality of nuclear weapons, but the Soviet strategic arsenal was more than adequate to inflict considerable damage on the West and to play its own deterrent role. Furthermore, the accelerating arms race made it clear that the gap was narrowing. On 3 October 1952, Great Britain detonated its first atomic device on islands off the coast of Australia. Only weeks later, on 31 October, the United States detonated its first thermonuclear weapon. Less than a year after that, on 12 August 1953, the Soviet Union completed its first successful detonation of a thermonuclear device. In 1956 the first American tactical nuclear weapons were deployed in Europe. These new weapons, designed for battlefield use in localized action, came in the form of army artillery shells, each with explosive power roughly equivalent to the Hiroshima bomb.
This rapidly accelerating arms race confronted defense planners with a new question: How much was enough to deter? During the Eisenhower administration two main schools of thought defined the debate. The first held that the U.S. stockpile should consist of just enough weapons to play a deterrent role, a concept that became known as minimum deterrence. The opposing school of thought held that the United States should maintain a large and constantly growing nuclear arsenal in order to be able to engage in redundant targeting, or allocating several weapons to each target. Known as "overkill," it was this approach of building up an overwhelming nuclear force that prevailed, largely as a result of unchecked bureaucratic politics. As a result, for the remainder of the decade the Eisenhower administration invested heavily in building up the U.S. nuclear stockpile.
On 21 August 1957 the successful Soviet test of a new type of missile, with the potential capability of reaching the continental United States from a launch point in the USSR, heralded a new phase of deterrence. This was dramatically confirmed weeks later, on 4 October 1957, when Soviet Strategic Rocket Forces used the same type of intercontinental ballistic missile (ICBM) to propel the world's first artificial satellite into space. That satellite, known as Sputnik, was in itself harmless, being little more than a nitrogen-filled aluminum sphere fitted with a rudimentary transmitter that emitted a distinctive "beep" every few seconds, but it would nevertheless have profound ramifications. For the world public, it offered a dramatic demonstration that suggested Soviet missile technology was ahead of the West's, especially when contrasted with a spate of well-publicized American test failures. The American public's fears were manifested in accusations that the Eisenhower administration had allowed first a "bomber gap" and then a "missile gap" to develop. In reality, neither gap existed, but the administration's critics made considerable political mileage out of the issue. For deterrence theorists and practitioners, Sputnik also demonstrated that the mainland United States was for the first time vulnerable to direct missile attack, potentially opening a window of vulnerability that introduced new challenges to the viability of massive retaliation as a deterrent strategy. In response to Sputnik, the American government put new emphasis on missile technology and the space race. By the end of the 1950s the United States had arranged to base intermediate-range ballistic missiles (IRBMs) and medium-range ballistic missiles (MRBMs) in Turkey and Italy, aimed at targets in the Soviet Union. In 1959 the first generation of American ICBMs, Atlas D missiles with a range of 7,500 miles, were deployed in California.
The following year the Polaris submarine-launched ballistic missiles (SLBMs), the final leg of what became known as the U.S. nuclear triad, were added to the U.S. arsenal to complement the strategic bomber and missile forces. By the early 1960s the Corona intelligence satellite program was sending back its first photographs of Soviet military installations, promising a quantum leap in the collection of military intelligence. Meanwhile, development continued of the massive Saturn V rockets that, by the end of the 1960s, would finally shatter any concept of safety in geography by propelling Americans to the moon.
Once again, technological developments introduced new challenges for deterrence strategy. Since missiles reduced to minutes the time available to respond to a nuclear strike, the side that struck first could conceivably gain an advantage if it neutralized the enemy's retaliatory capability in that strike. Accordingly, the Eisenhower administration implemented new procedures to protect its retaliatory capability. Beginning in 1957, SAC's primary strike forces were placed on twenty-four-hour alert to guard against surprise attack, and so-called fail-safe procedures were implemented. In highly classified Chrome Dome missions, B-52 Stratofortress bombers flew to within striking distance of enemy targets and waited for a signal to proceed. If no signal came, the planes returned to base. The procedures were designed partly to improve the response time of the nuclear strike force, but more importantly to ensure that the strike force could not be destroyed in a Soviet first strike on airfields. Scattered and airborne, the strike force was less vulnerable. The same principle was applied to command and control procedures. To reduce the risk that a Soviet first strike on a few central underground command centers might disable the entire U.S. nuclear strike force—a possibility that might itself invite a Soviet preemptive strike—a fleet of air force planes was specially outfitted with command and communications equipment to take control of the American nuclear arsenal in the event the central underground command center was destroyed or disabled. From 3 February 1961 through 24 July 1990, a Looking Glass plane was in the air at all times. Protected by mobility, these Looking Glass missions, along with the Chrome Dome missions, became key elements of the U.S. deterrent in the missile age by safeguarding America's second-strike capability.
MASSIVE RETALIATION QUESTIONED
From the mid-1950s, criticism of massive retaliation became increasingly vocal. As Eisenhower well knew, the most challenging aspect of implementing massive retaliation was that it required a leap of faith on the part of the adversary that the United States would respond to localized and small-scale aggression by launching a nuclear strike, a reaction that was increasingly akin to suicide because of the rapid advances the Soviets were making in nuclear technology. As a consequence, there were growing calls for the United States and NATO to bridge that leap of faith by modifying the strategy of massive retaliation into what retired British Rear Admiral Sir Anthony W. Buzzard called "graduated deterrence." Only by being capable of responding in proportion to the threat, critics of massive retaliation argued, would nuclear threats become credible. Implicit here was a distinction between the tactical and strategic use of nuclear weapons, a distinction that massive retaliation explicitly disavowed. In 1957 Harvard professor Henry Kissinger elaborated on this argument by calling for increased investment in tactical nuclear weapons and acceptance of the possibility of limited nuclear war.
The observations of Buzzard and Kissinger were part of a trend toward public debate over nuclear policy. The increasing frequency of nuclear crises in the late 1950s and early 1960s and the growing absurdity of both superpowers' nuclear postures led to increased public concern. For the first decade of the nuclear age, the American public had for the most part treated nuclear policy as "something best left to the experts," but by the end of the 1950s nuclear strategy had become a topic of public debate led by a cadre of increasingly visible professional strategists. Often civilians associated with think tanks such as the RAND Corporation, these professional strategists began to assume a new place in the U.S. military hierarchy and, in turn, in the public imagination. Scientists like J. Robert Oppenheimer, Edward Teller, and Wernher von Braun had become national figures through their contributions to the technology of the nuclear age, and by the late 1950s civilian professional strategists like Bernard Brodie, Henry Kissinger, Albert Wohlstetter, and Herman Kahn were becoming just as famous for their theorizing about how to use that technology. Although their fame most often came in the form of notoriety for their ability to discuss the absurdity of nuclear war in cold, calculating terms, they were nevertheless crucial in fueling the public debate. In the absence of hard evidence concerning Soviet decision making, these strategists were forced to form judgments about nuclear war without any experience to draw on; thus, they substituted deductive hypotheses derived from political science, psychology, and economics for inductive historical experience. In a series of books, the best known of which is On Thermonuclear War (1960), Kahn challenged policymakers and the general public to get beyond what he called "ostrichlike behavior" and to "think the unthinkable."
His central point, as he put it in Thinking About the Unthinkable (1962), was that "thermonuclear war may seem unthinkable, impossible, insane, hideous, or highly unlikely, but it is not impossible."
The presidential election of 1960 further propelled the public debate on deterrence. Since Buzzard's call for "graduated deterrence" in 1956, Eisenhower's political opponents had adopted the strategy under a revised name: flexible response. Maxwell Taylor, the army chief of staff in the Eisenhower administration, had for some time been a voice of dissent on massive retaliation and had expressed his concerns in his book The Uncertain Trumpet (1960), in which he called for a reprioritizing of U.S. defense spending to place more emphasis on the ability to control the escalation of crises. When John F. Kennedy won the Democratic Party's presidential nomination, he quickly adopted flexible response as the basis of his military program.
The Kennedy administration thus came to office basing much of its military program on a political refutation of the Eisenhower administration's strategy of massive retaliation. Despite campaign promises to institute ways to control escalation and thereby make crises "safer," the Kennedy administration quickly assumed an aura of being in perpetual emergency. With the failed invasion of Cuba in April 1961, the renewed Berlin crisis just months later, and the civil rights crisis at the University of Mississippi in the fall of 1962, it appeared to many that the administration was careening from crisis to crisis. The first practical test of flexible response came in the summer of 1961, when Soviet premier Nikita Khrushchev revived his ultimatum to end Western rights in West Berlin and thereby once again presented U.S. deterrence strategy with perhaps its most difficult challenge. With the crisis brewing, and concerned that he had undermined his own credibility through the Bay of Pigs imbroglio a few months earlier, Kennedy responded with a massive buildup of conventional forces in Europe in order, in his words, "to have a wider choice than humiliation or all-out nuclear action." At the same time, he reaffirmed NATO's nuclear guarantee to the city. In turn, Khrushchev quietly lifted his deadline, as he had two years earlier.
Of all the crises confronted during Kennedy's short presidency, the Cuban missile crisis proved the most dangerous, with the United States and the Soviet Union coming closer to the brink of nuclear war than ever before or since. When Khrushchev decided to deploy Soviet MRBMs, IRBMs, tactical nuclear weapons, and nuclear-capable medium-range bombers secretly in Cuba, where they would be positioned to strike most of the continental United States within minutes, his reasoning was to bolster the Soviet deterrent. Whether he wanted to use this deterrent in an offensive or defensive role has been debated by historians ever since. Once the deployments were discovered, Kennedy responded to the challenge by implementing a naval blockade of the island and threatening military action if the missiles and bombers were not removed. After a weeklong standoff, during which SAC's forces went on airborne alert, Khrushchev agreed to remove the missiles and a month later agreed to remove the bombers.
The crisis was resolved peacefully, but those who had witnessed the secret negotiations and the classified near misses had seen all too clearly how command and control might break down under crisis conditions. On the one hand, the resolution of the Cuban missile crisis without global destruction seemed to enhance the credibility of the deterrent on both sides. On the other hand, the missile crisis demonstrated that brinkmanship and ambiguity were simply too dangerous. Consequently, the crisis accelerated the momentum toward East-West détente. Formal negotiations to limit nuclear testing, which had been under way since 1958, finally bore fruit on 5 August 1963 in the form of the Limited Test Ban Treaty that effectively imposed mutual restraint on large-scale, above-ground nuclear weapons tests. And to reduce the risk of miscalculation and misinterpretation in a crisis, a communications hotline was established between the White House and the Kremlin.
MUTUAL ASSURED DESTRUCTION (MAD)
Paradoxically, however, one interpretation of the missile crisis held that the decisive factor in its resolution had been America's nuclear superiority—that if the American nuclear arsenal had not been more powerful than the Soviet arsenal, the crisis might have turned out differently. Both sides subscribed to this interpretation at least in part, which led to a new round in the arms race just as both sides were moving closer to agreements on nuclear testing. During the mid- and late 1960s, the Soviet Union expanded its military expenditures so that by the end of the decade, Soviet Strategic Rocket Forces had a new generation of even more powerful ICBMs at their disposal. At the same time, the administration of President Lyndon B. Johnson abandoned the idea of seeking an overwhelming nuclear superiority and settled upon a new measure of nuclear striking power called "sufficiency." As defined by the administration, it meant having the ability to survive a Soviet first strike with enough forces intact to retaliate with a devastating second strike. To do so, the emphasis would be placed on a better-balanced triad of U.S. nuclear forces, consisting of missile, air, and naval strategic forces, together providing the power to "assure destruction" of an adversary without engaging in a destabilizing arms race. Secretary of Defense Robert S. McNamara argued that such a structure was both cost-effective and stable, and it was retained as the structure of the U.S. nuclear force until the end of the Cold War.
By the beginning of the 1970s, the nuclear forces of the Soviet Union and the United States were at relative parity. In terms of sheer explosive power the USSR had surpassed the United States and was in the process of developing weapons with even larger payloads and greater accuracy, but the United States retained the technological lead. With this parity came new challenges to deterrence theory. No longer did one side have a preponderance of strategic power, and it appeared doubtful that even a preemptive first strike would hold the advantage, since it was increasingly clear that neither side would survive a nuclear exchange without casualties measured in the millions. American policymakers quickly found, however, that the promise of mutual destruction in the bipolar contest with the Soviet Union was frustratingly ineffective in conflicts such as Vietnam, which fell outside of the strictly defined U.S.–USSR relationship.
Mutual assured destruction (MAD) lay at the heart of the Strategic Arms Limitation Talks (SALT) that began in Helsinki, Finland, in November 1969. The objective of the talks was not to reduce the arsenals of either side but rather to negotiate limits on future growth of those arsenals precisely to preserve mutual vulnerability. Two technological developments of the late 1960s threatened to destabilize the nuclear status quo: antiballistic missile (ABM) systems and multiple independently targetable reentry vehicle (MIRV) technology. ABM systems, as they were conceived at the time, were designed to protect cities from incoming missiles. Both the United States and the Soviet Union had developed first-generation ABM systems that could in theory, if not yet in practice, offer protection against first strikes. Partly to overcome such an advantage, both sides had invested considerable resources in developing MIRV technology, whereby one missile could deliver several warheads to independent targets. Although these new technologies were designed to cancel each other out, in truth they threatened to destabilize the mutual destruction deterrent and spark a new arms race, one that would be not only dangerous but expensive. The SALT process, therefore, was designed to limit these technologies and keep each side vulnerable to attack by the other.
With strategic nuclear war finally recognized as unwinnable, President Richard M. Nixon ordered Secretary of Defense James Schlesinger to review the military posture of the United States in light of recent technology. The result, known as the Schlesinger Doctrine, was essentially a refinement of flexible response, designed to balance Soviet bloc capabilities by threatening retaliation commensurate with the threat. Specifically, it enhanced the role of tactical nuclear weapons in a three-layered defense structure: conventional forces for conventional threats; tactical nuclear forces to counter tactical nuclear threats; and strategic nuclear forces to counter strategic threats. In essence, the Schlesinger Doctrine embraced what Henry Kissinger had proposed in the late 1950s: that a limited nuclear war was possible and was a desirable capability to have.
Despite President Jimmy Carter's efforts to further détente and continue the focus on nuclear sufficiency rather than superiority, the international and domestic political environments of the late 1970s pressured the administration to increase military spending drastically. During the presidential election campaign of 1980, the Republican candidate Ronald Reagan seized upon accusations made by prominent groups such as the Committee on the Present Danger, headed by Eugene Rostow and Paul Nitze, to charge that the Carter administration had allowed a window of vulnerability to open. Détente, he claimed, had allowed the Soviets to gain a dangerous lead in the arms race, to the point that even the hardened-silo Minuteman forces, the mainstay of the U.S. strategic missile force, were vulnerable to high-yield Soviet missiles. Reagan promised not only to close that gap but also to restore American military superiority and, to that end, deliberately strove to upset the balance of terror by focusing on defense rather than deterrence. The shift had important ramifications for the Cold War. Reagan reauthorized the development of the B-1 bomber and the next generation of highly accurate, MIRV-equipped Peacekeeper missiles to replace the aging Minuteman forces. He also authorized development of a controversial radiation-enhanced weapon, the neutron bomb, which killed living matter but left nonliving matter relatively unscathed. At the same time, Reagan endorsed the recommendations of a high-level commission chaired by Brent Scowcroft calling for an evolution toward small, single-warhead ICBMs backed up by Peacekeeper missiles.
In 1983 President Reagan ordered a large-scale scientific and military project to examine the feasibility of a new generation of ABM defenses. Officially labeled the Strategic Defense Initiative (SDI), but more commonly known as Star Wars after the popular science-fiction movie, the project aimed to develop a multilayered shield capable of stopping thousands of incoming ballistic missiles. In theory, lasers mounted on satellites, electromagnetic guns, and charged particle beam weapons would be used to shoot down incoming ballistic missiles anywhere from boost phase (soon after launch) to reentry (final descent to target). In championing the project, Edward Teller, the reputed "father of the H-bomb," made a dramatic and controversial return to the public debate over deterrence. Not only was the technology unproven, but it quickly became apparent that the price tag of such a system was almost impossible to predict and entirely impossible to pay. Not surprisingly, the Soviet Union reacted angrily to what seemed a blatant disavowal of the 1972 ABM Treaty, concluded as part of the SALT process. Nevertheless, Reagan ordered the project to proceed. For the remainder of the 1980s, the Reagan administration struggled to find a way to make SDI a reality while continuing to pursue meaningful arms reduction.
AFTER THE COLD WAR
With the end of the Cold War and the disintegration of the USSR from 1989 to 1991, the bipolar balance of terror suddenly collapsed, and it became clear that Soviet nuclear strength had been disguising severe internal weakness. Almost overnight, it seemed, the international environment had changed beyond recognition. But the U.S. and NATO defense postures, built up so carefully and at such expense over the previous half century, could not change so quickly; U.S. foreign and military policies relied heavily on deterrence, and they would need time to adjust. For deterrence, this posed profound and often unforeseen challenges.
As it happened, cutting forces was relatively straightforward; the more difficult stage of adapting military strategy to the post–Cold War situation was reducing reliance on nuclear weapons. The problem was approached in two steps. First, the progress made over the previous decade in arms reduction was to be consolidated and advanced. The United States and Russia committed themselves to deep cuts in their strategic arsenals; under the terms of the START II (Strategic Arms Reduction Talks) treaty, those arsenals would be reduced to approximately one-third their size at the height of the Cold War, and both sides would eliminate the most destabilizing of the first-strike weapons, the MIRVed ICBMs. NATO, still formally committed to the defense posture of flexible response, concluded that it had to move away from a forward defense posture and that, accordingly, in the post–Warsaw Pact strategic landscape, substrategic, short-range nuclear weapons no longer had a viable deterrent role. Consequently, at the NATO heads of state meeting in London on 5–6 July 1990, NATO committed itself to eliminating all nuclear artillery shells in Europe. At the same time NATO declared that it now regarded nuclear forces as "truly weapons of last resort." Simultaneously, negotiations were under way for what became the Conventional Forces in Europe (CFE) Treaty, which provided for drastic cuts in both NATO and Warsaw Pact conventional forces stationed in Central Europe. In September 1991, President George H. W. Bush declared that the forward deployment of tactical nuclear forces was no longer a useful part of the U.S. deterrent and that he was therefore ordering the removal of all tactical nuclear weapons from the U.S. Navy. Never fully abandoning Reagan's dream of an impenetrable shield against incoming missiles, the Bush administration quietly proceeded with a scaled-down version of SDI.
If the end of the Cold War drastically reduced the likelihood of strategic nuclear war, it nevertheless increased the risk of a small-scale nuclear exchange, mainly because of the growing problem of nuclear proliferation. In an effort to bring nuclear policy up to date and to confront head-on the problem that nuclear proliferation, and the equalization of power it created, might one day work against the United States, the Clinton administration in late 1993 announced that it planned to redefine "deterrence." No longer would the emphasis be on preventing the use of weapons of mass destruction, since that risk had declined markedly; instead, the focus would be on preventing the acquisition of those weapons. The announcement was followed up in September of the following year with a formal replacement of the MAD doctrine with "mutual assured safety" (MAS), a long-term program designed primarily to make Russia's military reductions irreversible by reducing not only the number of weapons themselves but also the technological and industrial infrastructures needed for nuclear weapons development. Through economic incentives and technological aid, steps were taken to dismantle what Soviet leader Mikhail Gorbachev called the "infrastructure of fear." In November 1997, in the first formal presidential directive on the actual employment of nuclear weapons since the Carter administration, President Clinton formally abandoned the Cold War tenet that U.S. military forces must be prepared to fight a protracted nuclear war. Nuclear weapons would still play an important deterrent role, but the emphasis on them would be reduced in keeping with the changing nature of the threats in the post–Cold War international environment.
While the role of nuclear weapons in the U.S. military posture was diminishing, the role of conventional weapons was growing. Confronting crises in the Balkans and the Middle East, the United States and its allies demonstrated that they could now project conventional military force with great effectiveness. Exploiting the so-called "revolution in military affairs" of superior intelligence information coupled with technologically advanced conventional weapons, the United States and NATO were able to strike with overwhelming conventional military force in a precise and controlled manner, leading to successful combat in both regions while suffering few casualties. Such capability, demonstrated convincingly and publicly, became an important part of U.S. efforts to confront and deter what the defense community called "asymmetrical threats," or threats from rogue nations or terrorist organizations.
The role played since 1945 by nuclear weapons and the deterrence strategies they bred remains a controversial issue. Many historians have argued that the very existence of nuclear weapons deterred the outbreak of another global war and preserved what the historian John Lewis Gaddis called "the long peace," while others have argued that other factors rendered major wars obsolete and that nuclear weapons were a largely irrelevant factor. Still others have argued that the sometimes absurd military postures that the existence of nuclear weapons encouraged greatly and needlessly increased the risk of global destruction and that the peace was maintained despite the existence of nuclear weapons. But whether a positive, negative, or irrelevant force, deterrence has made indelible impressions on the practice of foreign policy and the public imagination.
Alperovitz, Gar. Atomic Diplomacy: Hiroshima and Potsdam, The Use of the Atomic Bomb and the American Confrontation with Soviet Power. 2d ed. New York, 1985. Makes a controversial argument that impending confrontation with the Soviet Union influenced Truman's decision to use the bomb against Japan.
Brodie, Bernard, ed. The Absolute Weapon: Atomic Power and the World Order. New York, 1946. Various authors contribute to one of the earliest efforts to assess the implications of the atomic bomb for international affairs.
Bundy, McGeorge. Danger and Survival: Choices about the Bomb in the First Fifty Years. New York, 1988. A first-rate historical study of the impact of the bomb on U.S. foreign policy during the Cold War.
Buzzard, Anthony W. "Massive Retaliation and Graduated Deterrence." World Politics 8, no. 2 (January 1956): 228–237.
Dockrill, Saki. Eisenhower's New-Look National Security Policy, 1953–61. London, 1996.
Gaddis, John Lewis, et al., eds. Cold War Statesmen Confront the Bomb: Nuclear Diplomacy Since 1945. Oxford, 1999.
Garthoff, Raymond L. Soviet Strategy in the Nuclear Age. Rev. ed. New York, 1962.
George, Alexander L., and Richard Smoke. Deterrence in American Foreign Policy: Theory and Practice. New York, 1974. A groundbreaking work in the field of political science that uses historical case studies of ten nuclear crises that took place between 1948 and 1962 to examine how deterrence has been employed in times of crisis.
Gowing, Margaret. Independence and Deterrence: Britain and Atomic Energy, 1945–1952. New York, 1974. A thorough study of the early British nuclear program.
Herken, Gregg. The Winning Weapon: The Atomic Bomb in the Cold War, 1945–1950. New York, 1980.
Holloway, David. Stalin and the Bomb. New York, 1994. Uses formerly closed archives to examine the early Soviet nuclear program.
Kahn, Herman. On Thermonuclear War. Princeton, N.J., 1960. The most widely read of several important books by this influential nuclear strategist. Challenges policymakers and the public to "think the unthinkable" and anticipate a post–nuclear war world.
Kaplan, Fred M. The Wizards of Armageddon. New York, 1983. An excellent study of professional nuclear strategists.
Kaufman, William W., et al., eds. Military Policy and National Security. Princeton, N.J., 1956. One of the earliest rebuttals of the massive retaliation doctrine.
Kissinger, Henry A. Nuclear Weapons and Foreign Policy. New York, 1957. An influential rebuttal of massive retaliation that advocates the development of tactical nuclear weapons to make limited nuclear war viable.
Leffler, Melvyn P. A Preponderance of Power: National Security, the Truman Administration, and the Cold War. Stanford, Calif., 1992. A meticulously researched account of the national security policy of the Truman administration.
Mandelbaum, Michael. The Nuclear Question: The United States and Nuclear Weapons, 1946–1976. Cambridge, 1979.
Mueller, John. "The Essential Irrelevance of Nuclear Weapons." International Security 13, no. 2 (fall 1988): 55–79. Makes a controversial argument that nuclear weapons were largely irrelevant to keeping the peace during the Cold War.
Osgood, Robert E. Limited War Revisited. Boulder, Colo., 1979. A concise reevaluation of the notion of limited war.
Quester, George H. Deterrence Before Hiroshima: The Airpower Background of Modern Strategy. New York, 1966. The development of strategic bombing, with particular emphasis on the years from World War I to the end of World War II.
Rosenberg, David A. "The Origins of Overkill: Nuclear Weapons and American Strategy, 1945–1960." International Security 7 (spring 1983): 3–71. A meticulously researched account of American nuclear programs during the Truman and Eisenhower administrations.
Sagan, Scott D. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton, N.J., 1993. Examines several of the Cold War's "near misses" and the chances of accidental nuclear war.
Schelling, Thomas C. Choice and Consequence. Cambridge, Mass., 1984. Applies economic principles and game theory to deterrence.
Stromseth, Jane E. The Origins of Flexible Response: NATO's Debate over Strategy in the 1960s. Houndmills, U.K., 1987.
Taylor, Maxwell. The Uncertain Trumpet. New York, 1960. A rebuttal of massive retaliation that became extremely influential in the formulation of the Kennedy administration's defense program.
Trachtenberg, Marc. History and Strategy. Princeton, N.J., 1991. Important essays on various aspects of nuclear policy.
——. A Constructed Peace: The Making of a European Settlement, 1945–53. Princeton, N.J., 1999. Argues that nuclear weapons were a decisive factor in the peaceful division of Europe.
Williamson, Samuel R., Jr., and Steven L. Rearden. The Origins of U.S. Nuclear Strategy, 1945–1953. New York, 1993.
Wohlstetter, Albert J. "The Delicate Balance of Terror." Foreign Affairs 37, no. 2 (January 1959): 211–234. Argues that technological advances will ultimately destabilize deterrence.
See also Arms Control and Disarmament; Balance of Power; Cold War Evolution and Interpretations; Cold War Origins; Cold War Termination; Doctrines; Nuclear Strategy and Diplomacy; Post–Cold War Policy; Presidential Power; Superpower Diplomacy.
THE WORSE THE BETTER
"There is an immense gulf between the atomic and hydrogen bomb. The atomic bomb, with all its terrors, did not carry us outside the scope of human control or manageable events in thought or action, in peace or war. But when Mr. Sterling Cole, the Chairman of the United States Congressional Committee, gave out a year ago—17th February, 1954—the first comprehensive review of the hydrogen bomb, the entire foundation of human affairs was revolutionised, and mankind placed in a situation both measureless and laden with doom….
"I shall content myself with saying about the power of this weapon, the hydrogen bomb, that apart from all the statements about blast and heat effects over increasingly wide areas there are now to be considered the consequences of 'fall out,' as it is called, of wind-borne radio-active particles. There is both an immediate direct effect on human beings who are in the path of such a cloud and an indirect effect through animals, grass and vegetables, which pass on these contagions to human beings through food.
"This would confront many who escaped the direct effects of the explosion with poisoning, or starvation, or both. Imagination stands appalled. There are, of course, the palliatives and precautions of a courageous Civil Defense … but our best protection lies, as I am sure the House will be convinced, in successful deterrents operating from a foundation of sober, calm and tireless vigilance.
"Moreover, a curious paradox has emerged. Let me put it simply. After a certain point has been passed it may be said, 'The worse things get the better.' The broad effect of the latest developments is to spread almost indefinitely and at least to a vast extent the area of mortal danger. This should certainly increase the deterrent upon Soviet Russia by putting her enormous spaces and scattered population on an equality or near-equality of vulnerability with our small densely populated island and with Western Europe….
"Then it may well be that we shall by a process of sublime irony have reached a stage in this story where safety will be the sturdy child of terror, and survival the twin brother of annihilation."
DETERRENCE REDUCES COSTS
"In the face of this strategy [of Soviet expansion], measures cannot be judged adequate merely because they ward off an immediate danger. It is essential to do this, but it is also essential to do so without exhausting ourselves.
"When the Eisenhower administration applied this test, we felt that some transformations were needed.
"It is not sound military strategy permanently to commit U.S. land forces to Asia to a degree that leaves us no strategic reserves.
"It is not sound economics, or good foreign policy, to support permanently other countries; for in the long run, that creates as much ill will as good will.
"Also, it is not sound to become permanently committed to military expenditures so vast that they lead to 'practical bankruptcy.' …
"We need allies and collective security. Our purpose is to make these relations more effective, less costly. This can be done by placing more reliance on deterrent power and less dependence on local defensive power.
"This is accepted practice so far as local communities are concerned. We keep locks on our doors, but we do not have an armed guard in every home. We rely principally on a community security system so well equipped to punish any who break in and steal that, in fact, would-be aggressors are generally deterred. That is the modern way of getting maximum protection at a bearable cost.
"What the Eisenhower administration seeks is a similar international security system. We want, for ourselves and the other free nations, a maximum deterrent at bearable cost….
"The way to deter aggression is for the free community to be willing and able to respond vigorously at places and with means of its own choosing."
The narrow sense: fear of punishment. In a narrow sense, deterrence can be defined as the prevention of socially undesirable behavior by fear of punishment. A person who might otherwise have committed a crime is restrained by the thought of the unpleasant consequences of detection, trial, conviction, and sentence ("simple deterrence"). A distinction is often made between general deterrence, which signifies the deterrent effect of the threat of punishment, and special deterrence (or individual deterrence), which signifies the effect of actual punishment on the offender.
The basic phenomenon is the fear of punishment. This fear may be influenced by the experience of punishment. When an offender has been punished he knows what it is like to be prosecuted and punished, and this may strengthen his fear of the law. The experience may, however, work the other way. It is conceivable that the offender previously had exaggerated ideas of the consequences of being caught and now draws the conclusion that it was not as bad as he had imagined. In this case, the special deterrent effect of the punishment is negative. More important, probably, a person who has been convicted of a somewhat more serious crime, and especially one who was sentenced to imprisonment, will have less to fear from a new conviction, since his reputation is already tarnished. In practice, it will be difficult or impossible to isolate the deterrent effects of the prison experience from other effects of the stay in prison. What we can measure is how offenders perform after punishment, expressed in figures of recidivism.
The broad sense: the moral effects of criminal law. In a broad sense, deterrence is taken to include not only the effect of fear on the potential offender but also other influences produced by the threat and imposition of punishment. Criminal law is not only a price tariff but also an expression of society's disapproval of forbidden behavior, a fact influencing citizens in various ways. Most people have a certain respect for formal law as such. Moreover, the criminalization of a certain type of behavior may work as a moral eye-opener, making people realize the socially harmful character of the act ("the law as a teacher of right and wrong"). The moral condemnation expressed through the criminal law may also affect the moral attitudes of the individual in a less reflective way. Various labels are used to characterize these effects: the moral, the educative, the socializing, the attitude-shaping, or the norm-strengthening influence of the law. From the legislator's perspective, the creation of moral inhibitions is of greater value than mere deterrence, because the former may work even in situations in which a person need not fear detection and punishment. In the Scandinavian countries and Germany the moral component in general prevention is considered to be essential. For the moral effect of criminal law, the perceived legitimacy of the system, rooted in the application of principles of justice, proportionality, and fairness, is regarded as more important than severity of sentences.
General deterrence and general prevention. In continental literature general prevention is used as a technical term that denotes both the effect of fear and the moral effect of the criminal law. This is equivalent to general deterrence in the broad sense, but the term deterrence tends to focus on the effect of fear. Most American research papers on deterrence do not mention the question of definition but do in fact work with the broad concept, since they are concerned with all effects on crime rates of the system of criminal justice and make no effort to exclude effects produced through mechanisms other than fear.
Habituative effects of criminal law. Much law-abiding conduct is habitual, and the threat of punishment plays a role in this habit formation. It is sufficient to mention the response of drivers to traffic signals. In a broad sense deterrence can be taken to include also the habituative effects of the law. Habit formation is, however, a secondary phenomenon. For a habit to be established, there must first be compliance based on other sources, which may include fear and respect for the law. The habit is eventually formed through repetition of the law-abiding conduct.
A historical perspective
Historically, deterrence has been, along with retribution, the primary purpose of punishment. The deterrent purpose has often led to penalties that, to contemporary minds, seem cruel and inhuman. Capital punishment and corporal punishment were the backbone of the systems of criminal justice up to the late eighteenth century. Executions were made public spectacles, and cruel methods of execution were often invented in order to enhance the deterrent effect.
In the eighteenth century the writers of the classical school of criminal justice—notably Cesare Beccaria in Italy, Jeremy Bentham in England, and P. J. A. von Feuerbach in Germany—based their theory of criminal law on general deterrence. The central idea was that the threat of punishment should be specified so that in the mind of the potential lawbreaker the fear of punishment would outweigh the temptation to commit the crime. The penalty should be fixed by law in proportion to the gravity of the offense. The certainty of punishment was considered as more important than the severity of the punishment. According to the classical theory, the penalty in the individual case had as its primary function to make the threat of the law credible. Only occasionally did these writers mention the moral effects of the criminal law.
In the late nineteenth century and the first half of the twentieth century the idea of deterrence lost ground to the idea of treatment and rehabilitation. Criminologists and penologists voiced the view that the most important aim of punishment was to correct the offender and, if this proved impossible, to incapacitate him. Therefore, the penalty had to be adjusted to the needs of the individual offender. In the United States the indeterminate sentence was introduced. The idea of the indeterminate sentence is based on an analogy to medical treatment in a hospital. The offender should be kept as long as necessary in order to cure him, no shorter, no longer; and just as with a stay in a hospital, the duration should not be decided in advance but on the basis of the observation of progress. On the European continent, measures of safety and reform for certain categories of offenders were introduced, based on similar ideas. The idea of deterrence was often ridiculed as fictitious, outmoded, and the cause of much unnecessary suffering. The saying "Punishment does not deter crime" was often accepted as established truth.
Although these ideas were dominant in the professional literature up to the 1950s, legislators, prosecutors, and judges continued to have faith in deterrence. From the early 1960s a change in criminological thought began to take place and gradually gained momentum. Research into the differential effects of various sanctions led to great skepticism with regard to society's ability to rehabilitate offenders. It appeared that choice of sanction had very little effect when compared to the personality and background of the offender and to the social environment he went back to after his encounter with the machinery of justice. Moreover, it seemed that no one was able to tell when to release the offender in order to maximize his chances of a law-abiding life in the future. At least for the overwhelming majority of offenders, the hospital analogy does not work.
Two tendencies have emerged: a movement in favor of fixed sentences in proportion to the gravity of the offense, as demanded by the classical school of criminal law ("neoclassicism"); and a revival of interest in deterrence. When faith is lost in the idea of treatment and rehabilitation as the basis for a system of criminal sanctions, other aims of punishment come into focus. Up to 1965 the only empirical research in deterrence consisted of a few papers on the death penalty. Since the mid-1960s a series of books and a stream of research papers have been published on the subject, mainly in the United States, Canada, and Great Britain, but also in Germany, the Netherlands, and Scandinavia (see Beyleveld). Most research has been undertaken by either sociologists or economists. The economists, following the lead of Gary Becker, look upon the risk of punishment as a cost of crime and apply econometric methods to find out how a change in the price affects the rate of crime (Eide).
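The economists' expected-cost reasoning can be sketched in a few lines. The following toy model is only an illustration of the logic described above, not an implementation from Becker's paper, and all numbers are invented: the risk of punishment is treated as a price, so an offense is attractive only when its gain exceeds the probability of punishment multiplied by the severity of the sanction.

```python
# Hedged sketch of the Becker-style expected-cost model of deterrence.
# All figures are hypothetical and serve only to illustrate the cost-benefit logic.

def commits_crime(gain: float, p_punish: float, severity: float) -> bool:
    """An offender acts only when the gain exceeds the expected cost p * f."""
    expected_cost = p_punish * severity
    return gain > expected_cost

# Raising certainty (p) or severity (f) both raise the expected cost:
assert commits_crime(gain=100, p_punish=0.05, severity=1000)      # 100 > 50: deterrence fails
assert not commits_crime(gain=100, p_punish=0.20, severity=1000)  # 100 < 200: deterred by certainty
assert not commits_crime(gain=100, p_punish=0.05, severity=4000)  # 100 < 200: deterred by severity
```

On this stylized account, certainty and severity are interchangeable multipliers, which is precisely why the empirical finding that certainty matters more than severity (discussed below) requires qualifications beyond the simple price model.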
Empirical and ethical questions
In discussing deterrence one is confronted with two categories of questions. One category consists of empirical or factual questions: Does deterrence work, and if so, how well, in which fields, and under what circumstances? Another category consists of ethical questions: To what extent is the purpose of deterrence a valid moral basis for lawmaking, sentencing, and the execution of sentences? A penalty may be effective as a deterrent yet unacceptable because it is felt to be unjust or inhumane. The position on such questions as capital punishment, corporal punishment, and the length of prison sentences is dependent not only on views on efficacy but also on moral considerations. Even if it were possible to prove that cutting off the hands of thieves would effectively prevent theft, proposals for such a practice would scarcely win many adherents in the Western world today. Much of the discussion on deterrence has been of an emotional nature and has not separated the empirical questions from the value questions. Often people have let their views on empirical questions be heavily colored by their value preferences instead of basing them on a dispassionate scrutiny of the available evidence (Andenaes, 1974, pp. 41–44).
General deterrence: myth or reality?
The strongest basis for the belief in deterrence is the eminent plausibility of the theory from the viewpoint of common sense. That the foresight of unpleasant consequences is a strong motivating factor is a familiar experience of everyday life. It would be a bold statement that this well-known mechanism of motivation is of no importance in the decision to commit or not commit an offense. Most offenders, and even more so most potential offenders, are within the borders of psychological normalcy. There is no prima facie reason to assume that they are insensitive to negative inducements.
Historical experiences from police strikes and similar situations show that even a short breakdown of criminal justice leads to great increases in offenses such as burglary and robbery (Andenaes, 1974, pp. 16–18, 50–51). By introspection many know that the risk of detection and negative sanctions plays a role for their own compliance with rules about taxation, customs, drinking and driving, and other traffic offenses. It seems to be a universal experience that police regulations that are not enforced gradually cease to be taken seriously. Paradoxically, the consequences of police corruption can be mentioned as a demonstration of the deterrent impact that the criminal law has when the machinery of justice is working normally and properly (Andenaes, 1975, pp. 360–361). All available data indicate that organized crime flourishes most where the local police have been corrupted. Police corruption paralyzes enforcement and gives professional criminals a feeling of immunity from punishment. That crime flourishes when the criminal justice system is paralyzed through corruption is another way of stating that a criminal justice system that works normally does deter crime, or at least some forms of crime, to some degree.
It seems safe to conclude that criminal law and law enforcement play an indispensable role in the functioning of a modern, complex society. However, from a practical point of view, this insight is of limited value. Policymakers are not confronted with the choice of retaining or abolishing the whole system of criminal justice. The choices are of a much narrower kind. The legislator sometimes has the choice between criminalization or decriminalization of a certain type of behavior, such as homosexual conduct, abortion, pornography, or blasphemy. More often the choice is between a somewhat stricter or milder penalty or between somewhat higher or lower appropriations for the police or other control agencies. For the police, the prosecutor, the judge, and the prison administrator the choices are still more limited. The questions of practical importance do not refer to the total effects of criminal law but to the marginal effects of this or that change in the level of punishment or the allotment of resources (Zimring and Hawkins, pp. 7–8). These effects are difficult to foresee. Decisions on whether or not to change are often made on the basis of overly simplistic assumptions.
Factors in deterrence
Severity and credibility of the threat. According to common sense, the motivating force of the threat of punishment will normally increase with the severity of the penalty and the risk of detection and conviction. It is a fair assumption that most offenses would not have been committed if the potential offender foresaw a 50 percent risk of being detected and receiving a severe prison sentence. Even in this situation there would, of course, be exceptions: cases of psychopathological crime, crime under extreme emotional stress, certain political crimes, and so on.
Since Beccaria it has been generally accepted that certainty of punishment is more important than severity, and research gives some support for this assumption. Such a simple formula needs qualifications. For example, in the field of white-collar crime a fine may be considered merely a business expense, whereas a prison sentence, through its stigmatizing character, may act as a strong deterrent. But if the level of penalties is already high, it seems probable that further increases in severity will yield diminishing returns. Moreover, excessively severe penalties may be counterproductive by reducing the risk of conviction. When the penalties are not reasonably attuned to the gravity of the violation, the public is less inclined to inform the police, the prosecuting authorities are less disposed to prosecute, and juries are less apt to convict.
Experience in Finland in the postwar years indicates that the general level of sentencing has a limited influence on deterrence. At the beginning of the 1950s the prison rate in Finland was about four times higher than in the neighboring Scandinavian countries (Denmark, Norway, Sweden). Since then the Finnish authorities have systematically endeavored to reduce the use of prison. Through decriminalization of offenses (most importantly, public drunkenness), shorter sentences, more use of suspended sentences, community service, and heavy fines, the prison population has gradually decreased. In the 1990s it was on the same level as in the other Scandinavian countries, in which the prison rate has remained fairly stable (between 50 and 60 per 100,000 inhabitants).
Despite the great reduction of imprisonment in Finland the crime trend has been the same in all countries. The amount of crime has increased, but the curves are strikingly similar (Lappi-Seppälä, 1998). It should be added that the incapacitative effect of imprisonment plays a minor role in the Scandinavian countries as compared with the United States, which has much longer sentences and a prison rate that is at least ten times higher (about 650 per 100,000 inhabitants in 1998).
The problem of communication. The motivating effect of criminal law does not depend on the objective realities of law and law enforcement but on the subjective perception of these realities in the mind of the citizen. A change that is not noticed can have no effect. If we intend, for example, to increase the deterrent effect in a certain field by more severe sentences or increased police activity, a crucial question will be whether people will become aware of the change. This aspect did not attract much attention in the classical theory of deterrence. It seemed to be tacitly assumed that there would be an accord between objective facts and subjective perceptions. Survey research into public beliefs and attitudes has demonstrated that this is far from the case. Smaller changes tend to go unnoticed whether they tend toward increased severity or leniency.
Types of offenses. The importance of deterrence is likely to vary substantially, depending on the character of the norm being protected by the threat of punishment. Common sense tells one that the threat of punishment does not play the same role in offenses as different as murder, incest, tax fraud, shoplifting, and illegal parking. One distinction of importance is between actions that are immoral in their own right, mala in se, and actions that are morally neutral if they were not prohibited by law, mala prohibita. In the case of mala in se, the law supports the moral codes of society. If the threat of legal punishment were removed, moral feelings and the fear of public judgment would remain as powerful crime-prevention forces. In the case of mala prohibita the law stands alone; without effective legal sanctions the prohibition would soon be empty words. There are, however, great variations within each of the two groups. As Leslie Wilkins stated, "The average normal housewife does not need to be deterred from poisoning her husband, but possibly does need a deterrent from shoplifting" (p. 322). A realistic appraisal of the role of deterrence demands a thorough study of the specific offense and the typical motivation of violators.
Differences among persons. People are not equally responsive to legal threats. Some are easily deterred, others may lack the intellectual or emotional ability to adjust their behavior to the demands of the law. Children, the insane, and the mentally deficient are for this reason poor objects of deterrence. The same holds true for people who lack the willpower to resist the desires and impulses of the moment, even when realizing that they may have to pay dearly for their self-indulgence. Individuals who are well integrated into the social fabric have more to lose by conviction than those on the margin of society. When experts and political decision-makers discuss the deterrent impact of the threat of punishment, there is always a risk that they may draw unjustified conclusions on the basis of experience limited to their own social groups.
Conflicting group norms. The motivating influence of the criminal law may become more or less neutralized by group norms working in the opposite direction. One may think of religious groups opposing compulsory military service, organized labor fighting against a prohibition of strikes, or a racial minority fighting against oppressive legislation. In such cases there is a conflict between the formalized laws of the state and the norms of the group. Against the moral influence of criminal law stands the moral influence of the group; against the fear of legal sanction stands the fear of group sanction, which may range from the loss of social status to economic boycott, violence, and even homicide. Experience shows that the force of the group norm often prevails. In an atmosphere of alienation and antagonism, any attempt at law enforcement, even a well-justified and lawful arrest, may be the signal for an outbreak of violence and disorder, as was the case with the Watts riot of 1965 (President's Commission, pp. 119–120).
Methods of research
In spite of the great importance accorded deterrence in lawmaking and sentencing, deterrence remained a neglected field of research until about 1970, in part because of ideology and in part because of great methodological difficulties. In subsequent years research activity has been intense. Most of the research falls under the following categories.
Comparison over time. The most straightforward method of exploring the effects of a change in legislation or enforcement on the rate of crime is before-and-after research. The great difficulty in such research is to identify the impact of the change among all the other factors that have been involved at the same time. Only abrupt and major changes can be expected to give clear statistical evidence of the effects. Changes introduced in the criminal justice system may be accompanied by changes in the tendency of the victims to report the crime or by changes in the practice of crime recording by the police, so that the statistics are not comparable. These difficulties can, to some degree, be overcome by victimization studies undertaken both before and after the reform.
Perhaps the best-known example of before-and-after research was conducted in Great Britain in connection with the Road Safety Act of 1967, which made it an offense to drive with a blood alcohol concentration of 0.08 percent or more. The penalty is normally a fine and loss of driving license for one year on the first offense. From the day the new legislation went into effect, there was a considerable drop in highway casualties as compared with previous years. For the first three months casualties were 16 percent lower than in the preceding year, and deaths were down by 23 percent. For the night hours casualties were reduced by about 40 percent. Unfortunately, it seems that most of the effect has gradually been lost. As time passed it became increasingly difficult to isolate the effects of the law, but H. Laurence Ross's conclusion seems well founded: the benefits produced by the legislation had largely been canceled by the end of 1970 (p. 77).
According to Ross, the explanation of this declining effect lies in a lack of enforcement. The publicity accompanying the law had given the public exaggerated and quite unrealistic ideas about the risk of apprehension and conviction, but little effort was made to enforce the law. The police did not perceive the law as defining an important task, and gradually the public learned that it had overestimated the risk.
The crucial importance of the risk of detection in this area is convincingly demonstrated by the effects of the Finnish reform of drinking-and-driving legislation in 1977 (Andenaes, 1988, pp. 42–63). Before the reform Finland had the most severe sentences for drunken driving among the Scandinavian countries, with prison sentences of several months. After the reform the great majority of offenders got fines or suspended prison sentences. At the same time a fixed limit of 0.05 percent blood-alcohol concentration was established, and the amount of random breath tests of drivers was drastically increased, from about 10,000 in 1977 to about 700,000 in 1984. Roadside surveys of a representative sample of drivers showed that the proportion of motorists driving under the influence of alcohol after the reform had been reduced to about half. The number of alcohol-related accidents also had diminished, although not to the same degree. The main reason for this probably is that many alcohol-related accidents are caused by drivers who have serious alcohol problems and do not react to the threat of punishment in the same way as average drivers.
Comparison between geographic areas. A second method is to compare areas with differences in legislation, in sentencing, or in law enforcement, to see whether these differences are reflected in crime rates. This method was used in research on capital punishment as early as the 1920s, by comparing murder rates in retentionist and abolitionist states. Beginning in the late 1960s the method of geographical comparison has been widely used for different types of crime, by both sociologists and economists, who have employed various statistical techniques in order to discover the effects of differences in certainty and severity of sanction. Most of the American studies use the individual states as units of comparison, are based on official statistics, and are limited to the seven index crimes (homicide, assault, rape, robbery, burglary, larceny, and auto theft, as enumerated by the Federal Bureau of Investigation).
The research has almost invariably found an inverse relationship between certainty of punishment (or rather certainty of imprisonment) and crime rates. Some, but not all, of the researchers have found a similar but mostly lower relationship between severity of punishment (normally measured in length of prison sentences) and crime rates. The findings are, however, difficult to interpret. A few points should be mentioned:
- Many of the studies do not try to distinguish between effects of deterrence and effects of incapacitation. The effects they ascribe to deterrence may in fact be a result of the incapacitation of offenders sentenced to prison.
- A correlation between crime rates and severity and certainty of sanction does not in itself say anything about the direction of causality. Crime rates may influence severity or certainty of sanction as well as the other way around. The correlation may also be the result of a third factor, for example, the normative climate in a society. Few of the studies tackle these problems in a wholly satisfactory way.
- The statistical equations have certain built-in assumptions that are not necessarily true.
- If a study does not find a correlation between crime rates and severity or certainty of sanction, this does not prove that the differences in severity or certainty are without effect but only that in the given sample the effect is not of a sufficient magnitude to be statistically demonstrable.
- As noted previously, official crime statistics fail to account for variations in rates of victim reporting and police recording of offenses. A low crime-reporting and/or recording rate tends to simultaneously lower the official crime rate, while raising the apparent rate of imprisonment; a high reporting or recording rate has the opposite effect. These variations naturally tend to produce a spurious inverse relationship between official crime rates and imprisonment rates.
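The reporting-rate distortion described in the last point can be made concrete with a small arithmetic sketch. All figures here are hypothetical: two areas with identical true crime and identical numbers of imprisoned offenders diverge in their official statistics purely because victims report crime at different rates.

```python
# Hypothetical illustration of how victim reporting rates alone can create
# a spurious inverse relation between official crime rates and apparent
# certainty of imprisonment. True crime and imprisonments are identical.

def official_stats(true_crimes: int, prisoners: int, reporting_rate: float):
    recorded = true_crimes * reporting_rate   # official (recorded) crime count
    certainty = prisoners / recorded          # apparent imprisonments per recorded crime
    return recorded, certainty

# Area A reports 80% of crimes; Area B reports only 40%.
rec_a, cert_a = official_stats(true_crimes=1000, prisoners=50, reporting_rate=0.8)
rec_b, cert_b = official_stats(true_crimes=1000, prisoners=50, reporting_rate=0.4)

assert rec_a > rec_b    # B appears to have less crime...
assert cert_a < cert_b  # ...and a higher apparent certainty of imprisonment,
                        # although nothing about crime or punishment differs.
```

A cross-sectional study comparing such areas would find that higher "certainty" goes with lower "crime," even though the deterrent conditions in the two areas are identical.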
For these and other reasons the comparative research should not be accepted uncritically. The highly technical character of such research also constitutes a barrier against practical application until a high degree of agreement among researchers is reached.
Survey research. Survey research can be of interest to the theory of deterrence in many ways. The simplest form of such research consists in collecting data on public knowledge and beliefs about the system of criminal justice. Studies have generally found that such knowledge is low and haphazard. Comparisons over time or between geographical areas of such surveys can be used to explore how perceptions of severity and certainty of punishment vary with actual severity and certainty.
The survey method seems especially suitable for research into the moral effects of criminal law. Attitude surveys in England before and after introduction of the blood-alcohol limit (Sheppard) showed that the new statute and the accompanying publicity did not have any tangible effect on attitudes to drinking and driving. In contrast, a survey study from Norway, where similar but stricter legislation had been in force for forty years, indicated that the law had been successful in reaching the citizens with its message (Hauge). Thus, the two studies taken together give support to the view that the moral effect of the law depends on a long-term process.
Limits of research
The stream of research papers and the accompanying theoretical discussions have above all clarified the methodological problems and illustrated the limitations of different research methods. The research has produced fragments of knowledge that can be of use to check and supplement commonsense reasoning, which will have to be relied on for a long time to come. There is a long way to go before research can give quantitative forecasts about the effect on crime rates of contemplated changes in the system. Some researchers have tried to quantify their findings. The best-known example is Isaac Ehrlich's controversial work on the effects of capital punishment on the murder rate. According to Ehrlich, statistics on the use of capital punishment in the United States in the years from 1933 to 1969 indicated that each execution in this period had prevented seven to eight murders. The study has been severely criticized (see Beyleveld), and such quantitative assessments seem clearly premature.
It may be asked how far the problems of deterrence are at all researchable. The long-term moral effects of criminal law and law enforcement are especially hard to isolate and quantify. Some categories of crime are so intimately related to specific social situations that generalizations of a quantitative kind are impossible. One may think of race riots, corruption among politicians and public employees, and many types of white-collar crime. An inescapable fact is that research will always lag behind actual developments. When new forms of crime come into existence, as did hijacking of aircraft or terrorist acts against officers of the law, there cannot possibly be a body of research ready as a basis for the decisions that have to be taken. Common sense and trial and error have to give the answers.
Deterrence and public sentiment
Most serious students of crime and criminal justice probably would agree that the fluctuations in crime rates have more to do with social and economic changes than with changes in criminal law. However, the limited role of criminal justice has not become common knowledge. It seems that politicians as well as the general public tend to overestimate the deterrent effect of criminal law on crime rates. Moreover, in the political struggle more votes are won by promising to be tough on crime than by taking a moderate attitude. A complicating factor is that the invocation of deterrence may be a cloak for retributive feelings. This is most obvious with regard to the death penalty. In this field public sentiment in the United States contrasts sharply with that of the rest of the Western world.
See also Capital Punishment: Morality, Politics, and Policy; Punishment; Sentencing: Alternatives.
——. "General Prevention Revisited: Research and Policy Implications." Journal of Criminal Law and Criminology 66 (1975): 338–365.
Becker, G. S. "Crime and Punishment: An Economic Approach." Journal of Political Economy 76 (1968): 168–217.
Beyleveld, Deryck. A Bibliography on General Deterrence. Aldershot, Hampshire, U.K.: Saxon House, 1980. The bibliography also gives summaries of, and useful comments on, the included studies.
Blumstein, Alfred; Cohen, Jacqueline; and Nagin, Daniel, eds. Deterrence and Incapacitation: Estimating the Effects of Criminal Sanctions on Crime Rates. Washington, D.C.: National Academy of Sciences, 1978.
Ehrlich, Isaac. "The Deterrent Effect of Capital Punishment: A Question of Life and Death." American Economic Review 65 (1975): 397–417. For full references and commentaries to the controversy, see Beyleveld, pp. 184–201, 382–385.
Eide, Erling. Economics of Crime: Deterrence and the Rational Offender. Amsterdam: Elsevier Science B.V. (North-Holland), 1994.
Gibbs, Jack P. Crime, Punishment, and Deterrence. New York: Elsevier, 1975.
Hauge, Ragnar. "Drinking-and-Driving: Biochemistry, Law, and Morality." Scandinavian Studies in Criminology 6 (1978): 61–68.
Lappi-Seppälä, Tapio. "General Prevention—Hypotheses and Empirical Evidence." Ideologi og empiri i kriminologien. Rapport fra NSfKs 37. forskerseminar, Sverige (1995): 136–159.
——. Regulating the Prison Population. Experience from a Long-Term Policy in Finland. Helsinki: National Research Institute of Legal Policy, 1998.
President's Commission on Law Enforcement and Administration of Justice, Task Force on Assessment of Crime. Task Force Report: Crime and Its Impact—An Assessment. Washington, D.C.: The Commission, 1967.
Ross, H. Laurence. "Law, Science, and Accidents: The British Road Safety Act of 1967." Journal of Legal Studies 2 (1973): 1–78.
Sheppard, D. The 1967 Drink-and-Driving Campaign: A Survey among Drivers. Road Research Laboratory Report LR230. Crowthorne, Berkshire, U.K.: Ministry of Transport, 1968.
von Hirsch, Andrew; Bottoms, Anthony E.; Burney, Elizabeth; and Wikström, Per-Olof. Criminal Deterrence and Sentence Severity: An Analysis of Recent Research. Oxford, U.K.: Hart Publishing Ltd., 1999.
Wilkins, Leslie T. "Criminology: An Operational Research Approach." In Society: Problems and Methods of Study. Edited by A. T. Welford. London: Routledge & Kegan Paul, 1962. Pages 311–337.
Zimring, Franklin E., and Hawkins, Gordon J. Deterrence: The Legal Threat in Crime Control. Foreword by James Vorenberg. Chicago: University of Chicago Press, 1973.
Deterrence is a military strategy in which one actor attempts to prevent an action by another actor by means of threatening punishment if the action is undertaken. Deterrence is, in essence, a threat to use force in response to a specific behavior. While deterrence is an inherently defensive strategy, it does not involve defense; that is, the deterring party does not actively protect its assets or try to prevent its opponent from taking the action, but rather threatens the use of violence to convince the opponent not to act in the first place.
Deterrence can best be understood with reference to the “3 Cs”: capability, communication, and credibility. Any deterrent threat must meet all three criteria to succeed. Capability refers to whether the actor issuing the deterrent threat is able to carry it out; the ability to deter successfully thus depends to some degree on the power of the deterring actor. Capability is generally the most straightforward of the three criteria, as states typically match deterrent threats to their extant military capabilities. States can, however, bluff by threatening actions that require capabilities they do not possess, though such subterfuge and secrecy tend to undermine the efficacy of a deterrent threat.
In order for deterrence to work, a state must communicate its threats. If a state does not know that an action is proscribed, it cannot be deterred from taking that action. Communication is therefore essential if states are to know what actions they are not supposed to take, as well as what will happen if the action is taken; good lines of communication are crucial. The “hot line” between the Soviet Union and the United States during the cold war served this function well. With deterrence, the goal is to avoid armed conflict, and communication is vital to create boundaries and reveal expectations.
The final C, credibility, is perhaps the most difficult criterion to meet. Deterrence is, in a sense, a fundamentally irrational action, because the threat is carried out only after the violation occurs. Once the forbidden action is taken, it does not necessarily make sense to carry out the threat that was intended to deter that action in the first place. This is similar to the economic notion of “sunk costs”: costs that have already been incurred should not figure in decisions about future behavior. States therefore have to work very hard at establishing their credibility, especially in situations of extended deterrence (explained below), and they often try to “tie their hands,” meaning that the decision to carry out the deterrent threat is made automatically. During the cold war, U.S. soldiers stationed in West Germany had no chance of repelling a Soviet invasion of Western Europe. Instead, the troops served as a “tripwire,” ensuring that Americans would be killed in any Soviet invasion and increasing the odds that the United States would come to the defense of Western Europe, thereby making the American deterrent threat more credible.
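The decision calculus implied by the 3 Cs can be sketched as a simple expected-value comparison. The function and all numbers below are hypothetical illustrations, not part of the source analysis: capability and communication act as gates, while credibility scales the cost the challenger expects to pay.

```python
# Illustrative sketch of the deterrence calculus implied by the "3 Cs".
# All quantities are hypothetical.

def is_deterred(gain, damage, capable, communicated, credibility):
    """Challenger refrains only if the threat is capable and communicated,
    and its expected cost (credibility x damage) exceeds the expected gain."""
    if not (capable and communicated):
        return False  # an incapable or uncommunicated threat deters nothing
    return credibility * damage > gain

# A low-credibility threat fails to deter the challenger...
assert is_deterred(gain=10, damage=100, capable=True,
                   communicated=True, credibility=0.05) is False
# ...while a tripwire that raises credibility makes the same threat work.
assert is_deterred(gain=10, damage=100, capable=True,
                   communicated=True, credibility=0.5) is True
```

The tripwire troops described above work precisely by raising the `credibility` term: they do not change the damage threatened, only the challenger's estimate that the threat will actually be carried out.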
There are two fundamental types of deterrence: central and extended. Central deterrence occurs when a state attempts to deter attacks against itself, its nationals, or other intrinsic assets. In extended deterrence, a state attempts to prevent attacks against an ally or another third party. Credibility is usually easier to establish in central deterrence, because a state's resolve to protect its own assets is rarely in doubt. Credibility is much more difficult to build in extended deterrence, because it is harder to believe that the deterring state will risk war to protect an ally.
Deterrence can also be broken down into two strategic categories: denial and punishment. In a denial strategy, also known as counterforce deterrence, the deterrent threat is targeted at the enemy's military and political assets, such as military bases, command and control facilities, and seats of government. The purpose of the deterrent is, therefore, to prevent the enemy from achieving whatever goal it seeks through the use of force. If the opponent correctly reads the deterrent threat as sufficiently reducing the likelihood of obtaining the desired outcome, then the action will not be taken and deterrence will succeed. Flexible response, discussed below, is an example of a denial strategy.
In punishment, or countervalue, deterrence, the deterrent threat is targeted at the enemy’s “soft” targets, such as population centers or industrial capabilities. The aim of punishment deterrence is to threaten such a high cost to the fabric of the opponent’s society that the action in question will no longer be worth the cost. Both massive retaliation and mutual assured destruction are illustrations of punishment strategies.
While deterrence can be based on both nuclear and conventional forces, deterrence strategy can best be demonstrated by examining the evolution of U.S. nuclear strategy over time. The first formulation of nuclear deterrence was “massive retaliation,” a punishment strategy articulated by John Foster Dulles, the U.S. secretary of state under President Dwight Eisenhower. Here, the United States reserved the right to respond to any military provocation with nuclear weapons, and in particular with much greater force than the original attack. The United States was thus relying on its superior nuclear forces to deter the numerically superior Soviet Union from invading Western Europe. The main problem was credibility against minor threats: threatening full-scale nuclear war in response to a small provocation was not believable. As the Soviet nuclear arsenal became more capable of matching that of the United States, massive retaliation was replaced, under the Kennedy administration, with “flexible response,” a denial strategy that created a menu of responses that could be tailored to a specific action. The intent was to make the threatened use of nuclear weapons more credible. However, strategists concluded that any use of nuclear weapons could escalate into a large-scale nuclear exchange, and flexible response was discarded in favor of “mutual assured destruction,” or MAD, a punishment strategy that relied on the ability of each nation to destroy the other completely.
Deterrence is a critical part of the strategic arsenal of states because nations generally seek to avoid armed conflict and war. The need to maintain a strong deterrent posture can also inhibit states from achieving other political goals. For example, there was great resistance to the Israeli withdrawals from Lebanon in 2000 and from Gaza in 2005 because of fears that these actions would undermine Israeli deterrence capabilities. On the whole, however, the years since 1950 have been something of a triumph for deterrence theory, and for nuclear deterrence in particular, as conflict between the major powers has been avoided.
SEE ALSO Deterrence, Mutual; Military; Violence
Freedman, Lawrence. 2004. Deterrence. Cambridge, U.K.: Polity Press.
Sagan, Scott, and Kenneth Waltz. 1995. The Spread of Nuclear Weapons: A Debate. New York: W.W. Norton.
As the release and utilization of energy from atomic nuclei have challenged and intrigued physical scientists, so behavioral and social scientists have been intrigued and challenged by the intrusion of this energy into the domain of their concern. Awareness of the potential destructiveness of weapons employing nuclear energy has prompted extensive consideration of their impact on international relations.
In its contemporary usage, the concept “deterrence” refers to hypothesized effects of nuclear weapons technology on the set of alternatives from which national policy makers choose their courses of action. By extension, these “effects” affect the conduct of international relations.
The set of alternatives from which national leaders make their choices has traditionally included among its elements the “resort to war” as an instrument of national policy; the hypothesized effects of nuclear weapons are relevant to this subset of foreign policy alternatives.
“Deterrence” refers to the attempt by decision makers in one nation or group of nations to restructure the set of alternatives available to decision makers in another nation or group of nations by posing a threat to their key values. The restructuring is an attempt to exclude armed aggression (resort to war) from consideration.
The fundamental deterrence hypothesis is: If the threat to values is sufficiently large, the exclusion of armed aggression from consideration is probable.
This hypothesis, taken together with a subsidiary hypothesis—nuclear weapons pose a sufficiently large threat to values—yields the deduction: Nuclear weapons make probable the rejection of armed aggression as a potential policy alternative.
The ubiquity of this syllogism in the deterrence literature (Brody 1960; Lefever 1962, pp. 313–332; Halperin 1963, pp. 133–184) argues for the examination of the postulates on which it is founded. Three central assumptions appear to underlie this syllogism; they may be classified under the following themes: (1) the rationality of decision makers; (2) the unidimensionality of threat and of response to threats; and (3) the constancy of sets of policy alternatives (see Deutsch 1963, pp. 71–72, for a critique of a somewhat different set of deterrence theory assumptions).
The assumption of rationality. The rationality of decision makers posited in the deterrence literature is both a norm against which behavior can be evaluated and a prescriptive guide to choice. The rational decision maker, in deterrence theory, is presumed to avoid the resort to war in those situations in which the cost anticipated from aggression is greater than the gain expected from such an action.
This notion of rationality concentrates on avoidance behavior; it predicts the rejection of alternatives where cost exceeds gain. Empirical studies on foreign policy decision making are not abundant, but those that have been done raise doubts about this characterization of the policy process (Snyder & Paige 1958; Holsti 1962). In the light of research on domestic and foreign policy making, this postulate of deterrence theory is of dubious validity.
Conceptions of threat. The fundamental deterrence syllogism contains implicit assumptions about the nature of threat and about the relationship of threat to deterrence.
Threat is presumed to be a simple function of destructive capacity; the greater the destructive capability, the greater the threat. A more complex view of threat (for example, Singer 1958) calls this conception into question unless “destructive capacity” is construed very broadly—beyond numbers of weapons, warhead size (yield), and accuracy.
Experiments on the relationship of power to threat indicate that the ambiguity (or, conversely, the clarity) of the threat is closely related to the amount of threat experienced (Cohen 1959).
It has been suggested elsewhere (Brody 1963, pp. 696–697) that the threatening interaction can be conceived of as being determined by three factors: (1) perception by one party of the other party’s hostile intentions; (2) perception of the other party’s capability of inflicting damage to the perceiver; and (3) the credibility of the other party’s declaratory policy. If threat inhered only in capability (and we ignored hostility and credibility), we would conclude that adversaries and allies are equally threatened by the same weapons systems.
Complex conceptions of threat call into question not only the proposition that threat varies uniformly with destructive power but also many of the assumptions about the relationship of threat to deterrence. The literature is replete with suggestions that the effectiveness of deterrence is a direct function of the amount of destructive capability. Wohlstetter (1959) has refined this conception by pointing out that deterrence is more properly conceived of as a function of the amount of capability potentially remaining after an attack has been absorbed. Both of these conceptions assume that the threatened decision maker will be “deterred,” that is, that he will abandon even the consideration of the course of action that worried the threatener.
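Wohlstetter's refinement can be illustrated with a toy calculation. The `second_strike` helper and all figures below are invented for illustration; the point is only that deterrent strength depends on surviving capability, not total capability.

```python
# Toy illustration (invented figures) of Wohlstetter's point: deterrence
# rests on the capability expected to remain after absorbing a first strike.

def second_strike(force, survivability):
    """Retaliatory capability surviving an enemy first strike."""
    return force * survivability

# A large but exposed force can leave a weaker second strike than a
# smaller, well-protected one.
exposed = second_strike(force=1000, survivability=0.05)
hardened = second_strike(force=300, survivability=0.60)
assert hardened > exposed
```

On this view, measures such as hardening silos or putting missiles on submarines raise the survivability factor, and can matter more to deterrence than simply enlarging the force.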
“Credibility” has received attention from deterrence theorists (Kaufmann 1956, pp. 12–38), but as a quality of weapons rather than of relations among nations. However, the possibility of an aggressive response to a credible threat receives much less attention (for example, Kahn 1963). Psychologists, on the other hand, have often found aggressive responses to threat (for example, McNeil 1959). This suggests at least the potential for such responses in threatening international situations.
Milburn (1959), drawing on psychological research, proposes that we characterize threats on a number of dimensions (for example, symbolic–concrete and clear–vague); however, the relationship between types of threats, characterized on these dimensions, and the inhibition of behavior (deterrence) or excitation of behavior (provocation) remains without empirical research, despite the salience of such knowledge for deterrence theory.
Availability of policy alternatives. Implicit also in much of the literature on deterrence is the assumption that alternatives to the resort to war are available to and perceived by decision makers, irrespective of the international situation.
The validity of this proposition is difficult to establish: the historical record is rich with examples of national leaders who, in intense crises, “saw no way out” and for whom war became the only viable alternative. On the other hand, despite extremely intense crises, decision makers in the nuclear-armed nations apparently have not felt that their policy alternatives were reduced to one—war.
Contradictory hypotheses can be entertained to resolve this paradox: Because of their destructiveness, nuclear weapons may have transformed the international system, thus making the prenuclear historical examples simply inapplicable. However, it is equally plausible that the international system has moved onto a plateau of high intensity (McClelland 1961)—the cold war—and that increases in tension (relative to this base), comparable to historical crises, have not yet been experienced by leaders in one nuclear nation when confronted by another. Which hypothesis will stand the test of research cannot be forejudged, but an answer is needed to strengthen the conceptual foundations of deterrence theory.
Deterrence strategies. Despite the lack of confirmation of the premises on which elemental deterrence theory is based, there is widespread a priori acceptance of them. The dialogue, which has produced a voluminous literature, has largely been concerned with how best to deter; thus, the debate has been primarily strategic rather than social scientific.
An important impetus to this debate has been the rapidly changing weapons technology; each new capability has found spokesmen arguing the necessity of including it in the considerations affecting strategic thought. Technology has tended to lead strategy, bringing an evanescent quality to the deterrence literature.
The debate among deterrence strategists can be characterized by examining the various positions along three dimensions: (1) the mission of the deterrent; (2) the means by which deterrence is to be accomplished; and (3) the values threatened and to be threatened.
There is little disagreement that a goal of foreign policy is the dissuasion of other nations from committing aggression of any kind—from strategic nuclear attack to guerrilla warfare. Debate arises over how much of this spectrum can be deterred with nuclear weapons.
Glenn Snyder (1961) provides a useful distinction between the missions of denying access to territory and of punishing aggression (that is, of retaliation); weapons appropriate to one mission may not be suited to the other. Thus, for example, the mission of denying western Europe to Soviet forces, it is argued, requires different weapons than does the mission of retaliating against the Soviet homeland, should the Soviet leaders choose to attack. The counter to this argument, stemming from the strategic doctrines of Douhet (Brodie 1959), asserts that the massive threat inherent in nuclear retaliatory capability will deter tactical as well as strategic aggression.
Those who argue for specific weapons for specific missions—the strategy usually called “graduated deterrence”—point to the ineffectiveness of strategic bombing as a deterrent in World War II (Blackett 1962) and to the reduced credibility of an all-purpose deterrent after the Soviet Union emerged as a nuclear power.
This debate overlaps the dialogue about the means by which deterrence can best be accomplished. The strategists who advocate graduated deterrence generally argue for the limitation of strategic capability at the minimum needed to deter (Morgenstern 1959). The logic of this position relies heavily on the invulnerability to attack of individual units of the deterrent force. The critics of minimum deterrence point out, however, that the continuation of this invulnerability in view of the growth of weapons technology is too uncertain for so fundamental an element of policy (Kahn 1960; 1963). The advocates of minimum deterrence argue that sustained effort in producing weapons beyond the minimum (in an attempt to guarantee at least “statistical invulnerability” of the force as a whole) is itself a stimulus to the search for countermeasures and to the uncontrolled stockpiling of arms, that is, to arms races.
Proponents of graduated deterrence argue that the absence of military capability to counter a particular lower-level or nonnuclear threat, for example, insufficient conventional forces to deny territory in western Europe to the Soviet Army, creates an unstable situation fraught with the danger of escalation to strategic nuclear war. It is in escalation, that is, in resort to weapons with much greater destructive capacity than the weapons employed by the attacking side, that deterrence theorists see the greatest likelihood of limited wars becoming general nuclear wars (Schelling 1960; Halperin 1963). Moreover, the expansion of limited conflicts is often seen as the most probable cause of general war.
Concentration on the deterrence of general war has led to a debate about what is the best object of threat: Is the threatened destruction of cities more effective than the threatened destruction of military targets in precluding resort to war?
The dual nature of weapons—they can be used to fight as well as to threaten—has brought an element of confusion to the debate. If a war is to be fought, response weapons capable of destroying military capability may be most effective in limiting damage to targets of high value to the responding nation (Kahn 1963). On the other hand, it is contended that producing this amount of weaponry (because there are many more well-defended military targets than civilian targets) could create an unstable arms race that would make war more likely.
The advocates of minimum deterrence have tended to opt for targeting population and industrial centers; the opposition to this point of view has tended to argue for preserving the option of retaliating against any target, civilian or military. The second policy seems to be characteristic of United States military posture; the first seems to have been adopted by the Soviet Union.
Deterrence and international relations. The lively debates among deterrence strategists have had a salutary effect on the field of international relations. Social scientists have expressed uncertainty about the validity of the principles on which deterrence theory rests and some skepticism about assertions concerning the way nations act and are likely to act; from this doubt and questioning, professional students of international politics have begun to think of the relationship of military technology to foreign policy as a significant area of research. Moreover, the subject matter has brought together scholars from all the social sciences and has forced interdisciplinary communication.
The strategic debates will continue as military technology continues to change, but the conceptual foundations laid by the work of concerned social scientists are the beginnings of the reintegration of military strategy into the study of international relations; in this context we can expect the patient development of an empirical theory of deterrence.
Richard A. Brody
Blackett, Patrick M. S. 1962 Studies of War: Nuclear and Conventional. New York: Hill & Wang.
Brodie, Bernard 1959 Strategy in the Missile Age. Princeton Univ. Press.
Brody, Richard A. 1960 Deterrence Strategies: An Annotated Bibliography. Journal of Conflict Resolution 4:443–457.
Brody, Richard A. 1963 Some Systemic Effects of the Spread of Nuclear Weapons Technology: A Study Through Simulation of a Multi-nuclear Future. Journal of Conflict Resolution 7:663–753.
Cohen, A. R. 1959 Situational Structure, Self-esteem, and Threat Oriented Reactions to Power. Pages 35–52 in Dorwin Cartwright (editor), Studies in Social Power. Ann Arbor: Univ. of Michigan, Institute for Social Research.
Deutsch, Karl W. 1963 The Nerves of Government: Models of Political Communication and Control. New York: Free Press.
Halperin, Morton H. 1963 Limited War in the Nuclear Age. New York: Wiley. → See especially the bibliography on pages 133–184.
Holsti, Ole R. 1962 The Belief System and National Images: A Case Study. Journal of Conflict Resolution 6:244–252.
Kahn, Herman (1960) 1961 On Thermonuclear War. 2d ed. Princeton Univ. Press.
Kahn, Herman 1963 Strategy, Foreign Policy, and Thermonuclear War. Pages 43–70 in Robert A. Goldwin (editor), America Armed: Essays on United States Military Policy. Chicago: Rand McNally.
Kaufmann, William W. (editor) 1956 Military Policy and National Security. Princeton Univ. Press.
Lefever, Ernest W. (editor) 1962 Arms and Arms Control: A Symposium. New York: Praeger. → See especially the bibliography on pages 313–332.
McClelland, Charles A. 1961 The Acute International Crisis. World Politics 14:182–204.
McNeil, Elton B. 1959 Psychology and Aggression. Journal of Conflict Resolution 3:195–293.
Milburn, Thomas W. 1959 What Constitutes Effective Deterrence? Journal of Conflict Resolution 3:138–145.
Morgenstern, Oskar 1959 The Question of National Defense. New York: Random House. → A paperback edition was published in 1961 by Vintage.
Schelling, Thomas C. 1960 The Strategy of Conflict. Cambridge, Mass.: Harvard Univ. Press.
Singer, J. David 1958 Threat-perception and the Armament-tension Dilemma. Journal of Conflict Resolution 2:90–105.
Singer, J. David 1962 Deterrence, Arms Control, and Disarmament. Columbus: Ohio State Univ. Press.
Snyder, Glenn H. 1961 Deterrence and Defense: Toward a Theory of National Security. Princeton Univ. Press.
Snyder, Richard C.; and Paige, Glenn D. 1958 The United States Decision to Resist Aggression in Korea: The Application of an Analytical Scheme. Administrative Science Quarterly 3:341–378.
Wohlstetter, Albert 1959 The Delicate Balance of Terror. Foreign Affairs 37:211–234.
For deterrence to operate, two conditions must exist. First, A must possess an effective coercive strategy—some combination of negative and positive sanctions large enough to shift B's evaluation of the desirability of a particular action. Negative sanctions for noncompliance may be of three sorts: denial of benefits; retaliation; or punishment. Second, A must be able credibly to commit itself to carrying out its effective coercive strategy. Because imposing negative or positive sanctions is unlikely to be cost‐free for A, A's capacity credibly to commit itself may be problematic. Credible commitment to threats and promises can be established in three ways: by taking steps, ex ante, to ensure that the costs of failing to carry out threats and promises exceed the costs of carrying them out; by arranging for the threats and promises to be carried out automatically (as in “Dr. Strangelove's” fictional doomsday machine); or by ensuring, ex ante, that decisions to execute sanctions will be made irrationally, without due attention to costs and benefits.
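The commitment problem described above can be sketched numerically. The following is a hedged illustration with invented costs, showing why a threat that is costly to execute is hollow unless A raises, ex ante, the cost of backing down:

```python
# Hedged illustration with invented costs: the ex post logic of carrying
# out a deterrent threat, and the effect of "tying one's hands" ex ante.

def will_execute(cost_of_executing, cost_of_backing_down):
    """Ex post, A carries out its threat only if reneging costs more
    than execution does."""
    return cost_of_backing_down > cost_of_executing

# Without commitment, executing the sanction costs A more than reneging,
# so B should not believe the threat:
assert will_execute(cost_of_executing=10, cost_of_backing_down=2) is False
# Raising the cost of backing down ex ante (reputation, treaty, automaticity)
# restores credibility:
assert will_execute(cost_of_executing=10, cost_of_backing_down=25) is True
```

The three commitment devices listed above map onto this sketch: the first raises `cost_of_backing_down`, while automaticity and deliberate irrationality remove the ex post choice from the calculation altogether.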
Though both are exercises in coercion, deterrence differs from compellence in what A demands of B. In deterrence, A seeks to convince B not to undertake particular actions. In compellence, A seeks to force B to undertake particular actions. The distinction is between coercion aimed at preserving the status quo and coercion aimed at changing it. Deterrence is likely to be easier to accomplish than compellence because deterrence does not involve a deadline for action and is less likely to involve a visible and humiliating act of compliance, and because, whereas deterrence simply maintains the status quo, in compellence it is unclear where A's demands will end once B begins to make concessions.
Deterrence and compellence both involve coercive uses of power by A to achieve its goals indirectly, by obtaining B's compliance. They differ from direct uses of power aimed at achieving A's desired outcome regardless of B's behavior. This difference yields the distinction between deterrence and defense. Deterrence aims to reduce or eliminate B's interest in undertaking certain actions, and its success rests on A's capacity credibly to commit itself to harm B. Defense aims to reduce or eliminate B's capacity to hurt A or A's interests: its success rests on A's capacity to disarm, defeat, or protect against B. A's ability to limit or eliminate B's physical capacity to impose pain on A is irrelevant to deterrence, but is the essential element of defense. Measures aimed at defense may be preemptive (that is, may involve destroying or neutralizing B's capabilities before B has an opportunity to use them); active (defeating, repulsing, or blunting B's actions); or passive (protecting items of value against the consequences of B's successful actions).
The distinction between deterrence and defense is evident in alternative Cold War strategies developed for dealing with the possibility of a Soviet nuclear attack. The Assured Destruction and Mutual Assured Destruction (MAD) doctrines enunciated by Secretary of Defense Robert S. McNamara, and the various strategies of controlled nuclear retaliation developed after the early 1960s, reflect the logic of deterrence: they acknowledged the vulnerability of American society to a Soviet attack, but aimed to protect the territory of the United States by credibly committing it to exact appropriate retribution. By contrast, active defenses like the proposed Sentinel thin area defense antiballistic missile (ABM) program of the late 1960s, or broad missile defenses like those envisioned in President Ronald Reagan's 1984 Strategic Defense Initiative (SDI), reflect the idea of defending against, rather than deterring, an attack.
Though deterrence has always coexisted with defense as an element in American military policy, the development by the end of World War II of effective long‐range airpower, missile technology, and atomic weapons simultaneously rendered defense more difficult and increased national capacity to threaten an adversary with massive suffering. Insightful observers like Bernard Brodie noted almost immediately the basic implications of these technological developments for American security policy. The Eisenhower administration's explicit incorporation of nuclear deterrence—“massive retaliation”—into U.S. defense planning in 1954 as part of its “New Look” in national security policy sharply accelerated the development of deterrence theory, principally by civilian analysts and scholars.
The early theorizing of the immediate postwar period was supplemented in the late 1950s and early 1960s by careful analyses by Brodie, Herman Kahn, William Kaufmann, Klaus Knorr, Thomas Schelling, Glenn Snyder, and Albert Wohlstetter, among others, who explored the problems of achieving credible commitment, assuring “second‐strike” capability, enhancing stability in situations of mutual vulnerability, using threats of limited and controlled retaliation to make nuclear deterrence credible even while American cities remained hostage, and employing arms control to enhance crisis management and arms race stability. This theorizing provided the blueprint for American nuclear strategy and arms control policy from the mid‐1960s until the administration of Ronald Reagan. With SDI and particularly with the end of the Cold War, the focus of U.S. nuclear policy shifted increasingly from the problem of deterrence to the problems of defense against limited nuclear attacks, as well as nuclear proliferation.
[See also Arms Control and Disarmament; Game Theory; Missiles; Nuclear Weapons; Nuclear War, Prevention of Accidental; Strategy: Fundamentals; Strategy: Nuclear Warfare Strategy.]
Bernard Brodie, Strategy in the Missile Age, 1959.
Thomas C. Schelling, Arms and Influence, 1966.
Glenn H. Snyder, Deterrence and Defense, 1961.
Alexander L. George and Richard Smoke, Deterrence in American Foreign Policy, 1974.
Robert Jervis, The Illogic of American Nuclear Strategy, 1984.
Edward Rhodes, Power and MADness: The Logic of Nuclear Coercion, 1989.
Ted Hopf, Peripheral Visions: Deterrence Theory and American Foreign Policy in the Third World, 1965–1990, 1994.
A theory that criminal laws are passed with well-defined punishments to discourage individual criminal defendants from becoming repeat offenders and to discourage others in society from engaging in similar criminal activity
Deterrence is one of the primary objects of the criminal law. Its primary goal is to discourage members of society from committing criminal acts out of fear of punishment. The most powerful deterrent would be a criminal justice system that guaranteed with certainty that all persons who broke the law would be apprehended, convicted, and punished, and would receive no personal benefit from their wrongdoing. However, it is unrealistic to believe that any criminal justice system could ever accomplish this goal, no matter how many law enforcement resources were dedicated to achieving it.
As a result, philosophers, criminologists, judges, lawyers, and others have debated whether and to what extent any criminal justice system actually serves as a deterrent. Deterrence requires the would-be criminal to possess some degree of reflective capacity before the crime is committed, at least enough reflection to consider the possible consequences of violating the law if caught.
Since many crimes are committed during "the heat of the moment" when an individual's reflective capacities are severely compromised, most observers agree that some crimes simply cannot be deterred. Individuals who commit crimes for the thrill of "getting away with it" and outwitting law enforcement officials probably cannot be deterred either. In fact, such individuals may only be tempted and encouraged by law enforcement claims of superior crime-prevention and crime-solving skills.
Deterrence ★★ 2000 (R)
Stagy one-room thriller set during the presidential campaign of 2008. Veep Walter Emerson (Pollak) became prez when the incumbent died—now he's campaigning for re-election. He's at a Colorado primary when a blizzard forces Emerson and his aides (as well as a TV crew) to take shelter in a small town diner. The diner's cable TV hookup reports an international crisis—Iraqi forces have invaded Kuwait and slaughtered American peacekeepers. So Emerson decides the thing to do is nuke Baghdad. Lots of pontificating. 101m/C VHS, DVD. Kevin Pollak, Timothy Hutton, Sheryl Lee Ralph, Sean Astin, Clotilde Courau, Badja (Medu) Djola, Mark Thompson; D: Rod Lurie; W: Rod Lurie; C: Frank Perl; M: Lawrence Nash Groupe.