How crazy was the Cold War doctrine of mutually assured destruction? Turns out, it was a surprisingly good idea.

Few concepts in military strategy are more puzzling than the notion of mutually assured destruction. More than frightening, it sounds irrational or even insane to plan for the total obliteration of society, an apparent contradiction of the very motivation for war. Why would anyone propose a strategy in which both sides lose? For all its disturbing and apparently illogical aspects, the doctrine of mutually assured destruction, or MAD as it came to be called, involved some of the best strategic thinkers of the 20th century, including presidents Dwight Eisenhower and John Kennedy, and Defense Secretary James Schlesinger. It reflected the uncertain times in which decisions were made, the novelty of the weapon and, quite simply, an inability to come up with anything better.

The origins of MAD date to the strategy of massive aerial bombardment developed in the Second World War. There was little doubt in the spring and summer of 1945 that the Allies would eventually prevail over the Japanese. The only question was how long it would take, how many lives would be lost and how much destruction would be wrought before the Imperial General Staff finally capitulated.

The strategy to defeat Japan took two parallel paths. First, the Allies waged a relentless campaign to destroy the Japanese war machine, both its troops and the industrial base that supplied them. Simultaneous American thrusts led by Fleet Admiral Chester Nimitz (from the east) and General of the Army Douglas MacArthur (from the south) were eliminating what remained of the Japanese army, navy and air force and were cutting off the Home Islands from much-needed oil, rubber and other vital resources. At the same time, under the direction of Maj. Gen. Curtis LeMay and General Tommy Power, a withering strategic bombing campaign was conducted to eliminate war industries, transportation networks, ports and anything else that might prolong the hostilities. But even the astonishing level of damage caused daily by hundreds of B-29 bombers was considered too slow a process to bring to terms a government that had vowed to defend its homeland to the last drop of Japanese blood. So, in parallel with attacks on industrial and military sites, American generals decided to bomb the civilian populations of major cities to destroy their morale and hence hasten an end to the war. It was within this context of purposeful and massive destruction that the decision to use the first atomic bombs was made. Lacking any new policy to govern the use of atomic weapons, they were simply inserted into the existing policy of mass aerial bombardment.

So different was the atomic bomb from all weapons that preceded it that even seasoned military officers refused at first to believe that a single explosion could cause such destruction. This is illustrated in an anecdote related by Harold Agnew, a crewmember on one of the planes involved in the Hiroshima attack and later director of Los Alamos Scientific Laboratory. Immediately after the cessation of hostilities, Agnew was assigned to Tinian, the island from which the attacks were launched, to brief visiting generals and admirals on the bomb and its effects. He used the box that held the plutonium core of the Nagasaki bomb, about the only thing left after the bombs had been dropped, as a prop. During one of Agnew’s talks the ranking general scowled and said, “Son, you may think that a city could be destroyed by what was in that box, but I don’t have to believe it,” after which he stood up and walked out.

To other officers, the fact that a single bomb could achieve a level of destruction that had previously required thousands of conventional bombs was only a quantitative distinction, a large but nevertheless understandable advance in military capability. More than 100,000 people had been killed in the firebombing of Tokyo, more than were killed in either of the atomic attacks, so the sheer level of destruction was not demonstrably different. Many military thinkers viewed atomic bombs as just another weapon in the arsenal, usable against any future enemy that might threaten the interests of the United States.

President Harry S. Truman thought otherwise, as has every U.S. president since. He saw nuclear weapons as a qualitative shift in warfare, a transition point from the past, when wars were frequent but survivable, to a future in which conflict might end civilization itself. Truman saw the use of nuclear weapons as a presidential decision and insisted their development remain in civilian hands. Congress created the Atomic Energy Commission in 1946 to oversee their development, and in the autumn of 1948, Truman formalized presidential authority over atomic weapons in a national security policy memorandum.

Immediately after the war there were discussions of international controls on nuclear weapons and the special forms of nuclear material—uranium and plutonium—that powered them. Some suggested putting them under the authority of the newly founded United Nations, to be part of an international peacekeeping force. Here Truman had the support of the Joint Chiefs of Staff, who saw the proliferation of nuclear weapons as the only threat to American military superiority.

Such utopian dreams were shattered as the Soviet Union rejected any form of international control over atomic energy, believing that such policies would only cement the superior position of the Western powers. In 1948 Soviet occupation forces in Germany cut off ground access to the American, British and French sectors of Berlin, hardening the divide that Winston Churchill had already described as an "Iron Curtain" separating East and West. That same year, Lt. Gen. LeMay, who had designed the campaign of massed air attacks on Japan, took command of the Strategic Air Command (SAC), which had been established two years earlier. SAC was responsible for all types of strategic bombing, but it quickly focused on the unique demands of nuclear warfare.

The American monopoly on nuclear weapons was broken a year later when, in 1949, the Soviets conducted their first nuclear test, well ahead of American intelligence estimates. It was no longer a question of whether the United States should have nuclear weapons, but how those weapons would affect an evolving geopolitical struggle between two irreconcilable political systems. Seeing the Soviet Union as the most likely adversary of the future, LeMay and his staff at SAC developed nuclear war plans that included attacks on warfighting industries, transportation networks and associated national infrastructure—a direct continuation of the approach he used against Japan. More than 200 critical targets were identified, including most of the major cities of the Soviet Union, and an intense manufacturing program was launched to supply the nuclear weapons required for such attacks.

At the same time, planners began to assess the effects of a Soviet strike on the United States by long-range bombers and commercial ships, the latter a harbinger of modern worries over terrorist weapons on ships. Tactics and weapons were developed to protect the United States from air and sea attack. The extraordinary destructive power of nuclear weapons meant that anything less than 100 percent success in intercepting them, a standard already considered unattainable, would result in catastrophe.

Pressure was also building for the consideration of nuclear options in otherwise conventional (i.e., non-nuclear) wars. When North Korea marched south on June 25, 1950, the Pentagon feared that all might be lost on the peninsula unless overwhelming force could be brought to bear, and quickly. With hardly enough troops to put up a delaying action, American generals argued for the use of atomic weapons as practical weapons of war, an equalizer against massed Chinese attacks. But a closer analysis revealed that there were few targets in Korea that could not be destroyed with conventional weapons, and wiser minds noted that the use of nuclear weapons against supporting Soviet or Chinese bases could trigger a global war that the United States was ill-prepared to fight. This was the first example of what would become a persistent dilemma in nuclear strategy: The risk of using nuclear weapons could easily outweigh their military benefit. They were not practical instruments of war like battleships and tanks. Their use could trigger an escalation of conflict that was far more destructive than the conflict they were designed to stop.

Nevertheless, LeMay and other military commanders continued to argue for a first strike against the Soviet Union, a “preventive war” that would settle the nuclear standoff once and for all. Truman was opposed to a first-strike option, but he did authorize contingency planning for war with the Soviet Union, including a fundamental change in how nuclear weapons were to be employed in a strategic conflict. Previously, the highest priority targets were military units and war industries. Now, a “counterforce” strategy was developed that put priority on the destruction of enemy nuclear weapons that could strike the United States. The goal was not to destroy the Soviet Union, but to prevent the Soviet Union from destroying the United States.

After the Russians demonstrated that they too had the atomic bomb, Truman realized that he could no longer unilaterally control nuclear technology. He authorized research into the full spectrum of atomic munitions, from those with yields of only a few tons intended for battlefield applications to megaton behemoths for strategic “city busting.” Edward Teller and conservative elements in the scientific community pushed tirelessly for the United States to start work on the hydrogen bomb. They argued that the Soviets were probably already working on their own “super” bomb and that the United States could not allow itself to be found flat-footed in an arms race. (Russian nuclear weapons designers gave the identical reason for their hydrogen bomb program—the Americans were already well along and the Soviet Union could not place itself at a strategic disadvantage.) The first American test of the hydrogen bomb, code named “Mike,” was conducted in November 1952, graphically demonstrating the almost limitless destructive potential of the H-bomb. It seemed practically impossible to halt the development of new types of atomic weaponry.

Dwight Eisenhower initiated his own review of nuclear weapons policy when he was sworn in as U.S. president in 1953. He understood from personal experience the tenuous position of the Western powers in Europe and saw nuclear weapons as the essential balancing factor to massive Soviet troop concentrations deployed on the German border. The United States could not afford to match the Soviets soldier for soldier and tank for tank; nuclear weapons provided a counter at much lower cost. As the Iron Curtain settled into place, American planners began to talk of containment of Soviet expansionism and perhaps an eventual rollback in which the occupied countries of Eastern Europe would be freed from communist domination. The notion of “deterrence” was refined to emphasize the role of nuclear weapons in preventing any provocative action on the part of the Soviets—any move that threatened the United States or its allies would bring a swift and devastating response.

Eisenhower was intensely engaged in all aspects of nuclear strategy, from what policy should govern the use of weapons to how nuclear science might be used for peaceful purposes. Reviving interest in international controls, he announced an “Atoms for Peace” program in his first year in office, which aimed to begin an international dialogue on the future of atomic energy. The stated goal of the program was to find ways to make the peaceful uses of the atom available to other countries while discouraging them from pursuing weapons. Bringing more countries into the nuclear fold was a calculated risk, all the more remarkable in that it occurred at the height of the Cold War.

While Eisenhower was holding out an olive branch, his secretary of state was brandishing a stick. On January 12, 1954, John Foster Dulles gave a speech to the Council on Foreign Relations in New York that outlined a new policy of massive retaliation to Soviet aggression. In an apparent abandonment of the idea of nuclear weapons as unique instruments of destruction, a National Security Council directive issued the previous year stated that “the United States will consider nuclear weapons to be available for use as other munitions,” effectively creating a policy of ambiguity that was designed to deter any potential adversary from attacking.

Dulles’ speech was actually penned by Eisenhower, who wanted to present a new approach to the world (Atoms for Peace) while maintaining a strong line against communism. He believed the Soviets did not want nuclear war any more than did the United States. He was especially concerned that such a war, once started, might be difficult or impossible to control. The first nuclear weapon used would inevitably lead to a second, a third and so on until there was a massive exchange of hydrogen bombs on each side. To prevent disastrous decisions from being made in the heat of an international crisis, he directed that a Single Integrated Operational Plan (SIOP) be developed to coordinate all nuclear warfighting by American forces. Previously, each military service had its own nuclear plans, some of which interfered with one another. Eisenhower forced the services to accept central planning and control of nuclear weapons, always under direct presidential authority, as a means of ensuring that they followed his strategic policy.

By 1955 the essential questions that would govern all future nuclear debate were firmly established. What is the role of nuclear weapons in fighting limited wars, and how can escalation to full-scale nuclear war be avoided? What is the role of defenses—antiaircraft missiles and ballistic missile defenses—in nuclear war planning? Finally, how can deterrence be maintained without provoking a nuclear war by stimulating an enemy first strike? The fundamental tension in nuclear policy has always been between a commitment to use nuclear weapons if absolutely necessary and a hope that their destructive potential will never be unleashed. To say that you would never use a weapon renders it ineffective as a deterrent to aggression—the adversary knows in advance that you will not shoot and acts accordingly. But to plan on using a nuclear weapon in anything other than the most extreme circumstances might trigger a war in which both sides would lose. Such contradictions have been a constant problem for nuclear planners.

A RAND Corporation study done in the early 1950s warned that American bomber bases were vulnerable to a Soviet first strike that could destroy most of our nuclear stockpile, rendering the United States incapable of mounting an effective counterstrike. Two solutions were implemented to deal with these threats. First, a new generation of surface-to-air missile batteries was placed around the country's borders to defend against incoming Soviet bombers. Second, a new emphasis was placed on the delivery of nuclear weapons by ballistic missiles, ensuring that American warheads could break through the formidable Soviet air defenses. This was the beginning of what came to be known as the "strategic triad" of bombers, land-based missiles in hardened silos, and sea-based ballistic missiles on submarines. Bombers could be controlled right up until the time they dropped their bombs, as opposed to missiles that, once launched, automatically flew to their targets. However, bombers could crash or be shot down. Land-based intercontinental missiles were not vulnerable to enemy air defenses and, in principle, could be controlled from Washington, but they were fixed in location and hence could be destroyed by a massive enemy first strike. Submarines offered the same advantages as land-based missiles and were virtually impossible to detect and destroy, meaning that some would survive to inflict a devastating counterstrike on the Soviet Union. But communication with submarines could be a problem in a time of crisis. Each leg of the triad had its advantages and disadvantages; together they represented an almost indestructible implementation of the theory of deterrence.

Thinking that restraints on nuclear testing were a way to moderate the development of new weapons, Eisenhower joined the Soviet Union in 1958 in a moratorium on all testing of nuclear weapons. He established the U.S. Disarmament Administration to develop and implement new arms control measures, a first step toward dismantling the massive arsenals that had already been built. It is remarkable that a former general took major steps to reduce what he saw as a growing nuclear threat, especially at a time of intense competition with, and distrust of, the Soviet Union.

Eisenhower’s policies for controlling the use of nuclear weapons in any future conflict still focused on a single massive attack against the Soviet Union. When President John Kennedy was briefed on the country’s nuclear war plans after taking office in 1961, he was astonished at their rigidity and destructiveness. Surely there must be something better than an “all or nothing” strategy that launched the full arsenal in one desperate bid for victory. Kennedy was appalled by the incredible devastation that would ensue from a nuclear exchange, including the projected hundreds of millions of civilian casualties. With Kennedy’s concurrence, Secretary of Defense Robert McNamara proposed a “no cities” strategy in which population centers would be avoided in favor of military targets, especially nuclear missiles that could threaten the United States. Recognizing that both sides had to adopt this new approach for it to have any value, he proposed discussions with the Soviets on the rules that might govern a future nuclear war. Unfortunately, the Soviet Union rejected the policy. A new SIOP was developed to implement what came to be known as “flexible response,” a limited use of nuclear weapons well short of the massive exchanges of previous plans. The United States would use nuclear weapons only to achieve urgent military objectives. The new SIOP included a reserve force, ending what was essentially a policy of launching everything in one massive strike against the enemy. Even after a “military exchange,” there would be sufficient weapons left over to destroy Soviet society, hence maintaining an assured destruction element to deterrence.

The magnitude of destruction that was “assured” by this reserve force was breathtaking: McNamara projected that a third of the Soviet population and half of its industrial capability would be eliminated by an American counterstrike, a potent deterrent to any rash act on the part of the Kremlin. With these frightening statistics in mind, McNamara tried to stem the unchecked arms race by insisting that the United States needed only enough weapons to accomplish preestablished objectives—there was a point at which “enough was enough.”

The Soviet Union broke the moratorium on nuclear testing in 1961 with a rapid series of explosions at their test site on Novaya Zemlya, an island north of the Arctic Circle, and at a desert test range in Kazakhstan. The United States quickly followed suit, and the arms race was back in earnest. However, negotiations aimed at limiting the environmental damage caused by nuclear explosions continued, and just before President Kennedy was assassinated in 1963, the Limited Test Ban Treaty entered into force, an agreement that outlawed nuclear testing in the atmosphere, in the ocean or in space. Only underground tests were to be permitted, a compromise that allowed continued arms development with much less radioactive fallout.

By the early 1960s, the nuclear arsenals of the two superpowers had reached staggering levels, with many thousands of weapons on each side. B-52 bombers flew circular patterns just outside Soviet airspace, ready to pounce upon preassigned targets on receipt of a properly coded message from the White House. More bombers sat at the ends of runways, engines running, in case a Soviet strike was detected. Missile crews stood alert 24 hours a day, and missile-carrying submarines remained submerged and undetected for months at a time. Nuclear warheads appeared on almost every conceivable weapons system, including naval torpedoes, field artillery and even backpack-sized demolition munitions. Technology was developing so rapidly that a weapon remained in the stockpile only a few years before it was replaced by a newer and more efficient model. Higher yield was not the only objective, as safety and security were major concerns in new weapons.

The closest the world ever came to nuclear Armageddon was the Cuban Missile Crisis in October 1962. Alarmed by a clear American lead in both the number and quality of its ballistic missiles, and frustrated at the recent placement of U.S. missiles in Turkey, Soviet Premier Nikita Khrushchev reached agreement with the Castro government to put Russian missiles in Cuba. While only a relatively small number of weapons were involved, their proximity to the American mainland meant that all the major cities on the eastern seaboard could have been destroyed before America could mount a response. In a rare break from secrecy, the U.S. government showed overhead spy photos of Soviet installations in Cuba and suspicious cargoes on incoming ships in an attempt to turn world opinion against the Soviets. Intense negotiations were conducted around the clock, some in the open and some in private messages between Washington and Moscow, ending in the agreement that the Soviets would withdraw their weapons from Cuba if the United States withdrew ours from Turkey. Emotions ran high on both sides. Whether the world averted nuclear catastrophe by diplomatic skill or simple luck is left for historians to debate.

Originally published in the May 2009 issue of Military History.