War (part 3)

The control of war

The international environment within which states and the people within them operate is regarded by many theorists as the major factor determining the occurrence and nature of wars. War remains possible as long as individual states seek to ensure self-preservation and promote their individual interests and—in the absence of a reliable international agency to control the actions of other states—rely on their own efforts. It is no accident that reforms of the international system figure prominently in many prescriptions for the prevention of war. Whereas the reform of human propensities or of the state is bound to be a long drawn-out affair if it is at all possible, relatively straightforward partial reforms of the international system may produce significant restraints upon resorting to war, and a thorough reform could make war impossible.
Some theorists, being more optimistic about the nature of states, concentrate upon the removal of the fear and suspicion of other states that is characteristic of the present as well as of all historical political systems; others, being less optimistic, think mainly of possible controls and restraints upon the behaviour of states. The underlying reasoning of both parties is generally similar. If individual states in competitive situations are governed by a short-term conception of their interests, acute conflicts between them will occur and will show a strong tendency to escalate. Thus, one state erects a tariff barrier to protect its industry against the competition of a trade partner, and the partner retaliates, the retaliatory interaction repeating itself until the two countries find themselves in a trade war. Armaments races show a similar tendency to escalate, particularly in an age of rapid technological change. The economic and scientific efforts necessary to avoid falling behind rivals in the invention and development of rapidly improving weapons of mass destruction have already reached unprecedented heights. And yet neither trade wars nor arms races necessarily end in violent conflict; evidently some restraining and inhibiting factors are at work that prevent an automatic escalation. Much of the theory of war concerns itself with the identification, improvement, and development of these restraining factors.

Diplomacy

The outcome of starkly competitive behaviour leading to wars is clearly against the interests of states, and it is rational for them to seek more desirable outcomes. If competitive behaviour is dangerous, theorists seek alternative methods of cooperative behaviour that would not jeopardize the interests of the state by exposing it to the possibly less cooperative behaviour of others. Some theorists concentrate upon improving the rationality of the decision making of individual states through a better understanding of the international environment, through eliminating misperceptions and irrational fears, and through making clear the full possible costs of engaging in war and the full destructiveness of the all-out war that is possible in our age.
The relative paucity of wars and their limited nature throughout the century following the Napoleonic Wars (1815–1914) stirred great theoretical interest in the nature of the balance-of-power system of that period—that is, in the process by which the power of competing groups of states tended toward a condition of equilibrium. Contributing to the successful operation of the balance-of-power system of the 19th century were relatively slow technological change, great diversionary opportunities for industrial and colonial expansion, and the ideological and cultural homogeneity of Europe. Pursuit of a balance of power is a way of conducting foreign policy that is perhaps less prone to war than other types of policy because, instead of indiscriminately increasing their power, states increase it only moderately, so as not to provoke others; and instead of joining the strongest, they join the weaker side in order to ensure balance. States in a balance-of-power system must, however, be ready to abide by constraints upon their behaviour in order to ensure stability of the system.
The application to international relations of a branch of mathematics—game theory—that analyzes the strategy of conflict situations has provided a new tool of analysis. In state interaction, as in any game situation, one side's strategy generally depends upon that side's expectations of the other side's strategy. If all sides in a game are to maximize their chances of a satisfactory outcome, it is necessary that some rational rules of behaviour be conceptualized and agreed upon, and this idea of a set of rational rules can be applied to competing states in the international system. Game theorists distinguish antagonistic situations called zero-sum games, in which one state's gain can be only at the expense of another state because the “payoff” is fixed. Even then a mutually acceptable distribution of gains can be rationally reached on the basis of the “minimax” principle—the party in a position of advantage satisfies itself with the minimum acceptable gain because it realizes that the other party, in a position of disadvantage, would yield on the basis of its possible minimum loss but would violently oppose a distribution even more to its detriment. In other situations, called non-zero-sum games, the payoff is not constant but can be increased by a cooperative approach; the gain of one participant is not at the cost of another. The contestants, however, have to agree about the distribution of the gain, which is the product of their cooperation.
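The minimax logic can be made concrete with a small computation. The sketch below (in Python, with an invented payoff matrix) shows a two-state, zero-sum interaction in which each side ranks its strategies by worst-case outcome; when the row player's guaranteed floor meets the column player's guaranteed ceiling, the game has a stable solution of the kind the minimax principle describes.

```python
# A minimal sketch of the minimax principle for a two-state, zero-sum
# interaction. The payoff matrix is invented: rows are strategies of
# state A, columns are strategies of state B, and each entry is A's
# gain, which is exactly B's loss.

payoffs = [
    [4, 2, 3],
    [1, 0, -1],
    [3, 1, 2],
]

# A chooses the row whose worst case (row minimum) is best ...
maximin = max(min(row) for row in payoffs)

# ... while B chooses the column whose best case for A (column maximum)
# is smallest, limiting its own maximum loss.
minimax = min(max(row[j] for row in payoffs)
              for j in range(len(payoffs[0])))

print(maximin, minimax)  # 2 2: the values coincide at a "saddle point",
# a distribution both sides can rationally accept, since neither side
# can improve its guarantee by deviating unilaterally.
```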
The theory of games is the foundation of theories of bargaining that analyze the behaviour of individual states in interaction. Diplomacy based upon such theories is less likely to lead to war. Policymakers pursuing such strategies will conduct conflicts of the zero-sum type so that war is avoided. More than that, with some skill, such situations can be transformed into the non-zero-sum type by introducing additional benefits accruing from cooperation in other interactions and also, more generally, by eliminating the likelihood of war and, consequently, by reducing the costs of preparing for one.

Regional integration

Because wars within states have been eliminated through the establishment of suitable political structures, such as central governments that hold a monopoly of coercive power, many theories concentrate upon the establishment of parallel structures within the international context. Regional integration (cooperation in economic, social, and political affairs, as, for example, within the European Union) and the establishment of security communities (such as the North Atlantic Treaty Organization) have made much greater advances than attempts at the reform of the entire global international system.
Because conflicts among neighbours tend to be frequent, regional integration is an important advance toward reducing the incidence of war. Even if it were to become generally successful, however, regional integration would simply shift the problem of war to a different level: there would be fewer possibilities of war because intraregional conflicts would be contained, but interregional conflicts could still give rise to wars of much greater scope and severity. The phenomenon of war must, therefore, be analyzed at the universal level.

International law

Some of the most influential thinking about war and the international system has come from specialists in international law. All of them postulate that there exists an international society of states that accepts the binding force of some norms of international behaviour. These norms are referred to as international law, although they differ fundamentally from municipal law because no sovereign exists who can enforce them. Most international lawyers realistically accept that international law consequently operates among rather than above states: according to legal doctrine it is binding on states but unenforceable.
International law concerns itself largely with two aspects of war: its legality and its regulation. As far as the legality of war is concerned, there arose in the 20th century a general consensus among states, expressed in several international treaties, including the Covenant of the League of Nations, the Kellogg-Briand Pact of 1928, and the Charter of the United Nations, that resort to armed force, except in certain circumstances such as self-defense, is illegal. Such a legalistic approach to the prevention of war, however, remains futile in the absence of a means of enforcement. The enforcement provisions of the United Nations Charter, which entail the application of military and economic sanctions, have never been applied successfully, owing to political disagreement among the major powers. This underlines the fact that legal norms, to be effective, must reflect an underlying political reality.

The United Nations

The United Nations is charged with the maintenance of international peace and security. The several approaches to peace outlined in its Charter and developed in its practice are based upon and clearly reflect the cumulative development of the relevant theories of war.
Drawing heavily upon the experience of the League of Nations, the Charter develops three interrelated approaches: first, pacific settlement of disputes, which would leave nations with nothing to fight about; second, collective security, which would confront aggressors with too much to fight against; and third, disarmament, which would deprive them of anything substantial with which to fight.

Peaceful settlement of disputes

Pacific settlement of disputes is based upon the assumption that war is primarily a technique for settling disputes, although it can, of course, also serve other purposes, such as allaying fears and seeking status. A further assumption is that war frequently comes about because decision makers are unaware of the possibility of settling disputes peacefully to the mutual advantage of both sides, whether through mere ignorance, pride, lack of imagination, or selfish and cynical leadership. It is thus possible for international organizations to contribute to the prevention of wars by devising and institutionalizing alternative, peaceful techniques for the settlement of disputes and by persuading states to use them.
The scope of this approach is limited, for states are notoriously reluctant to abide by impartial findings on matters they regard as being of vital importance. Hence, what the procedures really offer is a means of slowing down the progression of a dispute toward war, giving reason a chance to prevail.

Collective security

Collective security is an approach to peace involving an agreement by which states agree to take collective action against any state defined as an aggressor. Leaving aside the problems of settling disputes or enforcing law or satisfying justice, it concentrates upon forestalling violence by bringing to bear an overwhelmingly superior international force against any aggressor. Although collective security, in somewhat different forms, played a prominent part in the League of Nations Covenant and is embodied in the United Nations Charter, it has completely failed in both cases. Failing an international government capable of ultimately determining the issues, nations have not managed to agree on an unequivocal definition of aggression, have not in practice accepted the principle that aggression must be acted against independently of the identity of the perpetrator, and, therefore, have not established the international collective security force envisaged in the Charter.

Disarmament

Disarmament and limitation of armaments are based upon the theory that states are inclined to strive for dominance in arms over any potential rivals and that this leads to arms races that tend to end in war. The major besetting sin of this theory is that it often tends to confuse cause with effect. Although arms races develop momentum of their own, they are themselves the result of political tensions leading to war. In short, it is the tensions that cause war, not the arms races. To hold otherwise is to mistake a symptom for a cause. Hence, reducing the levels of armaments does not necessarily reduce these tensions. Furthermore, it is the instability of strategic balances, rather than their level, that leads to war; agreements about disarmament or limitation of armaments may easily disturb the existing precarious balance and, therefore, be actually conducive to war.

Limiting conflict

As these major approaches to peace envisaged in its Charter have not proved very fruitful, the United Nations has developed two new procedures aiming at the limitation of wars. First, “preventive diplomacy,” largely comprising the diplomatic initiatives of the secretary-general and the stationing of peacekeeping forces, has served to contain local conflicts and to prevent escalation, especially the involvement of the superpowers. Second, although the General Assembly's recommendations have no legal binding force, they have become increasingly influential, for the assembly has become an important agency for what has been called the collective legitimization of state policies. Resort to war becomes more costly when a state is faced with the prospects of a collective condemnation. This new restraint upon war does not, however, act upon conflicts that the assembly may favourably regard as wars of colonial liberation. Nor could the assembly's disapproval be relied upon to deter states from waging war in pursuit of an interest they deemed to be truly vital.

World government

Both the shortcomings and the limited practicability of all the approaches to the elimination of war through the reform of the international system have driven many thinkers to accept the idea that war can be abolished only by a full-scale world government. In this view, no halfway solution between the relative anarchy of independent, individual states and a world government with the full paraphernalia of legislative powers and an overwhelming military force would provide an international framework stable enough to assure nations that wars will not break out and thus to deter them from behaviour that is often conducive to war. In an age faced with the danger of a war escalating into a general extermination of mankind, the central importance of preserving peace is obvious and is generally accepted. But here the thinkers divide. Some press on from this analysis to the logical conclusion that mankind must and, therefore, will establish a world government, and they advance ideas on how best to proceed in this direction. Others regard world government as completely utopian, no matter how logical and desirable it may be. Yet, in terms of actual policies, the adherents of the two schools do not necessarily divide. Whether or not they believe that world government is attainable, they agree that the complex phenomenon of war represents a potential calamity of such magnitude that all theorists must endeavour to understand it and to apply their understanding to the prevention and mitigation of war with all the means at their disposal.

War (part 2)

The Causes of War

Contemporary theories of the causes of war divide roughly into two major schools. One attributes war to certain innate biological and psychological factors or drives; the other attributes it to certain social relations and institutions. Both schools include optimists and pessimists concerning the preventability of war.

Biological theories

Theories centring upon man's innate drives are developed by ethologists, who draw analogies from animal behaviour, and also by psychologists and psychoanalysts.

Ethology

Ethologists start with the persuasive argument that study of animal warfare may contribute toward an understanding of war as employed by man. The behaviour of monkeys and apes in captivity and the behaviour of young children, for example, show basic similarities. In both cases it is possible to observe that aggressive behaviour usually arises from several drives: rivalry for possession, the intrusion of a stranger, or frustration of an activity. The major conflict situations leading to aggression among animals, especially those concerning access of males to females and control of a territory for feeding and breeding, are usually associated with patterns of dominance.
The analogies of animal to human behaviour drawn by many ethologists, however, are severely questioned by their more restrained colleagues as well as by many social scientists. The term “aggression,” for example, is imprecisely and inconsistently used, often referring merely to the largely symbolic behaviour of animals involving such signals as grimaces.
Observed animal behaviour can be regarded as a possible important source of inspiration for hypotheses, but these must then be checked through the study of actual human behaviour. As this has not yet been adequately done, the hypotheses advanced have little foundation and are merely interesting ideas to be investigated. Further, human behaviour is not fixed to the extent that animal behaviour is, partly because man rapidly evolves different patterns of behaviour in response to environmental factors, such as geography, climate, and contact with other social groups. The variety of these behaviour patterns is such that they can be used on both sides of an argument concerning, for example, whether or not men have an innate tendency to be aggressive.
Two particularly interesting subjects studied by ethologists are the effects of overcrowding on animals and animal behaviour regarding territory. The study of overcrowding is incomplete, and the findings that normal behaviour patterns tend to break down in such conditions and that aggressive behaviour often becomes prominent are subject to the qualification that animal and human reactions to overcrowding may be different. Ethologists have also advanced plausible hypotheses concerning biological means of population control through reduced fertility that occurs when animal populations increase beyond the capacity of their environment. Whether such biological control mechanisms operate in human society, however, requires further investigation.
Findings concerning the “territorial imperative” in animals—that is, the demarcation and defense against intrusion of a fixed area for feeding and breeding—are even more subject to qualification when an analogy is drawn from them to human behaviour. The analogy between an animal territory and a territorial state is obviously extremely tenuous. In nature the territories of members of a species differ in extent but usually seem to be provided with adequate resources, and use of force in their defense is rarely necessary, as the customary menacing signals generally lead to the withdrawal of potential rivals. This scarcely compares with the sometimes catastrophic defense of the territory of a national state.

Psychology

One school of theorists has postulated that the major causes of war can be found in man's psychological nature. Such psychological approaches range from very general, often merely intuitive assertions regarding human nature to complex analyses utilizing the concepts and techniques of modern psychology. The former category includes a wide range of ethical and philosophical teaching and insights, including the works of such figures as St. Augustine and the 17th-century Dutch philosopher Spinoza.
Modern writers utilizing psychological approaches emphasize the significance of psychological maladjustments or complexes and of false, stereotyped images held by decision makers of other countries and their leaders. Some psychologists posit an innate aggressiveness in man. Others concentrate upon public opinion and its influence, particularly in times of tension. Others stress the importance of decision makers and the need for their careful selection and training. Most believe that an improved social adjustment of individuals would decrease frustration, insecurity, and fear and would reduce the likelihood of war. All of them believe in the importance of research and education. Still, the limitations of such approaches derive from their very generality. Also, whether the psychological premises are optimistic or pessimistic about the nature of man, one cannot ignore the impact upon human behaviour of social and political institutions that give man the opportunities to exercise his good or evil propensities and to impose restraints upon him.

Social theories

Whereas psychological explanations of war contain much that seems to be valid, they are insufficient because man behaves differently in different social contexts. Hence, many thinkers have sought their explanations in these contexts, focusing either on the internal organization of states or on the international system within which these operate. The most voluminous and influential theories attributing war to the nature of the state fall into two broad streams, which can be loosely called liberal and socialist.

Liberal analyses

The early or classical liberals of the 18th and 19th centuries distinguished three basic elements in their analysis—individuals, society, and the state—and regarded the state as the outcome of the interaction of the former two. They assumed that society is self-regulating and that the socioeconomic system is able to run smoothly with little interference from the government. Economy, decentralization, and freedom from governmental control were the classical liberal's main concerns, as shown particularly clearly in the writings of John Stuart Mill. They accepted the necessity of maintaining defense but postulated the existence of a basic harmony of interests among states, which would minimize the incidence of wars. Economic cooperation based upon an international division of labour and upon free trade would be in the interests of everybody—commerce would be the great panacea, the rational substitute for war.
In explanation of wars that did occur, however, liberals emphasized a variety of factors. First, they focused on autocratic governments, which were presumed to wage war against the wishes of peacefully inclined people. It thus became a major tenet of liberal political philosophy that war could be eliminated by introducing universal suffrage because the people would surely vote out of office any belligerently inclined government. From the early American pamphleteer Thomas Paine onward, a major school of liberals supported republicanism and stressed the peaceful impact of public opinion. Although they could not agree about actual policies, they stressed certain general ideas concerning relations between states, paralleling their laissez-faire ideas of the internal organization of the state with ideas of a minimum amount of international organization, use of force strictly limited to repelling aggression, the importance of public opinion and of democratically elected governments, and rational resolution of conflicts and disputes. Later in the course of the 19th century, however, and especially after World War I, liberals began to accept the conclusion that an unregulated international society did not automatically tend toward peace and advocated international organization as a corrective.

Socialist analyses

Whereas liberals concentrated on political structures, regarding them as of primary importance in determining the propensity of states to engage in war, socialists turned to the socioeconomic system of states as the primary factor. Early in the 20th century the two streams did to some extent converge, as evidenced by the fact that the English radical liberal John Hobson explained wars in terms later adopted by Lenin.
Karl Marx attributed war not to the behaviour of states but to the class structure of society. To him wars occurred not as an often voluntary instrument of state policy but as the result of a clash of social forces. To Marx the state was merely a political superstructure; the primary, determining factor lies in the capitalist mode of production, which leads to the development of two antagonistic classes: the bourgeoisie and the proletariat. The bourgeoisie controls governmental machinery in its own interests. In its international relations, the capitalist state engages in wars because it is driven by the dynamism of its system—the constantly growing need for raw materials, markets, and supplies of cheap labour. The only way to avoid war is to remove its basic cause, by replacing capitalism with socialism, thus abolishing both class struggle and states. The Marxist doctrine, however, gave no clear guidance about the interim period before the millennium is reached; and the international solidarity of the proletariat proved a myth when war broke out in 1914, facing the European Social Democratic parties with the problem of adopting an attitude to the outbreak of the war. The Second International of working-class parties had repeatedly passed resolutions urging the working classes to bring pressure upon their respective governments to prevent war, but, once war had broken out, each individual party chose to regard it as defensive for its own state and to participate in the war effort. This was explained by Lenin as being due to a split in the organization of the proletariat that could be overcome only through the activity of a rigidly organized revolutionary vanguard.
Socialists in the West turned increasingly, although in varying degrees, to revisionist interpretations of Marxism and returned to their attempts to revise socioeconomic structures through evolutionary constitutional processes, seeing this as the only possible means of preventing wars. In the Soviet Union the socialist theory of war changed as the new communist regime responded to changes in circumstances. Soviet theoreticians distinguished three major types of war: between capitalist states, between capitalist and socialist states, and colonial wars of liberation. The internecine wars among capitalist states were supposed to arise from capitalist competition and imperialist rivalries, such as those that led to the two world wars. They were desirable, for they weakened the capitalist camp. A war between capitalist and socialist states was one that clearly expressed the basic principle of class struggle and was, therefore, one for which the socialist states must prepare. Finally, wars of colonial liberation could be expected between subjugated people and their colonial masters.
The weakness of the theory was that the two major expected types of war, the intracapitalist and the capitalist-socialist, did not materialize as frequently as Soviet theoreticians had predicted. Further, the theory failed to analyze adequately the situation in the Soviet Union and in the socialist camp. Even in communist countries, nationalism seems to have proved more powerful than socialism: “national liberation” movements appeared and had to be forcibly subdued in the Soviet Union, despite its communist regime. Also, war between socialist states was not unthinkable, as the doctrine indicated: only the colossal preponderance of Soviet forces prevented a full-scale war in 1956 against Hungary and in 1968 against Czechoslovakia; war between the Soviet Union and the People's Republic of China was a serious possibility for two decades after the Sino-Soviet split of the early 1960s; and armed conflict erupted between China and Vietnam after the latter country became the most powerful in Southeast Asia. Finally, the theory did not provide for wars of liberation against socialist states, such as that conducted by the Afghan mujahideen against the Soviet Union from 1979 to 1989.

Nationalism

Many theories claim or imply that wars result ultimately from the allegiance of men to nations and from the intimate connection between the nation and a state. This link between the nation and the state is firmly established by the doctrine of national self-determination, which has become in the eyes of many the major basis of the legitimacy of states and the major factor in their establishment and breakup. It was the principle on which the political boundaries of eastern Europe and the Balkans were arranged after World War I and became the principal slogan of the anticolonial movement of the 20th century, finding expression in Chapter I, article 1, of the Charter of the United Nations in the objective of “self-determination of peoples,” as well as in the more specific provisions of Chapters XI and XII. It is this intimate link between nationalism and statehood that renders them both so dangerous. The rulers of a state are ultimately governed in their behaviour by what is loosely summed up as the “national interest,” which occasionally clashes directly with the national interests of other states.
The ideal of the nation-state is never fully achieved. In no historical case does one find all members of a particular nation gathered within one state's boundaries. Conversely, many states contain sizable national minorities. This lack of full correlation has frequently given rise to dangerous tensions that can ultimately lead to war. A government inspired by nationalism may conduct a policy aiming at the assimilation of national minorities, as was the general tendency of central and eastern European governments in the interwar period; it may also attempt to reunite the members of the nation living outside its boundaries, as Adolf Hitler did. National groups that are not in control of a state may feel dissatisfied with its regime and claim self-determination in a separate state, as demonstrated in the attempt to carve Biafra out of Nigeria and the separation of Bangladesh from Pakistan.
There is no rational basis for deciding on the extent to which the self-determination principle should be applied in allowing national minorities to break away. As a rule, the majority group violently opposes the breakaway movement. Violent conflicts can ensue and, through foreign involvement, turn into international wars. No suitable method has been found for divorcing nationalism from the state and for meeting national demands through adequate social and cultural provisions within a larger unit. Such an attempt in the Austro-Hungarian Empire before its dissolution in World War I failed. Even the Soviet Union was not permanently successful in containing its large proportion of national minorities.
Nationalism not only induces wars but, through the severity of its influence, makes compromise and acceptance of defeat more difficult. It thus tends to prolong the duration and increase the severity of wars. Possibly, however, this is the characteristic only of new, immature nationalisms, for nationalism has ceased to be a major cause of conflict and war among the nations of western Europe.
Nationalism is but one form of ideology: in all ages people seem to develop beliefs and try to proselytize others. Even within particular ideological groups, schisms result in conflicts as violent as those between totally opposed creeds, and heretics are often regarded as more dangerous and hostile than opponents. As long as individual states can identify themselves with explosive differences in beliefs, the probability of a war between states is increased, and its intensity is likely to be greater.

Special-interest groups

Whereas some theories of war regard the state as an undifferentiated whole and generalize about its behaviour, other theorists are more sociologically oriented and focus on the roles played within the state by various special-interest groups.
A distinction is made by these theorists between the great mass of people and the groups directly involved in or influential upon government. The people, about whose attitudes adequate knowledge is lacking, are generally assumed to be taken up with their daily lives and to be in favour of peace. The influential groups, who are directly involved in external affairs and, hence, in wars, are the main subject of analysis. Warlike governments dragging peace-loving people into international conflict is a recurrent theme of both liberal and socialist analyses of war. Some writers have gone to the length of postulating a continuous conspiracy of the rulers against the ruled that can be traced to prehistoric times, when priests and warriors combined in the first state structures. Most writers, however, narrow the field and seek an answer to the question of why some governments are more prone to engage in war than others, and they generally find the answer in the influence of important interest groups that pursue particular and selfish ends.
The chief and most obvious of such groups is the military. Military prowess was a major qualification for political leadership in primitive societies; the search for military glory as well as for the spoils of victory seems to have been one of the major motivations for war. Once the military function became differentiated and separated from civilian ones, a tension between the two became one of the most important issues of politics. The plausible view has generally been held that the military strive for war, in which they attain greater resources and can satisfy their status seeking and, sometimes, also an aspiration for direct and full political power. In peacetime the military are obviously less important, are denied resources, and are less likely to influence or attain political power directly. At the same time, a second, although usually subsidiary, consideration of the military as a causal agent in war holds that an officer corps is directly responsible for any fighting and is thus more aware of its potential dangers for its members and for the state as well. Although intent on keeping the state in a high state of preparedness, the military may be more cautious than civilians about engaging in war. It is often held, however, that increased military preparedness may result in increased tensions and thus indirectly lead to the outbreak of war.
Closely allied are theories about groups that profit from wars economically—capitalists and the financiers, especially those involved in industries catering to war. All these play a central part as the villains of the piece in socialist and liberal theories of war, and even those not subscribing to such theories do not deny the importance of military-industrial complexes in countries in which large sectors of the economy specialize in war supplies. But, although industrialists in all the technologically advanced systems are undoubtedly influential in determining such factors as the level of armaments to be maintained, it is difficult to assume that their influence is or could be decisive when actual questions concerning war or peace are being decided by politicians.
Finally, some scientists and technologists constitute a new, much smaller, but important group with special interests in war. To some extent one can generalize about them, although the group is heterogeneous, embracing as it does nuclear scientists, space researchers, biologists and geneticists, chemists, and engineers. If they are involved in defense work, they all share the interest of the military in securing more resources for their research: without their military applications, for example, neither nuclear nor space research would have gone ahead nearly as fast as it has. War, however, does not enhance the status and standing of scientists; on the contrary, they come under the close control of the military. They also usually have peaceful alternatives to military research, although these may not be very satisfactory or ample. Consequently, although modern war technology depends heavily upon scientists and although many of them are employed by governments in work directly or indirectly concerned with this technology, scientists as a group are far from being wedded to war. On the contrary, many of them are deeply concerned with the mass destruction made possible by science and participate in international pacifist movements.

War (part 1)

Introduction

War, in the popular sense, is a conflict among political groups involving hostilities of considerable duration and magnitude. In the usage of social science, certain qualifications are added. Sociologists usually apply the term to such conflicts only if they are initiated and conducted in accordance with socially recognized forms. They treat war as an institution recognized in custom or in law. Military writers usually confine the term to hostilities in which the contending groups are sufficiently equal in power to render the outcome uncertain for a time. Armed conflicts of powerful states with isolated and powerless peoples are usually called pacifications, military expeditions, or explorations; with small states, they are called interventions or reprisals; and with internal groups, rebellions or insurrections. Such incidents, if the resistance is sufficiently strong or protracted, may achieve a magnitude that entitles them to the name “war.”
In all ages war has been an important topic of analysis. In the latter part of the 20th century, in the aftermath of two world wars and in the shadow of nuclear, biological, and chemical holocaust, more was written on the subject than ever before. Endeavours to understand the nature of war, to formulate some theory of its causes, conduct, and prevention, are of great importance, for theory shapes human expectations and determines human behaviour. The various schools of theorists are generally aware of the profound influence they can exercise upon life, and their writings usually include a strong normative element, for, when accepted by politicians, their ideas can assume the characteristics of self-fulfilling prophecies.
The analysis of war may be divided into several categories. Philosophical, political, economic, technological, legal, sociological, and psychological approaches are frequently distinguished. These distinctions indicate the varying focuses of interest and the different analytical categories employed by the theoretician, but most of the actual theories are mixed because war is an extremely complex social phenomenon that cannot be explained by any single factor or through any single approach.

Evolution of theories of war

Reflecting changes in the international system, theories of war have passed through several phases in the course of the past three centuries. After the ending of the wars of religion, about the middle of the 17th century, wars were fought for the interests of individual sovereigns and were limited both in their objectives and in their scope. The art of maneuver became decisive, and analysis of war was couched accordingly in terms of strategies. The situation changed fundamentally with the outbreak of the French Revolution, which increased the size of forces from small professional to large conscript armies and broadened the objectives of war to the ideals of the revolution, ideals that appealed to the masses who were subject to conscription. In the relative order of post-Napoleonic Europe, the mainstream of theory returned to the idea of war as a rational, limited instrument of national policy. This approach was best articulated by the Prussian military theorist Carl von Clausewitz in his famous classic On War (1832–37).
World War I, which was “total” in character because it resulted in the mobilization of entire populations and economies for a prolonged period of time, did not fit into the Clausewitzian pattern of limited conflict, and it led to a renewal of other theories. These no longer regarded war as a rational instrument of state policy. The theorists held that war, in its modern, total form, if still conceived as a national state instrument, should be undertaken only if the most vital interests of the state, touching upon its very survival, are concerned. Otherwise, warfare serves broad ideologies and not the more narrowly defined interests of a sovereign or a nation. Like the religious wars of the 17th century, war becomes part of “grand designs,” such as the rising of the proletariat in communist eschatology or the Nazi doctrine of a master race.
Some theoreticians have gone even further, denying war any rational character whatsoever. To them war is a calamity and a social disaster, whether it is inflicted by one nation upon another or conceived of as afflicting humanity as a whole. The idea is not new—in the aftermath of the Napoleonic Wars it was articulated, for example, by Tolstoy in the concluding chapter of War and Peace (1865–69). In the second half of the 20th century it gained new currency in peace research, a contemporary form of theorizing that combines analysis of the origins of warfare with a strong normative element aiming at its prevention. Peace research concentrates on two areas: the analysis of the international system and the empirical study of the phenomenon of war.
World War II and the subsequent evolution of weapons of mass destruction made the task of understanding the nature of war even more urgent. On the one hand, war had become an intractable social phenomenon, the elimination of which seemed to be an essential precondition for the survival of mankind. On the other hand, the use of war as an instrument of policy was calculated in an unprecedented manner by the nuclear superpowers, the United States and the Soviet Union. War also remained a stark but rational instrumentality in certain more limited conflicts, such as those between Israel and the Arab nations. Thinking about war, consequently, became increasingly more differentiated because it had to answer questions related to very different types of conflict.
Clausewitz cogently defines war as a rational instrument of foreign policy: “an act of violence intended to compel our opponent to fulfill our will.” Modern definitions of war, such as “armed conflict between political units,” generally disregard the narrow, legalistic definitions characteristic of the 19th century, which limited the concept to formally declared war between states. Such a definition includes civil wars but at the same time excludes such phenomena as insurrections, banditry, or piracy. Finally, war is generally understood to embrace only armed conflicts on a fairly large scale, usually excluding conflicts in which fewer than 50,000 combatants are involved.

Army

An army is a large organized force armed and trained for war, especially on land. The term may be applied to a large unit organized for independent action, or it may be applied to a nation's or ruler's complete military organization for land warfare.
Throughout history, the character and organization of armies have changed. Social and political aspects of nations at different periods resulted in revision in the makeup of armies. New weapons influenced the nature of warfare and the organization of armies. At various times armies have been built around infantry soldiers or mounted warriors or men in machines. They have been made up of professionals or amateurs, of mercenaries fighting for pay or for plunder, or of patriots fighting for a cause. Consideration of the development of armies must be made in the light of the times in which the particular army was forged and the campaigns that it fought.

ALCM, SLCM And GLCM

By 1972, constraints placed on ballistic missiles by the SALT I treaty prompted U.S. nuclear strategists to think again about using cruise missiles. There was also concern over Soviet advances in antiship cruise missile technology, and in Vietnam remotely piloted vehicles had demonstrated considerable reliability in gathering intelligence information over previously inaccessible, highly defended areas. Improvements in electronics—in particular, microcircuits, solid-state memory, and computer processing—presented inexpensive, lightweight, and highly reliable methods of solving the persistent problems of guidance and control. Perhaps most important, terrain contour matching, or Tercom, techniques, derived from the earlier Atran, offered excellent en route and terminal-area accuracy.
Tercom used a radar or photographic image from which a digitized contour map was produced. At selected points in the flight known as Tercom checkpoints, the guidance system would match a radar image of the missile's current position with the programmed digital image, making corrections to the missile's flight path in order to place it on the correct course. Between Tercom checkpoints, the missile would be guided by an advanced inertial system; this eliminated the need for constant radar emissions, making electronic detection extremely difficult. As the flight progressed, the size of the radar map would be reduced, improving accuracy. In practice, Tercom brought the CEP of modern cruise missiles down to less than 150 feet.
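A rough sketch can illustrate the matching step. The Python fragment below uses invented elevation numbers: a short strip of radar-sensed terrain heights is slid along a stored contour profile, and the offset with the least squared mismatch is taken as the missile's position.

```python
# A minimal sketch of the Tercom map-matching idea, with made-up numbers.
# A strip of terrain elevations sensed by the missile's radar altimeter
# is compared against every position in a stored digital contour map.

stored_map = [10, 12, 15, 20, 26, 30, 28, 22, 18, 14, 11, 9]  # stored profile
measured = [26, 30, 28, 22]                                    # radar-sensed strip

def best_offset(stored, sensed):
    """Return the offset into the stored map with the smallest squared error."""
    best, best_err = None, float("inf")
    for k in range(len(stored) - len(sensed) + 1):
        err = sum((stored[k + i] - s) ** 2 for i, s in enumerate(sensed))
        if err < best_err:
            best, best_err = k, err
    return best

print(best_offset(stored_map, measured))  # 4: the strip matches positions 4-7
# The guidance computer would convert this offset into a course correction
# and hand control back to the inertial system until the next checkpoint.
```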
Improvements in engine design also made cruise missiles more practical. In 1967 the Williams International Corporation produced a small turbofan engine (12 inches in diameter, 24 inches long) that weighed less than 70 pounds and produced more than 400 pounds of thrust. New fuel mixtures offered more than 30-percent increases in fuel energy, which translated directly into extended range.
By the end of the Vietnam War, both the U.S. Navy and Air Force had cruise missile projects under way. At 19 feet 3 inches, the navy's sea-launched cruise missile (SLCM; eventually designated the Tomahawk) was 30 inches shorter than the air force's air-launched cruise missile (ALCM), but system components were quite similar and often from the same manufacturer (both missiles used the Williams engine and the McDonnell Douglas Corporation's Tercom). The Boeing Company produced the ALCM, while the General Dynamics Corporation produced the SLCM as well as the ground-launched cruise missile, or GLCM. The SLCM and GLCM were essentially the same configuration, differing only in their basing mode. The GLCM was designed to be launched from wheeled transporter-erector-launchers, while the SLCM was expelled from submarine tubes to the ocean surface in steel canisters or launched directly from armoured box launchers aboard surface ships. Both the SLCM and GLCM were propelled from their launchers or canisters by a solid-rocket booster, which dropped off after the wings and tail fins flipped out and the jet engine ignited. The ALCM, being dropped from a bomb-bay dispenser or wing pylon of a flying B-52 or B-1 bomber, did not require rocket boosting.
As finally deployed, the U.S. cruise missiles were intermediate-range weapons that flew at an altitude of 100 feet to a range of 1,500 miles. The SLCM was produced in three versions: a tactical-range (275-mile) antiship missile, with a combination of inertial guidance and active radar homing and with a high-explosive warhead; and two intermediate-range land-attack versions, with combined inertial and Tercom guidance and with either a high-explosive or a 200-kiloton nuclear warhead. The ALCM carried the same nuclear warhead as the SLCM, while the GLCM carried a low-yield warhead of 10 to 50 kilotons.
The ALCM entered service in 1982 and the SLCM in 1984. The GLCM was first deployed to Europe in 1983, but all GLCMs were dismantled after the signing of the INF Treaty.
Although their small size and low flight paths made the ALCM and SLCM difficult to detect by radar (the ALCM presented a radar cross section only one one-thousandth that of the B-52 bomber), their subsonic speed of about 500 miles per hour made them vulnerable to air defenses once they were detected. For this reason, the U.S. Air Force began production of an advanced cruise missile, which would incorporate stealth technologies such as radar-absorbent materials and smooth, nonreflective surface shapes. The advanced cruise missile would have a range of over 1,800 miles.

Matador And Other Programs

The third postwar U.S. cruise missile effort was the Matador, a ground-launched, subsonic missile designed to carry a 3,000-pound warhead to a range of more than 600 miles. In its early development, Matador's radio-controlled guidance, which was limited essentially to the line of sight between the ground controller and the missile, covered less than the missile's potential range. However, in 1954 an automatic terrain recognition and guidance (Atran) system was added (and the missile system was subsequently designated Mace). Atran, which used radar map-matching for both en-route and terminal guidance, represented a major breakthrough in accuracy, a problem long associated with cruise missiles. The low availability of radar maps, especially of areas in the Soviet Union (the logical target area), limited operational use, however. Nonetheless, operational deployments began in 1954 to Europe and in 1959 to Korea. The missile was phased out in 1962, its most serious problems being associated with guidance.
While the U.S. Air Force was exploring the Snark, Navaho, and Matador programs, the navy was pursuing related technologies. The Regulus, which was closely akin to the Matador (having the same engine and roughly the same configuration), became operational in 1955 as a subsonic missile launched from both submarines and surface vessels, carrying a 3.8-megaton warhead. Decommissioned in 1959, the Regulus did not represent much of an improvement over the V-1.
A follow-on design, Regulus II, was pursued briefly, striving for supersonic speed. However, the navy's preference for the new large, angled-deck nuclear aircraft carriers and for ballistic missile submarines relegated sea-launched cruise missiles to relative obscurity. Another project, the Triton, was similarly bypassed because of design difficulties and lack of funding. The Triton was to have had a range of 12,000 miles and a payload of 1,500 pounds. Radar map-matching guidance was to have given it a CEP of 1,800 feet.
In the early 1960s the Air Force produced and deployed the Hound Dog cruise missile on B-52 bombers. This supersonic missile was powered by a turbojet engine to a range of 400–450 miles. It used the guidance system of the earlier Navaho. The missile was so large, however, that only two could be carried on the outside of the aircraft. This external carriage allowed B-52 crew members to use the Hound Dog engines for extra thrust on takeoff, but the extra drag associated with the carriage, as well as the additional weight (20,000 pounds), meant a net loss of range for the aircraft. By 1976 the Hound Dog had given way to the short-range attack missile, or SRAM, essentially an internally carried, air-launched ballistic missile.

Cruise Missiles

The single most important difference between ballistic missiles and cruise missiles is that the latter operate within the atmosphere. This presents both advantages and disadvantages. One advantage of atmospheric flight is that traditional methods of flight control (e.g., airfoil wings for aerodynamic lift, rudder and elevator flaps for directional and vertical control) are readily available from the technologies of manned aircraft. Also, while strategic early-warning systems can immediately detect the launch of ballistic missiles, low-flying cruise missiles presenting small radar and infrared cross sections offer a means of slipping past these air-defense screens.
The principal disadvantage of atmospheric flight centres on the fuel requirements of a missile that must be powered continuously for strategic distances. Some tactical-range antiship cruise missiles such as the U.S. Harpoon have been powered by turbojet engines, and even some non-cruise missiles such as the Soviet SA-6 Gainful surface-to-air missile employed ramjets to reach supersonic speed, but at ranges of 1,000 miles or more these engines would require enormous amounts of fuel. This in turn would necessitate a larger missile, which would approach a manned jet aircraft in size and would thereby lose the unique ability to evade enemy defenses. This problem of balancing range, size, and fuel consumption was not solved until reliable, fuel-efficient turbofan engines were made small enough to propel a missile of radar-evading size.
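The arithmetic of this balance can be suggested with the standard Breguet range equation for jet aircraft; the numbers below (speed, fuel consumption, lift-to-drag ratio, and weights) are purely illustrative, not the figures of any actual missile.

```python
import math

# A rough illustration (hypothetical numbers) of why engine efficiency set
# the size limit for strategic cruise missiles. The Breguet range equation
# for jet aircraft is R = (V / c) * (L/D) * ln(W_start / W_end), where c is
# thrust-specific fuel consumption (per hour) and L/D is lift-to-drag ratio.

def breguet_range(speed_mph, tsfc_per_hr, lift_to_drag, w_start, w_end):
    return (speed_mph / tsfc_per_hr) * lift_to_drag * math.log(w_start / w_end)

# Same airframe (3,000 lb at launch, 1,000 lb of fuel burned), L/D of 6:
turbojet = breguet_range(500, 1.2, 6.0, 3000, 2000)   # ~1,010 miles
turbofan = breguet_range(500, 0.7, 6.0, 3000, 2000)   # ~1,740 miles
print(round(turbojet), round(turbofan))
# Cutting fuel consumption by nearly half extends range by nearly three-
# quarters without enlarging the airframe, which is what kept the missile
# small enough to evade radar.
```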
As with ballistic missiles, guidance has been a long-standing problem in cruise missile development. Tactical cruise missiles generally use radio or inertial guidance to reach the general vicinity of their targets and then home onto the targets with various radar or infrared mechanisms. Radio guidance, however, is subject to line-of-sight range limitations, and inaccuracies tend to arise in inertial systems over the long flight times required of strategic cruise missiles. Radar and infrared homing devices, moreover, can be jammed or spoofed. Adequate long-range guidance for cruise missiles was not available until inertial systems were designed that could be updated periodically by self-contained electronic map-matching devices.
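A toy simulation can suggest why that combination works. In the sketch below, the per-mile drift rate and the effectiveness of each fix are invented parameters; the point is only that unaided inertial error grows with distance flown, while periodic map-matching fixes keep it bounded.

```python
import random

# A toy model (invented parameters) of inertial drift and periodic
# map-matching fixes: small random velocity errors accumulate into a
# large position error, while each fix collapses the error back
# toward zero.

random.seed(1)

def fly(miles, fix_interval=None):
    """Return the final cross-track error, in miles, after a flight."""
    error = 0.0
    for mile in range(1, miles + 1):
        error += random.gauss(0.0, 0.02)   # per-mile inertial drift
        if fix_interval and mile % fix_interval == 0:
            error *= 0.05                  # checkpoint fix removes ~95%
    return abs(error)

print(f"pure inertial over 1,500 miles : {fly(1500):.2f} miles off course")
print(f"with a fix every 100 miles     : {fly(1500, 100):.2f} miles off course")
```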
Beginning in the 1950s, the Soviet Union pioneered the development of tactical air- and sea-launched cruise missiles, and in 1984 a strategic cruise missile given the NATO designation AS-15 Kent became operational aboard Tu-95 bombers. But Soviet programs were so cloaked in secrecy that the following account of the development of cruise missiles focuses by necessity on U.S. programs.

The V-1

The first practical cruise missile was the German V-1 of World War II, which was powered by a pulse jet that used a cycling flutter valve to regulate the air and fuel mixture. Because the pulse jet required airflow for ignition, it could not operate below 150 miles per hour. Therefore, a ground catapult boosted the V-1 to 200 miles per hour, at which time the pulse-jet engine was ignited. Once ignited, it could attain speeds of 400 miles per hour and ranges exceeding 150 miles. Course control was accomplished by a combined air-driven gyroscope and magnetic compass, and altitude was controlled by a simple barometric altimeter; as a consequence, the V-1 was subject to heading, or azimuth, errors resulting from gyro drift, and it had to be operated at fairly high altitudes (usually above 2,000 feet) to compensate for altitude errors caused by differences in atmospheric pressure along the route of flight.
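The size of such pressure-induced errors can be estimated from the standard-atmosphere relation between pressure and height; the cruise pressure and the 10-hPa weather difference in the sketch below are illustrative values, not V-1 operating data.

```python
# A small sketch of the V-1's altitude problem: a barometric altimeter
# converts pressure to height assuming one fixed sea-level pressure, so
# flying into a region where the real sea-level pressure differs shifts
# the whole altitude reading. Numbers are illustrative.

def pressure_to_altitude_ft(p_hpa, sea_level_hpa=1013.25):
    # International Standard Atmosphere relation for the lower troposphere.
    return 145366.45 * (1.0 - (p_hpa / sea_level_hpa) ** 0.190284)

cruise_pressure = 942.1  # hPa: roughly standard-atmosphere pressure at 2,000 ft

indicated = pressure_to_altitude_ft(cruise_pressure)       # ~2,000 ft
actual = pressure_to_altitude_ft(cruise_pressure, 1003.0)  # ~1,720 ft when the
                                                           # local sea-level
                                                           # pressure is lower
print(f"indicated {indicated:.0f} ft, actual {actual:.0f} ft")
# A ~10-hPa weather difference shifts true height by roughly 280 feet,
# one reason the V-1 had to fly high enough to absorb such errors.
```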
The missile was armed in flight by a small propeller that, after a specified number of turns, activated the warhead at a safe distance from the launch site. As the V-1 approached its target, the control vanes were inactivated and a rear-mounted spoiler, or drag device, was deployed, pitching the missile nose-down toward the target. This usually interrupted the fuel supply, causing the engine to quit, and the weapon detonated upon impact.
Because of the rather crude method of calculating the impact point by the number of revolutions of a small propeller, the Germans could not use the V-1 as a precision weapon, nor could they determine the actual impact point in order to make course corrections for subsequent flights. In fact, the British publicized inaccurate information on impact points, causing the Germans to adjust their preflight calculations erroneously. As a result, V-1s often fell well short of their intended targets.
Following the war there was considerable interest in cruise missiles. Between 1945 and 1948, the United States began approximately 50 independent cruise missile projects, but lack of funding gradually reduced that number to three by 1948. These three—Snark, Navaho, and Matador—provided the necessary technical groundwork for the first truly successful strategic cruise missiles, which entered service in the 1980s.

Snark

The Snark was an air force program begun in 1945 to produce a subsonic (600-mile-per-hour) cruise missile capable of delivering a 2,000-pound atomic or conventional warhead to a range of 5,000 miles, with a CEP of less than 1.75 miles. Initially, the Snark used a turbojet engine and an inertial navigation system, with a complementary stellar navigation monitor to provide intercontinental range. By 1950, due to the yield requirements of atomic warheads, the design payload had changed to 5,000 pounds, accuracy requirements shrank the CEP to 1,500 feet, and range increased to more than 6,200 miles. These design changes forced the military to cancel the first Snark program in favour of a “Super Snark,” or Snark II.
The Snark II incorporated a new jet engine that was later used in the B-52 bomber and KC-135A aerial tanker operated by the Strategic Air Command. Although this engine design was to prove quite reliable in manned aircraft, other problems—in particular, those associated with flight dynamics—continued to plague the missile. The Snark lacked a horizontal tail surface, it used elevons instead of ailerons and elevators for attitude and directional control, and it had an extremely small vertical tail surface. These inadequate control surfaces, and the relatively slow (or sometimes nonexistent) ignition of the jet engine, contributed significantly to the missile's difficulties in flight tests—to a point where the coastal waters off the test site at Cape Canaveral, Fla., were often referred to as “Snark-infested waters.” Flight control was not the least of the Snark's problems: unpredictable fuel consumption also resulted in embarrassing moments. One 1956 flight test appeared amazingly successful at the outset, but the engine failed to shut off and the missile was last seen “heading toward the Amazon.” (The vehicle was found in 1982 by a Brazilian farmer.)
Considering the less than dramatic successes in the test program, the Snark, as well as other cruise missile programs, probably would have been destined for cancellation had it not been for two developments. First, antiaircraft defenses had improved to a point where bombers could no longer reach their targets with the usual high-altitude flight paths. Second, thermonuclear weapons were beginning to arrive in military inventories, and these lighter, higher-yield devices allowed designers to relax CEP constraints. As a result, an improved Snark was deployed in the late 1950s at two bases in Maine and Florida.
The new missile, however, continued to exhibit the unreliabilities and inaccuracies typical of earlier models. On a series of flight tests, the Snark's CEP was estimated to average 20 miles, with the most accurate flight striking 4.2 miles left and 1,600 feet short. This “successful” flight was the only one to reach the target area at all and was one of only two to go beyond 4,400 miles. Accumulated test data showed that the Snark had a 33-percent chance of successful launch and a 10-percent chance of achieving the required distance. As a consequence, the two Snark units were deactivated in 1961.
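The accuracy figures quoted throughout this section can be made concrete. CEP (circular error probable) is the radius of a circle, centered on the aim point, within which half of the warheads are expected to fall; a simple empirical estimate is the median miss distance over a sample of shots. The Python sketch below checks the miss distance of the quoted Snark flight and estimates a CEP from a sample of miss distances that are invented for illustration.

```python
import math

def miss_distance(cross_range_mi: float, down_range_mi: float) -> float:
    """Straight-line miss distance from cross-range and down-range errors."""
    return math.hypot(cross_range_mi, down_range_mi)

def empirical_cep(misses: list) -> float:
    """Median miss distance: half of the shots land inside this radius."""
    ordered = sorted(misses)
    n, mid = len(ordered), len(ordered) // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# The flight quoted above: 4.2 miles left and 1,600 feet (0.303 mi) short.
print(round(miss_distance(4.2, 1600 / 5280), 2))    # about 4.21 miles

# Invented sample of miss distances (miles), for illustration only:
sample = [4.21, 12.0, 18.5, 22.0, 35.0]
print(empirical_cep(sample))                        # 18.5
```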

Rocket and missile system (part 3)

Ballistic missile defense

Although ballistic missiles followed a predictable flight path, defense against them was long thought to be technically impossible because their RVs were small and traveled at great speeds. Nevertheless, in the late 1960s the United States and the Soviet Union pursued layered antiballistic missile (ABM) systems that combined a high-altitude interceptor missile (the U.S. Spartan and Soviet Galosh) with a terminal-phase interceptor (the U.S. Sprint and Soviet Gazelle). All of these systems were nuclear-armed. Such defenses were limited by the Treaty on Anti-Ballistic Missile Systems of 1972 and its 1974 protocol, under which each side was allowed a single ABM site with 100 interceptor missiles. The Soviet system, around Moscow, remained active and was upgraded in the 1980s, whereas the U.S. system was deactivated in 1976. Still, given the potential for renewed or surreptitious ballistic missile defenses, all countries incorporated penetration aids along with warheads in their missiles' payloads. MIRVs also were used to overcome missile defenses.

Maneuverable warheads 

Even after a missile's guidance had been updated with stellar or satellite references, disturbances during final descent could throw a warhead off course. Also, given the advances in ballistic missile defenses that were achieved even after the ABM treaty was signed, RVs remained vulnerable. Two technologies offered possible means of overcoming these difficulties. Maneuvering warheads, or MaRVs, were first integrated into the U.S. Pershing II IRBMs deployed in Europe from 1984 until they were dismantled under the terms of the INF Treaty. The warhead of the Pershing II contained a radar area guidance (Radag) system that compared the terrain toward which it descended with information stored in a self-contained computer. The Radag system then issued commands to control fins that adjusted the glide of the warhead. Such terminal-phase corrections gave the Pershing II, with a range of 1,100 miles, a CEP of 150 feet. The improved accuracy allowed the missile to carry a low-yield 15-kiloton warhead.
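The principle behind radar area guidance can be illustrated with a toy terrain-correlation routine: slide the terrain profile measured by the radar along the stored reference map and accept the offset with the smallest squared difference. The Python sketch below is a schematic of that idea only, not the Radag implementation, and its terrain data are invented.

```python
def best_offset(reference: list, measured: list) -> int:
    """Offset (in map cells) at which the measured profile best fits the map."""
    n, m = len(reference), len(measured)
    best, best_err = 0, float("inf")
    for offset in range(n - m + 1):
        err = sum((reference[offset + i] - measured[i]) ** 2 for i in range(m))
        if err < best_err:
            best, best_err = offset, err
    return best

# Stored map: terrain heights (feet) along the expected descent track.
reference_map = [120, 130, 160, 210, 180, 150, 140, 170, 220, 260, 240, 200]
# Radar-measured heights: the same terrain seen with a little noise.
radar_profile = [152, 139, 171, 218, 262]

print(best_offset(reference_map, radar_profile))   # 5
# If the predicted position corresponded to offset 3, guidance would command
# the control fins to steer out the two-cell difference during the glide.
```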
MaRVs would present ABM systems with a shifting, rather than ballistic, path, making interception quite difficult. Another technology, precision-guided warheads, or PGRVs, would actively seek a target and then, using flight controls, actually “fly out” reentry errors. This could yield such accuracy that nuclear warheads could be replaced by conventional explosives.

Rocket and missile system (part 2)

Multiple warheads

By the early 1970s, several technologies were maturing that would produce a new wave of ICBMs. First, thermonuclear warheads, much lighter than the earlier atomic devices, had been incorporated into ICBMs by 1970. Second, the ability to launch larger throw weights, achieved especially by the Soviets, allowed designers to contemplate adding multiple warheads to each ballistic missile. Finally, improved and much lighter electronics translated into more accurate guidance.
The first steps toward incorporating these technologies came with multiple warheads, or multiple reentry vehicles (MRVs), and the Fractional Orbital Bombardment System (FOBS). The Soviets introduced both of these capabilities with the SS-9 Scarp, the first “heavy” missile, beginning in 1967. FOBS was based on a low-trajectory launch that would be fired in the opposite direction from the target and would achieve only partial earth orbit. With this method of delivery, it would be quite difficult to determine which target was being threatened. However, given the shallow reentry angles associated with a low trajectory and partial earth orbit, the accuracy of FOBS missiles was questionable. A missile carrying MRVs, on the other hand, would be launched toward the target in a high ballistic trajectory. Several warheads from the same missile would strike the same target, increasing the probability of killing that target, or individual warheads would strike separate targets within a very narrow ballistic “footprint.” (The footprint of a missile is that area which is feasible for targeting, given the characteristics of the reentry vehicle.) The SS-9, model 4, and the SS-11 Sego, model 3, both had three MRVs and ballistic footprints equal to the dimensions of a U.S. Minuteman complex. The only instance in which the United States incorporated MRVs was with the Polaris A-3, which, after deployment in 1964, carried three 200-kiloton warheads a distance of 2,800 miles. In 1967 the British adapted their own warheads to the A-3, and beginning in 1982 they upgraded the system to the A3TK, which contained penetration aids (chaff, decoys, and jammers) designed to foil ballistic missile defenses around Moscow.
Soon after adopting MRVs the United States took the next technological step, introducing multiple independently targetable reentry vehicles (MIRVs). Unlike MRVs, independently targeted RVs could be released to strike widely separated targets, essentially expanding the footprint established by a missile's original ballistic trajectory. This demanded the capacity to maneuver before releasing the warheads, and maneuvering was provided by a structure in the front end of the missile called the “bus,” which contained the RVs. The bus was essentially a final, guided stage of the missile (usually the fourth) that now had to be considered part of the missile's payload. Since any bus capable of maneuvering took up weight that would otherwise be available for warheads, MIRVed systems had to carry warheads of lower yield. This in turn meant that the RVs had to be released on their ballistic paths with great accuracy. Because solid-fueled motors could be neither throttled nor shut down and restarted, liquid-fueled buses were developed for making the necessary course corrections. The typical flight profile for a MIRVed ICBM then became approximately 300 seconds of solid-rocket boost and 200 seconds of bus maneuvering to place the warheads on independent ballistic trajectories.
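The effect of bus maneuvering on the footprint follows from simple arithmetic: a small velocity change applied with t seconds of flight remaining shifts the impact point by roughly the velocity change multiplied by t. The Python sketch below applies this flat-earth, vacuum approximation to a hypothetical three-warhead dispense; the release times, burn sizes, and flight time are invented for illustration.

```python
def dispense(release_times_s, delta_vs_mps, impact_time_s):
    """Approximate cross-range impact offsets (km) of each released RV."""
    impacts = []
    vx, x, last_t = 0.0, 0.0, 0.0   # bus cross-range velocity and position
    for t, dv in zip(release_times_s, delta_vs_mps):
        x += vx * (t - last_t)      # bus coasts since the previous event
        vx += dv                    # small sideways burn before this release
        # the RV keeps the bus's velocity for the rest of its flight
        impacts.append((x + vx * (impact_time_s - t)) / 1000.0)
        last_t = t
    return impacts

# Three RVs released over ~200 s of bus flight (as in the profile above),
# with a 5 m/s sideways burn before the second and third releases and
# impact roughly 1,500 s after the first release:
print(dispense([0.0, 100.0, 200.0], [0.0, 5.0, 5.0], 1500.0))
# [0.0, 7.0, 13.5]  -> a footprint kilometers wide from very small burns
```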
The first MIRVed system was the U.S. Minuteman III. Deployed in 1970, this three-stage, solid-fueled ICBM carried three MIRVs of an estimated 170 to 335 kilotons. The missile had a range of 8,000 miles, and its warheads had CEPs of 725–925 feet. Beginning in 1970 the United States also MIRVed its SLBM force with the Poseidon C-3, which could deliver up to 14 50-kiloton RVs to a range of 2,800 miles and with a CEP of about 1,450 feet. After 1979 this force was upgraded with the Trident C-4, or Trident I, which could deliver eight 100-kiloton MIRVs with the same accuracy as the Poseidon, but to a distance of 4,600 miles. Much longer range was made possible in the Trident by adding a third stage, by replacing aluminum with lighter graphite epoxies, and by adding an “aerospike” to the nose cone that, extending after launch, produced the streamlining effect of a pointed design while allowing the larger volume of a blunt design. Accuracy was maintained by updating the missile's inertial guidance during bus maneuvering with stellar navigation.
By 1978 the Soviet Union had fielded its first MIRVed SLBM, the SS-N-18 Stingray. This liquid-fueled missile could deliver three or five 500-kiloton warheads to a distance of 4,000 miles, with a CEP of about 3,000 feet. On land in the mid-1970s, the Soviets deployed three MIRVed, liquid-fueled ICBM systems, all with ranges exceeding 6,000 miles and with CEPs of 1,000 to 1,500 feet: the SS-17 Spanker, with four 750-kiloton warheads; the SS-18 Satan, with up to 10 500-kiloton warheads; and the SS-19 Stiletto, with six 550-kiloton warheads. Each of these Soviet systems had several versions that traded multiple warheads for higher yield. For instance, the SS-18, model 3, carried a single 20-megaton warhead. This giant missile, which replaced the SS-9 in the latter's silos, had about the same dimensions as the Titan II, but its throw weight of more than 16,000 pounds was twice that of the U.S. system.
Beginning in 1985, France upgraded its SLBM force with the M-4, a three-stage MIRVed missile capable of carrying six 150-kiloton warheads to ranges of 3,600 miles.
A second generation of MIRVed U.S. systems was represented by the Peacekeeper. Known as the MX during its 15-year development phase before entering service in 1986, this three-stage ICBM carried 10 300-kiloton warheads and had a range of 7,000 miles. Originally designed to be based on mobile railroad or wheeled launchers, the Peacekeeper was eventually housed in Minuteman silos. A second-generation MIRVed SLBM of the 1990s was the Trident D-5, or Trident II. Even though it was one-third again as long as its predecessor and had twice the throw weight, the D-5 could deliver 10 475-kiloton warheads to a range of 7,000 miles. Both the Trident D-5 and Peacekeeper represented a radical advance in accuracy, having CEPs of only 400 feet. The improved accuracy of the Peacekeeper was due to a refinement in the inertial guidance system, which housed the gyros and accelerometers in a floating-ball device, and to the use of an exterior celestial navigation system that updated the missile's position by reference to stars or satellites. The Trident D-5 also contained a star sensor and satellite navigator. This gave it several times the accuracy of the C-4 at more than twice the range.
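Why an external fix buys accuracy can be seen in one dimension: a small accelerometer bias, integrated twice by the inertial system, grows into a position error proportional to the square of the flight time, and a stellar or satellite fix removes the error accumulated up to that moment. The Python sketch below assumes, for simplicity, that the fix cancels both the position and the velocity error; the bias value is invented.

```python
BIAS = 1e-4   # hypothetical accelerometer bias, m/s^2

def inertial_error(seconds: float, bias: float = BIAS) -> float:
    """Position error (m) from integrating a constant acceleration bias twice."""
    return 0.5 * bias * seconds ** 2

# Unaided error from this one bias term after 500 s of flight:
print(inertial_error(500.0))   # 12.5 m

# With a full fix at 300 s (position and velocity errors zeroed), only the
# error accrued over the remaining 200 s is left at impact:
print(inertial_error(200.0))   # 2.0 m
```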
Within the generally less-advanced guidance technology of the Soviet Union, an equally radical advance came with the solid-fueled SS-24 Scalpel and SS-25 Sickle ICBMs, deployed in 1987 and 1985, respectively. The SS-24 could carry eight or 10 MIRVed warheads of 100 kilotons, and the SS-25 was fitted with a single 550-kiloton RV. Both missiles had a CEP of 650 feet. In addition to their accuracy, these ICBMs represented a new generation in basing mode. The SS-24 was launched from railroad cars, while the SS-25 was carried on wheeled launchers that shuttled between concealed launch sites. As mobile-based systems, they were long-range descendants of the SS-20 Saber, an IRBM carried on mobile launchers that entered service in 1977, partly along the border with China and partly facing western Europe. That two-stage, solid-fueled missile could deliver three 150-kiloton warheads a distance of 3,000 miles with a CEP of 1,300 feet. It was phased out after the signing of the Intermediate-Range Nuclear Forces (INF) Treaty in 1987.

Rocket and missile system

From liquid to solid fuel


This first generation of missiles was typified by its liquid fuel, which required both a propellant and an oxidizer for ignition as well as a complex (and heavy) system of pumps. The early liquid fuels were quite dangerous, difficult to store, and time-consuming to load. For example, Atlas and Titan used so-called cryogenic (hypercold) fuels that had to be stored and handled at very low temperatures (−422° F [−252° C] for liquid hydrogen). These propellants had to be stored outside the rocket and pumped aboard just before launch, a process that consumed more than an hour.

As each superpower produced, or was thought to produce, more ICBMs, military commanders became concerned about the relatively slow reaction times of their own ICBMs. The first step toward “rapid reaction” was the rapid loading of liquid fuels. Using improved pumps, the reaction time of the Titan I was reduced from over one hour to less than 20 minutes. Then, with a second generation of storable liquids that could be kept loaded in the missile, reaction time was reduced to approximately one minute. Examples of second-generation storable-liquid missiles were the Soviet SS-7 Saddler and SS-8 Sasin (the latter deployed in 1963) and the U.S. Titan II. The Titan II was the largest ballistic missile ever developed by the United States. This two-stage ICBM was more than 100 feet long and 10 feet in diameter. Weighing more than 325,000 pounds at launch, it delivered its single warhead (with a throw weight of about 8,000 pounds) to a range of 9,000 miles with a CEP of about one mile.

In about 1964 China began developing a series of liquid-fueled IRBMs given the NATO designation CSS, for Chinese surface-to-surface missile. (The Chinese named the series Dong Feng, meaning “East Wind.”) The CSS-1 carried a 20-kiloton warhead to a range of 600 miles. The CSS-2, entering service in 1970, was fueled by storable liquids; it had a range of 1,500 miles and carried a one- to two-megaton warhead. With the two-stage CSS-3 (active from 1978) and the CSS-4 (active from 1980), the Chinese reached ICBM ranges of over 4,000 and 7,000 miles, respectively. The CSS-4 carried a warhead of four to five megatons.

Because storable liquids did not alleviate the dangers inherent in liquid fuels, and because the flight times of missiles flying between the United States and the Soviet Union shrank to less than 35 minutes from launch to impact, still faster reactions were sought with even safer fuels. This led to a third generation of missiles, powered by solid propellants. Solid propellants were, eventually, easier to make, safer to store, lighter in weight (because they did not require on-board pumps), and more reliable than their liquid predecessors. Here the oxidizer and propellant were mixed into a canister and kept loaded aboard the missile, so that reaction times were reduced to seconds. However, solid fuels were not without their complications. First, while it was possible with liquid fuels to adjust in flight the amount of thrust provided by the engine, rocket engines using solid fuel could not be throttled. Also, some early solid fuels had uneven ignition, producing surges or abrupt velocity changes that could disrupt or severely confound guidance systems.

The first solid-fueled U.S. system was the Minuteman I. This ICBM, conceived originally as a rail-mobile system, was deployed in silos in 1962, became operational the following year, and was phased out by 1973. The first Soviet solid-fueled ICBM was the SS-13 Savage, which became operational in 1969. This missile could carry a 750-kiloton warhead more than 5,000 miles. Because the Soviet Union deployed several other liquid-fueled ICBMs between 1962 and 1969, Western specialists speculated that the Soviets experienced engineering difficulties in producing solid propellants.

The French deployed the first of their solid-fueled S-2 missiles in 1971. These two-stage IRBMs carried a 150-kiloton warhead and had a range of 1,800 miles. The S-3, deployed in 1980, could carry a one-megaton warhead to a range of 2,100 miles.

The first SLBMs

Simultaneous with the early Soviet and U.S. efforts to produce land-based ICBMs, both countries were developing SLBMs. In 1955 the Soviets launched the first SLBM, the one- to two-megaton SS-N-4 Sark. This missile, deployed in 1958 aboard diesel-electric submarines and later aboard nuclear-powered vessels, had to be launched from the surface and had a range of only 350 miles. Partly in response to this deployment, the United States gave priority to its Polaris program, which became operational in 1960. Each Polaris A-1 carried a warhead of one megaton and had a range of 1,400 miles. The Polaris A-2, deployed in 1962, had a range of 1,700 miles and also carried a one-megaton warhead. The U.S. systems were solid-fueled, whereas the Soviets initially used storable liquids. The first Soviet solid-fueled SLBM was the SS-N-17 Snipe, deployed in 1978 with a range of 2,400 miles and a 500-kiloton warhead.

Beginning in 1971, France deployed a series of solid-fueled SLBMs comprising the M-1, M-2 (1974), and M-20 (1977). The M-20, with a range of 1,800 miles, carried a one-megaton warhead. In the 1980s the Chinese fielded the two-stage, solid-fueled CSS-N-3 SLBM, which had a range of 1,700 miles and carried a two-megaton warhead.
