Perception and Misperception in International Politics (Robert Jervis, 1976)

Cognitive Dissonance Theory
The theory of cognitive dissonance can explain a number of puzzling misperceptions. The basic outlines of the theory are not startling, but some of its implications are contrary to common sense and other theories of cognitive consistency. The definition of dissonance is relatively straightforward: “two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other.” For example, the information that a Ford is a better car than a Chevrolet is dissonant with the knowledge that I have bought a Chevy. Information that the MLF is not favored by most Europeans is dissonant with the decision-maker’s knowledge that he adopted this policy believing it would reduce strains within the alliance. The basic hypotheses are: “1. The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce dissonance and achieve consonance. 2. When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance.”
The basis of dissonance theory lies in the postulate that people seek strong justification for their behavior. They are not content to believe merely that they behaved well and chose wisely—if this were the case they would only have to maintain the beliefs that produced their decisions. Instead, people want to minimize their internal conflict. This leads them to seek to believe that the reasons for acting or deciding as they did were overwhelming. The person will then rearrange his beliefs so that they provide increased support for his actions. Knowledge of the advantages of rejected courses of action and costs of the chosen one will be a source of uncomfortable dissonance that he will try to reduce. To do this he will alter his earlier opinions, seeing more drawbacks and fewer advantages in the policies he rejected and more good points and fewer bad ones in the policy he adopted. He may, for example, come to believe that the rejected policy would not satisfy certain criteria that he originally thought it would, or that those criteria are less important than he originally believed, or that the chosen policy will not cost as much as he first thought. The person may also search out additional information supporting his decision and find new reasons for acting as he did and will avoid, distort, or derogate new dissonant information. If doubts nevertheless creep in, he will redouble his efforts to justify his decision. As a result, “Following a decision there is an increase in the confidence in the decision or an increase in the discrepancy in attractiveness between the alternatives involved in the choice, or both.” This is known as the “spreading apart of the alternatives” because of the perceived increase in the gap between the net advantages of the chosen policy and those of the rejected ones.
As the last quote implies, the theory has been developed only for post-decision situations. Two further conditions are necessary. First, there must be a “definite commitment resulting from the decision…. It seems that a decision carries commitment with it if the decision unequivocally affects subsequent behavior.” Second, the person must feel that his decision was a free one, i.e. that he could have chosen otherwise. If he had no real choice, then the disadvantages of the policy will not create dissonance because his lack of freedom provides sufficient justification for his action.
Making such a decision will, according to dissonance theory, greatly alter the way a person thinks. Before reaching his decision the individual will seek conflicting information and make “some compromise judgment between the information and his existing cognitions or between bits of information inconsistent with each other and with his existing cognitions.” But once the person has reached a decision, he is committed and “cannot process information and make some compromise judgment.” Quite the contrary, the person must minimize the extent to which the evidence pointed in opposite directions.
An ingenious demonstration, an anecdote, and an international example illustrate the meaning of this phenomenon and show that it occurs outside the laboratory. Immediately after they place their bets, race track bettors become more confident that their horse will win. A few hours after deciding to accept a job offer that he had agonizingly considered for several months, a professor said, “I don’t understand how I could have seriously thought about not taking the job.” And the doubts of British Liberals about whether to go to war in 1914 were almost totally dissolved after the decision was reached.
We should note that contrary to first impressions, dissonance reduction does not always imply wishful thinking. First, reducing dissonance can involve changing evaluations of alternatives, thus altering desires themselves. Second, selecting and interpreting evidence so as to confirm that one’s decision was wise may not conform to one’s desires. For example, the decision to take costly precautions against the possibility that a negatively valued event will occur can generate dissonance reduction that will lead to unwishful thinking. Thus if I decide to build a bomb shelter, I may reduce dissonance by not listening to people who argue that the peace is secure. But I will still hope for peace.
Three conceptual problems are apparent from the start. First, one must be able to draw a line between pre- and post-decision situations. This is easy to do in most laboratory experiments. At a given time, subjects are made to choose a toy, an appliance, or a side of an argument. But in most political cases there is no such clear moment of decision. Even if one can determine when orders were given or papers signed, this may not coincide with the time of commitment. It is even harder to pinpoint the time when an actor decides what the other’s intentions are. This does not mean, however, that no distinctions are possible. Even if we cannot tell exactly when the actor decided, we can contrast the early stages when he was obviously undecided with later ones when he is carrying out a course of action. Similarly, as images become firmly established and policies are based upon them, we can consider the actor to be committed to this image and can analyze his behavior in terms of dissonance theory.
Second, and more important, when we deal with complex questions and subtle reasoning it is not clear what beliefs and bits of information are consonant or dissonant with each other. Is the information that Russia seeks a disarmament agreement dissonant with the belief that Russia is hostile? Is the perception that relations between China and the United States have improved dissonant with the belief that the previous American “hard line” policy was appropriate? Dissonance research has usually avoided these questions by constructing experiments in which it seems obvious that certain elements are dissonant. But the theoretical question of how one determines if an element follows from the obverse of the other has not been answered. Perhaps the best we can say is that a dissonant cognition is one that the person would have counted as a reason against following the course of action that he later chose, and the exact specification of what is dissonant will therefore vary with the person’s beliefs. 
This problem is further complicated by the fact that evidence that the policy is incurring high costs or proving to be a failure is dissonant with the person’s knowledge that he chose that policy only if he feels that he could have predicted this outcome on the basis of the information he had, or should have had, when he made his decision. Dissonance reduction is employed to ward off the perception that the decision was unwise or inappropriate, not to inhibit the conclusion that in an unpredictable world the policy had undesired consequences. So the person can recognize, with hindsight, that the decision was wrong without believing that, given the evidence available when he chose, he made a bad decision. This makes it important for us to know how much the decision-maker thinks he should be able to foresee future events. On the one hand, most statesmen see the world as highly contingent and uncertain. But they also have great confidence in their own abilities. Unfortunately, we do not know the relative strengths of these conflicting pressures and have little evidence about the degree to which and conditions under which decision-makers will consider evidence that their policy is failing to be dissonant with their knowledge that they initially favored the policy. 
Although we cannot treat this topic at length here, we should note that the counterargument has been made, and supported by a good deal of evidence, that “When a person gets himself into a situation, and therefore feels responsible for its consequences, inconsistent information should, no matter how it comes about, arouse dissonance.” Thus even if the person could not have foreseen an undesired consequence of his decision, its occurrence will be dissonant with the cognition that he made the decision and so was, in some sense, responsible for the consequence. At the heart of this debate are differing conceptions of the driving forces of dissonance. The view we have endorsed sees dissonance arising out of an inconsistency between what a person has done and his image of himself. The alternative view sees the sources of dissonance as inconsistencies between the person’s values and the undesired effects of his behavior. The former position has better theoretical support, but the experimental evidence is mixed.
Third, little is known about the magnitude of dissonance effects. Pressures that appear in carefully controlled laboratory situations may be slight compared to those produced by forces active in decision-making in the outside world. Such influences as institutional interests, political incentives, and feelings of duty may dwarf the impact of dissonance. Indeed even in the laboratory, this effect is far from overwhelming, producing changes ranging from 4 percent to 8 percent in the relative attractiveness of the alternatives.
Cognitive Dissonance and Inertia
If this theory dealt only with the ways people increased their comfort with decisions already reached we would not be concerned with it since it could not explain how people decide. In reducing dissonance, however, people alter their beliefs and evaluations, thereby changing the premises of later deliberations, and so the theory has implications for the person’s future decisions, actions, and perceptions. One of the most important is the added force that dissonance provides to the well-known power of inertia. Many reasons why tentative decisions become final ones and why policies are maintained in the face of discrepant information do not require the use of complex psychological theories to be understood. The domestic or bureaucratic costs of policy change are usually high. The realization of the costs of change makes subordinates hesitant to call attention to the failure of current practices or to the potential of alternatives. There are external costs as well since international policies often involve commitments that cannot be broken without damaging the state’s reputation for living up to its word. Decision-makers may calculate that the value of this reputation outweighs the loss entailed by continuing an unwise policy.
Other psychological theories that stress consistency also predict that discrepant information will not be given its due and imply that policies will be maintained longer than political calculations can explain. Dissonance theory elaborates upon this point in two ways, neither of which, however, is easy to verify in the political realm. First, dissonance theory asserts that, after making a decision, the person not only will downgrade or misinterpret discrepant information but will also avoid it and seek consonant information. This phenomenon, known as “selective exposure,” has been the subject of numerous experiments that have not resolved all the controversies. But even the founder of dissonance theory now admits that this effect “is small and is easily overcome by other factors” including “the potential usefulness of dissonant material,” curiosity, and the person’s confidence that he can cope with discrepant information. Thus “avoidance would be observed only under circumstances where other reasons for exposure … were absent.” Such a weak effect is hard enough to detect in the laboratory; it is harder to find and less relevant in the arena of political decision-making.
A second hypothesis that may explain why policies are continued too long is more fruitful: the aim and effect of dissonance reduction is to produce post-decision spreading apart of the alternatives. There are important political analogues to experiments that show that if a person is asked to rate the attractiveness of two objects (e.g. toys or appliances), is allowed to choose one to keep, and then is asked to rate them again, his ratings will shift to increase the relative desirability of the chosen object. After they have made a choice, decision-makers, too, often feel certain that they decided correctly even though during the pre-decision period that policy did not seem obviously and overwhelmingly the best. The extent and speed of this shift and the fact that contemporary participants and later scholars rarely attribute it entirely to political considerations imply that dissonance reduction is taking place.
Before we discuss the evidence from and implications for political decision-making, two objections should be noted. First, modifications of dissonance theory consider the possibility of the opposite of the spreading of the alternatives—“post-decision regret.” But the slight theory and data on this subject do not show that the phenomenon is of major importance. Second, some scholars have argued that, especially when the decision-maker does not expect to receive further information and the choice is a hard one, the spreading apart of the alternatives precedes and facilitates the decision. But the bulk of the evidence supports the dissonance position.
We should also remember that this whole argument is irrelevant if a decision changes the state’s environment and destroys the availability of alternative policies. Some decisions to go to war do this. For example, once Japan bombed Pearl Harbor, she could make little use of new information that the United States was not willing to suffer a limited defeat but would prefer to fight the war through to complete victory. Even had dissonance not made it more difficult for the Japanese to think about ways of ending the war, it is not certain that viable alternatives existed.
Many decision-makers speak of their doubts vanishing after they embarked on a course of action, or they say that a situation seemed much clearer after they reached a decision. Evidence that would have been carefully examined before the decision is rejected at later stages. These effects are illustrated by President Madison’s behavior in 1811 after he accepted an ambiguous French offer to lift her decrees hindering American trade in return for Madison’s agreement to allow trade with France while maintaining the ban on trade with England. When he took this step, “Madison well understood the equivocal nature” of the French offer and realized that Napoleon might be deceiving him. But even as evidence mounted that this was in fact the case, Madison firmly stood by his earlier decision and did not recognize even privately that the French were not living up to their bargain. The British, who had the same information about the French actions as the Americans had, soon realized that Napoleon was engaging in deception.
Similarly, only slowly and painfully did Woodrow Wilson decide to ask for a declaration of war against Germany. His awareness of the costs of entering the war was acute. But after the decision was made, he became certain that his policy was the only wise and proper one. He had no second thoughts. In the same way, his attempt to reduce the dissonance that had been aroused by the knowledge that he had had to make painful compromises during the negotiations on the peace treaty can explain Wilson’s negative reaction to the last-minute British effort to alter the treaty and bring it more in line with Wilsonian principles: “The time to consider all these questions was when we were writing the treaty, and it makes me a little tired for people to come and say now that they are afraid that the Germans won’t sign, and their fear is based upon things that they insisted upon at the time of the writing of the treaty; that makes me very sick.” There were political reasons for not reopening negotiations, but the vehemence with which Wilson rejected a position that was in harmony with his ideals suggests the presence of unacknowledged psychological motivation. Wilson’s self-defeating refusal to perceive the strength of the Senate opposition to the League may be similarly explained by the fact that the belief that alterations in the Covenant were necessary would have been dissonant with the knowledge that he had given away a great deal to get the Europeans to accept the League. Once the bargains had been completed, dissonance reduction made them seem more desirable.
More recently, the American decision not to intervene in Vietnam in 1954 was followed by a spreading apart of the alternatives. When they were considering the use of force to prevent a communist victory, Dulles and, to a lesser extent, Eisenhower believed that a failure to act would expose the neighboring countries to grave peril. When the lack of Allied and domestic support made intervention prohibitively expensive, American decision-makers altered their perceptions of the consequences of not intervening. Although they still thought that the immediate result would be the fall of some of Indochina, they came to believe that the further spread of communism—fear of which motivated them to consider entering the war—would not necessarily follow. They altered their views to reject the domino theory, at least in its most deterministic formulation, and held that alternative means could save the rest of Southeast Asia. It was argued—and apparently believed—that collective action, which had initially been sought in order to hold Vietnam, could stabilize the region even though part of Vietnam was lost. By judging that military victory was not necessary, the decision-makers could see the chosen policy of not intervening as greatly preferable to the rejected alternative of unilateral action, thereby reducing the dissonance aroused by the choice of a policy previously believed to entail extremely high costs.
This spreading apart of the alternatives increases inertia. By altering their views to make it seem as though their decision was obviously correct, decision-makers increase the amount of discrepant information necessary to reverse the policy. Furthermore, continuing efforts to reduce dissonance will lead decision-makers to fail to seek or appreciate information that would call their policy into question if they believe that they should have foreseen the negative consequences of their decision.
The reduction of dissonance does more than help maintain policies once they are set in motion. By altering instrumental judgments and evaluations of outcomes, it indirectly affects other decisions. Most of us have at times found that we were determining our values and standards by reference to our past behavior. For example, no one can do a complete cost-benefit study of all his expenditures. So we tend to decide how much we can spend for an object by how much we have spent for other things. “If the book I bought last week was worth $10.00 to me, then this record I’m now considering buying is certainly worth $1.98.”
Acting on a value or a principle in one case can increase its potency and influence over future cases. The value is not depleted in being used, but rather replenished and reinforced because of the effects of dissonance reduction aimed at easing the person’s doubts about acting on that value. Thus if a person refuses a tempting bribe he is more likely to display integrity in the future because dissonance reduction will lead him to place greater value on this quality, to place less value on the money he might receive, and/or to come to believe that he is likely to get caught if he commits an illegal act. Similarly, if a professor turns down a tempting offer from another university, his resistance to future offers will increase if he reduces dissonance by evaluating his own institution more favorably or by raising his estimate of the costs of leaving. Statesmen’s post-decision re-evaluations of their goals and beliefs can have more far-reaching consequences. If in reducing the dissonance created by the decision to intervene to prevent revolution in one country, the statesman inflates his estimate of the bad consequences of internal unrest, he will be more likely to try to quell disturbances in other contexts. Or if he reduces dissonance by increasing his faith in the instrument he chose to use, he will be more likely to employ that instrument in later cases. Similarly, downgrading a rejected goal means that it will not be pursued even if altered circumstances permit its attainment. This may be one reason why once an actor has decided that a value will be sacrificed if this becomes necessary—i.e. once he has developed a fall-back position—he usually fights less hard to maintain his stand. But because these effects, although important, are hard to verify and because other theories predict that policy-making will be characterized by high inertia, we must turn to other hypotheses, some of which run counter to common sense, in order to further demonstrate the utility of cognitive dissonance theory.  (Robert Jervis, 1976, pages 508 – 517) 

How Statesmen Think: The Psychology of International Politics (Robert Jervis, 2017)

Perceptual Biases 
Signals affect behavior only as they are perceived and interpreted, and the difficulty with constructing a parsimonious theory is that perceptions and their causes are quite varied. Nevertheless, several main tendencies can be detected. Although I cannot build them into a unified theory of signaling and perception, any attempt to create such a theory has to incorporate them. In efforts to make sense of their world, people are moved by both motivated (that is, affect-driven) and unmotivated (purely cognitive) biases.26 The former derive from the need to maintain psychological well-being and a desired self-image, the latter from the need for short-cuts to rationality in an environment characterized by complex and ambiguous information. Motivated and cognitive influences are hard to separate,27 and I merely discuss the single most important bias of each type. 
The generalization that is most powerful, in the sense of occurring most often and exercising most control over perceptions, is that information is interpreted within the framework established by preexisting beliefs.28 Three implications are crucial here. First, images of other states are strongly influenced by the often implicit theories held by statesmen that specify the existence and meaning of indices (e.g., democracies are peaceful; countries experiencing rapid economic growth will demand an increased international role). These theories can vary from one individual or society to another and often are related to general ideas about how people and politics function. For example, if a new regime in a country suppressed democracy and civil liberties and proclaimed the superiority of the dominant racial group, many observers would predict that it would menace its neighbors. Indeed, it might be seen as a potential Nazi Germany. But it was the Nazi experience itself that made these links between domestic and international politics so salient; one reason for the appeasement policy was that in the 1930s oppressive regimes were not believed to be especially aggressive. A second consequence of the influence of preexisting beliefs is that images of individual states, once established, will change only in response to discrepant information that is high in quantity or low in ambiguity. This helps account for the inertia of many policies, the frequency with which states are taken by surprise, and many of the occasions on which signals are missed or misinterpreted. Third, and relatedly, observers who believe different theories or hold different images of the state will draw different inferences from its behavior. The same signals and indices will be read very differently by observers with different beliefs about the actor. This means that if observers—and actors—are to estimate how signals will be received, they need to understand the theories and cognitive predispositions of the perceivers. If you want to know whether an act will be seen as hostile or not, you should first inquire as to whether the observer already has an image of the actor as malign; to tell whether a promise or a threat will be viewed as credible, it is crucial to discern the perceivers’ theories and beliefs about the actor. This is true irrespective of the truthfulness of the signals. The famous “Double-Cross” system that the British used during World War II to control the Nazi spy network in Britain and feed it misleading messages would not have worked had the British not cracked many German codes, thus enabling them to understand what reports the Germans would believe.
This shows the psychological naiveté of signaling theories that, although acknowledging the importance of preexisting beliefs, argue that new information is combined with old as specified by Bayesian updating of prior beliefs. The model generally used is of a person who has to estimate the proportions of red and blue chips in a paper bag; increasing evidence is provided as one chip after another is drawn at random from the bag. The assumption, appropriate for this example, is that judgments of specific bits of evidence are independent of expectations. That is, whether I think the bag contains mostly red or mostly blue chips does not affect whether I see any particular chip as red or blue; the color of the chip is objective and will be perceived the same way by all people, irrespective of their prior beliefs. But this is a poor model for perceptions of actors’ types: how I perceive your signal is strongly influenced by what I already think of you.
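To make the contrast concrete, the following is a minimal sketch (not from Jervis; the hypotheses, probabilities, and function names are illustrative assumptions) of the chip-drawing model: the evidence is objective, and Bayes's rule alone moves the belief toward the true composition of the bag. The last function crudely represents the point Jervis is making about signals, in which the prior colors how an ambiguous piece of evidence is read before any updating occurs.

```python
# Hypothetical sketch of the chip-drawing model of Bayesian updating.
# "Mostly red" is taken to mean 70% red; "mostly blue" means 70% blue.

def bayes_update(prior_mostly_red, observation):
    """Update P(bag is mostly red) after observing one chip ('red' or 'blue')."""
    p_obs_if_mostly_red = 0.7 if observation == "red" else 0.3
    p_obs_if_mostly_blue = 0.3 if observation == "red" else 0.7
    numerator = p_obs_if_mostly_red * prior_mostly_red
    denominator = numerator + p_obs_if_mostly_blue * (1.0 - prior_mostly_red)
    return numerator / denominator

# In the chip model the evidence is objective: every observer records the same
# color regardless of prior beliefs, and only the posterior changes.
belief = 0.5
for chip in ["red", "red", "blue", "red"]:
    belief = bayes_update(belief, chip)
print(f"P(mostly red) after four draws: {belief:.2f}")  # rises well above 0.5

# Perceiving a signal, by contrast, is not like seeing a chip. A crude,
# hypothetical way to represent theory-driven perception is to let the prior
# determine how an ambiguous act is coded before any updating takes place.
def perceive_signal(signal, prior_hostile):
    """Code an ambiguous act as hostile or benign according to the observer's
    existing image of the actor; unambiguous signals pass through unchanged."""
    if signal == "ambiguous":
        return "hostile" if prior_hostile > 0.5 else "benign"
    return signal
```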
Even what might seem to be the clearest signals will make no impression if the perceiver’s mind is made up or is focused elsewhere. It did not require the impeachment trial of President Clinton to show us that people with different beliefs and interests differ not only in their estimates of how much new evidence should change priors, but also in their evaluations of whether this evidence points in one direction or its opposite.
Cold cognitive processes driven by the requirement to simplify the contradictory and complex informational environment are not the only ones at work. Affective forces or “motivated biases” also influence how signals are perceived. The most important force of this type is an aversion to facing psychologically painful value trade-offs discussed in other chapters of this book. An implication for signaling is that motivated biases often reinforce cognitive inertia. A decision-maker who has staked his or her ego and/or domestic fortunes on a line of policy will find it difficult to recognize that it is likely to fail. Perceptions will be systematically distorted to shield the person from excessively painful choices. Lebow showed this process at work in defeating the appreciation of indications that the state is heading for an undesired war unless it changes its policy.30 Thus it is to Nehru’s political and psychological commitment to his “forward policy” and not to China’s lack of adequate signals that we must attribute his failure to see that the PRC would attack in the fall of 1962 if India did not make concessions. Similarly, although German secrecy and deception played a large role, much of the reason why British intelligence misread Germany in the 1930s was that the analysts did not want to draw inferences that contradicted British policy.
Motivated biases also help explain faulty signaling. Actors often misunderstand how others will interpret their behavior not only because they fail to grasp others’ theories and images of them, but because they view their own behavior in a biased way. Individuals and states generally think well of themselves, believe that they have benevolent motives, and see their actions as reasonable and legitimate. So it is not surprising that in some cases these views are not only rejected by those with whom they are interacting, but also are at variance with what disinterested observers see. Self-justification, if not self-righteousness, can lead actors to believe that their acts will be seen as benign when there is good reason for others to draw a very different inference. A state may thus believe that it is signaling firm but nonaggressive intentions by behavior that most reasonable perceivers would take to be hostile and threatening. (Robert Jervis, 2017, pages 124 – 126) 

The Rashomon Effect

Living in Different Worlds
The sharing of documents and memories from the Cold War makes it increasingly clear that each side lived in its own world. Each thought that its perceptions were universally valid and failed to realize that others saw a very different world. This is not unusual: it is hard to find any case of international conflict, or even of sustained international interaction, in which each participant was able to fully grasp the other’s perceptions.
Of course in the Cold War, as in previous conflicts, each side believed that the other held different values and sought different goals than it did. It is only a slight exaggeration to say that each side saw itself as good and the other as evil; itself as protecting and furthering the best values of mankind and the other as destroying them; itself as honest and moral and the other as deceptive and hypocritical; itself as seeking security and the other as aggressive. But the sense of these deep differences did not make it easier for either side to understand how the other saw the world; indeed, each acted as though the other’s deviant values did not prevent the other from sharing some of its rival’s basic perceptions. Thus it is not surprising that the Cold War was filled with cases in which one side or the other was taken by surprise. I suspect that only rarely could one side’s statesmen write a position paper as the other side would have written it.
The reasons lie in three interrelated perceptual processes that characterize international politics. First, Rashomon effects are common: different states see the same situation very differently. Because leaders proceed on their own understanding of situations, they will often be in their own perceptual and conceptual worlds. Furthermore, leaders are not likely to understand this and will assume that others see the world the way they do. Second, perceptions by leaders of how their nation is viewed by others are also important.18 Does the state think that the other side believes that the state is aggressive? That it will retreat in the face of demonstrations of strength? That it is acting on the basis of long-range plans? These perceptions are likely to be a major influence on the other’s behavior, and so the state needs to grasp them if it is to understand and affect what the other does.
These can be called second-order perceptions. Third, behavior is influenced by leaders’ perceptions and beliefs about their own nation (self-perceptions).19 A state that sees itself in decline is likely to see others and to behave very differently from one that conceives of itself as continuing to be strong, if not dominant. Self-images thus often become the battleground for policy. Furthermore, appreciating others’ perceptions of the state is especially difficult and painful when this view contradicts the state’s self-image.
It is rarely easy for a leader to see that others view her country and the acts she undertakes on its behalf very differently than she sees them. Part of the difficulty is cognitive: adopting a perspective different from one’s own calls for mental agility and the ability to interpret information from alternative perspectives, often employing alien concepts. This is one reason why scientific breakthroughs are so difficult.20 Where self-images are concerned, additional barriers are in place as well. To realize that others see you in a different, and presumably unflattering, light is to acknowledge the possibility that they are right, or at least that you did something that could have given rise to this impression. Thus, most American leaders and members of the general public have difficulty understanding how sensible and well-motivated people abroad might see the United States as overbearing if not aggressive. It is clear to most Americans that when their country supported foreign dictators there were overriding considerations of national security at stake and the alternative was a violent left-wing regime that would have been worse for the people of the country. To see an alternative interpretation would be to call into question many deeply held beliefs about American life and society, as well as the wisdom of specific policies. It is thus no accident that the citizens of a country who are most critical of its foreign policy usually hold very different beliefs about the government, if not the country as a whole, than do most other citizens, and that Gorbachev and his colleagues changed their views of Soviet society as they carried out radical changes in foreign policy.
The propensity for people to live in different perceptual worlds is not limited to relations between adversaries. Even under the most propitious circumstances actors may misperceive each other and fail to understand the other’s perceptions of them. Thus Richard Neustadt has shown that the two most serious Anglo-American crises of the postwar era (the Suez crisis and Kennedy’s cancellation of the Skybolt missile program) were occasioned by a failure of each side to understand the other’s interests and perspectives and, even more strikingly, the second-order failure to understand the extent to which the other failed to understand them. Indeed, even people in the same society may reveal the same pattern of perceptual differences: after all, Rashomon was not about international politics.
Further evidence of the pervasiveness of these problems, and the propensity for statesmen to underestimate them, is revealed when two countries’ records of the same conversation are laid side by side. In the most extreme cases, which are not infrequent, it is hard to believe that the two accounts are reporting the same conversation. In other cases, each participant records more of what he said than of his counterpart’s remarks, loses crucial nuances, and confidently tells his superiors that his message was received as intended.25
As the Cold War came to an end, perceptions converged and statesmen realized that others could see them differently than they saw themselves. As usual, the strong adjusted much less than did the weak; indeed, to a considerable extent, many of the standard American beliefs about the origins and sources of the Cold War were confirmed by what the Soviet leaders said at the end of the conflict. But we should not be too quick to assert the accuracy of various perceptions. Finding and explaining divergences and convergences may be the more appropriate task, at least at this stage.
Most obviously, we want to know whether the two states saw the military balance in the same way.27 Although more research is needed for a complete picture, it seems clear that in at least some instances both sides tended to engage in “worst case” analysis. The result was that each feared that the other was superior to it in strength, that trends were running against it, and that the other would be likely to exploit this imbalance. In the early 1980s, for example, both the United States and the Soviet Union saw the situation as dangerous and deteriorating.
Statesmen often fail to understand that they live in their own worlds because they assume that the other has the same information that they do. Most obviously, the leaders of a state that does not seek to threaten others generally do not expect the security dilemma to operate because they believe that their benign intentions are clear to others, who therefore will not feel threatened by the state’s security measures. As John Foster Dulles put it, “Khrushchev does not need to be convinced of our good intentions. He knows we are not aggressors and do not threaten the security of the Soviet Union.” Similarly, after the Chinese entered the Korean War, Dean Acheson argued that “no possible shred of evidence could have existed in the minds of the Chinese communist authorities about the open [peaceful] intentions of the forces of the United Nations.”37 The United States may be particularly prone to self-righteousness, but it is not unique in assuming that its knowledge and understanding of its own behavior and intentions are both valid and clear to others.
Decision-makers often assume that the information they receive is correct and think that their way of seeing the world is the only possible one. They are likely to live in a world that is quite different from that of their counterparts. Often the first step toward dealing with serious conflicts is to understand this. But there is a tension if not a paradox here. The realization that information may be incorrect and that Rashomon effects are possible may reduce a statesman’s confidence in his ability to chart a new course. Fortunately, the leaders at the end of the Cold War were both confident and willing to try to live in a common world. (Robert Jervis, 2017, pages 273 – 282)