Cognitive dissonance theory
The theory of cognitive dissonance can explain a number of puzzling misperceptions. The basic outlines of the theory are not startling, but some of its implications are contrary to common sense and other theories of cognitive consistency. The definition of dissonance is relatively straightforward: “two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other.” For example, the information that a Ford is a better car than a Chevrolet is dissonant with the knowledge that I have bought a Chevy. Information that the MLF is not favored by most Europeans is dissonant with the decision-maker’s knowledge that he adopted this policy believing it would reduce strains within the alliance. The basic hypotheses are: “1. The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce dissonance and achieve consonance. 2. When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance.”
The basis of dissonance theory lies in the postulate that people seek strong justification for their behavior. They are not content to believe merely that they behaved well and chose wisely—if this were the case they would only have to maintain the beliefs that produced their decisions. Instead, people want to minimize their internal conflict. This leads them to seek to believe that the reasons for acting or deciding as they did were overwhelming. The person will then rearrange his beliefs so that they provide increased support for his actions. Knowledge of the advantages of rejected courses of action and costs of the chosen one will be a source of uncomfortable dissonance that he will try to reduce. To do this he will alter his earlier opinions, seeing more drawbacks and fewer advantages in the policies he rejected and more good points and fewer bad ones in the policy he adopted. He may, for example, come to believe that the rejected policy would not satisfy certain criteria that he originally thought it would, or that those criteria are less important than he originally believed, or that the chosen policy will not cost as much as he first thought. The person may also search out additional information supporting his decision and find new reasons for acting as he did and will avoid, distort, or derogate new dissonant information. If doubts nevertheless creep in, he will redouble his efforts to justify his decision. As a result, “Following a decision there is an increase in the confidence in the decision or an increase in the discrepancy in attractiveness between the alternatives involved in the choice, or both.” This is known as the “spreading apart of the alternatives” because of the perceived increase in the gap between the net advantages of the chosen policy and those of the rejected ones.
As the last quote implies, the theory has been developed only for post-decision situations. Two further conditions are necessary. First, there must be a “definite commitment resulting from the decision…. It seems that a decision carries commitment with it if the decision unequivocally affects subsequent behavior.” Second, the person must feel that his decision was a free one, i.e. that he could have chosen otherwise. If he had no real choice, then the disadvantages of the policy will not create dissonance because his lack of freedom provides sufficient justification for his action.
Making such a decision will, according to dissonance theory, greatly alter the way a person thinks. Before reaching his decision the individual will seek conflicting information and make “some compromise judgment between the information and his existing cognitions or between bits of information inconsistent with each other and with his existing cognitions.” But once the person has reached a decision, he is committed and “cannot process information and make some compromise judgment.” Quite the contrary, the person must minimize the extent to which the evidence pointed in opposite directions.
An ingenious demonstration, an anecdote, and an international example illustrate the meaning of this phenomenon and show that it occurs outside the laboratory. Immediately after they place their bets, race track bettors become more confident that their horse will win. A few hours after deciding to accept a job offer that he had agonizingly considered for several months, a professor said, “I don’t understand how I could have seriously thought about not taking the job.” And the doubts of British Liberals about whether to go to war in 1914 were almost totally dissolved after the decision was reached.
We should note that contrary to first impressions, dissonance reduction does not always imply wishful thinking. First, reducing dissonance can involve changing evaluations of alternatives, thus altering desires themselves. Second, selecting and interpreting evidence so as to confirm that one’s decision was wise may not conform to one’s desires. For example, the decision to take costly precautions against the possibility that a negatively valued event will occur can generate dissonance reduction that will lead to unwishful thinking. Thus if I decide to build a bomb shelter, I may reduce dissonance by not listening to people who argue that the peace is secure. But I will still hope for peace.
Three conceptual problems are apparent from the start. First, one must be able to draw a line between pre- and post-decision situations. This is easy to do in most laboratory experiments. At a given time, subjects are made to choose a toy, an appliance, or a side of an argument. But in most political cases there is no such clear moment of decision. Even if one can determine when orders were given or papers signed, this may not coincide with the time of commitment. It is even harder to pinpoint the time when an actor decides what the other’s intentions are. This does not mean, however, that no distinctions are possible. Even if we cannot tell exactly when the actor decided, we can contrast the early stages when he was obviously undecided with later ones when he is carrying out a course of action. Similarly, as images become firmly established and policies are based upon them, we can consider the actor to be committed to this image and can analyze his behavior in terms of dissonance theory.
Second, and more important, when we deal with complex questions and subtle reasoning it is not clear what beliefs and bits of information are consonant or dissonant with each other. Is the information that Russia seeks a disarmament agreement dissonant with the belief that Russia is hostile? Is the perception that relations between China and the United States have improved dissonant with the belief that the previous American “hard line” policy was appropriate? Dissonance research has usually avoided these questions by constructing experiments in which it seems obvious that certain elements are dissonant. But the theoretical question of how one determines if an element follows from the obverse of the other has not been answered. Perhaps the best we can say is that a dissonant cognition is one that the person would have counted as a reason against following the course of action that he later chose, and the exact specification of what is dissonant will therefore vary with the person’s beliefs.
This problem is further complicated by the fact that evidence that the policy is incurring high costs or proving to be a failure is dissonant with the person’s knowledge that he chose that policy only if he feels that he could have predicted this outcome on the basis of the information he had, or should have had, when he made his decision. Dissonance reduction is employed to ward off the perception that the decision was unwise or inappropriate, not to inhibit the conclusion that in an unpredictable world the policy had undesired consequences. So the person can recognize, with hindsight, that the decision was wrong without believing that, given the evidence available when he chose, he made a bad decision. This makes it important for us to know how much the decision-maker thinks he should be able to foresee future events. On the one hand, most statesmen see the world as highly contingent and uncertain. But they also have great confidence in their own abilities. Unfortunately, we do not know the relative strengths of these conflicting pressures and have little evidence about the degree to which and conditions under which decision-makers will consider evidence that their policy is failing to be dissonant with their knowledge that they initially favored the policy.
Although we cannot treat this topic at length here, we should note that the counterargument has been made, and supported by a good deal of evidence, that “When a person gets himself into a situation, and therefore feels responsible for its consequences, inconsistent information should, no matter how it comes about, arouse dissonance.” Thus even if the person could not have foreseen an undesired consequence of his decision, its occurrence will be dissonant with the cognition that he made the decision and so was, in some sense, responsible for the consequence. At the heart of this debate are differing conceptions of the driving forces of dissonance. The view we have endorsed sees dissonance arising out of an inconsistency between what a person has done and his image of himself. The alternative view sees the sources of dissonance as inconsistencies between the person’s values and the undesired effects of his behavior. The former position has better theoretical support, but the experimental evidence is mixed.
Third, little is known about the magnitude of dissonance effects. Pressures that appear in carefully controlled laboratory situations may be slight compared to those produced by forces active in decision-making in the outside world. Such influences as institutional interests, political incentives, and feelings of duty may dwarf the impact of dissonance. Indeed even in the laboratory, this effect is far from overwhelming, producing changes ranging from 4 percent to 8 percent in the relative attractiveness of the alternatives.
Cognitive Dissonance and Inertia
If this theory dealt only with the ways people increased their comfort with decisions already reached we would not be concerned with it since it could not explain how people decide. In reducing dissonance, however, people alter their beliefs and evaluations, thereby changing the premises of later deliberations, and so the theory has implications for the person’s future decisions, actions, and perceptions. One of the most important is the added force that dissonance provides to the well-known power of inertia. Many reasons why tentative decisions become final ones and why policies are maintained in the face of discrepant information do not require the use of complex psychological theories to be understood. The domestic or bureaucratic costs of policy change are usually high. The realization of the costs of change makes subordinates hesitant to call attention to the failure of current practices or to the potential of alternatives. There are external costs as well since international policies often involve commitments that cannot be broken without damaging the state’s reputation for living up to its word. Decision-makers may calculate that the value of this reputation outweighs the loss entailed by continuing an unwise policy.
Other psychological theories that stress consistency also predict that discrepant information will not be given its due and imply that policies will be maintained longer than political calculations can explain. Dissonance theory elaborates upon this point in two ways, neither of which, however, is easy to verify in the political realm. First, dissonance theory asserts that, after making a decision, the person not only will downgrade or misinterpret discrepant information but will also avoid it and seek consonant information. This phenomenon, known as “selective exposure,” has been the subject of numerous experiments that have not resolved all the controversies. But even the founder of dissonance theory now admits that this effect “is small and is easily overcome by other factors” including “the potential usefulness of dissonant material,” curiosity, and the person’s confidence that he can cope with discrepant information. Thus “avoidance would be observed only under circumstances where other reasons for exposure … were absent.” Such a weak effect is hard enough to detect in the laboratory; it is harder to find and less relevant in the arena of political decision-making.
A second hypothesis that may explain why policies are continued too long is more fruitful: the aim and effect of dissonance reduction is to produce post-decision spreading apart of the alternatives. There are important political analogues to experiments that show that if a person is asked to rate the attractiveness of two objects (e.g. toys or appliances), is allowed to choose one to keep, and then is asked to rate them again, his ratings will shift to increase the relative desirability of the chosen object. After they have made a choice, decision-makers, too, often feel certain that they decided correctly even though during the pre-decision period that policy did not seem obviously and overwhelmingly the best. The extent and speed of this shift and the fact that contemporary participants and later scholars rarely attribute it entirely to political considerations imply that dissonance reduction is taking place.
Before we discuss the evidence from and implications for political decision-making, two objections should be noted. First, modifications of dissonance theory consider the possibility of the opposite of the spreading of the alternatives—“post-decision regret.” But the slight theory and data on this subject do not show that the phenomenon is of major importance. Second, some scholars have argued that, especially when the decision-maker does not expect to receive further information and the choice is a hard one, the spreading apart of the alternatives precedes and facilitates the decision. But the bulk of the evidence supports the dissonance position.
We should also remember that this whole argument is irrelevant if a decision changes the state’s environment and destroys the availability of alternative policies. Some decisions to go to war do this. For example, once Japan bombed Pearl Harbor, she could make little use of new information that the United States was not willing to suffer a limited defeat but would prefer to fight the war through to complete victory. Even had dissonance not made it more difficult for the Japanese to think about ways of ending the war, it is not certain that viable alternatives existed.
Many decision-makers speak of their doubts vanishing after they embarked on a course of action, or they say that a situation seemed much clearer after they reached a decision. Evidence that would have been carefully examined before the decision is rejected at later stages. These effects are illustrated by President Madison’s behavior in 1811 after he accepted an ambiguous French offer to lift her decrees hindering American trade in return for Madison’s agreement to allow trade with France while maintaining the ban on trade with England. When he took this step, “Madison well understood the equivocal nature” of the French offer and realized that Napoleon might be deceiving him. But even as evidence mounted that this was in fact the case, Madison firmly stood by his earlier decision and did not recognize even privately that the French were not living up to their bargain. The British, who had the same information about the French actions as the Americans had, soon realized that Napoleon was engaging in deception.
Similarly, only slowly and painfully did Woodrow Wilson decide to ask for a declaration of war against Germany. His awareness of the costs of entering the war was acute. But after the decision was made, he became certain that his policy was the only wise and proper one. He had no second thoughts. In the same way, his attempt to reduce the dissonance that had been aroused by the knowledge that he had had to make painful compromises during the negotiations on the peace treaty can explain Wilson’s negative reaction to the last-minute British effort to alter the treaty and bring it more in line with Wilsonian principles: “The time to consider all these questions was when we were writing the treaty, and it makes me a little tired for people to come and say now that they are afraid that the Germans won’t sign, and their fear is based upon things that they insisted upon at the time of the writing of the treaty; that makes me very sick.” There were political reasons for not reopening negotiations, but the vehemence with which Wilson rejected a position that was in harmony with his ideals suggests the presence of unacknowledged psychological motivation. Wilson’s self-defeating refusal to perceive the strength of the Senate opposition to the League may be similarly explained by the fact that the belief that alterations in the Covenant were necessary would have been dissonant with the knowledge that he had given away a great deal to get the Europeans to accept the League. Once the bargains had been completed, dissonance reduction made them seem more desirable.
More recently, the American decision not to intervene in Vietnam in 1954 was followed by a spreading apart of the alternatives. When they were considering the use of force to prevent a communist victory, Dulles and, to a lesser extent, Eisenhower believed that a failure to act would expose the neighboring countries to grave peril. When the lack of Allied and domestic support made intervention prohibitively expensive, American decision-makers altered their perceptions of the consequences of not intervening. Although they still thought that the immediate result would be the fall of some of Indochina, they came to believe that the further spread of communism—fear of which motivated them to consider entering the war—would not necessarily follow. They altered their views to reject the domino theory, at least in its most deterministic formulation, and held that alternative means could save the rest of Southeast Asia. It was argued—and apparently believed—that collective action, which had initially been sought in order to hold Vietnam, could stabilize the region even though part of Vietnam was lost. By judging that military victory was not necessary, the decision-makers could see the chosen policy of not intervening as greatly preferable to the rejected alternative of unilateral action, thereby reducing the dissonance aroused by the choice of a policy previously believed to entail extremely high costs.
This spreading apart of the alternatives increases inertia. By altering their views to make it seem as though their decision was obviously correct, decision-makers increase the amount of discrepant information necessary to reverse the policy. Furthermore, continuing efforts to reduce dissonance will lead decision-makers to fail to seek or appreciate information that would call their policy into question if they believe that they should have foreseen the negative consequences of their decision.
The reduction of dissonance does more than help maintain policies once they are set in motion. By altering instrumental judgments and evaluations of outcomes, it indirectly affects other decisions. Most of us have at times found that we were determining our values and standards by reference to our past behavior. For example, no one can do a complete cost-benefit study of all his expenditures. So we tend to decide how much we can spend for an object by how much we have spent for other things. “If the book I bought last week was worth $10.00 to me, then this record I’m now considering buying is certainly worth $1.98.”
Acting on a value or a principle in one case can increase its potency and influence over future cases. The value is not depleted in being used, but rather replenished and reinforced because of the effects of dissonance reduction aimed at easing the person’s doubts about acting on that value. Thus if a person refuses a tempting bribe he is more likely to display integrity in the future because dissonance reduction will lead him to place greater value on this quality, to place less value on the money he might receive, and/or to come to believe that he is likely to get caught if he commits an illegal act. Similarly, if a professor turns down a tempting offer from another university, his resistance to future offers will increase if he reduces dissonance by evaluating his own institution more favorably or by raising his estimate of the costs of leaving. Statesmen’s post-decision re-evaluations of their goals and beliefs can have more far-reaching consequences. If in reducing the dissonance created by the decision to intervene to prevent revolution in one country, the statesman inflates his estimate of the bad consequences of internal unrest, he will be more likely to try to quell disturbances in other contexts. Or if he reduces dissonance by increasing his faith in the instrument he chose to use, he will be more likely to employ that instrument in later cases. Similarly, downgrading a rejected goal means that it will not be pursued even if altered circumstances permit its attainment. This may be one reason why once an actor has decided that a value will be sacrificed if this becomes necessary—i.e. once he has developed a fall-back position—he usually fights less hard to maintain his stand. 
But because these effects, although important, are hard to verify and because other theories predict that policy-making will be characterized by high inertia, we must turn to other hypotheses, some of which run counter to common sense, in order to further demonstrate the utility of cognitive dissonance theory. (Robert Jervis, 1976, pages 508-517)