In my recent summary and review of Randall Holcombe’s book Following Their Leaders: Political Preferences and Public Policy, one of the ideas I found most interesting was Holcombe’s distinction between anchor and derivative preferences. Holcombe attempts to explain something many people have noticed before – why is there such a strong correlation among political views that seemingly have nothing to do with each other?
For example, consider the question of whether the rich have a moral obligation to pay taxes at a higher rate. If I know someone’s answer to this question, I can confidently predict whether or not they believe stricter gun control laws will effectively reduce violent crime. These are not merely different topics; they are fundamentally different kinds of questions. Whether the rich have a moral obligation to pay higher taxes is a normative question, while the effectiveness of gun control legislation is an empirical question. Why should someone’s normative beliefs about tax policy predict their factual beliefs about the effectiveness of gun control?
Some writers have attempted to create a sort of Grand Unifying Theory tying together all these seemingly unrelated positions into a consistent worldview. Thomas Sowell’s A Conflict of Visions: Ideological Origins of Political Struggles describes a “constrained vision” and “unconstrained vision” (which in later works he also refers to as the “tragic vision” and “utopian vision”) and argues that beliefs about these seemingly different issues cluster together because of these underlying differences of vision. George Lakoff has argued that the clustering of unrelated views is due to unconscious beliefs about family structure, with conservatives taking a “strict father” worldview and liberals taking a “nurturant parent” worldview. Arnold Kling has offered a model with three divisions rather than two, arguing that conservatives view the world through a barbarism versus civilization divide, progressives through an oppressor versus oppressed divide, and libertarians through a lens of liberty versus coercion. Jonathan Haidt, in The Righteous Mind, suggests a six-axis model consisting of care and harm, fairness and cheating, loyalty and betrayal, authority and subversion, sanctity and degradation, and liberty and oppression. In Haidt’s telling, progressives place great value on care and fairness but little value on the others, libertarians put almost all their eggs in the liberty/oppression basket, and conservatives treat all six axes as equally important.
In contrast to these theories, Holcombe’s explanation seems startlingly simple – people anchor on a party, movement, or leader, and then just adopt whatever bundle of beliefs happens to come with that anchor. But simple does not mean simplistic, and Holcombe’s theory has a notable advantage over these other explanations. According to these other theories, major changes in a party’s platform should be followed by a significant shift in the people who support it. However, as Holcombe notes, in practice party leaders can drastically alter the party platform, even swapping positions with the opposing party, while the party’s supporters and opponents remain largely unchanged. This is easy for Holcombe’s account to explain, but much harder for these other theories.
However, there is a key caveat. The supporters or opponents of a party can remain largely unchanged, but not completely so. When Trump came along on a platform that was in many ways the exact opposite of everything the Republican Party had been advocating for decades, most Republicans simply changed their views to match Trump’s, but not all. Some left the party and denounced the direction it was moving in, George Will being a high-profile example. What should we make of this?
I think the explanation is found in an idea put forth by Tim Urban in his recent (and excellent) book What’s Our Problem?: A Self-Help Book for Societies. Urban argues that the usual depiction of views as a spectrum from left wing to moderate to right wing is unhelpful, in part because it seems to imply that people in the middle are intrinsically more reasonable. This isn’t true, as Urban correctly notes. Lots of so-called “moderates” are dogmatic and closed-minded, and many people who are far left or far right are intelligent, reasonable, and open-minded. To account for this, Urban proposes a new model that doesn’t just go left to right, but also up and down. He distinguishes thinkers as being on higher or lower rungs of a ladder, corresponding to the quality of their thought.
The highest rung is for what he calls “scientists.” This rung represents the Platonic Ideal of how thinkers should operate. Scientists are open-minded and willing to consider all the evidence; they freely admit when their interlocutor makes a good point, follow the evidence wherever it may lead, aren’t committed to a pre-existing view, and so forth. Of course, nobody is perfect in this regard, but some people approximate it more than others.
The next rung down is for what he calls “sports fans.” Sports fans have a preferred outcome and are rooting for a side, but they are also fundamentally driven by respect for the game. If a referee makes an ambiguous call, a sports fan will instinctively interpret it in whatever way is more favorable to their team. But if the slow-motion replay makes it clear they were mistaken, they will freely admit the call should go to the other team. They want their team to win, but only if they win fair and square.
The next rung down is for “attorneys.” These are people who are committed to arguing for a specific side, just like lawyers in a court of law. If the prosecution presents a particularly damning bit of evidence, no defense attorney will ever say “wow, that’s a great point, my client probably is guilty then!” They will always seek out some grounds to argue against any evidence contradicting their established position. Still, they are at least attempting to persuade and make arguments, tendentious as those arguments may be.
The lowest rung is for “zealots.” Zealots don’t bother with arguments and aren’t interested in the evidence. They operate on pure tribalism and are convinced members of the other tribe are necessarily stupid, evil, or otherwise corrupt. In this model, Urban says, we can see that “moderate” doesn’t imply “reasonable.” You can be a low-rung moderate, or a high-rung extremist.
I think we can use this ladder to connect Holcombe’s model with the others. Models like the conflict of visions or the three languages of politics better describe high-rung thinkers, while lower-rung thinkers are probably better described by the anchor and derivative preference model. Still, the implications for democracy are not good. As Diana Mutz has documented in her book Hearing the Other Side: Deliberative versus Participatory Democracy, the more politically engaged a voter is, the more likely they are to be a low-rung thinker, and the more high-rung a thinker someone is, the less likely they are to be politically engaged or to vote. It’s easy to feel motivated to action when one is a zealot convinced their side is obviously right about everything and the opposition is driven entirely by vile intentions or sheer stupidity. It’s difficult to conjure that same motivation when you think issues are complicated, evidence is frequently ambiguous, and reasonable people can disagree.