Posted on 2008-11-26
The latest Science has a psych article saying we think of distant stuff more abstractly, and vice versa. “The brain is hierarchically organized with higher points in the cortical hierarchy representing increasingly more abstract aspects of stimuli”; activating a region makes nearby activations more likely. This has stunning implications for our biases about the future.
All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits.
Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.
Since these things mostly just cannot go together in reality, this must bias our thinking both about now and about distant futures. When “in the moment,” we focus on ourselves and in-our-face details, feel “one with” what we see and close to quirky folks nearby, see much as uncertain, and safely act to achieve momentary desires given what seems the most likely current situation. Kinda like smoking weed.
Regarding distant futures, however, we’ll be too confident, focus too much on unlikely global events, rely too much on trends, theories, and loose abstractions, while neglecting details and variation. We’ll assume the main events take place far away (e.g., space), and uniformly across large regions. We’ll focus on untrustworthy consistently-behaving globally-organized social-others. And we’ll neglect feasibility, taking chances to achieve core grand symbolic values, rather than ordinary muddled values. Sound familiar?
More bluntly, we seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them. We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly-varied uncoordinated and hard-to-predict local cultures and life-styles.
Of course being biased to see things a certain way doesn’t mean they aren’t that way. But it should sure give us pause. Selected quotes for those who want to dig deeper:
In sum, different dimensions of psychological distance – spatial, temporal, social, and hypotheticality – correspond to different ways in which objects or events can be removed from the self, and farther removed objects are construed at a higher (more abstract) level. Three hypotheses follow from this analysis. (i) As the various dimensions map onto a more fundamental sense of psychological distance, they should be interrelated. (ii) All of the distances should similarly affect and be affected by the level of construal. People would think more abstractly about distant than about near objects, and more abstract construals would lead them to think of more distant objects. (iii) The various distances would have similar effects on prediction, evaluation, and action. … [On] a task that required abstraction of coherent images from fragmented or noisy visual input … performance improved … when they anticipated working on the actual task in the more distant future … when participants thought the actual task was less likely to take place and when social distance was enhanced by priming of high social status. … Participants who thought of a more distant event created fewer, broader groups of objects. … Participants tended to describe more distant future activities (e.g., studying) in high-level terms (e.g., “doing well in school”) rather than in low-level terms (e.g., “reading a textbook”). … Compared with in-groups, out-groups are described in more abstract terms and believed to possess more global and stable traits … Participants drew stronger inferences about others’ personality from behaviors that took place in spatially distal, as compared with spatially proximal locations. 
… Behavior that is expected to occur in the more distant future is more likely to be explained in dispositional rather than in situational terms … Thinking about an activity in high level, “why,” terms rather than low level, “how,” terms led participants to think of the activity as taking place in more distant points in time. … Students were more confident that an experiment would yield theory-confirming results when they expected the experiment to take place in a more distant point in time. … Spatial distance enhanced the tendency to predict on the basis of the global trend rather than on the basis of local deviation. … As temporal distance from an activity (e.g., attending a guest lecture) increased, the attractiveness of the activity depended more on its desirability (e.g., how interesting the lecture was) and less on its feasibility (e.g., how convenient the timing of the lecture was). … People take greater risks (i.e., favoring bets with a low probability of winning a high amount over those that offer a high probability to win a small amount) in decisions about temporally more distant bets.
Posted on 2014-04-28
A new JPSP paper confirms that we are idealistic in far mode, and selfish in near mode. If you ask people for short abstract descriptions of their goals, they’ll say they have ideal goals. But if you ask them to describe in detail what it is like to be them pursuing their goals, their selfishness shines clearly through. Details: Completing an inventory asks the respondent to take an observer’s perspective upon the self, effectively asking, “What do you look like to others?” Imagining watching a video of oneself driving a car, playing basketball, or speaking to a friend is an experience as the self-as-actor. Rating the importance of various goals also recruits the self-as-actor. Motivated to maintain a moral reputation, the self-as-actor is infused with prosocial, culturally vetted scripts.
Another way of accessing motivation is by asking people questions about their lives. Open-ended verbal responses (e.g., narratives or implicit measures) require the respondent to produce ideas, recall details, reflect upon the significance of concrete events, imagine a future, and narrate a coherent story. In effect, prompts to narrate ask respondents, “What is it like to be you?” Imagining actually driving a car, playing basketball, or speaking to a friend is an experience as the self-as-agent (McAdams, 2013). Asking people to tell about their lives also recruits the self-as-agent. Motivated by survival, the self-as-agent is selfish in nature. …
Taken together, this leads to the prediction that frames the current research: Inventory ratings, which recruit the self-as-actor, will yield moral impressions, whereas narrated descriptions, which recruit the self-as-agent, will yield the impression of selfishness. …
The motivation to behave selfishly while appearing moral gave rise to two, divergently motivated selves. The actor—the watched self— tends to be moral; the agent—the self as executor—tends to be selfish. Each self serves its own adaptive function: The actor helps people maintain inclusion in groups, whereas the agent attends to basic survival needs. Three studies support the thesis that the actor is moral and the agent is selfish. In Study 1, actors claimed their goals were equally about helping the self and others (viz., moral); agents claimed their goals were primarily about helping the self (viz., selfish). This disparity was evident in both individualist and collectivist cultures, albeit more so among individualists. Study 2 compared actors and agents’ motives to those of people role-playing highly prosocial or selfish exemplars. In content and in the impression they made upon an outside observer, actors’ motives were similar to those of the prosocial role-players, whereas agents’ motives were similar to those of the selfish role-players. In Study 3, participants claimed that their agent’s motives were the more realistic and their actor’s motives the more idealistic of the two. When asked to take on an idealistic mindset, agents became more moral; a realistic mindset made the actor more selfish. (more)
Posted on 2012-10-02
Quick, what is the best gift you ever got from a woman? From your parents? From a left-handed person? From a teacher? These aren’t easy questions to answer. But they seem easier than these questions: What is the total value of all the gifts you ever got from women? From your parents? From left-handed folks? From teachers?
For the first set of questions you can try to think of examples of particular people in those categories, and then think of particular gifts you got from those particular people. That can help you guess at the best gift from those categories. But to estimate the total value of gifts from people in categories, you’ll have to also estimate how many gifts you ever got from folks in each category.
Note that it also seems easy to estimate the average value of gifts from each category. To do this, you need only remember a few gifts that fit each category, and then average their values.
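The asymmetry above can be made concrete with a toy numerical model. This is only an illustrative sketch with invented numbers, not data from any study: averaging needs just a small recalled sample, while a total also requires a separate guess at the count.

```python
# Toy model of why averages are easier to estimate than totals.
# All numbers here are invented for illustration.
import random

random.seed(0)
true_values = [random.uniform(5, 100) for _ in range(40)]  # all gifts ever received

# In far mode we recall only a few example gifts from a category...
recalled = random.sample(true_values, 5)

# ...which is already enough to estimate the average value directly:
avg_estimate = sum(recalled) / len(recalled)

# But estimating the total requires a second, separate estimate:
# how many gifts there were in the first place.
guessed_count = 40  # a guess we must somehow produce on top of the sample
total_estimate = avg_estimate * guessed_count
```

The point is structural: the total estimate inherits all the error of the average estimate, plus whatever error is in the count guess.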
As another example, imagine you are looking at a building entrance laid out in multi-colored tiles. Some tiles are blue, some red, some green, etc. You are looking at it from a distance, at an angle, in variable lighting. In this situation it will be much easier to estimate whether there is more blue than red area in the tiles than to estimate how many square inches of blue tile area are in that entrance. This latter estimate requires you to additionally estimate distances to reference points, to estimate the total surface area.
These examples suggest that when we think in far mode, without a structured systematic representation of our topic, it is usually easier to average than to add values. So averaging is what we’ll tend to do. All of which I mention to introduce a fascinating paper that I just noticed, even though it got a lot of publicity last December:
This analysis introduces the Presenter’s Paradox. Robust findings in impression formation demonstrate that perceivers’ judgments show a weighted averaging pattern, which results in less favorable evaluations when mildly favorable information is added to highly favorable information. Across seven studies, we show that presenters do not anticipate this averaging pattern on the part of evaluators and instead design presentations that include all of the favorable information available. This additive strategy (“more is better”) hurts presenters in their perceivers’ eyes because mildly favorable information dilutes the impact of highly favorable information. For example, presenters choose to spend more money to make a product bundle look more costly, even though doing so actually cheapened its value from the evaluators’ perspective. (more)
The authors attribute this to a near-far effect:
Presenters face many pieces of potentially relevant information and need to determine, in a bottom-up fashion, which ones to include in a presentation. This presumably draws attention to each individual piece of information as a discrete entity and a focus on piecemeal processing. If a given piece of information exceeds a neutrality threshold, the presenter will conclude that it is compatible with the message he or she seeks to convey and will include it. This results in presentations that would fare better under an adding rather than averaging rule. In contrast, evaluators’ primary task is to make a summary judgment of the overall presentation, which fosters a focus on holistic processing and the big picture and results in an averaging pattern as observed in many impression formation studies.
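The adding-vs-averaging gap described above can be shown with a minimal numerical sketch. The ratings here are invented for illustration, not taken from the paper: an extra mildly favorable item raises the presenter’s additive score while lowering the evaluator’s averaged impression.

```python
# Sketch of the Presenter's Paradox with hypothetical ratings.
# Items are scored on a scale where 0 is neutral; higher is more favorable.

def presenter_value(items):
    """Presenters add: any above-neutral item seems worth including."""
    return sum(items)

def evaluator_value(items):
    """Evaluators average: they form one holistic impression."""
    return sum(items) / len(items)

strong_only = [9.0, 8.0]        # two highly favorable features
with_extra = [9.0, 8.0, 3.0]    # plus one mildly favorable feature

# The presenter's additive rule says the extra item helps...
assert presenter_value(with_extra) > presenter_value(strong_only)   # 20 > 17
# ...but the evaluator's averaging rule says it dilutes the impression.
assert evaluator_value(with_extra) < evaluator_value(strong_only)   # ~6.67 < 8.5
```

The same arithmetic explains the product-bundle result in the abstract: adding a cheap item to a bundle raises its total cost while lowering its average perceived quality.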
Additional experiments confirm this near-far interpretation. Those who prepare presentations and proposals tend to focus on them in detail, and so add part values in near mode style, while those who consume such presentations or proposals tend to pay much less attention, and so average their values in far mode style.
This result seems to me quite pregnant with interesting implications, none of which were mentioned in the dozen blog posts on the subject that have appeared since last December. So I guess it’s up to me.
First, this result predicts the usual academic advice to delete publications from low ranked journals from your vita. Yes those extra publications took extra work, and show more total intellectual contribution, but distracted readers evaluate you by averaging your publications, not adding them.
Second, this also predicts that academia will tend in general to neglect conclusions suggested by lots of weak clues, relative to conclusions based on a single strong theory or empirical comparison. People with a practical understanding of particular areas will correctly complain that academics tend too much to latch on to a few easy to explain and justify arguments, at the cost of lots of detail that practitioners appreciate.
Third, this predicts that in morality and politics, which are especially far sorts of topics, arguments tend to be won by those who push simple strong principles, even though people privately tend to choose actions that deviate from such principles. For example, laws say no one can get medical advice from non-doctors, on the grounds that docs know best, but given a private choice most of us would often let other considerations convince us to listen to non-docs. While actions tend to be chosen in a near mode where lots of other weaker considerations get added, people know their best chance for winning an argument with a distracted audience is to focus on their one strongest point.
Fourth, this predicts Tetlock’s hedgehog vs. foxes result. Foreign policy is an especially far view sort of subject, and experts who focus on one strongest consideration get the most respect and attention, but experts who rely on many considerations, which are on average weaker, are more accurate.
Futurism is probably the most far view sort of topic, so I’d guess that all this holds there the most strongly. That is, while the futurists who get the most attention from distracted audiences are those who harp endlessly on one clear plausible idea, the most accurate futurists are probably those who know and use hundreds of clues, many of them weak. Alas this is a problem for those of us who want to consider some aspect of the future in detail, since we quickly run out of strong principles, and then have to rely more on many weak clues.
Added Nov 25, 2012: This post gives data showing people donate money based more on the average than the total sympathy of the recipients. So you are better off asking for donations to help a particular especially sympathetic recipient, than to help many such folks.
Posted on 2022-10-27
Moons and Junes and Ferris wheels
The dizzy dancing way that you feel
As every fairy tale comes real
I’ve looked at love that way
But now it’s just another show
And you leave ’em laughing when you go
And if you care, don’t let them know
Don’t give yourself away
I’ve looked at love from both sides now
From give and take and still somehow
It’s love’s illusions that I recall
I really don’t know love
Really don’t know love at all
Both Sides Now, Joni Mitchell 1966.
If you look at two things up close, it is usually pretty easy to tell which one is closest. And also to tell their relative sizes, e.g., which one might fit inside the other. But if you look far in the distance, such as toward the sky or the horizon, it gets much harder to tell relative sizes or distances. While you might notice that one thing occludes another, when considering unknown things in different directions it is harder to tell relative sizes or distances.
I see similar effects also for things that are more “distant” in other ways, such as in time, social distance, or hypothetically; it also seems harder to judge relative distance when things are further away in these ways. Furthermore, it seems harder to tell of two abstract descriptions which is more abstract, but easier to tell which of two detailed things has more detail. Thus in the sense of near-far (or construal-level) theory, it seems that we generally find it harder to compare relative distances when things are further away.
According to near-far theory, we also frame our more stable, general, and fundamental goals as more far and abstract, compared to the more near local considerations that constrain our plans. Thus this theory seems to predict that we will have more trouble comparing the relative value of our more abstract values. That is, when comparing two general persistent values, we will find it hard to say which one we value more. Thus near-far theory predicts a big puzzling human feature: we know surprisingly little about what we want. For example, we find it very hard to imagine concrete, coherent, and attractive utopias.
When we see an object from up close, and then we later see it from afar, we often remember its details from when we saw it up close. So similarly, we might learn to compare our general values by remembering examples of concrete decisions where such values were in conflict. And we do often have concrete situations where we are aware that our general values apply to those concrete cases. Such as when we are very hungry, horny, injured, or socially embarrassed. Why don’t we learn our values from those?
Here I will invoke my theory of the sacred: for some key values and things, we set our minds to try to always see them in a rather far mode, no matter how close we are to them. This enables different people in a community to bond together by seeing those sacred things in the same way, even when some of them are much closer to them than others. And this also enables a single person to better maintain a unified identity and commitments over time, even when that person sees concrete examples from different distances at different times in their life. (I thank Arnold Brooks for pointing this out in an upcoming MAM podcast.)
For example, most of us have felt strong feelings of lust, limerence, and attachment to other people at many times during our lives. So we should have plenty of data on which to base rough estimates of what exactly is “love”, and how much we value it compared to other things. But our treating love as sacred makes it harder to use that data to construct such a detailed and unified account. Even when we think about concrete examples up close, it seems hard to use those to update our general views on “love”. We still “really don’t know love at all.”
Because we really can’t see love up close and in detail. Because we treat love as sacred. And sacred things we see from afar, so we can see them together.
Posted on 2022-08-27
I’ve recently been trying to make sense of our concept of the “sacred”, by puzzling over its many correlates. And I think I’ve found a way to make more sense of it in terms of near-far (or “construal level”) theory, a framework that I’ve discussed here many times before.
When we look at a scene full of objects, a few of those objects are big and close up, while a lot more are small and far away. And the core idea of near-far is that it makes sense to put more mental energy into analyzing each object up close, objects that matter to us more, by paying more attention to their detail, detail often not available about stuff far away. And our brains do seem to be organized around this analysis principle.
That is, we do tend to think less, and think more abstractly, about things far from us in time, distance, social connection, or hypothetically. Furthermore, the more abstractly we think about something, the more distant we tend to assume are its many aspects. In fact, the more distant something is in any way, the more distant we tend to assume it is in other ways.
This all applies not just to dates, colors, sounds, shapes, sizes, and categories, but also to the goals and priorities we use to evaluate our plans and actions. We pay more attention to detailed complexities and feasibility constraints regarding actions that are closer to us, but for far away plans we are content to think about them more simply and abstractly, in terms of relatively general values and principles that depend less on context. And when we think about plans more abstractly, we tend to assume that those actions are further away and matter less to us.
Now consider some other ways in which it might make sense to simplify our evaluation of plans and actions where we care less. We might, for example, just follow our intuitions, instead of consciously analyzing our choices. Or we might just accept expert advice about what to do, and care little about experts’ incentives. If there are several relevant abstract considerations, we might assume they do not conflict, or just pick one of them, instead of trying to weigh multiple considerations against each other. We might simplify an abstract consideration from many parameters down to one factor, down to a few discrete options, or even all the way down to a simple binary split.
It turns out that all of these analysis styles are characteristic of the sacred! We are not supposed to calculate the sacred, but just follow our feelings. We are to trust priests of the sacred more. Sacred things are presumed to not conflict with each other, and we are not to trade them off against other things. Sacred things are idealized in our minds, by simplifying them and neglecting their defects. And we often have sharp binary categories for sacred things; things are either sacred or not, and sacred things are not to be mixed with the non-sacred.
All of which leads me to suggest a theory of the sacred: when a group is united by valuing something highly, they value it in a style that is very abstract, having the features usually appropriate for quickly evaluating things relatively unimportant and far away. Even though this group in fact tries to value this sacred thing highly. Of course, depending on what they try to value, such attempts may have only limited success.
For example, my society (US) tries to value medicine sacredly. So ordinary people are reluctant to consciously analyze or question medical advice; they are instead to just trust its priests, namely doctors, without looking at doctor incentives or track records. Instead of thinking in terms of multiple dimensions of health, we boil it all down to a single health dimension, or even a binary of dead or alive.
Instead of seeing a continuum of cost-effectiveness of medical treatments, along which the rich would naturally go further, we want a binary of good vs bad treatments, where everyone should get the good ones no matter what their cost, and regardless of any other factors besides a diagnosis. We are not to make trades of non-sacred things for medicine, and we can’t quite believe it is ever necessary to trade medicine against other sacred things. Furthermore, we want there to be a sharp distinction between what is medicine and what is not medicine, and so we struggle to classify things like mental therapy or fresh food.
Okay, but if we see sacred things as especially important to us, why ever would we want to analyze them using styles that we usually apply to things that are far away and the least important to us? Well one theory might be that our brains find it hard to code each value in multiple ways, and so typically code our most important values as more abstracted ones, as we tend to apply them most often from a distance.
Maybe, but let me suggest another theory. When a group unites itself by sharing a key “sacred” value, then its members are especially eager to show each other that they value sacred things in the same way. However, when group members hear about and observe how an associate makes key sacred choices, they will naturally evaluate those choices from a distance. So each group member also wants to look at their own choices from afar, in order to see them in the same way that others will see them.
In this view, it is the fact that groups tend to be united by sacred values that is key to explaining why they treat such values in the style usually appropriate for relatively unimportant things seen from far away, even though they actually want to value those things highly. Even though such a from-a-distance treatment will probably lead to a great many errors and misjudgments when actually trying to promote that thing.
You see, it may be more important to groups to pursue a sacred value together than to pursue it effectively. Such as the way the US spends 18% of GDP on medicine, as a costly signal of how sacred medicine is to us, even though the marginal health benefit of our medical spending seems to be near zero. And we show little interest in better institutions that could make such spending far more cost effective.
Because at least this way we all see each other’s ineffective medical choices in the same way. We agree on what to do. And after all, that’s the important thing about medicine, not whether we live or die.
Added 10Sep: Other dual process theories of brains give similar predictions.
Posted on 2010-10-19
The future is not the realization of our hopes and dreams, a warning to mend our ways, an adventure to inspire us, nor a romance to touch our hearts. The future is just another place in spacetime. (more)
Our minds have two very different modes (and a range between). We model important things nearby in more detail than less important things far away. The more nearby aspects we notice in a thing, the more other nearby aspects and relevant detail we assume it has. On the other hand, the more far aspects we see in something, the more other far aspects we assume it has, and the more we reason about it via broad categories and relations. (More on near vs. far thinking here and here.)

Since the future is far in time, thinking about it tends to invoke a far mode of thought, which introduces other far mode defaults into our image of the future. And thinking about the far future makes us think especially far. Of course many other considerations influence any particular imagined future, but it can help to understand the assumptions your mind is primed to make about the far future, regardless of whether those assumptions are true.
For example, since we expect things further away in time to also be further away in space, we expect future folk to live further away, such as in space, and to habitually travel longer distances. Since the distant past is also further away in time, we also expect past folk to live further away and travel longer distances, but the many concrete details we know about the past reduce this effect.
Since blue light scatters more easily than red, far away things in our field of view tend to look more blue. So we expect future stuff to look blue. And since blue stuff looks cold, we expect future stuff to look cold. Finally, since we expect far away things to have less detail, we tend to imagine them with fewer parts and flourishes, and less detailed textures and patterns. The future is not paisley.
And in fact, if you Google “futuristic style” images, you’ll tend to see images like those in this post – simple, smooth, cool, blue, and sky/spacy. In a word, “shiny.”
We also tend to assume there are fewer relevant categories of far things. So we’ll tend to assume future folk have fewer kinds of food, furniture, cars, houses, roads, buildings, and land uses, whose styles of use vary less from place to place. Instead of seeing a million variations bleeding into each other in dizzying complexity, we tend to assume there are fewer more discrete types, with less variation within each type and larger differences between types. For example, futuristic movies often have everyone wearing very similar clothes.
Another example is that we tend to assume future creatures are divided into relatively distinct groups whose internal divisions are less important. And since creatures that are more different seem further in social distance, we expect future groups to differ more from each other than current groups. So we expect Eloi vs. Morlock, Romulan vs. Klingon, ape vs. human, human vs. robot, etc.
Far tends to be happy, and high in status, power, and confidence. Conformity and obeying authority are near, but supporting underdogs is far. Sex, money, and temptation tend to be near, while love, satisfaction, trust, and self-control are far. So we often assume future folks have forgotten how to have sex, as in Sleeper or Barbarella, or that money motives are less common, as in Star Trek.
In far mode we tend to focus more on our simple abstract ideals and values, relative to messy desires and practical constraints. We also tend to neglect our messy internal contradictions and conflicts, and therefore assume our values and actions are coherent and consistent. So in far mode we tend more to explain good acts as virtue, and bad acts as vice or evil. We assume future folk are less driven by base desires, more strongly committed to their ideals, less tolerant of domination, more morally enlightened, and more morally judgmental about others’ failings.
We therefore tend to assume that future folk feel relatively moral, confident, and strong, and that future groups have less trouble coordinating to achieve common ends (making us especially blind to coordination being hard). So we can more easily imagine stark uncompromising conflicts between distinct future groups. Of course robots will war with humans, we think. And since we tend to feel more moral and uncompromising about the future, we more accept future uncompromising self-righteous conflict, relative to such conflict today.
Math and logic analysis is near, while creative analogy is far. So we tend to reason about the future via analogy rather than precise analysis, feeling more comfortable using metaphors and broad concepts like “exploitation”, “progress”, “boredom”, or “intelligence.” Math models of the far future are quite rare. Far mode minds tend to be more confident in the trends or theories they use, making them especially confident in trends and theories used to forecast the far future. Rather than seeing our theories about the future as weak all-else-equal tendencies, we are tempted to see them as absolute laws with rare exceptions.
We also tend to assume that future folk themselves rely more on analogy than analysis. They may have great tech, but we tend to see it arising more from rare sparks of creative genius than from vast armies devoting decades of attention to mind-numbing detail.
Likely familiar events are near, while unlikely novel events are far. So we think it more likely that “there be dragons” in distant lands. Scenarios that would seem too unlikely to consider today can seem reasonable possibilities for a far future. In fact, we may well reject future scenarios that don’t seem strange enough.
While we tend to imagine that trends during the future will be followed with few deviations, we are pretty willing to believe theories which predict that today’s trends, even long term trends, won’t continue into the future. For example, even though natural resource prices rarely rise, we are willing to believe theories that resources will soon “run out”, so that prices greatly rise.
Since important things seem nearer to us, stronger emotions feel nearer, and so we have weaker motives and emotions regarding far things. Instead of being filled with elation or terror regarding good or bad things that might happen in the far future, we tend to treat such events more philosophically, and to assume future folk will do so as well. In a scene from Monty Python’s Meaning of Life, a woman is willing to die to donate a liver after she’s seen how vast is the universe. Similarly, in far mode even human extinction may seem no big deal; “it was our time to go.”
Tasting and touching tend to feel near, while seeing and hearing tend to feel far. So we mainly imagine what the future looks and sounds like, relative to its taste or touch. Words and polite speech tend to be far, while voices, grunts, and slang tend to be near. So we more often imagine future folks’ polite words than their earthy sounds. We imagine future folk being relatively cerebral – we see them as relatively patient in listening to long intellectual speeches, and less often imagine their grunts or wild passionate music.
Of course it remains possible that many of the above far-mode-based expectations about the future will be realized. Maybe stuff in the future really will be simple, smooth, blue, and cold. Maybe future creatures will be spread across space, habitually travel far, and be divided into a few distinct types that vary greatly between types, very little internally, and coordinate well to achieve group ends. Maybe future folk really will be more driven by abstract ideals, with moral judgements driving uncompromising self-righteous conflict between groups. Maybe their innovations really will come from a few geniuses. Maybe the future will be very strange, yet is predictable from powerful theories available now. Maybe future trends really will have few deviations, and future folk will accept their demises philosophically. But please at least consider the possibility that you expect such things not because you have strong supporting evidence, but because your mind was just built to expect such things.
Added: Coincidentally, I was just quoted in this NPR article saying the future isn’t what it used to be.
Posted on 2010-09-29
A while back I was discussing long term future values, i.e., what we want our descendants to be or achieve, and I realized that pretty much any simple description of such values seems crazy. With a little effort it is easy to find counter-examples, or at least discomfort-examples, to most any description much beyond “I hope future folks get what they want.”
I’ve also noticed that among smart folks, the most successful keep their smarts on a short leash. They use their smarts to make the sale, win the case, pass the test, get published, etc., but they don’t use much smarts to consider whether they really want to make the sale, win the case, etc. Oh sure they might express some angst at a Saturday dinner, but come Monday they are back on the job.
In contrast, on average smart folks gain far less success when they seriously apply their smarts to big pictures, reconsidering what they want, what we really know, how the world is organized, what they can do to make the world a better place, and so on. They go off in a thousand directions, and while some might break new ground, on average such smart folk gain much less personal success, and may well do less to help the world.
I count myself in this smart sincere syndrome. I’m often distracted by what I see as important neglected topics, which offer fewer academic or other rewards. These topics have included future robot econ, foundations of quantum mechanics, prediction markets, and much more. Lately I find myself obsessed by a homo hypocritus account of human nature. I’m not at all clear on the best route to pursue this, but no route seems especially promising for success in ordinary terms, or to rely heavily on skills I’ve previously invested in developing. Yet on I go.

Applying these observations to myself, I think I have to conclude that I just don’t know much about what I really want, or what I should do to get it, in general far terms, and can’t trust my far mind to tell me much. Lacking a good basis for challenging ordinary concepts of success, I should accept them. If I’m feeling insecure, where success matters more, I should follow the example of smart successful folks in positions similar to mine. You know, write academic papers or books, or do business consulting.
In contrast, if I’m feeling rich and comfortable, and so less in need of success, well then I should enjoy myself by doing whatever seems appealing at the time, as long as that doesn’t threaten my basic stable position in life. I’m capable of doing a lot more abstract thinking about what is good for me or the world, but at the moment I just don’t trust that thinking much. What I most enjoy may well be to think on big far topics, but I shouldn’t presume I have a coherent integrated account showing their true global importance.
Posted on 2007-07-25
Instead of watching fireworks on July 4, I did a 1500-piece jigsaw puzzle of fireworks, my first jigsaw in at least ten years. Several times I had the strong impression that I had carefully eliminated every possible place a piece could go, or every possible piece that could go in a place. I was very tempted to conclude that many pieces were missing, or that the box had extra pieces from another puzzle. This wasn’t impossible – the puzzle was from an open box, and a relative had done it before. And the alternative seemed humiliating.
But I allowed a very different part of my mind, using different considerations, to overrule this judgment; so many extra or missing pieces seemed unlikely. And in the end there was only one missing and no extra pieces. I recall a similar experience when I was learning to program. I would carefully check my program and find no errors, and then when my program wouldn’t run I was tempted to suspect compiler or hardware errors. Of course the problem was almost always my fault.
Most, perhaps all, ways to overcome bias seem like this. In the language of Kahneman and Lovallo’s classic ’93 paper, we allow an outside view to overrule an inside view. From their paper:
Two distinct modes of forecasting were applied to the same problem in this incident. The inside view of the problem is the one that all participants adopted. An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, and by extrapolating current trends. The outside view is the one that the curriculum expert was encouraged to adopt. It essentially ignores the details of the case at hand, and involves no attempt at detailed forecasting of the future history of the project. Instead, it focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. The case at hand is also compared to other members of the class, in an attempt to assess its position in the distribution of outcomes for the class. … The inside and outside views draw on different sources of information, and apply different rules to its use. … It should be obvious that when both methods are applied with equal intelligence and skill, the outside view is much more likely to yield a realistic estimate. … it is a serious error to assume the outcomes of the most likely scenarios are also the most likely, and that the outcomes for which no plausible scenarios come to mind are impossible. … It is a conservative approach, which will fail to predict extreme and exceptional events, but will do well with common ones. … Our main observation, which is psychological: the inside view is overwhelmingly preferred in intuitive forecasting. The natural way to think about a problem is to bring to bear all one knows about it, with special attention to its unique features. The intellectual detour into the statistics of related cases is seldom chosen spontaneously. Indeed, the relevance of the outside view is sometimes explicitly denied: physicians and lawyers often argue against the application of statistical reasoning to particular cases. 
In these instances, the preference for the inside view almost bears a moral character. The inside view is valued as a serious attempt to come to grips with the complexities of the unique case at hand, and the outside view is rejected for relying on crude analogy from superficially similar instances. … The contrast between the inside and outside views has been confirmed in systematic research. … A typical result is that respondents are only correct on about 80% of cases when they describe themselves as “99% sure.” People are overconfident in evaluating the accuracy of their beliefs one at a time. It is interesting, however, that there is no evidence of overconfidence bias when respondents are asked after the session to estimate the number of questions for which they picked the correct answer.
If overcoming bias comes down to having an outside view overrule an inside view, then our questions become: what are valid outside views, and what will motivate us to apply them?
Posted on 2008-06-22
An inside view focuses on internals of the case at hand, while an outside view compares this case to other similar cases. The less you understand about something the harder it is to apply either an inside or an outside view. So the simplest approach would be to just do the best you could with each view and then combine their results in some simple way.
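One “simple way” to combine the two views can be sketched as a toy model: treat each view’s forecast as a noisy estimate and weight by inverse variance. This precision-weighting scheme, and the numbers below, are my illustrative assumptions, not from the post.

```python
def combine_estimates(inside, outside):
    """Combine two forecasts by inverse-variance (precision) weighting.

    Each argument is a (mean, variance) pair; lower variance means
    higher confidence, so that view gets more weight.  This assumes
    the two views' errors are roughly independent and Gaussian.
    """
    (m1, v1), (m2, v2) = inside, outside
    w1, w2 = 1.0 / v1, 1.0 / v2           # precision weights
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)                 # combined estimate is tighter
    return mean, var

# Hypothetical example: inside view says a project takes 8 months but is
# quite uncertain (variance 9); outside view says similar projects took
# 14 months, with less uncertainty (variance 4).
mean, var = combine_estimates((8.0, 9.0), (14.0, 4.0))
# The combined estimate lands between the two, closer to the more
# confident outside view, and with lower variance than either.
```

The pull toward the outside view here comes purely from its lower assumed variance, which echoes Kahneman and Lovallo’s point that the outside view is often the more reliable of the two.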
Can we do better? Perhaps, if we know something about when inside views tend to do better or worse, compared to outside views. For example, we should probably emphasize views that give more confident estimates, and de-emphasize views from those biased by self-interest. But do we know anything about on what topics to prefer an inside or outside view?
It is not clear to me that we really do know much about this. But whatever framework we use to make this judgment, it seems to me to count as a meta-view, a view about views. Furthermore, while it is easy to imagine useful outside meta-views, which compare this view-choice situation to other related view-choice situations, it is much harder to imagine useful inside meta-views, where you go through some detailed calculation to decide which view to prefer.
This suggests to me that most useful meta-views are outside meta-views. If you are going to reject an outside view in favor of an inside view on the basis of some insight on when inside views work better, you will be relying on an outside meta-view. So it seems you can’t escape embracing some outside view, though you might embrace an outside meta-view instead of a basic outside view.
Posted on 2009-01-14
Back in November I read this Science review by Nira Liberman and Yaacov Trope on their awkwardly-named “Construal level theory”, and wrote a post I estimated “to be the most dense with useful info on identifying our biases I’ve ever written”:
Since then I’ve become even more impressed with it, as it explains most biases I know and care about, including muddled thinking about economics and the future. For example, Ross’s famous “fundamental attribution error” is a trivial application.
The key idea is that when we consider the same thing from near versus far, different features become salient, leading our minds to different conclusions. This is now my best account of disagreement. We disagree because we explain our own conclusions via detailed context (e.g., arguments, analysis, and evidence), and others’ conclusions via coarse stable traits (e.g., demographics, interests, biases). While we know abstractly that we also have stable relevant traits, and they have detailed context, we simply assume we have taken that into account, when we have in fact done no such thing.
For example, imagine I am well-educated and you are not, and I argue for the value of education and you argue against it. I find it easy to dismiss your view as denigrating something you do not have, but I do not think it plausible I am mainly just celebrating something I do have. I can see all these detailed reasons for my belief, and I cannot easily see and appreciate your detailed reasons.
And this is the key error: our minds often assure us that they have taken certain factors into account when they have done no such thing. I tell myself that of course I realize that I might be biased by my interests; I’m not that stupid. So I must have already taken that possible bias into account, and so my conclusion must be valid even after correcting for that bias. But in fact I haven’t corrected for it much at all; I’ve just assumed that I did so.
Posted on 2010-07-29
In Jan ’09 I wrote: This is now my best account of disagreement. We disagree because we explain our own conclusions via detailed context (e.g., arguments, analysis, and evidence), and others’ conclusions via coarse stable traits (e.g., demographics, interests, biases). While we know abstractly that we also have stable relevant traits, and they have detailed context, we simply assume we have taken that into account, when we have in fact done no such thing.
New data suggests a different view:
The results of 4 studies suggest that when individuals mentally construe an attitude object concretely, either because it is psychologically close or because they have been led to adopt a concrete mindset, their evaluations flexibly incorporate the views of an incidental stranger. However, when individuals think about the same issue more abstractly, their evaluations are less susceptible to incidental social influence and instead reflect their previously reported ideological values. …
The results of these four studies appear quite robust: They held for a variety of political and social attitude objects (including general issues and specific policies related to four different and important topics: organ donation, euthanasia, illegal immigration, and universal health care), and they emerged across different types of evaluative responding (overall attitudes, voting intentions, and elaboration positivity) as well as different manipulations (temporal distance and two direct manipulations of construal level). …
Whereas local evaluations serve to guide responding in the here and now by flexibly incorporating incidental contextual details, global evaluations can help to guide action at a distance by consistently reflecting a person’s core values and ideals, which are likely to be shared within important relationships or groups. (more)
Here’s my tentative reading of this. We pay more attention to messy detail in near view, relative to far view. On any given topic, we see our core values and explicit reasons as big important central influences on our opinions, whereas we see the opinions of others more as incidental detail. So we think we should listen to random other people more on small detail topics, and less on big important topics.
Random others are little people, you see, which are fit for little topics. But they are just not big and important enough to influence us on big important topics; only big important things should do that. Like big explicit reasons. This makes us tend to disagree greatly with most others on what we see as big topics, though much less on millions of small detail topics, like “there’s another tree.”
Perhaps it makes sense to keep random others from influencing our core values (which are about us), but on questions of fact (which are about the world out there), most folks seem to make the huge mistake of vastly underestimating the info contained in others’ opinions, relative to the info contained in their own explicit reasons. Yes there may be people and times when others’ opinions really do contain relatively little info, but most folks are far too quick to assume that this applies to them now.
Posted on 2017-11-25
While I’m a contrarian in many ways, I think it fair to call my ex-co-blogger Eliezer Yudkowsky even more contrarian than I. And he has just published a book, Inadequate Equilibria, defending his contrarian stance against what he calls “modesty”, illustrated in these three quotes:
In contrast, Yudkowsky claims that his book readers can realistically hope to become successfully contrarian in these 3 ways:
Few would disagree with his claim #1 as stated, and it is claim #3 that applies most often to readers’ lives. Yet most of the book focuses on claim #2, that “for just myself” one might actually improve on the recommendation of our best official experts.
The main reason to accept #2 is that there exist what we economists call “agency costs” and other “market failures” that result in “inefficient equilibria” (which can also be called “inadequate”). Our best experts don’t try with their full efforts to solve your personal problems, but instead try to win the world’s somewhat arbitrary games. Games that individuals just cannot change. Yudkowsky may not be saying anything especially original here about how broken the world can be, but his discussion is excellent, and I hope it will be widely read.
Yudkowsky gives some dramatic personal examples, but simpler examples can also make the point. For example, one can often use maps or a GPS to improve on official road signs saying which highway exits to use for particular destinations, as sign officials often placate local residents seeking less through-traffic. Similarly, official medical advisors tend to advise medical treatment too often relative to doing nothing, official education experts tend to advise education too often as a career strategy, official investment advisors suggest active investment too often relative to index funds, and official religion experts advise religion too often relative to non-religion. In many cases, one can see plausible system-level problems that could lower the quality of official advice, inducing these experts to try harder to impress and help each other than to help clients.
To explain how inadequate are many of our equilibria, Yudkowsky contrasts them with our most adequate institution: competitive speculative financial markets, where it is kind of crazy to expect your beliefs to be much more accurate than are market prices. He also highlights the crucial importance of competitive meta-institutions, for example lamenting that there is no place on Earth where one can pay to try out arbitrary new social institutions. (Alas he doesn’t endorse my call to fix much of the general problem of disagreement via speculative markets, especially on meta topics. Like many others he seems more interested in bets as methods of personal virtue than as institution solutions.)

However, while understanding that systems are often broken can lead us to accept Yudkowsky’s claim #2 above, that doesn’t obviously support his claim #3, nor undercut the modesty that he disputes. After all, reasonable people could just agree that, by acting directly and avoiding broken institutions, individuals can often beat the best institutionally-embedded experts. For example, individuals can gain by investing more in index funds, and by choosing less medicine, school, and religion than experts advise. So the existence of broken institutions can’t by itself explain why disagreement exists, nor why readers of Yudkowsky’s book should reasonably expect to consistently pick who is right among disagreeing experts.
Thus Yudkowsky needs more to argue against modesty, and for his claim #3. Even if it is crazy to disagree with very adequate financial institutions, and not quite so crazy to disagree with less adequate institutions, that doesn’t imply that it is actually reasonable to disagree with anyone about anything.
His book says less on this topic, but it does say some. First, Yudkowsky accepts my summary of the rationality of disagreement, which says that agents who are mutually aware of being meta-rational (i.e., trying to be accurate and getting how disagreement works) should not be able to foresee their disagreements. Even when they have very different concepts, info, analysis, and reasoning errors. If you and a trusted peer don’t converge on identical beliefs once you have a full understanding of one another’s positions, at least one of you must be making some kind of mistake.
Yudkowsky says he has applied this result, in the sense that he’s learned to avoid disagreeing with two particular associates that he greatly respects. But he isn’t much inclined to apply this toward the other seven billion humans on Earth; his opinion of their meta-rationality seems low. After all, if they were as meta-rational as he and his two great associates, then “the world would look extremely different from how it actually does.” (It would disagree a lot less, for example.)
Furthermore, Yudkowsky thinks that he can infer his own high meta-rationality from his details:
I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments. … [Clues to individual meta-rationality include] using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning.
The possibility that some agents have low meta-rationality is illustrated by these examples:
Those who dream do not know they dream, but when you are awake, you know you are awake. … If a rock wouldn’t be able to use Bayesian inference to learn that it is a rock, still I can use Bayesian inference to learn that I’m not.
Now yes, the meta-rationality of some might be low, that of others might be high, and the high might see real clues allowing them to correctly infer their different condition, clues that the low also have available to them but for some reason neglect to apply, even though the fact of disagreement should call the issue to their attention. And yes, those clues might in principle include knowing about Bayes’ rule, sunk costs, debiasing, experiments, or math. (They might also include many other clues that Yudkowsky lacks, such as relevant experience.)
Alas, Yudkowsky doesn’t offer empirical evidence that these possible clues of meta-rationality are in fact actually clues in practice, that some correctly apply these clues much more reliably than others, nor that the magnitude of these effects is large enough to justify the size of disagreements that Yudkowsky suggests as reasonable. Remember, to justifiably disagree on which experts are right in some dispute, you’ll have to be more meta-rational than are those disputing experts, not just than the general population. So to me, these all remain open questions on disagreement.
In an accompanying essay, Yudkowsky notes that while he might seem to be overconfident, in many lab tests of cognitive bias,
around 10% of undergraduates fail to exhibit this or that bias … So the question is whether I can, with some practice, make myself as non-overconfident as the top 10% of college undergrads. This… does not strike me as a particularly harrowing challenge. It does require effort.
Though perhaps Yudkowsky isn’t claiming as much as he seems. He admits that allowing yourself to disagree because you think you see clues of your own superior meta-rationality goes badly for many, perhaps most, people:
For many people, yes, an attempt to identify contrarian experts ends with them trusting faith healers over traditional medicine. But it’s still in the range of things that amateurs can do with a reasonable effort, if they’ve picked up on unusually good epistemology from one source or another.
Even so, Yudkowsky endorses anti-modesty for his book readers, who he sees as better than average, and also too underconfident on average (even though most people are overconfident). His advice is especially targeted at those who aspire to his claim #1:
If you’re trying to do something unusually well (a common enough goal for ambitious scientists, entrepreneurs, and effective altruists), then this will often mean that you need to seek out the most neglected problems. You’ll have to make use of information that isn’t widely known or accepted, and pass into relatively uncharted waters. And modesty is especially detrimental for that kind of work, because it discourages acting on private information, making less-than-certain bets, and breaking new ground.
This seems to me to be a good reason to take a big anti-modest stance. If you are serious about trying hard to make a big advance somewhere, then you must get into the habit of questioning the usual accounts, and thinking through arguments for yourself in detail. If your chance of making a big advance is much higher if you are in fact more meta-rational than average, then you have a better chance of achieving a big advance if you assume your own high meta-rationality within your advance-attempt-thinking. Perhaps you could do even better if you limited this habit to the topic areas near where you have a chance of making a big advance. But maybe that sort of mental separation is just too hard.
So far this discussion of disagreement and meta-rationality has drawn nothing from the previous discussion of inefficient institutions in a broken world. And without such a connection, this book is really two separate books, tied perhaps by a mood affiliation.
Yudkowsky doesn’t directly make a connection, but I can make some guesses. One possible connection applies if official experts tend to deny that they sit in inadequate equilibria, or that their claims and advice are compromised by such inadequacy. When these experts are high status, others might avoid contradicting their claims. In this situation, those who are more willing to make cynical claims about a broken world, or more willing to disagree with high status people, can be on average more correct, relative to those who insist on taking more idealistic stances toward the world and the high in status.
In particular, such cynical contrarians can be correct about when individuals can do better via acting directly than indirectly via institution-embedded experts, and they can be correct when siding with low against high status experts. This doesn’t seem sufficient to me to justify Yudkowsky’s more general anti-modesty, which for example seems to support often picking high status experts against low status ones. But it can at least go part of the way.
We have a few other clues to Yudkowsky’s position. First, while he explains the impulse toward modesty via status effects, he claims to personally care little about status:
Many people seem to be the equivalent of asexual with respect to the emotion of status regulation—myself among them. If you’re blind to status regulation (or even status itself) then you might still see that people with status get respect, and hunger for that respect.
Second, note that if the reason you can beat on our best experts is that you can act directly, while they must win via social institutions, then this shouldn’t help much when you must also act via social institutions. So it is telling that in two examples, Yudkowsky thinks he can do substantially better than the rest of the world, even when he must act via social institutions.
First, he claims that the MIRI research institute he helped found “can do better than academia” because “We were a small research institute that sustains itself on individual donors. … we had deliberately organized ourselves to steer clear of [bad] incentives.” Second, he finds it “conceivable” that the world’s rate of innovation might increase noticeably if, for another small organization that he helped to found, its “annual budget grew 20x, and then they spent four years iterating experimentally on techniques, and then a group of promising biotechnology grad students went through a year of CFAR training.”
Putting this all together my best guess is that Yudkowsky sees himself, his associates, and his best readers as only moderately smarter and more knowledgeable than others; what really distinguishes them is that they really care much more about the world and truth. So much so that they are willing to make cynical claims, disagree with the high status, and sacrifice their careers. This is the key element of meta-rationality they see as lacking in the others with whom they feel free to disagree. Those others are mainly trying to win the usual status games, while he and his associates are after truth.
Alas this is a familiar story from a great many sides in a great many disputes. Each says they are right because the others are less sincere and more selfish. While most such sides must be wrong in these claims, no doubt some people do care more about the world and truth than others. Furthermore, those special people may see detailed signs telling them this fact, while others lack those signs but fail to sufficiently attend to that fact.
And we again come back to the core hard question in the rationality of disagreement: how can you tell if you are neglecting key signs about your (lack of) meta-rationality? But alas, other than just claiming that such clues exist, Yudkowsky doesn’t offer much analysis to help us advance on this hard problem.
Eliezer Yudkowsky’s new book Inadequate Equilibria is really two disconnected books, one (larger) book that does an excellent job of explaining how individuals acting directly can often improve on the best advice of experts embedded in broken institutions, and another (smaller) book that largely fails to explain why one can realistically hope to consistently pick the correct side among disputing experts. I highly recommend the first book, even if one has to sometimes skim through the second book to get to it.
Of course, if you are trying hard to make a big advance somewhere, then it can make sense to just assume you are better, at least within the scope of the topic areas where you might make your big advance. But for other topic areas, and for everyone else, you should still wonder how sure you can reasonably be that you have in fact not neglected clues showing that you are less meta-rational than those with whom you feel free to disagree. This remains the big open question in the rationality of disagreement. It is a question to which I hope to return someday.
Posted on 2022-02-15
The usual party chat rule says to not spend too long on any one topic, but instead to flit among topics unpredictably. Many thinkers also seem to follow a rule where if they think about a topic and then write up an opinion, they are done and don’t need to ever revisit the topic again. In contrast, I have great patience for returning again and again to the most important topics, even if they seem crazy hard. And for spending a lot of time on each topic, even if I’m at a party.
A long while ago I spent years studying the rationality of disagreement, though I haven’t thought much about it lately. But rereading Yudkowsky’s Inadequate Equilibria recently inspires me to return to the topic. And I think I have a new take to report: unusually for me, I adopt a mixed intermediate position. This topic forces one to try to choose between two opposing but persuasive sets of arguments.

On the one side there is formal theory, to which I’ve contributed, which says that rational agents with different information and calculation strategies can’t have a common belief in, nor an ability to foresee, the sign of the difference in their opinions on any “random variable”. (That is, a parameter that can be different in each different state of the world.) For example, they can’t say “I expect your next estimate of the chance of rain here tomorrow to be higher than the estimate I just now told you.” Yes, this requires that they’d have the same ignorant expectations given a common belief that they both knew nothing. (That is, the same “priors”.) And they must be listening to and taking seriously what the other says. But these seem reasonable assumptions.

An informal version of the argument asks you to imagine that you and someone similarly smart, thoughtful, and qualified each become aware that your independent thoughts and analyses on some question had come to substantially different conclusions. Yes, you might know things that they do not, but they may also know things that you do not. So as you discuss the topic and respond to each other’s arguments, you should expect to on average come to more similar opinions near some more intermediate conclusion. Neither of you has a good reason to prefer your initial analysis over the other’s.
Yes, maybe you will discover that you just have a lot more relevant info and analysis. But if they see that, they should then defer more to you, as you would if you learned that they are more expert than you. And if you realized that you were more at risk of being proud and stubborn, that should tell you to reconsider your position and become more open to their arguments.
According to this theory, if you actually end up with common knowledge of or an ability to foresee differences of opinion, then at least one of you must be failing to satisfy the theory assumptions. At least one of you is not listening enough to, and taking seriously enough, the opinions of the other. Someone is being stubbornly irrational.
Okay, perhaps you are both afflicted by pride, stubbornness, partisanship, and biases of various sorts. What then?
You may find it much easier to identify more biases in them than you can find in yourself. You might even be able to verify that you suffer less from each of the biases that you suspect in them. And that you are also better able to pass specific intelligence, rationality, and knowledge tests of which you are fond. Even so, isn’t that roughly what you should expect even if the two of you were similarly biased, but just in different ways? On what basis can you reasonably conclude that you are less biased, even if stubborn, and so should stick more to your guns?
A key test is: do you in fact reliably defer to most others who can pass more of your tests, and who seem even smarter and more knowledgeable than you? If not, maybe you should admit that you typically suffer from accuracy-compromising stubbornness and pride, and so for accuracy purposes should listen a lot more to others. Even if you are listening about the right amount for other purposes.
Note that in a world where many others have widely differing opinions, it is simply not possible to agree with them all. The best that could be expected from a rational agent is to not consistently disagree with some average across them all, some average with appropriate weights for knowledge, intelligence, stubbornness, rationality, etc. But even our best people seem to consistently violate this standard.
All that we’ve discussed so far has been regarding just one of the two opposing but persuasive sets of arguments I mentioned. The other argument set centers around some examples where disagreement seems pretty reasonable. For example, fifteen years ago I said to “disagree with suicide rock”. A rock painted with words to pretend it was a sentient creature listening carefully to your words, but offering no evidence that it actually listened, should be treated like a simple painted rock. In that case, you have strong evidence to down-weight its claims. A second example involves sleep. While we are sleeping we don’t usually have an opinion on whether we are sleeping, as that issue doesn’t occur to us. But if the subject does come up, we often mistakenly assume that we are awake. Yet a person who is actually awake can have high confidence in that fact; they can know that while a dreaming mind is seriously broken, their mind is not so broken.
An application to disagreement comes when my wife awakes in the night, hears me snoring, and tells me that I’m snoring and should turn my head. Responding half asleep, I often deny that I’m snoring, as I then don’t remember hearing myself snore recently, and I assume that I’d hear such a thing. In this case, if my wife is in fact awake, she can comfortably disagree with me. She can be pretty sure that she did hear me snore and that I’m just less reliable due to being only half awake.
Yudkowsky uses a third example, which I also find persuasive, but at which many of you will balk. That is the majority of people who say they have direct personal evidence for God or other supernatural powers. Evidence that’s mainly in their feelings and minds, or in subtle patterns in how their personal life outcomes are correlated with their prayers and sins. Even though most people claim to believe in God, and point to this sort of evidence, Yudkowsky and I think that we can pretty confidently say that this evidence just isn’t strong enough to support that conclusion. Just as we can similarly say that personal anecdotes are usually insufficient to support the usual confidence in the health value of modern medicine. Sure, it’s hard to say with much confidence that there isn’t a huge smart power somewhere out there in the universe. And yes, if this power did more obvious stuff here on Earth back in the day, that might have left a trail of testimony and other evidence, to which advocates might point. But there’s just no way that either of those considerations can remotely support the usual level of widespread confidence in a God meddling in detail with their heads and lives.
The most straightforward explanation I can see here is social desirability bias, a bias that not only introduces predictable errors but also reduces one’s willingness to notice and correct such errors. By attributing their belief to “faith”, many of them do seem to acknowledge quite directly that their argument won’t stand up to the usual evaluation standards. They are instead believing because they want to believe. Because their social world rewards them for the “courage” and “affirmation” of such a belief.
And that pretty closely fits a social desirability bias. Their minds have turned off their rationality on this topic, and are not willing to consider the evidence I’d present, or the fact that the smartest most accomplished intellectuals today tend to be atheists. Much like the sleeper who just can’t or won’t see that their mind is broken and unable to notice that they are asleep.
In fact, it seems to me that this scenario matches a great many of the disagreements I’m willing to have with others, as I tend to be willing to consider hypotheses that others find distasteful or low status. Many people tell me that the pictures I paint in my two books are ugly, disrespectful, and demotivating, but far fewer offer any opposing concrete evidence. Even though most people seem able to notice the fact that social desirability would tend to make them less willing to consider such hypotheses, they just don’t want to go there.
Yes, there is an opposite problem: many people are especially attracted to socially undesirable hypotheses. A minority of folks see themselves as courageous “freethinkers” who by rights should be celebrated for their willingness to “think outside the box” and embrace a large fraction of the contrarian hypotheses that come their way. Alas, by being insufficiently picky about the contrarian stories they embrace, they encourage, not discourage, everyone else to embrace social desirability biases. On average, social desirability only causes modest biases in the social consensus, and thus only justifies modest disagreements from those who are especially rational. Going all in on a great many contrarian takes at once is a sign of an opposite problem. Yes, the stance I’m taking implies that contrarian views, i.e., views that seem socially undesirable to embrace, are on average neglected, and thus more likely than the consensus is willing to acknowledge. But that is of course far from endorsing most of them with high confidence. For example, UFOs as aliens are indeed more likely than the usual prestigious consensus will admit, but could still be pretty unlikely. And assigning a somewhat higher chance to claims like that the moon landings were faked is not at all the same as endorsing such claims.
So here’s my new take on the rationality of disagreement. When you have a similar level of expertise to others, you can justify disagreeing with an apparent social consensus only if you can identify a particularly strong way that the minds of most of those who think about the topic tend to get broken by the topic. Such as due to being asleep or suffering from a strong social desirability bias. (A few weak clues won’t do.)
I see this position as mildly supported by polls showing that people think that those in certain emotional states are less likely to be accurate in the context of a disagreement; different emotions plausibly trigger different degrees of willingness to be fair or rational. (Here are some other poll results on what people think predicts who is right in a disagreement.) But beware of going too wild embracing most socially undesirable views. And you can’t just in general presume that others disagree with each of your many positions due to their minds being broken in some way that you can’t yet see. That way lies unjustified arrogance. You instead want specific concrete evidence of strongly broken minds.
Imagine that you specialize in a topic so much that you know nearly as much as the person in the world who knows the most, but do not have the sort of credentials or ways to prove your views that the world would easily accept. And this is not the sort of topic where insight can be quickly and easily translated into big wins, wins in either money or status. So if others had come to your conclusions before, they would not have gained much personally, nor found easy ways to persuade many others.
In this sort of case, I think you should feel more free to disagree. Though you should respect base rates, and try to test your views as fast and strongly as possible. As the world is just not listening to you, you can’t expect them yet to credit what you know. Just also don’t expect the world to reward you or pay you much attention, even if you are right.
Posted on 2018-10-18
Late in November 2006 I started this blog, and a month later on Christmas eve I reported briefly on the official publication (after 8 rejections) of my paper Uncommon Priors Require Origin Disputes. That was twelve years ago, and now Google Scholar tells me that this paper has 17 cites, which is about 0.4% of my 3933 total cites, which I’d say greatly under-estimates its value.
Recently I had the good fortune to be invited to speak at the Rutgers Seminar on Foundations of Probability, and I took that opportunity to raise awareness about my old paper. Only about ten folks attended (a famous philosopher spoke nearby at the same time), but this video was taken:
In the video my slides are at times dim, but a sharp version can be seen here. Let me now try to explain why my topic is important, and what my result is.
In economics, the most common formal model of a rational agent, by far, is that of a Bayesian. This standard model is also very common in business, political science, statistics, computer science, and many other fields. As there is actually a family of related models, we can use this space to argue about what it means to be “rational”. People argue over various particular proposed “rationality constraints” which limit this space of possibilities to varying degrees.
In economics, the standard model starts with a large (finite) state space, wherein each state resolves all relevant uncertainty; every interesting question is completely answered once you know which state is the true state. Each agent in this model has a prior function which assigns a probability to each state in this space. For any given time and situation an agent’s info can be expressed as a set; at any state, each agent has an info set of states where they know that the true state is somewhere within that set, but don’t know where within that set. Any small piece of info is also expressible as a set; to combine info, you intersect sets.
Given a state space, prior, and info, an agent’s expectation or belief is given by a weighted average, using their prior and conditioned on their info set. That is, all variations in agent beliefs across time or situation are to be explained by variations in their info. We usually assume that info is cumulative, so that each agent knows everything that they have ever known in the past. In order to predict actions, in addition to beliefs, the most common approach is to assume agents maximize expected utility, where each agent has another function that assigns a numerical utility value to each possible state.
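The model just described can be sketched in a few lines of code. All the states, probabilities, and values below are invented for illustration, not taken from any paper:

```python
# Minimal sketch of the standard Bayesian model described above.
# States are labeled 0..3; the prior assigns each a probability.
# An agent's info set is a set of states; conditioning on it means
# renormalizing the prior over just that set.

prior = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}   # illustrative prior
X = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}   # a "random variable": one value per state

def expectation(var, prior, info_set):
    """Expected value of var, given that the true state lies in info_set."""
    total = sum(prior[s] for s in info_set)
    return sum(prior[s] * var[s] for s in info_set) / total

# Combining two pieces of info = intersecting their sets.
info_a = {0, 1, 2}
info_b = {1, 2, 3}
combined = info_a & info_b   # {1, 2}

print(expectation(X, prior, combined))  # → 26.0
```

Here the belief given combined info is (0.2·20 + 0.3·30) / (0.2 + 0.3) = 26, exactly the weighted average the text describes.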
Some people study ways to relax these assumptions, such as by using a set of priors instead of a single prior, by seeking computationally feasible approximations, or by allowing agents to forget info they once knew. Other people focus on adding stronger assumptions. For example, when a situation has a natural likelihood function giving the chances of particular outcomes assuming particular parameter settings, we usually assume that each agent’s prior agrees with this likelihood. Some people offer arguments for why particular priors are natural for particular situations. And models also usually assume that differing agents have the same prior.
One key rationality question is when it is reasonable to disagree with other people. Most intellectuals see disagreement as rational, and are surprised to learn that theory often says otherwise. This issue turns crucially on the common prior assumption. Given uncommon priors, it is easy to disagree, but given common priors it is hard to escape the conclusion that it is irrational to knowingly disagree, in the following sense of “foresee to disagree.” Assume you are now estimating some number X, and also now estimating some other person’s future estimate of X, an estimate that they will make at some future time. There is a difference now between these two numbers, and you will now clearly tell that other person the sign of this difference. They will then take this sign into account when making their future estimate. In this situation, for standard Bayesians, this sign must equal zero; you can’t both warn them that you expect their estimate will be too high relative to your estimate, and then also still expect them to remain too high. They will instead listen to your warning and correct enough based on it. This sort of result holds nearly exactly for many slight weakenings of the standard rationality assumptions, but not if we assume big prior differences. And we have seen clearly, in the lab and in real life, that humans can in fact often “foresee to disagree” in this sense.
Humans do foresee to disagree, while Bayesians with common priors do not. So are humans rational or irrational here? To answer that question, we must study the arguments for and against common priors. Not just arguments that particular aspects of priors should be common, or that they should be common in certain simple situations. No, here we need arguments that entire prior functions should or should not be the same. And you can look long and hard without finding much on this topic.
Some people simply declare that differing beliefs should only result from differing information, but others are not persuaded by this. Some people note that as expected utility is a sum over products of probability and utility, one can arbitrarily rescale each probability and utility together holding constant that product, and get all the same decisions. So one can assume common priors without loss of generality, as long as one is free enough to change utility functions. But of course this also makes uncommon priors also without loss of generality. And we are often clear that we mean different things by probabilities and utilities, and thus are not free to vary them arbitrarily. If it means something different to say that an event is unlikely than it means to say that that event’s outcome differences are less important to you, then probabilities mean something different from utilities.
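The rescaling point can be made concrete with a toy decision problem (all numbers invented): multiply each state’s probability by an arbitrary positive factor, divide that state’s utilities by the same factor, renormalize, and every act’s expected utility changes only by one common positive constant, so no decision changes.

```python
# Expected utility sums products p(s) * u(s). Scaling p(s) by c(s) and
# u(s) by 1/c(s) preserves each product; renormalizing the probabilities
# then rescales all acts' expected utilities by the same positive constant.

p = [0.2, 0.8]          # one agent's (uncommon) prior over two states
u = [[10, 0], [0, 5]]   # utility of two acts, in each state

c = [2.0, 0.5]          # arbitrary positive per-state rescaling factors
q_raw = [p_s * c_s for p_s, c_s in zip(p, c)]
Z = sum(q_raw)
q = [x / Z for x in q_raw]                                    # rescaled prior
v = [[u_as / c_s for u_as, c_s in zip(row, c)] for row in u]  # compensating utilities

def eu(prior, utils):
    """Expected utility of each act under the given prior."""
    return [sum(ps * us for ps, us in zip(prior, row)) for row in utils]

print(eu(p, u))   # original expected utilities per act
print(eu(q, v))   # each is 1/Z times the original: same ranking, same choices
```

Since both representations pick the same act, the math alone cannot decide whether priors are “really” common; that is why the argument turns on probabilities and utilities meaning different things.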
And so finally we get to my paper, Uncommon Priors Require Origin Disputes, one of the few papers I have ever seen to give a concrete argument on common priors. Most everyone who hears the argument seems persuaded, yet it is rarely mentioned when people summarize what we know about rationality in Bayesian frameworks. If you read the rest of this post, at least you will know.
My argument is pretty simple, though I needed a clever construction to let me say it formally. If the beliefs of a person are described in part by a prior, then that prior must have come from somewhere. My key idea is to use beliefs about the origins of priors to constrain rational priors. For example, if you knew that a few minutes ago someone stuck a probe into your brain and randomly changed your prior, you would probably want to reverse that change. So not all causal origins of priors seem equally rational.
However, there’s one big obstacle to reasoning about prior origins. The natural way to talk about origins is to make and use some sort of probability distribution over different possible priors, origin features, and other events. But in every standard Bayesian model, the priors of all agents are common knowledge. That is, priors are all the same in all possible states, so no one can have any degree of uncertainty about them, or about what anyone else knows about them. Everyone is always completely sure about who has what priors.
To evade this obstacle, I chose to embed a standard model within a larger standard model. So there is a model and a pre-model. While the ordinary model has ordinary states and priors, the pre-model has pre-states and pre-priors. It is in the pre-model that we can reason about the causal origins of the priors of the model.
The pre-states of the pre-model are simply pairs of an ordinary state and an ordinary prior assignment, that says which agents get which priors. So a pre-prior is a probability distribution over the set of all combinations of possible states in the ordinary model, and possible prior assignments for that ordinary model. Each agent would initially know nothing about anything, including about ordinary states or who will get which prior. Their pre-prior would summarize their beliefs in this state of ignorance. Then at some point all agents would have learned about which prior they and the other agents will be using. From this point forward, agent info sets are entirely within an ordinary model, where their prior is common knowledge and gives them ordinary beliefs about ordinary states. So from this point on, an ordinary model is sufficient to describe everyone’s beliefs.
The key pre-rationality constraint that I propose is to have pre-priors agree with priors when they can condition on the same info. So if we condition an agent’s pre-prior on the assignment of who gets which priors, and then ask for the probability of some ordinary event, we should get the same answer as when we simply ask their prior for the probability of that ordinary event. And merely inspecting the form of this simple key equation is enough to draw my key conclusion: Within any single pre-prior that satisfies the pre-rationality condition, all ordinary events are conditionally independent of other agent’s priors, given that agent’s prior.
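In symbols (the notation here is mine, and may differ from the paper’s): writing $q_i$ for agent $i$’s pre-prior, $(p_1, \ldots, p_n)$ for a prior assignment, and $E$ for any ordinary event, the constraint and its immediate consequence are:

```latex
% Pre-rationality constraint (notation mine, not necessarily the paper's):
q_i(E \mid p_1, \ldots, p_n) \;=\; p_i(E)
% The right side depends only on p_i, so conditioning further on the other
% agents' priors p_{-i} changes nothing: within q_i, every ordinary event E
% is conditionally independent of p_{-i} given p_i.
```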
So, within a pre-prior, an agent believes that ordinary events and their own prior are informative about each other; priors are different when events are different, and in the sensible way. But also within this pre-prior, each agent believes that the priors of other agents are not otherwise informative about ordinary events. The priors of other agents can only predict ordinary events by predicting the prior of this agent; absent that connection, ordinary events and other priors do not predict each other.
I summarize this as believing that “my prior had special origins.” My prior was created via a process that caused it to correlate with other events in the world, but the priors of other agents were not created in this way. And of course this belief that you were made special is hard to square with many common beliefs about the causal origins of priors. This belief is not consistent with your prior being encoded in your genes via the usual processes of genetic inheritance and variation. It is similarly not consistent with many common theories of cultural inheritance and variation.
The obvious and easy way to not believe that your prior resulted from a special unusual origin process is to have common priors. And so this pre-rationality constraint can be seen as usually favoring common priors. I thus have a concrete argument that Bayesians should have common priors, an argument based on the reasonable rationality consideration that not all causal origins of priors are equally rational. If priors should be consistent with plausible beliefs about their causal origins, then priors must typically be common.
Posted on 2019-12-19
Violence was quite common during much of the ancient farming era. While farmers retained even-more-ancient norms against being the first to start a fight, it was often not easy for observers to tell who started a fight. And it was even harder to get those who did know to honestly report that to neutral outsiders. Fighters were typically celebrated for showing strength and bravery, and also loyalty when they claimed to fight “them” in service of defending “us”. Fighting was said to be good for societies, such as to help prepare for war. The net effect was that the norm against starting fights was not very effective at discouraging fights during the farming era, especially when many “us” and “them” were in close proximity.
Today, norms against starting fights are enforced far more strongly. Fights are much rarer, and when they do happen we try much harder to figure out who started them, and to more reliably punish starters. We have created much larger groups of “us” (e.g., nations), and use law to increase the resources we devote to enforcing norms against fighting, and the neutrality of many who spend those resources. Furthermore, we have and enforce stronger norms against retaliating overly strongly to apparent provocations that may have been accidental. We are less impressed by fighters, and prefer for people to use other ways to show off their strength and bravery. We see fighting as socially destructive, to be discouraged. And as fighting is rare, we infer undesired features about the few rare exceptions, such as impulsiveness and a lack of empathy.
Now consider disagreement. I have done a lot of research on this topic and am pretty confident of the following claim (which I won’t defend here): People who are mainly trying to present accurate beliefs that are informative to observers, without giving much weight to other considerations (aside from minimizing thinking effort), do not foresee disagreements. That is, while A and B may often present differing opinions, A cannot publicly predict how a future opinion that B will present on X will differ on average from A’s current opinion on X. (Formally, A’s expectation of B’s future expectation nearly equals A’s current expectation.) Of course today such foreseeing to disagree is quite commonplace. Which implies that in any such disagreement, one or both parties is not mainly trying to present accurate estimates. Which is a violation of our usual conversational norms for honesty. But it often isn’t easy to tell which party is not being fully honest. Especially as observers aren’t trying very hard to tell, nor to report what they see honestly when they feel inclined to support “our” side in a disagreement with “them”. Furthermore, we are often quite impressed by disagreers who are smart, knowledgeable, passionate, and unyielding. And many say that disagreements are good for innovation, or for defending our ideologies against their rivals. All of which helps explain why disagreement is so common today.
But the analogy with the history of violent physical fights suggests that other equilibria may be possible. Imagine that disagreement were much less common, and that we could spend far more resources to investigate each one, using relatively neutral people. Imagine a norm of finding disagreement surprising and expecting the participants to act surprised and dig into it. Imagine that we saw ourselves much less as closely mixed groups of “us” and “them” regarding these topics, and that we preferred other ways for people to show off loyalty, smarts, knowledge, passion, and determination.
Imagine that we saw disagreement as socially destructive, to be discouraged. And imagine that the few people who still disagreed thereby revealed undesirable features such as impulsiveness and ignorance. If it is possible to imagine all these things, then it is possible to imagine a world which has far less foreseeable disagreement than our world, comparable to how we now have much less violence than did the ancient farming world.
When confronted with such an imagined future scenario, many people today claim to see it as stifling and repressive. They very much enjoy their freedom today to freely disagree with anyone at any time. But many ancients probably also greatly enjoyed the freedom to hit anyone they liked at any time. Back then, it was probably the stronger better fighters, with the most fighting allies, who enjoyed this freedom most. Just as today it is probably the people who are best at arguing to make their opponents look stupid who most enjoy our freedom to disagree. That doesn’t mean this alternate world wouldn’t be better.
Posted on 2008-03-05
A recent New Scientist mentions a 2005 American Political Science Review paper on the genetic basis of political beliefs, which includes this key table, breaking variation in opinions (among 30,000 Virginia twins) on 28 specific topics into three origin components: genetic (heritability), family (shared environment), and other (unshared environment):
The paper shows similar results for Australian twins.
This is a concrete occasion to revisit a general issue. In general, if you want to believe the truth, then you should just accept the average belief on any topic unless you have a good (and better than average) reason to think the causes of your belief difference would be substantially more informed than average.
So unless you have a good reason to think your genes tend to produce more informed beliefs than other genes, you should reject the genetically-caused parts of how your beliefs differ from average beliefs. Even if you have a higher genetically-given IQ, and even if high IQ folks have more accurate beliefs, you should still reject genetic ways in which your beliefs differ from the average beliefs of high IQ folks. After all, true beliefs are supposed to be about the world, not about the particular genes you were randomly given.
Unless you have a good reason to think your childhood family environment was more informed than an average family environment, ignoring any genetic advantages in your family, then you should reject the ways in which your beliefs differ from average beliefs due to your family background. Similarly, you should also reject other non-genetic non-family causes of your belief differences, if you do not have a better than average reason to think your causes are better than average.
Having an intuitive feeling that your belief causes are better is not a “good reason” if most everyone has a similar intuitive feeling. The fact that you have specific reasons for your specific beliefs is also not good enough – most everyone has specific reasons.
A good reason must be based on some feature of you that is different from others, where you have a good reason to think this feature is correlated with being right. The mere fact that you have a distinguishing feature, and would like to think well of yourself, is not by itself a reason to think that feature is correlated with being right! And you must be wary of the common bias to lower our evidential standards when concluding that people like us tend to be more right.
Added: Let’s say that on a scale of 0 to 100, your position on property taxes is 90 – you think such taxes are good for some standard widely accepted mix of values such as economic growth, inequality aversion, or good neighbors. When you disagree with someone at the other extreme, with a position of 10, you understand it to be a disagreement about facts, not values.
Let’s assume the average position on this issue is 50, and that you can see no good reason to think the genes that lean you toward property taxes are better than average. Since the table says that 41% of belief variation on this topic is genetic, to eliminate this genetic component of your beliefs you might reduce your position from 90 to about 81; shrinking your deviation from the mean by a factor of √(1 − 0.41) ≈ 0.77 removes that 41% share of the variance, since variance is squared deviation.
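The adjustment above is a one-line calculation; here is a sketch using the illustrative numbers from this example:

```python
import math

# Removing a given share of belief variance means scaling your deviation
# from the mean belief by sqrt(1 - share), since variance is squared deviation.

mean = 50.0
position = 90.0
genetic_share = 0.41   # genetic share of variance, from the twin-study table

deviation = position - mean
adjusted = mean + deviation * math.sqrt(1 - genetic_share)
print(round(adjusted))  # → 81
```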
Posted on 2010-05-28
Hunting has two main modes: searching and chasing. With searching you look for something to chase. With chasing, in contrast, you have a focus of attention that drives your actions. You may find something else worth chasing along the way, and then switch your focus to a new chase, but you’ll still maintain a focus.
It seems to me that while reading non-fiction, most folks are in searching mode. Most would be more intellectually productive, however, in chasing mode. It helps to have in mind a question, puzzle, or problem, and then read in order to answer your question, explain your puzzle, or solve your problem.
In searching mode, readers tend to be less critical. If a source came recommended, they tend to keep reading along even if they aren’t quite sure what the point is. Since authors tend to be more prestigious than readers, readers tend to feel reluctant to question or judge what they’ve read. They are more likely to talk about whether they enjoyed the read, than whether the author’s argument works.
In chasing mode, readers are naturally more critical. When you are looking for something particular, it feels less presumptuous to stop reading when your source comes to seem irrelevant. After all, the source might be good for some other purpose, even if not for your purpose.
In chasing mode, you continually ask yourself whether what you are reading is relevant for your quest, or whether the author actually has anything new or interesting to say. You flip around seeking sections that might be more relevant, and you might even look up the references for an especially relevant section.
Also, search-readers often don’t have a good mental place to put each thing they learn. In which case they don’t end up learning much. Chasers, in contrast, always have specific mental places they are trying to fill with what they read, so they better integrate new things they learn with old things they know.
In chasing mode, readers also tend to better interleave reading and thinking. People often hope that search-mode reading will inspire them to new thoughts, and are disappointed to find that it doesn’t. Chase-mode reading, in contrast, requires constant thinking, in order to evaluate how the current source addresses your chosen focus. This tends to make it easier to notice missing holes in the literature, where your new idea can be placed.
So if you read to be intellectually productive, rather than just to fill your time, consider reading while chasing something, anything. (From a conversation with Heather Macsorley.)
Added 8p: Katja and Andy comment, and dloye offers this quote from Samuel Johnson:
What we read with inclination makes a much stronger impression. If we read without inclination, half the mind is employed in fixing the attention; so there is but one half to be employed on what we read.
Posted on 2007-06-14
Freethinker. One who has rejected authority and dogma, especially in his religious thinking, in favor of rational inquiry and speculation. American Heritage Dictionary
Individual whose opinions are formed on the basis of an understanding and rejection of tradition, authority or established belief. Wikipedia
Many people see themselves as “free thinkers,” with minds open to new ideas and perspectives. They describe themselves positively as favoring rationality, but in practice their negative self-definition seems to have more force. Even when they turn out to have been wrong, freethinkers are proud of having resisted social pressure toward conventional wisdom.
Freethinkers see the deck stacked against new or contrary ideas, and see their own brave contrarian stance as a needed antidote to unreasonable conformity pressures. On net, however, freethinkers deserve much of the blame for resistance to new ideas. Bryan Caplan explains:
Suppose you’re interviewing a smart guy [for a job], without a college degree, and he offers you a money-back guarantee. You might think “What a great deal” and accept. But then again, you might start thinking “What a weirdo. What’s wrong with him?” And this, I propose, is the stumbling block to lots of worthwhile innovations. A person with an unconventional idea may have a point, but is also unlikely to be “normal.” He may not fit in with other people. He may have problems with authority. He may be deviant in more ways than one!
The problem is that on average people who support odd ideas are less desirable as associates, and less discriminating in which ideas they endorse. If people only endorsed odd ideas when they had new information suggesting such ideas were promising, we should be eager to hear of such news, and eager to associate with such people. But in fact the main task faced by those with good news on odd ideas is to distinguish themselves from freethinkers who just pretend to have such news. Contrary to their self-image, undiscriminating freethinkers are our main obstacle to innovation.
Posted on 2012-05-15
I’ve noticed that recommendations for action based on a vision of the future tend to rest on the idea that something must “eventually” occur.
The common pattern: project forward a current trend to an extreme, while assuming other things don’t change much, and then recommend an action which might make sense if this extreme change were to happen all at once soon.
This is usually a mistake. The trend may not continue indefinitely. Or, by the time a projected extreme is reached, other changes may have changed the appropriate response. Or, the best response may be to do nothing for a long time, until closer to big consequences. Or, the best response may be to do nothing, ever – not all negative changes can be profitably resisted.
It is just not enough to suspect that an extreme will be reached eventually – you usually need a good reason to think it will happen soon, and that you know a robust way to address it. In far mode it often feels like the far future is clearly visible, and that few obstacles stand in the way of planning paths to achieve far ends. But in fact, the world is much messier than far mode is willing to admit.
Posted on 2006-12-04
I suspect the following issue will be a thorn in our sides for some time to come: when can we justify seen biases as correcting for unseen biases? “Seen” biases are relatively easy to see and document, whereas “unseen” biases are said to exist but are harder to clearly see.
The issue showed up in “Hide Sociobiology Like Sex?,” where some wanted the seen bias of focusing children on altruism instead of more realistic selfishness, to correct for the unseen bias of children confusing “is” and “ought.” And it shows up in this recent Washington Post article on drug effectiveness:
Treating schizophrenia with an older, cheaper drug, rather than with heavily promoted newer medications, reduces the cost by as much as 30 percent with no apparent difference in safety and effectiveness, according to the first study to examine the economic implications of antipsychotic drug prescribing practices in the United States. … The findings have roiled the field of psychiatry in a fierce debate over the study’s implications and have triggered concerns it could lead public and private insurers to limit drastically which drugs they will pay for. …the new finding faced stiff headwinds before it was published, and was subjected to an extraordinary level of review. … several experts said they were very worried, however, that the choice of medications would be taken from physicians and would be decreed by insurers. That would ignore the complexities of treating schizophrenia and the need for flexibility, the experts said. Patients who have tried perphenazine unsuccessfully, for example, may not be good candidates to go back on it.
The seen effect here is that cheaper drugs seem just as effective, so insurers may limit coverage to only them, to counter the seen doctor bias of low sensitivity to drug prices. Doctors, on the other hand, resist these new findings, because they fear losing their freedom to choose drugs based on their judgment of detailed patient circumstances. Since there are no clinical trials yet to document the claim that such doctor judgment improves patient outcomes on average, this is an unseen bias (if it exists).
I once told an investment adviser I didn’t want his services because people like him lose money for their clients on average. He replied, “But none of my clients are average; are you?” I guess he thought his seen bias was justified by all those unseen biases he was fixing.
The key issue here is that if it is too easy to believe in unseen biases, we could justify all of our seen biases as countering made-up unseen biases.
Posted on 2006-12-11
Why is there law? Some say for social justice, others for economic efficiency. I suspect that “law is theater”; i.e., law is there to make disputants shut up. When one person is mad as hell at another, law wants an outcome where neither they nor their friends yell and complain, and make the law look bad. To achieve this, hard to understand legal processes make it hard to know what exactly to complain about, a long expensive process saps the energy needed to complain, and the option of endless appeals makes it unclear when to complain.
A complaining loser’s best argument is often “it wasn’t fair”; there was bias. So law-as-theater predicts law-as-no-bias-theater; law will bend over backwards to avoid any possible appearance of bias. How far does our law go in this direction? Consider what law would be like with unbiased jurors.
If we expected jurors (or judges) to try their best to achieve social justice or economic efficiency, without substantial bias, we would have law, but not laws. That is, there would be lots of law, i.e., activity in a system for settling disputes, but few laws, i.e., rules about how to settle disputes. The legal system would be simple: you bring your complaint to a jury, you and your opponent each tell your side, and then the jury makes any decision they think appropriate. Think King Solomon.
Juries (or judges) would thus have complete control over the legal process. They would talk to anyone about anything they liked, hardly limited by any rules of procedure or evidence. Their final judgment would be any outcome they chose, based on any consideration they liked, hardly limited by any laws or precedent. In fact, of course, law is nothing like this.
Most of the legal biases that concern people are not due to juror interests. After all, we can pretty much eliminate strong interest biases by just preventing jurors (or judges) from extorting disputants, or from sharing any substantial interests with disputants. Thus the structure of our legal system is driven in large part by fears of non-interest-based biases in juror beliefs.
While we have built elaborate legal structures to deal with these supposed biases, legal scholars have spent almost no effort to document that such biases are real. Legal rules constraining jurors are thus seen biases, justified as responding to unseen biases. If the law were about social justice or economic efficiency, you’d think the legal system would study biases a lot more. But if law is more about no-bias-theater, needing only to make it hard for disputants to complain of bias, what would be the point?
Posted on 2006-12-29
One dictionary defines “to give the benefit of the doubt” as
To believe something good about someone, rather than something bad, when you have the possibility of doing either.
That is, assume the best. This may be better than assuming the worst, but honesty requires you to instead remain uncertain, assigning chances according to your evidence. Does this mean we should stereotype people? After all, M Lafferty commented:
To make assumptions about an individual based on a stereotype is wrong, even if the stereotypical view is broadly accurate.
To the contrary, I say honesty demands we stereotype people, instead of giving them “the benefit of the doubt.” Bryan Caplan has emphasized to me that most stereotypes are on average accurate:
Obviously, every stereotype has exceptions; stereotypes are useful because they are better than nothing, not because they are infallible.
For more, see John Ray’s, “Do We Stereotype Stereotyping?” I suspect people justify the usual dictum against stereotypes as countering a human tendency to assume the worst about outsiders. But until I see evidence of this, I’ll classify this as a seen bias justified by an unseen one. Consider Perry Metzger’s recent comment:
What you are saying, essentially, is that after seeing that a number of estimates of some constant do not fall within each other’s error bars, physicists should then increase the size of the error bars. I don’t think that is reasonable. Not all methods of measurement are identical, and different groups use different instruments, so the systematic errors made by different groups are different. That means that it is not necessarily the case that all groups are underestimating their errors — in fact, it is most likely that only some of them are underestimating error.
Yes, a set that is biased overall may include subsets which are less biased. And by adjusting to correct for the overall bias we may increase the error in the less-biased subset. Nevertheless, unless we can distinguish the subsets that are more vs. less biased, we must accept this outcome.
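The adjustment in question can be made concrete. Here is a minimal sketch in the spirit of the Particle Data Group’s scale-factor convention (the function name and all numbers below are made up for illustration): when independent measurements of one constant disagree by more than their quoted error bars allow, every error bar gets inflated by S = sqrt(chi2 / (N - 1)), without claiming to know which particular group underestimated its errors.

```python
import math

def inflate_errors(values, errors):
    """Combine N >= 2 independent measurements of one constant.
    If their scatter exceeds what the quoted errors allow (chi2 > N-1),
    inflate the combined error bar by S = sqrt(chi2 / (N - 1))."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    mean_err = math.sqrt(1.0 / sum(weights))
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    s = max(1.0, math.sqrt(chi2 / (len(values) - 1)))
    return mean, mean_err * s, s

# Three hypothetical measurements whose error bars do not overlap:
mean, err, s = inflate_errors([9.8, 10.4, 11.1], [0.1, 0.1, 0.1])
```

Note that the scale factor is applied uniformly: without independent evidence about which measurement is the careful one, the method refuses to give any group “the benefit of the doubt.”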
The general principle is: you need a better than average reason to think something is better than average. A physicist might say “I don’t need to adjust as much because I’m measuring voltage, where systematic bias is less a problem,” or “I’m from Harvard, where we are more careful.” But he needs to actually have evidence that there is less bias with voltage or at Harvard; no fair just giving himself “the benefit of the doubt.”
Furthermore, since our minds are good at selectively attending to factors favoring us, we must realize that others’ minds will attend to other factors, such as their years of experience or their IQ. To decide if you are less biased than average, you must consider the sorts of reasons that will occur to others, and ask if your reasons are better than those. Furthermore, if you are better in general at coming up with reasons for things, you must count that against yourself.
Finally, consider Eliezer Yudkowsky’s complaint about modesty:
How can you know which of you is the honest truthseeker, and which the stubborn self-deceiver? The creationist believes that he is the sane one and you are the fool. Doesn’t this make the situation symmetric around the two of you? … “But I know perfectly well who the fool is. It’s the other guy. It doesn’t matter that he says the same thing – he’s still the fool.” This reply sounds bald and unconvincing when you consider it abstractly. But if you actually face a creationist, then it certainly feels like the correct answer. … Those who dream do not know they dream; but when you wake you know you are awake.
The key question is: what concrete evidence can you cite that you are more sane than a creationist, or more awake than a dreamer? Perhaps you know more biology than a creationist, and you are more articulate than a dreamer. But the mere feeling that you are right does not justify giving yourself “the benefit of the doubt.”