Arjun Panickssery Storing Pages and stuff

Our Motives

  1. Decision Theory Remains Neglected
  2. What Function Music?
  3. Politics isn’t about Policy
  4. Views Aren’t About Sights
  5. Why Do Bets Look Bad?
  6. Homo Hypocritus
  7. Resolving Your Hypocrisy
  8. Errors, Lies, and Self-Deception
  9. Enforce Common Norms On Elites
  10. Identity Norms
  11. Exclusion As A Substitute For Norms, Law, & Governance
  12. How Idealists Aid Cheaters
  13. Beware Mob War Strategy
  14. Automatic Norms
  15. 10 Implications of Automatic Norms
  16. Automatic Norm Lessons
  17. Automatic Norms in Academia
  18. Plot Holes & Blame Holes
  19. Fairy Tales Were Cynical
  20. Why Fiction Lies
  21. Biases Of Fiction
  22. Why We Fight Over Fiction
  23. Stories Are Like Religion
  24. More Stories As Religion
  25. This is the Dream Time
  26. DreamTime
  27. Dreamtime Social Games
  28. We Moderns Are Status-Drunk
  29. Earth: A Status Report
  30. On Teen Angst

Decision Theory Remains Neglected

Posted on 2020-02-01

Back in ’84, when I first started to work at Lockheed Missiles & Space Company, I recall a manager complaining that their US government customer would not accept using decision theory to estimate the optimal thickness of missile walls; they insisted instead on using a crude heuristic expressed in terms of standard deviations of noise. Complex decision theory methods were okay to use for more detailed choices, but not for the biggest ones.
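The contrast in that anecdote can be made concrete with a toy calculation. This is only a sketch under invented numbers (the stress distribution, costs, and strength model are all assumptions, not anything from the actual engineering problem): the decision-theory approach picks the wall thickness that minimizes expected total cost, while the crude heuristic picks the thinnest wall whose strength clears the mean load by three standard deviations of noise.

```python
import math

# All numbers are made up for illustration.
MEAN_STRESS, SD_STRESS = 10.0, 1.5   # load on the wall, arbitrary units
COST_PER_MM = 2.0                    # marginal cost of extra thickness
FAILURE_COST = 500.0                 # cost if the wall fails

def strength(t):
    # Assumed linear strength-vs-thickness model.
    return 8.0 + 0.8 * t

def p_fail(t):
    # P(stress > strength) for normally distributed stress.
    z = (strength(t) - MEAN_STRESS) / SD_STRESS
    return 0.5 * math.erfc(z / math.sqrt(2))

def expected_cost(t):
    return COST_PER_MM * t + FAILURE_COST * p_fail(t)

# Decision theory: minimize expected total cost over a thickness grid.
t_dt = min((t / 10 for t in range(0, 301)), key=expected_cost)

# Crude heuristic: thinnest wall whose strength exceeds the mean
# stress by 3 standard deviations, ignoring costs entirely.
t_3sigma = next(t / 10 for t in range(0, 301)
                if strength(t / 10) >= MEAN_STRESS + 3 * SD_STRESS)

print(t_dt, t_3sigma)
```

Under these assumed numbers the two rules disagree: the heuristic fixes a safety margin regardless of how costly thickness or failure actually are, which is exactly the information the decision-theory answer uses.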

In his excellent 2010 book How to Measure Anything, Douglas W. Hubbard reports that this pattern is common:

Many organizations employ fairly sophisticated risk analysis methods on particular problems; … But those very same organizations do not routinely apply those same sophisticated risk analysis methods to much bigger decisions with more uncertainty and more potential for loss. … If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of proper risk analysis. … Almost all of the most sophisticated risk analysis is applied to less risky operational decisions while the riskiest decisions—mergers, IT portfolios, big research and development initiatives, and the like—receive virtually none.

In fact, while standard decision theory has long been extremely well understood and accepted by academics, most orgs find a wide array of excuses to avoid using it to make key decisions:

For many decision makers, it is simply a habit to default to labeling something as intangible [=unmeasurable] … committees were categorically rejecting any investment where the benefits were “soft.” … In some cases decision makers effectively treat this alleged intangible as a “must have” … I have known managers who simply presume the superiority of their intuition over any quantitative model … What they seem to take away from these experiences is that to use the methods from statistics one needs a lot of data, that the precise equations don’t deal with messy real-world decisions where we don’t have all of the data, or that one needs a PhD in statistics to use any statistics at all. … I have at times heard that “more advanced” measurements like controlled experiments should be avoided because upper management won’t understand them. … they opt not to engage in a smaller study—even though the costs might be very reasonable—because such a study would have more error than a larger one. … Measurements can even be perceived as “dehumanizing” an issue. There is often a sense of righteous indignation when someone attempts to measure touchy topics, such as the value of an endangered species or even a human life. … has spent much time refuting objections he encounters—like the alleged “ethical” concerns of “treating a patient like a number” or that statistics aren’t “holistic” enough or the belief that their years of experience are preferable to simple statistical abstractions. … I’ve heard the same objections—sometimes word-for-word—from some managers and policy makers. … There is a tendency among professionals in every field to perceive their field as unique in terms of the burden of uncertainty. The conversation generally goes something like this: “Unlike other industries, in our industry every problem is unique and unpredictable,” or “Problems in my field have too many factors to allow for quantification,” and so on. 
… Resistance to valuing a human life may be part of a fear of numbers in general. Perhaps for these people, a show of righteous indignation is part of a defense mechanism. Perhaps they feel their “innumeracy” doesn’t matter as much if quantification itself is unimportant, or even offensive, especially on issues like these.

Apparently most for-profit firms could make substantially more profits if only they’d use simple decision theory to analyze key decisions. Execs’ usual excuse is that key parameters are unmeasurable, but Hubbard argues convincingly that this is just not true. He suggests that execs seek to excuse poor math abilities, but that seems implausible as an explanation to me. I say that their motives are more political: execs and their allies gain more by using other, more flexible decision-making frameworks for key decisions, frameworks with more wiggle room to help them justify whatever decision happens to favor them politically. Decision theory, in contrast, threatens to more strongly recommend a particular hard-to-predict decision in each case. As execs gain when the orgs under them are more efficient, they don’t mind decision theory being used down there. But they don’t want it up at their level and above, for decisions that determine whether they and their allies win or lose. I think I saw the same sort of effect when trying to get firms to consider prediction markets; those were okay for small decisions, but for big ones they preferred estimates made by more flexible methods. This overall view is, I think, also strongly supported by the excellent book Moral Mazes by Robert Jackall, which goes into great detail on the many ways that execs play political games while pretending to promote overall org efficiency.

If I ever did a book on The Elephant At The Office: Hidden Motives At Work, this would be a chapter.

Below the fold are many quotes from How to Measure Anything:

the word “intangible” has also come to mean utterly immeasurable in any way at all, directly or indirectly. It is in this context that I argue that intangibles do not exist—or, at the very least, could have no bearing on practical decisions. …

For many decision makers, it is simply a habit to default to labeling something as intangible …

committees were categorically rejecting any investment where the benefits were “soft.” …

major investments were approved with no plans for measuring their effectiveness after they were implemented. …

In some cases decision makers effectively treat this alleged intangible as a “must have” so that the question of the degree to which the intangible matters is never considered in a rational, quantitative way. …

I have known managers who simply presume the superiority of their intuition over any quantitative model …

Computing and using the economic value of measurements to guide the measurement process is, at a minimum, where a lot of business measurement methods fall short. …

What they seem to take away from these experiences is that to use the methods from statistics one needs a lot of data, that the precise equations don’t deal with messy real-world decisions where we don’t have all of the data, or that one needs a PhD in statistics to use any statistics at all. …

I have at times heard that “more advanced” measurements like controlled experiments should be avoided because upper management won’t understand them. …

they opt not to engage in a smaller study—even though the costs might be very reasonable—because such a study would have more error than a larger one. …

Usually things that seem immeasurable in business reveal themselves to much simpler methods of observation, once we learn to see through the illusion of immeasurability. …

The clarification chain is just a short series of connections that should bring us from thinking of something as an intangible to thinking of it as tangible. First, we recognize that if X is something that we care about, then X, by definition, must be detectable in some way.… if this thing is detectable, then it must be detectable in some amount. If you can observe a thing at all, you can observe more of it or less of it. …

I ask who thinks the sample is “statistically significant.” Those who remember something about that idea seem only to remember that it creates some kind of difficult threshold that makes meager amounts of data useless …

“If you don’t know what to measure, measure anyway. You’ll learn what to measure.”… the objection “A method doesn’t exist to measure this thing” is never valid. …

measurements can even be perceived as “dehumanizing” an issue. There is often a sense of righteous indignation when someone attempts to measure touchy topics, such as the value of an endangered species or even a human life, …

Meehl … has spent much time refuting objections he encounters—like the alleged “ethical” concerns of “treating a patient like a number” or that statistics aren’t “holistic” enough or the belief that their years of experience are preferable to simple statistical abstractions. … I’ve heard the same objections—sometimes word-for-word—from some managers and policy makers. …

Four Useful Measurement Assumptions: It’s been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think. …

I’ve noticed that there is a tendency among professionals in every field to perceive their field as unique in terms of the burden of uncertainty. The conversation generally goes something like this: “Unlike other industries, in our industry every problem is unique and unpredictable,” or “Problems in my field have too many factors to allow for quantification,” and so on. I’ve done work in lots of different fields, and some individuals in most of these fields make these same claims. So far, each one of them has turned out to have fairly standard measurement problems not unlike those in other fields. …

When managers think about measuring productivity, performance, quality, risk, or customer satisfaction, it strikes me as surprisingly rare that the first place they start is looking for existing research on the topic. …

When I asked bank managers what decisions these reports supported, they could identify only a few cases where the elective reports had, or ever could, change a decision. Perhaps not surprisingly, the same reports that could not be tied to real management decisions were rarely even read. …

The data on the dashboard was usually not selected with specific decisions in mind based on specific conditions for action. …

So the question is never whether a decision can be modeled or even whether it can be modeled quantitatively. …

Even just pretending to bet money significantly improves a person’s ability to assess odds. In fact, actually betting money turns out to be only slightly better than pretending to bet. …

Why is it that about 5% of people are apparently unable to improve at all in calibration training? Whatever the reason, it often turns out not to be that relevant. Virtually every single person we ever relied on for actual estimates was in the first two groups and almost all were in the first ideally calibrated group. Those who seemed to resist any attempt at calibration were, even before the testing, almost never considered to be the relevant expert or decision maker for a particular problem. …

there is apparently a strong placebo effect in many decision analysis and risk analysis methods. Managers need to start to be able to tell the difference between feeling better about decisions and actually having better track records over time. …

Many organizations employ fairly sophisticated risk analysis methods on particular problems; … But those very same organizations do not routinely apply those same sophisticated risk analysis methods to much bigger decisions with more uncertainty and more potential for loss. …

If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of proper risk analysis. …

Almost all of the most sophisticated risk analysis is applied to less risky operational decisions while the riskiest decisions—mergers, IT portfolios, big research and development initiatives, and the like—receive virtually none…

When I ran the macro that computed the value of information for each of these variables, I began to see this pattern: The vast majority of variables in almost all models had an information value of zero. That is, the current level of uncertainty about most variables was acceptable, and no further measurement was justified. The variables that had high information values were routinely those that the client never measured. In fact, the high-value variables often were completely absent from previous business cases. (They excluded chance of project cancellation or the risk of low user adoption.) The variables that clients used to spend the most time measuring were usually those with a very low (even zero) information value (i.e., it was highly unlikely that additional measurements of the variable would have any effect on decisions). …

At the time of this writing, however, I’ve applied this same test to more than 60 additional projects and I found out that this effect is not limited to IT. I noticed the same phenomena arise in projects relating to research and development, military logistics, the environment, venture capital, facilities expansion, and the CGIAR sustainable farming model. …

First people measure what they know how to measure or what they believe is easy to measure.… the things you measured the most in the past have less uncertainty, and therefore less information value, when you need to estimate them for future decisions.…

Managers might tend to measure things that are more likely to produce good news. After all, why measure the benefits if you have a suspicion there might not be any?…

if you aren’t computing the value of a measurement, you are very likely measuring some things that are of little or no value and ignoring some high-value items.…

The 80 or more major risk/return analyses I’ve done in the past 20 years consisted of a total of over 7,000 individual variables, or an average of almost 90 variables per model. Of those 7,000 variables, a little over 180 (about 2 per model) required further measurement according to the information value calculation. Most of these, about 150, had to be decomposed further to find a more easily measured component of the uncertain variable. Other variables offered more direct and obvious methods of measurement, for example, having to determine the gas mileage of a truck on a gravel road (by just driving a truck with a fuel-flow meter) or estimating the number of bugs in software (by inspecting samples of code). But almost a third of the variables that were decomposed required no further measurement after decomposition. In other words, about 25% of the high-value measurements were addressed with decomposition alone.…

the EVPI is an upper limit on what you should be willing to spend even theoretically. But the best measurement expenditure is probably far below this maximum. As a ballpark estimate, I shoot for spending approximately 10% of the EVPI on a measurement and, depending on the circumstances, sometimes even as low as 2%. I use this estimate for three reasons: The EVPI is the value of perfect information. …

Those of us who measure such things as the value of life and health have to face a misplaced sense of righteous indignation. Some studies have shown that about 25% of people in environmental value surveys refused to answer on the grounds that “the environment has an absolute right to be protected” regardless of cost. …

Resistance to valuing a human life may be part of a fear of numbers in general. Perhaps for these people, a show of righteous indignation is part of a defense mechanism. Perhaps they feel their “innumeracy” doesn’t matter as much if quantification itself is unimportant, or even offensive, especially on issues like these. …
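Hubbard’s EVPI logic quoted above, and his rule of thumb of spending roughly 10% of the EVPI on a measurement, can be sketched with a small Monte Carlo example. Everything here (the project, its cost, the benefit distribution) is invented for illustration:

```python
import random

random.seed(0)

# Hypothetical decision: launch or skip a project whose net benefit
# depends on one uncertain variable (annual benefit in $M).
COST = 0.9                                                # cost to launch
draws = [random.gauss(1.0, 0.8) for _ in range(100_000)]  # simulated benefit

# Best expected value when acting under current uncertainty:
# commit to one action before seeing the outcome.
ev_launch = sum(b - COST for b in draws) / len(draws)
best_now = max(ev_launch, 0.0)                # 0.0 = expected value of skipping

# With perfect information we could pick the best action per outcome.
ev_perfect = sum(max(b - COST, 0.0) for b in draws) / len(draws)

# EVPI: the most any measurement could possibly be worth.
evpi = ev_perfect - best_now
print(f"EVPI: {evpi:.3f}M; ~10% measurement budget: {0.1 * evpi:.3f}M")
```

Running the same calculation per variable in a larger model is what reveals Hubbard’s pattern: most variables turn out to have an information value of zero, so no further measurement of them is justified.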

What Function Music?

Posted on 2012-06-09

Darwin argued that music evolved mainly by sexual selection through mate choice—and that we’re uncomfortable acknowledging that fact. (more)

My students … don’t talk about music very eagerly. In class I can get a conversation going about God with no problem. And students love talking about alcohol and its effects on the human mind and spirit, theirs in particular. A conversation about sex is easy to start and quickly goes way further than I’d imagine — and sometimes further than I want. … [Yet] when I ask what role music plays in their lives or why they listen to what they do, there is silence. (more)

I can also feel in myself a reluctance to analyze music, a fear that awareness might kill something precious. Yet this also suggests there’s an important hypocrisy here, a truth we’d rather not face. Digging, I found a summary of music’s functions:

Seven main functions of music listening were identified: music in the background, memories through music, music as diversion, emotions and self-regulation through music, music as reflection of self and social bonding through music. (more detail below)

Anything that we can do several different ways can help to identify us and our groups. Anything we can do together can bond us. And anything that can be done well or badly can signal ability. Any different activity could be a diversion. And any stimulation can sit in the background while we do other things. Because these functions can apply to most anything, they seem like last-resort explanations for why we developed a musical capacity. More likely, such functions were layered onto an activity that had a more unique base function.

It certainly feels helpful that music can adjust our mood and emotions. The question is why we’d be built with something so expensive as our mood adjustment knobs. If we needed conscious control of mood, why not just evolve a direct control? I’m also struck by how important lyrics are to music – none of the above functions explain why we prefer songs with meaningful words.

Compared to other sorts of speech, we especially like stories to be accompanied by music. And the lyrics of songs are similar to stories in many ways. This suggests that stories and music perform similar or complementary functions.

If the lower levels of our minds tend to treat story events like real events, then we can use our stories to influence our beliefs about what happens in the real world. By consuming stories socially, and preferring stories preferred by our leaders and created by impressive story tellers, we coordinate to believe what our associates believe, and what our high status leaders choose for us to believe, even against the evidence of our eyes. And by letting others see the stories we consume, we can signal this choice to others.

Thus we can use stories to signal our allegiance to our leaders’ and groups’ norms. Of course if some people evolved an ability to prevent stories from influencing their expectations about real events, they’d be able to fake this conformity signal. Which might be why we feel revulsion for “inhuman” folks who are not moved by stories. Similarly, imagine music can directly influence our emotions and moods, but that we have only limited direct conscious control over such things. In this case by associating music with people and verbal claims, we can influence our attitudes toward such things. And by sharing music with our groups, and preferring music preferred by our leaders and created by impressive artists, we can coordinate to have the attitudes our associates do, and the ones our high status leaders prefer. By consuming music together, we can signal this choice to others. And we’d naturally feel revulsion against those who could fake this signal, because music didn’t influence their moods.

Homo hypocritus likes to think that his beliefs and attitudes are based only on his evidence; he doesn’t believe things just to please his associates or leaders. But he in fact needs to believe what his associates do, and what his leaders like, often against his evidence. And he needs to signal this fact to his associates and leaders.

By visibly exposing himself to shared stories and music, that directly influence his beliefs, while consciously believing that stories and music do not change his beliefs, homo hypocritus can accomplish all these things. This can also explain why we are reluctant to seriously examine the function of music (and stories) in our lives.

Those promised function details:

Seven main functions of music listening were identified: music in the background, memories through music, music as diversion, emotions and self-regulation through music, music as reflection of self and social bonding through music. Across all sub-samples the self-regulation function was the most important personal use of music, bonding was the most important social use of music and the expression of cultural identity was the most salient cultural function of music regardless of listeners’ cultural background. …

Music is often used as a background; … it can also fill gaps and help pass the time. … Music can bring back memories of events, life stages, relationships and emotions or memories of loved ones. … Music is … used for feeling good and enjoying oneself. … Music has the capacity to convey emotions and to trigger emotions or emotional and physical reactions. Particular songs are … specifically chosen … in order to express a particular emotional state of the participants. … Music can help to relax and relieve stress and to enhance creativity and intellectual focus. Listening to music can reduce loneliness, while offering a means of escape. … Certain music can assist in venting frustration and aggression. … It allows for the expression of a person’s individuality and lifestyle. … music expresses and influences values and attitudes; it can act as inspiration. … music indicates social identity by signifying group membership, for instance, belonging to a particular social group (like alternative or rave) or the current ‘cool group’ in school. … Music can provide an opportunity for a collective activity, such as discussing and listening to music or going to concerts together. These shared musical activities can … create a special bond. (more)

Politics isn’t about Policy

Posted on 2008-09-21

Food isn’t about Nutrition
Clothes aren’t about Comfort
Bedrooms aren’t about Sleep
Marriage isn’t about Romance
Talk isn’t about Info
Laughter isn’t about Jokes
Charity isn’t about Helping
Church isn’t about God
Art isn’t about Insight
Medicine isn’t about Health
Consulting isn’t about Advice
School isn’t about Learning
Research isn’t about Progress
Politics isn’t about Policy

The above summarizes much of my contrarian world view. (What else should go on this list?) When I say “X is not about Y,” I mean that while Y is the function commonly said to drive most X behavior, in fact some other function Z drives X behavior more. I won’t support all these claims here; for today, let’s just talk politics.

High school students are easily engaged to elect class presidents, even though they have little idea what if any policies a class president might influence. Instead such elections are usually described as “popularity contests.” That is, these elections are about which school social factions are to have higher social status. If a jock wins, jocks have higher status. If your girlfriend’s brother wins, you have higher status, etc. And the fact that you have a vote says that others should take you into account when forming coalitions – you are somebody.

Civics teachers talk as if politics is about policy, that politics is our system for choosing policies to deal with common problems. But as Tyler Cowen suggests, real politics seems to be more about who will be our leaders, and what coalitions will rise or fall in status as a result. Election media coverage focuses on characterizing the candidates themselves – their personalities, styles, friends, beliefs, etc. You might say this is because character is a cheap clue to the policies candidates would adopt, but I don’t buy it. The obvious interpretation seems more believable – as with high school class presidents, we care about policies mainly as clues to candidate character and affiliations. And to the extent we consider policies not tied to particular candidates, we mainly care about how policies will affect which kinds of people will be respected how much.

For example, we want nationalized medicine so poor sick folks will feel cared for, military actions so foreigners will treat us with respect, business deregulation as a sign of respect for hardworking businessfolk, official gay marriage as a sign we accept gays, and so on.

This perspective explains why voters tend to prefer proportional representation, why many refuse to vote for any candidate when none have earned their respect, and why so few are interested in institutional reforms that would plausibly give more informed policies. (I’m speaking on such reform at a Trinity College symposium Monday afternoon.)

In each case where X is commonly said to be about Y, but is really more about Z, many are well aware of this but say we are better off pretending X is about Y. You may be called a cynic to say so, but if honesty is important to you, join me in calling a spade a spade.

Views Aren’t About Sights

Posted on 2021-05-22

Regarding windows or patios with nice beach or city views, on days when those views are nice, respondents in two polls estimate that at any given time only 1.3% and 0.8% of such places are actually occupied by people enjoying these views. Which makes one wonder why people bother to buy exclusive use, instead of sharing. For example, ten tenants could share a single view spot for a tenth the price, and hardly ever have conflicts over who uses it when. We similarly see people owning boats and RVs that they hardly ever use and could instead rent more cheaply.

A related phenomenon is that most people strongly prefer to pay a monthly or annual fee for phones or internet, instead of paying per minute of use, even though per-usage payment can give better incentives for thriftiness. Similar for movies and TV shows. And, recently, e-books. And country clubs. Also, apparently a secret to Amazon’s success was that people much prefer to pay for shipping once per year than to pay each time they ship.

Many justifications are offered for these habits, some of them sensible. But surely a big fraction of all this is explained by signaling; people want others to know of and envy that they can afford to buy a view instead of renting it, and can afford the monthly phone fee, instead of having to worry about each call.

But if so, why don’t we buy more things via all-you-want-for-an-annual-fee? Like food or clothes or planes. You might say that these have high marginal costs, but then so do views and boats and RVs and country clubs.

I suspect part of the problem is that it just takes time to build up the scale required for the business arrangements which let people buy many things at marginal cost for an annual fee. Which makes me more optimistic about the future prospects for such programs. Places like Costco go somewhat in that direction, but we could go a lot further.

Imagine large menus of products where you can get as much of each one as you want (for personal use only) at their marginal cost, if you pay a corresponding annual fee. The higher an annual fee you pay, the larger a menu of things you can get at marginal cost. This arrangement not only gets you stuff more efficiently, grabbing anything that’s worth more to you than its marginal cost, but this also lets you signal your wealth by the menus you can afford.

It may take a lot of coordination to get all these suppliers to agree to deals where they sell stuff at marginal cost and get some fraction of the annual fee. And it takes some enforcement to prevent reselling. A single org that tries to arrange all this will face the usual scale diseconomies due to internal coordination costs. But I still think more of this is coming.

Added 23May: Many say that they feel psychological aversions to renting, sharing, or paying per usage, aversions that go beyond concrete time and effort costs. They say this implies we aren’t avoiding these things to signal wealth. But that confuses different levels of causation. It could be that the way that our minds induce us to signal wealth is via making us feel these aversions.

Why Do Bets Look Bad?

Posted on 2013-07-08

Most social worlds lack a norm of giving much extra respect to claims supported by offers to bet. This is a shame, because such norms would reduce insincere, untruthful claims, and so make for more accurate beliefs in listeners. But instead of advocating for change, in this post I wonder: why are such norms rare? Yes there are random elements in which groups have which norms, and yes given a local norm that doesn’t respect bets it looks weird to offer bets there. But in this post I’m looking more to explain which norms appear where, and less who follows which norms.

Bets have been around for a long time, and by now most intellectuals understand them, and know that all else equal those who really believe more strongly are willing to bet more. So you might think it wouldn’t be that hard for a betting norm to get added on to all other local norms and cultural factors; all else equal respect bets as showing confidence. But if this happens it must be counter-balanced by other effects, or bets wouldn’t be so rare. What are these other effects?

While info often gets overtly shared in casual conversation, most of that info doesn’t seem very useful. I thus conclude that casual conversation isn’t mainly about overtly sharing info. So I assume the obvious alternative: casual conversation is mostly about signaling (which is covert or indirect info sharing). But still the puzzle remains: whatever else we signal via conversation, why don’t we typically expect a betting offer to signal overall-admirable confidence in a claim?

One obvious general hypothesis to consider here is that betting signals typically conflict with or interact with other signals. But which other signals, and how? In the rest of this post I explore a few bad-looking features that bets might signal:

  • Sincerity – In many subcultures it looks bad to care a lot about most any topic of casual conversation. Such passion suggests that you just don’t get the usual social functions of such conversations. Conversationalists ideally skip from topic to topic, showing off their wits, smarts, loyalties, and social connections, but otherwise caring little about the truth on particular topics. Most academia communities seem to have related norms. Offers to bet, in contrast, suggest you care too much about the truth on a particular topic. Most listeners don’t care if your claim is true, so aren’t interested in your confidence. Of course on some topics people are expected to care a lot, so this doesn’t explain fewer bets there.
  • Conflict – Many actions we take are seen as signals of cooperation or conflict. That is, our actions are seen as indicating that certain folks are our allies, and that certain other folks are our rivals or opponents. A bet offer can be seen as an overt declaration of conflict, and thus make one look overly confrontational, especially within a group that saw itself as mainly made of allies. We often try to portray any apparent conflict in casual conversations as just misunderstandings or sharing useful info, but bets are harder to portray that way.
  • Provinciality – Bets are most common today in sports, and sport arguments and bets seem to be mostly about showing loyalty to particular teams. In sports, confrontation is more ok and expected about such loyalties. Offering to bet on a team is seen as much like offering to have a fist fight to defend your team’s honor. Because of this association with regional loyalties in sports, offers to bet outside of sports are also seen as affirmations of loyalties, and thus to conflict with norms of a universal intellectual community.
  • Imprudence – Some folks are impulsive and spend available resources on whatever suits their temporary fancy, until they just run out. Others are careful to limit their spending via various simple self-control rules on how much they may spend how often on what kinds of things. Unless one is in the habit of betting often from a standard limited betting budget, bets look like unusual impulsive spending. Bettors seem to not sufficiently control their impulsive urges to show sincerity, create conflict, or signal loyalties.
  • Disloyalty – In many conversations it is only ok to quote as sources or supports people outside the conversation who are “one of us.” Since betting markets must have participants on both sides of a question, they will have participants who are not part of “us”. Thus quoting betting market odds in support of a claim inappropriately brings “them” in to “our” conversation. Inviting insiders to go bet in those markets also invites some of “us” to interact more with “them”, which also seems disloyal.
  • Dominance – In conversation we often pretend to support an egalitarian norm where the wealth and social status of speakers is irrelevant to which claims are accepted or rejected by the audience. Offers to bet conflict with that norm, by seeming to favor those with more money to bet. Somehow, who is how smart or articulate or has more free time to read are considered acceptable bases for conversation inequities. While richer folks could be expected to bet more, the conversation would have to explicitly acknowledge that they are richer, which is rude.
  • Greed – We often try to give the impression that we talk mainly to benefit our listeners. This is a sacred activity. Offering to bet money makes it explicit that we seek personal gains, which is profane. This is why folks sometimes offer to bet charity; the money goes to the winner’s favorite charity. But that looks suspiciously like bringing profane money-lenders into a sacred temple.

Last week I said bets can function much like arguments that offer reasons for a conclusion. If so, how do arguments avoid looking bad in these ways? Since the cost to offer an argument is much less than the cost to offer a bet, arguments seem less imprudent and less a show of sincerity. Since the benefits from winning arguments aren’t explicit, one can pretend to be altruistic in giving them. Also, you can pretend an argument is not directed at any particular listener, and so is not a bid for conflict. Since most arguments today are not about sports, arguments are less likely to evoke the image of a sports-regional-signal. As long as you don’t quote outsiders, arguments seem less an invitation to invoke or interact with outsiders. If we are to find a way to make bets more popular, we’ll need to find ways to let people make bets without sending these bad-looking signals.

Added: It is suspicious that I didn’t do this analysis much earlier. This is plausibly due to the usual corrupting effect of advocacy on analysis; because I advocated betting, I analyzed it insufficiently.

Homo Hypocritus

Posted on 2010-03-23

The standard social brain theory seems in conflict with standard anthropologist accounts of ancestral forager lifestyles. Might “man the sly rule bender” resolve this conflict?

Why do we have ginormous brains? Animals tend to have big brains when they have big bodies, but beyond that the main brain pattern is social: bigger brains are found in birds and mammals that compete with predators or prey, and who manage pair-bonding mate relations. The extra costs of big brains are outweighed by the benefits of not being out-witted by others. Primates (and hyenas) hit on the trick of reusing pair-bonding skills to manage friendships in large social groups. Primates have huge expensive brains, which are bigger in species with larger social groups, and these groups spend more of their time managing social relations. Bigger groups better protect against predators, though the coalition politics of dominance gets more complex in bigger groups.

Primates not only manage relations and coalitions, but they also track the relations and coalitions of others. They are adept at judging how to help their coalitions, and when to switch sides. The top chimp is often not the strongest, but instead the one with the strongest coalition, which gets to dominate food and mating, and stay best protected from predators; chimp investments in big brains often pay off handsomely.

Humans have the biggest primate brains of all. Over the last two million years hominid brains grew more where climates were variable, but they grew most where population densities were high. This suggests that human brains were also big mainly due to social pressures. The “mating mind” sexual selection hypothesis seems at odds with this density effect, and with the more general fact that polygamous species tend to have smaller brains. “Man the tool user” stories seem to confuse broad group gains with individual benefits – smaller brains seem sufficient for copying others’ tool skills. But even if social pressures were key, which pressures exactly?

Isolated nomadic forager bands today are “fossils” with crucial clues about our distant ancestors. Anthropologists who study them report that overt dominance is rare, and long distances make war rare (as 4 million year old fossils suggest). Foragers live in tight quarters and use language to express and enforce social norms on food sharing, non-violence, mating freedom, communal decision making, and norm enforcement. Anger, bragging, giving orders, and anything remotely resembling dominance among men is punished by avoidance, exile, and death as required. Humans’ unusual hidden female fertility also limits male dominance temptations.

The puzzle here is that consistent enforcement of such norms seems to drastically reduce the payoff to expensive coalition-politics-savvy brains. If you can’t collude to grab the food or the women, and everyone is treated fairly based on their contributions, why bother to be so clever? Yes, some brain innovations were required to support language, and maybe they wouldn’t have occurred in a small brain, but after that innovation human brains could have shrunk (as perhaps with hobbits). Why did humans keep huge expensive brains?

In a messy real world, social norms expressed in language typically have many iffy boundary cases and ambiguities. How much of what sort of food of what quality offered how conveniently counts as food sharing? How big a frown is a grimace? Sex with how close a relative counts as incest? And so on. This wouldn’t matter if boundary cases were decided randomly, but that seems unlikely. Instead big brain gains come five ways:

  • Unnormed – coalition politics on acts not covered by norms.
  • Skirt – keep actions near, but not over, the edge of violating norms.
  • Cover – politics of observers on whether to report an act to others.
  • Frame – lawyer-like arguing on whether acts violate social norms.
  • Conspire – form coalitions on how to publicly interpret iffy acts.

Most norms have meta-norms against consciously trying to evade them. Self-deception should help here; foragers might sincerely believe they usually just do their job and “tell it like it is”, and then unconsciously try to act, selectively report and frame acts, and support interpretation coalitions, to their advantage. Instead of “man the tool user”, we might be better understood as “man the sly rule bender.”

Gains to rule bending could be greatly reduced via social norms with very clear simple rules. But humans seem to usually prefer complex and ambiguous rules that require “judgment” to apply. For example, foragers often have complex incest rules, forbidding a much wider range of sex partners than is needed to prevent genetic problems. And acts of sorcery are allowed to count as acts of aggression that violate social norms and must be punished, even without concrete evidence showing such acts. Both complex broad incest rules and allowing sorcery complaints greatly increase the scope for gains to large rule-bending brains, and suggest that we tend to prefer to allow such scope.

The idea that the main reason we have huge brains is to hypocritically bend rules seems to me a dramatic change in how we think about human nature. If true, it should change how we understand a great many things in psychology and social science. I’ve been obsessing about this topic for weeks, and last Thursday I ran it past Robin Dunbar, famed for his contributions to the social brain account; he said it was pretty close to his view on the subject, and he suggested the incest example.

Resolving Your Hypocrisy

Posted on 2006-12-27

Self love is more cunning than the most cunning man in the world. … Hypocrisy is the homage vice pays to virtue. – La Rochefoucauld

Humans are hypocrites. That is, we present ourselves and our groups as pursuing high motives, when more often low motives better explain our behavior. We say we invade nations to help them build democracy, rather than for revenge or security. We say we marry to help our partner, rather than to gain sex or security. We say we choose our profession to help others, and not for prestige or income. And so on. Comedians live by ridiculing such hypocrisy, but “cynics” who complain without such wit and style are despised. In contrast, we are attracted to the innocent who naively believe our hypocrisies.

Noticing the hypocrisy in others usually makes us feel morally superior. After all, we know we are not hypocrites; “I can look inside myself and see my sincerity.” But eventually experience and intelligence force some of us to face the likelihood that we are no different. At this point we can resolve our hypocrisy two ways: we can start really living up to our high ideals, or we can admit we don’t care as much as we thought about those ideals.

Most people try harder to live up to their ideals. They usually think they succeed, but mostly they just add on a few more layers of self-deception, and find themselves too busy to ponder the issue. “Sure hypocrites give to charities that don’t really help much, but my charity really does help; I read an article that says so. Sorry; gotta go.”

We want to think well of ourselves, and this gives us a limited ability to make ourselves want the things we think we should want. And the young are more naturally innocent, with a stronger ability to remake their wants, at least toward ideals others would applaud. But this effect fades with time, and we overestimate both how much we can change our wants, and how much we want to.

One of our ideals is to be honest with ourselves. Is this honesty ideal a substitute or a complement for other ideals? On the one hand, honesty should help us to use resources more effectively to actually achieve other ideals, versus the appearance of achieving them. On the other hand, I cannot reasonably expect anyone willing to try to live up to this ideal of honesty to have much will power left over to live up to other ideals.

I expect people who are actually more honest will tend to have lower expectations about achieving ideal ends, though they may (or may not) actually achieve such ideals more.

Added: Our conscious minds seem like a public relations department (PRD) of our minds. A corporate PRD tries to find a coherent story to make it look like corporate actions came from high motives. The PRD tries to have this high minded story recorded in official histories, legal testimony, and accounting records. Corporate PRDs have a limited ability to influence corporate policy; “Boss, doing that will make us look real bad.” But corporate profits more fundamentally drive behavior.

Similarly, our conscious minds record and tell high-minded stories about our actions. When image is important enough, we can make real sacrifices to ensure our actions fit closely with our conscious self-image. But we usually need only minor sacrifices. My guess is that a cost-minimizing PRD forced to be more honest will rely more on admitting to low motives, and less on switching from low to high motives.

Errors, Lies, and Self-Deception

Posted on 2009-06-15

About a recent European Journal of Personality article:

The participants recorded a one minute television commercial, … then watched … themselves, having been given guidance on non-verbal cues that can reveal how extraverted or introverted a person is. … They were then asked to rate their own personality. … The participants’ extroversion scores on the implicit test showed no association with their subsequent explicit ratings of themselves, and there was no evidence either that they’d used their non-verbal behaviours (such as amount of eye contact with the camera) to inform their self-ratings.

In striking contrast, outside observers who watched the videos made ratings of the participants’ personalities that did correlate with those same participants’ implicit personality scores, and it was clear that it was the participants’ non-verbal behaviours that mediated this correlation … Two further experiments showed that this general pattern of findings held even when participants were given a financial incentive.

[Folks seem] extremely reluctant to revise their self-perceptions, even in the face of powerful objective evidence. … Participants seemed able to use the videos to inform their ratings of their “state” anxiety (their anxiety “in the moment”) even while leaving their scores for their “trait” anxiety unchanged.

(Hat tip to Michael Webster.) This sort of thing terrifies me. Let me explain why.

Any long complex design or calculation is subject to errors. And those who do such things regularly must get into the habit of testing and checking for such errors. This may take most of the effort, but it is at least manageable, because we expect that such errors are not very correlated with other features of interest. If something has worked ten times in a row in field tests, it will probably work the first time for a customer, at least if that customer’s environment is not too different from field test environments.

People who have to worry about spies and liars, on the other hand, have to worry more about troublesome correlations. Liars can coordinate their lies to tell a consistent story. Spies and liars can choose carefully to betray us exactly when such defections are the hardest to detect and the most expensive. So the fact that a possible spy performed reliably ten times in a row gives less confidence that he will also perform reliably the next time, if the next time is unusually important. In these cases we rely more on private info, i.e., what the spy or liar could not plausibly know. For example, if we do not let the possible spy know which are the important cases, he can’t choose only those cases to betray us. And if we can check on him at unexpected times, we might catch him in a lie.

We humans have many conscious beliefs, and we are built to have accurate ones in many situations, but in many other situations we are built to have misleading conscious beliefs, i.e., to be self-deceived. Evolution judged that such misleading beliefs would tend to help us fool our colleagues, and so better survive and reproduce. It created subconscious mental processes to manage this process of deciding when our beliefs should be accurate or misleading.

We seem almost completely defenseless against such manipulation. Yes we can try to check our conscious beliefs against outside standards, but our subconscious liars can not only choose carefully when to lie about what, but they probably also have access to all our conscious thoughts and info! They might even lie to us about whether we checked our beliefs, and what those checks found. So in principle our unconscious liars can execute extremely complex and subtle lying plans. For example, the study above suggests that such processes choose to make us blind to clues about our average public speaking anxiety, while letting us see momentary fluctuations about that average.

If our subconscious liars were as smart and thoughtful as our conscious minds, we would seem to be completely at their mercy. The situation may not be that bad, but it is not clear how we can tell just how bad the situation is; even if they had complete control, they would probably want us to think otherwise.

This is the context in which I find myself interested in “minimal rationality,” similar to minimal morality. In the limit of my being subject to very powerful subconscious liars, how can I best avoid their distortions? It seems I should then become especially distrustful of intuition, and especially interested in trustworthy processes outside myself, such as prediction markets and formal analysis. If I have a choice between two ways to make an estimate, and one of them allows more discretion by subconscious mental processes, I should try to go with the other choice if possible. If the data is pretty clear and theory needs a lot of judgment calls to get an answer, I go with the data. If the data is messy and needs judgment calls while standard theory gives a pretty clear answer, I go with that theory.

Of course this minimal rationality approach makes me subject to my subconscious lying about which estimates allow more subconscious discretion. So I need to be especially careful about those judgments. But what else can I do?

Many folks figure that if evolution planned for them to believe a lie, they might as well believe a lie; that probably helps them achieve their goals. But I want, first and foremost, to believe the truth.

Enforce Common Norms On Elites

Posted on 2019-02-20

In my experience, elites tend to differ in how they adhere to social norms: their behavior is more context-dependent. Ordinary people use relatively simple strategies of being generally nice, tough, silly, serious, etc., strategies that depend on relatively few context variables. That is, they are mostly nice or tough overall. In contrast, elite behavior is far more sensitive to context. Elites are often very nice to some people, and quite mean to others, in ways that can surprise and seem strange to ordinary people.

The obvious explanation is that context-dependence gives higher payoffs when one has the intelligence, experience, and social training to execute this strategy well. When you can tell which norms will tend to be enforced how, when, and by whom, then you can adhere strongly to the norms most likely to be enforced, and neglect the others. And skirt right up to the edge of enforcement boundaries. For weakly enforced norms, your power as an elite gives you more ways to threaten retaliation against those who might try to enforce them on you. And for norms that your elite associates are not particularly eager to enforce, you are more likely to be given the benefit of the doubt, and also second and third chances even when you are clearly caught.

One especially important human norm says that we should each do things to promote a general good when doing so is cheap/easy, relative to the gains to others. Applied to our systems, this norm says that we should all do cheap/easy things to make the systems that we share more effective and beneficial to all. This is a weakly enforced norm that elite associates are not particularly eager to enforce.

And so elites do typically neglect this system-improving norm more. Ordinary people look at a broken system, talk a bit about how it might be improved, and even make a few weak moves in such directions. But ordinary people know that elites are in a far better position to make such moves, and they tend to presume that elites are doing what they can. So if nothing is happening, probably nothing can be done. Which often isn’t remotely close to true, given that elites usually see the system-improving norm as one they can safely neglect.

Oh elites tend to be fine with getting out in front of a popular movement for change, if that will help them personally. They’ll even take credit and pretend to have started such a movement, pushing aside the non-elites who actually did. And they are also fine with taking the initiative to propose system changes that are likely to personally benefit themselves and their allies. But otherwise elites give only lip service to the norm that says to make mild efforts to seek good system changes.

This is one of the reasons that I favor making blackmail legal. That is, while one might have laws like libel against making false claims, and laws against privacy invasions such as posting nude pics or stealing your passwords, if you are going to allow people to tell true negative info that they gain through legitimate means, then you should also let them threaten to not tell this info in trade for compensation. Legalized blackmail of this sort would have only modest effects on ordinary people, who don’t have much money, and who others aren’t that interested in hearing about. But it would have much stronger effects on elites; elites would be found out much more readily when they broke common social norms. They’d be punished for such violations either by the info going public, or by their having to pay blackmail to keep them quiet. Either way, they’d learn to adhere much more strongly to common norms.

Yes, this would cause harm in some areas where popular norms are dysfunctional. Such as norms to never give in to terrorists, or to never consider costs when deciding whether to save lives. Elites would have to push harder to get the public to accept norm changes in such areas, or they’d have to follow dysfunctional norms. But elites would also be pushed to adhere better to the key norm of working to improve systems when that is cheap and easy. Which could be a big win.

Yes, trying to improve systems can hurt when proposed improvements are evaluated via naive public impressions of what behavior works well. But improving via new small-scale trials that are scaled up only when the smaller versions work well is much harder to screw up. We need a lot more of that.

Norms aren’t norms if most people don’t support them, via at least not disputing the claim that society is better off when they are enforced. If so, most people must say they expect society to be better off when we find more cost-effective ways to enforce current norms. Such as legalizing blackmail. This doesn’t necessarily result in our choosing to enforce norms more strictly, though this may often be the result. Yes, better norm enforcement can be bad when norms are bad. But in that case it seems better to persuade people to change norms, rather than throwing monkey-wrenches into the gears of norm enforcement.

So let’s hold our elites more accountable to our norms, listen to them when they suggest that we change norms, and especially enforce the norm of working to improve systems. Legalized blackmail could help with getting elites to adhere more closely to common norms.

Identity Norms

Posted on 2019-04-15

Over the weekend I did a series of Twitter polls on identity. Seeing a survey showing that 74% of blacks but only 15% of whites find race to be central to their identity, I asked if this attitude is good for either group, and found that 83% saw it as bad for both groups. Asking a similar question on sex, answers were more split, with 50% saying it is bad for both and 43% saying it is good for both. In both the race and sex cases, less than 8% said it was good for one group but bad for the other.

I then picked 16 features and asked which one is best for most people to treat as most central to their identity. I got these relative weights: personality 28%, family 14%, smarts 8%, fav hobby 8%, ideology 7%, job 7%, age 6%, religion 5%, gender 4%, class 3%, race 2.2%, urban area 1.6%, fav fiction 0.7%, looks 0.7%.

Finally, I asked if seeing someone else treating a feature as central to their identity tempts you more (or less) to treat it as central to your identity, and how that depends on whether they have the same or a different value of that feature from you. I found that for features we approve of for identity, like personality, family, or favorite hobby, people think they’ll make a feature more central when they see others treat it as central, and that happens more when those others share their feature value. But for features we disapprove of for identity, like race, gender, or class, it was the opposite; seeing others treat it as central makes them less likely to treat it as central, an effect that is stronger when those others have a different feature value.

To make sense of these results, let me invoke two theories of identity, and two relevant social norms.

One theory is that identity is a way to simplify ourselves to be more easily understood and predicted:

We are built to find a simple story we can project about who we are that will let others predict us well. This story includes what we like, what we are good at, how we decide who we are loyal to, and so on. Such stories are naturally more than a few stats but less than all our details. … Early in our lives we search for a story that fits well with our abilities and opportunities. In our unstable youth we adjust this story as we learn more, but we reduce those changes as we start to make big life choices, and want to appear stable to our new associates.

Another theory is that identity is a way to coordinate on our social/political coalitions; we ally with folks like us. Sarah Constantin:

Dasein is … self-definition with respect to a social context. Where do I fit in society? Who is my tribe? Who am I relative to other people? What’s my type? “Identifying as” always includes an element of misdirection. Merely describing yourself factually (“I was born in 1988”) is not Dasein. Placing an emphasis, exaggerating, cartoonifying, declaring yourself for a team, is Dasein. But when you identify as, you say “I am such-and-such”, as though you were merely describing. … One of the qualities of Dasein is that it’s very very stealthy, and it wants everything to be about Dasein, so it winds up muddying the waters, even when you don’t intend it to. … Dasein can mess up the attempt to solve social problems. … Sexual harassment gets perceived as a flag for pink-flavored people to wave, and if you’re not pink-flavored, you’re not the target market, so you don’t take it seriously.

One common human norm is that sub-group coalitions are mildly illicit. We aren’t supposed to break into factions that fight other factions; we are supposed to all work together toward common goals, and treat each other as individuals. As with other norms against fighting, it is more okay for a group to defend itself against attacks from others, but you aren’t supposed to start a fight.

This norm against factions explains a lot of the above poll data. Regarding what features to have as central to your identity, we approve of features which are actually useful to predict individual behavior, features where people with different feature values tend to complement each other, and features which are hard to use for coalitions because they are too granular (e.g., families). In contrast, we disapprove of features that could more easily be used, and that have recently been used, as the basis of factional fights.

People who treat less approved features as more central to their identity compensate by claiming that there is already a pre-existing faction fight along that feature in which they are the underdogs; the other side started the fight, and isn’t fighting fair (e.g., via dominance and not prestige). They invoke our common human norm that requires independent observers to support the side of a fight that is favored by justice and fairness.

Combining these theories and norms we can say that we have a licit and an illicit reason to choose identities: simplifying ourselves and joining coalitions. We often pretend to do the former while we actually do the latter. And when it gets too obvious that we are doing the latter, we try the excuses that they started it or that they aren’t fighting fair.

From all this I conclude that we have a limited tolerance for identity politics. The more different features that become a basis for explicit coalitional fights, the less happy we will all become, and the less tolerance we will have for each fight. We can together only handle a few big factional fights at any one time, and so we’ll have to set a high bar for how clear is the evidence in each case that they started it and are not fighting fair. And when we do see justice and fairness as clearly favoring one side of a fight, we’ll want to aid that side, make justice happen, and then end the fight.

Exclusion As A Substitute For Norms, Law, & Governance

Posted on 2017-12-18

Hell may not be other people, but worry sure is. That is, what we worry most about is what other people might do to us. People at the office, near our home, at the store, on the street, and even at church.

To reduce our worries, we can rely on norms, law, and governance. That is, to discourage bad behavior, we can encourage stronger informal social rules, we can adopt more formal legal rules, and we can do more with complex governance mechanisms.

In addition, we can rely on a simple and robust ancient solution: exclusion. That is, we can limit who is allowed within the circles we travel. We can use exclusion to limit who lives in our apartment complex, who shows up at the parties we attend, and who works in a cubicle near us.

Now the modern world tends to say that it disapproves of exclusion. The bad ancient world did much gossiping about what types of people could be trusted how, and then it relied a lot on the resulting shared judgments within their norms, law, and governance. We today have instead been trying to expunge such judgments from our formal systems; they are supposed to treat everyone equally without much reference to the groups to which they belong.

In addition, we’ve become more wary of using harsh punishments, like torture, death, or exile. And we are more wary of using corruptible quick and dirty evaluations within our norms, law, and governance. For example, we have raised our standards for shunning neighbors, pulling over drivers, convicting folks at court, and approving large bold governance changes. And people today seem less willing to help the law via reports and testimony. Oh we may be more willing to apply norms to people we read about on social media; but we apply them less to the people we meet around us.

As a result of these trends, many people perceive that we have on net weakened the power of our systems of norms, law, and governance to constrain bad behavior. In response, I think they’ve naturally increased their reliance on exclusion. They look more carefully at who they allow into their schools, firms, apartments, and nations. And they are less willing to give a marginal person the benefit of the doubt.

Since we don’t want to look like we are excluding on the basis of simple group affiliations, we instead try to rely on a more intuitive and informal aggregation of many weak clues. We try to get a feel for how much we like them or feel comfortable with them overall. But that need not result in more mixing.

For example, colleges that admit people just on GPA and test scores can be more open to lower class students than colleges that require applicants to have adopted the right set of extracurricular activities, and to have hit on the right themes in their essays. Lower class people can find it is easier to get good grades and scores than to track the new fashions in activities and essays.

Similarly, Tyler Cowen makes the point somewhere that when firms had simple and clear rules on dress and behavior, someone with a low class background could more easily pass as high class; they just had to follow the rules. Today, without such simple rules, people rely more on many subtle clues of clothes, conversation topics, travel locations, favorite music and movies, and so on. Someone with a lower class background finds it harder to adopt all these patterns, and so is more obviously outed and rejected as not one of us.

The point seems to apply more generally. The net effect of us today relying less on norms, law, and governance, and avoiding simple group labels in exclusion, is that we rely more on exclusion based on an intuitive feel that someone is like us.

This may be a cause of our increasing class and political polarization, at home and work. Feeling less protected by norms, law, and governance, and shy of using simple group identifiers, we are more and more surrounding ourselves with others who feel comfortably like us. We can tell ourselves that we aren’t excluding Joe or Sue because they are Republicans, or don’t have a college degree. It’s just that those sorts of people tend to give off dozens of other off-putting signs that they are just not people like us.

We would call it an outrage if society as a whole excluded them explicitly and formally because of a few simple signs. Only ignorant and rude societies do that. But we feel quite comfortable excluding them from our little part of the world based on our just not feeling comfortable with them. Hey, as anyone knows, in our part of the world it is just really important to have the right people.

Consider this another weak argument for relying more on stronger norms, law, and governance. That could let us rely less on exclusion locally. And mix up a bit more.

How Idealists Aid Cheaters

Posted on 2019-08-23

Humans have long used norms to great advantage to coordinate behavior. Each norm requires or prohibits certain behavior in certain situations, and the norm system requires that others who notice norm violations call attention to those violations and coordinate to discourage or punish them. This system is powerful, but not infinitely so. If a small enough group of people notice a minor enough norm violation, and are friendly enough with each other and with the violator, they often coordinate instead to not enforce the norm, and yet pretend that they did so. That is, they let cheaters get away with it.

To encourage norm enforcement, our social systems make many choices of how many people typically see each behavior or its signs. We pair up police in squad cars, and decide how far away in the police organizational structure internal affairs sits. Many kinds of work are double-checked by others, sometimes from independent agencies. Schools declare honor codes that justify light checking. At times, we “measure twice and cut once.”

These choices of how much to check are naturally tied to our estimates of how strongly people tend to enforce norms. If even small groups who observe violations will typically enforce them, we don’t need to check as much or as carefully, or to punish as much when we catch cheaters. But if large diverse groups commonly manage to coordinate to evade norm enforcement, then we need frequent checks by diverse people who are widely separated organizationally, and we need to punish cheaters more when we catch them.

I’ve been reading the book Moral Mazes for the last few months; it is excellent, but also depressing, which is why it takes so long to read. It makes a strong case, through many detailed examples, that in typical business organizations, norms are actually enforced far less than members pretend. The typical level of checking is in fact far too little to effectively enforce common norms, such as against self-dealing, bribery, accounting lies, fair evaluation of employees, and treating similar customers differently. Combining this data with other things I know, I’m convinced that this applies not only in business, but in human behavior more generally.

We often argue about this key parameter of how hard or necessary it is to enforce norms. Cynics tend to say that it is hard and necessary, while idealists tend to say that it is easy and unnecessary. This data suggests that cynics tend more to be right, even as idealists tend to win our social arguments. One reason idealists tend to win arguments is that they impugn the character and motives of cynics. They suggest that cynics can more easily see opportunities for cheating because cynics in fact intend to and do cheat more, or that cynics are losers who seek to make excuses for their failures, by blaming the cheating of others. Idealists also tend to say that while other groups may have norm enforcement problems, our group is better, which suggests that cynics are disloyal to our group.

Norm enforcement is expensive, but worth it if we have good social norms that discourage harmful behaviors. Yet if we underestimate how hard norms are to enforce, we won’t check enough, and cheaters will get away with cheating, canceling much of the benefit of the norm. People who privately know this fact will gain by cheating often, as they know they can get away with it. Conversely, people who trust norm enforcement to work will be cheated on, and lose.

When confronted with data, idealists often argue, successfully, that it is good if people tend to overestimate the effectiveness of norm enforcement, as this will make them obey norms more, to everyone’s benefit. They give this as a reason to teach this overestimate in schools and in our standard public speeches. And so that is what societies tend to do. Which benefits those who, even if they give lip service to this claim in public, are privately selfish enough to know it is a lie, and are willing to cheat on the larger pool of gullible victims that this policy creates.

That is, idealists aid cheaters.

Added 26Aug: In this post, I intended to define the words “idealist” and “cynic” in terms of how hard or necessary it is to enforce norms. The use of those words has distracted many. Not sure what are better words though.

Beware Mob War Strategy

Posted on 2022-09-26

The game theory is clear: it can be in your interest to make threats that it would not be in your interest to carry out. So you can gain from committing to carrying out such threats. But only if you do it right. Your commitment plan must be simple and clear enough for your audience to see when it applies to them, how it is in their interest to go along with it, and that people who look like you to them have in fact been consistently following such a plan.

So, for example, it probably won’t work to just lash out at whoever happens to be near you whenever the universe disappoints you somehow. The universe may reorganize to avoid your lashings, but probably not by catering to your every whim. More likely, others will avoid you, or crush you. That’s a bad commitment plan.
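The backward-induction logic here can be sketched as a toy two-stage game. All the payoff numbers below are invented for illustration: B chooses to comply or defect, then A chooses whether to punish, where punishing is costly to A.

```python
# A minimal sketch of the commitment logic, with made-up payoffs.
# B moves first (comply or defect); A then chooses to punish or not.
# Punishing costs A, so an uncommitted A never punishes after the
# fact, and B anticipates this. A committed A punishes regardless.

# Payoffs as (A, B); all numbers are illustrative assumptions.
payoffs = {
    ("comply", "ignore"): (3, 2),
    ("comply", "punish"): (1, 0),    # punishing a complier hurts both
    ("defect", "ignore"): (0, 4),    # defection helps B, hurts A
    ("defect", "punish"): (-1, -1),  # punishment costly to A, worse for B
}

def outcome(committed: bool):
    def a_response(b_move):
        if committed:
            return "punish" if b_move == "defect" else "ignore"
        # Uncommitted A picks its best response after the fact.
        return max(["ignore", "punish"],
                   key=lambda a: payoffs[(b_move, a)][0])
    # B anticipates A's response and picks its own best move.
    b_move = max(["comply", "defect"],
                 key=lambda b: payoffs[(b, a_response(b))][1])
    return b_move, a_response(b_move)

print(outcome(committed=False))  # B defects: it knows A won't punish
print(outcome(committed=True))   # B complies: the threat is credible
```

Without commitment, B reasons that A will never pay the cost of punishing after the fact, so B defects; with commitment, the threat becomes credible and deters defection.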

Here’s a good commitment plan. A well-run legal system can usefully deter crime via committing to consistently punish law violations. Such a system clearly defines violations, and shows potential violators an enforcement system wherein a substantial fraction of violations will be detected, prosecuted, and punished. Those under the jurisdiction of this law can see this fact, and understand which acts lead to which punishments. Such acts can thus be deterred.

Here’s another pretty good commitment plan. The main nations with nuclear weapons seem to have created a mutual expectation of “mutually assured destruction.” Each nation is committed to responding to a nuclear attack with a devastating symmetric attack. So devastating as to deter attack even if there is a substantial chance that such a response wouldn’t happen. This commitment plan is simple, easy to understand, clearly communicated, and quite focused on particular scenarios. So far, it seems to have worked.

Humans are often willing to suffer large costs to punish those who violate their moral rules. In fact, we probably evolved such moral indignation in part as a way to commit to punishing violations of our local moral norms. In small bands, with norms that were stable across many generations, members could plausibly achieve sufficient clarity and certainty about norm enforcement to deter violations via such threats. So such commitments might have had good plans in that context.

But this does not imply that things would typically go well for us if we freely indulged our moral indignation inclinations in our complex modern world. For example, imagine that we encouraged, instead of discouraged, mob justice. That is, imagine we encouraged people to gossip to convince their friends to share their moral outrage, building off of each other until they chased down and “lynched” any who offended them.

This sort of mob justice can go badly for a great many reasons. We don’t actually share norms as closely as we think, mob members are often more eager to show loyalty to each other than to verify accusation accuracy, and some are willing to make misleading accusations to take down rivals. More fundamentally, we might say that mob justice goes bad because it is not based on a good commitment plan. Observers just can’t predict mob justice outcomes well enough for it to usefully encourage good behavior, at least compared to a formal legal system.

Now consider the subject of making peace deals to end wars. Such as the current war between Russia and Ukraine. An awful lot, probably a majority, of the Ukrainian supporters I’ve heard from seem to be morally offended by the idea of such a peace deal in this case. Even though the usual game theory analyses of war say that there are usually peace deals that both sides would prefer at the time to continued war. (Such deals could focus on immediately verifiable terms; they needn’t focus on unverifiable promises of future actions. In April 2022 Russia and Ukraine apparently had a tentative deal, scuttled due to pressure from Ukrainian allies.)
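That standard game theory claim can be illustrated with a toy bargaining model (the probability and cost numbers below are invented): because fighting is costly to both sides, the set of deals that both prefer to war is nonempty, with width equal to the combined costs of fighting.

```python
# Illustrative numbers only: the standard bargaining model of war.
# Two sides dispute a prize worth 1. Side A wins a war with
# probability p; fighting costs each side c_a, c_b > 0.
p, c_a, c_b = 0.6, 0.15, 0.15

war_value_a = p * 1 - c_a          # A's expected share from fighting
war_value_b = (1 - p) * 1 - c_b    # B's expected share from fighting

# Any deal giving A a share x with war_value_a <= x <= 1 - war_value_b
# beats war for both. The range is nonempty whenever costs are positive.
lo, hi = war_value_a, 1 - war_value_b
print(f"deals both sides prefer to war: A's share in [{lo:.2f}, {hi:.2f}]")
assert lo < hi  # width is c_a + c_b, the total cost of fighting
```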

Many of these peace deal opponents are willing to justify this stance in consequentialist terms: they say that we should commit to not making such deals. Which, as they are eager to point out, is a logically coherent stance due to the usual game theory analysis. We should thus “hold firm”, “teach them a lesson”, “don’t let them get away with it”, etc. All justified by game theory, they say.

The problem is, I haven’t seen anyone outline anything close to a good commitment plan here. Nothing remotely as clear and simple as we have with criminal law, or with mutually assured destruction. They don’t clearly specify the set of situations where the commitment is to apply, the ways observers are to tell when they are in such situations, the behavior that has been committed to there, or the dataset of international events that shows that people that look like us have in fact consistently behaved in this way. Peace deal opponents (sometimes called “war mongers”) instead mainly just seem to point to their mob-inflamed feelings of moral outrage.

For example, some talk as if we should just ignore the fact that Russia has nuclear weapons in this war, as if we have somehow committed to doing that in order to prevent anyone from using nuclear weapons as a negotiating leverage. The claim that nations have been acting according to such a commitment doesn’t seem to me at all a good summary of the history of nuclear powers. And if the claim is that we should start now to create such a commitment by just acting as if it had always existed, that seems even crazier.

If we have not actually found and clearly implemented a good commitment plan, then it seems to me that we should proceed as if we have not made such a commitment. So we must act in accord with the usual game theory analysis. Which says to compromise and make peace if possible. Especially as a way to reduce the risk of a large nuclear war.

The possibility of a global nuclear war seems a very big deal. Yes, war seems sacred and that inclines us toward relying on our intuitions instead of conscious calculations. It inclines us toward mob war strategy. But this issue seems plenty important enough to justify our resisting that inclination. Yes, a careful analysis may well identify some good commitment plans, after which we could think about how to move toward making commitments according to those plans.

But following the vague war strategy inclinations of our mob-inflamed moral outrage seems a poor substitute for such a good plan. If we have not yet actually found and implemented a good plan, we should deal with a world where we have not made useful commitments. And so make peace, to avoid risking the destruction of war.

Automatic Norms

Posted on 2017-12-27

Some new ideas I want to explain start with a 2000 paper on Taboo Tradeoffs. (See also newer stuff.) So I’ll review that paper in this post, and then I’ll explain my new ideas in the next post. In Experiment 2 of the 2000 paper, each of 228 subjects was asked to respond to one of 8 scenarios, created by crossing three binary alternatives. All the scenarios involved:

Robert, the key decision maker, was described as the Director of Health Care Management at a major hospital who confronted a “resource allocation decision.”

Robert was either asked to make a tragic tradeoff, where two sacred values conflicted, or a taboo tradeoff, where a sacred value was in conflict with a non-sacred value. The tragic tradeoff:

Robert can either save the life of Johnny, a five year old boy who needs a liver transplant, or he can save the life of an equally sick six year old boy who needs a liver transplant. Both boys are desperately ill and have been on the waiting list for a transplant but because of the shortage of local organ donors, only one liver is available. Robert will only be able to save one child.

The taboo tradeoff:

Robert can save the life of Johnny, a five year old who needs a liver transplant, but the transplant procedure will cost the hospital $1,000,000 that could be spent in other ways, such as purchasing better equipment and enhancing salaries to recruit talented doctors to the hospital. Johnny is very ill and has been on the waiting list for a transplant but because of dire shortage of local organ donors, obtaining a liver will be expensive. Robert could save Johnny’s life, or he could use the $1,000,000 for other hospital needs.

Robert was said to either find this decision easy or difficult:

“Robert sees his decision as an easy one, and is able to decide quickly,” or “Robert finds this decision very difficult, and is only able to make it after much time, thought, and contemplation.”

Finally, Robert was said to have chosen to save Johnny, or to have chosen otherwise. Subjects were asked to rate Robert’s decision and describe their feelings about it in 8 ways. They were also asked to make 3 decisions on actions regarding Robert, including dismiss from job, punish, and end friendship. Using factor analysis all these responses were combined into an outrage factor, mainly weighted on 6 of the ratings and feelings, and a punish factor, mainly weighted on the 3 actions. These factors were on a 1-7 point scale. Here are the average factor values for the eight possible scenarios:

In the case of a taboo tradeoff, Robert is less likely to be punished for saving Johnny than for not. We have a strong social norm against trading sacred things for non-sacred things, and Robert is to be punished if he violates this taboo. When Robert makes a tragic tradeoff, it is as if he must violate a norm no matter what he does. In this case, he is punished much more if he treats this as an easy choice; norm violation must be done in a serious thoughtful manner.

However, when Robert makes a taboo tradeoff, he is punished much more if he treats this as a difficult choice. In fact, he is punished almost as much for saving Johnny after much thought as he is for not saving Johnny after little thought! It is worse to do the wrong thing after careful thought than after little thought.

Years ago, this result helped me to understand the political reaction when in 2003 my Policy Analysis Market (PAM) was accused of trying to let people bet on terrorist deaths.

PAM appeared to some to cross a moral boundary, which can be paraphrased roughly as “none of us should intend to benefit when some of them hurt some of us.” (While many of us do in fact benefit from terrorist attacks, we can plausibly argue that we did not intend to do so.) So, by the taboo tradeoff effect, it was morally unacceptable for anyone in Congress or the administration to take a few days to think about the accusation. The moral calculus required an immediate response.

Of course, no one at high decision-making levels knew much about a $1 million research project within a $1 trillion government budget. If PAM had been a $1 billion project, representatives from districts where that money was spent might have considered defending the project. But there was no such incentive for a $1 million project (spent mostly in California and London); the safe political response was obvious: repudiate PAM, and everyone associated with it. (more)

Today, however, my interest is in what these results imply for our awareness of where our norm feelings come from, and how much they are shared by others. These results suggest that when we face a choice, the categorization of some of the options as norm violating is supposed to come to us fast, and with little thought or doubt. Unless we notice that all of the options violate similarly important norms, we are supposed to be sure of which options to reject, without needing to consult with other people, and without needing to try to frame the choice in multiple ways, to see if the relevant norms are subject to framing effects. We are to presume that framing effects are unimportant, and that everyone agrees on the relevant norms and how they are to be applied.

Apparently the legal principle of “ignorance of the law is no excuse” isn’t just a convenient way to avoid incentives not to know the law, and to avoid having to inquire about who knows what laws. Regarding norms more generally, including legal norms, we seem to think “ignorance of the norms isn’t plausible; you must have known.”

If this description is correct, it seems to me to have remarkable implications. Which I’ll discuss in my next post. (Unless of course you figure them all out in the comments now.)

10 Implications of Automatic Norms

Posted on 2017-12-28

My last post observed that we seem to have a meta-norm that norm application should be automatic and obvious. We are to just know easily and surely which actions violate norms, without needing to reflect on or discuss the matter. We are to presume that framing effects are unimportant, and that everyone agrees on the relevant norms and how they are to be applied. If true, this has many implications:

1) We rarely feel much need to think about or discuss with others whether our own behavior violates norms. We either feel sure that we are innocent, or we feel at risk of being guilty. If we end up being seen as guilty, we’d rather be able to claim that we forgot, were distracted, or were overcome by passion. Any evidence that we discussed or thought carefully about the choice would instead suggest that we consciously chose to be guilty.

2) We aren’t much interested in ethics and misbehavior discussion or training for the purpose of helping us to figure out what to do personally. We may, however, be interested in using such things as a way to show others that we are devoted to good norms, and that we despise those who violate them. We are far more interested in norm preaching than learning or analysis.

3) We feel justified in accusing others of bad motives when they seem to us to violate norms. It seems to us that either they intended to be guilty, or they were inexcusably sloppy or lacking in control of their passions. We usually don’t need to wonder how they framed the situation, what norms they applied, or how they interpreted those norms. Of course we may not feel obligated to point out their violation, but we’d feel justified if we did.

4) We feel justified in describing those who claim to disagree with us about particular cases as either stupid or mean, or perhaps lacking a proper moral upbringing. With a proper upbringing, they are probably trying to excuse what they know to be their own guilty behavior.

5) We actually face a high risk of framing effects when interpreting particular acts as norm violating. We first learn norms by examples, and then we later apply learned norms to new examples. In both situations the result can depend on the particular examples, their context, and how we framed all this in our minds. If these were the main cognitive processes that produced norm application, then we’d all need to learn from a lot of pretty similar examples in order to reasonably have much confidence that we were all applying the same norms the same way.

6) In a relatively simple world with limited sets of actions and norms, and a small set of people who grew up together and later often enough observe and gossip about possible norm violations of others, such people might in fact learn from enough examples to mostly apply the same norms the same way. This was plausibly the case for most of our distant ancestors. They could in fact mostly be sure that, if they judged themselves as innocent, most everyone else would agree. And if they judged someone else as guilty, others should agree with that as well. Norm application could in fact usually be obvious and automatic.

7) Today however, there are far more people, and more intermixed, who grow up in widely varying contexts and now face far larger spaces of possible actions and action contexts. Relative to this huge space, gossip about particular norm violations is small and fragmented. So it isn’t very plausible that we’ve all converged on how to reliably interpret most norms in most contexts. Thus today we must quite frequently make different judgements on whether actions violate norms. We may converge in judgement with our closest associates and gossip partners, at least on our most common topics of gossip. But for everyone else, if we consider the details of most of their behavior, we will find fault with a lot of it. As they would if they considered the details of our behavior. We are usually sure that we are innocent, but in fact that’s not how many others would categorize us.

8) We must see ourselves as tolerating a lot of norm violation. We actually tell others about and attempt to punish socially only a tiny fraction of the violations that we could know of. When we look most anywhere at behavior details, it must seem to us like we are living in a Sodom and Gomorrah of sin. Compared to the ancient world, it must seem a lot easier to get away for a long time with a lot of norm violations. Selection effects in who chooses to complain about which violations, and which violations others are willing to punish, may plausibly make a big difference to who actually gets punished how much.

We must also see ourselves as tolerating a lot of overeager busybodies applying what they see as norms to what we see as our own private business where their social norms shouldn’t apply. They may not complain out loud about us each time, but we know that they often judge us privately as violating norms, and for no good reason from our point of view. They should just butt out, we think.

9) Random effects of who frames which particular actions as norm violating or not may contribute substantially to who succeeds or fails overall. Some people don’t see a serious violation, and then find themselves punished for what they consider a triviality. They conclude someone had it in for them. Others see a serious potential violation, and pay substantial costs to avoid it, when they in fact faced little risk of punishment. Compared to the ancient world, today larger gains go to those with the social savvy to discern which norm violations others can more easily observe and are likely to punish, and the moral flexibility to act on that savvy.

10) Many norms apply only to particular professions, and are mainly intended to protect outsiders from those professionals. For example, norms about how teachers should treat students, or how bankers should treat customers. Strong competition to become a professional can easily select for those with the ambition and social savvy to pretend to follow all such norms, but to only actually follow the norms with sufficient enforcement. Outsiders may then consistently be fooled into mistakenly believing that these professionals follow certain norms, as those outsiders believe that they would naturally follow such norms, if they had been assigned to be such a professional.

In the next posts: examples of all this, and life lessons to learn from it.

Automatic Norm Lessons

Posted on 2017-12-30

Pity the modern human who wants to be seen as a consistently good person who almost never breaks the rules. For our distant ancestors, this was a feasible goal. Today, not so much. To paraphrase my recent post: Our norm-inference process is noisy, and gossip-based convergence isn’t remotely up to the task given our huge diverse population and vast space of possible behaviors. Setting aside our closest associates and gossip partners, if we consider the details of most people’s behavior, we will find rule-breaking fault with a lot of it. As they would if they considered the details of our behavior. We seem to live in a Sodom and Gomorrah of sin, with most people getting away unscathed with most of it. At the same time, we also suffer so many overeager busybodies applying what they see as norms to what we see as our own private business where their social norms shouldn’t apply.

Norm application isn’t remotely as obvious today as our evolved habit of automatic norms assumes. But we can’t simply take more time to think and discuss on the fly, as others will then see us as violating the meta-norm, and infer that we are unprincipled blow-with-the-wind types. The obvious solution: more systematic preparation.

People tend to presume that the point of studying ethics and norms is to follow them more closely. Which is why most people are not interested for themselves, but think it is good for other people. But in fact such study doesn’t have that effect. Instead, there should be big gains to distinguishing which norms to follow more versus less closely. Whether for purely selfish purposes, or for grand purposes of helping the world, study and preparation can help one to better identify the norms that really matter, from the ones that don’t.

In each area of life, you could try to list many possibly relevant norms. For each one, you can try to estimate how expensive it is to follow, how much the world benefits from such following, and how likely others are to notice and punish violations. Studying norms together with others is especially useful for figuring out how many people are aware of each norm, or consider it important. All this can help you to prioritize norms, and make a plan for which ones to follow how eagerly. And then practice your plan until your new habits become automatic.
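As a toy illustration of this prioritization exercise (the norms, costs, benefits, and detection odds below are all invented), one might score each norm by its benefit plus the expected punishment avoided, minus the cost of following it:

```python
# A toy version of the prioritization exercise described above.
# Norms, costs, benefits, and detection odds are all invented here;
# the point is only the scoring logic.
norms = [
    # (name, cost_to_follow, benefit_to_world, chance_violations_punished)
    ("cite influential prior work",     1.0, 3.0, 0.3),
    ("never cancel a class",            4.0, 1.0, 0.7),
    ("disclose specification searches", 2.0, 5.0, 0.05),
]

def priority(cost, benefit, punish_prob, punishment=10.0):
    # Follow a norm when the benefit it produces, plus the expected
    # punishment avoided, outweighs the cost of following it.
    return benefit + punish_prob * punishment - cost

# Rank norms from most to least worth following.
for name, cost, benefit, prob in sorted(
        norms, key=lambda n: -priority(*n[1:])):
    print(f"{name}: score {priority(cost, benefit, prob):+.1f}")
```

Note that a cheap, widely enforced norm can outrank one with a larger world benefit but little enforcement, which is the asymmetry the post is pointing at.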

As a result, instead of just obeying each random rule that pops into your head in each random situation that you encounter, you can follow only the norms that you’ve decided are worth the bother. And if variation in norm following is a big part of variation in success, you may succeed substantially more.

Automatic Norms in Academia

Posted on 2017-12-29

In my career as a researcher and professor, I’ve come across many decisions where my intuition told me that some actions are prohibited by norms. I’ve usually just obeyed these intuitions, and assumed that everyone agrees. However, I only rarely observe what others think regarding the same situations. In these rare cases, I’m often surprised to see that others don’t agree with me.

I illustrate with the following set of questions on which I’ve noticed divergent opinions. Most academic institutions have no official rules to answer them, nor even an official person one can ask. Professors are just supposed to judge for themselves, which they usually do without consulting anyone. And yet many people treat these decisions as if they are governed by norms.

  1. What excuses are acceptable for students missing an assignment or exam?
  2. If a teacher will be out of town on a class day, must a substitute teacher always be found or can classes sometimes be cancelled? How often can this be done?
  3. Is there any limit on how much extra help or extra credit assignments teachers can offer only to particular students?
  4. Should students be excused for misunderstanding questions due to poor understanding of English?
  5. Is it okay in college to teach students to just remember and then spit back relatively dogmatic statements, instead of trying to teach them how to think about more complex problems?
  6. Is it okay to assign a final exam, but then toss the exams and give out final grades based on all prior assignments?
  7. Is it okay to give all grad students A grades, and to praise all their papers as brilliant, as a way to compete to get students to pick you as their PhD advisor?
  8. Is it okay to lecture while stumbling drunk?
  9. Must you cite the work that actually influenced your work if it is lowbrow like blogs, wikipedia, or working papers, or if it is outside your discipline?
  10. Can you cite prestigious papers that look good in your references if they did not influence your work?
  11. Is it okay to write as if the first work of any consequence on a topic was the first to appear in a top prestige venue, in effect presuming that lower prestige prior work was inadequate?
  12. Should you cite papers requested by journal referees if you don’t think them relevant?
  13. How much searching is okay, searching in theory assumptions or in statistical model specifications, in order to find the kind of result you wanted? Must you disclose such searching?
  14. Is it okay to publish roughly the same idea in several places as long as you don’t use the exact same words?

I expect the same holds in most areas of life. Most detailed decisions that people treat as norm-governed have no official rules or judges. Most people decide for themselves without much thought or discussion, assuming incorrectly that relevant norms are obvious enough that everyone else agrees.

Plot Holes & Blame Holes

Posted on 2020-02-22

We love stories, and the stories we love the most tend to support our cherished norms and morals. But our most popular stories also tend to have many gaping plot holes. These are acts which characters could have done instead of what they did do, to better achieve their goals. Not all such holes undermine the morals of these stories, but many do.

Logically, learning of a plot hole that undermines a story’s key morals should make us like that story less. And for a hole that most everyone actually sees, that would in fact happen. This also tends to happen when we notice plot holes in obscure unpopular stories.

But this happens much less often for widely beloved stories, such as Star Wars, if only a small fraction of fans are aware of the holes. While the popularity of the story should make it easier to tell most fans about holes, fans in fact try not to hear, and punish those who tell them. (I’ve noticed this re my sf reviews; fans are displeased to hear beloved stories don’t make sense.)

So most fans remain ignorant of holes, and even fans who know mostly remain fans. They simply forget about the holes, or tell themselves that there probably exist easy hole fixes – variations on the story that lack the holes yet support the same norms and morals. Of course such fans don’t usually actually search for such fixes, they just presume they exist.

Note how this behavior contrasts with typical reactions to real world plans. Consider when someone points out a flaw in our tentative plan for how to drive from A to B, how to get food for dinner, how to remodel the bathroom, or how to apply for a job. If the flaw seems likely to make our plan fail, we seek alternate plans, and are typically grateful to those who point out the flaw. At least if they point out flaws privately, and we haven’t made a big public commitment to plans.

Yes, we might continue with our basic plan if we had good reasons to think that modest plan variations could fix the found flaws. But we wouldn’t simply presume that such variations exist, regardless of flaws. Yet this is mostly what we do for popular story plot holes. Why the different treatment?

A plausible explanation is that we like to love the same stories as others; loving stories is a coordination game. Which is why 34% of movie budgets were spent on marketing in ’07, compared to 1% for the average product. As long as we don’t expect a plot hole to put off most fans, we don’t let it put us off either. And a plausible partial reason to coordinate to love the same stories is that we use stories to declare our allegiance to shared norms and morals. By loving the same stories, we together reaffirm our shared support for such morals, as well as other shared cultural elements.

Now, another way we show our allegiance to shared norms and morals is when we blame each other. We accuse someone of being blameworthy when their behavior fits a shared blame template. Well, unless that person is so allied to us or prestigious that blaming them would come back to hurt us.

These blame templates tend to correlate with destructive behavior that makes for a worse (local) world overall. For example, we blame murder, and murder tends to be destructive. But blame templates are not precisely targeted at producing better outcomes. For example, murderers are blamed even when their act makes a better world overall, and we also fail to blame those who fail to murder in such situations.

These deviations make sense if blame templates must have limited complexity, due to being socially shared. To support shared norms and morals, blame templates must be simple enough that most everyone knows what they are, and can agree on whether they match particular cases. If the reality of which behaviors are actually helpful versus destructive is more complex than that, well then good behavior in some detailed “hole” cases must be sacrificed, to allow functioning norms/morals.

These deviations between what blame templates actually target, and what they should target to make a better (local) world, can be seen as “blame holes”. Just as a plot may seem to make sense on a quick first pass, with thought and attention required to notice its holes, blame holes are typically not noticed by most who only work hard enough to try to see if a particular behavior fits a blame template. While many are capable of understanding an explanation of where such holes lie, they are not eager to hear about them, and they still usually apply hole-plagued blame templates even when they see their holes. Just like they don’t like to hear about plot holes in their favorite stories, and don’t let such holes keep them from loving those stories.

For example, a year ago I asked a Twitter poll on the chances that the world would have been better off overall had Nazis won WWII. 44% said that chance was over 10% (the highest category offered). My point was that history is too uncertain to be very sure of the long term aggregate consequences of such big events, even when we are relatively sure about which acts tend to promote good.

Many then said I was evil, apparently seeing me as fitting the blame template of “says something positive about Nazis, or enables/encourages others to do so.” I soon after asked a poll that found only 20% guessing it was more likely than not that the author of such a poll actually wishes Nazis had won WWII. But the other 80% might still feel justified in loudly blaming me, if they saw my behavior as fitting a widely accepted blame template. I could be blamed regardless of the factual truth of what I said or intended.

Recently many called Richard Dawkins evil for apparently fitting the template “says something positive about eugenics” when he said that eugenics on humans would “work in practice” because “it works for cows, horses, pigs, dogs & roses”. To many, he was blameworthy regardless of the factual nature or truth of his statement. Yes, we might do better to instead use the blame template “endorses eugenics”, but perhaps too few are capable in practice of distinguishing “endorses” from “says something positive about”. At least maybe most can’t reliably do that in their usual gossip mode of quickly reading and judging something someone said.

On reflection, I think a great deal of our inefficient behavior and policies can be explained via limited-complexity blame templates. For example, consider the template:

Blame X if X interacts with Y on dimension D, Y suffers on D, no one should suffer on D, and X “could have” interacted so as to reduce that suffering more.

So, blame X who hires Y for a low wage, risky, or unpleasant job. Blame X who rents a high price or peeling paint room to Y. Blame food cart X that sells unsavory or unsafe food to Y. Blame nation X that lets in immigrant Y who stays poor afterward. Blame emergency room X who failed to help arriving penniless sick Y. Blame drug dealer X who sells drugs to poor, sick, or addicted Y. Blame client X who buys sex, an organ, or a child from Y who would not sell it if they were much richer.

So a simple blame template can help explain laws on min wages, max rents, job & room quality regs, food quality rules, hospital care rules, and laws prohibiting drugs, organ sales, and prostitution. Yes, by learning simple economics many are capable of seeing that these rules can actually make targets Y worse off, via limiting their options. But if they don’t expect others to see this, they still tend to apply the usual blame templates. Because blame templates are socially shared, and we each tend to be punished for deviating from them, either by violating them, or by failing to disapprove of violators.
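To make the template’s limited complexity concrete, here is a purely illustrative sketch in code; the function and argument names are mine, not Hanson’s, and the booleans stand in for the messy social judgments people actually make:

```python
# Illustrative sketch of a limited-complexity blame template:
# blame X if X interacts with Y on dimension D, Y suffers on D,
# no one should suffer on D, and X "could have" reduced that suffering.

def template_blames(x_interacts_with_y_on_d: bool,
                    y_suffers_on_d: bool,
                    no_one_should_suffer_on_d: bool,
                    x_could_have_reduced_suffering: bool) -> bool:
    """Fires whenever all four simple conditions hold; note there is
    no term for whether blaming actually leaves Y better off overall."""
    return (x_interacts_with_y_on_d
            and y_suffers_on_d
            and no_one_should_suffer_on_d
            and x_could_have_reduced_suffering)

# A "blame hole": an employer offering a low but mutually beneficial
# wage still fires the template, even though banning the job could
# leave Y worse off by removing an option.
print(template_blames(True, True, True, True))  # True
```

The point of the sketch is just that a rule this simple can be shared and checked by everyone, while any rule complex enough to track actual net harm could not.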

In another post soon I hope to say more about the role of, and limits on, simplified blame templates. For this post, I’m content to just note their central causal roles.

Added 8am: Another key blame template applies in hierarchical organizations. When something bad seems to happen to a division, the current leader takes all the blame, even if they only recently replaced the prior leader. Rising stars gain by pushing short term gains at the expense of long term losses, and by being promoted fast enough so as not to be blamed for those losses.

Re my deliberate exposure proposal, many endorse a norm that those who propose policies intended to combine good and bad effects should immediately cause themselves to suffer the worst possible bad effects personally, even in the absence of implementing their proposal. Poll majorities, however, don’t support such norms.

Fairy Tales Were Cynical

Posted on 2012-08-25

A recent New Yorker article on fairy tales fascinated me (quotes below). Apparently the fairy tales once “told at rural firesides” were for adults, full of sex and violence, and cynical – they did not often affirm common ideals. This stands in sharp contrast to most fiction genres today, especially today’s fairy tales targeted at kids. Why were long ago stories so much more cynical? They remind me of some joke genres, like dead baby jokes, and of the crudeness often found off the record in many close social groups.

Here’s my homo hypocritus explanation. Our forager ancestors evolved intricate capacities to affirm standard ideals when what they said or did might be visible or reported to distant observers, and to coordinate to violate such ideals when they were less visible. Shared private rejection and violation of wider ideals can signal close bonds with associates, and reveal more about ourselves to intimates.

So when stories became more visible, such as by getting published in books, they had to become more ideal. Similarly, when kids were taught in schools, with a curriculum visible to all, that curriculum had to become more ideal. And as law enforcement has become more visible, it has been held to higher standards.

Today harassment laws make it harder to be very crude and cynical at work, and divorce custody battles punish parents who act this way around their kids. Today, more interactions are governed by officially idealistic norms: teachers around students, doctors & lawyers around clients, etc. What costs do we pay for this panopticon-like suppression of our natural crude/cynical styles? We are probably less able to form very close social groups where we can more clearly see each others’ weaknesses and vulnerabilities. But what else?

Added 26Aug: Another contributing factor may be that in general our idealism just rises with rising wealth.

Those promised quotes:

In Grimms’ Fairy Tales there is a story called “The Stubborn Child” that is only one paragraph long. …

Once upon a time there was a stubborn child who never did what his mother told him to do. The dear Lord, therefore, did not look kindly upon him, and let him become sick. No doctor could cure him and in a short time he lay on his deathbed. After he was lowered into his grave and covered over with earth, one of his little arms suddenly emerged and reached up into the air. They pushed it back down and covered the grave with fresh earth, but that did not help. The little arm kept popping out. So the child’s mother had to go to the grave herself and smack the little arm with a switch. After she had done that, the arm withdrew, and then, for the first time, the child had peace beneath the earth.

The tale, without details to attach it to anything in particular, becomes universal. Whatever happened there, we all deserve it. A. S. Byatt has written that this is the real terror of the story: “It doesn’t feel like a warning to naughty infants. It feels like a glimpse of the dreadful side of the nature of things.” That is true of very many of the Grimms’ tales, even those with happy endings. …

The Grimms grew up in the febrile atmosphere of German Romanticism, which involved intense nationalism and, in support of that, a fascination with the supposedly deep, pre-rational culture of the German peasantry, the Volk. … They had political reasons, too—above all, Napoleon’s invasion of their beloved Hesse. …

The Grimms … first edition was not intended for the young, nor, apparently, were the tales told at rural firesides. The purpose was to entertain grownups, during or after a hard day’s work, and rough material was part of the entertainment. But the reviews and the sales of the Grimms’ first edition were disappointing to them. Other collections, geared to children, had been more successful, and the brothers decided that their second edition would take that route. … What they regarded as unsuitable for the young was information about sex. In the first edition, Rapunzel, imprisoned in the tower by her wicked godmother, goes to the window every evening and lets down her long hair so that the prince can climb up and enjoy her company. …

Grimm tales, many of which feature mutilation, dismemberment, and cannibalism, not to speak of ordinary homicide, often inflicted on children by their parents or guardians. … You get used to the outrages, though. They may even come to seem funny. … Some stories do tear you apart, usually those where the violence is joined to some emphatically opposite quality, such as peace or tenderness. … The stories are still extremely short. … They come in, clobber you over the head, and then go away. As with sections of the Bible, the conciseness makes them seem more profound. … W. H. Auden once described the Grimm-sanitizers as “the Society for the Scientific Diet, the Association of Positivist Parents, the League for the Promotion of Worthwhile Leisure, the Cooperative Camp of Prudent Progressives.” …

Marina Warner … says that most modern writers ignore the Grimms’ “historical realism.” Among the pre-modern populations, she records, death in childbirth was the most common cause of female mortality. …

The Grimm tales are no different from other art. They merely concretize and then expand our experience of life. The main reason that Zipes likes fairy tales, it seems, is that they provide hope: they tell us that we can create a more just world. The reason that most people value fairy tales, I would say, is that they do not detain us with hope but simply validate what is. Even people who have never known hunger, let alone a murderous stepmother, still have a sense—from dreams, from books, from news broadcasts—of utter blackness, the erasure of safety and comfort and trust. Fairy tales tell us that such knowledge, or fear, is not fantastic but realistic. (more)

Why Fiction Lies

Posted on 2009-01-05

Most religious activities make a lot of sense, especially in terms of group bonding. It is religious beliefs that seem the most puzzling. Many suggest supernatural beliefs are just a side effect of our having a theory of mind, and applying it liberally. Back in 2001 I read and reviewed Pascal Boyer’s book Religion Explained. Boyer noted 1) supernatural concepts tend to violate one ontological assumption each, making them maximally memorable, and 2) supernatural entities tend to know and care about human-socially-relevant info, and to punish humans who are not nice (i.e., cooperative). I was puzzled that Boyer didn’t explicitly make what seemed to me the obvious suggestion: we evolved a tendency to accept strange memorable group beliefs to create a high cost of leaving our group, and to show that we expect to be punished if we are not nice.

Our obsession with gossiping about each other makes a lot of sense, but more puzzling is our obsession with stories we know are not true, about unrelated people in strange worlds. I recently finished William Flesch’s Comeuppance, a literary expert’s evo psych account of why we like fiction (reviewed here and here). Flesch says humans cooperate via a norm of celebrating cooperators and punishing defectors and those who violate this norm:

In narratives we … [are] disposed to want to see the cooperators triumph over the obstacles set up by defectors of various sorts. … [We] root for characters with a propensity for strong reciprocity, not because we judge them as like us or identify with them, but because a disposition to reward cooperators and to punish defectors is itself a central aspect of cooperation. (p.126)

Social life is all about signaling our abilities and cooperativeness, and discerning such signals from others:

Understanding narrative at all requires understanding of signaling. We monitor signals and the reliability of signals that others produce. We take note of how others monitor signals, and what signals they produce in turn on receiving signals that we also may receive. One of the intricate pleasures of narrative … consists in keeping track of who knows what. We like to keep track of what other people are keeping track of. … Narrative relies on the psychological incentives to engage in such monitoring of how we respond to what we know about one another. (p.85)

Yes, we love to watch, and watching abilities serve us well, but why do we apply them so enthusiastically to false stories? Why not just tell stories about real heroes and villains? One clue is that stories can signal things about authors and tellers:

Among the strong reciprocators to narrative events are the narrators of those events. … Gossip is a likely mode of altruistic punishment: the scandal monger punishes scandalous behavior. … Gossip … disciplines those who have violated whatever norms the gossipers are punishing.

But how is it altruistic to punish non-existent violators? Only once does Flesch get close to the key: visibly consuming stories also signals things!

Vicarious feelings for others is therefore both a propensity for responding emotionally to the signals of others and itself a primary example of such a signal. … Our own monitoring of costly signals and our response to the response of others constitute our own costly and altruistic absorption in the interactions of others. And of course we signal as well with the stories we love, a mode of signaling that can range from the simple desire to repeat them to the social capital of our own conspicuous cultural attainments. Knowing a story and, still more, telling a story signals our own capacities for altruistic interest, affect, and punishment, capacities that the story will represent its characters manifesting in order to appeal to the audience’s interest in monitoring these things. (pp 123-124)

This explanation of fiction comes close to the above explanation of religious beliefs: both religion and fiction serve to reassure our associates that we will be nice. In addition to letting us show we can do hard things, and that we are tied to associates by doing the same things, religious beliefs show we expect the not nice to be punished by supernatural powers, and our favorite fiction shows the sort of people we think are heroes and villains, how often they are revealed or get their due reward, and so on.

We don’t believe the stories really happened, but we do tend to believe these “social truths” about their characters. We love to tell associates about our favorite stories, and prefer that they love them too.

As with religion, the beliefs of ours that most reassure others are not necessarily the most accurate. In fiction, relative to reality, people know more why they act and what they want, good and bad personal characteristics correlate more strongly, personal character matters more relative to circumstance or larger social forces, and there are clearer ultimate resolutions to complex events. What other social lies does fiction tell, and why does it reassure others that we believe them?

Biases Of Fiction

Posted on 2012-12-05

This essay, on “The 38 most common fiction writing mistakes”, offers advice to writers. But the rest of us can also learn useful details on how fiction can bias our thinking. Here is my summary of the key ways it says fiction differs from reality (detailed quotes below):

Features of fictional folk are more extreme than in reality; real folks are boring by comparison. Fictional folks are more expressive, and give off clearer signs about their feelings and intentions. Their motives are simpler and clearer, and their actions are better explained by their motives and local visible context. Who they are now is better predicted by their history. Compared to real people, they are more likely to fight for what they want, especially when they encounter resistance. Their conversations are mostly pairwise, more logical, and to the point. In fiction, events are determined more by motives and plans, relative to random chance and larger social forces. Overt conflict between people is more common than in real life.

And I’ll add that stories tend to affirm standard moral norms. Good guys, who do good acts, have more other virtuous features than in reality, and good acts are rewarded more often than in reality. A lot of our biases come, I think, from expecting real life to be like fiction. For example, when we have negative opinions on important subjects, we tend too much to expect that we should explicitly and directly express those negative opinions in a dramatic conversation scene. We should speak our mind, make it clear, talk it through, etc. This is usually a bad idea. We also tend to feel bad about ourselves when we notice that we avoid confrontation, and back off from things we want when we encounter resistance. But such retreat is usually for the best.

Those promised quotes:

In more than twenty years of teaching courses in professional writing at the University of Oklahoma, I think I’ve encountered almost every difficulty an aspiring writer might face. …

“Wally, these characters are dull. What they are is flat and insipid. They are pasteboard. They have no life, no color, no vivacity. They need a lot of work.”
Wally looked shocked. “How can these characters be dull? They’re real people – every one of them! I took them right out of real life!”
“Oh,” I said. “So that’s the problem.”
“What?” he said.
“You can never use real people in your story.”
“Why?”
“For one reason, real people might sue you. But far more to the point in fiction copy, real people – taken straight over and put on the page of a story – are dull.” …

Good fiction characters, in other words, are never, ever real people. Your idea for a character may begin with a real person, but to make him vivid enough for your readers to believe in him, you have to exaggerate tremendously; you have to provide shortcut identifying characteristics that stick out all over him, you have to make him practically a monster – for readers to see even his dimmest outlines.

For example, if your real person is loyal, you will make your character tremendously, almost unbelievably loyal; if he tends to be a bit impatient in real life, your character will fidget, gnash his teeth, drum his fingers, interrupt others, twitch, and practically blow sky high with his outlandishly exaggerated impatience….

Good fiction characters also tend to be more understandable than real-life people. They do the things they do for motives that make more sense than real-life motives often do. While they’re more mercurial and colorful, they’re also more goal-motivated. Readers must be able to understand why your character does what he does; they may not agree with his motives, but you have carefully set things up so at least they can see that he’s acting as he is for some good reason. …

In real life, a young woman may come out of a poverty-stricken rural background and still somehow become the president of a great university. Except in a long novel, where you might have sufficient space to make it believable, you would have a hard time selling this meshing of background and present reality in fiction. … In short fiction, characters and their backgrounds are almost always much more consistent than people in real life.

Motivation? Again, fictional characters are better than life. In real life, people often seem to do things for no reason we can understand. They act on impulses that grow out of things in their personalities that even they sometimes don’t understand. But in fiction there is considerably less random chance. … in real life people often don’t make sense. But in fiction, they do. …

interesting characters are almost always characters who are active – risk-takers – highly motivated toward a goal. Many a story has been wrecked at the outset because the writer chose to write about the wrong kind of person – a character of the type we sometimes call a wimp. … He’s the one who wouldn’t fight under any circumstances.
Ask him what he wants, and he just sighs. Poke him, and he flinches – and retreats. Confront him with a big problem, and he fumes and fusses and can’t make a decision. …

In reality – in the real world – much of what happens is accidental. … In most effective fiction, accidents don’t determine the outcome. And your story people don’t sit around passively. … In good fiction, the story people determine the outcome. Not fate.

In fiction, the best times for the writer – and reader – are when the story’s main character is in the worst trouble. … There are many kinds of fiction trouble, but the most effective kind is conflict. You know what conflict is. It’s active give-and-take, a struggle between story people with opposing goals. … The calmer and more peaceful your real life, the better, in all likelihood. Your story person’s life is just the opposite. You the author must never duck trouble … Because fiction is make-believe, it has to be more logical than real life if it is to be believed. In real life, things may occur for no apparent reason. But in fiction you the writer simply cannot ever afford to lose sight of logic and let things happen for no apparent reason. …

In real life, coincidence happens all the time. But in fiction – especially when the coincidence helps the character be at the right place at the right time, or overhear the crucial telephone conversation, or something similar – coincidence is deadly. Your readers will refuse to believe it. …

Your character must have an immediate, physical cause for what he does. This immediate stimulus cannot be merely a thought inside his head; for readers to believe many transactions, they have to be shown a stimulus to action that is outside of the character – some kind of specific prod that is onstage right now. Turning this around, it’s equally true that if you start by showing a stimulus, then you can’t simply ignore it; you must show a response. … In real life, you might get a random thought for no apparent reason, and as a consequence do or say something. But … fiction has to be better than life, clearer and more logical. …

Writers sometimes mess up their dialogue. Sometimes, without realizing it, they let their characters talk on and on, boringly, becoming windbags. … The great majority of your characters have to be more terse and logical than we often are in real life, if the dialogue on the page is to appear realistic. … whenever possible, set up your dialogue scenes so that they play out “one-on-one”, getting rid of other characters (who might interrupt and make the conversation more complicated). … Simplicity… directness… goal orientation… brevity. These are the hallmarks of modern story dialogue. …

If you have any doubt that the reader will understand the meaning of what someone in the story says or does, you must work in at once some method of pointing out what you may think is obvious. (more; HT Eliezer Yudkowsky)

Why We Fight Over Fiction

Posted on 2020-11-29

We tell stories with language, and so prefer to tell the kind of stories that ordinary language can describe well.

Consider how language can describe a space of physical stuff and how to navigate through that stuff. In a familiar sort of space, a few sparse words can evoke a vivid description, such as of a city street or a meadow. And a few words relating to landmarks in such a space can be effective at telling you how to navigate from one place to another.

But imagine an arbitrary space of partially-opaque swirling strangeness, in a highly curved 11-dimensional space. In principle our most basic and general spatial language could describe this too, and instruct navigation there. But in practice that would require a lot more words, and slow the story to a crawl. So few authors would try, though a filmmaker might try just using visuals.

Or consider stories with non-human minds. In principle those who study minds in the abstract can conceive of a vast space of possible minds, and can use a basic and general language of mental acts to describe how each such mind might make a decision, or send a communication, and what those might be. But in practice such descriptions would be long, boring, and unfamiliar to most readers.

So in practice even authors writing about aliens or AIs stick to describing human-like minds, where their usual language for describing what actors decide and say is fast, fluid, and relatable. Authors even prefer human characters with familiar minds, and so avoid characters who think oddly, such as those with autism.

Just as authors focus on telling stories in familiar spaces with familiar minds, they also focus on telling stories in familiar moral universes. This effect is, if anything, even stronger than the space and mind effects, as moral colors are even more central to our need for stories. Compared to other areas of our lives, we especially want our stories to help us examine and affirm our moral stances.

In a familiar moral universe, there may be competing considerations re what acts are moral, making it sometimes hard to decide if an act is moral. Other considerations may weigh against morality, and reader/viewers may not always sympathize most with the most moral characters, who may not win in the end. Moral characters may have unattractive features (like being ugly). There may even be conflicts between characters who see different familiar moral universes.

These are the familiar sorts of “moral ambiguity” in stories said to have that feature, such as The Sopranos or Game of Thrones. But you’ll note that these are almost all stories told in familiar moral universes. By which I mean that we are quite familiar with how to morally evaluate the sort of actions that happen there. The set of acts is familiar, as are their consequences, and the moral calculus used to judge them.

But there is another sort of “moral ambiguity” that reader/viewers hate, and so authors studiously avoid. And that is worlds where we find it hard to judge the morality of actions, even when those actions have big consequences for characters. Where our usual quick and dirty moral language doesn’t apply very well. Where even though in principle our most basic and general moral languages might be able to work out rough descriptions and evaluations, in practice that would be tedious and unsatisfying.

And, strikingly, the large complex social structures and organizations that dominate our world are mostly not familiar moral universes to most of us. For example, big firms, agencies, and markets. The worlds of Moral Mazes and of Pfeffer’s Power. (In fiction: Jobs.) Our stories thus tend to avoid such contexts, unless they happen to allow an especially clear moral calculus. Such as a firm polluting to cause cancer, or a boss sexually harassing a subordinate.

As I’ve discussed before, our social world has changed greatly over the last few centuries. Our language has changed fast enough to describe the new physical objects and spaces that have arisen, at least those with which ordinary people must deal, if not the many new strange objects and spaces behind the scenes that enable our new world. But we have not gone remotely as fast at coming to agree on moral stances toward the new choices possible in such social structures.

This is why our stories tend to take place in relatively old fashioned social worlds. Consider the popularity of the Western, or of pop science fiction stories like Star Wars that are essentially Westerns with more gadgets. Stories that take place in modern settings tend to focus on personal, romantic, and family relations, as these remain to us relatively familiar moral universes. Or on artist biopics. Or on big conflicts like war or corrupt police or politicians. For which we have comfortable moral framings.

Stories we write today set in, say, the 1920s feel more comfortable to us than do stories set in the 2020s, or than stories written in the 1920s and set in that time. That is because stories written today can inherit a century of efforts to work out clearer moral stances on which 1920s actions were more moral. For example, as female suffrage is to our eyes clearly good, we can see any characters from then who doubted it as clearly evil in the eyes of good characters. As clear as if they tortured kittens. To our eyes, their world now has clearer moral colors, and stories set there work better as stories for us.

This is also why science fiction tends to make most people more wary of anticipated futures. The easiest engaging stories to tell about strange futures are about acts there that seem to violate the rules of our current moral universe. Like how nuclear rockets spread radioactivity near their launch site, instead of the solar civilization they enable. It is much harder to describe how new worlds will induce new moral universes.

This highlights an important feature of our modern world, and an important process that continues within it. Our social world has changed a lot faster than has our shared moral evaluations of typical actions possible in our new world. And our telling stories, and coming to agree on which stories we embrace, is a big part of creating such a fluid language of shared moral evaluations.

This helps to explain why we invest so much time and energy into fiction, far more than did any of our ancestors. Why story tellers are given high and activist-like status, and why we fight so much to convince others to share our beliefs on which stories are best. Our moral evaluations of the main big actions that influence our world today, and that built our world from past worlds, are still up for grabs. And the more we build such shared evaluations, the more we’ll be able to tell satisfying stories set in the world in which we live, rather than set in the fantasy and historical worlds with which we must now make do.

(This post is an elaboration of this Twitter thread.)

Stories Are Like Religion

Posted on 2012-05-08

Small children (age 4-6) who were exposed to a large number of children’s books and films had a significantly stronger ability to read the mental and emotional states of other people. … The more absorbed subjects were in the story, the more empathy they felt, and the more empathy they felt, the more likely the subjects were to help when the experimenter “accidentally” dropped a handful of pens… Reading narrative fiction … fosters empathic growth and prosocial behavior. …

Fiction’s happy endings seem to warp our sense of reality. They make us believe in a lie: that the world is more just than it actually is. But believing that lie has important effects for society—and it may even help explain why humans tell stories in the first place. (more)

People who mainly watched drama and comedy on TV—as opposed to heavy viewers of news programs and documentaries—had substantially stronger “just-world” beliefs. … Fiction, by constantly exposing us to the theme of poetic justice, may be partly responsible for the sense that the world is, on the whole, a just place. (more)

Psychologists have found that people who watch less TV are actually more accurate judges of life’s risks and rewards than those who subject themselves to the tales of crime, tragedy, and death that appear night after night on the ten o’clock news. That’s because these people are less likely to see sensationalized or one-sided sources of information, and thus see reality more clearly. (more)

Imagine that all you know about someone is that they have zero interest in stories. Not movies, not novels, not nothing. They prefer instead to stay focused on the real world. The only “stories” they want are accurate histories of representative people. What do you think of this person?

You might want to hire this person. But would you trust them to be loyal? Would you date them? Marry them? Most people feel a little wary of such story-less people, just as they are wary of atheists. People fear that atheists will violate social norms because they do not fear punishment from gods and spirits. Similarly, people fear that story-less people have not internalized social norms well – they may be too aware of how easy it would be to get away with violations, and feel too little shame from trying.

Thus in equilibrium, people are encouraged to consume stories, and to deludedly believe in a more just world, in order to be liked more by others. This is similar to how people have long been encouraged to be religious, so that they could similarly be liked more by others.

A few days ago I asked why not become religious, if it will give you a better life, even if the evidence for religious beliefs is weak? Commenters eagerly declared their love of truth. Today I’ll ask: if you give up the benefits of religion, because you love far truth, why not also give up stories, to gain even more far truth? Alas, I expect that few who claim to give up religion because they love truth will also give up stories for the same reason. Why? One obvious explanation: many of you live in subcultures where being religious is low status, but loving stories is high status. Maybe you care a lot less about far truth than you do about status.

More Stories As Religion

Posted on 2014-07-14

Most people who say they are atheist or agnostic still believe in supernatural powers:

In the United States, 38% of people who identified themselves as atheist or agnostic went on to claim to believe in a God or a Higher Power. While the UK is often defined as an irreligious place, a recent survey … found that … only 13 per cent of adults agreed with the statement “humans are purely material beings with no spiritual element”. …

When researchers asked people whether they had taken part in esoteric spiritual practices such as having a Reiki session or having their aura read, the results were almost identical (between 38 and 40%) for people who defined themselves as religious, non-religious or atheist.

This is plausibly reinforced by fiction, which (as I’ve said) serves similar functions to religion: In almost all fictional worlds, God exists, whether the stories are written by people of religious, atheist, or indeterminate beliefs.

It’s not that a deity appears directly in tales. It is that the fundamental basis of stories appears to be the link between the moral decisions made by the protagonists and the same characters’ ultimate destiny. The payback is always appropriate to the choices made. An unnamed, unidentified mechanism ensures that this is so, and is a fundamental element of stories—perhaps the fundamental element of narratives.

In children’s stories, this can be very simple: the good guys win, the bad guys lose. In narratives for older readers, the ending is more complex, with some loose ends left dangling, and others ambiguous. Yet the ultimate appropriateness of the ending is rarely in doubt. If a tale ended with Harry Potter being tortured to death and the Dursley family dancing on his grave, the audience would be horrified, of course, but also puzzled: that’s not what happens in stories. Similarly, in a tragedy, we would be surprised if King Lear’s cruelty to Cordelia did not lead to his demise.

Indeed, it appears that stories exist to establish that there exists a mechanism or a person—cosmic destiny, karma, God, fate, Mother Nature—to make sure the right thing happens to the right person. Without this overarching moral mechanism, narratives become records of unrelated arbitrary events, and lose much of their entertainment value. In contrast, the stories which become universally popular appear to be carefully composed records of cosmic justice at work.

In manuals for writers (see “Screenplay” by Syd Field, for example) this process is often defined in some detail. Would-be screenwriters are taught that during the build-up of the story, the villain can sin (take unfair advantages) to his or her heart’s content without punishment, but the heroic protagonist must be karmically punished for even the slightest deviation from the path of moral rectitude. The hero does eventually win the fight, not by being bigger or stronger, but because of the choices he makes.

This process is so well-established in narrative creation that the literati have even created a specific category for the minority of tales which fail to follow this pattern. They are known as “bleak” narratives. An example is A Fine Balance, by Rohinton Mistry, in which the likable central characters suffer terrible fates while the horrible faceless villains triumph entirely unmolested.

While some bleak stories are well-received by critics, they rarely win mass popularity among readers or moviegoers. Stories without the appropriate outcome mechanism feel incomplete. The purveyor of cosmic justice is not just a cast member, but appears to be the hidden heart of the show. (more)

This is the Dream Time

Posted on 2009-09-28

Aboriginals believe in … [a] “dreamtime”, more real than reality itself. Whatever happens in the dreamtime establishes the values, symbols, and laws of Aboriginal society. … [It] is also often used to refer to an individual’s or group’s set of beliefs or spirituality. … It is a complex network of knowledge, faith, and practices that derive from stories of creation. Wikipedia.

We will soon enter an era where most anyone can at any time talk directly with most anyone else who can talk. Cheap global talk and travel continue to tie our global economy and culture more closely together. But in the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures.

Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10^3000, which seems impossible to achieve with only the ~10^70 atoms of our galaxy available by then. Yes, we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster. So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per-person income drop to near subsistence levels. Even so, they will be basically happy in such a world.

Our distant descendants will also likely have hit diminishing returns to discovery; by then most everything worth knowing will be known by many; truly new and important discoveries will be quite rare. Complete introspection will be feasible, and immortality will be available to the few who can afford it. Wild nature will be mostly gone, and universal coordination and destruction will both be far harder than today.
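The growth arithmetic above can be checked directly. A minimal sketch (the specific numbers below just restate the post's own assumptions of doubling each century over a million years):

```python
import math

# If income doubles every century, a million years is 10,000 doublings,
# for a total growth factor of 2**10000.
centuries = 1_000_000 // 100

# Express that factor in orders of magnitude: log10(2**10000) = 10000 * log10(2).
growth_orders_of_magnitude = centuries * math.log10(2)

# ~3010 orders of magnitude, i.e. roughly the 10^3000 in the text --
# vastly more than the ~10^70 atoms available in our galaxy.
print(round(growth_orders_of_magnitude))
```

Even cutting the growth rate drastically barely changes the conclusion, since the exponent scales only linearly with the rate.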

So what will these distant descendants think of their ancestors? They will find much in common with our distant hunting ancestors, who also continued for ages at near subsistence level in a vast fragmented world with slow growth amid rare slow contact with strange distant cultures. While those ancestors were quite ignorant about their world, and immersed in a vast wild nature instead of a vast space of people, their behavior was still pretty well adapted to the world they lived in. While they suffered many misconceptions, those illusions rarely made them much worse off; their behavior was usually adaptive.

When our distant descendants think about our era, however, differences will loom larger. Yes they will see that we were more like them in knowing more things, and in having less contact with a wild nature. But our brief period of very rapid growth and discovery and our globally integrated economy and culture will be quite foreign to them. Yet even these differences will pale relative to one huge difference: our lives are far more dominated by consequential delusions: wildly false beliefs and non-adaptive values that matter. While our descendants may explore delusion-dominated virtual realities, they will well understand that such things cannot be real, and don’t much influence history. In contrast, we live in the brief but important “dreamtime” when delusions drove history. Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.

Why is our era so delusory?

  1. Our knowledge has been growing so fast, and bringing such radical changes, that many of us see anything as possible, so that nothing can really be labeled delusion.
  2. Rich folks like us have larger buffers of wealth to cushion our mistakes; we can live happily and long even while acting on crazy beliefs.
  3. We humans evolved to signal various features of ourselves to one another via delusions; we usually think that the various things we do to signal are done [for other reasons](politics-isnt-a). For example, we think we pay for docs to help our loved ones get well, rather than to [show that we care](showing-that-yo). We think we do politics because we want to help our nation, rather than to signal our character and loyalty. We are overconfident in our abilities in order to convince others to have confidence in us, and so on. But while our ancestors’ delusions were well adapted to their situations, and so didn’t hurt them much, the same delusions are not nearly as adapted to our rapidly changing world; our signaling induced delusions hurt us more.
  4. Humans seem to have evolved to emphasize signaling more in good times than in bad. Since very few physical investments last very long, the main investments one can make in good times that last until bad times are allies and reputation. So we are built to, in good times, spend more time and energy on leisure, medicine, charity, morals, patriotism, and so on. Relative to our ancestors’ world, our whole era is one big very good time.
  5. Our minds were built with a near mode designed more for practical concrete reasoning about things up close, and a far mode [designed more](a-tale-of-two-tradeoffs) for presenting a good image to others via our abstract reasoning about things far away. But our minds must now deal with a much larger world where many relevant things are much further away, and abstract reasoning is more useful. So we rely more than did our ancestors on that abstract far mode capability. But since that far mode was tuned more for presenting a good image, it is much more tolerant of good-looking delusions.
  6. Tech now enables more exposure to mood-altering drugs and arts, and specialists make them into especially potent “super-stimuli.” Our ancestors used drugs and went into art appreciation mode rarely, e.g., around the campfire listening to stories or music, or watching dances. Since such contexts were relatively safe places, our drug and art appreciation modes are relatively tolerant of delusions. But today drugs are cheap, we can hear music all the time, most surfaces are covered by art, and we spend much of our day with stories from TV, video games, etc. And all that art is made by organized groups of specialists far better than the typical ancestral artist.
  7. We were built to be influenced by the rhetoric, eloquence, [difficulty](academias-function), drama, and repetition of arguments, not just their logic. Perhaps this once helped us to ally with high status folks. And we were built to show our ideals via the stories we like, and also to like well-crafted stories. But today we are exposed to arguments and stories by folks far more expert than found in ancestral tribes. Since we are built to be quite awed and persuaded by such displays, our beliefs and ideals are highly influenced by our writers and story-tellers. And these folks in turn tell us what we want to hear, or what their patrons want us to hear, neither of which need have much to do with reality.

These factors combine to make our era the most consistently and consequentially deluded and unadaptive of any era ever. When they remember us, our distant descendants will shake their heads at the demographic transition, where we each took far less than full advantage of the reproductive opportunities our wealth offered. They will note how we instead spent our wealth to buy products we saw in ads that talked mostly about the sort of folks who buy them. They will lament our obsession with super-stimuli that hijacked our evolved heuristics to give us taste without nutrition. They will note we spent vast sums on things that didn’t actually help on the margin, such as on medicine that didn’t make us healthier, or education that didn’t make us more productive.

Our descendants will also remember our adolescent and extreme mating patterns, our extreme gender personalities, and our unprecedentedly fierce warriors. They will be amazed at the strange religious, political, and social beliefs we acted on, and how we preferred a political system, democracy, designed to emphasize the hardly-considered fleeting delusory thoughts of the median voter rather than the considered opinions of our best experts.

Perhaps most important, our descendants may remember how history hung by a precarious thread on a few crucial coordination choices that our highly integrated rapidly changing world did or might have allowed us to achieve, and the strange delusions that influenced such choices. These choices might have been about global warming, rampaging robots, nuclear weapons, bioterror, etc. Our delusions may have led us to do something quite wonderful, or quite horrible, that permanently changed the options available to our descendants. This would be the most lasting legacy of this, our explosively growing dream time, when what was once adaptive behavior with mostly harmless delusions became strange and dreamy unadaptive behavior, before adaptation again reasserted a clear-headed relation between behavior and reality.

Our dreamtime will be a time of legend, a favorite setting for grand fiction, when low-delusion heroes and the strange rich clowns around them could most plausibly have changed the course of history. Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.

DreamTime

Posted on 2010-06-05

The most common voluntary activity is not eating, drinking alcohol, or taking drugs. It is not socializing with friends, participating in sports, or relaxing with the family. While people sometimes describe sex as their most pleasurable act, time-management studies find that the average American adult devotes just four minutes per day to sex.

Our main leisure activity is, by a long shot, participating in experiences that we know are not real. When we are free to do whatever we want, we retreat to the imagination—to worlds created by others, as with books, movies, video games, and television (over four hours a day for the average American), or to worlds we ourselves create, as when daydreaming and fantasizing. …

This is a strange way for an animal to spend its days. Surely we would be better off pursuing more adaptive activities—eating and drinking and fornicating, establishing relationships, building shelter, and teaching our children. Instead, 2-year-olds pretend to be lions, graduate students stay up all night playing video games, young parents hide from their offspring to read novels, and many men spend more time viewing Internet pornography than interacting with real women. …

One solution to this puzzle is that the pleasures of the imagination exist because they hijack mental systems that have evolved for real-world pleasure. We enjoy imaginative experiences because at some level we don’t distinguish them from real ones. …

Just as artificial sweeteners can be sweeter than sugar, unreal events can be more moving than real ones. There are three reasons for this. First, fictional people tend to be wittier and more clever than friends and family, and their adventures are usually much more interesting. I have contact with the lives of people around me, but this is a small slice of humanity, and perhaps not the most interesting slice. My real world doesn’t include an emotionally wounded cop tracking down a serial killer, a hooker with a heart of gold, or a wisecracking vampire. As best I know, none of my friends has killed his father and married his mother. But I can meet all of those people in imaginary worlds.

Second, life just creeps along, with long spans where nothing much happens. The O.J. Simpson trial lasted months, and much of it was deadly dull. Stories solve this problem—as the critic Clive James once put it, “Fiction is life with the dull bits left out.” This is one reason why Friends is more interesting than your friends.

Finally, the technologies of the imagination provide stimulation of a sort that is impossible to get in the real world. A novel can span birth to death and can show you how the person behaves in situations that you could never otherwise observe. In reality you can never truly know what a person is thinking; in a story, the writer can tell you. (more)

Yes, modern stories and art are more enticing than were those of our distant forager ancestors. But their stories and art also occupied much of their time, especially when food was plentiful. It seems rather implausible that this was only because “imagination … hijack[s] mental systems that have evolved for real-world pleasure.” Surely our foragers would have evolved a resistance to such imagination, if it in fact wasted valuable time. I’m pretty confident that because foragers had stories and art, stories and art must have served, and still serve, important functions.

Modern humans often prefer to believe that the activities which they most treasure have no evolutionary function – that they were accidents. This attitude helps them stay blind to those functions, awareness of which would make their treasured activities seem less noble.

Dreamtime Social Games

Posted on 2019-09-27

Ten years ago, I posted one of my most popular essays: “This is the Dreamtime.” In it, I argued that, because we are rich,

Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.

Today I want to talk about dreamtime social games.

For at least a million years, our ancestors wandered the Earth in small bands of 20-50 people. These groups were so big that they ran out of food if they stayed in one place, which is why they wandered. But such groups were big and smart enough to spread individual risks well, and to be relatively safe from predators.

So in good times at least, the main environment that mattered to our forager ancestors was each other. That is, they succeeded or failed mostly based on winning social games. Those who achieved higher status in their group gained more food, protection, lovers, and kids. And so, while foragers pretended that they were all equal, they actually spent much of their time and energy trying to win such status games. They tried to look impressive, to join respected alliances, to undermine rival alliances, and so on. Usually in the context of grand impractical leisure and play.

As I described recently, status is usually based on a wide range of clues regarding one’s impressiveness, and the relative weight on these clues does vary across cultures. But there are many generic clues that tend to be important in most all cultures, including strength, courage, intelligence, wit, art, loyalty, social support, etc. When an ability was important for survival in a local environment, cultural selection tended to encourage societies to put more weight on that ability in local status ratings, especially when their society felt under threat. So given famine, hunters gain status; given war, warriors gain status; and when searching for a new home, explorers gain status.

But when the local environment seemed less threatening, humans have tended to revert back to a more standard human social game, focused on less clearly useful abilities. And the more secure a society, and the longer it has felt secure, the more strongly it reverts. So across history the social worlds of comfortable elites have been remarkably similar. In social worlds such as Versailles, the Tale of Genji, or Google today, we see less emphasis on abilities that help win in a larger, harsher world, or that protect this smaller world from larger worlds, and more emphasis on complex internal politics based on beauty, wit, abstract ideas, artistic tastes, political factions, and who likes who.

That is, as people feel safer, local status metrics and social institutions drift toward emphasizing likability over effectiveness, popularity and impressiveness over useful accomplishment, and art and design over engineering. And as our world has been getting richer and safer for many centuries now, our culture has long been moving toward emphasizing such forager values and attitudes. (Though crises like wars often push us back temporarily.) “Liberals” tend to have moved further on this path than “conservatives”, as indicated by typical jobs:

jobs that lean conservative … [are] where there are rare big bad things that can go wrong, and you want workers who can help keep them from happening. … Conservatives are more focused on fear of bad things, and protecting against them. … Jobs that lean liberal… [have] small chances that a worker will cause a rare huge success … [or] people who talk well.

Also, “conservative” attitudes toward marriage have focused on raising kids and on a division of labor in production, while “liberal” attitudes have focused on sex, romance, and sharing leisure activities.

Rather than acknowledging that our status priorities change as we feel safer, humans often give lip service to valuing useful outcomes, while actually more valuing the usual social game criteria. So we pretend to go to school to learn useful class material, but we actually gain prestige while learning little that is useful. We pretend that we pick lawyers who win cases, yet don’t bother to publish track records and mainly pick lawyers based on institutional prestige. We pretend we pick doctors to improve health, but also don’t publish track records and mainly pick via institutional prestige, and don’t notice that there’s little correlation between health and medicine. We pretend to invest in hedge funds to gain higher returns, but really gain status via association with impressive fund managers, and pay via lower average returns.

I recently realized that, alas, my desire to move our institutions more toward “paying for results” is at odds with this strong social trend. Our institutions could be much more effective at getting us the things we say we want out of them, but we seem mostly content to let them be run by the usual social status games. We put high status people in charge and give them a lot of discretion, as long as they give lip service to our usual practical goals. It feels to most people like a loss in collective status if they let their institutions actually focus too much on results. A focus on results would probably result in the rise to power of less impressive looking people who manage to get more useful things done. That is what we’ve seen when firms have adopted prediction markets. At first firms hope that such markets may help them identify the best informed employees. But they are disappointed to learn that winners tend not to look socially impressive, but are more nerdy, difficult, inarticulate contrarians. Not the sort they actually want to promote.

Paying more for results would feel to most people like having to invite less suave and lower class engineers or apartment sups to your swanky parties because they are useful as associates. Or having to switch from dating hip hunky Tinder dudes to reliable practical guys with steady jobs. In status terms, that all feels less like admiring prestige and more like submitting to domination, which is a forager no-no. Paying for results is the sort of thing that poor practical people have to do, not rich prestigious folks like you.

Of course our society is full of social situations where practical people get enough rewards to keep them doing practical things. So that the world actually works. People sometimes try to kill such things, but then they suffer badly and learn to stop. But most folks who express interest in social reforms seem to care more about projecting their grand hopes and ideals, relative to making stuff work better. Strong emotional support for efficiency-driven reform must come from those who have deeply felt the sting of inefficiency. Perhaps regarding crime?

Ordinary human intuitions work well for playing the usual social status games. You can just rely on standard intuitions re who you like and are impressed by, and who you should say what to. In contrast, figuring out how to actually and effectively pay for results is far more complex, and depends more on the details of your world. So good solutions there are unlikely to be well described by simple slogans, and are not optimized for showing off one’s good values. Which, alas, seems another big obstacle to creating better institutions.

We Moderns Are Status-Drunk

Posted on 2021-06-27

Twelve years ago I posted on how our era is a rare unique “dreamtime” of fast growth, wide cultural integration, and delusional beliefs. But I think I missed a big reason why we have the delusions we do: as we get rich, we each increasingly over-estimate our relative social status. Let me explain.

The core idea of evolutionary psychology is that evolution shaped our behaviors to be adaptive in our ancestral environments. That is, we do stuff that gives us more descendants. But because our ancestors only experienced a limited range of environments, we only evolved behavior rules sufficient to induce adaptive behavior in those actual environments. This made our behavior indeterminate in the other new environments which humans have experienced since then. So a re-run of the process of evolution could easily lead to different behaviors in these new environments. That is, human behavior today results not just from adaptation to ancestral environments, but also from the many random ways that evolution happened to encode our behavior in rules.

For example, our ancestors needed to drink water to avoid dehydration, but because in their environments water always had the same combination of water smell and water feel, we could have evolved either to check that stuff is water by its smell, or by its feel. If those two water features always go together, and if both methods are just as easy, then this difference won’t make much difference to behavior. We find water, check that it is water, and drink it. But if later we encountered stuff that had water smell but not water feel, or water feel but not water smell, then these two different ways to detect water might lead to very different behaviors. For example, water-smell humans might drink stuff that smells but doesn’t feel like water, while water-feel humans would not drink such stuff.

In this post, I want to suggest that much of the “modern” human style which has arisen since the industrial revolution results from a particular way that evolution happened to encode human detection of relative status. This has made human history go surprisingly well in some ways, and surprisingly badly in others. Had evolution happened to have coded our status detection machinery differently, these last few centuries might have played out very differently. And perhaps they did, in alien histories. But before we get into that, let us first see how our status detection methods have shaped the modern human style.

Most social animals have status ladders, and humans are no exception. Selfishly optimal animal behavior depends on where an animal sits in such ladders. Thus animals need ways to detect the relative status rank of themselves and potential interaction partners. The same applies to humans, though humans had some new ways to mark and assert status, and so needed some new ways to judge status.

My key hypothesis is this: evolution had humans use their absolute income/wealth to judge their relative status. (I’m talking here about overall status in the larger community, not status relative to particular associates; we have many better clues to judge that.) Yes, this method would work badly in environments where communities varied greatly in average levels of absolute income/wealth. In that case, someone rich might think that they had high relative status, when in fact most everyone in their society was also rich.

But before the industrial revolution there were few persistent differences in average income/wealth across societies. Yes, there were temporary famines and pandemics, and so good times and bad, but these periods were short relative to human lifetimes. So until recently absolute wealth, averaged over many years, was in fact a good indicator of relative status.

However, for the first time in history the industrial revolution enabled income/wealth to grow faster than did human population, inducing a rapid increase in average income/wealth, an increase that has been continuing for several centuries now. As a result, our status detection systems have severely misfired. They tell us each that, because we are rich, we have high relative status. And the richer we have become, the more severe has been this error.

To judge how this has distorted our behavior, we mainly just need to know how humans had previously evolved to adjust their behavior to relative status. For forager and farmer era humans, what behaviors were more adaptive for the high in status? We can find many such differences.

For example, for most social mammals, being higher status protects you more from stressful life events, so that you less often invoke the standard mammal stress response. By not spending on stress, your body invests more in growth and immunity. So higher status primates are less sick, and live longer. Thus this theory predicts that humans came to live much longer after the industrial revolution. You might think that this outcome is also predicted by our being able to afford more medicine, nutrition, clean water, and other public health measures. But in fact these factors do a poor job of explaining the magnitude and steadiness of the mortality fall over the last few centuries. Changes in these other factors have been weaker and less steady than declining mortality.

Higher status animals also tend more to be group leaders, and thus to be peacemakers regarding local disputes. Yes, the leaders of a group may manage its disputes with outsiders, and then they may need to act tough. But leaders are supposed to take sides less regarding internal disputes, and try more to resolve them peacefully. That is, they have a wider moral circle, and are more cooperative and pro-social. Thus higher status animals less often pick fights with associates, so they are on average more peaceful. And low status humans are consistently more violent than are high status ones. Thus this theory predicts what we have seen: declining rates of violence and conflict, less war, and widening moral circles.

However, even as wars get rare, the fact that soldiers are higher status means that more people expect to participate in wars when they happen; soldiering has become more democratic. This wider view of leaders seems to be implemented in part via leaders taking on more abstract/far views, relative to concrete/near views. This predicts that we moderns increasingly take on far views, relative to near views, and this seems roughly right.

As status markers tend to complement each other, it makes sense for people with some markers to work harder to acquire more such markers. Also, the high in status tend to have more resources and better abilities, both of which suggest higher returns from investing in more status markers. Thus people who believe they are high status naturally try to invest more in rising even further in status. What specifically they will do depends on what counts more for status in their society for their age, gender, etc. For example, they might do sports, combat, poetry, music, art, crafts, travel, scholarship, invention, etc. But the key prediction is: we are more mad for status, as we think we already have a lot of it.

Also, as status is often conferred for showing range and variety in such abilities, we pursue such range and variety. And as most of these things require training, this predicts more school, as does the fact that school tends to confer status directly. Over the last few centuries we have in fact seen a consistent rise in the fraction of our time and energy spent on all these things, and also a rising emphasis on variety in such things. We do more school, even though we don’t seem to learn much useful there. We have slowly taken more leisure as we’ve become richer, but this shift has been slower than many had expected. Plausibly this is because work also gives us great status, and it is mainly the pressure for variety in our status markers that makes us also pursue non-work status.

In most societies, investments in fertility take time and energy away from investments in status. Yes, fertility confers some status, but in our world not as much. As people get rich, they are tempted to invest less in immediate fertility in order to gain in status, which could help them or their children later become a high status “king” or “queen,” a role that could then allow much higher fertility later. For example, a young woman might delay fertility to invest in poetry, music, etc., hoping to then be chosen as queen, which would allow her kids to have many grandkids. Or parents might choose to have fewer children, so that they can invest in more status markers for each child that they have. Both strategies reduce overall fertility, and in fact fertility has fallen dramatically over the last few centuries, seemingly in response to local wealth levels. The other explanations offered for this fertility fall are mostly quite unsatisfactory.

While in most firms various political factions vie for dominance, low level workers are often well advised to “keep their heads down” and just do their jobs. But high level managers must pick sides and play the game. More generally, high status people are expected to participate in elite conversation and governance. That is, they are more expected to take on formal governance roles, and also to speak up and express opinions on the issues of the day. Which will naturally result in them allying with political factions. And to do this well they need to keep up with gossip and the news. Also, we all tend to rise in status when we seem to influence the behavior of others, but fall when lower status others seem to influence us.

All this induces higher status people to track more news, and to talk more, more visibly, and more politically. It induces us to make and push more behavior recommendations, and to try harder to govern everything, creating more governance roles to fill. As democracy allows more people to participate in governance, we predict more democracy. And in fact over the last few centuries we have seen people more eager for news, talk, politics, democracy, government, and paternalistic policies.

As high status people are held to higher standards regarding social and moral norms, we hold ourselves to higher standards, but are also more willing to criticize others who claim high status but fail to meet such standards. Regarding religion, seeing ourselves as higher status makes us more expect to be prophets, priests, monks, martyrs, and activists, but less to be the prototypical attendee of religious services, the meek supplicant to whom religion offers comfort and meaning in a hard life. And in fact we are more moral, more morally critical, seek more to be prophets and activists, and attend church less.

The high in status tend to have relationships and projects that last longer, so they need to attend to longer timescales. And they suffer less theft and loss of relations, which can otherwise discourage long term investments. Thus the high in status discount the future less. And we do in fact see over time less discounting and longer time horizons, expressed in particular in lower interest rates.

All told, this theory seems pretty successful to me. The assumption that evolution had humans estimate their relative status via their absolute income/wealth predicts many trends and unique styles of the industrial era, including rising lifespans, lower fertility, falling violence, more school, more effort into art/travel/invention/etc., and much more. We now have a deeper understanding of how and why we modern humans have a different style from ancient humans. Note that as evolution should slowly correct our mistaken non-adaptive way to estimate relative status, this modern era won’t last forever; we will eventually wake from our dreamtime.

Science fiction often depicts alien worlds with very advanced technology, and yet with social styles and attitudes more like those of our ancients. I always thought that a mistake, but this analysis suggests it isn’t so crazy. Had evolution had us use relative wealth to estimate our relative status, most of these changes would have not happened, or been much weaker. We might well have continued more with ancient human styles in the industrial era and beyond.

Early in the industrial era many expressed great fears for where it might go, and while those fears seem to have been overblown, this analysis suggests that they weren’t crazy. A more ancient-style industrial era would have had more violence and war, shorter lives, more work and less emphasis on variety in leisure, and thus more regimentation of leisure as well as work. There’d also be less democracy and politics, and less obsession with social media. A dramatically different world that might have been, and may well have actually existed in alien histories.

Added 28Jun: During the forager era, humans had strong direct contact with everyone in their band, and so had relatively clear signals about their status relative to each such person. Which easily added up to one’s relative status overall. So it may have been the introduction of larger communities (~1000) in the farming era that created a need for ways to estimate one’s status relative to people with whom one did not have much contact. That is where it would have been handy to be able to just look at yourself to infer your relative status. Looking at your personal wealth would have worked well then.

Earth: A Status Report

Posted on 2023-01-02

In a universe that is (so far) almost entirely dead, we find ourselves to be on a rare planet full not only of life, but now also of human-level intelligent self-aware creatures. This makes our planet roughly a once-per-million-galaxy rarity, and if we ever get grabby we can expect to meet other grabby aliens in roughly a billion years.

We see that our world, our minds, and our preferences have been shaped by at least four billion years of natural selection. And we see evolution going especially fast lately, as we humans pioneer many powerful new innovations. Our latest big thing: larger scale organizations, which have induced our current brief dreamtime, wherein we are unusually rich.

For preferences, evolution has given us humans a mix of (a) some robust general preferences, like wanting to be respected and rich, (b) some less robust but deeply embedded preferences, like preferring certain human body shapes, and (c) some less robust but culturally plastic preferences, such as which particular things each culture finds more impressive.

My main reaction to all this is to feel grateful to be a living intelligent creature, who is compatible enough with his world to often get what he wants. Especially to be living in such a rich era. I accept that I and my descendants will long continue to compete (in part by cooperating of course), and that as the world changes evolution will continue to change my descendants, including as needed their values.

Many see this situation quite differently from me, however. For example, “anti-natalists” see life as a terrible crime, as the badness of our pains outweighs the goodness of our pleasures, resulting in net negative value lives. They thus want life on Earth to go extinct. Maybe, they say, it would be okay to only create really-rich better-emotionally-adjusted creatures. But not the humans we have now.

Many kinds of “conservatives” are proud to note that their ancestors changed in order to win prior evolutionary competitions. But they are generally opposed to future such changes. They want only limited changes to our tech, culture, lives, and values; bigger changes seem like abominations to them.

Many “socialists” are furious that some of us are richer and more influential than others. Furious enough to burn down everything if we don’t switch soon to more egalitarian systems of distribution and control. The fact that our existing social systems won difficult prior contests does not carry much weight with them. They insist on big radical changes now, and disavow any failures associated with prior attempts made under their banner. None of that was “real” socialism, you see.

Due to continued global competition, local adoption of anti-natalist, conservative, or socialist agendas seems insufficient to ensure these as global outcomes. Now most fans of these things don’t care much about long term outcomes. But some do. Some of those hope that global social pressures, via global social norms, may be sufficient. And others suggest using stronger global governance.

In fact, our scales of governance, and level of global governance, have been increasing over centuries. Furthermore, over the last half century we have created a world community of elites, wherein global social norms and pressures have strong power.

However, competition at the largest scales has so far been our only robust solution to system rot and suicide, problems that may well apply to systems of global governance or norms. Furthermore, centralized rulers may be reluctant to allow civilization to expand to distant places which they would find it harder to control.

This post resulted from Agnes Callard asking me to comment on Scott Alexander’s essay Meditations On Moloch, wherein he takes similarly stark positions on these grand issues. Alexander is irate that the world is not adopting various utopian solutions to common problems, such as ending corporate welfare, shrinking militaries, and adopting common hospital medical record systems. He seems to blame all of that, and pretty much anything else that has ever gone wrong, on something he personalizes into a monster “Moloch.” And while Alexander isn’t very clear on what exactly that is, my best read is that it is the general phenomenon of competition (at least the bad sort); that at least seems central to most of the examples he gives.

Furthermore, Alexander fears that, in the long run, competition will force our descendants to give up absolutely everything that they value, just to exist. Now he has no empirical or theoretical proof that this will happen; his post is instead mostly a long passionate primal scream expressing his terror at this possibility.

(Yes, he and I are aware that cooperation and competition systems are often nested within each other. The issue here is about the largest outer-most active system.)

Alexander’s solution is:

Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans. … Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

By which Alexander means: start with a tiny weak AI, induce it to “foom” (sudden growth from tiny to huge), resulting in a single “super-intelligent” AI who rules our galaxy with an iron fist, but wrapped in the velvet glove of being “friendly” = “aligned”. By definition, such a creature makes the best possible utopia for us all. Sure, Alexander has no idea how to reliably induce a foom or to create an aligned-through-foom AI, but there are some people pondering these questions (who are generally not very optimistic).

My response: yes of course if we could easily and reliably create a god to manage a utopia where nothing ever goes wrong, maybe we should do so. But I see enormous risks in trying to induce a single AI to grow crazy fast and then conquer everything, and also in trying to control that thing later via pre-foom design. I also fear many other risks of a single global system, including rot, suicide, and preventing expansion.

Yes, we might take this chance if we were quite sure that in the long term all other alternatives result in near zero value, while this remained the only scenario that could result in substantial value. But that just doesn’t seem remotely like our actual situation to me.

Because: competition just isn’t as bad as Alexander fears. And it certainly shouldn’t be blamed for everything that has ever gone wrong. More like: it should be credited for everything that has ever gone right among life and humans.

First, we don’t have good reasons to expect competition, compared to an AI god, to lead more reliably to the extinction either of life or of creatures who value their experiences. Yes, you can fear those outcomes, but I can as easily fear your AI god.

Second, competition has so far reigned over four billion years of Earth life, and at least a half billion years of Earth brains, and on average those seem to have been brain lives worth living. As have been the hundred billion human brain lives so far. So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value.

Now I suspect that Alexander might respond here thus:

The way that evolution has so far managed to let competing creatures typically achieve their values is by having those values change over time as their worlds change. But I want descendants to continue to achieve their values without having to change those values across generations.

However, relatively soon on evolutionary timescales, I’ve predicted that, given further competition, our descendants will come to just directly and abstractly value reproduction. And then after that, no descendant need ever change their values. But I think even that situation isn’t good enough for Alexander; he wants our (his?) current human values to be the ones that continue and never change.

Now taken very concretely, this seems to require that our descendants never change their tastes in music, movies, or clothes. But I think Alexander has in mind only keeping values the same at some intermediate level of abstraction. Above the level of specific music styles, but below the level of just wanting to reproduce. However, not only has Alexander not been very clear regarding which exact value abstraction level he cares about, I’m not clear on why the rest of us should agree with him about this level, or care as much as he does about it.

For example, what if most of our descendants get so used to communicating via text that they drop talking via sound, and thus also get less interested in music? Oh they like artistic expressions using other mediums, such as text, but music becomes much more of a niche taste, mainly of interest to that fraction of our descendants who still attend a lot to sound.

This doesn’t seem like such a terrible future to me. Certainly not so terrible that we should risk everything to prevent it by trying to appoint an AI god. But if this scenario does actually seem that terrible to you, I guess maybe you should join Alexander’s camp. Unless all changes seem terrible to you, in which case you might join the conservative camp. Or maybe all life seems terrible to you, in which case you might join the anti-natalists.

Me, I accept the likelihood and good-enough-ness of modest “value drift” due to future competition. I’m not saying I have no preferences whatsoever about my descendants’ values. But relative to the plausible range I envision, I don’t feel greatly at risk. And definitely not so much at risk as to make desperate gambles that could go very wrong.

You might ask: if I don’t think making an AI god is the best way to get out of bad equilibria, what do I suggest instead? I’ll give the usual answer: innovation. For most problems, people have thought of plausible candidate solutions. What is usually needed is for people to test those solutions in smaller scale trials. With smaller successes, it gets easier to entice people to coordinate to adopt them.

And how do you get people to try smaller versions? Dare them, inspire them, lead them, whatever works; this isn’t something I’m good at. In the long run, such trials tend to happen anyway, by accident, even when no one is inspired to do them on purpose. But the goal is to speed up that future, via smaller trials of promising innovation concepts.

Added 5Jan: While I was presuming that Alexander had intended substantial content to his claims about Moloch, many are saying no, he really just meant to say “bad equilibria are bad”. Which is just a mood well-expressed, but doesn’t remotely support the AI god strategy.

On Teen Angst

Posted on 2010-06-12

Two complementary theories of teen angst:

  1. Our homo hypocritus ancestors overtly followed idealistic norms, such as against dominance and bragging, but covertly violated them. They also cheated often on norms of sexual fidelity. An important part of growing up in such a world was learning to see that acts oft deviate from spoken ideals, and to affirm ideals via outrage at such hypocrisy, before one was old enough to have been very hypocritical oneself. And since the young seek to displace the old in the positions of highest status, old hypocrisy makes a good rallying cry.
  2. In the vast majority of the past, and the vast majority of the future, people grow up in a world for which they were designed – their inborn expectations and intuitions are good guides to their world. But in this, the great Dreamtime, only ten thousand years old, mostly done, and near its peak, our inborn intuitions are poor guides – we awake into a world we find strange, fake, and wrong. So when young, we are drawn to stories about righting those wrongs by exposing this fake world, replacing it with a true one, and in the process having an adventure where we prove our mettle and impress potential mates and allies.

Below are quotes on teen angst in fiction. They inspire this open letter of mine:

Dear angsty teen,

As you suspect, the world into which you have been born is indeed strange, fake, and wrong, relative to your inborn intuitions. Adults have not been frank with you, or themselves, about how often they fail to live up to your ideals or theirs. In fact, much of the function of school and other ways adults shape your youth is to use social pressure to get you to replace your inborn ideals with new given ideals, and to accept your and others’ hypocrisies.

There may be places you could move which better fit your inborn ideals and expectations, and there may be ways to change your current place to better fit such things. You may even devote some energy to such moving or changing. But the vast majority of you will mostly forget your angst, eagerly trading your inborn ideals for the hope of social approval and respect. A few of you will hold the most strongly to your inborn ideals, paying great costs to move or change. Some such efforts will even succeed, moving your world closer to your inborn ideals.

But know that your world is stable enough so that if you actually “fight the power,” you will on average lose. Most of what looks like young “rebels” winning is actually part of the established order. New art, tech, political groups, etc. often replace old ones with rhetoric about how the change better achieves natural ideals. Such rhetoric can bind “rebels” together, helping them beat rivals. But most such changes do little about hypocrisy or idealism overall, and the few that do mostly reflect larger trends, not a triumph of some group’s moral fervor.

On average, real rebels who most hold to their inborn ideals do not thereby gain social approval or respect – they lose it. Real rebels are little like the heroes of your teen angst fiction, who accumulate fascinating stories while proving their mettle and impressing potential mates and allies. While some real rebels succeed in exposing more hypocrisy to those willing to listen, it is the willingness to listen that is the main block. Those willing to look for hypocrisy can find it easily enough themselves, most anywhere they look.

Finally, pause for a moment and ask: how sure can you be that your inborn ideals are really better than the ideals society wishes to imprint on you? Your inborn ideals were adaptive to a world that is long gone, and only then in conjunction with lots of hypocrisy; the ideals adults want to imprint on you instead seem better adapted to your current world. There is no solid rock on which you can stand; we all float in a sea of choice; choose your ideals, and your level of hypocrisy, and pay the price.

Now for those quotes. On JD Salinger:

Mr. Salinger had such unerring radar for the feelings of teenage angst and vulnerability and anger … Mr. Salinger’s people tend to be outsiders — spiritual voyagers shipwrecked in a vulgar and materialistic world, misfits who never really outgrew adolescent feelings of estrangement. … Such characters have a yearning for some greater spiritual truth, but they are also given to an adolescent either/or view of the world and tend to divide people into categories: the authentic and the phony, those with an understanding … and those coarse, unenlightened morons who will never get it — a sprawling category, it turns out, that includes everyone from pompous college students parroting trendy lit crit theories to fashionable, well-fed theater-goers to self-satisfied blowhards who recount every play in a football game or proudly wear tattersall vests.

On Dystopian Teen Fiction:

A recent boom in dystopian fiction for young people. … Intricately imagined worlds. … For example, all sixteen-year-olds undergo surgery to conform to a universal standard of prettiness. … Teen-age boys awaken, all memories of their previous lives wiped clean, in a walled compound surrounded by a monster-filled labyrinth. The books tend to end in cliff-hangers. … There are, or will soon be, books about teen-agers slotted into governmentally arranged professions and marriages or harvested for spare parts or genetically engineered for particular skills or brainwashed by subliminal messages embedded in music or outfitted with Internet connections in their brains. Then, there are the post-apocalyptic scenarios in which humanity is reduced to subsistence farming or neo-feudalism. … A new, better way of life can be assembled from the ruins. …

Dystopian fiction … it’s about what’s happening, right this minute, in the stormy psyche of the adolescent reader. “The success of ‘Uglies,’ … is partly thanks to high school being a dystopia.” … As a tool of practical propaganda, the [Hunger Games] don’t make much sense. … If, on the other hand, you consider the games as a fever-dream allegory of the adolescent social experience, they become perfectly intelligible. Adults dump teen-agers into the viper pit of high school, spouting a lot of sentimental drivel about what a wonderful stage of life it’s supposed to be. The rules are arbitrary, unfathomable, and subject to sudden change. A brutal social hierarchy prevails, with the rich, the good-looking, and the athletic lording their advantages over everyone else. To survive you have to be totally fake. Adults don’t seem to understand how high the stakes are; your whole life could be over, and they act like it’s just some “phase”! Everyone’s always watching you, scrutinizing your clothes or your friends … but no one cares who you really are or how you really feel about anything.

The typical arc of the dystopian narrative mirrors the course of adolescent disaffection. First, the fictional world is laid out. It may seem pleasant enough. Tally … looks forward to the surgery that will transform her into a Pretty. … Then somebody new, a misfit, turns up, or the hero stumbles on an incongruity. A crack opens in the façade. If the society is a false utopia, the hero discovers the lie at its very foundation: the Pretties are lobotomized when they receive their plastic surgery. … If the society is frankly miserable or oppressive, the hero will learn that, contrary to what he’s been told, there may be an alternative out there, somewhere. Conditions at home become more and more unbearable until finally the hero, alone or with a companion, decides to make a break for it, heading out across dangerous terrain. …

Incorporating the particular flavor of contemporary kid culture. Waking up in a hostile, confined place without an identity or any notion of what you’re supposed to do or how you can get out … is a scenario often found in video games. … There’s more hand-to-hand combat in these dystopias. … Some [kids] will surely grow up to write dystopian tales of their own, incited by technologies or social trends we have yet to conceive. By then, reality TV and privacy on the Internet may seem like quaint, outdated problems. But the part about the world being broken or intolerable, about the need to sweep away the past to make room for the new? That part never gets old.

Added 13June: Reports of teen angst seem more common in industrial societies and among farmer-era aristocrats than elsewhere. This could be because such folks are more articulate, and have high enough status to complain. If not, this fact seems to favor the second of the two explanations I offered above.