Posted on 2010-10-06
Farming required huge behavior changes, mostly unnatural to foragers. A key enabler seems to have been increased self-control to follow social norms. But what allowed this increased self-control?
One source was moving from vague spirituality to religions with powerful and morally-outraged gods who punish norm violators. In addition (as I’ll explain tomorrow), high densities and larger social networks made stronger credible threats to ostracize folks for specific deviant acts. Yes, both these mechanisms require the fear that norm violations could lead to great harm, even death. But for poor farmers living on the edge, such threats were easy to come by.
Interestingly, this death-threat pressure could work even without farmers being conscious of the relevant threats or fears. In fact, farming society probably worked better with homo hypocritus farmers, consciously denying that strong social pressures pushed them to do what would otherwise feel unnatural. A large robust literature makes it clear that inducing people to unconsciously think about death pushes them to more strongly obey and defend cultural norms, especially norms framed as disgust at animal-like behavior. Today, fear of death encourages folks to obey authorities, and be more loyal to their communities and spouses, all strong farmer norms:
Empirical support for [Terror management theory] has originated from more than 175 published experiments which have been conducted cross-culturally both nationally and internationally. … People, when reminded of their own inevitable death, will cling more strongly to their cultural worldviews. …. Nations or persons who have experienced traumas are more attracted to strong leaders who express traditional, pro-establishment, authoritarian viewpoints. … Many terror management studies have examined elicited affect as a covariate to mortality salience, and only one reviewed study has found elicited affect (fear) in the terror management process. Why? Terror management is a non-conscious process. …
Research corroborates the link between love and the fear of death. Studies reveal an association between close relationship seeking and mortality salience. Moreover, further studies demonstrate that the desire for close relationships under conditions of mortality salience trumps other needs including self-esteem and maintenance (pride) or avoidance (shame/guilt) … [Researchers] find the rejection of animality or creatureliness to function as the central tendency driving disgust … Studies demonstrate that mortality salience is associated with the rejection of animal traits. (more)
Subtle reminders of death on a subconscious level motivates a statistically significant number of subjects to exhibit biased and xenophobic type behaviors, such as gravitating toward those who they perceive as culturally similar to themselves and holding higher negative feelings and judgments toward those they perceive as culturally dissimilar to themselves. (more)
Note that fear-of-death based norm-enforcement mechanisms should work better on poor folk for whom death is a more immediate threat. Farming culture took advantage of a prior natural fear of death to push farming ways, but as farmers got richer, such pressures weakened, inclining folks to revert to more natural-feeling forager ways.
I suspect that social scientists, even those favoring “behavioral” explanations, consistently neglect fear of (thinking about) death as an explanation of social phenomena. Social scientists also don’t like to think about death, and thinking about explanations involving fear of death makes social scientists think too much about death. Added: tijmz points out an ’08 Science study showing more fear-sensitive folks are more conservative:
Individuals with measurably lower physical sensitivities to sudden noises and threatening visual images were more likely to support foreign aid, liberal immigration policies, pacifism, and gun control, whereas individuals displaying measurably higher physiological reactions to those same stimuli were more likely to favor defense spending, capital punishment, patriotism, and the Iraq War. Thus, the degree to which individuals are physiologically responsive to threat appears to indicate the degree to which they advocate policies that protect the existing social structure from both external (outgroup) and internal (norm-violator) threats.
Bryan reminded me that he pointed out this essay arguing that “authoritarian personalities” look more like “old-fashioned personalities”, a fact which emphasizes just how much opinion has moved in a less conservative direction over time.
Posted on 2010-10-07
The two biggest events of the last million years, by far, are the transition from foraging to farming and then from farming to industry. Since industry began, humans have changed in many ways, some of which are puzzling, since there hasn’t been time for much genetic selection, and only limited time for cultural selection. Especially puzzling are big changes in our basic attitudes, and big variations in such attitudes between people and nations.
The ten thousand years since the farming transition, however, offers more time for genetic and cultural adaptation. Yet ten thousand years is also short enough that we should expect much less than full adaptation. Some people and places should retain vestiges of forager ways, and variations in these vestiges should be important.
So it seems natural to try to explain key variations and changes in attitudes today as vestiges of the transition from foragers to farmers colliding with the vast increase in individual wealth that is the main effect of industry. On Monday I described how foragers vs. farmers seems to do a decent job of capturing the rich-poor axis in the World Values Survey, which is related to today’s liberal/modern vs. conservative/traditional political axis. I suggested that the social pressures which encouraged farming behaviors were naturally stronger for the poor, predicting that people retreat to forager ways with increasing industry wealth. The rest of the week I explored two theories of why such social pressures reduce with wealth.

Today I want to consider what this theory implies about our future. First, it implies that if we continue to get richer, we should continue to see attitude changes in roughly the same directions. We should expect continued movement toward accepting school and workplace domination and ranking, and whatever other attitudes greatly enable industry to create wealth. And regarding how we spend our increased wealth, we should expect a continued shift from farmer to forager style attitudes for a while. For example, we should expect less war and physical cruelty to humans and animals, and more forager-like sexual promiscuity and respect for the environment. This should make us feel more happy, relaxed, and natural. In the extreme, we might even end up (for a time) as foragers in bands wandering virtual robot-supported forests, absent predators, famines, or pandemics.
Yet in the long run, if our interactions remain competitive, we shouldn’t expect forager behavior to be anything like the most adaptive for our descendants’ future worlds. Neither should farming of course, but one might still wonder which offers the best basis for generating adaptation to those future worlds. And on that criterion, the farming style seems more promising. It’s not so much that farming ways adapted to a larger social world, more like the large social worlds we expect for our descendants. It’s more that farming adapted at all – farming found ways to push foragers, whose ways had been changing very slowly by farming standards, rather quickly into doing quite unnatural things. So farming meta-innovations, like religion, honor, politeness, etc., might well be usefully repurposed to get our descendants to adapt to even stranger future environments.
For example, ems, or whole brain emulations, are my best guess for the next big transition on the order of the farming and industry transitions. Farmer-style stoicism, self-sacrifice, and self-control, detached as needed from farmer specifics like love of land or sexual monogamy, might well be more effective at creating acceptance of em-efficient lifestyles. Religious ems might, for example, better accept being deleted when new more efficient versions of themselves are introduced. “Onward Christian robots” might be the new sensibility. And ems’ low incomes might help farmer-style fear-based norm-enforcement to gain traction. Perhaps you hope that an industry-refashioned forager style might adapt just as well to these new requirements. But wishing won’t make it so.
Posted on 2010-10-04
I’m about to describe two types of people, A vs. B. While reading their descriptions I want you to think about which people around you are more like type A or B. Also ask yourself: which type do you respect more? Which would you rather be?
TYPE A folks eat a healthier more varied diet, and get better exercise. They more love nature, travel, and exploration, and they move more often to new communities. They work fewer hours, and have more complex mentally-challenging jobs. They talk more openly about sex, are more sexually promiscuous, and more accepting of divorce, abortion, homosexuality, and pre-marital and extra-marital sex. They have fewer kids, who they are more reluctant to discipline or constrain. They more emphasize their love for kids, and teach kids to more value generosity, trust, and honesty.
Type A folks care less for land or material possessions, relative to people. They spend more time on leisure, music, dance, story-telling and the arts. They are less comfortable with war, domination, bragging, or money and material inequalities, and they push more for sharing and redistribution. They more want lots of discussion of group decisions, with everyone having an equal voice and free to speak their mind. They deal with conflicts more personally and informally, and more prefer unhappy folk to be free to leave. Their leaders lead more by consensus.
TYPE B folks travel less, and move less often from where they grew up. They are more polite and care more for cleanliness and order. They have more self-sacrifice and self-control, which makes them more stressed and suicidal. They work harder and longer at more tedious and less healthy jobs, and are more faithful to their spouses and their communities. They make better warriors, and expect and prepare more for disasters like war, famine, and disease. They have a stronger sense of honor and shame, and enforce more social rules, which let them depend more on folks they know less. When considering rule violators, they look more at specific rules, and less at the entire person and what feels right. Fewer topics are open for discussion or negotiation.
Type B folks believe more in good and evil, and in powerful gods who enforce social norms. They envy less, and better accept human authorities and hierarchy, including hereditary elites at the top (who act more type A), women and kids lower down, and human and animal slaves at the bottom. They identify more with strangers who share their ethnicity or culture, and more fear others. They are less bothered by violence in war, and toward foreigners, kids, slaves, and animals. They more think people should learn their place and stay there. Nature’s place is to be ruled and changed by humans.
Types A and B map reasonably well onto today’s culture wars, with A the modern/liberal and B the traditional/conservative. They also map well to the rich-poor axis from the World Values Survey. But in fact, type A vs. B are actually foragers vs. farmers. [The above summarizes many books and articles I’ve read over the last year.] Which is my point: I think a lot of today’s political disputes come down to a conflict between farmer and forager ways, with forager ways slowly and steadily winning out since the industrial revolution. It seems we acted like farmers when farming required that, but when richer we feel we can afford to revert to more natural-feeling forager ways. The main exceptions, like school and workplace domination and ranking, are required to generate industry-level wealth. We live a farmer lifestyle when poor, but prefer to buy a forager lifestyle when rich. Why this should be will be the subject of my next few posts.
Posted on 2017-08-31
Seven years ago, after a year of reading up on forager lives, I first started to explore a forager vs. farmer axis:
A lot of today’s political disputes come down to a conflict between farmer and forager ways, with forager ways slowly and steadily winning out since the industrial revolution. It seems we acted like farmers when farming required that, but when richer we feel we can afford to revert to more natural-feeling forager ways. The main exceptions, like school and workplace domination and ranking, are required to generate industry-level wealth. ([more](two-types-of-people))
Recently I decided to revisit the idea, to see if I could find a clearer story that accounts better for many related patterns. Here is what I’ve come up with.
Our primate ancestors lived in a complex Machiavellian social world, with many nested levels of allies each coordinating to oppose outside rival groups of allies, often via violence. Humans, however, managed to collapse most of those levels into one: what [Boehm](hail-christopher-boehm) has called a “reverse dominance hierarchy.” Human bands were mostly on good terms with neighboring bands, who they met infrequently. Inside each band, the whole group used weapons and language to coordinate to enforce shared social norms, to create a peaceful egalitarian safe space.
Individuals who saw a norm violation could tell others, and then the whole band could discuss what to do about it. Once a consensus formed, the band could use weapons to enforce their collective decision. As needed, punishments could escalate from scolding to shunning to exile to death. Common norms included requirements to share food and protection, and bans on violence, giving orders, bragging, and creating subgroup factions.
This worked often, but not always. People retained general Machiavellian social abilities, and usually used them covertly, just out of view of group norm enforcement. But sometimes the power of the collective waned, and then many would switch to acting more overtly Machiavellian. For example, an individual or a pair of allies might become so powerful that they could openly defy the group’s disapproval. Or such a pair might violate norms semi-privately, and use a threat of strong retaliation to dissuade others from openly decrying their violations. Or a nearby rival group might threaten to attack. Or a famine or flood might threaten mass mortality.
In the absence of such threats, the talky collective was the main arena that mattered. Everyone worked hard to look good by the far-view idealistic and empathy-based norms usually favored in collective views. They behaved well when observed, learned to talk persuasively to the group, and made sure to have friends to watch and talk for them. They expressed their emotions, and acted like they cared about others.
When they felt on good terms with the group, people could relax and [feel](specific-vs-general-foragers-farmers) safe. They then became more playful, and [acted](play-will-persist) [like](play-blindness) animals generally do when playful. Within a bounded safe space, behavior becomes more varied, stylized, artistic, humorous, teasing, self-indulgent, and emotionally expressive. For example, there is more, and more varied, music and dance. New possibilities are explored.
A feeling of safety includes feeling safe to form more distinct subgroups, without others seeing such subgroups as threatening factions. And that includes feeling safe to form groups that tend to argue together for similar positions within talky collective discussions, and to disagree with the larger group. After all, it is hard for a talky collective to function well unless members are allowed to openly disagree with one another.
But when the group was stressed and threatened by dominators, outsiders, or famine, the collective view mattered less, and people reverted to more general Machiavellian social strategies. Then it mattered more who had what physical resources and strength, and what personal allies. People leaned toward projecting toughness instead of empathy. And they demanded stronger signals of loyalty, such as conformity, and were more willing to suspect people of disloyalty. Subgroups and non-conformity became more suspect, including subgroups that consistently argued together for unpopular positions.
And here is the key idea: individuals vary in the thresholds they use to switch between focusing on dealing with issues via an all-encompassing norm-enforcing talky collective, or via general Machiavellian social skills, mediated by personal resources and allies. Everyone tends to switch together to a collective focus as the environment becomes richer and safer. (This is one of the [many](key-disputed-values) [ways](more-2d-values) that behaviors and values consistently change with wealth.) But some switch sooner: those better at working the collective, such as being better at talking and empathy, and those who gain more from collective choices, such as physically weaker folks who can’t hunt or gather as well. And also people just generally less prone to feeling afraid as a result of ambiguous cues.
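To make this threshold idea concrete, here is a toy sketch (my own illustration; the persona names, threshold values, and safety scale are all invented, not anything measured): each person switches to a collective focus once perceived environmental safety exceeds their personal threshold, so as safety rises, people flip one by one rather than all at once.

```python
def mode(safety, threshold):
    """Which strategy a person with this threshold uses at a given
    perceived safety level (both on an arbitrary 0-1 scale)."""
    return "collective" if safety >= threshold else "machiavellian"

# Lower thresholds switch to collective focus sooner: e.g. good
# talkers and empathizers, or those who gain more from collective
# choices. Higher thresholds switch later: e.g. those more prone
# to fear from ambiguous cues. (Illustrative numbers only.)
thresholds = {"talker": 0.3, "average": 0.5, "fearful": 0.8}

for safety in (0.2, 0.6, 0.9):
    modes = {name: mode(safety, t) for name, t in thresholds.items()}
    print(safety, modes)
```

At low safety everyone acts Machiavellian; at high safety everyone acts collectively; in between, who has switched depends on their personal threshold, which is the "left vs. right" parameter of the post.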
People who feel less safe are more afraid of changing whatever has worked in the past, and so hold on more tightly to typical past behaviors and practices. They are more worried about the group damaging the talky collective, via tolerating free riders, allowing more distinct subgroups, and by demanding too much from members who might just up and leave. Also, those who feel less able to influence communal discussions prefer group norms to be enforced more simply and mechanically, without as many exceptions that will be more influenced by those who are good at talking.
I argue that this key “left vs. right” inclination to focus more vs. less on a talky collective is the main parameter that consistently determines who people tend to ally with in large scale political coalitions. Other parameters can matter a lot in different times and places, but this is the one that consistently matters. This parameter doesn’t matter much for how individuals relate to each other personally; at smaller social scales like clubs or firms, coalitions form more via our general Machiavellian abilities, based on parameters that matter directly in those contexts. But everyone has an intuitive sense for how much we all expect and want big issues to be handled by a talky collective of “everyone” with any power. The first and primary political question is how much to try to resolve issues via a big talky collective, or to let smaller groups decide for themselves.
This account that I’ve just outlined does reasonably well at accounting for many known left-right patterns. For example, the right is more conscientious, while the left is more open to experience. The left prefers more varied niche types of sports, movies, and music, while the right [prefers](media-genre-more-basic-than-politics-or-personality) fewer standardized types. Artists, musicians, and comedians tend to be on the left. Right sports focus more on physical strength and combat, stronger men have stronger political opinions, and when low status they favor more redistribution. People on the right are less reflective, prefer simpler arguments, are more sensitive to disgust, and startle more easily.
Education elites are more left than business elites. In romance and spirituality, the left tends to favor authentic feelings while the right cares more about standards of behavior. The left is more spiritual while the right is more religious. Left [jobs](conservative-vs-liberal-jobs) focus more on talking and on a high tail of great outcomes, while right jobs focus more on avoiding a low tail of bad outcomes.
The left is more okay with people forming distinct subgroups, even as it thinks more in terms of treating everyone equally, even across very wide scopes, and including wide scopes in more divisive debates. The right wants to make redistribution more conditional, more wants to punish free riders, and wants norm violators to be more consistently punished. The left tends to presume large scale cooperation is feasible, while the right tends to presume competition more. The left hopes for big gains from change while the right worries about change damaging things that now work.
Views tend to drift leftward as nations and the world get richer. Left versus right isn’t very useful for predicting individual behavior outside of politics, even as it is the main parameter that robustly determines large scale political inclinations. People tend to think differently about politics on what they see as the largest scales; for example, there are whole separate fields of political science and political philosophy, which don’t overlap much with fields dealing with smaller scale politics, such as in clubs and firms.
I shouldn’t need to say it but I will anyway: it is obvious that a safe playful talky collective is sometimes but not always the best way to deal with things. Its value varies with context. So sometimes those who are more reluctant to invoke it are right to be wary, while at other times those who are eager to apply it are right to push for it. It is not obvious, at least to me, whether on average the instincts of the left or the right are more helpful.
I’ve [noted before](specific-vs-general-foragers-farmers) that if one frames left attitudes as better when the world is safe, and right attitudes as better when the world is harsh, then the longer the timescale on which you evaluate outcomes, the harsher is the world.
Added 9Sept: This post didn’t say much directly about farmers. In the much larger farmer social groups, simple one-layer talky collectives were much less feasible. Farmer lives had new dangers of war and disease, and neighboring groups were more threatening. The farmer world more supported property in spouses and material goods and had more social hierarchies, farmer law relied less on a general discussion of each accused, and more reliable food meant there was less call for redistribution. Farmers worked more and had less time for play. Together, these tended to reduce the scope of safe playful talky collectives, moving society in a rightward direction relative to foragers.
## [Rome As Semi-Foragers](#table-of-contents)
_Posted on 2010-12-28_
It seems that an “almost” industrial revolution happened around 500BC. For example, this graph of estimated world population shows a population jump then similar to the start of the ~1800 jump. Also, consider this brief history of the Roman Empire:
~5 century BC: Roman civilization is a strong patriarchy, fathers … have absolute authority over the family.
~1 century BC: … Material wealth is astounding, … Romans enjoy the arts … democracy, commerce, science, human rights, animal rights, children rights and women become emancipated. No-fault divorce is enacted, and quickly becomes popular by the end of the century.
~1-2 century AD: … Men refuse to marry and the government tries to revive marriage with a “bachelor tax”, to no avail. … Roman women show little interest in raising their own children and frequently use nannies. The wealth and power of women grows very fast, while men become increasingly demotivated and engage in prostitution and vice. Prostitution and homosexuality become widespread.
~3-4 century AD: … Roman population declines due to below-replacement birth-rate. Vice and massive corruption are rampant. (more; HT Roissy)
Yes this exaggerates, but the key point remains: a sudden burst in productivity and wealth led to big cultural changes that made the Greek-Roman world and its cultural descendants more forager-like than the rest of the farmer world. These changes helped clear the way for big cultural changes of the industrial revolution.
These cultural changes included not only more political egalitarianism, but also more forager-like attitudes toward alcohol and mating:
Historically, we find a correlation between the shift from polygyny to monogamy and the growth of alcohol consumption. Cross-culturally we also find that monogamous societies consume more alcohol than polygynous societies in the preindustrial world. … Studies find a positive relationship between alcohol use on the one hand and a more promiscuous and high-risk sexual behavior on the other hand. … The Greek and Roman empires … were the only (and first) to introduce formal monogamy. … Hunting tribes drink more than agricultural and settled tribes. … Hunting tribes … have more monogamous marriage arrangements than agricultural tribes. …
The emergence of socially imposed formal monogamy in Greece coincides with (a) the growth of “chattel slavery” (where men can have sex with female slaves) and (b) the extension of political rights. … The industrial revolution played a key role in the shift from formal to effective monogamy and in the sharp increase of alcohol consumption (more; HT Tyler)
This roughly fits my simple story: forager to farmer and back to forager with industry. The key is to see monogamous marriage as an intermediate form between low-commitment feeling-based forager mating, and wives-as-property-for-life farmer polygamy. Let me explain.
Forager work and mating are more intuitive, less institutional. Mates stay together mainly because they feel like it; there is more an open competition to seduce mates, and there’s a lot of sneaking around. Foragers drink alcohol when they can, and spontaneous feelings count for more relative to formal commitments. The attitude is more that if you can’t hold her interest, you don’t deserve to keep her. Men show off abilities to obtain resources mainly to signal attractive qualities; most resources acquired must be shared with the rest of the band.
Farmers, in contrast, don’t share much, and are far more unequal in the resources they control, by which they can more directly “buy” wives. Farmer wives so bought are supposed to be committed to their husbands even when they don’t feel like it. Marriage was less about mutual attraction and more about building households and clans. Husbands worry about cheating wives, and so try to limit access and temptations, which includes alcohol. Musicians and artists are also suspect if they excite wives’ passions, which might lead to cheating.
When empires like Greece and Rome achieved sustained periods of prosperity, their elites reverted to more forager-like ways. They had more drinking and art, more egalitarian politics, fertility fell, and [non-slave] mating became more egalitarian and about feelings. If a bit of alcohol was enough to get your wife to cheat on you, well maybe you didn’t deserve her. The Greek-Roman move from polygamy to monogamy was a move in the direction of more forager-like feeling-based mating, though it retained farmer-like lifelong commitment.
The Greeks and Romans became models for Europe when industry made it rich again. In our era, fertility has fallen far, divorce and out-of-wedlock births are common, and alcohol, drugs, and sneaking about are more tolerated. Women need men less for their resources, and choose them more on other grounds. Dropping the lifelong commitment element of marriage, and often the expectation of any sort of marriage commitment, we have moved even further away from farmer wives-as-lifelong-property and toward forager “promiscuity.”
Added: Razib Khan and Jason elaborate.
Added 1Feb: A new study says that in places where marriages are more arranged by parents, there is more mate-guarding. Discouraging alcohol seems a reasonable mate-guarding strategy.
## [Self-Control Is Slavery](#table-of-contents)
_Posted on 2010-06-05_
I’ve been pondering 3 related points. 1) [Self-Control Is Culture-Control](self-control-is-culture-control):
It seems to me that … the key change after farming [was] an increased sensitivity to culture, so that social sanctions became better able to push behavior contrary to other inclinations. … This increased sensitivity to the carrots and sticks of culture generally appears to us as greater “self-control”, i.e., as our better resisting immediate inclinations for other purposes. And since we have more self-control in far mode, I suspect an important component of change since farming has been greater inclinations toward and abilities in far mode.
2) Fogel & Engerman’s economic classic analysis of US slavery:
Plantation agriculture based upon slave labor … may have been significantly more efficient than family farming. … The typical slave field-hand may have been more productive than a free, white field-hand. … Slavery was not incompatible with industrial production. … Slave-labor farms were 28 percent more productive than southern free-labor farms and 40 percent more productive than northern free-labor farms. …
Plantation operators strove for a disciplined, specialized and coordinated labor force. Labor was organized into something like the assembly line operations in industry. This involved “driving” the slaves’ efforts to maintain a pace of production. The “drivers” or foremen were slaves themselves. …
Plantations had a much higher rate of labor force participation, two thirds, as compared with a free population, one third. This was achieved by finding productive pursuits for the young and the elderly and maintaining nurseries so that slave women could work.
3) The latest AER on designing work to aid self-control:
The Industrial Revolution involved workers moving from agriculture to manufacturing; from working on their own to working with others in factories; and from flexible work-hours to rigid work-days. … Some work-place arrangements may make self-control problems more severe, while others may ameliorate them. … The firm … can use regular compensation to … make the returns to effort more immediate. Firms can also create disproportionate penalties for certain types of low efforts … so as to create sharp self-control incentives. … Conforming to an externally set pace, however, can decrease these self-control costs. … Workers planting rice-fields often find it helpful to synchronize movements to music or to beats. In industrial production, the assembly line may serve a similar purpose. … An intrinsic competitive drive may make the momentary self exert more effort when surrounded by hard-working coworkers. Young boys run races faster when running alongside another boy than when running alone. …
[Farming] creates difficult self-control problems. First, it involves long time horizons — farmers must tend their land constantly for months before reaping benefits at harvest. These lags can generate suboptimal effort in early stages of production. Financially, farmers may also fail to save enough money out of lumpy harvest payments to make efficient investments during the production cycle, further affecting labor supply returns and output. Second, agriculture often involves self-employment or very small firms. As a result, there are rarely firms or large employers to mitigate the self-control problem. Tasks cannot be structured, compensation altered, or work intensity regulated. Finally, agrarian production by nature is also geographically dispersed, which makes colocation of workers difficult. … This can help explain the observation that work hours appear to be low in modern-day subsistence agriculture. …
In the workshop system, workers rented floor space or machinery in factories, received pure piece rates for output … Clark presents evidence that workers under the workshop system had very unsteady attendance and hours, spent a lot of time socializing at work, and concentrated effort in the latter half of the week leading up to paydays. Clark argues that this led firms to transition to the factory discipline system to solve self-control problems.
OK, now let’s put it all together. Apparently, factory-like methods that greatly increase farming productivity have long been feasible. (First known factory: Venice Arsenal, 1104.) Yet it took slaves to actually implement such methods in farming. Even after ten thousand years of Malthusian competition, a farming method that could support a much larger population per land area did not displace other methods. (And if factory-fortified foraging was possible, the timescale problem gets much worse.)
The introduction of farming was associated with important new elements, like religion, that encouraged more “self-control,” i.e. sensitivity to social norms. However, those additions were not sufficient to achieve factory-like farming — most humans had too little self-control to make themselves behave that way, and too strong an anti-dominance norm to let rulers enforce such behavior.
This dramatically illustrates the huge self-control innovations that came with industry. [School](school-is-far), propaganda, mass media, and who knows what else have greatly changed human nature, enabling a system of industrial submission and control that proud farmers and foragers simply would not tolerate – they would (and did) starve first. In contrast, industry workers had enough self/culture-control to act as only slaves would before – working long hours in harsh alien environments, and showing up on time and doing what they were told.
So what made industry workers so much more willing to increase their self-control, relative to farmers? One guess: the productivity gains from worker self-control were far larger in industry than in farming. Instead of a 50% gain, it might have been a factor of two or more. Self-controlled workers and societies gained a big enough productivity advantage to compensate for lost pride.
Humans are an increasingly self-domesticated species. Foragers could cooperate in non-kin groups of unprecedented size, farmers could enforce norms to induce many behaviors unnatural for foragers, and the schooled humans of industry would willingly obey like enslaved farmers. Our descendants may evolve even stronger self/culture-control of behavior.
## [School Is To Submit](#table-of-contents)
_Posted on 2016-04-06_
Most animals in the world can’t be usefully domesticated. This isn’t because we can’t eat their meat, or feed them the food they need. It is because all animals naturally resist being dominated. Only rare social species can let a human sit in the role of dominant pack animal whom they will obey, and only if humans do it just right.
Most nations today would be richer if they had long ago just submitted wholesale to a rich nation, allowing that rich nation to change their laws, customs, etc., and just do everything their way. But this idea greatly offends national and cultural pride. So nations stay poor.
When firms and managers from rich places try to transplant rich practices to poor places, giving poor place workers exactly the same equipment, materials, procedures, etc., one of the main things that goes wrong is that poor place workers just refuse to do what they are told. They won’t show up for work reliably on time, have many problematic superstitions, hate direct orders, won’t accept tasks and roles that deviate from their non-work relative status with co-workers, and won’t accept being told to do tasks differently than they had done them before, especially when new ways seem harder. Related complaints are often made about the poorest workers in rich societies; they just won’t consistently do what they are told. It seems pride is a big barrier to material wealth.
The farming mode required humans to swallow many changes that didn’t feel nice or natural to foragers. While foragers are fiercely egalitarian, farmers are dominated by kings and generals, and have unequal property and classes. Farmers work more hours at less mentally challenging tasks, and get less variety via travel. Huge new cultural pressures, such as religions with moralizing gods, were needed to turn foragers into farmers.
But at work farmers are mostly autonomous and treated as the equal of workers around them. They may resent having to work, but adults are mostly trusted to do their job as they choose, since job practices are standardized and don’t change much over time. In contrast, productive industrial era workers must accept more local domination and inequality than would most farmers. Industry workers have bosses more in their face giving them specific instructions, telling them what they did wrong, and ranking them explicitly relative to their previous performance and to other nearby workers. They face more ambiguity and uncertainty about what they are supposed to do and how.
How did the industrial era get at least some workers to accept more domination, inequality, and ambiguity, and why hasn’t that worked equally well everywhere? A simple answer I want to explore in this post is: prestigious schools.
While human foragers are especially averse to even a hint of domination, they are also especially eager to take “orders” via copying the practices of [prestigious](two-kinds-of-status) folks. Humans have a uniquely powerful capacity for [cultural evolution](how-plastic-are-values) exactly because we are especially eager and able to copy what prestigious people do. So if humans hate industrial workplace practices when they see them as bosses dominating, but love to copy the practices of prestigious folks, an obvious solution is to habituate kids into modern workplace practices in contexts that look more like the latter than the former.
In his upcoming book, The Case Against Education, my colleague Bryan Caplan argues that school today, especially at the upper levels, functions mostly to help students signal intelligence, conscientiousness, and conformity to modern workplace practices. He says we’d be better off if kids did this via early jobs, but sees us as having fallen into an unfortunate equilibrium wherein individuals who try that seem non-conformist. I agree with Bryan that, compared with the theory that older students mostly go to school to learn useful skills, signaling better explains the low usefulness of school subjects, low transfer to other tasks, low retention of what is taught, low interest in learning relative to credentials, big last-year-of-school gains, and student preferences for cancelled classes.
My main problem with Caplan’s story so far (he still has time to change his book) is the fact that centuries ago most young people did signal their abilities via jobs, and the school signaling system has slowly displaced that job signaling system. Pressures to conform to existing practices can’t explain this displacement of a previous practice by a new practice. So why did signaling via school win out over signaling via early jobs?
Like early jobs, school can have people practice habits that will be useful in jobs, such as showing up on time, doing what you are told even when that is different from what you did before, figuring out ambiguous instructions, and accepting being frequently and publicly ranked relative to similar people. But while early jobs threaten to trip the triggers that make most animals run from domination, schools try to frame a similar habit practice in more acceptable terms, as more like copying prestigious people.
Forager children aren’t told what to do; they just wander around and do what they like. But they get bored and want to be respected like adults, so eventually they follow some adults around and ask to be shown how to do things. In this process they sometimes have to take orders, but only until they are no longer novices. They don’t have a single random boss they don’t respect, but can instead be trained by many adults, can select them to be the most prestigious adults around, and can stop training with each when they like.
Schools work best when they set up an apparently similar process wherein students practice modern workplace habits. Start with prestigious teachers, like the researchers who also teach at leading universities. Have students take several classes at a time, so they have no single “boss” who personally benefits from their following his or her orders. Make class attendance optional, and let students pick their classes. Have teachers continually give students complex assignments with new ambiguous instructions, using the excuse of helping students to learn new things. Have lots of students per teacher, to lower costs, to create excuses for having students arrive and turn in assignments on time, and to create social proof that other students accept all of this. Frequently and publicly rank student performance, using the excuse of helping students to learn and decide which classes and jobs to take later. And continue the whole process well into adulthood, so that these habits become deeply ingrained.
When students finally switch from school to work, most will find work to be similar enough to transition smoothly. This is especially true for desk professional jobs, and when bosses avoid giving direct explicit orders. Yes, workers now have one main boss, and can’t as often pick new classes/jobs. But they won’t be publicly ranked and corrected nearly as often as in school, even though such things will happen far more often than their ancestors would have tolerated. And if their job ends up giving them prestige, their prior “submission” to prestigious teachers will seem more appropriate.
This point of view can help explain how schools could help workers to accept habits of modern workplaces, and thus how there could have been selection for societies that substituted schools for early jobs or other child activities. It can also help explain unequal gains from school; some kinds of schools should be less effective than others. For example, teachers might not be prestigious, teachers may fail to show up on time to teach, teacher evaluations might correlate poorly with student performance, students might not have much choice of classes, school tasks might diverge too far from work tasks, students may not get prestigious jobs, or the whole process might continue too long into adulthood, long after the key habituation has been achieved.
In sum, while students today may mostly use schools to signal smarts, drive, and conformity, we need something else to explain how school displaced early work in this signaling role. One plausible story is that schools habituate students in modern workplace habits while on the surface looking more like prestigious forager teachers than like the dominating bosses that all animals are primed to resist. But this hardly implies that everything today that calls itself a school is equally effective at producing this benefit.
## [Why Grievances Grow](#table-of-contents)
_Posted on 2019-03-09_
> We have come to call these fields “grievance studies” in shorthand because of their common goal of problematizing aspects of culture in minute detail in order to attempt diagnoses of power imbalances and oppression rooted in identity. (more)
> A full 80% [of US] believe that “political correctness is a problem in our country.” … The woke are in a clear minority across all ages. … Progressive activists are the only group that strongly backs political correctness: Only 30% see it as a problem. … Compared with the rest of the [nation], progressive activists are much more likely to be rich, highly educated—and white. … What people mean by “political correctness.” … [is] their day-to-day ability to express themselves: They worry that a lack of familiarity with a topic, or an unthinking word choice, could lead to serious social sanctions for them. (more)
> While the American legal system favors the state over the individual in property takings, for example in contrast with the Japanese system, the political system favors NIMBYs and really anyone who complains. Infrastructure construction takes a long time and the politician who gets credit for it is rarely the one who started it, whereas complaints happen early. This can lead to many of the above-named problems [with transit construction], especially overbuilding, such as tunneling where elevated segments would be fine or letting agency turf battles and irrelevant demands dictate project scope. (more)
> Chronic Complainers: These folks live in a constant state of complaint. If they’re not voicing about their “woe is me” attitude, they’re probably thinking about it. Psychologists term this compulsory behavior rumination, defined as “repetitively going over a thought or a problem without completion.” Rumination is, unfortunately, directly relayed to the depressed and anxious brain. (more)
> Customers with high status tended to register more service failures and to complain more frequently than customers of lower social status. All three social status distinctions explored in this study (gender, education, and age) correlated negatively with formal complaint, but only age correlated negatively with informal complaint. … Two cultural dimensions [power distance and uncertainty avoidance] had the expected negative effect on intention to complain, and moderated the relationship between social status and intention to complain. (more)
> Learning someone is prone to complain more often than others can change your opinion of them. And this effect may be different for low vs. high status people. Do you think more or less of complainers who are high vs. low status? — Robin Hanson (@robinhanson) March 9, 2019

My [favorite](two-types-of-people) one-factor [theory](forager-v-farmer-elaborated) of social attitude (and value) change over the last few centuries is that increasing wealth has induced a drift from farmer back to forager attitudes (and values). (A theory I also outline in Age of Em.) Which plausibly helps explain changing attitudes toward fertility, gender, slavery, crime, democracy, war, leisure, art, and travel. In this post I want to suggest a (to me) new hypothesis about forager attitudes, which could help explain some recent attitude trends.

Foragers are fiercely egalitarian. They share many kinds of food and other resources, and enforce a norm of quickly and aggressively squashing any signs of attempts to use or threaten to use force, or any inclinations to do so. In fact, this is [probably](hail-christopher-boehm) the uber-norm that drove the evolution of norms in the first place. Bragging about your physical strength is a no-no, as that can be interpreted as an implicit threat to use that strength. Even bragging about your intelligence or other resources is discouraged, as those might also be seen as threats, or as attempts to form coalitions that might threaten. Forager group decisions are to be made by consensus, after everyone has had a chance to weigh in.

Now consider forager attitudes about complaining. When someone more dominant makes a complaint to someone less dominant, that can often be interpreted as a threat to use power if the complaint isn’t fixed. Which is a big forager no-no. But when a less dominant person complains to a more dominant person, it is harder to see that as a threat to use power.
So complaints down are discouraged more than complaints up, just as punching down is more of a no-no than punching up. And we’ll tend to interpret complaints as pro-down positions. A complaint that is made to third parties fits the standard norm-enforcement pattern, a pattern of which foragers greatly approve. Thus having A complain to B about how a more dominant person C is treating a less dominant person D badly should generally meet with approval. This is A helping out with norm enforcement, and can be seen as “speaking truth to power.” If A is a high [prestige](dominance-hides-in-prestige-clothing) person, and B is a wise and moral audience, this pattern should be especially approved. After all, we naturally believe prestigious people more than others. And if a complaint leads to action of which we later approve, that can increase the prestige of the complainers.

Yes, people who complain a lot tend to seem unhealthy, and we tend to think less of frequent complainers. Even so, foragers likely had a big soft spot in their hearts for prestigious people who complain to the whole group that some low dominance people are being treated badly by high dominance people. Those complaints, foragers respected.

In our society today, we tend to frame big firms, governments, rich folks, and larger demographic groups as more dominant actors. So when a local neighborhood group complains about a government plan for a transit construction project, we tend to see that as a low dominance actor complaining about a high dominance actor, and habitually sympathize. And to the extent that we have forager-like attitudes about such situations, this increases the political negotiating power of such complainers, inducing governments to give in to them, and raising the costs of transit construction projects. Similar processes likely increase the power of neighborhood groups who demand rent, zoning, and private construction restrictions, resulting in fewer new buildings and less housing.
Forager-like attitudes similarly prime us to favor ordinary consumers or employees who complain about big firms, and this encourages regulations focused mostly on consumer and employee welfare, relative to the welfare of investors, who are framed as rich and thus dominators. Even rich high status people feel comfortable complaining about how big firms treat them, and in fact they feel more comfortable than low status folks. Their higher prestige can make them feel like respected moral crusaders for all.

As larger race/ethnicities are framed as dominators relative to smaller ones, forager-like attitudes prime us to sympathize with complaints that the former mistreat the latter. Similarly for complaints on how the larger groups who have more standard gender and sexual preferences treat the smaller groups who have more deviant genders and sexual preferences. Men’s higher physical strength and participation in war, and higher percentage among top positions at most organizations, have long induced us to frame men as more dominant relative to women. Thus when we have more forager-like attitudes, we naturally sympathize when high prestige people complain that these more dominant groups are mistreating the less dominant groups. And in fact people with the potential for high prestige can seek to cement and increase their prestige via such complaints. Which is plausibly why it is high prestige folks who participate most in “grievance studies” type complaining.

Forager-like attitudes should make us sympathize with most any complaint about how rich people treat less rich people. Including how they conspire to mess up markets, political systems, or legal systems. Also, when criminals are committing crimes, they can seem like illicit dominators relative to ordinary citizens. But police, courts, and prisons can seem like dominators relative to criminals, thus inducing us to sympathize with complaints that criminals are being treated too harshly by the legal system.
Perhaps explaining why prestigious folks [seem to](why-weakly-enforced-rules) consistently push for weaker criminal punishments.

My wealth-induces-farmer-to-forager-attitudes story says that this complaint-sympathizing effect has been slowly getting stronger as we’ve been getting richer and more forager-like. It is strongest in the richest nation, which is currently the US, and it will continue to get stronger world-wide as the world gets richer. And these grievances accumulate when we do not [use law](consider-reparations) to try to settle them.

And that’s my story. Hyper-egalitarian foragers were especially sympathetic to complaints by prestigious folks that high-dominance folks were mistreating less-dominant others, and with increasing wealth we’ve been slowly increasing our embrace of this forager attitude. And so we’ve been listening more to such complainers, and giving them more political and social power, which has encouraged more high prestige folks to present themselves as such crusading complainers. Which results in a growing accumulation of such grievances. What to do about this will have to wait for another post.

Added 10Mar: The conceptual power here is that this theory is more specific than the general idea that we dislike inequality and dominance, and so work consistently to reduce them. A habit of favoring specific complaints against more dominant parties can actually increase inequality and dominance in many cases.

Added 11Mar: Martin Gurri’s book Revolt of the Public can be seen as describing a switch to a focus on popular complaints. He describes many new social movements around 2011 that focused on complaining loudly to an enthusiastic public, but which due to egalitarian ideals weren’t interested in or capable of negotiating concrete demands or working within the usual political systems.
## [The World Forager Elite](#table-of-contents)
_Posted on 2020-09-22_

My [last post](elois-ate-your-flying-car) was on Where’s My Flying Car?, which argues that changing US attitudes created a tsunami of reluctance and regulation that killed nuclear power and planes, and ate the future that could have been. This explanation, however, has a problem: if there are many dozens of nations, how can regulation in one nation kill a tech? Why would regulatory choices be so strongly correlated across nations? If nations compete, won’t one nation forgoing a tech advantage make others all the more eager to try it?

Now as nuclear power tech is close to nuclear weapon tech, maybe major powers exerted strong pressures re how others pursued nuclear power. Also, those techs are complex and require large scales, limiting how many nations could feasibly do them differently. But we also see high global correlation for many other kinds of regulation. For example, as Hazlett [explains](hazletts-political-spectrum), the US started out with a reasonable property approach to spectrum, but then Hoover broke that on purpose, to create a problem he could solve via nationalization, thereby gaining political power that helped him become U.S. president. Pretty much all other nations then copied this bad US approach, instead of the better prior property approach, and kept doing so for many decades. The world has mostly copied bad US approaches to over-regulating planes as well.

We also see regulatory convergence in topics like human cloning; many had speculated that China would defy the consensus elsewhere against it, but that turned out not to be true. Public prediction markets on interesting topics seem to be blocked by regulations almost everywhere, and insider trading laws are most everywhere an obstacle to internal corporate markets.

Back in February we saw a dramatic example of world regulatory coordination.
Around the world, public health authorities were talking about treating this virus like they had treated all the others in the last few decades. But then world elites talked a lot, and suddenly they all agreed that this virus must be treated differently, such as with lockdowns and masks. Most public health authorities quickly caved, and then most of the world adopted the same policies. Contrarian alternatives like [variolation](variolation-may-cut-covid19-deaths-3-30x), challenge trials, and cheap fast lower-reliability tests have also been rejected everywhere; small experiments have not even been allowed.

One possible explanation for all this convergence is that regulators are just following what is obviously the best policy. But if you dig into the details you will quickly see that the usual policies are not at all obviously right. Often, they seem obviously wrong. And having all the regulatory bodies suddenly change at once, even when no new strong evidence appeared, seems especially telling.

It seems to me that we instead have a strong world culture of regulators, driven by a stronger world culture of elites. Elites all over the world talk, and then form a consensus, and then authorities everywhere are pressured into following that consensus. Regulators most everywhere are quite reluctant to deviate from what most other regulators are doing; they’ll be blamed far more for failures if they deviate. If elites talk some more, and change their consensus, then authorities must then change their policies. On topic X, the usual experts on X are part of that conversation, but often elites overrule them, or choose contrarians from among them, and insist on something other than what most X experts recommend.

This looks a lot like the ancient forager system of conflict resolution within bands. Forager bands would gossip about a problem, come to a consensus about what to do, and then everyone would just do that. Because each one would lose status if they didn’t.
In this system, there were no formal rules, and on the surface everyone had an equal say, though in fact some people had a lot more prestige and thus a lot more influence. This world system also looks new – I doubt this description applied as well to the world centuries or millennia ago, even within smaller regions. So this looks like [another](forager-v-farmer-elaborated) way in which our world has become more forager-like over the last few centuries, as we’ve felt more rich and safe. Big world wars probably cut into this feeling, so there was probably a big jump in the few decades after WWII, helping to explain the big change in attitudes ~1970.

Elites like to talk about this system as if it were “democratic”, so that any faction that opposes it “undermines democracy”. And it is true that this system isn’t run by a central command structure. But it is also far from egalitarian. It embodies a huge inequality of influence, even if individuals within it claim that they are mainly driven by trying to help the world, or “the little guy”.

This system seems a big obstacle for my hopes to create better policy institutions driven by expert understanding of institutions, and to get trials to test and develop such things. Because as soon as any policy choice seems important, such as by triggering moral feelings, world elite culture feels free to gossip and then pressure authorities to adopt whatever solution their gossip prefers. Experts can only influence policy via their prestige. Very prestigious types of experts, such as in physics, can win, especially on topics about which world elites care little. But otherwise, elite gossip wins, whenever it bothers to generate an opinion. That is, the global Overton window isn’t much wider than are local Overton windows, and often excludes a lot of valuable options.

Notice that in this kind of world, policy has varied far more across time than across space.
Context and fashion change with time, and then elites sometimes change their minds. So perhaps my hopes for policy experiments must wait for the long run. Or for a fall of forager values, such as seems likely in an Age of Em. Alas neither I nor my allies have sufficient prestige to push elites to favor our proposals.

Added 11p: It seems to me that the actual degree of experimentation and variance in policy is far below optimum in this conformist sort of policy world. We are greatly failing to try out as many alternatives as fast as we should to find out what works best. And we are failing to listen enough to our best experts, and instead too often going with the opinions of well-educated but amateur world elites.

Added 4p: As John Nye reminds me, in the early years of a new tech, only a few nations in the world may be able to pursue it. They then set the initial standards of regulation. Later, more nations may be able to participate, but risk-averse regulators may feel shy about defying widely adopted initial standards.

## [The Great Cycle Rule](#table-of-contents)
_Posted on 2017-03-08_

History contains a lot of data, but when it comes to the largest scale patterns, our data is very limited. Even so, I think we’d be crazy not to notice whatever patterns we can find at those largest scales, and ponder them. Yes we can’t be very sure of them, but we surely should not ignore them.

I’ve said that history can be summarized as a sequence of roughly exponential growth modes. The three most recent modes were the growth of human foragers, then of farmers, then of industry. Roughly, foragers doubled every quarter million years, farmers every thousand years, and industry every fifteen years. (Before humans, animal brains doubled roughly every 35 million years.) I’ve previously noted that this sequence shows some striking patterns. Each transition between modes took much less than a previous doubling time.
Modes have gone through a similar number of doublings before the next mode appeared, and the factors by which growth rates increased have also been similar. In addition, the group size that typified each mode was roughly the square of that of the previous mode, from thirty for foragers to a thousand for farmers to a million for industry.

In this post I report a new pattern, about cycles. Some cycles, such as days, months, and years, are common to most animals. Other cycles, such as heartbeats lasting about a second and lifetimes taking threescore and ten, are common to humans. But there are other cycles that are distinctive of each growth mode, and are most often mentioned when discussing the history of that mode. For example, the 100K year cycle of ice ages seems the most discussed cycle regarding forager history. And the two to three century cycle of empires, such as [documented](cycles-of-war-empire) by Turchin, seems most discussed regarding the history of farmers. And during our industry era, it seems we most discuss the roughly five year business cycle.

The new pattern I recently noticed is that each of these cycles lasts roughly a quarter to a third of its mode’s doubling time. So a mode typically grows 20-30% during one period of its main cycle. I have no idea why, but it still seems a pattern worth noting, and pondering.

If a new mode were to follow these patterns, it would appear in the next century, after a transition of ten years or less, and have a doubling time of about a month, a main cycle of about a week, and a typical group size of a trillion. Yes, these are only very rough guesses. But they still seem worth pondering.
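The cycle pattern above is easy to check with quick arithmetic, using only the rough figures quoted in this post (all inputs approximate):

```python
# Cycle length as a fraction of each mode's doubling time, and the
# implied growth per cycle, using the rough figures from the text.
modes = {
    # name: (doubling time in years, main cycle in years)
    "forager (ice ages)":  (250_000, 100_000),
    "farmer (empires)":    (1_000, 250),
    "industry (business)": (15, 5),
}
for name, (doubling, cycle) in modes.items():
    frac = cycle / doubling
    growth = 2 ** frac - 1  # growth over one cycle, given steady doubling
    print(f"{name}: cycle/doubling = {frac:.2f}, grows {growth:.0%} per cycle")
```

With these rough inputs the fractions come out between a quarter and two fifths, and growth per cycle lands near the 20-30% range cited above; a next mode doubling in about a month would likewise imply a main cycle of roughly a week.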
## [The Labor-From-Factories Explosion](#table-of-contents)
_Posted on 2016-05-04_

As I’ve discussed before, including in my book, the history of humanity so far can be roughly summarized as a sequence of three exponential growth modes: foragers with culture started a few million years ago, farming started about ten thousand years ago, and industry started a few hundred years ago. Doubling times got progressively shorter: a quarter million years, then a millennium, and now fifteen years. Each time the transition lasted less than a previous doubling time, and roughly similar numbers of humans have lived during each era. Before humans, animal brains grew exponentially, but even more slowly, doubling about every thirty million years, starting about a half billion years ago. And before that, genomes [seem to](life-before-earth) have doubled exponentially about every half billion years, starting about ten billion years ago.

What if the number of doublings in the current mode, and in the mode that follows it, are comparable to the number of doublings in the last few modes? What if the sharpness of the next transition is comparable to the sharpness of the last few transitions, and what if the factor by which the doubling time changes next time is comparable to the last few factors? Given these assumptions, the next transition will happen sometime in roughly the next century. Within a period of five years, the economy will be doubling every month or faster. And that new mode will only last a year or so before something else changes.

To summarize, usually in history we see relatively steady exponential growth. But five times so far, steady growth has been disturbed by a rapid transition to a much faster rate of growth. It isn’t crazy to think that this might happen again. Plausibly, new faster exponential modes appear when a feedback loop that was previously limited and blocked becomes unlocked and strong.
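The extrapolation above can be reproduced with simple arithmetic on the doubling times just listed (all figures are the rough ones from the text):

```python
# Rough doubling times of past growth modes, in years (from the text).
doubling_years = {
    "animal brains": 30e6,
    "foragers": 250_000,
    "farmers": 1_000,
    "industry": 15,
}

# Factor by which the growth rate sped up at each transition.
names = list(doubling_years)
factors = [doubling_years[a] / doubling_years[b]
           for a, b in zip(names, names[1:])]
print("speed-up factors:", [round(f) for f in factors])

# If the next speed-up factor is comparable, the next doubling time is:
for f in (min(factors), max(factors)):
    months = doubling_years["industry"] / f * 12
    print(f"factor ~{f:.0f}: next mode doubles every {months:.1f} months")
```

With the last few factors (roughly 67x to 250x), the next doubling time comes out between about three weeks and three months, i.e. on the order of a month, matching the “doubling every month or faster” guess above.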
And so one way to think about what might cause the next faster mode after ours is to look for plausible feedback loops. However, if there are thousands of possible factors that matter for growth and progress, then there are literally millions of possible feedback loops. For example, denser cities should innovate more, and more innovation can find better ways to make buildings taller, and thus increase city density. More and better tutorial videos make it easier to learn varied skills, and some of those skills help to make more and better tutorial videos. We could go on all day making up stories like these. But as we have only ever seen maybe five of these transitions in all of history, powerful feedback loops whose unlocking causes a huge growth rate jump must be extremely rare. The vast majority of feedback loops do not create such a huge jump when unlocked. So just because you can imagine a currently locked feedback loop does not make unlocking it likely to cause the next great change. Many people lately have fixated on one particular possible feedback loop: an “intelligence explosion.” The more intelligent a creature is, the more it is able to change creatures like itself to become more intelligent. But if you mean something more specific than “[mental goodness](the-betterness-explosion)” by “intelligence”, then this remains only one of thousands of possibilities. So you need strong additional arguments to see this feedback loop as more likely than all the others. And the mere fact that you can imagine this feedback being positive is not remotely enough. It turns out that we already know of an upcoming transition of a magnitude similar to the previous transitions, scheduled to arrive roughly when prior trends led us to expect a new transition. This explosion is due to labor-from-factories. Today we can grow physical capital very fast in factories, usually doubling capital on a scale ranging from a few weeks to a few months, but we grow human workers much more slowly.
Since capital isn’t useful without more workers, we are forced to grow today mainly via innovation. But if in the future we find a way to make substitutes for almost all human workers in factories, the economy can grow much faster. This is called an AK model, and standard growth theory says it is plausible that this could let the economy double every month or so. So if it is plausible that artificial intelligence as capable as humans will appear in the next century or so, then we already know what will cause the next great jump to a faster growth mode. Unless of course some other rare powerful feedback loop is unlocked before then. But if an intelligence explosion isn’t possible until you have machines at least as smart as humans, then that scenario won’t happen until after labor-from-factories. And even then it is far from obvious that feedback can cause one of the few rare big growth rate jumps.
## [Lost Advanced Civilizations](#table-of-contents) _Posted on 2020-08-18_ Did life on Earth start on Earth, or did it start on Mars and move to Earth? If you frame such panspermia as an “extraordinary claim” for which you demand “extraordinary evidence”, you will of course conclude that this should be treated “skeptically” as unlikely and sloppy unscientific “speculation”, to be disdained and not treated as serious by respectable academics and science journalists. But that’s not really fair. You see, the early Mars environment is, a priori, about as likely a place for life to start as the Earth environment. So if the rate at which life is transferred between the planets were high enough, then equal chances of life starting first in both places would result in equal chances for Earth life to have started in either place. We should take the expected time difference between life starting in the two places, and ask how high is the chance that life would move from one planet to the next during that period.
The more often rocks are thrown from one place to the other, and the more easily life could survive for the travel period within those rocks, then the more likely it is that Earth life started on Mars. In addition, Mars, being further from the Sun, would have cooled first, and had a head start in its window for life, making it more likely that life would start there and spread to Earth than vice versa. Of course life starting first on Mars would have implications for what we might see when we look at Mars. If we had expected Mars life to continue strong until today, then the fact that we see no life on Mars now would be a big strike against this hypothesis. But if we expected Mars life to have died out or at least gone dormant by now, then the issue is what we will see when we dig on Mars. With enough data on such digs, we may come to reject the Mars-first hypothesis even given its initial plausibility. A similar analysis applies to panspermia from other stars. You might think it obvious that the rate at which life-filled rocks from a star make it to seed other stars is very low, but most stars are born in large groups close together in stellar nurseries. So if life arose early enough within our star’s nursery, there might have been high rates of moving that life between stars in that nursery. In which [case](pondering-panspermia) the chance that Earth life came from another star could also be high, and the best place to look for life outside our star would be the other stars from our stellar nursery. Now consider the possibility of lost advanced civilizations. Not just civilizations at a similar level of development to those around them in space and time; that’s quite likely given that we keep finding new previously-unknown settlements and developed places. No, the more interesting claims are about substantial (but not crazy extreme) decreases in the peak or median level of civilizations across wide areas.
Such as what happened in the late Mediterranean Bronze Age, or at the fall of the Roman Empire. Could there have been “higher” civilizations before the “first” ones that we now know about in each region, such as the Sumerians, Egyptians, and Chinese Shang dynasty? (I’m talking human civs, [not](dinos-on-the-mo) others.) Yes, you might think of these as “extraordinary” claims for which we lack extraordinary evidence, and declare them unlikely and sloppy unscientific speculation, to be disdained by the respectable. But again, that’s not fair. A priori it is nearly as likely that overall advancement in a region would have taken a big (but not crazy huge) temporary dip, as that it would have had a recently-typical rise. No, that isn’t much of a reason for skepticism. Substantial, if hardly overwhelming, supporting evidence comes in the form of writings from the earliest authors we can find, who explicitly claim that they descended from more advanced prior civilizations, which fell due to big cataclysms. This story is actually quite common. Further supporting evidence comes when the earliest versions of the first civilizations we see had surprisingly advanced abilities for their time in key areas, abilities which then declined over time. That is what you’d expect to see after a prior peak. And that does seem to be what we see in Egypt and Peru, as far as I can tell, regarding stone masonry abilities. Of course that might also just reflect local fluctuations in particular abilities; the big question is how much correlation to expect to see across different kinds of civilization abilities. The most common contrary evidence offered is the absence of expected supporting evidence.
For example: > No matter how devastating an extraterrestrial impact might be, are we to believe that after centuries of flourishing every last tool, potsherd, article of clothing, and, presumably from an advanced civilization, writing, metallurgy and other technologies—not to mention trash—was erased? Inconceivable. (More) > He claims that glacial runoff from the comet’s incineration of the ice sheets covering North America could have destroyed every trace of civilization, though how animal bones survived but not a single stone or metal tool, or a single indisputably human-carved block of stone is beyond me … Clovis people left behind tens of thousands of stone tools and fluted points, while Atlantis is represented by exactly nothing. Even if their bones turned to dust, where are their stones and their metals? Where is the pollen from their crops…? (More) The key question here is: what sort of historical evidence should you expect to have already seen, if it were really there? On the one hand, we clearly have seen enough to safely conclude that there aren’t large dinosaurs roaming the streets in our major city centers. On the other hand, we often hear reports of people uncovering old things that others had pretty confidently predicted would never be found. Which makes many suspect widespread overconfidence in claims about what we know can’t be there, because if so we would have seen them already. Yes, the bigger and more techy a lost civilization one postulates, the more likely it is that we’d have seen evidence of it. For example, the bigger a civ, the earlier they adopted pottery, and the more widely they used it, the more we should expect to find pottery shards. Similar for widespread use of metal. But if there are plausible civ hypotheses that don’t require them to be as big, or as much into stuff that creates long-lasting evidence like pottery shards, then the more trouble we’ll have rejecting such hypotheses.
One complication re lost advanced civilizations is that the last 7K years have seen especially calm weather worldwide. Before that, sea levels changed a lot more, and before 10Kya temperatures changed a lot more, and much of the Earth was covered with glaciers. There may even have been some huge worldwide cataclysms around 12Kya. All this made it harder to sustain complex civilizations back then, but also made it harder to preserve evidence of them for us to see now, if they had been there. Seems to me we want something like prediction markets here, to give better incentives and aggregation re predictions of what stuff will be found where and when. So let me suggest: markets in archeology prize obligations. First, let’s set up some archeology prizes, each of which pays $P to the first group who can show an X found in region R from before date D. For example, show a homo sapiens skull found in the Americas dated before 200Kya. Define $P in units of some standard investment asset, like the S&P 500 or MSCI All Country World Index. Then create markets where people can be paid in those same units to take on fractions of prize obligations. For example, someone might be paid 10 units to take on an obligation to pay 100 units of the pre-200Kya Americas skull prize. The asset ratio price in these markets, such as the 10% ratio of 10 to 100 in the example above, could be interpreted as a probability that the prize will ever be won. With enough kinds of prizes for enough findings X, regions R, and dates D, we could get a pretty good picture of what we are likely to find. Such as lost advanced civilizations. These prize payments would encourage more archeology effort to discover things. Skeptics who see little chance of dramatic findings might eagerly be paid to take on such obligations, while enthusiasts who see such discoveries as more likely could take the opposite sides of such transactions.
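The pricing rule just described can be sketched in a few lines (a toy illustration; the 10-for-100 trade is the hypothetical example from the text, and since both sides are denominated in units of the same investment asset, no separate discounting is needed):

```python
def implied_win_probability(payment: float, obligation: float) -> float:
    """Payment received now to take on a prize obligation, divided by the
    obligation's face value (both in units of the reference asset), reads
    as the market's probability that the prize is ever won."""
    return payment / obligation

# Someone is paid 10 units to take on a 100-unit prize obligation.
p = implied_win_probability(10, 100)
print(f"market-implied chance the prize is won: {p:.0%}")
```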
Each side can expect to profit by reversing their trade when the world comes to its senses and agrees with them, which might happen long before any prizes are actually paid for discoveries. Investigators who expect to be able to show particular findings soon might offer to pay now for others to take on obligations to pay bigger related prizes later. And these relative prices might give investigators hints about what to look for where. As it is quite legal to pay out prizes and to transfer obligations to pay prizes, all of this looks pretty legal to me. (But I’m not a lawyer, so of course not legally allowed to state opinions on such things.) Yes, you’d need to set things up to ensure that people will make good on obligations to pay prizes, but that seems feasible. We could get even more trading if anyone were allowed to pay to become an auxiliary prize recipient in case a prize was won by someone, but I’m less sure that would be considered legal. So, who wants to help set this up? Added 9a: During the classic Egypt era, many monuments were built over and near apparently much older sites with much older monuments built using apparently advanced tech. Many of these older sites have very large tunnel systems, many of which are far from being fully explored. That is my best bet re where to look for evidence of lost advanced civs.
## [Try-Try or Try-Once Great Filter?](#table-of-contents) _Posted on 2020-12-03_ [Here’s](new-hard-steps-results) a simple and pretty standard theory of the origin and history of life and intelligence. Life can exist in a supporting oasis (e.g., Earth’s surface) that has a volume V and metabolism M per unit volume, and which lasts for a time window W between forming and then later ending. This oasis makes discrete “advances” between levels over time, and at any one time the entire oasis is at the same level.
For example, an oasis may start at the level of simple dead chemical activity, may later rise to a level that counts as “life”, then rise to a level that includes “intelligence”, and finally to a level where civilization makes big loud noises that are visible as clearly artificial from far away in the universe. There can be different kinds of levels, each with a different process for stepping to the next level. For example, at a “delay” level, the oasis takes a fixed time delay D to move to the next level. At a “[try once](two-types-of-future-filters)” level, the oasis has a particular probability of immediately stepping to the next level, and if it fails at that it stays forever “stuck”, which is equivalent to a level with an infinite delay. And at a “try try” level, the oasis stays at a level while it searches for an “innovation” to allow it to step to the next level. This search produces a constant rate per unit time of jumping. As an oasis exists for only a limited window W, it may never reach high levels, and in fact may never get beyond its first try-try level. If we consider a high level above many hard try-try levels, and with small enough values of V, M, W, then any one oasis may have a very small chance of “succeeding” at reaching that high level before its window ends. In this case, there is a “great filter” that stands between the initial state of the oasis and a final success state. Such a success would then only tend to happen somewhere if there are enough similar oases going through this process, to overcome these small odds at each oasis. And if we know that very few of many similar such oases actually succeed, then we know that each must face a great filter. For example, knowing that we humans now can see no big loud artificial activity for a very long distance from us tells us that planets out there face a great filter between their starting level and that big loud level.
Each try-try type level has an expected time E to step to the next level, a time that goes inversely as V*M. After all, the more volume there is of stuff that tries, and the faster its local activity, the more chances it has to find an innovation. A key division between such random levels is between ones in which this expected time E is much less than, or much greater than, the oasis window W. When E << W, these jumps are fast and “easy”, and so levels change relatively steadily over time, at a rate proportional to V*M. And when E >> W, then these jumps are so “hard” that most oases never succeed at them. Let us focus for now on oases that face a great filter, have no try-once steps, and yet succeed against the odds. There are some useful patterns to note here. First, let S be the sum of the delays D for delay steps, and of the expected times E for easy try-try steps, for all such steps between the initial level and the success level. Such an oasis then really only has a time duration of about W-S to do all its required hard try-try steps. The first pattern to note is that the chance that an oasis does all these hard steps within its window W is proportional to (V*M*(W-S))^N, where N is the number of these hard steps needed to reach its success level. So if we are trying to predict which of many differing oases is most likely to succeed, this is the formula to use. The second pattern to note is that if an oasis succeeds in doing all its required hard steps within its W-S duration, then the time durations required to do each of the hard steps are all drawn from the same (roughly exponential) distribution, regardless of the value of E for those steps! Also, the time remaining in the oasis after the success level has been reached is also drawn from this same distribution. This makes concrete predictions about the pattern of times in the historical record of a successful oasis. Now let’s try to compare this theory to the history of life on Earth.
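That second pattern can be checked with a small Monte Carlo sketch (my own toy simulation, with made-up step rates): conditioned on success, two hard steps whose expected times E differ by a factor of ten show roughly equal average durations, as does the leftover window time.

```python
import random

def conditional_step_means(rates, window, trials=400_000, seed=0):
    """Simulate hard try-try steps with exponential times at the given
    rates; condition on all steps finishing within the window, and return
    the mean duration of each step plus the mean leftover window time."""
    rng = random.Random(seed)
    sums = [0.0] * len(rates)
    leftover = 0.0
    wins = 0
    for _ in range(trials):
        times = [rng.expovariate(r) for r in rates]
        total = sum(times)
        if total <= window:
            wins += 1
            for i, t in enumerate(times):
                sums[i] += t
            leftover += window - total
    return [s / wins for s in sums] + [leftover / wins]

# Two hard steps with expected times E = 500 and 5000, in a window W = 100.
means = conditional_step_means([1 / 500, 1 / 5000], window=100)
print([round(m, 1) for m in means])  # all three near W/3 ≈ 33
```

Despite the tenfold difference in E, the two conditional step durations and the leftover time all cluster near one third of the window, as the pattern predicts.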
The first known fossils of cells seem to be from 0.1-0.5 Ga (billion years) after life would be possible on Earth, which happened about 4.2 Gya (billion years ago), which was about 9.6 Ga after the universe formed. The window remaining for (eukaryotic) life to remain on Earth seems 0.8-1.5 Ga. The relatively [steady](brain-size-is-not-filter) growth in max brain sizes since multi-cellular life arose 0.5 Gya suggests that during this period there were many easy, but no hard, try-try steps. Multi-cellular life seems to require sufficient oxygen in the atmosphere, but the process of collecting enough oxygen seems to have started about 2.4 Gya, implying a long 1.9 Ga delay step. Prokaryotes started exchanging genes about 2.0 Gya, eukaryotes appeared about 1.7 Gya, and modern sex appeared about 1.2 Gya. These events may or may not have been the result of successful try-try steps. Can we test this history against the predictions that try-try hard step durations, and the window time remaining, should all be drawn from the same roughly exponential distribution? Prokaryote sex, eukaryotes, and modern sex all appeared within 0.8 Ga, which seems rather close together, leaving a long uneventful period of almost ~2 Ga before them. The clearest hard step duration candidates are before the first life, which took 0.0-0.5 Ga, and the window remaining of 0.8-1.5 Ga, which could be pretty different durations. Overall I’d say that while this data isn’t a clear refutation of the same hard step distribution hypothesis, it also isn’t that much of a confirmation. What about the prediction that the chance of oasis success is proportional to (V*M*(W-S))^N? The prediction about Earth is that it will tend to score high on this metric, as Earth is the only example of success that we know. Let’s consider some predictions in turn, starting with metabolism M.
Life of the sort that we know seems to allow only a limited range of temperatures, and near a star that requires a limited range of distances from the star, which then implies a limited range of metabolisms M. As a result of this limited range of possible M, our prediction that oases with larger M will have higher chances of success doesn’t have much room to show itself. But for what it’s worth, Earth seems to be nearer to the inner than outer edge of the Sun’s allowable zone, giving it a higher value of M. So that’s a weak confirmation of the theory, though it would be stronger if the allowed zone range were larger than most authors now estimate. What about volume V? The radii of non-gas-giant planets seem to be lognormally distributed, with Earth at the low end of the distribution (at a value of 1 on this axis): [](Planet-Size-Distribiution) So there are many planets out there (at r=4) with 16 times Earth’s surface area, and with 64 times the volume, ratios that must be raised to the power of N to give their advantage over Earth. And these larger planets are made much more of water than is Earth. This seems to be a substantial, if perhaps not overwhelming, disconfirmation of the prediction that Earth would score high on V^N. The higher is the number of hard steps N, the stronger is this disconfirmation. Regarding the time window W, I see three relevant parameters: when a planet’s star formed, how long that star lasts, and how often there are supernovae nearby that destroy all life on the planet. Regarding star lifetimes, main sequence star luminosity goes as mass to the ~3.5-4.0 power, which implies that star lifetimes go inversely as mass to the ~2.5-3.0 power. And as the smallest viable stars have 0.08 of our Sun’s mass, that implies that there are stars with ~500-2000 times the Sun’s lifetime, an advantage that must again be raised to the power N.
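A quick check of that lifetime figure: fuel scales with mass M while burn rate scales with luminosity L ∝ M^a, so lifetime ∝ M/L ∝ M^(1−a).

```python
# Lifetime ratio of the smallest viable star (0.08 solar masses) to the
# Sun, for luminosity exponents a in the quoted ~3.5-4.0 range.
for a in (3.5, 4.0):
    ratio = 0.08 ** (1 - a)  # lifetime ∝ M^(1-a)
    print(f"L ∝ M^{a}: lifetime ≈ {ratio:.0f} × the Sun's")
```

The two exponents give ratios of roughly 550 and 1950, matching the ~500-2000 range in the text.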
And there are actually a lot more such stars, 10-100 times more than stars of the Sun’s size: [](StarMassDistribution) However, the higher metabolism of larger mass stars gives them a spatially wider habitable zone for planets nearby, and planets near small stars are said to face other problems; how much does that compensate? And double stars should also offer wider habitable zones; so why is our Sun single? Now what if life that appears near small long-lived stars would appear too late, as life that appeared earlier would spread and take over? In this case, we are talking about a race to see which oases can achieve intelligence or big loud civilizations before others. In which case, the prediction is that winning oases are the ones that appeared first in time, as well as having good metrics of V, M, W. Regarding that, here are estimates of [where](galaxy-calc-shows-aliens) the habitable stars appear in time and galactic radii, taking into account both star formation rates and local supernovae rates (with the Sun’s position shown via a yellow star): [](GalacticHabitableZone) As you can see, our Sun is far from the earliest, and it’s quite a bit closer to the galactic center than is ideal for its time. And if the game isn’t a race to be first, our Sun seems much earlier than is ideal (these estimates are arbitrarily stopped at 10Ga). Taken together, all this seems to me to give a substantial disconfirmation of the theory that chance of oasis success is proportional to (V*M*(W-S))^N, a disconfirmation that gets stronger the larger is N. So depending on N, maybe not an overwhelming disconfirmation, but at least substantial and worrisome. Yes, we might yet discover more constraints on habitability to explain all these, but until we find them, we must worry about the implications of our analysis of the situation as we best understand it. So what alternative theories do we have to consider?
In this post, I’d like to suggest replacing try-try steps with try-once steps in the great filter. These might, for example, be due to evolution’s choices of key standards, such as the genetic code, choices that tend to lock in and get entrenched, preventing competing standards from being tried. The overall chance of success with try-once steps goes as the number of oases, and is independent of oasis lifetime, volume, or metabolism, favoring many small oases relative to a few big ones. With more try-once steps, we need fewer try-try steps in the great filter, and thus N gets smaller, weakening our prediction conflicts. In addition, many try-once steps could unproblematically happen close to each other in time. This seems attractive to me because I estimate there to be in fact a great many rather hard steps. Say at least ten. This is because the design of even “simple” single cell organisms seems to me amazingly complex and well-integrated. (Just look at it.) “Recent” life innovations like eukaryotes, different kinds of sex, and multicellular organisms do involve substantial complexity, but the total complexity of life seems to me far larger than these. And while incremental evolution is capable of generating a lot of complexity and integration, I expect that what we see in even the simplest cells must have involved a lot of hard steps, of either the try-once or the try-try type. And if they are all try-try steps, that makes for a huge N, which makes the prediction conflicts above very difficult to overcome. Well that’s enough for this post, but I expect to have more to say on the subject soon. Added 19Jan: Turns out we also seem to be in the wrong kind of galaxy; each giant elliptical with a low star formation rate hosts 100-10K times more habitable Earth-like planets, and a million times as many habitable gas giants, than does our Milky Way.
## [Great Filter with Set-Backs, Dead-Ends](#table-of-contents) _Posted on 2022-04-01_ A biological cell becomes cancerous if a certain set of rare mutations all happen in that same cell before its organism dies. This is quite unlikely to happen in any one cell, but a large organism has enough cells to create a substantial chance of cancer appearing somewhere in it before it dies. If the chances of mutations are independent across time, then the durations between the timing of mutations should be roughly equal, and the chance of cancer in an organism rises as a power law in time, with the power equal to the number of required mutations, usually around six. A similar process may describe how an advanced civilization like ours arises from a once lifeless planet. Life may need to advance through a number of “hard step” transitions, each of which has a very low chance per unit time of happening. Like evolving photosynthesis or sexual reproduction. But even if the chance of advanced life appearing on any one planet before it becomes uninhabitable is quite low, there can be enough planets in the universe to make the chance of life appearing somewhere high. As with cancer, we can predict that on a planet lucky enough to birth advanced life, the time durations between its step transitions should be roughly equal, and the overall chance of success should rise with time as the power of the number of steps. Looking at the history of life on Earth, many observers have estimated that we went through roughly six (range ~3-12) hard steps. In our [grabby aliens](http://grabbyaliens.com/) analysis, we say that a power of this magnitude suggests that Earth life has arrived very early in the history of the universe, compared to when it would arrive if the universe would wait empty for it to arrive. Which suggests that grabby aliens are out there, have now filled roughly half the universe, and will soon fill all of it, creating a deadline soon that explains why we are so early.
And this power lets us estimate how soon we would meet them: in roughly a billion years. According to this simple model, the short durations of the periods associated with the first appearance of life, and with the last half billion years of complex life, suggest that at most one hard step was associated with each of these periods. (The steady progress over the last half billion years also suggests this, though our paper describes a “multi-step” process by which the equivalent of many hard steps might be associated with somewhat steady progress.) In an excellent new [paper](https://royalsocietypublishing.org/doi/10.1098/rspb.2021.2711) in the _Proceedings of the Royal Society_, “Catastrophe risk can accelerate unlikely evolutionary transitions”, Andrew Snyder-Beattie and Michael Bonsall extend this standard model to include set-backs and dead-ends. > Here, we generalize the [standard] model and explore this hypothesis by including catastrophes that can ‘undo’ an evolutionary transition. Introducing catastrophes or evolutionary dead ends can create situations in which critical steps occur rapidly or in clusters, suggesting that past estimates of the number of critical steps could be underestimated. ([more](https://royalsocietypublishing.org/doi/10.1098/rspb.2021.2711)) Their analysis looks solid to me. They consider scenarios where, relative to the transition rate at which a hard step would be achieved, there is a higher rate of a planet “undoing” its last hard step, or of that planet instead switching to a stable “stuck” state from which no further transitions are possible. In this case, advanced life is achieved mainly in scenarios where the hard steps that are vulnerable to these problems are achieved in a shorter time than it takes for them to be undone or for the planet to get stuck. As a result, the hard steps which are vulnerable to these set-back or dead-end problems tend to happen together much faster than would other sorts of hard steps.
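That clustering effect can be illustrated with a toy simulation of my own (not their model; all rates are made up): two hard steps in a window, where a catastrophe can undo the first step before the second completes. Among successful runs, the gap between the two steps shrinks sharply when undoing is possible.

```python
import random

def mean_success_gap(step_rate, undo_rate, window, trials=300_000, seed=1):
    """Mean time between hard steps 1 and 2, among runs where both steps
    complete within the window. After step 1, a catastrophe at undo_rate
    can erase it, forcing step 1 to be redone."""
    rng = random.Random(seed)
    total_gap, wins = 0.0, 0
    for _ in range(trials):
        t = 0.0
        while True:
            t += rng.expovariate(step_rate)        # achieve step 1
            if t > window:
                break                              # window ended: failure
            gap = rng.expovariate(step_rate)       # candidate step 2 time
            undo = rng.expovariate(undo_rate) if undo_rate else float("inf")
            if gap < undo:                         # step 2 beats catastrophe
                if t + gap <= window:
                    total_gap += gap
                    wins += 1
                break
            t += undo                              # step 1 undone; retry
    return total_gap / wins

# Hard steps with expected time 1000, in a window of 100.
gap_calm = mean_success_gap(0.001, 0.0, 100)    # no catastrophes
gap_risky = mean_success_gap(0.001, 0.05, 100)  # catastrophes can undo step 1
print(f"mean step gap, calm: {gap_calm:.1f}, risky: {gap_risky:.1f}")
```

Without catastrophes the successful gap averages about a third of the window; with frequent undoing, only runs where step 2 follows quickly survive, so the vulnerable steps cluster, just as the paper argues.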
So if life on early Earth was especially fragile amid especially frequent large asteroid impacts, many hard steps might have been achieved then in a short period. And if in the last half billion years advanced life has been especially fragile and vulnerable to astronomical disasters, there might have been more hard steps within that period as well. Their paper only looks at the durations between steps, and doesn’t ask if these model modifications change the overall power law formula for the chance of success as a function of time. But my math intuition feels pretty sure that the power law dependence will remain, where the power now goes as the number of all these steps, including the ones that happen fast. Thus as these scenarios introduce more hard steps into Earth history, the overall power law dependence of our grabby aliens model should remain but become associated with a higher power. Maybe more like twelve instead of six. With a higher power, we will meet grabby aliens sooner, and each such civilization will control fewer (but still many) galaxies. Many graphs showing how our predictions vary with this power parameter can be found in our [grabby aliens](http://grabbyaliens.com/) paper.
## [Seeing ANYTHING Other Than Huge-Civ Is Bad News](#table-of-contents) _Posted on 2021-07-04_ The great filter is whatever obstacles prevent simple dead matter from evolving into a civilization big and visible on astronomical scales. The fact that we see nothing big and visible in a huge universe says this filter must be large, and a key question is the size of the future filter: how much have we passed and how much remains ahead of us? I’ve suggested that evidence of life elsewhere below our level makes the past filter look smaller, and thus our future filter larger. From which you might conclude that evidence of a civilization above our level is good news.
That seems to be the claim here at Vox: > If (and I must stress that this is a quite unlikely “if”) UFO sightings on earth are actually evidence that an advanced alien civilization has developed a system of long-distance probes that it is using to monitor or contact humanity, then that would be an immensely hopeful sign in Great Filter terms. It would mean that at least one civilization has far surpassed humanity without encountering any insurmountable hurdles preventing its survival. (more) But I don’t think that’s right. This would move the filter more to above their level, but below the level of becoming big and visible, without changing the size of the total filter. Which implies a larger future filter for us. In addition, any UFO aliens are [likely](ufos-what-the-hell) here to actively impose a filter on us, i.e., to stop us from getting big and visible (or “grabby”). So if UFOs-as-aliens is not good news, what would be good news re our future filter? Aside from detailed engineering and social calculations showing that we are in fact very close to becoming irreversibly grabby, the only good news I can imagine is actual concrete evidence of big visible alien civilizations out there. Maybe we’ve misread their signatures somehow. Looking out further and in more detail at the universe and still finding it dead suggests the total filter is larger, which is bad news. And finding any evidence of anything other than death suggests the filter is smaller up to the level of that finding, but doesn’t revise our estimate of the total filter. Which is bad news re our future. Thus a perhaps surprising conclusion: finding anything other than a big visible civilization out there is bad news re our future prospects for becoming big and visible. Remember also: the SIA indexical prior (IMHO the reasonable choice) favors larger future filters. Beware the future filter!
## [Our Level in the Great Filter](#table-of-contents)

_Posted on 2022-07-08_

An exchange between astrophysicist [Charles Lineweaver](https://www.mso.anu.edu.au/~charley/) and myself. In their 2019 paper “The [Timing of Evolutionary Transitions Suggests Intelligent Life is Rare](https://www.liebertpub.com/doi/10.1089/ast.2019.2149)”, Snyder-Beattie, Sandberg, Drexler, and Bonsall argue that the expected time for “intelligent life” to appear on Earth “likely exceed[s] the lifetime of Earth, perhaps by many orders of magnitude”, which “corroborate[s] the original argument suggested by Brandon Carter that intelligent life in the Universe is exceptionally rare.” In a Feb. 2022 comment in _Inference_, “[A Lonely Universe](https://inference-review.com/article/a-lonely-universe)”, Charles Lineweaver disagreed:

> The Snyder-Beattie et al. result depends on the assumption that … the major transitions that characterize our evolution happen elsewhere. There is little evidence in the history of life on earth to support this assumption. … transition to human-like intelligence or technological intelligence occurred only about 100,000 years ago and is species-specific. The latter trait is strong evidence we should not expect to find it elsewhere.
>
> It [is not] reasonable to argue that … the features of life on earth … most likely to appear in life elsewhere are those that have evolved independently many times, such as complex multicellularity, eyes, wings, and canines. … [because] these … have only occurred within a unique [never-repeated] eukaryotic branch that represents a tiny fraction of the diversity of life on earth.
> …
>
> Attempting to compute the probability of human-like intelligence elsewhere based on our lineage is akin to analyzing the evolution of the English language on earth and trying to use the timing of the Great Vowel Shift to estimate its timing on other planets.

My July 2022 [reply](https://inference-review.com/letter/understanding-the-chances-for-life), also in _Inference_, says:

> Lineweaver suggests that without good reasons to think “the major transitions that characterize our evolution happen elsewhere,” estimates regarding Earth do not allow us to make estimates regarding other planets.
>
> On the contrary, I see two ways to compare planets so that Earth estimates become relevant for other planets, allowing us to infer a low overall rate at which advanced life appears elsewhere. First, if Earth is a random sample from planets that succeed in making life at our level, the success rate on Earth cannot be too different from the typical success rate on other such planets. Second, if there is a substantial chance that our descendants will soon become very visible in the universe, the fact that no other star in our galaxy has yet done so can set a low upper bound on the fraction of such stars that can have reached our level by now. …
>
> Let R be the chance of life at our current level—i.e., controlling nuclear power and practicing spaceflight—appearing on a particular planet within some fixed planet habitability duration. … chance Q that, within the following ten million years, a planet at our level would give rise to a civilization that becomes permanently visible across its entire galaxy.

[I elaborated with math examples for both these approaches.]
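To illustrate the second of those two approaches, here is a minimal back-of-the-envelope sketch. The star count `N` and the value of `Q` below are illustrative assumptions chosen for this sketch, not figures from the letter: if none of `N` stars in a galaxy hosts a visible civilization, the expected count `N * R * Q` of such civilizations should be below about one, which bounds `R` from above.

```python
# Illustrative upper bound on R, the per-planet chance of reaching our
# level, from seeing zero visible civilizations among N stars.
# N and Q are assumptions for illustration, not values from the letter.

N = 1e11  # rough star count for a Milky-Way-like galaxy (assumed)
Q = 1e-6  # assumed chance a civilization at our level becomes
          # permanently galaxy-visible within ten million years

# Expected number of visible civilizations is roughly N * R * Q;
# observing none suggests N * R * Q < 1, so:
R_bound = 1.0 / (N * Q)
print(f"R is at most about {R_bound:.0e}")  # prints "R is at most about 1e-05"
```

The point of the sketch is only that a null observation over many stars, combined with any non-trivial value of Q, forces R to be small.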
In that [same](https://inference-review.com/letter/understanding-the-chances-for-life) place, Lineweaver then responded:

> I don’t believe in the general group that he and many others call “advanced life.” … No other life-forms in the universe will be genetically or phenotypically more similar to us than chimps, bonobos, gorillas, naked mole rats, or frogs. Since Hanson and many others exclude our closest relatives from “advanced life,” they are—by their definition—not talking about a generic group with other members. …
>
> On Earth, humans are the only ones who have become humans at our level of technology. To then conclude that among all species, our species had an average chance of becoming humans at our level is meaningless. …
>
> Morris … argues that strong selection pressure leads to [convergent](https://link-springer-com.mutex.gmu.edu/chapter/10.1007/978-1-4020-8837-7_17) evolution which then produces human-like intelligence. Hanson and most physicists subscribe to this view, but most biologists and I don’t. … Hanson refers to … life at our level … I … ask: If we exclude our species from consideration, does this talk of levels make any sense when applied to the rest of life? Are dogs or red oak trees at a higher level?

Reading Lineweaver’s response, I see my reply was off target; his issue is with the very idea of “life at our level”. So let me try again. A key datapoint is this: we do _not_ now see any big visible civilizations (BVC) in the sky that have greatly changed the natural universe into something more to their liking. In order to explain this fact, we must postulate a “[great filter](https://en.wikipedia.org/wiki/Great_Filter)”, i.e., a process whereby simple dead matter _might_ give rise first to simple life, and then to a BVC, or various filter obstacles might end this progress, so that it never produces a BVC. We must conclude that so far, averaging across the universe, this filter process has a _very_ low total pass-through rate to a BVC.
After all, _no_ dead matter in the entire universe has yet given rise to a BVC we can see. That is, this great filter is on average very large. In contrast, Earth today seems to plausibly have a much higher rate of creating a BVC. I’d say we have at least a one in a million chance of doing so within the next ten million years. (This isn’t a value judgement, just an estimate.) As Earth is now thus much closer to this BVC endpoint than it was originally, there is a sense in which Earth has now passed through part of the great filter, so that a substantially smaller filter lies before us than once lay before a simple dead Earth.

To talk about how much of the great filter we have so far passed, we’d like a way to talk about where we “are now” in this filter process. And this is why we want to talk about our current “level” along some linear path from dead matter to BVC. But, as Lineweaver points out, evolution is in many ways a tree, not a line, and we cannot construct such a level concept merely by creating a conjunction of various random specific features of our species and planet.

Even so, I do think there are useful ways to define “our level” (OL) within the great filter. What we want is an equivalence class OL of alien civilization-moments such that (a) Earth today is in OL, (b) almost all BVC were once in OL at some prior point in their history, and (c) OL covers only a short “time slice” during which few civilizations go extinct. If several classes satisfy these criteria, we’d further like to pick OL so that (d) it minimizes the variance in the (coarse-grained) chance that each civilization in OL later gives rise to a BVC. The lower this variance, the more it makes sense to talk in terms of the average chance within OL of giving rise later to a BVC. One option would be to just define OL as the class that meets criteria (a,b,c) and actually minimizes (d). But while this might be well defined, it seems unwieldy.
Which is why I tried above to define OL in terms of a civilization having just mastered the basics of both nuclear power and spaceflight. It might be reasonable to add a few other techs to this list, such as computers. Sure, we’d define somewhat different OL sets if we added or cut techs from this list. But the key point is that any civilization that had mastered all of them would be well on its way to being able to start a BVC soon. And most likely the chance of extinction is low between the point of having mastered half of these techs and mastering all of them. Thus the exact list of techs in our OL definition probably doesn’t make that much difference.

Yes, this way to define OL lets humans pass through OL, while chimps never do. But I just don’t see why that’s a problem. There is in fact a big important difference between what humans and chimps have accomplished, and I’m fine with our OL definition reflecting that.

## [At Least Two Filters](#table-of-contents)

_Posted on 2010-11-28_

Where lies the great filter, i.e., the obstacles that make it extremely unlikely that any one chunk of pre-organic matter originates a visibly expanding interstellar civilization? While it seems [unlikely](brain-size-is-not-filter) that our ancestors passed through much of a filter in the last half billion years, our descendants may face a big filter in the next few thousand years, and there may have been big filters associated with the origin of life, the spread of life, the invention of complex cells, sexual reproduction, or multicellular life. In many folks’ eyes, an elegantly simple resolution, one they judge likely because of its simplicity, is to assume there is just one huge filter: the origin of life. Assuming that first step is enormously hard allows one to think all the other steps are pretty easy.
They wouldn’t be sure things of course, but conditional on a big enough origin-of-life filter, one wouldn’t have a strong reason to fear that common analyses underestimate future filters. Unfortunately, the elegantly simple hypothesis that the great filter is mainly a big origin-of-life filter seems at odds with our best evidence. Why? Because if the spread-of-life step had the weakest possible associated filter, then life spreading must be easy. Over billions of years life could have [spread](pondering-panspermia) to many star systems from its place of [origin](all-hail-william-napier):

> Life could spread across a galaxy via giant molecular clouds reliably collecting life from the stars they drift near, and then passing that life on to a few of the thousands of new stars they create.

If over billions of years life spread to many hundreds, or even billions, of star systems, and no substantial filters stood between the arrival of life near a star and its eventual development of advanced technical civilizations like ours, then why would we now see no evidence of other civilizations? Yes, it is possible that we are the very first, but that hypothesis is of course unlikely by default.

It seems to me that if the great filter is to consist of just one big step, the only plausible possibility is the development of multi-cellular life. All the steps before that one seem able to spread to other star systems via single-celled life hidden in dust, and [it seems](brain-size-is-not-filter) we haven’t had a big filter step since the multi-cellular innovation. So if the idea of just one big filter appeals to your sense of elegance, you’ll have to presume that life, including complex life with sexual reproduction etc., is very common in our vast universe, but that Earth is one of the handful of places in all that vastness with multi-cellular life. If you don’t find that plausible, well then you’ll have to grant there are at least two filters. And if two, why not three?
So you must find the possibility of a third filter in our future plausible; beware [future](beware-future-filters) [filters](fertility-the-big-problem).

## [Fertility: The Big Problem](#table-of-contents)

_Posted on 2010-11-15_

Many folks want to save the world. Especially young, single, energetic folks. Especially if they also get to: