

Dog/Human Alliance: Reading #2


The Scientific Method and the Value of What-If — Ch.3, excerpts

A Dog in the Cave: The Wolves Who Made Us Human (2017)

Kay Frydenborg


[1] Fossils are tangible evidence of the past, but deciphering what's written in those bones is part hard science and part speculation. Both are essential components of science. The first is based on solid evidence and meant to be judged by scientific peers and others accordingly; the second is less intended to support any predetermined conclusion than it is to encourage thought and additional research. Speculative research can, and often does, lead to new discoveries and understanding that enlarge the body of human scientific knowledge, but only if the author is scrupulous about distinguishing what is known for sure, what is probable, and what might be possible and can be tested to see if it's true. It's the difference, in the words of the New York Times science reporter James Gorman, between "what is" and "what if."


[2] Since at least the seventeenth century, scientists the world over have followed a set of techniques and standards for investigating the natural world; this is known as the scientific method. By this method, observations about the world and events that happen in it can be investigated, new knowledge can be acquired, and previous knowledge can be corrected and integrated into what is learned today. But to be considered scientifically valid, research must be based on observable, measurable evidence that follows the principles of logic. The basic stages or steps that all scientists strive to follow include systematic observation, measurement, experimentation, and the formulation, testing, and modification of hypotheses.


[3] A hypothesis is a proposed explanation for an observed phenomenon. Often scientists base hypotheses on previous observations that can't be satisfactorily explained with the available scientific theories. Proposing a hypothesis can be a very creative process, as scientists open their minds to possibilities and "what if" questions. Some of the most important discoveries in the scientific world have been arrived at by those willing to ask such open-ended questions. But to be a scientific hypothesis, the proposed explanation must be testable, and it must undergo extensive and rigorous testing before it may be accepted as a scientific theory. Previously accepted scientific theories have, at times, been proven false in the light of new information. In the scientific meaning of the word, then, a theory has been tested and generally accepted as an accurate explanation of the observation or phenomenon being explored. A working hypothesis is a provisionally accepted theory, pending further research.


[4] The way scientists work is to observe reality, and then to let reality speak for itself, supporting a hypothesis when its predictions are confirmed, and challenging it if its predictions prove false. So in science, asking (or formulating) the right question is as important as finding the right answers, since both are necessary to discover the truth. After proposing a hypothesis, researchers must design experimental studies to test the hypothesis by finding out whether the predictions that follow from it are true.


[5] The reason for such strict standards is to keep scientific inquiry as objective as possible. Scientists are also expected to document, archive, and share (often in the form of peer-reviewed, published papers) all of the data collected and the methods used, so that other scientists can attempt to reproduce and verify the results. Without these steps, science isn't really science; it's more a kind of storytelling. Imagination and belief are one kind of truth, but science is a different level of truth. For thousands of years people have recognized that knowledge requires both.


Nicholas Wade

Before the Dawn  (2006)

Ch.7 — Settlement [excerpts]


[1] The last glacial maximum preceded the emergence not only of people who looked somewhat different from each other but, far more significantly, of people who behaved differently from all their predecessors. In the southern borders of the western half of Eurasia, around the eastern shores of the Mediterranean, a new kind of human society evolved, one in which hunters and gatherers at last developed the behaviors necessary for living in settled communities.


[2] The Pleistocene did not depart quietly but in a roller coaster of climatic swings. After the Last Glacial Maximum of 20,000 to 15,000 years ago came a warming period known as the Bolling-Allerod Interstadial, during which plants, animals and people were able to move northward again. But the Bolling-Allerod warming, which lasted from 15,000 to 12,500 years ago, was a false dawn. A second cold period, particularly challenging because it began so abruptly, established its grip on Eurasia. Within a decade, it had sent temperatures plummeting back to almost glacial levels and soon had converted to tundra the vast forests of Northern Europe. This deadly cold snap is known as the Younger Dryas, after a dwarf yellow rose, Dryas octopetala, that grew amid the tundra.


[3] The Younger Dryas lasted for 1,300 years and ended as suddenly as it began, also in a decade or so, according to the cores drilled through the Greenland ice cap that serve as an archive of global climate. By 11,500 years ago the world was launched on the Holocene, the inter-ice age period that still prevails. These wrenching climatic and territorial changes would have posed severe tests to human survival, doubtless forcing people to resort to many new expedients even in the warmer southern latitudes. The precise chain of cause and effect, if any, remains a mystery. All that can be said for now is that in the Near East, as the Last Glacial Maximum ended, a new kind of human society began to emerge, one based not on the narrow ambit of the forager's life but on settling down in one place.


[4] Settling down, or sedentism, as archaeologists say, may sound simple and obvious, but for foragers it was not nearly so clear a choice. Sedentism tied people to a single exposed site, increasing vulnerability to raiders. Sedentism attracted noxious vermin and disease. Sedentism required new ways of thought, new social relationships and a new kind of social organization, one in which people had to trade their prized freedom and equality for hierarchy, officials and chiefs and other encumbrances.


[5] Archaeologists have little hesitation in describing the transition to sedentism as a revolution, comparable to the one that defines the beginning of the Upper Paleolithic 50,000 years ago when behaviorally modern humans emerged from their anatomically modern forebears. Ofer Bar-Yosef of Harvard refers to these transitions as "two major revolutions in the history of humankind." Hunter-gatherers own almost no personal property and, without differences of wealth, everyone is more or less equal. The first settled communities show evidence of a quite different social order. Houses and storage facilities seem to have been privately owned. With personal property allowed, some people quickly acquired more of it than others, along with greater status. The old egalitarianism disappeared and in its place there emerged a hierarchical society, with chiefs and commoners, rich families and poor, specializations of labor, and the beginnings of formal religion in the form of an ancestor cult.


[6] "Daily life in a village that is larger than a foragers' band heralds the restructuring of the social organization, as it imposes more limits on the individual as well as on entire households," writes Bar-Yosef. "To ensure the long-term predictability of habitable conditions in a village, members accept certain rules of conduct that include, among other things, the role of leaders or headmen (possibly the richest members of the community), active or passive participation in ceremonies (conducted publicly in an open space) and the like."


[7] Sedentism must also have included a response to the most pressing of human social needs, defense against other human groups. For hunter-gatherers, the essence of security is mobility. For the first settlers, defense must have rested on some other basis, which was presumably that of population size. Because the settlers had learned to live together in larger groups, they would have outnumbered the attackers. With greater manpower than the usual foraging group, together with fortifications and perhaps the guard dogs . . . settlers would have been able to even the odds against the raiding parties after their food and women.


[8] This new form of social organization preceded and perhaps prompted such innovations as the cultivation of wild cereals, and the penning and herding of wild animals like sheep and goats. These steps led in turn, perhaps more by accident than design, to the domestication of plants and animals and to the beginnings of agriculture. Settled life and the new hierarchical form of society paved the way for complex societies, cities, civilization and, in rudimentary form, the institutions of today's urban life. Almost all subsequent human history and development seems in one sense a consequence of the pivotal transition from the foraging lifestyle to a settled, structured society.


[9] The innovations of settled life and agriculture started to spread through Europe 10,000 years ago, a date that marks the beginning of the Neolithic age. Because the two inventions became so visible in the Neolithic, archaeologists long assumed that the improving climate made agriculture possible, which in turn opened the gateway to settled living. But in part because of improved dating techniques, they have come to see that the reverse is true: it was not agriculture that led to settlement, but rather sedentary life came first, well before the Neolithic age began, and agriculture followed in its train.


Ch.8 — Sociality [excerpts]

[10] One principle that biologists think may help explain larger societies, both human and otherwise, is that of reciprocal altruism, the practice of helping even a nonrelated member of society because they may return the favor in future. A tit-for-tat behavioral strategy, where you cooperate with a new acquaintance, and thereafter follow his strategy toward you (retaliate if he retaliates, cooperate if he cooperates), turns out to be superior to all others in many circumstances. Such a behavior could therefore evolve, providing that a mechanism to detect and punish freeloaders evolves in parallel; otherwise freeloaders will be more successful and drive the conditional altruists to extinction.
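The tit-for-tat strategy described above can be made concrete with a small simulation. The sketch below plays an iterated prisoner's dilemma; the payoff values (3 for mutual cooperation, 5 for exploiting a cooperator, 1 for mutual defection, 0 for being exploited) are the standard values from Robert Axelrod's tournaments, an assumption not stated in the text, and the function names are illustrative only.

```python
# Iterated prisoner's dilemma: a minimal sketch of tit-for-tat.
# Payoffs are the conventional Axelrod values (an assumption),
# keyed as (my move, their move) -> my score.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,   # "C" = cooperate, "D" = defect
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first meeting, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A freeloader: defects unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return (score_a, score_b) after repeated encounters."""
    seen_by_a, seen_by_b = [], []   # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

# A defector exploits tit-for-tat only once; retaliation follows
# immediately, so the freeloader's advantage stays small.
print(play(tit_for_tat, always_defect))   # -> (9, 14)
print(play(tit_for_tat, tit_for_tat))     # -> (30, 30)
```

Two tit-for-tat players settle into steady cooperation, which is why the strategy does so well in repeated encounters: it rewards cooperators, punishes defectors, and forgives as soon as the other side cooperates again.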


[11] Conditional or tit-for-tat altruism cannot evolve in just any species. It requires members to recognize each other and have long memories, so as to be able to keep tally. A species that provides a shining example of reciprocal altruism is none other than the vampire bat. The bats, found in South America, hang out in colonies of a dozen or so adult females with their children. They feed by biting a small incision in the skin of sleeping animals, nowadays mostly cattle or horses, and injecting a special anticoagulant named, naturally enough, draculin. But their blood collection drives are not always successful. On any given night a third of the young bats and 7% of the adults are unsuccessful, according to a study by Gerald S. Wilkinson of the University of Maryland.


[12] This could pose a serious problem because vampire bats must feed every three days, or they die. The colony's solution, Wilkinson found, is that successful bats regurgitate blood to those who went hungry. Bats are particularly likely to donate blood to their friends, with whom they have grooming relationships, to those in dire need, and to those from whom they received help recently. The vampires' reciprocal altruism must be particularly effective since the bats, despite the risk of death after three bloodless nights, can live for 15 years.


[13] If social altruism has evolved among vampire bats, there is no reason why it could not also emerge among primates. And indeed it can be seen at work in the coalitionary politics of male chimpanzees, where the alpha male depends on allies to preserve his dominance of the male hierarchy. The biologist Robert L. Trivers, who first showed how reciprocal altruism could be favored by natural selection, suggested that in people a wide range of sophisticated behaviors grew up around it, including cheating (failure to return an altruistic favor to the giver), indignation at cheating, and methods to detect cheating.


[14] Many common emotions can be understood as being built around the expectation of reciprocity and the negative reaction when it is made to fail. If we like a person, we are willing to exchange favors with them. We are angry at those who fail to return favors. We seek punishment for those who take advantage of us. We feel guilty if we fail to return a favor, and shame if publicly exposed. If we believe someone is genuinely sorry about a failure to reciprocate, we trust them. But if we detect they are simulating contrition, we mistrust them.


[15] The instinct for reciprocity, and the cheater-detection apparatus that accompanies it, seem to be the basis for a fundamental human capacity: the extension of trust beyond the circle of kin. Without such trust, societies would still consist of family units a few score strong, and cities and great economies would have had no foundation for existence. How might this greater level of trust have arisen? Two hormones, known as oxytocin and vasopressin, are emerging as central players in modulating certain social behaviors in the mammalian brain. The hormones are generated in the pituitary gland at the base of the brain and have effects both on the body and in the brain. Oxytocin induces both labor in childbirth and the production of milk. Its effects on the mind, at least in experimental animals, have the general property of promoting affiliative or trusting behavior, lowering the natural resistance that animals have to the close proximity of others. So what does oxytocin do in people? Researchers at the University of Zurich have found that it substantially increases the level of trust. Oxytocin, they say, "specifically affects an individual's willingness to accept social risks arising through interpersonal interactions." The findings emerged from giving subjects a sniff of oxytocin before playing a game that tested trusting behavior.


[16] If the biological basis of trusting behavior is mediated in this manner, the degree of trust could easily be ratcheted up or down in the course of human evolution by genetic changes that either increased individuals' natural production of the hormone or enhanced the brain's response to it. Thus hunter-gatherers might have a genetically lower response to oxytocin while city-dwellers would have evolved a greater sensitivity. Whatever the exact mechanism, it is easy to see how greater levels of trust might have evolved at various stages in human evolution, given that there is a biological basis for the behavior.


[17] Trust is an essential part of the social glue that binds people together in cooperative associations. But it increases the vulnerability to which all social groups are exposed, that of being taken advantage of by freeloaders. Freeloaders seize the benefits of social living without contributing to the costs. They are immensely threatening to a social group because they diminish the benefits of sociality for others and, if their behavior goes unpunished, they may bring about the society's dissolution. Human societies long ago devised an antidote to the freeloader problem. This freeloader defense system, a major organizing principle of every society, has assumed so many other duties that its original role has been lost sight of. It is religion.


The Evolution of Religion

[18] The essence of religion is communal: religious rituals are performed by assemblies of people. The word itself, probably derived from the Latin religare, meaning to bind, speaks to its role in social cohesion. Religious ceremonies involve emotive communal actions, such as singing or dancing, and this commonality of physical action reinforces the participants' commitment to the shared religious views. The propensity for religious belief may be innate since it is found in societies around the world. Innate behaviors are shaped by natural selection because they confer some advantage in the struggle for survival. But if religion is innate, what could that advantage have been?


[19] No one can describe with certainty the specific needs of hunter-gatherer societies that religion evolved to satisfy. But a strong possibility is that religion coevolved with language, because language can be used to deceive, and religion is a safeguard against deception. Religion began as a mechanism for a community to exclude those who could not be trusted. Later, it grew into a means of encouraging communal action, a necessary role in hunter-gatherer societies that have no chiefs or central authority. It was then co-opted by the rulers of settled societies as a way of solidifying their authority and justifying their privileged position. Modern states now accomplish by other means many of the early roles performed by religion, which is why religion has become of less relevance in some societies. But because the propensity for religious belief is still wired into the human mind, religion continues to be a potent force in societies that still struggle for cohesion.


[20] A distinctive feature of religion is that it appeals to something deeper than reason: religious truths are accepted not as mere statements of fact but as sacred truths, something that it would be morally wrong to doubt. This emotive quality suggests that religion has deep roots in human nature, and that just as people are born with a propensity to learn the language they hear spoken around them, so too they may be primed to embrace their community's religious beliefs.


[21] Can the origin of religion be dated? A surprising answer is yes, if the following argument is accepted. Like most behaviors that are found in societies throughout the world, religion must have been present in the ancestral human population before the dispersal from Africa 50,000 years ago. Although religious rituals usually involve dance and music, they are also very verbal, since the sacred truths have to be stated. If so, religion, at least in its modern form, cannot pre-date the emergence of language. It has been argued earlier that language attained its modern state shortly before the exodus from Africa. If religion had to await the evolution of modern, articulate language, then it too would have emerged shortly before 50,000 years ago.


[22] If both religion and language evolved at the same time, it is reasonable to assume that each emerged in interaction with the other. It is easy enough to see why religion needed language, as a vehicle for the sharing of religious ideas. But why should language have needed religion?


[23] The answer may have to do with the instinct for reciprocal altruism that is a principal cohesive force in human society, and specifically with its principal vulnerability, the freeloaders who may take advantage of the system without returning favors to others. Unless freeloaders can be curbed, a society may disintegrate, since membership loses its advantages. With the advent of language, freeloaders gained a great weapon, the power to deceive. Religion could have evolved as a means of defense against freeloading. Those who committed themselves in public ritual to the sacred truth were armed against the lie by knowing that they could trust one another.


[24] The anthropologist Roy Rappaport argued that sanctified statements were early societies' antidote to the misuse of the newly emerged powers of language. "This implies that the idea of the sacred is as old as language," he wrote, "and that the evolution of language and of the idea of the sacred were closely related, if not bound together in a single mutual causal process." The emergence of the sacred, he suggested, "possibly helped to maintain the general features of some previously existing social organization in the face of new threats posed by an ever-increasing capacity for lying."


[25] For early societies making the first use of language, there had to be some context in which statements were reliably and indubitably true. That context, in Rappaport's view, was sanctity. This feature has been retained to a considerable degree in modern religions, which are centered around sacred truths, such as "The Lord Our God the Lord is One," or "There is no god but God." These sacred truths are unverifiable, and unfalsifiable, but the faithful nevertheless accept them to be unquestionable. In doing so, like assemblies of the faithful since the dawn of language, they bind themselves together for protection or common action against the unbelievers and their lies. Trust among members of the same religious community can run so high that multi-million-dollar deals can be sealed by a handshake. Islam is said to have spread through Africa as a facilitator of trade and trust.


[26] Trust and cohesiveness are nowhere more important than in wartime. Contemporary religions preach the virtues of peace in peacetime but in war the bishops are expected to bless the cannon, and official churches almost always support national military goals. "Religion is superbly serviceable to the purposes of warfare and economic exploitation," writes the biologist Edward O. Wilson, noting that it is "above all the process by which individuals are persuaded to subordinate their immediate self-interest to the interests of the group."


[27] Why does religion persist when its primary role, that of providing social cohesion, is now supplied by many other cultural and political institutions? While religion may no longer be socially necessary, it nevertheless fills a strong need for many people, and this may reflect the presence of genetic predisposition. Wilson, for one, believes that religion has a genetic basis, that its sources "are in fact hereditary, urged into birth through biases in mental development encoded in the genes."


[28] Religion, language and reciprocity are three comparatively recent elements of the glue that holds human societies together. All seem to have emerged some 50,000 years ago. But a far more ancient adaptation for social cohesiveness, one that set human societies on a decisively different path from those of apes, was the formation of the pair bond. Much of human nature consists of the behaviors necessary to support the male-female bond and a man's willingness to protect his family in return for a woman's willingness to bear only his children.


The descent of Edward Wilson: A new book by a great biologist makes a slew of mistakes

Richard Dawkins

Prospect Magazine | May 24, 2012


 Review of The Social Conquest of Earth by Edward O Wilson [excerpts]


[1] . . . I am not being funny when I say of Edward Wilson's latest book that there are interesting and informative chapters on human evolution, and on the ways of social insects (which he knows better than any man alive), and it was a good idea to write a book comparing these two pinnacles of social evolution, but unfortunately one is obliged to wade through many pages of erroneous and downright perverse misunderstandings of evolutionary theory. In particular, Wilson now rejects "kin selection" (I shall explain this below) and replaces it with a revival of "group selection," the poorly defined and incoherent view that evolution is driven by the differential survival of whole groups of organisms.


[2] Nobody doubts that some groups survive better than others. What is controversial is the idea that differential group survival drives evolution, as differential individual survival does. The American grey squirrel is driving our native red squirrel to extinction, no doubt because it happens to have certain advantages. That’s differential group survival. But you’d never say of any part of a squirrel that it evolved to promote the welfare of the grey squirrel over the red. Wilson wouldn’t say anything so silly about squirrels. He doesn’t realise that what he does say, if you examine it carefully, is as implausible and as unsupported by evidence.


[3] I would not venture such strong criticism of a great scientist were I not in good company. The Wilson thesis is based on a 2010 paper that he published jointly with two mathematicians, Martin Nowak and Corina Tarnita. When this paper appeared in Nature it provoked very strong criticism from more than 140 evolutionary biologists, including a majority of the most distinguished workers in the field. They include Alan Grafen, David Queller, Jerry Coyne, Richard Michod, Eric Charnov, Nick Barton, Alex Kacelnik, Leda Cosmides, John Tooby, Geoffrey Parker, Steven Pinker, Paul Sherman, Tim Clutton-Brock, Paul Harvey, Mary Jane West-Eberhard, Stephen Emlen, Malte Andersson, Stuart West, Richard Wrangham, Bernard Crespi, Robert Trivers and many others. These may not all be household names but let me assure you they know what they are talking about in the relevant fields.


[4] I’m reminded of the old Punch cartoon where a mother beams down on a military parade and proudly exclaims, “There’s my boy, he’s the only one in step.” Is Wilson the only evolutionary biologist in step? Scientists dislike arguing from authority, so perhaps I shouldn’t have mentioned the 140 dissenting authorities. But one can make a good case that the 2010 paper would never have been published in Nature had it been submitted anonymously and subjected to ordinary peer-review, bereft of the massively authoritative name of Edward O Wilson. If it was authority that got the paper published, there is poetic justice in deploying authority in reply.


[5] Then there’s the patrician hauteur with which Wilson ignores the very serious drubbing his Nature paper received. He doesn’t even mention those many critics: not a single, solitary sentence. Does he think his authority justifies going over the heads of experts and appealing directly to a popular audience, as if the professional controversy didn’t exist— as if acceptance of his (tiny) minority view were a done deal? “The beautiful theory [kin selection, see below] never worked well anyway, and now it has collapsed.” Yes it did and does work, and no it hasn’t collapsed. For Wilson not to acknowledge that he speaks for himself against the great majority of his professional colleagues is— it pains me to say this of a lifelong hero— an act of wanton arrogance.


[6] The argument from authority, then, cuts both ways, so let me now set it aside and talk about evolution itself. At stake is the level at which Darwinian selection acts: “survival of the fittest” but, to quote Wilson’s fellow entomologist-turned-anthropologist RD Alexander, the fittest what? The fittest gene, individual, group, species, ecosystem? Just as a child may enjoy addressing an envelope: Oxford, England, Europe, Earth, Solar System, Milky Way Galaxy, Local Group, Universe, so biologists with non-analytical minds warm to multi-level selection: a bland, unfocussed ecumenicalism of the sort promoted by (the association may not delight Wilson) the late Stephen Jay Gould. Let a thousand flowers bloom and let Darwinian selection choose among all levels in the hierarchy of life. But it doesn’t stand up to serious scrutiny. Darwinian selection is a very particular process, which demands rigorous understanding.


[7] The essential point to grasp is that the gene doesn’t belong in the hierarchy I listed. It is on its own as a “replicator,” with its own unique status as a unit of Darwinian selection. Genes, but no other units in life’s hierarchy, make exact copies of themselves in a pool of such copies. It therefore makes a long-term difference which genes are good at surviving and which ones bad. You cannot say the same of individual organisms (they die after passing on their genes and never make copies of themselves). Nor does it apply to groups or species or ecosystems. None make copies of themselves. None are replicators. Genes have that unique status . . .


[8] Gene survival is what ultimately counts in natural selection, and the world becomes full of genes that are good at surviving. But they do it vicariously, by embryologically programming “phenotypes”: programming the development of individual bodies, their brains, limbs and sense organs, in such a way as to maximise their own survival. Genes programme the embryonic development of their vehicles, then ride inside them to share their fate and, if successful, get passed on to future generations.


[9] So, “replicators” and “vehicles” constitute two meanings of “unit of natural selection.” Replicators are the units that survive (or fail to survive) through the generations. Vehicles are the agents that replicators programme as devices to help them survive. Genes are the primary replicators, organisms the obvious vehicles. But what about groups? As with organisms, they are certainly not replicators, but are they vehicles? If so, might we make a plausible case for “group selection”?


[10] It is important not to confuse this question— as Wilson regrettably does— with the question of whether individuals benefit from living in groups. Of course they do. Penguins huddle for warmth. That’s not group selection: every individual benefits. Lionesses hunting in groups catch more and larger prey than a lone hunter could: enough to make it worthwhile for everyone. Again, every individual benefits: group welfare is strictly incidental. Birds in flocks and fish in schools achieve safety in numbers, and may also conserve energy by riding each other’s slipstreams— the same effect as racing cyclists sometimes exploit.


[11] Such individual advantages in group living are important but they have nothing to do with group selection. Group selection would imply that a group does something equivalent to surviving or dying, something equivalent to reproducing itself, and that it has something you could call a group phenotype, such that genes might influence its development, and hence their own survival.