The Trump Presidency Will Be a Large-Scale Replication Experiment in Destructive Obedience: Here's How to Resist

(I tried to get this published as an op-ed in a few places but met with failure and stonewalling, so I'm putting it on the blog. Please share if you find it useful.)

You might think that, while four to eight years of President Trump will be embarrassing, they will not leave an indelible stain. But know this: America is not special. Our smug self-assurance that genocide, democide, and other crimes against humanity only happen in other countries may be our undoing. Americans are no better and – let us hope – not much worse than people everywhere. And people everywhere are liable to obey authorities who incrementally ratchet up their destructive orders.

There’s good scientific evidence for this claim. In the 1960s, the psychologist Stanley Milgram demonstrated it at Yale University. He showed that approximately two-thirds of ordinary American adults will, when subject to escalating social pressure, put 450 volts of electricity through a complete stranger whose only sin is failing to memorize a list of words.

The setup of Milgram’s experiment is simple: a participant and an actor posing as a second participant are ushered into the lab. The participant is “randomly” selected to be the teacher, while the actor becomes the learner. Each time the actor makes a mistake in recalling a list of words, the participant shocks him.

The shocks start at a benign 15 volts and increase by 15 volts with each subsequent mistake. At first the actor stoically grunts through the pain, but at 150 volts he demands to be released from the experiment. By 300 volts, he’s “unconscious.” The experimenter instructs the participant to treat a failure to answer as a wrong answer, leading ultimately to three successive shocks of 450 volts.

Why don’t the participants object? Many do. But at the first sign of disobedience, the experimenter mildly instructs, “Please go on.” Further disobedience is met with “The experiment requires that you continue,” then “It is absolutely essential that you continue,” and finally “You have no other choice, you must go on.” If the participant rebels a fifth time, the experiment is terminated. These verbal nudges are enough to get two-thirds of participants to be maximally compliant.

Shocked? So were the laypeople and scientists of Milgram’s day. Milgram interviewed 110 psychiatrists, college students, and middle-class adults who were unaware of his results; every one of them predicted that no participant would go all the way, and the maximum shock they expected anyone to deliver was 135 volts.

Milgram’s participants were unusual neither by American nor by global standards. Subsequent studies elsewhere in the USA, as well as in South Africa, Australia, Jordan, Spain, and Austria, have found similar levels of destructive obedience.

In a boon for psychological science and a moral test for the country, the Trump presidency will be the most ecologically valid, large-scale replication of Milgram’s studies ever conducted.

Instead of issuing verbal prods, Trump commands the FBI, Homeland Security, the CIA, and the military. Instead of torturing an obviously innocent victim, he targets African-Americans, women, Mexicans, Muslims, gay people, and other groups who have faced dehumanizing animus since the United States enshrined slavery in the Constitution.

If 67% of us maximally comply with the destructive orders that are sure to flow from the Trump White House, Milgram will be proven scientifically right and we will be proven morally wrong.

Milgram’s studies aren’t all bad news, though. He and other researchers have identified six ways that you can be part of the resistant 33%. Here are the lessons we should learn:

1) Resist early. Almost everyone who goes one-third of the way in the Milgram study goes all the way. If you go along to get along, you’re likely to go much too far.

2) Resist loudly, visibly, and intelligently. In the presence of another resister, others become more inclined to resist as well. People are less susceptible to pressure from authority when they know how such pressure can affect them.

3) Use authority to resist authority. When a knowledgeable second party contradicts destructive orders, almost everyone resists.

4) Focus on the individuality of victims. Learn their names. Memorize their faces. Shake their hands. Hug them. Get close to them both psychologically and physically. Compliance drops by more than half when the participant has to touch the victim.

5) Seek solidarity. The solitary hero may be a romantic ideal, but courage breeds courage. Find other resisters and reinforce one another.

6) Nurse your contempt. Compliance drops by two-thirds when the person giving the orders is perceived as just some schmuck.

America is not special. With hard work and a lot of luck, we may emerge from this struggle ashamed but relieved that the worst did not come to pass. In the face of disaster, we can and must demand this much of ourselves.

draft review of Liao's "Moral Brains"

Matthew Liao is to be commended for editing Moral Brains, a fine collection showcasing truly excellent chapters by, among others, James Woodward, Molly Crockett, and Jana Schaich Borg. In addition to Liao’s detailed, fair-minded, and comprehensive introduction, the book has fourteen chapters. Of these, one is a reprint (Joshua Greene, ch. 4), one a re-articulation of previously published arguments (Walter Sinnott-Armstrong, ch. 14), and one a literature review (Oliveira-Souza, Zahn, and Moll, ch. 9). The rest are original contributions to the rapidly developing field of neuroethics.

This volume confirmed my standing suspicion that progress in neuroethics depends on improving how we conceptualize and operationalize moral phenomena, how we increase the accuracy and precision of methods for measuring such phenomena, and which questions about these phenomena we ask in the first place. Many of the contributors point out that the neuroscience of morality has predominantly employed functional magnetic resonance imaging (fMRI) of voxel-level activation in participants making one-off deontic judgments about hypothetical cases constructed by the experimenters. This approach is liable to result in experimenter (and interpreter) myopia. Judgment is an important component of morality, but so too are perception, attention, creativity, decision-making, action, longitudinal dispositions (e.g., virtues, vices, values, and commitment to principles), reflection on and revision of judgments, and social argumentation. Someone like my father who makes moral judgments when prodded to do so but never reconsiders them, argues sincerely about their adequacy, or acts on the basis of them is a seriously deficient moral agent. Yet much of the current literature seems to presuppose that people like my father are normal members of the moral community. (He’s not. He voted for Trump in Pennsylvania.) The contributions by Jesse Prinz (ch. 1), Jeanette Kennett & Philip Gerrans (ch. 3), Julia Driver (ch. 5), Stephen Darwall (ch. 6), Crockett (ch. 10), and Schaich Borg (ch. 11) are especially trenchant on this point. (In this context, I can’t help but narcissistically recommend my recent monograph – Alfano 2016 – as a framework for better structuring future research in terms of what I contend are the five key dimensions of moral psychology: agency, patiency, sociality, reflexivity, and temporality.)

Beyond fMRI-myopia, the extant neuroethical literature tends to neglect the reverse-inference problem. This problem arises from the fact that the mapping from brain regions to psychological processes is not one-to-one but many-to-many, which means that inferring from “region X showed activation” to “process P occurred” is invalid. As of this writing, the amygdala and insula are each implicated in over ten percent of all neuroimaging studies indexed by www.neurosynth.org.[1] Inferring, as Greene often does, from the activation of one of these areas to a conclusion about emotion generally, or about a discrete emotion such as disgust, is hopeless.
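To make the problem vivid, here is a back-of-the-envelope Bayesian sketch (the numbers are illustrative assumptions of mine, not estimates from the neuroimaging literature). What reverse inference needs is the probability of the process given the activation, which Bayes’ theorem tells us depends on base rates:

\[
P(\text{disgust} \mid \text{insula active}) = \frac{P(\text{insula active} \mid \text{disgust}) \cdot P(\text{disgust})}{P(\text{insula active})} = \frac{0.8 \times 0.1}{0.35} \approx 0.23
\]

Even granting a generous 80% chance that disgust activates the insula, if disgust figures in only 10% of tasks while the insula lights up in 35% of them, then observing insula activation leaves disgust more likely absent than present.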

On top of this, individuating regions as large as the amygdala is unlikely to be sufficiently fine-grained for neuroethicists’ purposes. We need, therefore, to diversify the methods of neuroethics to include approaches with better spatial resolution (e.g., the single-cell resolution made possible by CUBIC – Susaki et al. 2014) and temporal precision (e.g., electroencephalography), as well as methods that account for interactions among systems that operate at different timescales and beyond the central nervous system (e.g., hormones and the vagus nerve).

However, many of the questions we would like to ask seem answerable only by shudderingly unethical research on humans or other primates, such as torturous and medically unnecessary surgery. To get around this problem, Schaich Borg (ch. 11) argues for the use of rodent models (including measures of oxytocin) in the study of violent dispositions towards conspecifics. In the same vein, Oliveira-Souza and colleagues (ch. 9) recommend treating lesions in the human population as natural experiments, and Crockett (ch. 10) advocates studies of, and experimental interventions on, neuromodulators such as serotonin (and, I might add as a friendly amendment, hormones such as testosterone and cortisol; cf. Denson et al. 2013).

Compounding these difficulties is the fact that brain science is expensive and time-consuming. With so many questions to ask and so little human and material capital to devote to them, we are constantly forced to prioritize some questions over others. In light of the crisis of replication and reproducibility that continues to rock psychology and neuroscience, I urge that we cast a skeptical eye on clickbait-generating experimental designs built on hypotheses with near-floor prior probabilities, such as Wheatley & Haidt’s (2005) study of the alleged effects of hypnotically-induced incidental disgust (which receives an absurd amount of attention in this volume and in contemporary moral psychology more broadly). Instead, we should pursue designs built to answer structured, specific questions given the constraints we face.
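The same Bayesian arithmetic shows why near-floor priors should affect our research priorities. Suppose – with deliberately stylized numbers of my own choosing – that a hypothesis has a 2% prior probability of being true, the study has 80% power, and the nominal false-positive rate is 5%. Then:

\[
P(H \mid \text{significant}) = \frac{0.8 \times 0.02}{0.8 \times 0.02 + 0.05 \times 0.98} \approx 0.25
\]

Even under these idealized conditions, roughly three out of four such “significant” findings would be false positives, and the flexible analysis practices implicated in the replication crisis only make the ratio worse.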

We need to stop asking ham-fisted questions like, “Which leads to better moral judgments – reason or emotion?” and, “Does neuroscience support act utilitarianism or a strawman of Kantian deontology?” As Prinz argues, “reasoning and emotion work together in the moral domain,” so we should reject a model like Haidt’s social intuitionism that “dichotomizes the debate between rationalist and sentimentalist” (p. 65). Reasoning can use emotions as inputs, deliver them as outputs, and integrate them into more complex mental states and dispositions. Contrary to what Greene (ch. 4) tells us, emotion is not an on-or-off “alarm bell.” Indeed, Woodward patiently walks through the emerging evidence that the ventromedial prefrontal cortex (VMPFC), which Greene bluntly labels an “emotion” area, is the region in which diverse value inputs from various parts of the brain (including emotional inputs, but also many others) are transformed into a common currency and integrated into a cardinal (not merely categorical or even ordinal) value signal that guides judgment and decision-making.

On reflection, it should have been obvious that distinguishing categorically between reason (understood monolithically) and emotion (also understood monolithically) was a nonstarter. For one thing, “emotion” includes everything from rage and grief to boredom and nostalgia; it is far too broad a category to license generalizations at the psychological or neurological level (Lindquist et al. 2012). In addition, the brain bases of emotions such as fear and disgust often exhibit exquisitely fine-tuned responses to the evaluative properties they track (Mobbs et al. 2010). Even more to the point, in some cases, we have no problem accepting emotions as reasons or, conversely, giving reasons for the emotions we embody. In the one direction, “She feels sad; something must have reminded her of her brother’s death,” is a reasonable inference. In the other direction, there are resentments that I’ve nursed for over a decade, and I’d be happy to give you all of my reasons for doing so if you buy me a few beers.

To illustrate what I have in mind by asking structured, specific questions, consider this one: “If we want to model moral judgment in consequentialist terms, at what level of analysis should valuation attach to consequences?” This question proceeds from well-understood distinctions within consequentialist theory and seeks a non-question-begging answer. Unlike Greene’s question, which pits an arbitrarily-selected version of consequentialism against an arbitrarily-selected version of deontology, this one assumes a good deal of common ground, making it possible to get specific. Greene (ch. 4) asserts that act consequentialism employs the appropriate level of analysis, but Darwall (ch. 6) plausibly contends that the evidence better fits rule consequentialism. I venture to suggest that an even better fit is motive consequentialism (Adams 1976), because negative judgments about pushing the large man off the footbridge are almost certainly driven by intuitions like, “Anyone who could bring herself to shove someone in front of a runaway trolley at a moment’s notice is a terrifying asshole.”

So which questions should neuroethicists be asking? One question that they shouldn’t be asking is, “What does current neuroscience tell us about morality?” In this verdict, I am in agreement with a plurality, perhaps even a majority, of the contributors to Moral Brains. Several of the chapters barely engage with neuroscience (Kennett & Gerrans, ch. 3; Driver, ch. 5; Darwall, ch. 6; Liao, ch. 13). These chapters are well-written, significant contributions to philosophy, but it’s unclear why they were included in a book with this title. To put it another way, it’s unclear to me why the book wasn’t titled ‘Morality and Psychology, with a Dash of Neuroscience’. The difficulty becomes clearer when we note that many of the chapters that do engage in a significant way with neuroscience end up concluding that the brain doesn’t tell us anything we couldn’t have learned in some other way from psychological or behavioral methods (Prinz, Woodward, Greene, Kahane). Perhaps we should be asking instead, “What do morality and moral psychology tell us about neuroscience?”

This reversal of explanatory direction presupposes that we have a reasonably coherent conception of what morality is or does. Sinnott-Armstrong argues in the closing chapter of the volume, however, that we lack such a conception because morality is fragmented at the level of content, brain basis, and function. I conclude this review by offering a rejoinder related to function in particular. My suggestion is that the function of morality is to organize communities (understood more or less broadly) in pursuing, promoting, preserving, and protecting what matters to them via cooperation. This conception of morality is, of necessity, vague and parameterized on multiple dimensions, but it is specific enough to gain significant empirical support from cross-cultural studies of folk axiology in both psychology (Alfano 2016, ch. 5) and anthropology (Curry et al. submitted). If this is on the right track, then the considerations that members of communities can and should offer each other (what High-Church meta-ethicists call ‘moral reasons’) are considerations that favor or disfavor the pursuit, promotion, preservation, or protection of shared values, as well as meta-reasons to modify the parameters or the ranking of values. What counts as a consideration, who counts as a member of the community, which values matter, and how they are weighed – these are questions to be answered, as Amartya Sen (1985) persuasively argued, by establishing informational constraints that point to all and only the variables that should be considered by an adequate moral theory. Indeed, some of the most sophisticated arguments in Moral Brains turn on such informational constraints (e.g., Greene pp. 170-2; Kahane pp. 294-5).

This book should interest philosophers working in the areas of neuroethics, moral psychology, normative ethics, research ethics, philosophy of psychology, philosophy of mind, and decision-making. It should also grab the attention of psychologists and neuroscientists working in ethics-adjacent and ethics-relevant areas. It might work as a textbook for an advanced undergraduate seminar on neuroethics, and it would certainly be appropriate for a graduate seminar on the topic. (And it has a very detailed index – a rarity these days!)


References:

Adams, R. M. (1976). Motive utilitarianism. The Journal of Philosophy, 73(14): 467-81.

Alfano, M. (2016). Moral Psychology: An Introduction. London: Polity.

Curry, O. S., Mullins, D. A., & Whitehouse, H. (submitted). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology.

Denson, T., Mehta, P., & Tan, D. (2013). Endogenous testosterone and cortisol jointly influence reactive aggression in women. Psychoneuroendocrinology, 38(3): 416-24.

Lindquist, K., Wager, T., Kober, H., Bliss-Moreau, E. & Feldman Barrett, L. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35: 121-202.

Mobbs, D., Yu, R., Rowe, J., Eich, H., Feldman-Hall, O., & Dalgleish, T. (2010). Neural activity associated with monitoring the oscillating threat value of a tarantula. Proceedings of the National Academy of Sciences, 107(47): 20582-6.

Sen, A. (1985). Well-being, agency and freedom: The Dewey Lectures 1984. The Journal of Philosophy, 82(4): 169-221.

Susaki, E., Tainaka, K., Perrin, D., Kishino, G., Tawara, T., Watanabe, T., Yokoyama, C., Onoe, H., Eguchi, M., Yamaguchi, S., Abe, T., Kiyonari, H., Shimizu, Y., Miyawaki, A., Yokota, H., & Ueda, H. (2014). Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis. Cell, 157(3): 726-39.

Wheatley, T. & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10): 780-784.


[1] Accessed 3 December 2016.

epistemic emotions and intellectual virtues

It’s uncontroversial to say that many virtues are emotional dispositions, even if they involve behavior in addition to emotion. Intellectual courage disposes its bearer to appropriate fear and confidence in matters epistemic. Alfano (2016b, chapter 4) suggests that, because we are able to individuate emotions more clearly than virtues, it might be helpful to index virtues to the emotions they govern. If this is on the right track, then intellectual virtues could be distinguished and structured by cataloguing what Morton (2010; see also Morton 2015 and Stocker 2012) calls epistemic emotions. These include such states as curiosity, fascination, intrigue, hope, trust, distrust, mistrust, surprise, doubt, skepticism, boredom, puzzlement, confusion, wonder, awe, faith, and epistemic angst. Note that some of these emotions are referred to by words that are also used to refer to their controlling virtues. As Morton says, “the words often do triple duty. Character links to virtue links to emotion” (2010).


Virtue epistemology (VE) can benefit from theorizing about epistemic emotions in at least three ways. First, such theorizing furnishes practitioners with a sort of “to-do list”: many of the virtues related to the emotions mentioned in the previous paragraph are unexplored or underexplored. These virtues are ripe for the picking. Second, the lens of epistemic emotion helps to make sense of intellectual virtues as dispositions to motivated inquiry rather than just static belief. Emotions are, after all, motivational states, and epistemic emotions in particular direct us to seek confirmation, disconfirmation, and so on. This point is related to but more specific than Michael Brady’s (2013, 92) idea that emotions in general motivate inquiry because they “capture and consume” attention, thereby motivating inquiry into their own eliciting conditions. For instance, fear captures and consumes the attention of the fearful person, directing him to find and understand the (potential) threat or danger.


Finally, epistemic emotions help to make sense of the motivations and practices of scientists. For example, Thagard (2002) mined James Watson’s (1969) autobiographical account of the discovery of the structure of DNA for emotion terms; the most common related to interest and the joy of discovery, followed by fear, hope, anger, distress, aesthetic appreciation, and surprise. In addition, the literature on the demarcation between science and pseudo-science, along with the literature on scientific revolutions, is peppered with the language of emotion – especially epistemic emotion. Popper (1963) describes scientists’ attitude towards their hypotheses as one of “hope” rather than belief. He distinguishes science from pseudoscience by sneering at the “faith” characteristic of the latter and praising the “doubt” and openness to testing of the former. He argues that the “special problem under investigation” and the scientist’s “theoretical interests” determine her point of view. Lakatos (1978) contrasts scientific knowledge with theological certainty that “must be beyond doubt.” Kuhn (1962) says that the attitude scientists take towards their paradigms is one not only of belief but also of “trust.” He claims that scientists received the discovery of x-rays “not only with surprise but with shock […] though they could not doubt the evidence, [they] were clearly staggered by it.”


In times of crisis, says Kuhn, scientists are plagued by “malaise.” Such malaise has recently become most evident in social psychology’s replication and reproducibility crisis. For example, two pre-registered replications of the so-called “ego-depletion effect” recently found that, despite decades of positive studies and successful meta-analyses, there appears to be no such effect (Hagger et al. 2016; Lurquin et al. 2016). A science journalist writing for Slate magazine described these findings as “not just worrying” but “terrifying,” because they suggest that an entire field of research is “suspicious” (Engber 2016). The article quotes Evan Carter, one of the young scientists in the thick of the crisis, saying, “All of a sudden it felt like everything was crumbling. I basically lost my compass. Normally I could say, all right there have been 100 published studies on this, so I can feel good about it, I can feel confident. And then that just went away.” Engber goes on to lament that “All the old methods are in doubt,” even meta-analysis, then quotes the prominent social psychologist Michael Inzlicht saying, “Meta-analyses are fucked.” On his own blog, Inzlicht (2016) writes that, despite or perhaps because of the fact that he is “in love with social psychology,” nevertheless “I have so many feelings about the situation we’re in, and sometimes the weight of it breaks my heart. […] it is only when we feel badly, when we acknowledge and, yes, grieve for yesterday, that we can allow for a better tomorrow.” He goes on to say, “This is flat-out scary,” and, “I’m in a dark place. I feel like the ground is moving from underneath me and I no longer know what is real and what is not.” Practitioners of VE may be in a position to offer aid and comfort to afflicted scientists, or at least an accurate description of what ails them.