Draft Review of Liao's Moral Brains

Matthew Liao is to be commended for editing Moral Brains, a fine collection showcasing truly excellent chapters by, among others, James Woodward, Molly Crockett, and Jana Schaich Borg. In addition to Liao’s detailed, fair-minded, and comprehensive introduction, the book has fourteen chapters. Of these, one is a reprint (Joshua Greene ch. 4), one a re-articulation of previously published arguments (Walter Sinnott-Armstrong ch. 14), and one a literature review (Oliveira-Souza, Zahn, and Moll ch. 9). The rest are original contributions to the rapidly developing field of neuroethics.

This volume confirmed my standing suspicion that progress in neuroethics depends on improving how we conceptualize and operationalize moral phenomena, how we increase the accuracy and precision of methods for measuring such phenomena, and which questions about these phenomena we ask in the first place. Many of the contributors point out that the neuroscience of morality has predominantly employed functional magnetic resonance imaging (fMRI) of voxel-level activation in participants making one-off deontic judgments about hypothetical cases constructed by the experimenters. This approach is liable to result in experimenter (and interpreter) myopia. Judgment is an important component of morality, but so too are perception, attention, creativity, decision-making, action, longitudinal dispositions (e.g., virtues, vices, values, and commitment to principles), reflection on and revision of judgments, and social argumentation. Someone like my father who makes moral judgments when prodded to do so but never reconsiders them, argues sincerely about their adequacy, or acts on the basis of them is a seriously deficient moral agent. Yet much of the current literature seems to presuppose that people like my father are normal members of the moral community. (He’s not. He voted for Trump in Pennsylvania.) The contributions by Jesse Prinz (ch. 1), Jeanette Kennett & Philip Gerrans (ch. 3), Julia Driver (ch. 5), Stephen Darwall (ch. 6), Crockett (ch. 10), and Schaich Borg (ch. 11) are especially trenchant on this point. (In this context, I can’t help but narcissistically recommend my recent monograph – Alfano 2016 – as a framework for better structuring future research in terms of what I contend are the five key dimensions of moral psychology: agency, patiency, sociality, reflexivity, and temporality.)

Beyond fMRI-myopia, the extant neuroethical literature tends to neglect the reverse-inference problem. This problem arises from the fact that the mapping from brain regions to psychological processes is not one-to-one but many-to-many, which means that inferring from “region X showed activation” to “process P occurred” is invalid. As of this writing, the amygdala and insula were implicated in over ten percent of all neuroimaging studies indexed by www.neurosynth.org.[1] Inferring, as Greene often does, from the activation of one of these areas to a conclusion about emotion generally, or about a discrete emotion such as disgust, is hopeless.
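
A back-of-the-envelope Bayesian calculation makes the point vivid. The numbers below are invented for illustration (they are not Neurosynth's actual estimates): even when the forward inference from process to activation is strong, a promiscuous region's high base rate of activation leaves the reverse inference weak.

```python
# Reverse inference via Bayes' rule, with made-up illustrative numbers.
def posterior(p_activation_given_process, p_process, p_activation):
    """P(process | activation) = P(activation | process) * P(process) / P(activation)."""
    return p_activation_given_process * p_process / p_activation

# Suppose disgust almost always activates the amygdala (forward inference: 0.9),
# that a randomly chosen task engages disgust 5% of the time, and that the
# amygdala shows up in 10% of studies regardless of task (its base rate).
p = posterior(0.9, 0.05, 0.10)
print(round(p, 2))  # ~0.45: even a strong forward inference leaves the
                    # reverse inference worse than a coin flip.
```

The promiscuity of the amygdala and insula enters precisely through the denominator: the higher a region's base rate of activation across all tasks, the less its activation tells us about any particular process.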

On top of this, individuating regions as large as the amygdala is unlikely to be sufficiently fine-grained for neuroethicists’ purposes. We need, therefore, to diversify the methods of neuroethics to include approaches that have better spatial resolution (e.g., the single-cell resolution made possible by CUBIC – Susaki et al. 2014) and temporal precision (e.g., electroencephalogram), as well as methods that account for interactions among systems that operate at different timescales and beyond the central nervous system (e.g., hormones and the vagus nerve).

However, many of the questions we would like to ask seem answerable only by shudderingly unethical research on humans or other primates, such as torturous and medically unnecessary surgery. To get around this problem, Schaich Borg (ch. 11) argues for the use of rodent models (including measures of oxytocin) in the study of violent dispositions towards conspecifics. In the same vein, Oliveira-Souza et al. (ch. 9) recommend using lesions in the human population as natural experiments, and Crockett advocates studies of, and experimental interventions on, neurochemical systems related to serotonin (and, I might add as a friendly amendment, testosterone and cortisol; cf. Denson et al. 2013).

Compounding these difficulties is the fact that brain science is expensive and time-consuming. With so many questions to ask and so little human and material capital to devote to them, we are constantly forced to prioritize some questions over others. In light of the crisis of replication and reproducibility that continues to rock psychology and neuroscience, I urge that we cast a skeptical eye on clickbait-generating experimental designs built on hypotheses with near-floor prior probabilities, such as Wheatley & Haidt’s (2005) study of the alleged effects of hypnotically induced incidental disgust (which receives an absurd amount of attention in this volume and in contemporary moral psychology more broadly). Instead, we should pursue designs built to answer structured, specific questions given the constraints we face.

We need to stop asking ham-fisted questions like, “Which leads to better moral judgments – reason or emotion?” and, “Does neuroscience support act utilitarianism or a straw man of Kantian deontology?” As Prinz argues, “reasoning and emotion work together in the moral domain,” so we should reject a model like Haidt’s social intuitionism that “dichotomizes the debate between rationalist and sentimentalist” (p. 65). Reasoning can use emotions as inputs, deliver them as outputs, and integrate them into more complex mental states and dispositions. Contrary to what Greene (ch. 4) tells us, emotion is not an on-or-off “alarm bell.” Indeed, Woodward patiently walks through the emerging evidence that the ventromedial prefrontal cortex (VMPFC), which Greene bluntly labels an “emotion” area, is the region in which diverse value inputs from various parts of the brain (including emotional inputs, but also many others) are transformed into a common currency and integrated into a cardinal (not merely categorical or even ordinal) value signal that guides judgment and decision-making.

On reflection, it should have been obvious that distinguishing categorically between reason (understood monolithically) and emotion (also understood monolithically) was a nonstarter. For one thing, “emotion” includes everything from rage and grief to boredom and nostalgia; it is far too broad a category to license generalizations at the psychological or neurological level (Lindquist et al. 2012). In addition, the brain bases of emotions such as fear and disgust often exhibit exquisitely fine-tuned responses to the evaluative properties they track (Mobbs et al. 2010). Even more to the point, in some cases, we have no problem accepting emotions as reasons or, conversely, giving reasons for the emotions we embody. In the one direction, “She feels sad; something must have reminded her of her brother’s death,” is a reasonable inference. In the other direction, there are resentments that I’ve nursed for over a decade, and I’d be happy to give you all of my reasons for doing so if you buy me a few beers.

To illustrate what I have in mind by asking structured, specific questions, consider this one: “If we want to model moral judgment in consequentialist terms, at what level of analysis should valuation attach to consequences?” This question starts from well-understood distinctions within consequentialist theory and seeks a non-question-begging answer. Unlike Greene’s question, which pits an arbitrarily selected version of consequentialism against an arbitrarily selected version of deontology, this one assumes a good deal of common ground, making it possible to get specific. Greene (ch. 4) asserts that act consequentialism employs the appropriate level of analysis, but Darwall (ch. 6) plausibly contends that the evidence better fits rule consequentialism. I venture to suggest that an even better fit is motive consequentialism (Adams 1976) because negative judgments about pushing the large man off the footbridge are almost certainly driven by intuitions like, “Anyone who could bring herself to shove someone in front of a runaway trolley at a moment’s notice is a terrifying asshole.”

So which questions should neuroethicists be asking? One question that they shouldn’t be asking is, “What does current neuroscience tell us about morality?” In this verdict, I am in agreement with a plurality, perhaps even a majority, of the contributors to Moral Brains. Several of the chapters barely engage with neuroscience (Kennett & Gerrans, Driver, Darwall, Liao ch. 13). These chapters are well-written, significant contributions to philosophy, but it’s unclear why they were included in a book with this title. To put it another way, it’s unclear to me why the book wasn’t titled ‘Morality and Psychology, with a Dash of Neuroscience’. The puzzle deepens when we note that many of the chapters that do engage in a significant way with neuroscience end up concluding that the brain doesn’t tell us anything that we couldn’t have learned in some other way from psychological or behavioral methods (Prinz, Woodward, Greene, Kahane). Perhaps we should instead be asking, “What do morality and moral psychology tell us about neuroscience?”

This reversal of explanatory direction presupposes that we have a reasonably coherent conception of what morality is or does. Sinnott-Armstrong argues in the closing chapter of the volume, however, that we lack such a conception because morality is fragmented at the level of content, brain basis, and function. I conclude this review by offering a rejoinder related to function in particular. My suggestion is that the function of morality is to organize communities (understood more or less broadly) in pursuing, promoting, preserving, and protecting what matters to them via cooperation. This conception of morality is, of necessity, vague and parameterized on multiple dimensions, but it is specific enough to gain significant empirical support from cross-cultural studies of folk axiology in both psychology (Alfano 2016, ch. 5) and anthropology (Curry et al. submitted). If this is on the right track, then the considerations that members of communities can and should offer each other (what High-Church meta-ethicists call ‘moral reasons’) are considerations that favor or disfavor the pursuit, promotion, preservation, or protection of shared values, as well as meta-reasons to modify the parameters or the ranking of values. What counts as a consideration, who counts as a member of the community, which values matter, and how they are weighed – these are questions to be answered, as Amartya Sen (1985) persuasively argued, by establishing informational constraints that point to all and only the variables that should be considered by an adequate moral theory. Indeed, some of the most sophisticated arguments in Moral Brains turn on such informational constraints (e.g., Greene pp. 170-2; Kahane pp. 294-5).

This book should interest philosophers working in the areas of neuroethics, moral psychology, normative ethics, research ethics, philosophy of psychology, philosophy of mind, and decision-making. It should also grab the attention of psychologists and neuroscientists working in ethics-adjacent and ethics-relevant areas. It might work as a textbook for an advanced undergraduate seminar on neuroethics, and it would certainly be appropriate for a graduate seminar on this topic. (And it has a very detailed index – a rarity these days!)

 

References:

Adams, R. M. (1976). Motive utilitarianism. The Journal of Philosophy, 73(14): 467-81.

Alfano, M. (2016). Moral Psychology: An Introduction. London: Polity.

Curry, O. S., Mullins, D. A., & Whitehouse, H. (submitted). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology.

Denson, T., Mehta, P., & Tan, D. (2013). Endogenous testosterone and cortisol jointly influence reactive aggression in women. Psychoneuroendocrinology, 38(3): 416-24.

Lindquist, K., Wager, T., Kober, H., Bliss-Moreau, E. & Feldman Barrett, L. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35: 121-202.

Mobbs, D., Yu, R., Rowe, J., Eich, H., Feldman-Hall, O., & Dalgleish, T. (2010). Neural activity associated with monitoring the oscillating threat value of a tarantula. Proceedings of the National Academy of Sciences, 107(47): 20582-6.

Sen, A. (1985). Well-being, agency and freedom: The Dewey Lectures 1984. The Journal of Philosophy, 82(4): 169-221.

Susaki, E., Tainaka, K., Perrin, D., Kishino, G., Tawara, T., Watanabe, T., Yokoyama, C., Onoe, H., Eguchi, M., Yamaguchi, S., Abe, T., Kiyonari, H., Shimizu, Y., Miyawaki, A., Yokota, H., Ueda, H. (2014). Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis. Cell, 157(3): 726-39.

Wheatley, T. & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10): 780-4.

 

[1] Accessed 3 December 2016.

Epistemic Emotions and Intellectual Virtues

It’s uncontroversial to say that many virtues are emotional dispositions, even if they involve behavior in addition to emotion. Intellectual courage disposes its bearer to appropriate fear and confidence in matters epistemic. Alfano (2016b, chapter 4) suggests that, because we are able to individuate emotions more clearly than virtues, it might be helpful to index virtues to the emotions they govern. If this is on the right track, then intellectual virtues could be distinguished and structured by cataloguing what Morton (2010; see also Morton 2015 and Stocker 2012) calls epistemic emotions. These include such states as curiosity, fascination, intrigue, hope, trust, distrust, mistrust, surprise, doubt, skepticism, boredom, puzzlement, confusion, wonder, awe, faith, and epistemic angst. Note that some of these emotions are referred to by words that are also used to refer to their controlling virtues. As Morton says, “the words often do triple duty. Character links to virtue links to emotion” (2010).

 

Virtue epistemology (VE) can benefit from theorizing about epistemic emotions in at least three ways. First, theorizing intellectual virtues via epistemic emotions furnishes practitioners with a sort of “to-do list”: many of the virtues related to the emotions mentioned in the previous paragraph are unexplored or underexplored. These virtues are ripe for the picking. Second, the lens of epistemic emotion helps to make sense of intellectual virtues as dispositions to motivated inquiry rather than just static belief. Emotions are, after all, motivational states, and epistemic emotions in particular direct us to seek confirmation, disconfirmation, and so on. This point is related to but more specific than Michael Brady’s (2013, 92) idea that emotions in general motivate inquiry because they “capture and consume” attention, thereby motivating inquiry into their own eliciting conditions. For instance, fear captures and consumes the attention of the fearful person, directing him to find and understand the (potential) threat or danger.

 

Finally, epistemic emotions help to make sense of the motivations and practices of scientists. For example, Thagard (2002) mined James Watson’s (1969) autobiographical account of the discovery of the structure of DNA for emotion terms; the most common related to interest and the joy of discovery, followed by fear, hope, anger, distress, aesthetic appreciation, and surprise. In addition, the literature on the demarcation between science and pseudo-science, along with the literature on scientific revolutions, is peppered with the language of emotion – especially epistemic emotion. Popper (1963) talks of scientists’ attitudes to their hypotheses as one of “hope” rather than belief. He distinguishes science from pseudoscience by sneering at the “faith” characteristic of the latter and praising the “doubt” and openness to testing of the former. He argues that the “special problem under investigation” and the scientist’s “theoretical interests” determine her point of view. Lakatos (1978) contrasts scientific knowledge with theological certainty that “must be beyond doubt.” Kuhn (1962) says that the attitude scientists have towards their paradigms is one of not only belief but also “trust.” He claims that scientists received the discovery of x-rays “not only with surprise but with shock […] though they could not doubt the evidence, [they] were clearly staggered by it.”
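
Thagard's method is, at bottom, a lexicon-based frequency count. Here is a toy sketch of that technique; the passage and the emotion lexicon below are invented for illustration (they are not Watson's actual words or Thagard's actual term list):

```python
# Toy lexicon-based mining of emotion terms from a text, after the style
# of Thagard's analysis. Lexicon and passage are hypothetical examples.
from collections import Counter
import re

EMOTION_TERMS = {"interest", "joy", "fear", "hope", "anger",
                 "distress", "surprise", "awe"}

def emotion_profile(text):
    """Count occurrences of lexicon terms in a text, case-insensitively."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in EMOTION_TERMS)

passage = ("My joy at the discovery was tempered only by fear that we "
           "had blundered, and by hope that the data would bear us out. "
           "Joy, again, when they did.")
print(emotion_profile(passage).most_common())
```

Even this crude approach surfaces the kind of profile Thagard reports: terms of joy and interest dominating, shadowed by fear and hope.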

 

In times of crisis, says Kuhn, scientists are plagued by “malaise.” Such malaise has recently become most evident in social psychology’s replication and reproducibility crisis. For example, two pre-registered replications of the so-called “ego-depletion effect” recently found that, despite decades of positive studies and successful meta-analyses, there appears to be no such effect (Hagger et al. 2016; Lurquin et al. 2016). A science journalist writing for Slate magazine described these findings as “not just worrying” but “terrifying,” because they suggest that an entire field of research is “suspicious” (Engber 2016). The article quotes Evan Carter, one of the young scientists in the thick of the crisis, saying, “All of a sudden it felt like everything was crumbling. I basically lost my compass. Normally I could say, all right there have been 100 published studies on this, so I can feel good about it, I can feel confident. And then that just went away.” Engber goes on to lament that, “All the old methods are in doubt,” even meta-analysis, then quotes the prominent social psychologist Michael Inzlicht saying “Meta-analyses are fucked.” On his own blog, Inzlicht (2016) writes that, despite or perhaps because of the fact that he is “in love with social psychology,” nevertheless “I have so many feelings about the situation we’re in, and sometimes the weight of it breaks my heart. […] it is only when we feel badly, when we acknowledge and, yes, grieve for yesterday, that we can allow for a better tomorrow.” He goes on to say, “This is flat-out scary,” and, “I’m in a dark place. I feel like the ground is moving from underneath me and I no longer know what is real and what is not.” Practitioners of VE may be in a position to offer aid and comfort to afflicted scientists, or at least an accurate description of what ails them.

 

Draft Review of Luetge, Rusch, & Uhl, Experimental Ethics

It would be unkind but not inaccurate to say that most experimental philosophy is just psychology with worse methods and better theories. In Experimental Ethics: Towards an Empirical Moral Philosophy, Christoph Luetge, Hannes Rusch, and Matthias Uhl set out to make this comparison less invidious and more flattering. Their book has sixteen chapters, organized into five sections and bookended by the editors’ own introduction and prospectus. Contributors hail from four countries (Germany, USA, Spain, and the United Kingdom) and five disciplines (philosophy, psychology, cognitive science, economics, and sociology). While the chapters are of mixed quality and originality, there are several fine contributions to the field. These especially include Stephan Wolf and Alexander Lenger’s sophisticated attempt to operationalize the Rawlsian notion of a veil of ignorance, Nina Strohminger et al.’s survey of the methods available to experimental ethicists for studying implicit morality, Fernando Aguiar et al.’s exploration of the possibility of operationalizing reflective equilibrium in the lab, and Nikil Mukerji’s careful defusing of three debunking arguments about the reliability of philosophical intuitions.

Part I introduces experimental philosophy as a promising but problematic methodology with several precedents in the history of philosophy and related fields. It begins with a reprint of Kwame Anthony Appiah’s 2007 presidential address to the Eastern Division of the American Philosophical Association, followed by chapters authored by two of the editors (Luetge and Rusch, the latter of whom co-authors with Niklas Dworazik). Readers already familiar with the field should skip to the next section, but newcomers will find these chapters a helpful introduction.

The five chapters in Part II are case studies for experimental ethics. In chapter 5, Eric Schwitzgebel summarizes his research program on the moral behavior of ethicists; in a somewhat dismal series of papers he has shown that, with a few exceptions, professional ethicists are indistinguishable from other philosophers and professors. While such results might lead to skepticism about the effects of studying ethics, this is not the conclusion Schwitzgebel draws (indeed, to get evidence for such skepticism, one would have to randomize people to philosophical specializations and careers). Instead, he reflects on the tension between the epistemic and ethical values associated with the study of philosophy, pointing out that pressure to live in accordance with the values one advocates professionally may lead to motivated reasoning that obscures the moral truth. In chapter 6, Verena Wagner offers an interpretation of some studies of the “Knobe effect.” Unfortunately, her familiarity with this literature is limited and out of date, leading her to focus on the red herring of blameworthiness. For better-informed interpretations of this literature, see Robinson et al. (2015) and Sauer (2014). In chapter 7, Ezio di Nucci summarizes some of his research on the trolley problem. This early research is a good starting point for the methodological refinements suggested in later chapters. For instance, because di Nucci employed categorical variables for both his predictors and his outcomes, he was only able to run an underpowered chi-squared test of independence; with richer measures, his plausible hypothesis about the doctrine of double effect could be better tested.
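
To see the methodological point concretely, consider a minimal sketch of the chi-squared test of independence on a 2x2 table. The cell counts below are invented for illustration (they are not di Nucci's data): a visible trend in a small sample can still fall well short of significance.

```python
# Pearson chi-squared test of independence on a 2x2 table, stdlib only.
# Cell counts are hypothetical, chosen to illustrate low power.
def chi2_2x2(a, b, c, d):
    """Chi-squared statistic for the contingency table [[a, b], [c, d]] (df = 1)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        expected = r * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# 40 participants: condition (harm as means vs. as side effect) crossed
# with verdict (permissible vs. impermissible). 14/20 vs. 9/20 looks like
# a trend, but the statistic stays below the 0.05 cutoff for df = 1 (3.841).
stat = chi2_2x2(14, 6, 9, 11)
print(round(stat, 3), stat > 3.841)
```

With continuous or ordinal outcome measures (e.g., rated permissibility) the same hypothesis could be tested with far more power from the same sample, which is the force of the refinements suggested in later chapters.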
In chapter 8, Wolf and Lenger describe a two-stage experiment on distributive justice, which suggests that “people tend to equalize income as suggested by Rawls’s difference principle” when behind an experimental veil of ignorance, but that when “the veil is lifted, individuals egoistically choose in line with their post-veil interests” (p. 95). They are aware of how denuded of personality people are supposed to be behind the veil of ignorance and admit that it is impossible to produce such conditions in the lab. I heartily endorse their use of monetary incentives and measurement of behavior (rather than the verbal responses to hypothetical scenarios more typically recorded by experimental ethicists), though I worry that even with these methodological improvements the degree to which they can approximate the original position is very limited. In chapter 9, Ulrich Frey explores various potential explanations for the disconnect between people’s stated values related to environmental protection and their behavior – a pressing question that is also relevant to the debate between motivational internalists and externalists.

Part III contains four chapters on methodology in experimental ethics, the crown jewel of which is chapter 10 by Nina Strohminger, Brendan Caldwell, Daryl Cameron, Jana Schaich Borg, and Walter Sinnott-Armstrong. They explore the strengths and weaknesses of three implicit judgment tasks (the Implicit Association Test, the affect misattribution procedure, and the process dissociation procedure), as well as eye-tracking and functional magnetic resonance imaging (fMRI). While there is some value to asking people to read, think about, and respond explicitly to questions about hypothetical scenarios as experimental ethicists have tended to do, such research is limited by people’s introspective awareness of and willingness to sincerely express their judgments and preferences. Implicit measures provide a window into the unconscious and the intentionally obscured aspects of moral cognition and behavior. Naturally, none of these measures is perfect either, so Strohminger et al. conclude by calling for “a multi-method, integrated approach” (p. 146) that exploits the advantages of each method to make up for the weaknesses of others. In chapter 11, Martin Bruder and Attila Tanyi suggest a within-subjects methodology meant to prompt participants to “subject their initial responses to a thorough ‘test of reflection’” (p. 167), which would help to distinguish genuine intuitions from mere hunches. While I am unconvinced that the method they propose does the trick, I agree that it would be a significant methodological improvement to include measurements before and after reflection (and to distinguish between solitary reflection and dialogic reflection). In chapter 12, Andreas Bunge and Alexander Skulmowski explore the methodology of designing institutions in such a way that people find it easier to do what they do, or would, reflectively judge to be the right thing – an approach that I have dubbed moral technology (Alfano 2013).
As they put it, “Carefully constructed institutions avoid creating conflicts between multiple psychological systems of moral judgment” (p. 181). They also emphasize “how little resemblance filling out a survey form involving more or less contrived scenarios bears to making a moral judgment in everyday life” (p. 184) and advocate more ecologically valid methods, including “immersive virtual environment technology” (p. 186). In chapter 13, Fernando Aguiar, Antonio Gaitán, and Blanca Rodríguez-López argue that experimental ethicists should conduct behavioral studies like those familiar in experimental economics. These studies employ tailored scripts, repeated trials (to get within-subjects data and allow for learning), and financial incentives (to promote engagement and sincerity).

Part IV includes two critical reflections on the state of experimental ethics. The first, by Jacob Rosenthal, is an ill-informed dud, but chapter 15 by Mukerji does an excellent job of charitably engaging with and defusing three empirically-motivated arguments against the use of the method of cases to test and refine moral principles: “the argument from disagreement, the argument from framing effects, and debunking explanations” (p. 227). Regarding disagreement, Mukerji helpfully points out that studies to date have tended to focus on casuistic controversies, not easy cases, and that “there are many cases on which we should reasonably expect no disagreement at all (even among philosophers)” (p. 232). Regarding framing effects, Mukerji objects that the existence of these may indicate that participants are responsive to the reasons embedded in hypothetical scenarios – just not to all of them at once. Indeed, I contend that framing effects show that people do respond to reasons, including moral reasons (Koralus & Alfano forthcoming). Regarding debunking arguments that point to the emotional sources of moral judgments, Mukerji plausibly denies that emotions are ipso facto unreasonable. Cognitive theories of moral emotions, such as Roeser’s (2011) affectual intuitionism, support this contention.

Part V includes two chapters and a brief prospectus by the editors. I mentioned above that certain questions about the effects of training in professional ethics could only be answered by randomizing people to philosophical specialization. Julian Müller’s proposal in chapter 16 is even more ambitious. He argues that certain questions in empirical ethics can only be answered by conducting large-scale social experiments, such as Startup Cities. “A Startup City is essentially a newly founded city that is part of a larger entity like a modern democratic state or a Union like the EU but has considerably more freedom to test different socio-economic policy schemes, while adhering to some minimal standard of human rights and free exit” (p. 261). This is a truly radical proposal, though one with gold-standard precedents such as Plato’s attempt to create the Republic in Syracuse and Plotinus’ plan to found Platonopolis. The editors, drawing on the previous chapters, conclude the book by identifying what they consider the five most pressing problems in contemporary experimental ethics: 1) inadequate methodological rigor, 2) lack of clarity regarding the relation between experimental philosophy and armchair philosophy, 3) lack of acceptance of experimental methods among some philosophers, 4) over-reliance on verbal responses to hypothetical scenarios, 5) lack of integration with other disciplines. I agree with all five, especially items 1 and 4. I would add as item 6 the fact that experimental philosophy continues to have a “woman problem.” Of the twenty-six contributors to this volume, only four are women (15%). This is in line with the problematic gender ratio identified by Peggy DesAutels (2015) in other collections of experimental philosophy. Given that two of the four best papers in this volume have female authors, the quality of work done by women in the field of experimental ethics cannot account for the disparity. 
If future editors of volumes in experimental philosophy do not make a conscious effort to promote gender diversity, experimental philosophy may improve its methods, but it will still be unkind but not inaccurate to say that it is just social science with less diversity and better theories.

 

References

Alfano, M. (2013). Character as Moral Fiction. Cambridge University Press.

DesAutels, P. (2015). [Review of J. Knobe & S. Nichols (eds.), Experimental Philosophy, Volume 2.] Notre Dame Philosophical Reviews.

Koralus, P. & Alfano, M. (forthcoming). Reasons-based moral judgment and the erotetic theory. In J.-F. Bonnefon & B. Trémolière (eds.), Moral Inference. Psychology Press.

Robinson, B., Stey, P., & Alfano, M. (2015). Reversing the side-effect effect: The power of salient norms. Philosophical Studies, 172(1): 177-206.

Roeser, S. (2011). Moral Emotions and Intuitions. Basingstoke: Palgrave.

Sauer, H. (2014). It’s the Knobe effect, stupid! Review of Philosophy and Psychology, 5(4): 485-503.