A semantic-network approach to the history of philosophy, Or, What does Nietzsche talk about when he talks about emotion?

You might find this map a bit surprising. When we teach Nietzsche to our students, we tend to focus on resentment, leaving out most of the other emotions that he actually talks about. My hunch is that this is because most translations of Nietzsche into English leave ‘ressentiment’ in the French and always italicize it, despite the fact that Nietzsche only italicizes it twice and only refers to it in a couple dozen passages. This distracts readers and leads them to fetishize resentment and ignore the other emotions.

Draft review of Katsafanas's "The Nietzschean Self"

I'm working on a review of Paul Katsafanas's The Nietzschean Self: Moral Psychology, Agency, and the Unconscious. Here's a draft. It'll have to be cut down by about 50%, but I figured some folks might like to see the extended version.

Philosophical engagement with Nietzsche in the English-speaking world began in earnest in the 1970s with Walter Kaufmann’s translations and commentaries. It matured in spurts, with significant book-length contributions by Alexander Nehamas (Nietzsche: Life as Literature, Harvard University Press, 1985), Maudemarie Clark (Nietzsche on Truth and Philosophy, Cambridge University Press, 1990), John Richardson (Nietzsche’s System, Oxford University Press, 1996), and Bernard Reginster (The Affirmation of Life, Harvard University Press, 2006). Paul Katsafanas’s The Nietzschean Self: Moral Psychology, Agency, and the Unconscious marks a consolidation of half a century of scholarship. Whereas previous commentators have often been mesmerized and distracted by the words and phrases Nietzsche italicizes (amor fati, ressentiment, pathos of distance), Katsafanas focuses on the less flashy but more essential components of Nietzsche’s moral psychology: consciousness and the unconscious, drives, values, willing, the self, and freedom. The book is organized into eight main chapters bookended by a succinct introduction and a comparison with the moral psychologies of Kant, Hume, and Aristotle. Along the way, Katsafanas engages illuminatingly with both contemporary philosophical work (including both commentary on Nietzsche and non-historical work in philosophy of mind, philosophy of language, and moral psychology) and Nietzsche’s intellectual predecessors and successors (especially Spinoza, Schopenhauer, Kant, Schiller, Hegel, and Freud). In this review, I summarize the main arguments of the book and offer some criticism.

In two chapters on consciousness and the unconscious, Katsafanas argues that Nietzsche aligns the distinction between conscious and unconscious with the distinction between conceptual content and nonconceptual content. This initially puzzling equation stems from Nietzsche’s account of language and the way that linguistic communication feeds back on the thoughts it expresses. According to Nietzsche, consciousness arose from the social need to communicate our thoughts, desires, feelings, and emotions. Mental states that did not need to be communicated remained unconscious, while those that demanded communication had to be articulated and regimented in mutually understandable tokens. Such regimentation tends to simplify the content of the now-conscious mental states and, in some cases, may falsify them by forcing their jagged contours into Procrustean categories. If this is right, conscious thought must be conceptually articulated, but — contrary to Katsafanas’s interpretation — unconscious thought may but needn’t be conceptually articulated. Imagine, for example, someone who says (and therefore conceptually articulates the thought that) the tablecloth is blue. An hour later, she is no longer thinking or talking about the tablecloth, but presumably her unconscious thought that the tablecloth is blue retains its conceptual structure.

In chapter 4, Katsafanas refines the conception of a Nietzschean drive developed in his earlier work, arguing that a drive is a disposition that induces a signature affective orientation, which in turn leads the agent both to engage in a characteristic range of actions and to take herself to be warranted in so doing. Rather than prompting actions directly, then, the agent’s drives entice her to act in characteristic ways by putting her in a frame of mind in which reasons for acting thus appear salient and relevant while other reasons do not. Someone’s sex drive, for example, leads her to see the object of her affection as alluring and attractive, which in turn makes it seem reasonable to pursue that person. Katsafanas also attributes to Nietzsche the stronger claim that drives, via the affective orientations they induce, influence the content of experience itself. In particular, an agent’s drives lead her to see ambiguous evidence as confirmation that a drive-consilient action is warranted. Someone in the grip of an aggressive drive, for example, will tend to see another person’s quick smile as a sneer of contempt that calls for an angry retort rather than as a friendly gesture that calls for a gentler response.

In chapter 5, Katsafanas defines values in terms of drives, arguing that an agent values something just in case she has a drive-induced (positive) affective orientation toward it and doesn’t disapprove of that very orientation. Values are thus a proper subset of the moods, emotions, and affects induced by drives. If this is right, then drives both include and explain values. They include values because the affective orientations that they systematically induce (when not disapproved of) constitute valuations; they explain values because they lead the agent to find warrant for acting in ways that express her values.

Chapter 6 on willing without a (faculty of the) will is one of the only places where Katsafanas resorts to periodization, arguing that while Nietzsche accepts a version of hard incompatibilism in his early works, he shifts to a sort of Spinozist compatibilism in the middle and late works. Is the will the only moral psychological phenomenon about which Nietzsche changed his mind? That would be a curious coincidence. In any event, Katsafanas argues that, while the mature Nietzsche rejects the Kantian idea that it is possible to suspend the influence of motives during reflection and deliberation, someone’s choice is not uniquely determined by the weighted set of her motives because conscious reflection and deliberation interpret motives, and in so doing potentially modulate both their force and their direction. This point is best attested in Nietzsche’s discussions of suffering, which, he says, motivates aversive action only when it is not given meaning; once a meaning is bestowed on suffering, people even seek it out. Nietzsche thus allows a causal role — albeit a supporting rather than starring role — for reflection and deliberation in agency. Katsafanas sells short the novelty and interest of this interpretation when he labels it the ‘vector model’ (160). This is not merely a matter of summing up vectors, with the will adding or subtracting its bit in the context of the larger vectors associated with drives, affects, and desires. Rather, on this view, conscious deliberation modulates both the force and the direction of the agent’s motives. In the language of vector geometry, it functions less like an additional summand and more like a scaling and rotation applied to the other vectors.
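To make the contrast vivid, here is a minimal numerical sketch. It is my own illustration, not anything in the book: the motives, weights, and rotation angles are invented, and the modulate function merely stands in for whatever conscious interpretation does to a motive's force and direction.

```python
# Illustrative sketch only, not Katsafanas's formalism. It contrasts a pure
# vector-sum model of motivation with a model on which reflection modulates
# both the force (magnitude) and direction of the motives before they combine.
import numpy as np

motives = np.array([
    [3.0, 0.0],   # hypothetical drive-based motive (e.g., aggression)
    [0.0, 1.0],   # hypothetical weaker countervailing motive
])

# Pure vector model: the will just adds its own small vector to the sum.
will_vector = np.array([-0.5, 0.5])
vector_model_outcome = motives.sum(axis=0) + will_vector

# Modulation model: reflection rescales each motive (a scalar weight) and
# rotates it (a change of direction) before the motives are combined.
def modulate(motive, weight, angle_radians):
    """Scale a motive's force and rotate its direction."""
    rotation = np.array([
        [np.cos(angle_radians), -np.sin(angle_radians)],
        [np.sin(angle_radians),  np.cos(angle_radians)],
    ])
    return weight * rotation @ motive

# E.g., reinterpreting suffering: its force is damped, its direction reversed.
modulated = [modulate(motives[0], weight=0.3, angle_radians=np.pi),
             modulate(motives[1], weight=1.5, angle_radians=0.0)]
modulation_model_outcome = np.sum(modulated, axis=0)

print(vector_model_outcome)      # summation: the will adds its bit
print(modulation_model_outcome)  # modulation: reflection reshapes the motives
```

On the first model, reflection contributes one more summand; on the second, it transforms the summands themselves, which is why even a modest capacity for reflection can dramatically redirect behavior.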

One problem for this account of conscious reflection’s role in action arises from the timescale on which it is meant to occur. Katsafanas persuasively argues against expecting punctate episodes of reflection to have much effect in the moment, but he does want reflection to exercise its influence over the course of days and years in an individual’s lifetime. I suggest that this is neither sufficiently social nor sufficiently distal. In most of the passages Katsafanas cites to support his interpretation (D 38, D 103, GS 58, BGE 225, GM III.28), one person’s reflection modulates the motivational economy of other people. Indeed, Nietzsche seems to think that this kind of influence is typically intergenerational, making the appropriate timescale that of decades and centuries, not days and years. The under-socialization of Katsafanas’s interpretation is also evidenced by the fact that only one chapter of the book (chapter 8) is explicitly devoted to the social dimensions of moral psychology.

Chapters 7 through 9 cover Nietzsche’s conceptions of the self, its relation to society, and the kinds of selves that count as either great or free. Katsafanas uses values as a bridge from drives to selfhood, arguing that — while there is a minimal sense in which someone’s self just is their values (cf. Strohminger & Nichols, “The Essential Moral Self,” Cognition, 2014) — Nietzsche has a notion of unified selfhood according to which unity obtains when the agent acts on her values and wouldn’t disapprove of that action were she to learn more about the etiology (though not necessarily the consequences) of her motives. For example, a professor who teaches logic effectively but is motivated, unbeknownst to herself, by a resentful desire to rebuke her father’s illogical political attitudes would count as unified if learning about this hidden motive would not reduce her pride in her teaching but disunified if it would undermine her pride. In defining unified selfhood using a counterfactual conditional with an epistemic antecedent, Katsafanas attributes to Nietzsche the position that selfhood is a modally robust good (Pettit, The Robust Demands of the Good: Ethics with Attachment, Virtue, and Respect, Oxford University Press, 2015). Unlike the majority of other commentators, he conceives of unified selfhood not as a matter of harmony among an agent’s drives or values, but as harmony between her drive-motivated actions and her conscious reflection in nearby possible worlds. Katsafanas further argues that, for Nietzsche, only behavior that springs from a unified self counts as genuine action, rather than mere behavior. While he sometimes vacillates between a stronger version of the counterfactual (if the agent were to gain knowledge of etiology, she would still approve of her action) and a weaker version (if the agent were to gain knowledge of etiology, she wouldn’t disapprove), his view is attractive both as an interpretation of Nietzsche and as a self-standing philosophical theory (Doris, Talking to Ourselves: Reflection, Ignorance, and Agency, Oxford University Press, 2015).

This account of the unified self enables Katsafanas to make sense of Nietzsche’s frequent praise for exemplary agents who manage to navigate a unified course of action despite embodying contrary drives. As long as someone approves of their actions in a modally robust way, the drives and affects that conspire to produce those actions can be a bit of a mess. However, Katsafanas might exaggerate the difference between the within-drives harmony views of other commentators and his own between-drives-and-reflection view. After all, if someone’s drives are sufficiently disordered, she is almost certain to end up acting in ways that express motives that she either disapproves of or would disapprove of were she to learn more about their etiology.

Some unified selves also exemplify what Nietzsche calls greatness or freedom. Katsafanas argues that the former are those individuals who are lucky enough to have a significant impact on their societies and cultures through uptake of their values. By contrast, the latter — regardless of their social impact — don’t just satisfy the counterfactual conditional but actually go through the work of tracking down the etiology of the motivations of (enough of) their actions; they make a point of making the antecedent of the counterfactual true. Katsafanas again undersells the novelty and interest of his position here. Not only does he manage to connect drives, through values and conscious reflection, to the self and freedom, but he also does so in a way that explains the value of self-knowledge: successfully engaging in inquiry into one’s own motives while maintaining an affirmative affective stance partly constitutes Nietzschean freedom. And the prospects of such inquiry are significantly boosted if the agent embodies the distinctive Nietzschean virtues of curiosity (Alfano, “The Most Agreeable of All Vices: Nietzsche as Virtue Epistemologist,” British Journal for the History of Philosophy, 2013) and high-spirited contempt (Alfano, “A Schooling in Contempt: Emotions and the Pathos of Distance,” in Philosophical Minds: Nietzsche, Routledge, 2017).

One might worry that Nietzschean freedom thus characterized is too easily got. What are we to say, for instance, about the insouciant self-scrutinizer who blithely affirms his own actions regardless of what he learns about the etiology of his motives? In chapter 9, Katsafanas argues that Nietzsche’s doctrine of will to power places substantive constraints on the motives someone can genuinely affirm. The account of will to power on offer is complicated, and readers unfamiliar with Katsafanas’s earlier work may find this chapter difficult to follow. The basic idea, though, is that willing is always a matter of seeking to overcome resistance through action. To the extent, then, that the insouciant self-scrutinizer’s actions fail to seek out or to overcome resistance, they will fail the will-to-power test and hence not be candidates for affirmation, whether actual or counterfactual.

For anyone teaching a seminar on Nietzsche or the history of moral psychology, I can recommend without reservation putting The Nietzschean Self on your syllabus. It may be possible to write a better book on Nietzsche’s moral psychology, but no one has done so yet.

 

Youtube self-radicalization as a bespoke transformative experience

Philosopher Laurie Paul recently published a book about transformative experiences, which she understands as events that change someone's personality or values. If Nina Strohminger is right that one's self is to a large extent identified with one's values, then going through a transformative experience means becoming a different person.

Typical examples of transformative experiences could be classified as Big Honking Deals. Becoming a vampire. Going to war. Having a child. Enduring a severe mental disorder. But whereas the typical examples are relatively short, time-stamped encounters characterized by trauma, drama, or melodrama, other transformative experiences happen slowly and without attracting attention. You move to a new town and gradually find yourself rooting for its football team, even though you used to despise the whole sport. You lose a friend and eventually realize that you deeply disagree with them about religion, even though you went to the same church. You go to college, major in sociology, and find yourself one day earnestly uttering the word 'différance'.

In this post, I'm interested in another such slow-burning transformative experience: self-radicalization on Youtube. Youtube serves videos to browsers. In some cases, it simply delivers the video at the link someone enters in their URL bar. In other cases, it delivers the Google-determined answer to a query the user enters in Youtube's search bar (Google owns Youtube). While the algorithm that determines the answer is proprietary, we know that it is highly similar to the PageRank algorithm, which in turn resembles a Condorcet voting procedure in a social network. In still other cases, Youtube suggests videos to a user based on the videos they previously watched and the videos subsequently watched by other users who also watched (most of) the videos they watched. Such individualized recommendation processes rely on what's called profiling: building up datasets about individual users that help predict what they think, like, and care about. The algorithms that power these recommendation systems are powerful, relying on hidden Markov models, deep learning, and/or neural networks.
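Since Youtube's actual ranking and recommendation code is proprietary, the best I can do is gesture at its publicly documented ancestor. Here is a toy PageRank computed by power iteration; the link graph and damping factor are invented for illustration, and nothing below should be read as a description of what Google actually runs.

```python
# Toy PageRank by power iteration: an illustration of the general idea, not
# Youtube's actual (proprietary) ranking or recommendation algorithm.
import numpy as np

# Hypothetical directed link graph: links[j] lists the nodes that node j points to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = len(links)
damping = 0.85  # standard damping factor from the original PageRank paper

# Column-stochastic transition matrix: column j spreads node j's "vote" evenly
# over the nodes it links to.
M = np.zeros((n, n))
for j, outlinks in links.items():
    for i in outlinks:
        M[i, j] = 1.0 / len(outlinks)

rank = np.full(n, 1.0 / n)
for _ in range(100):  # power iteration until (approximate) convergence
    rank = (1 - damping) / n + damping * M @ rank

print(rank)  # higher scores ~ nodes voted for by other well-voted nodes
```

The intuition behind the Condorcet comparison is visible in the update rule: each node repeatedly votes for its neighbors, and votes from highly voted nodes count for more.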

These algorithms are built to optimize a variable chosen and operationalized by their coders. In most cases, that variable is engagement: the likelihood that the user will mouse-over, click on, like, comment on, or otherwise interact with an item. Eli Pariser and others have pointed to the ways in which optimizing for engagement (rather than, say, truth, reliability, sensitivity, safety, or some other epistemic value) leads to social and political problems. PageRank and its derivatives can be gamed by propagandists, unduly influencing election outcomes. Even when no nefarious plots are afoot, engagement is at best a loose proxy for epistemic value.

One especially worrisome consequence of optimizing for engagement is the possibility of creating bespoke transformative experiences that radicalize viewers. It's already been argued that conspiracist media outlets such as Fox News have radicalized a large proportion of the Baby Boomer generation. Fed a little hate, they kept watching. The more they watched, the more hate they imbibed and the less connected with truth they became. Over time, Fox ceased to be the contemptible fringe, its place usurped by Breitbart, Newsmax, and Infowars. Now Steve Bannon and Stephen Miller are in the White House advising the Trump administration.

I lay a great deal of the blame for this at the feet of Rush Limbaugh and the Baby Boomers who half-intentionally poisoned their minds with his bluster and bullshit on AM radio throughout the 1990s. (Remember "America under siege"?) But what worries me now is that the general-purpose, mind-poisoning transformation that the Baby Boomers suffered is being individualized and accelerated by the recommendation algorithms employed by Google. Engagement tracks people's emotions, which can be positive or negative. Recent studies suggest that both nascent right-wing white nationalists and nascent Islamist terrorists are increasingly learning to hate by following a string of Youtube recommendations that take them from incredulity to interest to fascination to zealotry. If this is right, then an additional side-effect of optimizing for engagement is the creation of a small but determined group of extremists bent on revanchist politics and revolutionary violence.

Somebody call Sergey Brin.

Follow all of the Alt- and Rogue- Government Twitter handles

As a philosopher, I tend to expect that my research might be socially relevant in 3 to 21 generations. Unless cryogenics speeds the fuck up, I won't be around to see it. Nothing's perfect.

In this post, though, I have something to say that's relevant yesterday: you need to follow all of the Alt- and Rogue- government Twitter handles. Here are the ones I know of, as of 27 January 2017:

@Altforestserv, @alt_fda, @RogueNASA, @AltHHS, @ActualEPAFacts, @AltUSDA

There will be more.

Here is why you need to follow them: the Trump administration has issued gag orders to many government agencies that are meant to supply citizens with the truth. Officially, these agencies must now clear everything they say to the media, on social media, and elsewhere with the administration.

Let me be clear: THIS IS NOT NORMAL. In fact THIS IS HOW CRIMINALS TREAT THEIR VICTIMS. I've spent the last year studying the ways in which knowledge gets distributed in social networks. One very clear pattern is that hubs in star networks tend to abuse their power. A star network is a communication network in which one actor controls whether, to what extent, and about what each of the other actors is able to communicate with other members of the network. Star networks are associated with all the evils. Sexual predators are often the hubs in star networks (DJT, anybody?). The reason is obvious: if A knows that B sexually assaulted C, A will be wary of B. But if B can limit the communication (or, more importantly, the extent to which communication is trusted) between A and C, B can undermine A's wariness.

Star networks are also associated with a variety of other sorts of malfeasance, including financial fraud, academic fraud, and terrorism. The point is that the hubs of star networks have immense power. Not all of them abuse it (e.g., many medical doctors and therapists), but many of them do.
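For readers who want to see the structural point rather than take my word for it, here is a toy sketch using the networkx library. The seven-node graph is made up for illustration; it is not data from the studies I mentioned.

```python
# Toy illustration of hub power in a star network: every shortest path between
# peripheral actors runs through the hub, so the hub can filter or distort
# everything the others learn about each other. (Illustrative example only.)
import networkx as nx

star = nx.star_graph(6)   # node 0 is the hub; nodes 1-6 are the periphery

# Betweenness centrality: the share of shortest paths passing through a node.
centrality = nx.betweenness_centrality(star)
print(centrality[0])      # hub: 1.0, it sits on every peripheral-to-peripheral path
print(centrality[1])      # periphery: 0.0, no one depends on node 1 to communicate

# Remove the hub and the periphery can't communicate at all.
severed = star.copy()
severed.remove_node(0)
print(nx.number_connected_components(severed))  # 6 isolated actors
```

The hub's betweenness of 1.0 is just the formal shadow of the abuser's informal power: nothing passes between the peripheral actors unless it passes through the hub.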

The Trump presidential administration is a classic example of an abuser of hub power. In the last few days, this administration has insisted that scientific branches of government cease all independent communication with the public. All communication is meant to flow through the White House. This is the equivalent of your abusive boyfriend saying that you can't talk with any of your other friends now; anything you want to say has to go through him. 

If we put up with this, we are the moral equivalent of a "friend" who says, "You say your boyfriend hits you, but no one else told me that. In fact, lots of his friends said that you're a lying bitch."

Don't fall for it.

Fortunately, you don't have to navigate this new landscape alone. Into the breach step the Alt- and Rogue- institutional accounts. These will be essential for organizing against the Trump administration.

Trump Presidency to be a Large-Scale Replication Experiment in Destructive Obedience: Here is How to Resist

(I tried to get this published as an op-ed in a few places but met with failure and stonewalling, so I'm putting it on the blog. Please share if you find it useful.)

You might think that, while four to eight years of President Trump will be embarrassing, they will not leave an indelible stain. But know this: America is not special. Our smug self-assurance that genocide, democide, and other crimes against humanity only happen in other countries may be our undoing. Americans are no better and – let us hope – not much worse than people everywhere. And people everywhere are liable to obey authorities who incrementally ratchet up their destructive orders.

There’s good scientific evidence for this claim. In the 1960s, the psychologist Stanley Milgram demonstrated it at Yale University. He showed that approximately two-thirds of ordinary American adults will, when subject to escalating social pressure, put 450 volts of electricity through a complete stranger whose only sin is failing to memorize a list of words.

The setup of Milgram’s experiment is simple: a participant and an actor who pretends to be an ordinary participant are ushered into the lab. The participant is “randomly” selected to be a teacher while the actor is the learner. When the actor makes a mistake in recalling the list of words, the participant shocks him.

The shocks start at a benign 15 volts and increase by 15 volts for each subsequent mistake. Initially the actor stoically grunts through the pain, but at 150 volts he demands to be released from the experiment. By 300 volts, he’s “unconscious.” The experimenter tells the participant to treat failure to answer as a wrong answer, leading ultimately to three shocks in a row with 450 volts.

Why don’t the participants object? Many do. But at the first sign of disobedience, the experimenter mildly instructs, “Please go on.” Further disobedience is met with “The experiment requires that you continue,” then “It is absolutely essential that you continue,” and finally “You have no other choice, you must go on.” If the participant rebels a fifth time, the experiment is terminated. These verbal nudges are enough to get two-thirds of participants to be maximally compliant.

Shocked? So were laypeople and scientists of Milgram’s day. In interviews with 110 psychiatrists, college students, and middle-class adults who were not aware of his results, Milgram found that 100% predicted that no participants would go all the way and that the maximum shock they would deliver was 135 volts.

Milgram’s participants were unusual neither by American nor by global standards. Subsequent studies elsewhere in the USA, as well as in South Africa, Australia, Jordan, Spain, and Austria, have found similar levels of destructive obedience.

In a boon for psychological science and a moral test for the country, the Trump presidency will be the most ecologically valid, large-scale replication of Milgram’s studies ever conducted.

Instead of issuing verbal prods, Trump commands the FBI, Homeland Security, the CIA, and the military. Instead of torturing an obviously innocent victim, he targets African-Americans, women, Mexicans, Muslims, gay people and other groups who have faced dehumanizing animus since the United States enshrined slavery in the Constitution.

If 67% of us maximally comply with the destructive orders that are sure to flow from the Trump White House, Milgram will be proven scientifically right and we will be proven morally wrong.

Milgram’s studies aren’t all bad news, though. He and other researchers have identified six ways that you can be part of the resistant 33%. Here are the lessons we should learn:

1) Resist early. Almost everyone who goes one-third of the way in the Milgram study goes all the way. If you go along to get along, you’re likely to go much too far.

2) Resist loudly, visibly, and intelligently. In the presence of another resister, others become more inclined to resist as well. People are less susceptible to pressure from authority when they know how such pressure can affect them.

3) Use authority to resist authority. When a knowledgeable second party contradicts destructive orders, almost everyone resists.

4) Focus on the individuality of victims. Learn their names. Memorize their faces. Shake their hands. Hug them. Get close to them both psychologically and physically. Compliance drops by more than half when the participant has to touch the victim.

5) Seek solidarity. The solitary hero may be a romantic ideal, but courage breeds courage. Find other resisters and reinforce one another.

6) Nurse your contempt. Compliance drops by two-thirds when the person giving the orders is perceived as just some schmuck.

America is not special. With hard work and a lot of luck, we may emerge from this struggle ashamed but relieved that the worst did not come to pass. In the face of disaster, we can and must demand this much of ourselves.

Draft review of Liao's "Moral Brains"

Matthew Liao is to be commended for editing Moral Brains, a fine collection showcasing truly excellent chapters by, among others, James Woodward, Molly Crockett, and Jana Schaich Borg. In addition to Liao’s detailed, fair-minded, and comprehensive introduction, the book has fourteen chapters. Of these, one is a reprint (Joshua Greene ch. 4), one a re-articulation of previously published arguments (Walter Sinnott-Armstrong ch. 14), and one a literature review (Oliveira-Souza, Zahn, and Moll ch. 9). The rest are original contributions to the rapidly developing field of neuroethics.

This volume confirmed my standing suspicion that progress in neuroethics depends on improving how we conceptualize and operationalize moral phenomena, increasing the accuracy and precision of methods for measuring such phenomena, and rethinking which questions about these phenomena we ask in the first place. Many of the contributors point out that the neuroscience of morality has predominantly employed functional magnetic resonance imaging (fMRI) of voxel-level activation in participants making one-off deontic judgments about hypothetical cases constructed by the experimenters. This approach is liable to result in experimenter (and interpreter) myopia. Judgment is an important component of morality, but so too are perception, attention, creativity, decision-making, action, longitudinal dispositions (e.g., virtues, vices, values, and commitment to principles), reflection on and revision of judgments, and social argumentation. Someone like my father who makes moral judgments when prodded to do so but never reconsiders them, argues sincerely about their adequacy, or acts on the basis of them is a seriously deficient moral agent. Yet much of the current literature seems to presuppose that people like my father are normal members of the moral community. (He’s not. He voted for Trump in Pennsylvania.) The contributions by Jesse Prinz (ch. 1), Jeanette Kennett & Philip Gerrans (ch. 3), Julia Driver (ch. 5), Stephen Darwall (ch. 6), Crockett (ch. 10), and Schaich Borg (ch. 11) are especially trenchant on this point. (In this context, I can’t help but narcissistically recommend my recent monograph – Alfano 2016 – as a framework for better structuring future research in terms of what I contend are the five key dimensions of moral psychology: agency, patiency, sociality, reflexivity, and temporality.)

Beyond fMRI-myopia, the extant neuroethical literature tends to neglect the reverse-inference problem. This problem arises from the fact that the mapping from brain regions to psychological processes is not one-one but many-many, which means that inferring from “region X showed activation” to “process P occurred” is invalid. As of the composition of this review, the amygdala and insula were implicated in over ten percent of all neuroimaging studies indexed by www.neurosynth.org.[1] Inferring, as Greene often does, from the activation of one of these areas to a conclusion about emotion generally or a discrete emotion, such as disgust, is hopeless.
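To see how weak the inference is, it helps to run the numbers. Here is a toy Bayesian calculation; all three probabilities are hypothetical placeholders, not estimates drawn from Neurosynth or from any published study.

```python
# Why reverse inference is weak: a toy Bayesian calculation with made-up numbers.
# Even if the amygdala reliably activates during emotion, the inference from
# "amygdala activated" to "an emotion occurred" can still be shaky, because the
# amygdala also activates in many non-emotional tasks.

p_emotion = 0.30                   # hypothetical prior: share of task events involving emotion
p_activation_given_emotion = 0.80  # hypothetical: amygdala activation when emotion is present
p_activation_given_other = 0.40    # hypothetical: activation during non-emotional processing

p_activation = (p_activation_given_emotion * p_emotion
                + p_activation_given_other * (1 - p_emotion))

# Bayes' rule: the quantity the reverse inference actually needs.
p_emotion_given_activation = p_activation_given_emotion * p_emotion / p_activation

print(round(p_emotion_given_activation, 2))  # ~0.46: barely better than a coin flip
```

Unless the base rate of amygdala activation outside emotional processing is very low, which the Neurosynth figures suggest it is not, the posterior stays unimpressive.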

On top of this, individuating regions as large as the amygdala is unlikely to be sufficiently fine-grained for neuroethicists’ purposes. We need, therefore, to diversify the methods of neuroethics to include approaches that have better spatial resolution (e.g., the single-cell resolution made possible by CUBIC – Susaki et al. 2014) and temporal precision (e.g., electroencephalogram), as well as methods that account for interactions among systems that operate at different timescales and beyond the central nervous system (e.g., hormones and the vagus nerve).

However, many of the questions we would like to ask seem answerable only by shudderingly unethical research on humans or other primates, such as torturous and medically unnecessary surgery. To get around this problem, Schaich Borg (ch. 11) argues for the use of rodent models (including measures of oxytocin) in the study of violent dispositions towards conspecifics. In the same vein, Oliveira-Souza et al. (ch. 9) recommend using lesions in the human population as natural experiments, and Crockett advocates for studies and experimental interventions on the endocrine system related to serotonin (and, I might add as a friendly amendment, testosterone and cortisol, cf. Denson et al. 2013).

Compounding these difficulties is the fact that brain science is expensive and time-consuming. With so many questions to ask and so little human and material capital to devote to them, we are constantly forced to prioritize some questions over others. In light of the crisis of replication and reproducibility that continues to rock psychology and neuroscience, I urge that we cast a skeptical eye on clickbait-generating experimental designs built on hypotheses with near-floor prior probabilities, such as Wheatley & Haidt’s (2005) study of the alleged effects of hypnotically induced incidental disgust (which receives an absurd amount of attention in this volume and in contemporary moral psychology more broadly). Instead, we should pursue designs built to answer structured, specific questions given the constraints we face.

We need to stop asking ham-fisted questions like, “Which leads to better moral judgments – reason or emotion?” and, “Does neuroscience support act utilitarianism or a strawman of Kantian deontology?” As Prinz argues, “reasoning and emotion work together in the moral domain,” so we should reject a model like Haidt’s social intuitionism that “dichotomizes the debate between rationalist and sentimentalist” (p. 65). Reasoning can use emotions as inputs, deliver them as outputs, and integrate them into more complex mental states and dispositions. Contrary to what Greene (ch. 4) tells us, emotion is not an on-or-off “alarm bell.” Indeed, Woodward patiently walks through the emerging evidence that the ventromedial prefrontal cortex (VMPFC), which Greene bluntly labels an “emotion” area, is the region in which diverse value inputs from various parts of the brain (including emotional inputs, but also many others) are transformed into a common currency and integrated into a cardinal (not merely categorical or even ordinal) value signal that guides judgment and decision-making.

On reflection, it should have been obvious that distinguishing categorically between reason (understood monolithically) and emotion (also understood monolithically) was a nonstarter. For one thing, “emotion” includes everything from rage and grief to boredom and nostalgia; it is far too broad a category to license generalizations at the psychological or neurological level (Lindquist et al. 2012). In addition, the brain bases of emotions such as fear and disgust often exhibit exquisitely fine-tuned responses to the evaluative properties they track (Mobbs et al. 2010). Even more to the point, in some cases, we have no problem accepting emotions as reasons or, conversely, giving reasons for the emotions we embody. In the one direction, “She feels sad; something must have reminded her of her brother’s death,” is a reasonable inference. In the other direction, there are resentments that I’ve nursed for over a decade, and I’d be happy to give you all of my reasons for doing so if you buy me a few beers.

To illustrate what I have in mind by asking structured, specific questions, consider this one: “If we want to model moral judgment in consequentialist terms, at what level of analysis should valuation attach to consequences?” This question proceeds from well-understood distinctions within consequentialist theory and seeks a non-question-begging answer. Unlike Greene’s question, which pits an arbitrarily selected version of consequentialism against an arbitrarily selected version of deontology, this one assumes a good deal of common ground, making it possible to get specific. Greene (ch. 4) asserts that act consequentialism employs the appropriate level of analysis, but Darwall (ch. 6) plausibly contends that the evidence better fits rule consequentialism. I venture to suggest that an even better fit is motive consequentialism (Adams 1976) because negative judgments about pushing the large man off the footbridge are almost certainly driven by intuitions like, “Anyone who could bring herself to shove someone in front of a runaway trolley at a moment’s notice is a terrifying asshole.”

So which questions should neuroethicists be asking? One question that they shouldn’t be asking is, “What does current neuroscience tell us about morality?” In this verdict, I am in agreement with a plurality or perhaps even a majority of the contributors to Moral Brains. Several of the chapters barely engage with neuroscience (Kennett & Gerrans, Driver, Darwall, Liao ch. 13). These chapters are well-written, significant contributions to philosophy, but it’s unclear why they were included in a book with this title. To put it another way, it’s unclear to me why the book wasn’t titled ‘Morality and Psychology, with a Dash of Neuroscience’. This difficulty becomes clearer when we note that many of the chapters that do engage in a significant way with neuroscience end up concluding that the brain doesn’t tell us anything that we couldn’t have learned in some other way from psychological or behavioral methods (Prinz, Woodward, Greene, Kahane). Perhaps we should be asking, “What do morality and moral psychology tell us about neuroscience?”

This reversal of explanatory direction presupposes that we have a reasonably coherent conception of what morality is or does. Sinnott-Armstrong argues in the closing chapter of the volume, however, that we lack such a conception because morality is fragmented at the level of content, brain basis, and function. I conclude this review by offering a rejoinder related to function in particular. My suggestion is that the function of morality is to organize communities (understood more or less broadly) in pursuing, promoting, preserving, and protecting what matters to them via cooperation. This conception of morality is, of necessity, vague and parameterized on multiple dimensions, but it is specific enough to gain significant empirical support from cross-cultural studies of folk axiology in both psychology (Alfano 2016, ch. 5) and anthropology (Curry et al. submitted). If this is on the right track, then the considerations that members of communities can and should offer each other (what High-Church meta-ethicists call ‘moral reasons’) are considerations that favor or disfavor the pursuit, promotion, preservation, or protection of shared values, as well as meta-reasons to modify the parameters or the ranking of values. What counts as a consideration, who counts as a member of the community, which values matter, and how they are weighed – these are questions to be answered, as Amartya Sen (1985) persuasively argued, by establishing informational constraints that point to all and only the variables that should be considered by an adequate moral theory. Indeed, some of the most sophisticated arguments in Moral Brains turn on such informational constraints (e.g., Greene pp. 170-2; Kahane pp. 294-5).

This book should interest philosophers working in the areas of neuroethics, moral psychology, normative ethics, research ethics, philosophy of psychology, philosophy of mind, and decision making. It should also grab the attention of psychologists and neuroscientists working in ethics-adjacent and ethics-relevant areas. It might work as a textbook for an advanced undergraduate seminar on neuroethics, and it would certainly be appropriate for a graduate seminar on this topic. (And it has a very detailed index – a rarity these days!)

 

References:

Adams, R. M. (1976). Motive utilitarianism. The Journal of Philosophy, 73(14): 467-81.

Alfano, M. (2016). Moral Psychology: An Introduction. London: Polity.

Curry, O. S., Mullins, D. A., & Whitehouse, H. (submitted). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology.

Denson, T., Mehta, P., & Tan, D. (2013). Endogenous testosterone and cortisol jointly influence reactive aggression in women. Psychoneuroendocrinology, 38(3): 416-24.

Lindquist, K., Wager, T., Kober, H., Bliss-Moreau, E. & Feldman Barrett, L. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35: 121-202.

Mobbs, D., Yu, R., Rowe, J., Eich, H., Feldman-Hall, O., & Dalgleish, T. (2010). Neural activity associated with monitoring the oscillating threat value of a tarantula. Proceedings of the National Academy of Sciences, 107(47): 20582-6.

Sen, A. (1985). Well-being, agency and freedom: The Dewey Lectures 1984. The Journal of Philosophy, 82(4): 169-221.

Susaki, E., Tainaka, K., Perrin, D., Kishino, G., Tawara, T., Watanabe, T., Yokoyama, C., Onoe, H., Eguchi, M., Yamaguchi, S., Abe, T., Kiyonari, H., Shimizu, Y., Miyawaki, A., Yokota, H., Ueda, H. (2014). Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis. Cell, 157(3): 726-39.

Wheatley, T. & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10): 780-784.

 

[1] Accessed 3 December 2016.