Youtube self-radicalization as a bespoke transformative experience

Philosopher Laurie Paul recently published a book about transformative experiences, which she understands as events that change someone's personality or values. If Nina Strohminger is right that one's self is to a large extent identified with one's values, then going through a transformative experience means becoming a different person.

Typical examples of transformative experiences could be classified as Big Honking Deals. Becoming a vampire. Going to war. Having a child. Enduring a severe mental disorder. These typical examples are relatively short, time-stamped encounters characterized by trauma, drama, or melodrama. But transformative experiences can also unfold slowly and without attracting attention. You move to a new town and gradually find yourself rooting for its football team, even though you used to despise the whole sport. You lose a friend and eventually realize that you deeply disagree with them about religion, even though you went to the same church. You go to college, major in sociology, and find yourself one day earnestly uttering the word 'différance'.

In this post, I'm interested in another such slow-burning transformative experience: self-radicalization on Youtube. Youtube serves videos to browsers. In some cases, it simply delivers the link someone enters in their URL bar. In other cases, it delivers the Google-determined answer to a query the user enters in Youtube's search bar (Google owns Youtube). While the algorithm that determines the answer is proprietary, we know that it is highly similar to the PageRank algorithm, which in turn resembles a Condorcet voting procedure conducted over a social network. In still other cases, Youtube suggests videos to a user based on the videos they previously watched and the videos subsequently watched by other users who also watched (most of) the same videos. Such individualized recommendation relies on what's called profiling: building up datasets about individual users that help predict what they think, like, and care about. The models that power these recommendation systems are powerful, drawing on techniques such as hidden Markov models and deep neural networks.
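To make the co-watching mechanism concrete, here is a deliberately toy sketch in Python. Everything in it (the function, the video labels, the data) is invented for illustration; Youtube's actual recommender is proprietary and vastly more sophisticated, but the basic "viewers like you also watched" logic looks something like this:

```python
from collections import Counter

def recommend(target_history, all_histories, k=5):
    """Toy co-watch recommender: score each unseen video by how often
    it appears in the histories of users similar to the target."""
    scores = Counter()
    for history in all_histories:
        overlap = len(target_history & history)
        if overlap == 0:
            continue  # this user shares nothing with the target
        # Weight a neighbour's videos by how similar their history is.
        for video in history - target_history:
            scores[video] += overlap
    return [video for video, _ in scores.most_common(k)]

# Invented example: a user who has started drifting toward fringe
# content gets nudged further in that direction, because the users
# most similar to them drifted too.
histories = [
    {"news_a", "fringe_1", "fringe_2"},
    {"news_a", "news_b"},
    {"cats", "cooking"},
]
print(recommend({"news_a", "fringe_1"}, histories))
# -> ['fringe_2', 'news_b']
```

The design choice that matters is the scoring rule: similarity to past viewers is the only signal, so whatever trajectory similar viewers took, the system reproduces it.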

These algorithms are built to optimize a variable chosen and operationalized by their coders. In most cases, that variable is engagement: the likelihood that the user will mouse over, click on, like, comment on, or otherwise interact with an item. Eli Pariser and others have pointed to the ways in which optimizing for engagement (rather than, say, truth, reliability, sensitivity, safety, or some other epistemic value) leads to social and political problems. PageRank and its derivatives can be gamed by propagandists, unduly influencing election outcomes. Even when no nefarious plots are afoot, engagement is at best a loose proxy for epistemic value.
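A minimal sketch of why this matters (all scores invented for illustration): a ranker whose objective is engagement alone is structurally blind to reliability, even when reliability information is sitting right there in the data.

```python
# Invented scores for illustration only.
candidates = {
    "measured_explainer": {"p_engage": 0.04, "reliability": 0.9},
    "outrage_bait":       {"p_engage": 0.21, "reliability": 0.2},
    "conspiracy_teaser":  {"p_engage": 0.17, "reliability": 0.1},
}

def rank(items):
    # The objective consults only predicted engagement; the
    # reliability field exists in the data but is never read.
    return sorted(items, key=lambda v: items[v]["p_engage"], reverse=True)

print(rank(candidates))
# -> ['outrage_bait', 'conspiracy_teaser', 'measured_explainer']
```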

One especially worrisome consequence of optimizing for engagement is the possibility of creating bespoke transformative experiences that radicalize viewers. It's already been argued that conspiracist media such as Fox News has radicalized a large proportion of the Baby Boomer generation. Fed a little hate, they kept watching. The more they watched, the more hate they imbibed and the less connected with truth they became. Over time, Fox ceased to be the contemptible fringe and was usurped by Breitbart, Newsmax, and Infowars. Now Steve Bannon and Stephen Miller are in the White House advising the Trump administration.

I lay a great deal of the blame for this at the feet of Rush Limbaugh and the Baby Boomers who half-intentionally poisoned their minds with his bluster and bullshit on AM radio throughout the 1990s. (Remember "America under siege"?) But what worries me now is that the general-purpose, mind-poisoning transformation that the Baby Boomers suffered is being individualized and accelerated by the recommendation algorithms employed by Google. Engagement tracks people's emotions, which can be positive or negative. Recent studies suggest that both nascent right-wing white nationalists and nascent Islamist terrorists are increasingly learning to hate by following a string of Youtube recommendations that take them from incredulity to interest to fascination to zealotry. If this is right, then an additional side-effect of optimizing for engagement is the creation of a small but determined group of extremists bent on revanchist politics and revolutionary violence.

Somebody call Sergey Brin.

Follow all of the Alt- and Rogue- Government Twitter handles

As a philosopher, I tend to expect that my research might be socially relevant in 3 to 21 generations. Unless cryonics speeds the fuck up, I won't be around to see it. Nothing's perfect.

In this post, though, I have something to say that's relevant yesterday: you need to follow all of the Alt- and Rogue- government Twitter handles. Here are the ones I know of, as of 27 January 2017:

@Altforestserv, @alt_fda, @RogueNASA, @AltHHS, @ActualEPAFacts, @AltUSDA

There will be more.

Here is why you need to follow them: the Trump administration has issued gag orders to many government agencies whose job is to supply citizens with the truth. Officially, those agencies must now clear everything they say to the media, on social media, and elsewhere with the administration.

Let me be clear: THIS IS NOT NORMAL. In fact, THIS IS HOW CRIMINALS TREAT THEIR VICTIMS. I've spent the last year studying the ways in which knowledge gets distributed in social networks. One very clear pattern is that hubs in star networks tend to abuse their power. A star network is a communication network in which one actor controls whether, to what extent, and about what each of the other actors can communicate with the other members of the network. Star networks are associated with all the evils. Sexual predators are often the hubs of star networks (DJT, anybody?). The reason is obvious: if A knows that B sexually assaulted C, A will be wary of B. But if B can limit the communication between A and C (or, more importantly, the extent to which that communication is trusted), B can undermine A's wariness.

Star networks are also associated with a variety of other sorts of malfeasance, including financial fraud, academic fraud, and terrorism. The point is that the hubs of star networks have immense power. Not all of them abuse it (e.g., many medical doctors and therapists), but many of them do.
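For the graph-theoretically inclined, the hub's power can be quantified. In a star network, every shortest path between two spokes passes through the hub, which is exactly what betweenness centrality measures. A toy illustration, assuming the networkx library:

```python
import networkx as nx

# A star network: hub node 0 connected to six spokes, no other edges.
star = nx.star_graph(6)

# Betweenness centrality: the fraction of shortest paths between
# pairs of other nodes that pass through a given node. The hub gets
# the maximum possible score; every spoke gets zero.
print(nx.betweenness_centrality(star))
# {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0}
```

The hub's score of 1.0 is the formal expression of total mediation: nothing passes between spokes except through it.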

The Trump administration is a classic example of an abuser of hub power. In the last few days, it has insisted that scientific branches of government cease all independent communication with the public. All communication is meant to flow through the White House. This is the equivalent of your abusive boyfriend saying that you can't talk with any of your other friends now; anything you want to say has to go through him.

If we put up with this, we are the moral equivalent of a "friend" who says, "You say your boyfriend hits you, but no one else told me that. In fact, lots of his friends said that you're a lying bitch."

Don't fall for it.

Fortunately, you don't have to navigate this new landscape alone. Into the breach step the Alt- and Rogue- institutional accounts. These will be essential for organizing against the Trump administration.

Trump Presidency to be a Large-Scale Replication Experiment in Destructive Obedience: Here is How to Resist

(I tried to get this published as an op-ed in a few places but met with failure and stonewalling, so I'm putting it on the blog. Please share if you find it useful.)

You might think that, while four to eight years of President Trump will be embarrassing, they will not leave an indelible stain. But know this: America is not special. Our smug self-assurance that genocide, democide, and other crimes against humanity only happen in other countries may be our undoing. Americans are no better and – let us hope – not much worse than people everywhere. And people everywhere are liable to obey authorities who incrementally ratchet up their destructive orders.

There’s good scientific evidence for this claim. In the 1960s, the psychologist Stanley Milgram demonstrated it at Yale University. He showed that approximately two-thirds of ordinary American adults will, when subject to escalating social pressure, put 450 volts of electricity through a complete stranger whose only sin is failing to memorize a list of words.

The setup of Milgram’s experiment is simple: a participant and an actor who pretends to be an ordinary participant are ushered into the lab. The participant is “randomly” selected to be a teacher while the actor is the learner. When the actor makes a mistake in recalling the list of words, the participant shocks him.

The shocks start at a benign 15 volts and increase by 15 volts for each subsequent mistake. Initially the actor stoically grunts through the pain, but at 150 volts he demands to be released from the experiment. By 300 volts, he’s “unconscious.” The experimenter tells the participant to treat failure to answer as a wrong answer, leading ultimately to three shocks in a row with 450 volts.

Why don’t the participants object? Many do. But at the first sign of disobedience, the experimenter mildly instructs, “Please go on.” Further disobedience is met with “The experiment requires that you continue,” then “It is absolutely essential that you continue,” and finally “You have no other choice, you must go on.” If the participant rebels a fifth time, the experiment is terminated. These verbal nudges are enough to get two-thirds of participants to be maximally compliant.

Shocked? So were laypeople and scientists of Milgram’s day. In interviews with 110 psychiatrists, college students, and middle-class adults who were not aware of his results, Milgram found that 100% predicted that no participant would go all the way and that the maximum shock participants would deliver was 135 volts.

Milgram’s participants were unusual neither by American nor by global standards. Subsequent studies elsewhere in the USA, as well as in South Africa, Australia, Jordan, Spain, and Austria, have found similar levels of destructive obedience.

In a boon for psychological science and a moral test for the country, the Trump presidency will be the most ecologically valid, large-scale replication of Milgram’s studies ever conducted.

Instead of issuing verbal prods, Trump commands the FBI, Homeland Security, the CIA, and the military. Instead of torturing an obviously innocent victim, he targets African-Americans, women, Mexicans, Muslims, gay people and other groups who have faced dehumanizing animus since the United States enshrined slavery in the Constitution.

If 67% of us maximally comply with the destructive orders that are sure to flow from the Trump White House, Milgram will be proven scientifically right and we will be proven morally wrong.

Milgram’s studies aren’t all bad news, though. He and other researchers have identified six ways that you can be part of the resistant 33%. Here are the lessons we should learn:

1) Resist early. Almost everyone who goes one-third of the way in the Milgram study goes all the way. If you go along to get along, you’re likely to go much too far.

2) Resist loudly, visibly, and intelligently. In the presence of another resister, others become more inclined to resist as well. People are less susceptible to pressure from authority when they know how such pressure can affect them.

3) Use authority to resist authority. When a knowledgeable second party contradicts destructive orders, almost everyone resists.

4) Focus on the individuality of victims. Learn their names. Memorize their faces. Shake their hands. Hug them. Get close to them both psychologically and physically. Compliance drops by more than half when the participant has to touch the victim.

5) Seek solidarity. The solitary hero may be a romantic ideal, but courage breeds courage. Find other resisters and reinforce one another.

6) Nurse your contempt. Compliance drops by two-thirds when the person giving the orders is perceived as just some schmuck.

America is not special. With hard work and a lot of luck, we may emerge from this struggle ashamed but relieved that the worst did not come to pass. In the face of disaster, we can and must demand this much of ourselves.

draft review of Liao's "Moral Brains"

Matthew Liao is to be commended for editing Moral Brains, a fine collection showcasing truly excellent chapters by, among others, James Woodward, Molly Crockett, and Jana Schaich Borg. In addition to Liao’s detailed, fair-minded, and comprehensive introduction, the book has fourteen chapters. Of these, one is a reprint (Joshua Greene, ch. 4), one is a re-articulation of previously published arguments (Walter Sinnott-Armstrong, ch. 14), and one is a literature review (Oliveira-Souza, Zahn, and Moll, ch. 9). The rest are original contributions to the rapidly developing field of neuroethics.

This volume confirmed my standing suspicion that progress in neuroethics depends on improving how we conceptualize and operationalize moral phenomena, how we increase the accuracy and precision of methods for measuring such phenomena, and which questions about these phenomena we ask in the first place. Many of the contributors point out that the neuroscience of morality has predominantly employed functional magnetic resonance imaging (fMRI) of voxel-level activation in participants making one-off deontic judgments about hypothetical cases constructed by the experimenters. This approach is liable to result in experimenter (and interpreter) myopia. Judgment is an important component of morality, but so too are perception, attention, creativity, decision-making, action, longitudinal dispositions (e.g., virtues, vices, values, and commitment to principles), reflection on and revision of judgments, and social argumentation. Someone like my father who makes moral judgments when prodded to do so but never reconsiders them, argues sincerely about their adequacy, or acts on the basis of them is a seriously deficient moral agent. Yet much of the current literature seems to presuppose that people like my father are normal members of the moral community. (He’s not. He voted for Trump in Pennsylvania.) The contributions by Jesse Prinz (ch. 1), Jeanette Kennett & Philip Gerrans (ch. 3), Julia Driver (ch. 5), Stephen Darwall (ch. 6), Crockett (ch. 10), and Schaich Borg (ch. 11) are especially trenchant on this point. (In this context, I can’t help but narcissistically recommend my recent monograph – Alfano 2016 – as a framework for better structuring future research in terms of what I contend are the five key dimensions of moral psychology: agency, patiency, sociality, reflexivity, and temporality.)

Beyond fMRI-myopia, the extant neuroethical literature tends to neglect the reverse-inference problem. This problem arises from the fact that the mapping from brain regions to psychological processes is not one-one but many-many, which means that inferring from “region X showed activation” to “process P occurred” is invalid. As of the composition of this review, the amygdala and insula were implicated in over ten percent of all neuroimaging studies indexed by www.neurosynth.org.[1] Inferring, as Greene often does, from the activation of one of these areas to a conclusion about emotion generally or a discrete emotion, such as disgust, is hopeless.
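To see how badly reverse inference can go wrong, run it through Bayes’ theorem with purely illustrative numbers (these are not estimates from the literature): suppose emotional processing occurs in 30% of task conditions, the amygdala activates in 90% of those, and, given how promiscuous the amygdala is, it also activates in 50% of non-emotion conditions. Then

$$P(\text{emotion} \mid \text{amygdala}) = \frac{0.9 \times 0.3}{0.9 \times 0.3 + 0.5 \times 0.7} \approx 0.44.$$

Even on these generous assumptions, observing amygdala activation leaves it less likely than not that an emotional process occurred.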

On top of this, individuating regions as large as the amygdala is unlikely to be sufficiently fine-grained for neuroethicists’ purposes. We need, therefore, to diversify the methods of neuroethics to include approaches that have better spatial resolution (e.g., the single-cell resolution made possible by CUBIC – Susaki et al. 2014) and temporal precision (e.g., electroencephalogram), as well as methods that account for interactions among systems that operate at different timescales and beyond the central nervous system (e.g., hormones and the vagus nerve).

However, many of the questions we would like to ask seem answerable only by shudderingly unethical research on humans or other primates, such as torturous and medically unnecessary surgery. To get around this problem, Schaich Borg (ch. 11) argues for the use of rodent models (including measures of oxytocin) in the study of violent dispositions towards conspecifics. In the same vein, Oliveira-Souza et al. (ch. 9) recommend using lesions in the human population as natural experiments, and Crockett advocates studies of and experimental interventions on serotonin and related systems (and, I might add as a friendly amendment, testosterone and cortisol; cf. Denson et al. 2013).

Compounding these difficulties is the fact that brain science is expensive and time-consuming. With so many questions to ask and so little human and material capital to devote to them, we are constantly forced to prioritize some questions over others. In light of the crisis of replication and reproducibility that continues to rock psychology and neuroscience, I urge that we cast a skeptical eye on clickbait-generating experimental designs built on hypotheses with near-floor prior probabilities, such as Wheatley & Haidt’s (2005) study of the alleged effects of hypnotically-induced incidental disgust (which receives an absurd amount of attention in this volume and in contemporary moral psychology more broadly). Instead, we should pursue designs built to answer structured, specific questions given the constraints we face.

We need to stop asking ham-fisted questions like, “Which leads to better moral judgments – reason or emotion?” and, “Does neuroscience support act utilitarianism or a strawman of Kantian deontology?” As Prinz argues, “reasoning and emotion work together in the moral domain,” so we should reject a model like Haidt’s social intuitionism that “dichotomizes the debate between rationalist and sentimentalist” (p. 65). Reasoning can use emotions as inputs, deliver them as outputs, and integrate them into more complex mental states and dispositions. Contrary to what Greene (ch. 4) tells us, emotion is not an on-or-off “alarm bell.” Indeed, Woodward patiently walks through the emerging evidence that the ventromedial prefrontal cortex (VMPFC), which Greene bluntly labels an “emotion” area, is the region in which diverse value inputs from various parts of the brain (including emotional inputs, but also many others) are transformed into a common currency and integrated into a cardinal (not merely categorical or even ordinal) value signal that guides judgment and decision-making.

On reflection, it should have been obvious that distinguishing categorically between reason (understood monolithically) and emotion (also understood monolithically) was a nonstarter. For one thing, “emotion” includes everything from rage and grief to boredom and nostalgia; it is far too broad a category to license generalizations at the psychological or neurological level (Lindquist et al. 2012). In addition, the brain bases of emotions such as fear and disgust often exhibit exquisitely fine-tuned responses to the evaluative properties they track (Mobbs et al. 2010). Even more to the point, in some cases, we have no problem accepting emotions as reasons or, conversely, giving reasons for the emotions we embody. In the one direction, “She feels sad; something must have reminded her of her brother’s death,” is a reasonable inference. In the other direction, there are resentments that I’ve nursed for over a decade, and I’d be happy to give you all of my reasons for doing so if you buy me a few beers.

To illustrate what I have in mind by asking structured, specific questions, consider this one: “If we want to model moral judgment in consequentialist terms, at what level of analysis should valuation attach to consequences?” This question starts from well-understood distinctions within consequentialist theory and seeks a non-question-begging answer. Unlike Greene’s question, which pits an arbitrarily-selected version of consequentialism against an arbitrarily-selected version of deontology, this one assumes a good deal of common ground, making it possible to get specific. Greene (ch. 4) asserts that act consequentialism employs the appropriate level of analysis, but Darwall (ch. 6) plausibly contends that the evidence better fits rule consequentialism. I venture to suggest that an even better fit is motive consequentialism (Adams 1976), because negative judgments about pushing the large man off the footbridge are almost certainly driven by intuitions like, “Anyone who could bring herself to shove someone in front of a runaway trolley at a moment’s notice is a terrifying asshole.”

So which questions should neuroethicists be asking? One question that they shouldn’t be asking is, “What does current neuroscience tell us about morality?” In this verdict, I am in agreement with a plurality or perhaps even a majority of the contributors to Moral Brains. Several of the chapters barely engage with neuroscience (Kennett & Gerrans, Driver, Darwall, Liao ch. 13). These chapters are well-written, significant contributions to philosophy, but it’s unclear why they were included in a book with this title. To put it another way, it’s unclear to me why the book wasn’t titled ‘Morality and Psychology, with a Dash of Neuroscience’. This difficulty becomes clearer when we note that many of the chapters that do engage in a significant way with neuroscience end up concluding that the brain doesn’t tell us anything that we couldn’t have learned in some other way from psychological or behavioral methods (Prinz, Woodward, Greene, Kahane). Perhaps we should be asking, “What do morality and moral psychology tell us about neuroscience?”

This reversal of explanatory direction presupposes that we have a reasonably coherent conception of what morality is or does. Sinnott-Armstrong argues in the closing chapter of the volume, however, that we lack such a conception because morality is fragmented at the level of content, brain basis, and function. I conclude this review by offering a rejoinder related to function in particular. My suggestion is that the function of morality is to organize communities (understood more or less broadly) in pursuing, promoting, preserving, and protecting what matters to them via cooperation. This conception of morality is, of necessity, vague and parameterized on multiple dimensions, but it is specific enough to gain significant empirical support from cross-cultural studies of folk axiology in both psychology (Alfano 2016, ch. 5) and anthropology (Curry et al. submitted). If this is on the right track, then the considerations that members of communities can and should offer each other (what High-Church meta-ethicists call ‘moral reasons’) are considerations that favor or disfavor the pursuit, promotion, preservation, or protection of shared values, as well as meta-reasons to modify the parameters or the ranking of values. What counts as a consideration, who counts as a member of the community, which values matter, and how they are weighed – these are questions to be answered, as Amartya Sen (1985) persuasively argued, by establishing informational constraints that point to all and only the variables that should be considered by an adequate moral theory. Indeed, some of the most sophisticated arguments in Moral Brains turn on such informational constraints (e.g., Greene pp. 170-2; Kahane pp. 294-5).

This book should interest philosophers working in the areas of neuroethics, moral psychology, normative ethics, research ethics, philosophy of psychology, philosophy of mind, and decision making. It should also grab the attention of psychologists and neuroscientists working in ethics-adjacent and ethics-relevant areas. It might work as a textbook for an advanced undergraduate seminar on neuroethics, and it would certainly be appropriate for a graduate seminar on this topic. (And it has a very detailed index – a rarity these days!)


References:

Adams, R. M. (1976). Motive utilitarianism. The Journal of Philosophy, 73(14): 467-81.

Alfano, M. (2016). Moral Psychology: An Introduction. London: Polity.

Curry, O. S., Mullins, D. A., & Whitehouse, H. (submitted). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology.

Denson, T., Mehta, P., & Tan, D. (2013). Endogenous testosterone and cortisol jointly influence reactive aggression in women. Psychoneuroendocrinology, 38(3): 416-24.

Lindquist, K., Wager, T., Kober, H., Bliss-Moreau, E. & Feldman Barrett, L. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35: 121-202.

Mobbs, D., Yu, R., Rowe, J., Eich, H., Feldman-Hall, O., & Dalgleish, T. (2010). Neural activity associated with monitoring the oscillating threat value of a tarantula. Proceedings of the National Academy of Sciences, 107(47): 20582-6.

Sen, A. (1985). Well-being, agency and freedom: The Dewey Lectures 1984. The Journal of Philosophy, 82(4): 169-221.

Susaki, E., Tainaka, K., Perrin, D., Kishino, G., Tawara, T., Watanabe, T., Yokoyama, C., Onoe, H., Eguchi, M., Yamaguchi, S., Abe, T., Kiyonari, H., Shimizu, Y., Miyawaki, A., Yokota, H., & Ueda, H. (2014). Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis. Cell, 157(3): 726-39.

Wheatley, T. & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10): 780-784.


[1] Accessed 3 December 2016.