Data Archaeology II: Predestination

I was planning on writing my first entry in this blog series on the problem of free will in philosophy and theology as it relates to Data when (providentially?) this long read in The Guardian came to my attention, via the British theologian John Milbank’s Twitter feed. Written by Oliver Burkeman, it outlines the arguments of philosophers, representing a minority in their field, who hold that free will does not exist. Rather, they claim that all human intentions and actions can be reduced to bare cause and effect, the collisions of particles set into motion by the Big Bang. Even when we think that we are making a free choice (for example, between an apple and a banana from a fruit bowl), we are in fact being driven along by pre-conscious chemical interactions in our brain.

It is no surprise that this question would be particularly germane to our present moment. The volume of responses to the piece that The Guardian received is proof enough that it touched a nerve. In outlining the stakes of the free will vs. determinism debate, Burkeman centers the question of personal moral responsibility — if we have no control over our actions, then why, for example, should we punish violent criminals at all?1 Additionally, the piece suggests that awareness of an entirely deterministic universe can induce crises of the self, existential torment, and depressive episodes in unwitting victims.

I think there’s more to it than that, though. If we think of human history as a pendulum swinging between the two poles of freedom and determinism, it is fair to say we live in a deterministic age. By that I mean that we understand our condition to be decisively constrained by larger structures and dynamics that we have no control over. Moreover, we seem to despair of any agency in solving or even ameliorating our dilemmas. To give a few examples: trauma has moved to a central place in our understanding of social injustices, and it is understood that trauma leaves permanent, inescapable psychic wounds that can even be passed down and become “generational.” When people discuss grief, they tend to emphasize that one can never get over it. Racism, sexism, and other forms of inequality are now understood to be structural in nature, which means that in our interpersonal interactions we can perpetuate them even against our own intentions, shaped as they are by hierarchies that are durable and can only change slightly, if at all, under the pressure of historical forces.

None of this is necessarily wrong. I only mean to point out that our culture, specifically as it attempts to grapple with the set of social and political problems that constitute “our current predicament,” is inflected at the moment by a deterministic streak. Furthermore, this tendency — with the concerns and anxieties it entails — is present in the conversation about Data and the “impact of technology” on society. This is because technology companies use data in order to predict the future. That is what data science is: the idea that the past can predict the future. If you bought this product, you might like this other one, etc. This is even true for “generative” applications of the technology, like chatbots, which function on probabilistic models about which word should follow another, based on training from past texts. It’s easy to see why many are uncomfortable with all this, for a lot of reasons, but for the purposes of this post I would like to focus on one — it is annoying when someone else thinks they know your own intentions better than you do. It can even feel like a violation of one’s consciousness — and the spark of volition that allows us to freely make our own choices in this world.
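
To make this concrete, here is a minimal sketch of the next-word idea: a toy bigram model, written for illustration only, that counts which words followed which in a training text and then samples accordingly. Real chatbots use neural networks with billions of parameters, but the underlying move, predicting a text's future from its past, is the same.

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, the words that followed it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Sample a next word in proportion to how often it followed `word`."""
    followers = model.get(word)
    if not followers:
        return None
    candidates, counts = zip(*followers.items())
    return random.choices(candidates, weights=counts, k=1)[0]

# A toy corpus standing in for the "past texts" a chatbot is trained on.
corpus = "the dude abides and the dude bowls and the stranger watches"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # usually "dude", occasionally "stranger"
```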

As the title of this post suggests, this broad set of questions found expression in the West under the aegis of Christian theology, which was a project of applying philosophical tools to understanding God as revealed in the Bible. Similar debates occurred in Islam, also under the influence of Greek philosophy, and I imagine in other civilizations as well. The Hebrew Bible had treated divine power as something of a mystery (a black box algorithm, you might say), for example in the Book of Job, or as limited to distinct historical interventions in the narrative of the Hebrew people. But the infusion of Greek philosophy into Christianity, already so evident in the Gospel of John, demanded a more systematic treatment of the question, all the more so because Christianity offered eternal life to its adherents. This is why Christian debates over free will and determinism centered on predestination: the question of whether God knew and willed, even at the moment of creation and before the Fall of Man, which individual souls were to be saved. Even more troubling was the position of double predestination, which held that God also willed that individuals would remain sinners and consequently be damned.

The implications of these debates were obvious to all: if God decided the fate of all, then what role did individuals play in seeking salvation? What did this mean for morality? It was reasonable to think that God, having elected certain people as destined for heaven, would will their good deeds into existence. But what about the good deeds of people who were not elected? And did God will the sins of the damned? I don’t intend to gloss one thousand years of theological debate in this post, but rather to point out a few elements which are of particular interest to the current topic of Data. The first has to do with the reason that predestination was even a thinkable concept — the doctrine of divine omnipotence. Those thinkers who emphasized that individuals’ fates were predestined tended to do so because it was required, in their minds, by divine omnipotence. (Examples include Augustine and John Calvin.) People did have free will in some limited sense, but it was irrational to think that an all-powerful God would not ultimately be responsible for decisions of cosmic significance.

In our deterministic age, meanwhile, we tend to feel powerless to shape our own destinies. This echoes the angst of Christians like Martin Luther, who despaired of his ability to will himself into sanctification before embarking on his reforming mission. But, crucially, we don’t think that all humans are powerless. On the contrary, we feel that our reality is shaped decisively by elites who wield vast and unaccountable control over others. This feeling is common all along the political spectrum, though slightly less so in the center. And of course, among this elite are tech titans who wield enormous influence through their access to our data and control over communication platforms. Their power, in some minds, verges on the divine — see the Guardian’s paraphrase of Yuval Harari’s opinion on free will, which is that it is an “anachronistic myth… rendered obsolete by the power of modern data science to know us better than we know ourselves, and thus to predict and manipulate our choices.”

To sum up: the power over all existence, including people’s innermost thoughts, was first held by God, and was then secularized and conceptualized as a material determinism governed by natural laws. Now, the capabilities of data science allow humans to usurp that power — but only some privileged humans, who were unelected and are unaccountable. In these perverse circumstances, it is no surprise that critics of the status quo on the left and right frequently describe the elite in demonological terms.

Burkeman’s article on free will does offer some relief from the grimness of it all, however. A surrender to determinism can represent a comfort to conflicted souls — this represents the second element of the predestination debate that I want to highlight. As Burkeman says, “there’s something liberating about it, too. It’s a reason to be gentler with yourself, and with others. For those of us prone to being hard on ourselves, it’s therapeutic to keep in the back of your mind the thought that you might be doing precisely as well as you were always going to be doing – that in the profoundest sense, you couldn’t have done any more.” This therapeutic relief is exactly the same as that experienced by Luther 500 years ago, when he reformulated his spiritual crisis — it was not that all his efforts at sanctification were in vain, but rather that God would freely offer him grace in spite of his failings. This insight, that he would be saved by faith rather than deeds, formed the theological core of his later career.

This type of release and surrender to determinism, commonly experienced upon the resolution of religious, philosophical, or psychological crises, is nonetheless elusive when it comes to the discourse around Data. I suppose that there are some who are optimistic about technology’s ability to solve social problems, and imagine a techno-utopia governed by the benevolent application of advanced statistical techniques. But this does not solve the problem of individual free will, its manipulation by human artifice, and the anxieties that entails.

But I have some good news to share. I am not sure whether we possess free will, but I strongly suspect that we do, and that it exists in some deep stratum of ourselves that cannot be captured by language, no matter how philosophically rigorous, nor the scientific method, no matter how advanced. But one thing I can assure you is that Data does not confer godlike powers on its experts. If free will is an anachronistic myth, then its contemporary counterpart is “the power of modern data science to know us better than we know ourselves.” This idea, which enjoys a surprising level of ubiquity on both sides of the debate over tech, does not withstand scrutiny. No one who actually practices data science seems to believe it — indeed, built into the statistical concepts that undergird data science is the inevitability of error, an irreducible stochasticity within which free will might be said to lie. Its predictive powers only work at scale, so they cannot function on the level of a divine omnipotence or natural law which governs every particle in the cosmos in perpetuity. None of this means that Data is not a powerful tool that deserves both optimistic and critical attention. Rather, it suggests that Data ought to be demystified in order to generate an accurate and useful understanding of its implications, which is the endeavor I hope to pursue further in subsequent posts.
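
A toy simulation makes the point about scale concrete (the numbers are invented for illustration, not drawn from any real system). If each person independently does some thing with probability 0.3, no predictor can be reliably right about a given individual, yet the aggregate count is almost exactly knowable:

```python
import random

random.seed(0)
N = 100_000
P = 0.3  # suppose each person, independently, does some thing with probability 0.3

actions = [random.random() < P for _ in range(N)]

# The best individual-level guess ("no one does it") is still wrong for
# roughly 30% of people: an irreducible, per-person error.
print(f"Individual prediction wrong {sum(actions) / N:.1%} of the time")

# The aggregate, by contrast, is tightly concentrated around N * P.
expected, observed = N * P, sum(actions)
print(f"Expected {expected:.0f}, observed {observed} "
      f"(off by {abs(observed - expected) / expected:.2%})")
```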


I predict that the next entry in this blog, discussing labels, will arrive in the near future. Please share any thoughts or questions in the comments section, or email me: maxstaley at gmail dot com.

1: Philosophy is far from the only discipline that raises questions about criminal responsibility. Sociology and psychology both also suggest that your background or experiences can predispose you to commit crimes. And these factors are often raised in media coverage of crime and in criminal proceedings, particularly when a “just” punishment is being weighed. Point being, deterministic thinking permeates many discursive fields and is not necessarily as exotic as the piece suggests.

New Blog Series: The Archaeology of Data

Spiral Mind, Chaos Obviously by Hunter Longe & Lauren Huret

I am going to try something new with this blog, instead of updating it infrequently with whatever happens to catch my interest when I have the time. Over the past few months, a number of topics which have long preoccupied my thoughts — secularization, political thought, data science/AI/Machine Learning, “internet culture” (i.e. Twitter), the philosophy of science, etc. — have coalesced into the idea of a somewhat coherent writing project.

The plan is to write a series of posts which attempt to place data/AI in a broader historical and intellectual context. People often say that Silicon Valley types need to learn more about the humanities, in the hope that this would open their souls to the world and make them behave more ethically. Setting aside for the moment the question of whether this would work (it would not), my project is not one of educating tech workers about philosophy. Nor is it, strictly speaking, a critique of data/AI using the concepts provided by the academic humanities.

What I am interested in, rather, is the fact that the discourse around data takes up a number of problems and debates that have, in the past, been discussed extensively within other intellectual traditions. This is not a bad thing; it is normal that public discourse would confront the big, perennial questions of human existence, which philosophers et al. have traditionally argued over. My goal is to excavate these connections in the hope that they can suggest more productive ways of addressing the dilemmas raised by Data. (From here on out I will employ the capitalized “Data” to capture the whole gamut of related technologies, from internet cookies to SQL databases, Machine Learning algorithms, Artificial Intelligences, autonomous vehicles, etc.)

When I talk about the “discourse around Data,” I mean it very broadly — I will discuss not only books and articles on the topic, but also marketing copy, popular films, and of course, tweets. To illustrate what sorts of intellectual traditions I hope to cover, here are two examples:

Theology: As I have written previously, popular culture often depicts data mastery as granting god-like powers to its practitioners. This presents an entryway for thinking through how we conceive of the knowledge and control that our digital footprints cede to others, as well as common feelings of outrage, disgust, and fear toward tech titans like Mark Zuckerberg. Their ambitions seem to violate the natural order by usurping attributes of God — omniscience, omnipotence, immortality. Perennial philosophical debates, for example about free will and determinism, have traditionally been conducted in a theological register. Now they are present in conversations about data privacy, social media misinformation, really all over the place. My plan is to unpack how secularized theological concepts play a crucial role in shaping our concept of Data.

Epistemology: I have also written previously on this blog about the epistemological problems that are at play in data-based technologies. The question ultimately concerns the kind of knowledge that data can provide. Data is, at bottom, an abstraction, a digital footprint stored on a server. Combined with sophisticated statistical operations, these records of interactions can predict the future — based on the assumption that past behaviors will continue. Although an algorithm can be wrong in its specific predictions, at scale it should make mistakes in a predictable fashion. While these insights may seem rote to a data scientist, or indeed anyone who knows a bit about statistics, I plan to explore these questions about certainty, doubt, predictability, and so on. Whether or not you understand something like Bayesian inference, the increasing role of statistical methods in all sorts of decision-making suggests that the underlying epistemological assumptions bear interrogation.
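
For readers who have not met Bayesian inference, here is a minimal worked example of the belief-updating involved; the scenario and all the numbers are invented for illustration.

```python
def bayes_update(prior, likelihood, evidence_rate):
    """P(hypothesis | evidence) = P(evidence | hypothesis) * P(hypothesis) / P(evidence)."""
    return likelihood * prior / evidence_rate

# Invented numbers, purely for illustration:
#   P(user wants a refrigerator)               = 0.01
#   P(searches "refrigerator" | wants one)     = 0.80
#   P(searches "refrigerator") overall         = 0.02
posterior = bayes_update(prior=0.01, likelihood=0.80, evidence_rate=0.02)
print(f"P(wants a refrigerator | searched for one) = {posterior:.0%}")  # 40%
```

Note that even after strong evidence, the output is a probability, not a certainty; doubt is built into the arithmetic.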

I will also discuss disciplines including sociology, ethics, history, political philosophy, phenomenology, political economy, and others, to the extent that my limited expertise allows. Additionally, several themes will arise repeatedly in the course of the blog entries. These are loci where Data and intellectual history intersect in particularly striking ways. Some of these are:

  • Scale
  • Complexity
  • Emergence
  • Agency
  • Alienation
  • Legibility
  • Commodification
  • Abstraction
  • Simulation
  • “Visualization”
  • Taxonomy
  • Identity

And so on. I hope to explore these themes over the next few months in the informal context of this blog. If you have any thoughts or comments on these issues, I am always happy to discuss them here in the comments section or on Twitter. I would love to hear from people who know more about both Data technologies and the intellectual histories under discussion.

Insta-theology

The novelist-poet-memoirist Leigh Stein had a Times op-ed the other day about what is arguably a New Religious Movement: the “Instavangelists.” The portmanteau refers to social media influencers, mainly women, who serve their followers a stream of guidance inflected by the vocabularies, concerns, and “values” of the internet-addled millennial generation. As she puts it:

Many millennials who have turned their backs on religious tradition because it isn’t sufficiently diverse or inclusive have found alternative scripture online. Our new belief system is a blend of left-wing political orthodoxy, intersectional feminism, self-optimization, therapy, wellness, astrology and Dolly Parton.

Let’s set aside for the moment that, in spite of the use of the first person plural, Stein is clearly critical of this “belief system,” a fact which will probably be lost on many readers. (The column serves as promotion for her new novel Self Care, a satire of the whole milieu.) This notion of ersatz religion has been a long-standing interest of mine, and the column touches on many strands of the academic discourse about the relationship between modernity and religion.

When I started reading the column, I expected that the accounts Stein was discussing at least had some vaguely Christian tendencies, especially because she positions them as the successors to the televangelists of the 20th century. But as she explains, what they borrow from Oral Roberts et al. is more along the lines of the Prosperity Gospel than the Greek ones. The shared idea is that positive thinking will “manifest abundance,” although in this case the set of doctrines centers on social justice politics and “the rebranding of diet and beauty culture as wellness.” There is, of course, something more than just narcissism at work in both movements, as they both enjoin the individual to give a little bit of themselves to the wider world before they can expect earthly rewards — this can take the form of tithing to a church, uplifting marginalized voices, or various other salutary deeds, depending on the context.

The central thrust of Stein’s argument — besides the obvious, unflattering implication that like the televangelists before them, influencers are grifting — is that these influencers mimic religion by offering their followers meaning and a direction for their hopes, energies, and outrage. In a word, purpose. But they cannot offer answers to perennial questions about the meaning of life, existence, and in particular, suffering. What’s more, because they are located on the internet, with its harsh hierarchies of attention, they cannot offer the sense of community and mutual obligation that have sustained traditional religions — indeed, as Stein points out, these aspects of religious life are a large factor in their decline among millennials, insofar as for many young people “community” and “mutual obligation” are little more than codewords for sexism, bigotry, “power imbalances,” etc.

In short, the Church of Self Care cannot replace traditional religion because on the one hand it does not demand that its adherents give themselves wholly, in the spirit of humility, to something greater than themselves (this “something greater” can be either transcendent or immanent), and on the other hand cannot offer the reciprocity that serves as a reward of religious life, neither immanently from the community nor transcendently from God (i.e. communion).

So the Instagram ideology, a syncretic mixture of social justice politics and wellness culture, is a simulacrum of traditional religion. This echoes academic debates from the early-to-middle 20th century, mainly involving German intellectuals, about the relationship between modern, secular thought and theology. The debate centered on political philosophy, with many suggesting that by imitating the truth claims and sociological role of religion without a connection to God, secular thought was not only deficient but actively sinister. Carl Schmitt, a National Socialist jurist, offered the famous formulation that “all significant concepts of the modern theory of the state are secularized theological concepts.” The most straightforward illustration of this is that the sovereign is a secularized God — just as He can suspend the natural laws that govern His creation and perform miracles, the sovereign can declare a “state of exception” and suspend the constitutional order. Eric Voegelin, an Austrian critic of Nazism who later emigrated to America and became a conservative, argued in The Political Religions that modern ideologies paralleled religious systems of thought in their totalizing attempts to give ultimate meaning to individuals through reference to the political and cosmic orders. However, modern political religions, which he would later criticize as examples of a gnostic corruption endemic to Western Civilization, found their ground in inner-worldly or immanent elements of the cosmos (i.e. race, or class), rather than basing themselves on transcendent grounds. Hans Blumenberg, a half-Jewish philosopher, would later defend secularism by arguing that, while the perennial questions of philosophy remained the same, modern concepts had filled in answer positions vacated by theology as a result of the nominalist revolution of the late Middle Ages. The debate has continued to the present, with recent interventions by figures like Mark Lilla and Michael Gillespie.

My point in linking all these threads, I suppose, is to illustrate that we ought to be taking seriously the role of religion, ersatz or otherwise, in directing social energies. Even while I was writing this, a conversation broke out over Twitter about whether wokeness is a replacement religion — I would say it may replicate certain impulses, like the denunciation of heresy, that have historically found expression through religion, but that is not enough. What Stein has identified is a system of thought that offers both a comprehensive vision of the world/creation (this is a “woke” vision) and a set of practices that can help ameliorate the aspirations and torments of being alive in an imperfect world. I think that, in a way, a better parallel than American evangelicalism is liberal, mainline Protestantism. What they both offer are beliefs that are meant to provide comfort alongside suggestions — not commandments — oriented toward self-improvement. They make you feel a little better, and they make you a better citizen of the modern world, without being too demanding. For many, this is not enough, particularly as our world (politically, ecologically, existentially) seems to require some sterner stuff.

Where people will turn when all this proves insufficient — and surely, many find it plenty fulfilling — is anyone’s guess. Arguably, you can see the results of this process all around us already, but in a fragmentary and inchoate way. I think the key is that whatever replaces the replacement religions must both demand more of its adherents and offer a reciprocity that is missing in people’s lives, religious or secular.

Data Fetishism, Digital Humanities: On the Seshat Databank


Frontispiece of Giambattista Vico’s La Scienza Nuova, 1730 ed.

On a scale of one to 10, how complex is contemporary Japan? How about ancient Rome? Neolithic settlements in the Niger Delta? Does the presence or absence of “Informal Elite Polygamy” give you the entire picture of gender relations in a given society? And would you trust a machine learning algorithm that used this data to predict our future? Some researchers are claiming that we should do just that, and even hope that we can use them to make policy recommendations addressing the great issues of our day — climate change, economic inequality, mass migration, you name it.

A massive databank created by University of Connecticut professor Peter Turchin aims to collect our knowledge of past cultures in a “systematically organized” fashion in order to test theories that explain “cultural evolution and historical dynamics.” If you think that sounds like a pretty scientific way of describing traditionally humanistic disciplines like history and anthropology, you’re right — a recent Guardian profile portrays Turchin, a professor in the departments of mathematics, anthropology, and evolutionary biology, as an empiricist gatecrasher who can rescue the field of history from its myopic decadence. After a successful career as a biologist, he began to think about the ways his methods could benefit the field of history, writing a book called Historical Dynamics in 2003, and in 2011 founding a journal, Cliodynamics, dedicated to the “mathematical modeling of historical processes.” Gathering an interdisciplinary group of like-minded researchers around him, he began to collect huge amounts of information about our past in data form; this endeavor began bearing fruit in 2018, with publications in Cliodynamics and other journals.

His project, named the Seshat Databank after the Egyptian goddess of wisdom, knowledge, and writing, represents a new iteration of an old story: scientists thinking that humanists and social scientists could learn a thing or two from them about methodological rigor, in this case exemplified by a growing trend of researchers advocating the use of advanced data science tools in those disciplines.

The databank project, which is partially open to the public, is built on the foundations of complexity science, a field of study that relies on mathematical tools to discover interactions in systems that are too, well, complex for us to fully comprehend. The seemingly random vicissitudes of the weather, or the rise and fall of great civilizations, are revealed to follow regular patterns when viewed at a large enough scale. Collecting data on everything from agricultural tools to bureaucratic infrastructure in societies all over the globe and throughout history allows computers to perform millions of calculations, searching for mathematical interactions among the parts of the complex systems the data describes. And turning history into a data set is theoretically possible, because anything — not only something like population estimates, but also language and archaeological findings — can be converted into numbers and interpreted by computers, which can do math much faster than puny human brains. 
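
To illustrate what converting history into numbers actually looks like, here is a small, invented sketch; the variables are mine, not Seshat's actual coding scheme.

```python
# Invented variables, purely illustrative; this is not Seshat's actual schema.
rome_100ce = {
    "writing_system": "alphabetic",     # a categorical judgment call
    "population_estimate": 60_000_000,  # a number carrying huge uncertainty
    "professional_bureaucracy": True,   # a yes/no flattening of a messy reality
}

# One standard move: turn a categorical value into indicator (one-hot) columns.
WRITING_SYSTEMS = ["none", "pictographic", "alphabetic"]
encoded = [int(rome_100ce["writing_system"] == w) for w in WRITING_SYSTEMS]
print(encoded)  # [0, 0, 1]: machine-readable, with the nuance squeezed out
```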

What this project and others like it will ultimately reveal, however, is not laws that govern the course of history, but rather the inability of data itself to capture the truth about history and humans in general, whether or not the theories of complexity science are correct. While the researchers can use technology to find complex interactions within their data, the data itself radically simplifies concrete realities, whether it is collected for scientific or commercial purposes. Instagram might record that you liked a photo, but the platform doesn’t know why: maybe you felt obligated, or maybe your finger slipped. That doesn’t stop it from making all sorts of tools that rely on the assumption that your “like” means that you liked the picture. Data is always an abstraction, a representation of some real phenomenon that cannot be fully captured by numbers. To fetishize data and imbue it with an almost magical capacity to describe our lives only mystifies the ways it functions in the world today.


The Seshat view of a society.

To Turchin’s credit, he doesn’t display the typical STEMlord contempt for the humanities — in his papers, he generously cites anthropologists and sociologists, while simply arguing that he wants to use data to test and validate their theories in a more rigorous way. In fact, the project that he and his colleagues describe is rather less ambitious than what is written in the Guardian article, perhaps because it is meant to lay the groundwork for future progress. Regions of the world throughout time (beginning around 10,000 years ago, in the Stone Age) are sampled and assigned variables which capture various aspects of their social development, including social/political structures, economic systems, and cultural practices. This produces a dataframe, essentially an Excel file, which can become the basis for all sorts of mathematical operations.

They then apply the same kinds of statistical methods commonly employed by tech firms to predict, say, what kinds of ads achieve the best conversion metrics. But the Seshat dataset is about learning from our past, not boosting KPIs, and Turchin’s team uses it to test hypotheses about historical change by looking, for example, at the factors that influenced the development of information systems throughout the world. This in turn allows the researchers to form conclusions about social “evolution” — they love that word — which have the flavor of objective truth because they are the products of the scientific method. And these truths have a specific use, according to Turchin; in the Guardian profile, he holds that societies go through cycles of stability and crisis, and that his analysis can not only predict when a crisis is likely to occur, but also prescribe therapeutic interventions that can ameliorate or cure it.
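
As an entirely invented sketch of that workflow: a few Seshat-flavored rows in a dataframe, and a simple correlation of the kind used to probe hypotheses about social evolution. The column names and values are mine, for illustration only.

```python
import pandas as pd

# Invented, Seshat-flavored rows; the real databank is far larger and messier.
df = pd.DataFrame({
    "polity": ["Rome", "Han China", "Aztec Empire", "Medieval Iceland"],
    "population_log10": [7.8, 7.7, 6.4, 4.7],
    "admin_hierarchy_levels": [6, 6, 4, 2],
    "has_coinage": [1, 1, 0, 0],
})

# A toy "cliodynamic" question: does a polity's scale track the depth of
# its administrative hierarchy?
print(df["population_log10"].corr(df["admin_hierarchy_levels"]))
```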

Turchin’s theory thus represents a rejoinder to the dominant mode of academic history, which rejects any idea of underlying structure or universal law governing human affairs. Historians nowadays prefer to look at specific times and places in history, in the hope of understanding more deeply how human life is conditioned by the economic, political, social and cultural contexts surrounding it. This can tell us about what it means to be a human, but it does not reveal much about large-scale patterns in the development of our societies.

Then again, the idea that history moves in cycles is as old as the discipline itself. Herodotus, the first historian in the West, adapted a nested cyclical structure from the Iliad in order to give his Histories its epic heft. Aristotle and Polybius held that states undergo a process where monarchs become decadent and are overthrown by the aristocracy, only to be overthrown by the people after they decline, with a final descent into anarchy that is ended by the rise of a new monarch. Ibn Khaldun presented yet another cyclical model of social and political change in the Islamic Golden Age, wherein civilized cultures become soft and fractious, eventually suffering conquest by hardened, unified nomadic groups from their periphery. Modern readers may be familiar with Marx’s idea that the dialectic of class struggle results in regular periods of social conflict.

But for better or worse, professional historians focus on understanding how people lived in the past while tending to avoid The Meaning of It All. The attractiveness of Turchin’s project might be in its appeal to a more popular understanding of history, which is that it does have a structure which can be understood, and that its lessons can be extended into the future. As Turchin himself explained in a recent blog post addressing criticism of his project, he does not conceive of it as a “threat” to history, but rather as a continuation of comparative historical research in the mode of Polybius and Ibn Khaldun. There’s a lesson for the professors there, I think, although not necessarily the same one Turchin is offering. 

That the Seshat project addresses a certain desire for deeper meaning in history is one thing; its claims of objectivity, scientific rigor, and generalizability are the real problem. The “complexity science” upon which the whole endeavor rests claims that computers can “discern patterns not visible to the human eye,” in the words of the Guardian. This is certainly true, but it also ignores one of the cardinal rules of machine learning: garbage in, garbage out. No matter how sophisticated your algorithm, if your data is bad, then the model is useless.

It is clear that Turchin and his colleagues took great care in putting together the Seshat Databank, but that doesn’t change the fact that it represents a collection of assumptions and educated guesses about the relevant variables in “historical dynamics.” Not only do the researchers have to decide what aspects of past cultures are important enough to be measured, they are also forced to turn our complicated and often incomplete knowledge about the past into simple numbers. You could write a million words about Italy in the 15th century and not capture all the variation and complexity there, to say nothing of all the things we will never learn about the topic, because they are permanently lost. But at least historians are aware of that, and temper their interpretations of the past with an awareness that they are looking at it with modern eyes.

The Seshat researchers would probably reply that the point is not to understand any one culture in depth, but to use the advantages of scale offered by computers to deduce trends. Other projects use a similar logic. One digital simulation of society — in which blocks of code interact with one another following rules which approximate human behavior — might not be that useful, because it is so abstracted from our reality, but run millions of them, and you might glean some insights about how a city is likely to respond to an influx of immigrants, as a recent article in New Scientist claimed.
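
For the curious, here is a bare-bones sketch of what such an agent-based simulation can look like. This toy is a one-dimensional riff on Thomas Schelling's famous segregation model, far simpler than anything in the article, and all the parameters are invented.

```python
import random

def run_once(n_agents=100, steps=2000, tolerance=0.5):
    """1-D, Schelling-style toy: agents swap places when too few neighbors match."""
    agents = [random.choice("AB") for _ in range(n_agents)]
    for _ in range(steps):
        i = random.randrange(n_agents)
        neighbors = [agents[(i - 1) % n_agents], agents[(i + 1) % n_agents]]
        same = sum(n == agents[i] for n in neighbors) / len(neighbors)
        if same < tolerance:  # an "unhappy" agent relocates at random
            j = random.randrange(n_agents)
            agents[i], agents[j] = agents[j], agents[i]
    # Clustering score: fraction of adjacent pairs that match (0.5 = random mix).
    return sum(agents[i] == agents[(i + 1) % n_agents] for i in range(n_agents)) / n_agents

# Any single run is noise; the claim, such as it is, lives in the aggregate.
runs = [run_once() for _ in range(200)]
print(f"Mean clustering over {len(runs)} runs: {sum(runs) / len(runs):.2f}")
```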

Then again, it’s worth asking what’s the point of that point: How useful are these projects actually? Turchin also wants to run computer simulations to see which kinds of societies are more likely to collapse under certain stressors like foreign invasion or natural disaster, and also which kinds of interventions save them. I think that this project reflects and flatters the attitudes of elite liberals, who see society as akin to a human body — a complex system whose periodic crises (crisis being originally a medical term, as in “the patient is in critical condition”) should be managed by credentialed experts. A fine idea, if we set aside the fact that it ignores the political will of the vast majority of people. But what if the experts are wrong?


Both tech-optimists and their critics live in thrall to data fetishism. They imagine that access to, and mastery of, the vast reservoirs of data out there can confer supernatural powers — omniscience, omnipotence, and even the ability to create new life. The hype around technology’s ability to prevent the kinds of catastrophes that marred our past reflects the mystique that data currently enjoys. But our Google searches and Facebook likes are not portals into our innermost selves; more often, they are likely to reveal that we are interested in buying a refrigerator, or whatever. That’s mostly what our data is used for, and even then, not always effectively, as when you constantly see refrigerator ads, even though you just bought a Frigidaire last week.

But what the concept of data fetishism captures is not only the magical properties ascribed to our browser cookies and the like. It also gestures to the status of data as a commodity — “the new oil,” as the cliche goes. As in Marx’s concept of commodity fetishism, magical thinking about data not only secures its status as a source of profits, but also obscures the mundane and often exploitative social relations that go into its production and analysis. And what is history but a record of mundane and often exploitative social relations?

In other words, the Seshat Databank is less a “systematically organized” collection of all historical knowledge than a recording of user inputs by the researchers themselves. Their assumptions and biases — themselves shaped by historical conditions — imbue the dataset, but are hidden from view, Oz-like, by the screen of scientific objectivity. That’s to say nothing of biases inherent to the historical record itself, which is fragmentary and distributed unequally across cultures. So we ought to be circumspect about their claims of eventually solving the myriad crises facing us today. Because we are not abstractions, nor is the world we inhabit.

From Slacker to CEO: The (Linguistic) Rise of the Dude

In The Big Lebowski, Jeff Bridges’ character is called, rather famously, The Dude. As a bumbling, mostly passive, but ultimately decent figure, he is an ideal protagonist for the Coen Brothers’ parody of sun-soaked LA noir. In the opening voiceover, Sam Elliott’s cowboy character says that dude is “a name no one would self-apply where I come from,” alluding to the original meaning of the word: city-slickers or dandies who wanted to participate in the Wild West but weren’t man enough for the real thing. That’s what the term “Dude Ranch” insinuates, and it’s why in the first episode of Deadwood, Al Swearengen refers to the fatally in-over-his-head Brom Garret as “the New York dude.” In any case, Jeff Lebowski’s nickname is central to his personality and captures everything that differentiates him from his mirror, the titular “Big” Lebowski. This latter figure is an avatar for all things un-dude-like: wealth, power, conservatism, The Establishment.

Now think about how people use the word dude on the internet these days. This article from Deadspin is a good example. It’s a jeremiad against the “rich white dudes” who run the country and their puppet-jurist, Brett Kavanaugh. According to this usage, it is the Big Lebowski who is a “dude.” What the hell? Where does that leave The Dude?

You see this phenomenon more and more. While I think that the shift in usage has seismic implications, usually when I bring it up, people shrug and point out that bro is a much more popular dysphemism (I use this word reluctantly, but “slur” or even “epithet” would be awfully gratuitous) for straight white men. And the two concepts — the dude and the bro as manifestations of sexism and privilege — certainly did emerge into the national Discourse from the same milieu, that being online feminism, before being amplified during the 2016 Democratic primary contest and finally mainstreamed by the Me Too movement.

But what about the collapse in distinctions between dudes and bros, as captured in the unfortunate portmanteau dudebro? To my mind, it was the bros who played lacrosse, worked in law and finance, and dominated our political class. Dudes were, for the most part, surfers and stoners from Southern California. Now, I know people use both words prolifically as forms of direct address, and dude is even effectively gender-neutral in this function. It’s not quite right to say that they refer to entirely different modes of masculinity. Nonetheless, they carry significantly different associations, or at least they used to.

I’ve been wondering whether there’s some deeper meaning to us dudes becoming rich and powerful. You could say we’ve been gentrified. Does this linguistic microtrend have any social value, or at least impact on The Discourse?  A professor in one of my grad-school seminars might have asked — What kind of work does it do? The historical transformations of the word suggest that it has been central to American ideas of masculinity for over a century. And these days, when you see people talking about the subject of “Men,” it is usually a conversation about wrongdoing: toxic masculinity, arguing about the appropriateness of social penalties for sexual miscreants, backlash against the idea of toxic masculinity, male tears, paranoia about false accusations of sexual assault, etc. One of the main contributions of internet feminism, in my opinion, has been to show that all sorts of men can be bad, not just the sort of men you picture when you think of bad men. This process has taken many forms, for example the rising awareness that the elite can also be abusers, or the idea that “nerds” are not just shy, nice guys, but can engage in toxic masculinity as well.

Seen in this light, the collapse of distinction between dudes and bros, the hippies and elites, makes more sense. As we have seen just this week, in the Ryan Adams revelations, dudes can be monsters as well. There is still more to say about the process itself — for example, I think that the concept “Bernie Bros,” a group that was actually made up of dudes, was a critical inflection point in the story. I’m still not clear on exactly how it became acceptable to refer to the likes of the Koch brothers as dudes, and I’m not sure that it is entirely beneficial to think of masculinity as something that articulates itself in the same way regardless of class or other factors. I can accept, nonetheless, that this shift in usage has had some benefit to The Discourse. Still, I think we can all agree that dudebro has got to go.

The Manosphere as a Gnostic Cult

I was glad to see Max Read’s essay today in New York Magazine, which suggests that the “manosphere” sees the world in essentially gnostic terms. I’ve entertained the same idea myself, and like Read I came to it because of the clearly gnostic symbolism of the “red pill” concept, which the manosphere lifted from The Matrix. I suggest that everyone read it.

As I think Read realizes, however, the relationship between gnosticism, the internet, and our current political dilemma is a huge topic, which he could not hope to cover fully in the space he was allotted. Since he has already explained the framework for why online anti-feminism could be seen as a kind of gnosticism, I’d simply like to add some of my thoughts in list form. (Please note that I am NOT writing an essay with numbered paragraphs. I hate those.)

  1. Gnosticism is a way of seeing the world which has emerged occasionally in history. As Hans Blumenberg has discussed, gnosticism arose as a challenge to orthodoxy in the first centuries of the Christian era, and then again during the Middle Ages, this time in the form of scholarly nominalism in addition to heresies like Catharism. Gnosticism, unlike orthodox Christianity (or, as is the case now, mainstream liberalism), is a worldview based on intense feelings of alienation. Thus, it thrives in historical circumstances which are alienating. And who is a more quintessentially alienated subject than an economically and sexually frustrated young man who spends all his time on the internet? I think it’s important to consider both the phenomenology of internet use as well as the conditions of late capitalism when we look at this most recent eruption of gnosticism.
  2. One complicating factor is that for gnostics, the material world was created by an evil god, while a higher spiritual or intellectual realm was created first by a good god. The purpose of gnosticism was to escape this lower realm we have been cast into and return to our original home in the spiritual realm through the acquisition of sacred knowledge (“gnosis”). For the gnostics, the immaterial realm of ideas was more real than the material world, which was an illusion. The Matrix’s red pill symbolism inverts this, as does the manosphere’s notion that feminism, an ideology, perverted a well-functioning world governed by natural hierarchies and evolutionary psychology. Nonetheless, we still do see this kind of thinking in those circles, especially in the case of Jordan Peterson, whose Maps of Meaning made the gnostic argument that archetypal ideas decisively shape our material reality.
  3. Speaking of Peterson, gnostic cults often formed around sages who offered access to salvific knowledge. This is quite obviously the case in the manosphere as well, and Peterson himself can be seen as a kind of modern heresiarch. Another trend in gnostic cults was spiritual hierarchy, with more advanced practitioners served by a larger lower class of followers, who could not hope for ultimate salvation. We can see this type of thought, for example, in the false “alpha-beta” dichotomy Red Pillers rely on so heavily in their understanding of masculinity.
  4. There’s a huge body of literature on gnosticism that portrays the gnostics positively, almost as proto-hippies. There’s another, critical corpus of work on the gnostics by Christians, who catalog their various outrages. I’m not familiar enough with either tradition to say which is correct, but I do want to highlight something that the orthodox Christians have pointed out: that gnostic cults tended to extremes of either asceticism or hedonism. This was because of their denigration of the material realm, which suggested to some that all worldly things should be rejected, and to others that laws and norms restricting our behavior were pointless. We see this dichotomy reflected in the manosphere, with pickup artists indulging in as much casual sex as possible, and “Men Going Their Own Way” opting for “monk mode” in the service of achieving a stoic indifference to women.
  5. Read points out the connection to nihilism, which is symbolized by the “Black Pill.” The black pill is usually some kind of information that tells you there’s no escaping, for example, being involuntarily celibate. You have the jawline of a beta male, so no Red Pill is going to teach you how to get laid. The connection between gnosticism and nihilism is something that Hans Jonas noticed, and wrote about in an essay attached to his foundational 1958 text The Gnostic Religion. In the essay, he points out similarities between the gnostic outlook and German philosophers like Nietzsche and Heidegger. What I find important here is that gnostic expectations, when dashed, can lead to nihilistic despair, which should be a cause for concern.
  6. Many critics of the left, most notably Eric Voegelin, have described Marxism as a form of gnosticism. Today, we often see online leftists indulging in gnostic-seeming ideas, most significantly that this world is hell. The notion that Marxist or leftist thought offers an interpretive key which makes sense of our alienation, unlocking the secrets of existence, and moreover suggests a route to a solution, is all recognizably gnostic. It is also very seductive. This suggests that gnosticism may be a heresy, but it’s not completely useless. It’s just a matter of avoiding some of the pitfalls listed above (particularly nihilism, and elitism).

Any of these thoughts could be developed further, and I’m curious to hear what others think. All in all, I’m hoping that Read’s essay sparks a trend in comparing modern experience to late-antique religious phenomena, because that’s one of my favorite things to do.

Vocabularies of Racism: Sin, Pathology, Structure

The question of whether politicians are racist is all over the news these days, and not just in the case of the Governor of Virginia. Several Democratic candidates for President have been asked whether they believe Donald Trump is a racist. Meanwhile, in the realm of celebrity, Liam Neeson admitted to entertaining fantasies of racial violence while denying that he was a racist himself. All of this has raised, once again, a question which has vexed Americans for some time now: how do we adjudicate whether or not someone is racist? I am certainly not the person to answer this question, but I have some thoughts on why the confusion exists, and it has to do with our political vocabularies.

Rather than relying on a spectrum or any other abstracted spatial metaphor, I find it most useful to organize political ideas into three broad yet easily distinguishable traditions: the conservative, the liberal, and the radical. These three streams of thought have accounted for most of the significant political concepts of the past few centuries, and it is often illustrative to look at how each, in a general sense, approaches certain political problems. For example, as Alan Kahan has written, the three traditions had very different ideas about how to determine who could participate in politics during the 19th century. Conservatives thought that suffrage was a hereditary property, not to be diluted by expansion. Radicals argued that it was a universal right, and liberals suggested that a limited but gradually expanding franchise should be delineated by “capacity,” or the ability to behave as a good political subject.

Now, think of how different Americans tend to talk about race and racism. We will set aside, for now, those Americans who are openly racist—the vast majority agree that it is bad to be a racist. The confusion lies in how they define racism. What follows is a very rough sketch of how the three major political traditions discuss racism in American society. It is important to remember that people can use the vocabulary of more than one tradition at different times. I’m merely trying to point out some general tendencies.

Conservatives use religious vocabulary and frameworks in talking about cases of racism. To be a racist is a kind of political or social sin. That is, it is an individual fault, and its cause is some defect in the individual’s heart, or soul. The criterion for “being a racist” is an open profession of an interior belief that one race is superior to another, similar to Christian professions of doctrinal beliefs. It is critical that the individual’s inner beliefs and outward statements or acts are in alignment. Therefore, renouncement of personal racist beliefs (“I don’t have a racist bone in my body”) is often enough for conservatives to acquit an accused racist, just as the renouncement of heretical beliefs was often sufficient to demonstrate one’s allegiance to orthodox Christianity. In the rare case that a person actually confesses to holding racist beliefs (and furthermore accepts that they are reprehensible), the only requirement for readmission into polite society is honest repentance, or simply the passage of time.

As with other social ills, when liberals talk about racism they rely on a system of medical metaphors: pathology, trauma, therapy, etc. Racism is conceived of as a disorder corrupting social relations within the body politic, which ought to be harmonious. It exists perhaps because of specific malignant agents or structures within the whole, or maybe without an intentional cause at all — it can be a kind of social neurosis, an imbalance that can develop on its own or represent an inheritance from earlier generations. The solution to the problem of racism is therefore limited therapeutic interventions by expert actors. This can mean the government, but also includes nonprofits, NGOs, specialized activists, and even private businesses. Occasionally a mass movement is necessary, but this is usually in the service of winning some specific policy reform that is meant to provide for healthier race relations. On an individual level, racists can be rehabilitated through education, psychological treatment, or even travel and culinary experimentation, all of which fall under the general rubric of “self-improvement.”

For liberals, racism and other bigotries are things that prevent the social system from functioning as it should. To radicals, on the other hand, racism is a feature, not a bug. It is one of the many structures of power that allow the ruling class to dominate and exploit the great majority of people. It exists not only in structures, such as the criminal justice system, but also in individuals as an ideology which buttresses the dominant position of the ruling class. Whereas liberals see prejudice as irrational, to radicals racism makes perfect sense, because it justifies unequal social relations after the fact. Thus, the solution to the problem of racism is to topple oppressive structures through mass movements and to confront racist ideology head-on, using force if necessary.

Once again, these are all rough tendencies, and oftentimes people are not consistently in one camp or the other. For example, the putatively liberal news media does rely on a liberal vocabulary when discussing racism in general, but when it comes to adjudicating whether or not individuals are racist, they often fall back on the more stringent conservative criteria. (I would even suggest that this is a big reason why no one is particularly pleased with the state of the discourse on race.) In any case, I merely offer this sketch as an explanation for some of the confusion — why, for example, conservatives insist that black prejudice against white people is just as big a problem as white supremacy. Sure, it’s partly self-interested hypocrisy, but it’s also justified by a much broader set of concepts that inform how they see the world. It’s useless, in my opinion, to point out their hypocrisy and accuse them of “bad faith”: my sense is that they genuinely view the issue like they say they do.

More interesting than the attenuated and self-serving conservative view, however, are the differences between radicals and liberals. The distinction I draw here can of course be applied to a variety of other topics: wealth inequality, climate change, imperialism, etc. In a political environment where left-wing activists are called “very liberal,” I think it’s worthwhile to continue to articulate what makes them different, even if any eventual solution to our deep-seated racism problem (not to mention our current political crisis) will require some synthesis of the two traditions.