Paralogy, the Paradox of Tolerance, and Diversity

A conversation with a friend ages ago collided with some stuff I’ve been musing on lately–thoughts still percolating on what all this has to do with solidarity and intersectionality and the circular firing squad–and this came out. In rather a typo-ridden rush, so thanks to Benny Blue for his fast and thorough proofread!
A while back—long enough ago that the rise of ethnonationalism was still worrying as opposed to horrifying—I was having an email exchange with a friend of mine about it and he raised a point that I couldn’t immediately argue: Ethnonationalists claim that ethnic divides are insurmountable and inevitably lead to (presumed violent) conflict. Isn’t multiculturalism a concession to the first half of that statement? Wouldn’t it be better to have a monoculture in which all association is purely voluntary and all identity is purely self-defined?
I reacted with visceral horror, but at the time couldn’t really formulate a counterargument beyond “No, that would be awful.”
Here, most of a year later, is that argument.
I’m going to start with some basic definitions from literary theory:
A story is a set of events: “The king died and the queen died.”
A plot is a set of events arranged in a sequence and given causal relationships: “The king died, and then the queen died of grief.”
A narrative is what’s left of a plot after you subtract out the story; it is the relationships between events without the events themselves.
Think of a plot as a kind of structure, objects in space supported by a scaffolding. The scaffold is the narrative; the objects it holds in place and connects to one another are the events making up the story. The same events—the same story—become a very different plot when you change the narrative: “The king died of poisoning, and then the queen died of hanging.” The first plot implies the queen mourned the king so deeply she perished; the second that she murdered him and was executed for the crime. Or a third narrative, different from the second only in a single word: “The king died of poisoning, and then the queen died of hanging herself.” Now we are back to the implications of the first narrative, yet the basic events—the “facts” of the story—never changed.
The Postmodern Condition
In 1979, Jean-François Lyotard’s seminal book, The Postmodern Condition: A Report on Knowledge, used the concept of narrative as a framework through which to examine the broader world of human thought. He argued that all knowledge works the same way that literature does; that is, any knowledge system consists of two kinds of knowledge: positive knowledge (which is analogous to story) and narrative knowledge (which is analogous, as the name implies, to narrative). Positive knowledge consists of the facts and only the facts; narrative knowledge is what gives those facts meaning, by connecting and organizing them, placing them in relationships with one another. So, for example, if we think of a field of science as a knowledge system, the positive knowledge consists of the raw data, the experimental results or field observations, while the narrative knowledge consists of the models used to interpret that data.
Lyotard argued that the general trend of the 20th century was a trend away from grand narratives to petite narratives (also called metanarratives and micronarratives, respectively). A grand narrative is a broad, organizing idea that is (or presents itself as being) universally applicable, true for all people in all places and times. By contrast, a micronarrative is confined to a single system of knowledge, and does not claim universality. Grand narratives span an entire culture; micronarratives exist within a single community. Modernism prized and sought after grand narratives: “All stories boil down to the Hero’s Journey.” “Everything in the universe can be reduced to Newtonian physics.” “Liberal democracy and capitalism are the best political and economic systems, and everyone everywhere would be better off under them.”
However, grand narrative comes at the price of squeezing out micronarratives. Communities who won’t “get with the program” are silenced and marginalized, whatever ideas they might have been able to share blocked because of their incompatibility with the grand narrative. Facts which cannot be fit into the grand narrative are discarded. Monoculture emerges—and soon begins showing its cracks.
Over the course of the 20th century, a number of developments undermined the grand narratives of modernism. Art movements like Dadaism and cubism questioned the “rules”—the grand narratives—of representational art. Scientific developments like relativity and quantum mechanics—which both appear to be true, and yet also appear irreconcilable—cast doubt on the grand narratives of physics. Civil rights movements, economic upheavals, and the conflicts engendered by colonialism cast doubt on the grand narratives of liberalism and capitalism.
Lyotard argued that these were all symptoms of the transition from singular grand narratives to a multiplicity of micronarratives. He proposed that the next stage of humanity—the next era of art and philosophy—after modernity would be characterized by what he called paralogy, a pun of sorts: it is a prefix and a suffix with nothing in between, suggesting systems of knowledge (-logy) coexisting beside one another (para-) without regard to their content (which would be the missing stem to which the affixes would normally attach). The titular “postmodern condition” of his book—which in turn has given its name to postmodernism—is the state of transition from modernism to paralogy, a period of confusion and social upheaval as grand narratives break down and micronarratives re-emerge or compete to become grand themselves.
But what does paralogy look like? Lyotard describes it as a multiplicity of coexisting knowledge systems, each shared by a given community. This exists in any society: the knowledge system of biology is shared by the community of biologists, the knowledge system of Jewish heritage is shared by the Jewish community, and so on. Someone who is both a biologist and Jewish is in both communities, and hence familiar with both knowledge systems; they use the narrative knowledge of biology when looking at biology facts, the narrative knowledge of Judaism when they look at Judaism facts, and some combination of both when looking at the positive knowledge shared between the two systems.
This is a microcosm of paralogy. In Lyotard’s conception of the post-postmodern condition, each community has its own system of knowledge, its own narratives, and applies them within that community. Any given individual belongs to multiple communities, and so each community is linked to the other communities to which its members belong, creating a network that spans all communities in the entire culture. Ideas generated in one community spread to others through this network, allowing all members of all communities to hear and evaluate them if they wish. To Lyotard this communication is key; he regards culture as an idea-generating engine, and paralogy makes it a better one: ideas which are non-obvious or even incomprehensible in one narrative can be found by another, and spread from there.
There’s another, and in my opinion better, argument for paralogy, however: it allows for the greatest possible diversity. Personal identity is narrative in nature: ethnicity is a narrative about where we and our customs come from, sexuality a narrative about how we experience (or don’t experience) attraction, and so on. When I say that I am a cishet male atheist postpositivist feminist socialist Jew, I am announcing a variety of ways in which I organize, relate, and assign meaning to my thoughts and experiences. In a paralogous society, I am free to belong to a multiplicity of communities that share each of those narratives, and many other communities besides. For virtually any facet of identity, I can find a group where that identity is shared, a community within which to explore, discuss, and evolve that narrative—and yet there is no concern of an “echo chamber” effect, because I belong to a multitude of other communities that have their own narratives, yet include some of the same positive knowledge in their system. More importantly, people who, in our current society, have had their identities marginalized and their narratives squeezed out by the grand narrative can do the same, freely forming communities where their identities and narratives are accepted and getting their ideas onto the same paralogous network as everyone else’s.
But isn’t this just what my friend described? People moving freely between ideologies and identities as they wish, in one grand monoculture?
No, but explaining the difference will require a bit of a detour and a closer examination of how narratives, and especially grand narratives, work.
Every narrative has certain ideas, or kinds of ideas, which it trends toward or away from. A work of purely mimetic fiction (also called non-genre fiction, but that’s another grand narrative at work) will not have characters go for a journey on a starship. A conspiracy theory transforms evidence that contradicts the theory into evidence that the conspiracy is powerful enough to fake the evidence. No scientific investigation will ever conclude that a phenomenon is the work of supernatural forces.
These are all examples of narrative imperatives: the structure of a narrative can have trends regardless of the positive knowledge associated with it. The sciences, for example, seek naturalistic explanations for natural phenomena; any positive knowledge which resists such explanation must be either rejected as an error or lie, or treated as an unknown but natural phenomenon for which there is no naturalistic explanation yet. “A god did it” is not science any more than traveling at Warp 6 is mimetic fiction. This is not a criticism of the sciences; it is an essential part of what makes them science. In this case, the narrative imperative is a good thing: if you want naturalistic explanations of natural phenomena, which you presumably do if you’re doing science, you want your narrative knowledge to push you away from supernatural explanations.
Not all imperatives are so benign. In particular, some narratives have imperatives that drive them to become grand narratives; in other words, some narratives will, given the space to do so, tend toward generating the idea that the community should impose these narratives on everyone and eliminate any and all contradictory narratives. I call these grandiose narratives: narratives which are not necessarily grand narratives in any given culture, but which contain a narrative imperative to become grand narratives if possible. Some examples of grandiose narratives which are not grand narratives in our culture: scientism, the belief that the sciences are the only true knowledge system and all others are false, which therefore tends toward the conclusion that other knowledge systems should be eliminated;* evangelical Christianity, which holds that there is a moral imperative to persuade all people to become Christians; antitheism, which holds that all religions are false and should be eliminated; ethnonationalism, which insists that one and only one ethnicity dominate a culture; and (just so that this list contains one item to which I am not fundamentally opposed) Marxism, which insists on a revolution leading to a single universal economic system and philosophy shared by all.
Since the presence of a grand narrative makes paralogy impossible, grandiose narratives are a fundamental threat to paralogy. But how to deal with them?
The Paradox of Tolerance
There is a common thread among grandiose narratives: they are all intolerant. Scientism cannot abide non-scientific beliefs, and given the opportunity and power, a community which adheres to scientism must seek to eliminate those beliefs. The same holds for evangelical Christianity and non-Christian beliefs or antitheism and religious beliefs. Ethnonationalism is even worse: it cannot abide non-ethnonationalist beliefs or other ethnic identities, and so, given the opportunity and power, an ethnonationalist community must seek to eliminate not just the beliefs but the ethnicities as well, which is to say it must engage in ethnic cleansing and genocide.
Grandiose narratives are thus anathema to a paralogous society; they cannot be tolerated if the society is to exist. In his The Open Society and Its Enemies (which, it should be noted, predates The Postmodern Condition by more than three decades and thus contains no reference to paralogy), Karl Popper coined the term “The Paradox of Tolerance” for this phenomenon. He discussed it in terms of building a free and tolerant society: a society which tolerates anything, including intolerance, will inevitably be taken over by the intolerant and therefore cease to be tolerant.
Consider a society which is almost perfectly paralogous, but there is one community which is intolerant—let’s say they’re white supremacists. As a community in a paralogous society, they are on the network and able to spread their ideas—which, remember, were generated by a white supremacist narrative. Members of this group will be members of others, that being how the network works, and so will carry their racist ideas into other communities. Not all racist ideas are obviously so; subtly racist ideas will enter the knowledge systems of other communities, making those communities racist. The mere presence of a white supremacist community makes the society as a whole more racist—not every community will be equally “infected,” and some may even stay completely free of racism, but people of color will be completely accepted and free to be themselves only in those few communities, the very definition of marginalization.
Or consider an example which doesn’t depend on paralogy or even the narrative-based epistemology we’ve outlined here: imagine a society which is perfectly tolerant, except for one anti-black racist. Put them anywhere in that society, and they make things worse for black people. If they’re a school teacher, their black students suffer. If they’re a clerk at the DMV, black people applying for licenses suffer. Even if we’re lucky enough that they don’t work any job that gives them power over any black person, if they have any black coworkers, those coworkers have to suffer dealing with a racist. If the racist has any black neighbors, or runs into a black clerk at the DMV when they go for their license—if, in short, the racist has any contact with black people whatsoever—the lives of those black people are made materially worse, and thus society as a whole is demonstrably a little less tolerant of black people than everybody else. There are only three possible things society can do to deal with this: lock the racist away somewhere where they’re guaranteed never to meet or have any effect on a black person (which is intolerant of intolerance), find a way to keep black people away from the racist (which is a restriction on black people but not anyone else, and hence intolerant of black people), or find a way to make the racist more tolerant (which is, again, intolerant of intolerance).
Of those options, the middle one makes the problem worse; only the first and third actually work to make society more tolerant. Thus, a perfectly tolerant society is impossible unless each individual person is perfectly tolerant (which seems unlikely—we have to assume that if any reasonably sized society tolerates something, somebody somewhere is going to do it). A maximally tolerant society, by contrast, is one in which the only thing not tolerated is intolerance. This is the “paradox,” though it’s not actually one if we phrase it as follows: the maximal tolerance a society can achieve is to tolerate everything except first-order intolerance, where “first-order intolerance” is defined as intolerance of something which is not itself intolerant.
Note that we can still state the parable of the world’s only racist in terms of our epistemology, though it was presented as not requiring that epistemology: the racist’s knowledge system includes a grandiose narrative imperative to make life worse for their racism’s targets. It doesn’t matter if they themselves don’t care whether the rest of society shares their intolerance or not; as long as they act on their racism, the mere existence of this intolerance tends to create a society-wide grand narrative of intolerance.
Grandiose narratives are inherently intolerant; intolerance is inherently grandiose. They are, in other words, two words for the same thing, and hence the Paradox of Tolerance is also the Paradox of Grandiosity: the maximally paralogous society is one which excludes only grandiose narratives.
Paralogy vs. Monoculture
The Paradox of Grandiosity answers the question of how paralogy differs from my friend’s monoculture idea. Consider one feature of communities in a paralogous society: openness. We can describe the openness of a community as its position on a spectrum from fully open communities (which define a member as anyone who wants to be a member) to fully closed communities (which have requirements for membership that are impossible to achieve for anyone not born a member).
We can immediately see that an ethnonationalist community is going to be fully closed: along with any other requirements, if you’re not born part of the “right” ethnicity, you can never become a member of the community. But that’s true of racial identity in general: you may or may not choose to participate in or identify with the racial community in which you were born, but you cannot join any other. You can join closely related communities (for example, joining a family through marriage), and it is possible to be born part of multiple racial communities simultaneously, but racial identity is closed.**
The sciences are partially closed communities: becoming a scientist is possible for anyone in theory, but requires extensive effort and training. At least in some forms, evangelical Christianity is completely open; you can become one just by deciding you are one. Once you have joined, the pressure of the community and strong narrative imperatives will then begin making extensive demands, but joining itself is trivially easy.
These examples should make clear: a closed community is not necessarily grandiose or vice versa. Being closed is not the same as being intolerant—but my friend’s monoculture does not allow closed communities of any kind, since they restrict the individual’s freedom to identify however they want and participate in any community they want.*** My friend’s monoculture is intolerant of closed groups which are not themselves intolerant: it is first-order intolerant.
In short: the real concession to ethnonationalism is not acknowledging that diversity exists; it is rejecting that diversity should exist.
*“Eliminated” not necessarily implying elimination by force, of course. However, eliminating a knowledge system by persuasion still means the loss of its narrative, the dissolution of the community to whom that knowledge system belonged, the marginalization of any associated identities, and the erasure of any unique ideas which that knowledge system might have generated.
**Note that ethnicity and nationality are not the same as race and therefore do not have to function the same way. Judaism, for instance, is a mostly closed identity: someone who is not Jewish can become Jewish, but only through a difficult process.
***It is out of the scope of this essay, but I know that somebody at some point is going to ask about how all of this relates to trans issues, given that TERF rhetoric often includes criticism of the idea that anyone should be able to identify however they want and participate in any community they want without exception (which is justified) along with claims that this is what trans narratives imply (which is not). My answer, in brief: The binary model of gender is a grand narrative that rejects observable facts and marginalizes people. Those people are not themselves being intolerant—nothing about being trans, intersex, or nonbinary creates a narrative imperative to prevent others from being cis—and hence the binary model of gender is first-order intolerant. TERFs and other transphobes are grandiose and intolerant, and thus the maximally paralogous society cannot permit them.

Fundamentals: Afflict the Comfortable, Comfort the Afflicted

Been a while since I’ve done one of these, huh? If you’re not familiar, Fundamentals is a series where I discuss what I regard as fundamental ideas which underpin what I talk about on this site. These are the basic assumptions, How I Approach the World 101, written primarily so that I can point to them and say “go here” instead of having to periodically reiterate them. 
There’s an old maxim in journalism, which occasionally shows up in other fields: “Afflict the comfortable, comfort the afflicted.” Its meaning, in journalism at least, is fairly straightforward: avoid running stories in ways that make things worse for people in pain (for example, don’t publish the names of crime victims unless they want you to), and actively seek out stories that help people in trouble (for example, covering the negative impact of oppressive policies) or make life more difficult for people in positions of power (for example, uncovering a political scandal).
But I regard this as more than just a standard of journalistic ethics. It is a fundamental moral principle that underpins a lot of what I do, and so it’s worth unpacking a bit.
That “comfort the afflicted” is an important moral principle should go without saying. When people need help, you offer to help. (Helping, not saving, of course, but I’ve covered that elsewhere.) But why is it necessary to afflict the comfortable?
The answer is simple: communal responsibility. We are each of us responsible for bettering our own communities and cultures, which necessarily means subjecting them to scrutiny and change. This necessarily means that the members of our community who are comfortable with things as they are must be shaken up–if we are not disturbing them, then we are not improving our communities.
And this has wide-reaching implications. The common adage to “punch up, not kick down” is just a restatement of this principle. It’s why “reverse racism,” “men’s rights,” and “class war against the rich” are prima facie nonsensical, because whiteness, manhood, and wealth are excessively comfortable, safe positions in our society, and so puncturing their bubble of comfort is a necessary exercise in communal responsibility. That changes in our society are being perceived as afflictions by the comfortable serves as evidence that these changes are a good idea–or, to put it another way, the comments thread on any post about feminism demonstrates the necessity of feminism.
It is simply not enough to just help people who need help. Fundamental social change is required, and to achieve that will necessarily mean making people uncomfortable.

Fundamentals: Everything Ends


The one absolute certainty, the one thing we know, is death.

So of course we spend most of our lives trying to run or hide from it, because certainty is terrifying. We pretend that some aspect of the self survives death (which of course we all know instinctively isn’t true, which is why we mourn death more intensely than any other departure or separation), we pretend that we ourselves are immortal, or that something eternal exists–a perfect eternal state of bliss somewhere in the past or future or sideways from the everyday world of change and time and death.

And we do this knowing it’s false, because of the essential tragedy of the human condition, the need for unconditional love. We need to believe that love–some kind of love, be it familial or fraternal or romantic–is forever, but of course it never is; if nothing else it ends with the death of the lover. So we convince ourselves that there’s a way out, either a way to shed the need to be loved or a way to find something eternal. We lie to ourselves that there might be things without beginning or end, that there might be such a thing as “perfect.” All the while watching people die, endeavors fail, institutions fall, civilizations collapse.

But this doesn’t have to be a bad thing. Yes, everything we build must someday crumble. Yes, the day will come when the last person who knew you personally dies, and with them all direct memory of you vanishes from this Earth. Yes, even if you become a Shakespeare or an Alexander or a Siddhartha, sooner or later you will end up an Ozymandias.

But it also means that every corrupt and restraining authority will someday fall, that every unfair rule will someday cease to be enforced, that every bully’s strength will someday fail. It doesn’t matter what revolution you desire; wait long enough and the object of your rebellion will fall, if not in your lifetime then at some future point.

Nothing lasts forever, which means everything is always changing. Surely some of that change has to be for the better, at least some of the time, right?

Fundamentals: Where Morality Comes From

I’m a firm believer that the key to understanding some aspect of human behavior is to understand the motivations behind it. If you know why people do what they do, then understanding what they do becomes trivial.

Further, I firmly believe that you cannot prescribe until you first describe–that until you have done your best to understand what something is, you have no business arguing about what it should be. So it follows that, if I am going to talk about morality and ethics–and given that I regard morality, politics, and aesthetics as inextricably intertwined, I have talked and will continue to talk about them–it behooves me to first try to understand what motivates them.

So why do people want to be moral? The glib answer, of course, is the same reason anyone ever wants anything: they think it will feel better than the alternative. But what feelings, specifically, are at work with morality? I think it comes down, ultimately, to four emotions:

  • Shame: Being seen by others as immoral feels bad, being intimately associated with rejection and negative judgment.
  • Guilt: Seeing oneself as immoral likewise feels bad, being associated with failure and self-doubt.
  • Pride: Seeing oneself as moral (and being seen by others as moral) feels good, because it’s associated with acceptance, positive judgment, achievement, and self-esteem. (Note: Tentatively I place the sense of fairness here–that is, we wish to be treated fairly and to treat others fairly because of its impact on our sense of pride. It’s possible, however, to regard fairness as a separate, fifth emotion underlying morality.)
  • Empathy: Not exactly an emotion, but definitely emotional in nature and a strong motivator behind altruism.

Ultimately, moral behavior is a matter of avoiding shame and guilt, pursuing pride, and acting with empathy. Moral crises come about when it’s not possible to do all of these at once–for example, when avoiding social disapproval means failing one’s own standards and vice versa.

Of course, looked at this way, it becomes immediately obvious why no logically consistent moral code–regardless of the metaethics behind it–can really work: emotional states aren’t logically consistent. And we can’t actually reject this emotional basis, because without it there’s no reason to be moral. Nor can any one of these emotions be ignored: Shame is necessary because it’s how we learn to be guilty. Guilt is necessary because it’s the moral equivalent of burning one’s hand on a hot stove. Pride is necessary because without it the only advantage to being moral over being amoral is that you might get caught. And empathy is necessary because without it morality becomes an irrelevant abstraction, unconnected with the material wellbeing of real people in the real world. Together, shame and empathy prevent morality from becoming solipsistic or narcissistic; guilt and pride prevent it from becoming conformist.

So why bother with thinking about morality at all? Why not just go with kneejerk emotional responses to every situation? I think Daniel Dennett has a good answer here, and I recommend the relevant chapters in his Freedom Evolves on the topic. (And all the rest of it, for that matter.) But basically, thinking about moral questions and coming up with rules of thumb serves a few purposes.

The first reason is what Dennett describes by analogy to the story of Odysseus and the Sirens: Having principles is a way of metaphorically tying ourselves to the mast, so that when we face a situation “in the moment” we are better prepared to resist temptation. In other words, principles are about recognizing that we are imperfect actors and sometimes make decisions in the moment that, once we have time to think about them, we regret. Thinking about moral questions and adopting rules of thumb or broad principles is a kind of self-programming, training ourselves to feel extra guilt when we break them and extra pride when we follow them, thus increasing the likelihood of resisting temptation in the moment.

Another reason is communication. Part of morality is accepting responsibility for one’s community, and shame is a critical tool for policing that community. Shared principles are a key way for a community to define for itself how it will police its members by clarifying what kinds of behaviors are appropriate for other members of the community to shame. Of course members of the community may disagree, resulting in conflict, but conflict is an inevitable (and frequently desirable) part of being in a community.

To be clear, however: principles, lists of rules, and all other attempts to codify morality are models, which is to say they are necessarily not the thing modeled. Morality is not adherence to a set of principles, but rather a complex and irreducible social and emotional state, which is why excessive adherence to principles always leads to advocating obviously immoral behavior. Ethics, in other words, is rightly a descriptive, not prescriptive, branch of philosophy: journalistic ethics is a description of how good journalists behave, not a set of commandments handed down by the journalism gods from on high. Studying such models is obviously very useful in becoming a good journalist, but is not in itself sufficient–like any rule set, the point is to understand them well enough to know when to break them. Journalistic ethics are, of course, just an example–the same goes for any other kind of ethics.

Of course, if morality is emotional in nature, it follows that just as there is no “correct” way to feel about something, there is no “correct” morality. That said, just because there’s no correct way to feel doesn’t mean there are no incorrect ways; it’s simply factually untrue to say that there isn’t a broad consensus about certain behaviors in certain scenarios. Baby-eating, for example, is almost universally regarded as repulsive, and so we can fairly safely say that a model of morality which prescribes eating babies as a normal practice has failed to accurately depict its subject.

More to the point, it doesn’t actually matter that there’s no correct model: if my morality–which here includes both the ways in which I model morality through principles and reason and the underlying emotional reality–demands that I oppose someone else’s actions or attempts to make their model of morality dominant within the community, then it demands it. Which of course is why people give logically inconsistent answers to ethical dilemmas: the curious responses to the trolley problem are of course completely understandable once you recognize that while passive and active choices aren’t logically different, they feel different.

In the end, as with aesthetics, any prescriptive model will necessarily be imperfect. But that’s the human condition, isn’t it? Making do with imperfect materials, striving ever to replace our old mistakes with new ones.

Fundamentals: Stop Suspending Disbelief

At Anime USA last week, I mentioned in one of my panels–might have been Analyzing Anime 101, might have been Postmodern Anime, I don’t remember which and haven’t gone through the video yet–that “the concept of suspension of disbelief needs to die in a fire.” This, of course, led to some people coming up to ask me about it after the panel (because for some reason when I ask for questions at the end of a panel, nobody raises a hand, but the minute I start packing up, I’m swarmed with people wanting to ask questions).

Here is the problem with suspension of disbelief: it makes you less literate. I mean, it’s also fundamentally impossible, but even attempting it makes you less literate, because what suspending disbelief means is trying to forget that a story isn’t real. Which means, in turn, giving up the ability to recognize it as a deliberately constructed artifice, created by actual human hands for an audience of actual people, within the context of a culture.

That is a huge thing to ignore. It means losing all ability to examine technique, to think about the difference between portrayal and endorsement, to question a work’s positionality. By pretending that a work is a window to another world, you erase the distinction between author and historian. Everything that happens in a story is a choice by its storyteller; there is no otherworld where events proceed independently, and of which the storyteller is an objective, uninvolved observer dutifully recording the deeds of others.

Consider, since it is the main subject of this blog, a cartoon. To suspend disbelief is to pretend that its characters are real people within a real world that obeys consistent rules, which is anathema to cartoons like, say, Ren and Stimpy or Adventure Time, which depend on constantly twisting and warping settings, situations, and characters to surprise and entertain. To suspend disbelief is to ignore the animation itself, to refuse to examine how art styles, distortions of characters’ bodies, framing, and camera angles shape the story and convey the priorities and interests of its creators.

This is not to say that we should never consider the diegetic; that’s as absurd as only considering it, as “suspension of disbelief” demands. It is possible to talk about a character, to discuss their motivations and experiences, to have an emotional reaction to them, without pretending that they’re real. People have emotional reactions to the imaginary all the time, from anxiety about imagined scenarios for an upcoming task to sexual fantasies to happy daydreams. I can say, “Batman is driven by survivor guilt over his parents’ death,” or “Twilight Sparkle is prone to anxious overreaction,” and it remains true, even though the characters in question do not exist. Indeed, it is because they are characters, and thus far less complex and self-contradictory than real people, that I can make such straightforward claims about their behavior with little expectation of contradiction.

There is thus nothing at all to be gained from the suspension of disbelief. It does not add anything to the appreciation or exploration of narrative, and it cuts off access to much. It is yet another example of how the emphasis on basic literacy in general education gets in the way of full literacy.

Fundamentals: Criticism and Social Justice

The world in which we live is deeply, horrifyingly unfair.

Some of that unfairness is inescapable, a consequence of the terrifying randomness and even more terrifying determinism of the universe. Our friends and loved ones are as likely to be hit by buses as our enemies. Babies who haven’t even figured out that other people exist yet, let alone tried to hurt them, get diseases that cause horrible lifelong suffering. Market forces tend to amplify initial small disparities in wealth. Trashy reality shows are more profitable than well-written and acted dramas, even though hardly anyone actually watches them.

But a lot of that unfairness was invented by humans, and is entirely under human control. This kind of unfairness can be divided into two categories, which is an entire article on its own, but we’re interested today in only one of them, systemic injustice: all of the ways in which the systems and power relations that comprise our society are structurally unfair, even in the absence of deliberate action by any one individual. In other words, for this particular topic we’re less interested in unfairness that arises from people cheating, and more interested in unfairness that arises from the rules themselves.

That’s where social justice comes in. The idea is simple, its execution hard: create a society in which as much systemic injustice as possible is eliminated or corrected for. More fundamentally, social justice is simply the idea that fixing systemic injustice wherever possible is a major moral imperative. That one is not personally responsible for any particular unfair act is irrelevant; systemic injustice is a problem of a community, rather than individuals, and therefore a matter of communal, rather than personal, responsibility.

Which brings us to the role of criticism in all this, and in particular a specific family of critical schools including the feminist, queer, and postcolonial schools, among others. The common thread is a particular function of critical analysis, namely the identification of ways in which the text expresses, reflects, encourages, or perpetuates systemic injustices. From a social justice perspective, this is an extremely important activity. Texts, after all, are a major component of a culture, and a community’s culture is the primary means by which it influences the behavior of individuals within the community. In other words, it is by means of culture that systemic injustices perpetuate themselves, and therefore it is in the realm of culture that they must be met, identified, and combatted.

The primary function of criticism in general, if such a thing exists, could be said to be helping us think about culture and engage with it more mindfully. The function of social justice criticism, then, is to engage with culture while being mindful of systemic injustices. Note that this is not necessarily the same thing as criticizing a particular culture; particularly when dealing with works that originate outside one’s own community, it’s important not to project one’s own community’s issues onto that other community. That said, the interpretation of a text is as much an expression of culture as the creation of the text, so it is entirely legitimate to look at how a text from one culture might read in one’s own culture, as part of a critique of one’s own culture.

Ultimately, the goal of this is not to say, for example, “This movie is racist and therefore bad.” (Though, of course, there are movies which are bad and racist, including ones where the racism is what makes them bad. But racism doesn’t automatically make a work bad; it makes it racist.) The goal is not to attack individual works or creators–though sometimes that is necessary, because one of the ways in which systemic injustice functions is by making it easy to ignore individual acts of injustice–but rather to, as a member of the community, participate in one’s communal responsibility to help identify and mitigate systemic unfairness.

The key point here is that social justice criticism is emphatically not about attacking another, because it’s not about the Other at all. It’s about confronting the darkness in the extended Self, one’s own communities and cultures, and exposing it to light so that it can be dealt with. It’s about embracing one’s own culpability in communal responsibility for the state of the culture, and choosing to be mindful of that responsibility as a first step toward performing it.

Fundamentals: Community, Culture, and Responsibility

“Fundamentals” is an irregular series in which I write about certain basic ideas underlying my work on this site.

No human being exists in isolation. Each and every one of us is a member of multiple communities, some joined by choice (e.g., fandoms), others thrust upon us as a consequence of birth or upbringing (e.g., family, ethnicity), as a consequence of other choices (e.g., coworkers), or external circumstances and pressures; some are permanent, others temporary. And every community has a culture: collective rules and values, stories, material products, and so on. We are shaped by the cultures of the communities to which we belong, and they in turn emerge from the actions of each individual within the community. This does not deny individual choice, free will, or any of that; rather, it simply notes the plain fact that we are neither mindless drones nor completely autonomous actors unaffected by our environments and interactions with others. We are both individuals and members of communities, and it is equally a mistake to overemphasize either.

Which brings us to a rather critical point about responsibility. There is a tendency among some, I think, to assume that responsibility is exclusive and zero-sum–in other words, that there is a finite number of responsibility points for any given occurrence, and if I take them all then no one else gets any. On this view, if Bob does something bad, to suggest that Bob was influenced by the surrounding culture is to deny, at least in part, that Bob was responsible for his actions.

This is nonsense. Take it as given that an individual is totally responsible for their actions and the consequences thereof. Culture emerges from the aggregate actions of all members of a community, and therefore all members of a community are responsible for their actions that contribute to that culture. All members of the community are shaped by that culture, and therefore their actions are influenced by–in other words, partial consequences of–the culture, which is to say the aggregate actions of all members of the community.

Thus, consider Alice, who shares a community and culture with Bob. Alice’s actions help shape the culture of that community, and therefore also Bob’s actions. Thus, Alice is partially responsible for the actions of Bob.

If we are totally responsible for our own actions and the consequences thereof, in other words, it follows that we are also responsible for the cultures we create and the ways in which they shape our own and others’ actions. Personal responsibility necessarily implies cultural and communal responsibility.

Which, let’s be clear on some things before anybody accuses me of saying something I’m not:

  • This does not mean that anything anyone does is the responsibility of every community to which they belong. Rather, it is necessary to first show how a particular culture influenced the person’s actions, and only then is it possible to assign responsibility to the community.
  • This does not apply only to “bad” actions and influences. Culture can have lots of positive influences, in which case every member of the community has some responsibility for that, too.
  • As I already said, this does not negate personal responsibility, but follows logically from it. A person is still entirely responsible for their own actions; it is simply also the case that there is communal responsibility. Like most seeming contradictions, this one only appears to be a contradiction because of an unstated assumption: that responsibility is zero-sum and exclusive. Reject that notion, and it is entirely possible for two people to each be fully responsible for the same event, let alone for one person to be fully responsible and another partially responsible.
  • Personal and social responsibility are not qualitatively the same. Personal responsibility, generally, is much more direct and concentrated; social responsibility tends to be diffuse by its very nature, spread thinly across many people. There are, of course, exceptions: when a prominent community leader deliberately creates a culture of hatred and fear, for example, they carry a much larger and more concentrated portion of the responsibility for members of the community who lash out than the rank and file do, though again that does not negate the responsibility of the rest of the community for accepting and perpetuating the culture.

Fundamentals: Aesthetics and Ethics

I’ve been thinking about this for a while, and I’ve decided to start an irregular series of posts in which I discuss some of the fundamental ideas behind how I approach texts, write for this site, and generally approach the world. There will be at least a couple this week and next, mostly because I realized that I had something I wanted to say, but felt I needed to explain some underlying concepts first. It also occurred to me that I might want to have some posts I could just point to and say “go here” instead of having to repeat myself in articles and comments. Anyway, this is the first of these posts.

One of the most basic principles underlying my approach to criticism is that aesthetics are inextricable from ethics. Before I go any further, I should make clear that I am emphatically not endorsing Tolstoy’s view that all art should be didactic and encourage “good morals.” However, neither am I endorsing Wilde’s contrary ars gratia artis position.

First there is the trivial sense: the creation of art is an action, which occurs in the real world and has consequences for real people. It is thus impossible for it to not have some moral dimension–“it’s for art” is not a defense against accusations of immorality. (Although, to be clear, many “moral” objections to art are simply prudery; however, the correct response is not “art is above moral concerns” but rather “your morality blows.”)

More importantly, however, aesthetics and ethics are fundamentally connected at their root: both are expressions of values, which is to say that both aesthetic and moral judgment derive from some underlying sense that some things–objects, ideas, sensations, material social conditions, whatever–are better than others. That quality of better does not actually vary between the aesthetic and the moral; better is better.

So, while separate categories, the aesthetic and the moral are inextricably intertwined. You can see it in the way we sometimes respond to either, the way we might refer to an immoral act as “disgusting” or a particularly moral one as “beautiful,” or conversely the way we might refer to bad art as “wrong” and good art as “right.”

This entanglement, in turn, means that the moral dimension is a legitimate consideration in any form of criticism. It is not as simple as saying that aesthetically good art must also be morally good or vice versa; rather it is, as I said, that the ethical dimension is one thing to consider in making aesthetic judgments (and the aesthetic is one thing to consider in making moral judgments).

A short version: beauty is good, but it is neither necessary to, nor sufficient for, goodness. Goodness is beautiful, but it is neither necessary to, nor sufficient for, beauty.