The unreal and the real

Ursula K. Le Guin is dead. I’m not totally okay with this.

Le Guin is usually glossed as a “big ideas” writer, with a focus on concepts and worldbuilding rather than memorable characters or dramatic scenes, and while this isn’t exactly wrong, it’s misleading in an important way. What continues to impress me about her work is her unflinching focus on the texture of life. She develops ideas by making people live through them, and she explores principles by creating physical locations and functioning institutions that embody them (or don’t). I’ve actually been going through some of her short stories recently and a lot of them aren’t even about anything. They’re just particular but unremarkable people living in particular but unremarkable circumstances and getting along with it, one way or another. Even “The Ones Who Walk Away From Omelas,” which is an explicit philosophical treatise, is told exclusively by describing a real society and then the actions of the people who live in it, in response to the way it is. She’s able to make stories like these not only compelling on a technical level (one of the downsides to her writing is the way that its effortless grace makes it painfully clear that you are never going to be good enough at anything to matter) but meaningful not despite but because of their complete lack of anything that would normally register as “meaning.”

Maybe this sounds like faint praise, like I’m just outlining the basic qualities that make someone a competent writer, but it’s not. It’s the most important thing. The only thing any of us actually has, ever, is the experience of lived reality (there’s nothing less “lived” about hearing about other people’s experiences or reading philosophical treatises). Fiction whose only purpose is to supplant this reality with a different one is properly called “escapist,” but so is fiction whose only purpose is to congeal reality into a nondescript mush of Ideas and Statements. In both cases you are ignoring the only thing that there actually is. Le Guin’s great achievement was the creation of worlds unlike our own that show us what our world is really like – unreality that makes the world more real. This is more than merely compelling or thought-provoking writing. It’s one of the secret keys to the truth.

This is also an important point about genre fiction in general. If Le Guin’s work counts as “genre fiction,” which it does, then the term doesn’t mean anything, which it doesn’t. Genre is not a property of a work but an after-the-fact critical appellation; its potential use is only as a tool, not a definition. Genre work often falls into the traps of getting lost in minutiae or reverting to Big Moral Statements, but literary fiction has exactly the same problem: it’s often nothing but status-signalling lifestyle details coupled with half-baked pretension. The categorization isn’t the thing that matters; in fact, focusing on categorization is precisely a way of avoiding what actually matters, of pretending to do criticism when you’re really just doing bookkeeping. Nor is there anything wrong with the categorization itself; there certainly are common elements and features that you can use to accurately (assuming you’re being honest about the endeavor and not just parroting received value statements) classify a story as “literary fiction” or “magic realism” or what have you. But if you’re doing this to say that literary fiction is “real” and genre fiction is “just for fun,” then you’re factually wrong, and Le Guin proves it.

Her feminism, for example, is significant not because of any particular big ideas or provocative analyses, but because she actually portrays the lived experiences of women, both in terms of their inner lives and the ways they navigate the social structures they exist in. This particular version of the double bind is that focusing only on “oppression” turns women into a homogeneous class of “oppressed people” who do nothing but reflect the pressures of their oppression, i.e. it makes women not people in exactly the way that patriarchy says they’re not. Whereas insisting on “agency” and “strong female characters” implicitly denies that oppression is a real thing: if overcoming oppression makes women superhuman, then oppression is actually a good thing, because it improves women. (Overgeneralization was the great failing of second-wave feminism, which was then overcorrected for in the third wave, and we’re still dealing with the fallout of that mistake.) These are the problems you run into when you try to talk about things abstractly, but the truth is pretty obvious: people are all kinds of fucked up in all different ways, and that includes social structures, which are made of people, but there are nevertheless recognizable patterns that recur in certain areas, and the combination of these things is what creates real-world behavior.

Le Guin transcends the dilemma by mastering both worlds: she puts believable people in understandable societies and lets us watch what happens. A simple example of how this works is the story “Semley’s Necklace,” where a fallen noble goes to ultimately self-destructive lengths to retrieve a priceless family heirloom. We understand this decision both in terms of who this person is and the pressures of the structure they inhabit (also, we understand who she is both as a product of and a reaction to the social structure she inhabits), and not in terms of handwaving ideological abstraction, a.k.a. magic. (I haven’t read a lot of her fantasy, but she understood that if magic were real, it would be rule-bound in the same way that everything else is; what she says about this is that, in fantasy writing, “you get to make it all up, even the rules of how things work, and then follow your rules absolutely.”)

This aspect of her work is why The Dispossessed is such an important book. The problem that all varieties of revolutionary politics share is that, because they are talking about a world that has never existed, they are necessarily completely abstract. The common complaint that they “only work on paper” is simply a failure of imagination; after all, actually-existing capitalism is precisely a system which sounds good in theory and is destroying the world in practice. The Dispossessed creates a believable anarchist society, and it does so by portraying the way that people actually live in such a society. The point isn’t that the book makes any particular political argument; the point is that reading it grants you the ability to imagine the world other than as it is, in practical terms.

In fact, the novel even has an explicit hero-character, Odo, the original leader of the anarchist revolution, who is revered and referenced as a source of wisdom and insight, but even in this case, her work is portrayed through people’s use of it in their own lives. That is, we see them reference and quote her as part of real-world in-the-moment arguments – we see how even explicit idolization unavoidably manifests itself as just another part of practical reality. Indeed, there’s even a separate story about Odo that portrays her as a confused old woman, puttering about and embarrassing herself and not accomplishing much of anything. Even when she creates explicit heroes, Le Guin insists on making them people. She denies everyone, including the reader, the ability to in any way “get out of” ordinary life.

And, of course, including herself. I don’t really know how she pulled this off, but she managed to become a famous writer without becoming a Famous Writer. She’s revered, sure, but not in the sense that people “praise but don’t read” her; she’s revered because people read the hell out of her. (On second thought, genre fiction might be more than categorization – it might be a mask. Maybe you have to embrace trivialization in order to avoid the greater danger of idolization.) Part of this can be ascribed to simple intellectual modesty. The Left Hand of Darkness, which is set on a planet whose inhabitants have only one gender that performs both human reproductive functions as needed, uses masculine pronouns to refer to them. At the time, Le Guin judged this to be the least bad of the available choices – as explained in the novel, “he” is, while not actually neutral, more neutral than “she” or any other alternative. But she didn’t insist on this choice as a matter of dogma; quite the opposite, she knowingly walked into one branch of a trap. For a later republication of “Winter’s King,” a short story set in the same world, she switched gears and used feminine pronouns, while preserving gendered titles, so the story features “Kings” and “Princes” who are referred to as “she.” This isn’t necessarily the “right” choice, either, but it gives the story a different subconscious color that reveals different aspects of it. So she ultimately did better than making the right choice; making both choices gave us both things to think about. And it also prevents her from being anyone’s hero on the subject. She was nowhere near dumb enough to think it was possible to be “neutral,” or to “just ask questions,” but the stands she took were impossible to piggyback off of, so the only available action was to take your own stand in response.

So the problem I’m having is that we really need this right now, because we’re falling into both holes. We’re suffocating in a flood of daily minutiae that doesn’t connect to anything and vanishes as soon as it’s done stealing our oxygen, and we’re also falling back on vague generalizations and unexamined principles without instantiating them in real-world actions. We’re creating as many heroes and villains as we possibly can in an attempt to understand the world, yet in every case we use them as an escape from understanding, as a simplistic substitute for reality. We need to make the connection that Le Guin was capable of; without her key, we’re locked in this room.

And yet, it is precisely here that she bequeaths us her final gift. Because she was never a hero, we always had to get along without her anyway. Nothing has changed. There’s really nothing to even complain about; we haven’t missed out on anything. She gave everything she had to the world, and she left what she didn’t have up to us. So now that we finally have to accept that we aren’t getting anything more from her, we have no remaining choice but to do what we should have been doing all along: we have to take what we’ve got and figure out what we’re going to do with it. We have always been on our own, and now, at the end, she forces us to confront that fact anew.

Still, on this particular subject it’s only polite to let her have the last word, so here’s a not entirely random passage from the story I happen to be reading at the moment:

Our daily life in the auntring was repetitive. On the ship, later, I learned that people who live in artificially complicated situations call such a life “simple.” I never knew anybody, anywhere I have been, who found life simple. I think a life or a time looks simple when you leave out the details, the way a planet looks smooth, from orbit.


I’m not qualified to comment on the specifics of the Elena Ferrante situation, but there’s a particular aspect of the response that I feel requires some elucidation. Rather than addressing the personal implications for Ferrante herself, or claiming that her identity is simply not relevant to a particular understanding of her work, many people seem to be going quite a bit further. They are claiming that it is wrong for this information to be available, that we ought not know it, and they are making this point in terms of criticism. They are saying that the correct critical posture is to choose to operate with less information. They would rather the information not exist at all; they would rather be lied to.

This broader issue has been in some contention recently. It’s become quotidian to hear that we’re in the middle of a “war on truth,” that the Information Age ability to “choose your own facts” is literally going to destroy the world. We’ve seen, for example, Newt Gingrich claim that it doesn’t matter what the crime statistics actually are, that as long as people feel like there’s a lot of crime, then extreme repressive measures against it are justified. Obviously, politics is a different matter than lit crit, except not really, because there isn’t all that much of a distinction here. The point of facts in policy is to come up with an approach that will affect the world in the way that it’s supposed to, and the point of facts in criticism is to come up with an interpretation that accords with reality in the way that it’s supposed to. This is why we’re not allowed to live in our own individual fantasylands; it’s what the truth is for. So if we really believe that there’s a problem here, then our only available response is to start taking the truth more seriously. This means respecting it even when it is ugly or vulgar or cruel, even when it has inconvenient consequences, and even when it is revealed by bad people with ulterior motives. The truth either matters or it doesn’t.

Now, part of what Ferrante is trying to do with herself is to react against the trend wherein, to grossly oversimplify, the conversation surrounding a work of art tends to supersede the work itself. The most important argument in favor of pseudonymity is that the work itself is what matters, and that the identity of the writer can only be a distraction, or at best the subject of a separate biographical interest. This is a half-truth. Certainly, there are writers whose status as famous writers is more important than anything they actually write, and certainly this is disgusting. The situation where, say, Dave Eggers is like “HEY GUYS I’M WRITING A BOOK ABOUT SOCIAL MEDIA” but then there’s nothing actually in the book is the worst possible situation. So if Ferrante’s work represents the inverse of that, then she’s doing a great job.

But the idea that a work can “speak for itself” in an absolute sense betrays an unjustified dedication to purity. The fear is that knowing the sausage-making details behind the words will rob them of their beauty, break the spell. This is backwards. It’s true that the only possible source of meaning for a text is the text itself, but it’s a fantasy to assume that we can just lift this meaning straight out, that context is a distraction rather than a necessary tool for doing the work of extraction. Hence, the point of identity is not that it can matter, but that it must. If there is beauty to be found, it must be found in the sausage.

And, indeed, this is already being done by the very people who are so concerned about not doing it. Ferrante’s identity and the context in which she was writing were already being taken into account. The term “anonymity” is being thrown around a lot here, but that’s not what’s going on at all. Elena Ferrante has been publishing pseudonymously and not anonymously. This may sound like a pedantic distinction, but there’s a real difference. Pseudonymity allows the reader to draw connections between works of the same author, to trace the development of themes, etc., hence there is such a thing as “an Elena Ferrante novel” regardless of her physical identity. It allows for the development of a persona. It is only under the condition of anonymity that a work speaks purely for itself – and because any work must be encountered in some kind of context (not to mention, it must be understood in terms of an existing language with a specific history), this condition is impossible to meet.

So, because Ferrante was never anonymous, people already knew that she was a woman living in present-day Italy, and this is already a lot of authorial context. If she had instead turned out to be a black man who had lived in Compton in the 1980s, that would pretty obviously have initiated a comprehensive reappraisal of her work, right? And we didn’t actually know before that that wasn’t the case. Perhaps, as things happen, Ferrante’s real identity won’t add anything to people’s estimations of her work. But if this turns out to be the case, it can only be because her readers were already making the correct assumptions – there’s no such thing as not making assumptions. In which case the revelation of her identity is still valuable information, because it confirms that what was being understood about her work was in fact correct. To take the most obvious example, if someone read one of her books and assumed she was female because they thought that a man could never have portrayed women’s social dynamics so accurately, then what that person has now learned is that they were right. They shouldn’t feel attacked; they should feel validated.

Furthermore, Ferrante has given interviews where she has explained herself and her writing process and commented on her stories, which is to say she’s done the exact thing that her supporters insist not be done. She has tied her work to an identity and made it about her. (Similarly, she seems to have deliberately chosen covers for her books in order to produce an intended effect, which is not how that normally works, and which suggests an intended interpretation.) I mean, look at this:

“Where do I start? In my childhood, my adolescence. Some of the poor Neapolitan neighborhoods were crowded, yes, and rowdy. To gather oneself, so to speak, was physically impossible. One learned very early to have the greatest concentration amid the greatest disruption. The idea that every ‘I’ is largely made up of others and by the others wasn’t theoretical; it was a reality. To be alive meant to collide continually with the existence of others and to be collided with, the results being at times good-natured, at others aggressive, then again good-natured. The dead were brought into quarrels; people weren’t content to attack and insult the living—they naturally abused aunts, cousins, grandparents, and great-grandparents who were no longer in the world.”

Definitely a case of a writer whose identity is completely unimportant to her work, right? It’s absurd to say that people “didn’t know” who Ferrante was and that they were therefore able to approach her work “without preconceptions”; rather, they were approaching it with a set of preconceptions that they had already implicitly established. Except of course they were, because you can’t not do that, and even if you could not do it there isn’t anything wrong with it in the first place. It doesn’t “taint” the “pure” experience of the work because there’s no such thing as purity in the first place.

So the argument that context diminishes the value of Ferrante’s work is completely untenable, because it was already being placed into a pretty specific context and that wasn’t causing anyone any problems. And this is where the real problem starts, because people are going even further than this, and claiming that the revelation of Ferrante’s identity is an actual attack on the value of her work, that having this additional information somehow delegitimizes it. First of all, if this were actually the case, it could only be because her work wasn’t much good in the first place. Either her work is fragile and dishonest enough that the truth can destroy it, or it is robust and honest enough that the truth can only enhance it. If her work requires an aura of mystery in order to mean anything, then it precisely does not stand on its own by that very fact. So the extreme level of defensiveness on display is deeply unwarranted.

There’s a false dichotomy at work here: either an author’s work is a muddled reflection of their own life and circumstances and that’s all it is, or it’s an entirely abstract pearl of “greatness” that cannot bear contact with the grime of reality. Either a work is entirely defined by its context, or it is entirely defined by its content. Naturally, neither of these is possible. The thing about the term “context” is that there’s always a context, so there’s no such thing as “pure” textual criticism, or indeed “purity” at all. Like, I kind of thought that this was the whole point of the feminist argument against Great Male Author Syndrome, so I’m somewhat confused to see feminists arguing that the only proper way to appreciate Ferrante is to revere her as an abstract Great Author and not to understand her as a person. Surely the point of identity politics is exactly the opposite: to assert that identity must always be accounted for, that the Platonic ideal of the Great Author is a false concept, that universality does not arise despite particularity but rather emerges from it. Surely feminists are capable of working through the complications of identity rather than ignoring them.

And this isn’t really all that complicated; some simple examples should clarify the point. As a young man, Fyodor Dostoevsky was involved in some radical political activity, for which he and his co-conspirators were arrested and sentenced to death. Their sentence was commuted, but they were not informed of this until after they had actually been led out to the prison yard and placed in front of a mock firing squad. So there was a brief period during which Dostoevsky was absolutely certain that he had only minutes left to live. The experience affected him somewhat. There’s a scene in The Idiot where the protagonist, making conversation, describes in detail the final experiences of a condemned criminal. Now, certainly, interpreting this scene as merely Dostoevsky’s description of his own experiences and taking that to “explain” it is the stupid way to go about things. But it’s a far cry from there to dismissing the related biographical information as useless. For starters, this information draws our attention to the scene in the first place; it suggests that it should be read as a significant part of what Dostoevsky is trying to convey with the overall novel. But it also colors our interpretation of the words themselves; it doesn’t tell us what they mean – it remains the case that only the words themselves can do that – but they do it in context, and the more information we have, the better equipped we are to establish a truth-apt context.

Because interpretation is never a simple task. The fact that art only comes into existence via the subjective experience of a reader’s engagement with a text does not mean that it is impossible for a reading to be wrong. It is very much the opposite: it is precisely because of this that the vast majority of readings really are wrong in some significant way. There is always something you’ve missed, or something you’ve misinterpreted, or something that you lack the knowledge to place into context. There are always more paths to take and more ways to walk them, and many of these combinations will turn out to be fruitless. Which is to say we always need help. Biographical information is not our only source of aid, but it’s at least better than random; it’s something that’s close to the text. So it often helps a little, and it sometimes helps a lot. The real point is, if you think you can just shut yourself up with a text and stare at it real real hard and have The Truth rise up out of it, you’re fooling yourself. Such monkish leanings are out of place in a complicated and contradictory world. More than that, if the truth is merely a resource to be marshaled into the service of our existing prejudices whenever we need it, then the truth is worthless. It cannot give us anything we do not already have. In order for the truth to matter, it must be able to attack us; as such, it is our responsibility to fail to fully maintain our guard. To care about the truth is, paradoxically, to insist on being wrong.

So a more appropriate example would be one that actually changed my mind about something. I mean, that’s the only reason any of this matters, right? I’m not an interesting person, though, so you’ll have to forgive me for using a very boring story here. Once upon a time, I encountered a writer who was attempting to make sense of the apparent senselessness of modernity. His writing was hyper-intelligent and dizzyingly fast; he took direct aim at many of the things I was concerned about and struck at them with equal parts unrelenting force and honest humility. It was exhilarating, I felt validated, and I felt certain that I had found a source of answers. His name was David Foster Wallace, and I feel somewhat differently about him now.

The first thing that happened was just that I started to wise up about a few things and therefore began to notice some of the more major holes in Wallace’s analyses. I saw what was wrong with his arguments about the “usage wars” by learning literally the absolute basics about linguistics (pro tip: Language Log is a good website), and I came to realize the deep pointlessness of his McCain profile once I started taking politics seriously. But it didn’t seem like he was just a misguided weirdo; it still felt like he had a bead on the truth. So once this happened, I had to figure out what was going on.

And, well, this is a little embarrassing, but . . . the first thing that I came up with was that he was being ironic. We were meant to understand his arguments as misguided and follow through from there to reach the truth of the situation. Yeah, I know. It kind of does seem that way sometimes, though. Like, in the McCain piece, he spends the whole essay lecturing Young People for not caring about politics, and he also spends the whole essay talking about ad campaigns and shit and avoiding any actual political issues himself (indeed, he is explicitly dismissive of people with actual political beliefs when they inconveniently intrude into his narrative), so when he ends abruptly with the laughably condescending statement “try to stay awake,” it’s hard to imagine that he’s being serious. He’s telling people to stay awake regarding precisely the matter on which he was asleep throughout the entire essay. (And even on the level of personality, everyone who’s ever known McCain says he’s a huge fuckhead, so Wallace isn’t even doing optics right.) It’s ridiculous. So it seems like that has to be the point, right? The thing we’re meant to wake up from is the essay itself – we’re meant to understand Wallace’s approach as absurd and reject it in favor of the actual substance of political engagement.

But of course Wallace also argued rather strenuously against this sort of ironic posturing, so then it must be the case that his argument against irony is itself ironic, and . . . yeah, you can see why this doesn’t work at all. Look, I didn’t really think this was a good angle, okay? It’s just that it was the only thing I could come up with. And that’s the point: I got stuck and I couldn’t come up with a real interpretation because I lacked relevant information. I was operating under the assumption that Wallace was a smart guy who knew what he was doing, that he was An Author, and that he therefore must have somehow been right in a way that I couldn’t see.

What changed wasn’t a revelation or anything; it’s just that I finally put the pieces of what I knew together and realized that Wallace himself wasn’t really a good person, and, while he was certainly talented in some ways, he didn’t have any kind of special intellectual gifts. Which makes him just like everybody else, of course; the mistakes he made were the mistakes that everybody makes. But that’s exactly it: after demystifying his work, I started to see it as coming from a particular perspective, and things started to become clear. I stopped thinking of his stuff as having been written by David Foster Wallace the Renowned Thinker and started thinking of it as having been written by Dave Wallace, a depressed, introverted, desperate human being, and once I started doing that, I was finally able to see the actual words he had put on the page and figure out what they actually said. More specifically, I realized that the reason I had initially felt like he had to be right was that he was similar in some ways to me, which is to say that I was making the same mistakes that he was, and I was taking that fact as confirmation that we both must have been right. Great minds think alike. What I was experiencing was the bad kind of validation: a reification of my own prejudices. My identification with his work obscured my understanding of it.

So the point is not that biographical details compel certain answers, it is simply that we must recognize that there is a question in the first place. This rather unfortunate New Yorker article, while attempting to make the opposite point, makes exactly this point:

“And even if Anita Raja is Elena Ferrante, what does her mother’s terrible persecution during the Holocaust have to do with the books she wrote?”

Yes, exactly! That is exactly the question! Answering that question (not specifically, but in general) is what your job is as a critic. I mean, this is really bizarre. The author calls this an “obvious question,” but the fact that it is a question at all completely negates her argument. Criticizing a revelation on the grounds that it doesn’t explain everything at once is just flat stupidity. The point is precisely that this work can now be done, that these sorts of questions have now become askable. And maybe, as things turn out, the correct answer will be “nothing,” and the whole line of inquiry will turn out to be a dead end, but we can’t know that until we’ve actually asked the questions and done the investigation. You can’t jump ahead and read the end before you start; you have to get through the whole story, as it is written.

See, it’s bizarre that Ferrante thinks she’s mitigating the Famous Author Effect by insisting on pseudonymity, because what she’s really doing is the exact opposite. Hiding the practical aspects of her identity maintains the mystique around her work rather than dispelling it. Of course she’s using a persona, but pseudonymity has nothing to do with that. Anyone who writes anything is necessarily cultivating a persona. Smoothing out your persona into that of a featureless Platonic “Writer” makes it more likely that people will see your work as some sort of emblem of what they think they need rather than taking it as it is, project themselves into it rather than looking at the actual words on the page.

Indeed, this very honest article on the subject admits to doing exactly that:

“With Ferrante’s anonymity, I do not have to feel any hesitations about the entanglement of self and art. It is okay, in essence, to make her work all about me. Without the details of her life, there is no way to know what personal experiences influenced the fiction she creates. I can project as much as I want onto her work without hesitation. In my mind, she has created work that boils down to a few major themes, and I can use those as plot points to create an image of her experiences that is convenient to me. Her work, to me, is what I see in it. And I have learned from it.”

Though her openness is commendable, it’s not really clear what the author is going for here – she seems to be aware that her position is wrong at the same time that she’s defending it – so I guess it’s my job to point out that yes, this is wrong. Fantasy is the enemy of learning, and convenience is the enemy of meaning. Projecting yourself onto art defeats its purpose; if that’s all you’re doing, it can’t give you anything you don’t already have. What relating to a work means, rather, is exactly what that word says: developing a relationship, understanding a work as something other, and then bringing yourself to that new place. The whole point of the truth is that it is outside your control.

Which is why this is wrong:

“They want to make her small, by making her a real person with a real history and real name and real background. They want to assert control over that person, and what it represents, by revealing it.”

It is the exact opposite. First of all, none of this is up to you: people are small, Ferrante does have a real background, her work is the result of a particular confluence of historical and material conditions, and the only giants are the ones in your imagination. More to the point, though, it is precisely through the void of anonymity that you can “assert control” over a work and define it however you want. A person has limitations, but limitations cut both ways: they constrain a person’s claims on the truth, and they also constrain your claims on that person. As we’ve just seen, the people resisting the fact that Ferrante has a real identity are doing so because they want total control over her work – they want it all to themselves. In the absence of limiting facts, you’re free to live in your own imaginary world. What the truth does, functionally, is to prevent you from doing this. It forces you to do what is right rather than what you want.

So yes, there is a sense here in which the truth of identity brings the author “down to Earth” and makes her “small” and “limited,” but these are good things, because that’s where the truth is. On the ground. Down here, not up there. And that’s what the truth is: things aren’t “more true” the more pompous and grandiose they are. True things can be held in your hand. None of this restricts the potential universality of anyone’s work; it’s what allows us to find it in the first place. Being a person doesn’t make you less of a writer, it makes you more of one. As Noreen Malone very succinctly puts it, “being attached to a specific, limited, actual person — rather than an airy abstraction — is only damning if you think there’s something lacking about being an actual person.” Again, it is bizarre that feminists are making the case otherwise. Surely it is feminists more than anyone else who believe that the basic experience of being a person is more important than any abstract social framework.

There is an allowance to be made here for the fact that the readers we’re talking about are mostly if not exclusively women, and we continue to exist in a society that does not really allow women to have their own experiences. It’s entirely understandable that people who have found a rare source of validation will resist any attempted imposition of a different narrative, especially when they are accustomed to such impositions being both unavoidable and wrong. But even if one accepts the value of comfort, which I don’t, comfort can never be enough. Comfort at its best enables you to get by, and if getting by is your goal, you’re a nihilist. The truth doesn’t corral you into one valid response, but it does establish a line. And you can’t claim honesty or good faith if you’re not willing to allow that line to cut through your comfort zone.

So the reason this issue can’t be left alone is that it’s a situation where cake is being had and also eaten. You can’t defend Ferrante as both a fragile human and an untouchable icon. You have to pick one or the other, and we all know what the right choice is. Indeed, that is whence cometh the defensiveness about all of this: these people know they’re wrong, and that is why they are resisting the truth. This isn’t a case where people are trying to impose their own standards on others; it’s a case where people have incoherent standards. If we take them at their word that they want the thing they say they want, then it’s only polite to inform them that they aren’t actually getting it.

On second thought, no, that isn’t the reason. I should be more honest here. I’m not doing this out of principles. I’m doing it because I’m upset. I’m upset about this:

“To fall in love with a book, in that way that I and so many others have fallen in love with Ferrante’s, is to feel a special kinship with its author, a profound sort of mutual receptivity and comprehension. The author knows nothing about you, and yet you feel that your most intimate self has been understood. The fact that Ferrante has chosen to be anonymous has become part of this contract, and has put readers and writer on a rare, equal plane. Ferrante doesn’t know the details of our lives, and doesn’t care to. We don’t know those of hers. We meet on an imaginative neutral ground, open to all.”

It’s difficult to know where to start with something like this. I suppose I’ll be polite and elide the psychological angle. The clearest flaw here is that all of this has absolute dick to do with how much you know about a writer’s personal details. You’re always doing this; in fact, nothing distinctive is actually being described here. So what exactly is the aspiration that the author feels she is being denied? Given the topic, it can only be the aspiration to avoid confounding details, to read unchallenged. This is cowardice. This demand for anonymity is a demand for a security blanket. I mean, come on. You really can’t relate to a work if there are already existing interpretations out there? You can’t feel understood other than by feeling like a special snowflake? You stop being able to relate to someone once you realize they’re different from how you imagined them? You can’t integrate uncomfortable truths into a deeper and more robust understanding? You call yourself a reader?

What aggravates me is that these people think they’re mounting some kind of brave last stand against the Famous Author Effect, when in fact they’re completely in thrall to it. They have so little ability to resist that they cannot imagine being able to relate to a work that actually makes demands of them. They’re so uncritical that they simply can’t function in the face of alternative theories. They’re so desperate for a smooth ride that they insist that a “neutral ground” be flattened out for them before they even step into the carriage.

You know what? I’m still not being honest enough. Calling this sort of thing “love” is viscerally repulsive. It makes me sick. What these people are saying is that they cannot love something that is actually real. They can only tolerate vague abstractions that allow themselves to be molded into whatever shape the “lover” desires. This is not love. It’s fetishization. It’s objectification.

The reason this is all so annoying is that we’re talking about the absolute basics here. I’m not even approaching any kind of radical critical theory. This is just the basic substance of what reading is. I can describe someone as “conniving” or “strategic” or “Machiavellian,” and these all mean different things, but that’s not because of anything that’s “in” the words themselves but because you bring a different set of associations with you to each word. Which means you might end up having to admit that, for example, you have some sort of weird idiosyncratic context that you’ve developed for a word that no one else has and therefore what you think the word is about is completely different from what everyone else thinks, and I might end up having to admit that I don’t know what words mean. There’s no way around this; there’s no “real” meaning. All text is context, and all language is a game. You can’t draw until you ante up.

And it’s actually even more annoying than that, because Ferrante’s fans were already doing this even when they didn’t know who she was, they just want to be able to deny to themselves that this is what they were doing. As mentioned, Ferrante never was any kind of abstract giant; she was always a person writing about being a person, and that’s what her fans were responding to, even as they histrionically insist the opposite. So all they’re actually doing is refusing to take on the responsibility of interpretation. They want to pretend that their initial naive impression of her work was “real.” Nothing is real. But that doesn’t mean there was necessarily anything wrong with that impression, it’s just that we shouldn’t be relying on intentional ignorance to obscure the situation for us. If we assume that the harshness of the truth is in conflict with personhood, we preemptively doom ourselves. We should be able to recognize that any public presentation is automatically a persona, and that a persona is always a shield.

And we must be able to do this and still find meaning in the work itself. The big kind of meaning, the same kind we’ve always been going for. Because that’s the thing: all those times in the past when someone encountered a work and felt the hand of god gripping them by the throat and decided that this could only be because the work was divinely inspired or the product of an inhumanly great talent, they, again, were doing this. There actually was a context and a set of biographical details that they were taking into account and sublimating into their broader understanding. They weren’t “surpassing” that context because they couldn’t; that’s impossible. They were just using a convenient fiction to make the job easier for themselves, to avoid complicating details and alternative interpretations. We can do better. We can do the job the hard way, the right way, and still get it done.

If knowing that Ferrante is a real human person fucks up your ability to relate to her work, that’s your problem. I don’t mean that as an insult, I mean it sympathetically. This is something we all have to deal with, and we have to deal with it regardless of how much specific information we have in each individual case. It doesn’t matter whether it’s something like 1984 that has already been theorized to death and back and then to death again or whether it’s unlabeled instrumental music that you found at random on a dead webpage. You always have to do your own work within the constraints of reality, and you always have to choose where to make your stand, and you always have to recognize that doing so leaves you open to attacks from all sides at all times. There’s nothing radical about any of this; it is merely the basic structure of how criticism works – how it has to work. The alternatives are phantasmic.

What’s scary about this, what people don’t want to accept, is that it’s up to you. There is no one bigger than you who can lift you up and carry you where you need to go. There are, at best, people who are just as lost as you are but who have been to different places and can suggest better directions, and a lot of the time there’s not even that. Progress requires assuming that you don’t know what you’re doing, and then of course doing it anyway. But the thing about the death of god is that god never existed in the first place. We were just pretending, and all that’s happened now is that we’ve stopped pretending. Because the truth is actually true, the only thing that a revelation can reveal is the thing that was always the case, all along. We have always lived in a world without giants, which means that the work that we’ve done has always been our own. It’s not that it’s up to you now, it’s that it’s still up to you. It has always been up to you; you have always been making this decision. Believe in yourself.

Ambiguity vs. contrarianism

I’m reading The Ethics of Ambiguity, which I probably should have done a while ago and which you should probably do at your earliest convenience. Despite the moderate philosophical jargon and the frequent references to Hegel, it’s really very practical. I don’t really see it cited much as a big important philosophy book, which I’m sure has nothing to do with the fact that it was written by a woman. Actually, the fact that philosophy has one of the biggest gender gaps out of anything really gives the lie to the whole story about men being good at numbers and women being good at words. The truth, obviously, is that men bully women out of any field they consider to be prestigious or important, or that is high-paying.

Anyway, I ran into some stuff that I thought was awfully relevant to certain modern-day issues. One of the things that’s become increasingly apparent via the internet is the fact that a lot of specialists are complete morons about anything outside of their specialty. As the saying goes, it’s better to remain silent and be thought a fool than to open your mouth and remove all doubt. Now that we’ve got Twitter giving people like Richard Dawkins the opportunity to mouth off about anything whenever they feel like it, doubt is increasingly being removed.

But the actual issue here isn’t new, as evidenced by the fact that de Beauvoir totally nails it:

“Almost all serious men cultivate an expedient levity; we are familiar with the genuine gaiety of the Catholics, the fascist ‘sense of humor.’ There are also some who do not even feel the need for such a weapon. They hide from themselves the incoherence of their choice by taking flight. As soon as the Idol is no longer concerned, the serious man slips into the attitude of the sub-man. He keeps himself from existing because he is not capable of existing without a guarantee. Proust observed with astonishment that a great doctor or a great professor often shows himself, outside of his specialty, to be lacking in sensitivity, intelligence, and humanity. The reason for this is that having abdicated his freedom, he has nothing else left but his techniques. In domains where his techniques are not applicable, he either adheres to the most ordinary of values or fulfills himself in a flight. The serious man stubbornly engulfs his transcendence in the object which bars the horizon and bolts the sky. The rest of the world is a faceless desert.”

The fact that a specialist tends to rely on techniques which are not universally applicable is pretty straightforward. The insight here is how this error manifests itself. One option is as a “flight,” meaning a retreat from significance via dismissal. Anything to which the specialist’s techniques do not apply must by that very fact be meaningless. This is the most obvious problem with the New Atheists (or whatever we’re calling them now): anything which doesn’t fit into their schema is a “myth,” something that people should just get rid of. It’s even more obvious when it comes to science fetishists, who put the cart entirely before the horse: rather than defining science as the domain of the measurable, they wholly reject anything that isn’t measurable so as to be able to define science as everything.

This is also where the “fascist sense of humor” comes in: it makes it easier to dismiss things. Internet atheists have a constant boner for making fun of how dumb those silly fundamentalists are with their silly stories about angels and demons and their silly preoccupations with virginity and swear words, which conveniently keeps them from considering that maybe these things aren’t what religion is mostly about, that maybe in their blithe dismissal they’re actually missing something important. On the other hand, they’ll completely flip their shit on you if you point out that assuming your own position as the default and requiring people to argue you out of it is a fucking stupid way to communicate.

The second option is even more intriguing: outside of their specialty, specialists tend to revert to conventional values. Why should this be so? Any specialist is aware that the common understanding of their own discipline is usually oversimplified and often completely backwards. Dawkins, for example, could probably say quite a bit about the common understanding of evolution. Shouldn’t one be able to relate this same insight, that greater understanding often leads to a fundamental reassessment, to disciplines other than one’s own? Many find it odd when someone like Dawkins rejects the traditional superstitions of religion only to fall back on the traditional superstitions of white supremacy, or rejects the divine guidance of god only to fall back on the faux-divine guidance of Western imperialism.

But in fact, this is to be expected. Acknowledging the inapplicability of one’s expertise requires confronting the enormity of one’s limits as a finite human being. Someone who devotes their life to mastery of the scientific method must accept that, because of this devotion, there are things that they can never know. The temptation to overapply one’s techniques originates from the fear that the only alternative is nothing. But since these techniques aren’t actually applicable to everything, they don’t actually work. You actually can’t use science to learn about politics. The specialist is thus in a position where they must have an insightful perspective on some topic, but can’t actually develop one. Hence, conventional wisdom dressed up as contrarianism.

Speaking of contrarianism, de Beauvoir follows this attitude through to one of its natural developments:

“Nihilism is disappointed seriousness which has turned back upon itself. A choice of this kind is not encountered among those who, feeling the joy of existence, assume its gratuity. It appears either at the moment of adolescence, when the individual, seeing his child’s universe flow away, feels the lack which is in his heart, or, later on, when the attempts to fulfill himself as a being have failed; in any case, among men who wish to rid themselves of the anxiety of their freedom by rejecting the world and themselves.”

This not only explains the annoying phenomenon of teen contrarianism, but why this phenomenon is concentrated among the privileged. Naively, one might assume that the least oppressed people in a society, the people with the fewest obstacles and the greatest opportunities, would consequently be the least nihilistic. The problem is that a person who can choose to assume any burden tends thus to be “serious” in de Beauvoir’s sense of the term. A person – a white male – who can already do as he likes, who can freely choose to be a family man or a businessman or an anarchist, evaluates each of these goals on their own merits, and finds them all wanting. Of course he does, because no real-world goal actually means anything by itself.

(Writing persona pro tip: de Beauvoir uses “man” and “he” as general referents for “person” because of language and the past and so forth. When I say “he,” it’s because I mean “he.”)

Meaning is found not in goals themselves, but in the transcending of limits. Black people tend to have stronger family ties than white people, not despite but because they live in a society that is actively trying to destroy them. Women pursue professional achievement because they live in a society that tells them the realm of business belongs to men. Gay people fight for marriage rights because they live in a society that devalues their relationships. This is why nihilism tends to manifest itself as a philosophical luxury. Not because it is luxurious, but because it and luxury share a natural habitat: the world without struggle.

But of course there is no such thing as “practical” nihilism, since you have to do something, so the teen contrarian makes the same move as the specialist: adopting conventional values, but dressing them up as iconoclasm. This is most obvious in the common case of Randianism (which, as Hamlet would say, really is common). It presents itself as a great individualist revelation, but in practice it pretty much just means letting capitalism do whatever it wants.

And it’s also the case for poor old Nietzsche, who, despite his best efforts, wound up as the preeminent representative of precisely this sort of banal contrarianism. When Nietzsche railed against “slave morality,” the morality of amelioration rather than achievement, he was talking about historical events that led to a particular mode of thought. He was not dumb enough to believe that the ruling class of his day was actually concerned with making everyone’s lives more comfortable.

But we can see why this misinterpretation is useful to the teen contrarian. If you believe that the problem is that society is “too accommodating,” it gives you something to oppose while doing exactly what the ruling class wants you to do: ignoring morality. It allows you to extract the sense of meaningfulness out of the concept of struggle without the inconvenience of actually challenging yourself. This is all especially sad when you consider that Nietzsche’s philosophy is basically an instruction manual on How to Be a Great Artist, but it winds up being used to support an utter dearth of creativity.

If you frequent some of the internet’s more desperate quarters, you may have encountered people claiming in all seriousness that nowadays white men are oppressed while everyone else has all the advantages. Same deal. Blaming women and minorities for stealing all your advantages seems iconoclastic now that society has accepted the validity of identity-based arguments. But the ancient pattern hasn’t changed: attacking the less advantaged instead of fighting to better yourself and your circumstances is still what society actually wants you to do, so you get to take a stroll down Easy Street while imagining that you’re running a gauntlet. As an added bonus, conceiving of yourself as oppressed in a philosophical sense without actually running into any practical obstacles allows you to maintain a permanent sense of self-serving victimization. If this isn’t the case, if you really are free, then all your failures are all your fault.

And that’s how 70-year-old philosophy explains the internet. The broader point is that theory matters. If you try to react to every dumb thing that happens individually, you’re going to be here all day. A good framework allows you to chart your own course.

The wicked game

(Part 1)

(Part 2)

(Part 3)

As I recall, the first time I encountered David Foster Wallace’s work was the cruise ship essay. This seems to be a common starting point; it’s certainly one of his more accessible and funnier pieces. For me, though, it had a certain personal significance. I had recently been on a cruise myself (not of my own volition), and my precise feelings about the experience were: never again. It was terribly gratifying to find someone who was not only taking an axe to one of society’s more ridiculous holy grails, but was doing so in a way that was both comprehensively intelligent and appealingly human.

I think this explains a lot of DFW’s appeal. Most of us are deeply uncomfortable with various aspects of our absurd society, but since most of these things are taken for granted, we can rarely express our feelings in a way that’s understandable to other people. In Wallace, we found someone who not only felt the same way, but was sharp and observant enough to express those feelings in a way that really brought the issues to life. It’s not hard to see why people yearn for a sort of intellectual-yet-human role model to help us navigate a confusing world. As it turns out, though, that person didn’t exist. We had to invent him.

I consider it a bit of a personal failing that it took me so long to figure out what Wallace’s problem was. Naturally, this would have been a lot more useful back when his mythologization was still a work-in-progress rather than a fait accompli. The truth is, I was so excited to find someone who seemed to be speaking to my concerns in an intelligent way that I failed to listen to my own feelings. The way that Wallace felt about his cruise was actually not at all the way that I felt about mine. The beginning was the end.

For Wallace, the problem with cruises is that they’re too good. Contrary to the typical American view of luxury being the goal of life, Wallace considers the experience of luxury to be an insidious form of nihilism – he pointedly highlights the cruise’s promise that patrons will finally be able to do “Absolutely Nothing.” If the struggle to manage and fulfill one’s desires is what constitutes the actual experience of life, then a situation in which all of one’s desires are met automatically without even any thought being involved is basically the same as having no desires, which is basically the same as not existing (if this is all sounding a bit Nietzschean, you’re going to have to hold your horses).

The basic problem with this is pretty obvious: not only are cruises not “too good,” they aren’t even good. Cruises suck. In addition to the fact that a cruise ship is basically just a crowded, extra-nausea-inducing hotel and the fact that the “entertainment” is all pathetic summer-camp-for-adults garbage, a cruise experience is fundamentally unenjoyable due to the way it smacks you in the face with its own exploitative nature. Maybe I’m a freak, but I don’t find the experience of being waited on to be any fun, particularly when it manifests itself as an army of servile brown people standing around every corner, sporting plastered-on smiles while waiting on pins and needles for the chance to be ordered around by some pompous Hawaiian-shirt-wearing tourist jackass.

The same effect is in play once the ship reaches one of its Remote Island Destinations. You get funneled off the ship directly into an ersatz strip mall full of chintzy tourist trash. The fact that this setup is so wildly incongruous with its location makes unavoidable the realization that it is there because of you, that the money you paid for the cruise is funding exploitation, that your presence on the island is white supremacy in action. The usual way to understand cruising is that it’s the closest middle-class people can get to being upper-class, but it’s actually more like the closest that alienated office workers can get to being imperialists.

In his typically annoying way, Wallace makes an incisive observation about this while also completely dismissing its importance:

“the ethnic makeup of the Nadir‘s crew is a melting-pot melange . . . it at first seems like there’s some basic Eurocentric caste system in force: waiters, bus-boys, beverage waitresses, sommeliers, casino dealers, entertainers and stewards seem mostly to be Aryans, while the porters and custodians and swabbies tend to be your swarthier types – Arabs and Filipinos, Cubans, West Indian blacks. But it turns out to be more complex than that”

no it doesn’t shut up stop talking. Christ. Racism is not a god damn intellectual puzzle for smart white people to thoughtfully pore over. It’s physical oppression, and you’d think that experiencing it in such close quarters would make that obvious. It did for me, anyway. Contra Wallace, the cruise experience did not make me feel pampered. It made me feel like I was on a plantation.

Double Bound

The reason it’s so difficult to figure out where DFW stands is that he pretty much never puts his foot down. He always leaves himself an out. For example, in an earlier post I accused him of advocating for a “kinder, gentler” ruling class. Not only does Wallace preemptively head off this accusation, he uses the exact same reference:

“Besides, the rise of Reagan/Bush/Gingrich showed that hypocritical nostalgia for a kinder, gentler, more Christian pseudo-past is no less susceptible to manipulation in the interests of corporate commercialism and PR image. Most of us will still take nihilism over neanderthalism.”

This is part of the argument in “E Unibus Pluram,” and he has to say this here because his anti-irony conclusion seems to lead pretty obviously to a re-adoption of traditional values. But note that this is not actually an argument, because Wallace obviously does not “take nihilism.” So, is Wallace merely talking about appearances, saying that the Reagan option would be appealing if it were presented better? Or is this rejection based on principles, and in favor of a third option? Given the essay’s conclusion, isn’t an accusation of “neanderthalism” precisely the sort of “risk” we might expect an “anti-rebel” to take? Meaning Wallace actually is in favor of this? He doesn’t say. All he does here is defend himself against the expected charge of political conservatism.

So, what I want to say now is that Wallace was “impossible to pin down,” but guess what:

“And make no mistake: irony tyrannizes us. The reason why our pervasive cultural irony is at once so powerful and so unsatisfying is that an ironist is impossible to pin down.”

Emphasis Wallace’s. Implying of course that Wallace himself is against this position and therefore can be pinned down. At this point, what I’m interested in is why Wallace constantly argues in this manner. Because these aren’t flukes: this sort of automatic backpedaling is an intrinsic part of his M.O., to the extent that it often occurs in the space of a single argument.

For example, part of “E Unibus Pluram” is devoted to debunking a utopian argument that asserts that improvements in technology will resolve the social problems with TV and turn it into a vector for liberation. It’s a transparently silly argument, and Wallace gives it the usual treatment. But then he produces the following paragraph:

“Oh God, I’ve just reread my criticisms of Gilder. That he is naive. That he is an ill-disguised apologist for corporate self-interest. That his book has commercials. That beneath its futuristic novelty it’s just the same old American same-old that got us into this televisual mess. That Gilder vastly underestimates the intractability of the mess. Its hopelessness. Our gullibility, fatigue, disgust. My attitude, reading Gilder, has been sardonic, aloof, depressed. I have tried to make his book look ridiculous (which it is, but still). My reading of Gilder is televisual. I am in the aura.”

Set aside whatever counterarguments you may be considering, and instead ask yourself: why did Wallace write this paragraph? If he actually thought this was a valid criticism of his argument, one expects that he would have, you know, fixed his argument before publishing it. It can only be that Wallace felt that he had to make his argument the way he did, and then do the best he could to stifle the “televisual” aspect of it with this disclaimer. But that’s obviously wrong; it was obviously within his abilities to have made a straightforward factual argument without ridiculing his target. Rather, then, this paragraph exists because Wallace wants to believe that this is the only possible way of doing things, that he can’t escape “the aura.”

This same dynamic occurs even more provocatively in Wallace’s essay on Dostoevsky. The theme of this essay is that Dostoevsky is an important role model for modern Americans due to the fact that his work is both highly artistic and deeply moral. Naturally, this argument is part of Wallace’s overall claim that we’re “too ironic” nowadays and we don’t know how to be “sincere” anymore.

It is in this context that Wallace writes the following:

“Frank’s bio prompts us to ask ourselves why we seem to require of our art an ironic distance from deep convictions or desperate questions, so that contemporary writers have to either make jokes of them or else try to work them in under cover of some formal trick like intertextual quotation or incongruous juxtaposition, sticking the really urgent stuff inside asterisks as part of some multivalent defamiliarization-flourish or some such shit.”

That bit about the asterisks refers to this essay itself, throughout which Wallace interpolates Big Moral Questions in a manner like such as the following:

“** Is the real point of my life simply to undergo as little pain and as much pleasure as possible? My behavior sure seems to indicate that this is what I believe, at least a lot of the time. But isn’t this kind of a selfish way to live? Forget selfish – isn’t it awful lonely? **”

So again, what Wallace is doing here is explicitly criticizing his own approach rather than trying to fix it. This is a very odd move, because it results in Wallace neither having his cake nor eating it. He could have presented these issues directly, allowing them to stand on their own, or he could have gone deeper into his criticism, attempting to figure out a better way to ask these questions. As it is, half-assing it and then calling himself out like this blunts the immediacy of his questions, making the whole thing feel like little more than a parlor game – exactly the result that Wallace was so afraid of.

So, at this point, there’s only one possibility. Given how central this mistake is to Wallace’s argument, he can’t be doing it by accident. It must be the case that Wallace has a specific positive motivation to present his arguments in this way, to force himself into this apparent double bind. Recall also the way that Wallace’s sloppy arguments in his journalism just so happen to overlap perfectly with his ideological concerns. For example, Wallace thinks that the “Descriptivist” position on linguistics is that “there are no rules,” just like “televisual” irony supposedly means that “nothing means anything anymore,” just like the problem with politics is that “young voters” “don’t believe in anything anymore.” What we’re looking at here is a case of motivated reasoning.

The Howling Fantods

David Foster Wallace’s great fear was what he referred to as “solipsism.” The most vivid expression of this is the fate of Hal Incandenza that opens Infinite Jest: living a rich inner life while being completely unable to communicate with the outside world.

“’There is nothing wrong,’ I say slowly to the floor. ‘I’m in here.’

I’m raised by the crutches of my underarms, shaken toward what he must see as calm by a purple-faced Director: ‘Get a grip, son!’

DeLint at the big man’s arm: ‘Stop it!’

‘I am not what you see and hear.’

Distant sirens. A crude half nelson. Forms at the door. A young Hispanic woman holds her palm against her mouth, looking.

‘I’m not,’ I say.”

The main characters of Infinite Jest, Hal Incandenza and Don Gately, are of course transparently based on Wallace himself; hence, Wallace is here expressing what he sees as the dangers of his own personality and habits. This dramatization of Hal’s terrible fate is precisely Wallace’s expression of his own fear.

As everyone knows by now, one of DFW’s big influences was Wittgenstein. He’s explicit about this in exactly one place: “The Empty Plenum,” his review of David Markson’s Wittgenstein’s Mistress. Unfortunately, this is early DFW, before he had gotten his bad habits of pseudo-academic-ese, intrusive name-dropping, and pointlessly convoluted phraseology under control. It’s badly written, his argument is unclear, and he spends a lot of time on some really weak gender analysis that I’m going to ignore. Still, one does what one must.

Wallace’s central claim is that the philosophy espoused by Wittgenstein’s Tractatus Logico-Philosophicus amounts to advocacy of solipsism. Very, very briefly: the Tractatus claims that facts about the physical world are the only things that we can meaningfully talk about (“The world is everything that is the case,” and “What is the case, the fact, is the existence of atomic facts.”). But “facts” themselves exist only in our minds; thus, Wallace’s worry here is that:

“This latter possibility – if internalized, really believed – is a track that makes stops at skepticism & then solipsism before heading straight into insanity.”

In other words, all we have are our own experiences, and those are unreliable, so if we take this seriously, what we actually have is nothing, pure chaos.

This doesn’t really have much to do with what Wittgenstein was actually talking about. For him, the importance of this argument was that statements about ethics or metaphysics become meaningless. It was a philosophical problem, not a personal one. Still, it would seem that Wallace’s fear has some justification. We know how hard it is to get through to other people just based on everyday experience; it’s not too much of a stretch to take this difficulty seriously.

But “solipsism” is the wrong term for this issue. The whole point of solipsism from a philosophical standpoint is that there’s no way to actually tell the difference between Solipsist World and Non-Solipsist World. It’s a purely philosophical problem, but what Wallace is talking about is the experience and the feeling of not being able to get through to other people.

The other term Wallace likes to use for this is “loneliness,” which is closer to what he’s talking about, but still not totally precise. Wallace was, after all, a well-known author with plenty going on in his life. He wasn’t “lonely” in the obvious sense of the word (one can, of course, be lonely without being alone). So instead, the term I’m going to use is “intellectual isolation,” the fear that nothing that goes on inside our heads can ever really get out into the real world (and, perhaps even scarier, vice versa). Wallace’s specific fear was that, no matter what he said or did, he could never really express himself to another human.

When we understand the issue in this way, it becomes quite clear that this was the underlying impulse that motivated much of Wallace’s work. One of the notable things about Wallace’s oeuvre is how much he wrote about how to write – not in the technical sense, but in the philosophical sense, i.e. how one ought to write. This was his attempt to resolve the problem of intellectual isolation.

And while Wallace was pretty obviously projecting his own concerns onto Wittgenstein’s philosophy, it just so happens that Wittgenstein got around to addressing this problem as well. As Wallace mentions, Wittgenstein performed a dramatic about-face after the Tractatus, such that his second major work, the Philosophical Investigations, amounts to a direct refutation of the argument in the Tractatus. In the Investigations, Wittgenstein turns his approach around completely: rather than trying to determine the basis for language, he looks at language as it actually exists and is used. What he finds out here is one of those simple insights that has deep implications. Rather than language being purely referential (that is, only communicating physical facts) language is actually not referential at all; it is purely functional (that is, it’s a tool for social interaction).

The reason for this is that language is how people interact with each other, not how one person interacts with the world. If you were alone and looking at a tree, you wouldn’t point at it and say “look at that tree.” But you would do so if you were with another person and you wanted to draw their attention to the tree. You might even do so if there were no tree at all, and you were trying to trick the person. In such a case, your utterance obviously doesn’t refer to anything in the real world. Rather, it performs the function of making the other person look.

This is clearest in highly contrived situations, such as a job interview. A standard exchange like Q: “What is your greatest weakness?” A: “Oh gosh, I don’t know, I guess I’m a bit of a perfectionist” isn’t meant to elicit any real information; it’s just for the interviewer to get a feel for the candidate. Most of the interview is really contentless; it’s a sort of “test” to verify that the candidate can respond to situations in the appropriate manner. Wittgenstein calls this sort of situation a “language game,” and each utterance of this sort can be thought of as one possible “move” in the game.

The insight, though, is that, on a fundamental level, all communication is like this. There are only language games, and every possible utterance is a move in whatever game we’re playing at the moment. A “private language” that could be used by only one person to refer directly to the physical world is an impossibility. The connection between language and the physical world is entirely mediated by other humans.

When Wallace gets to the part of “The Empty Plenum” where he explains the argument from the Philosophical Investigations, he doesn’t follow it through to its conclusion like he does with the argument from the Tractatus. This can only be because he doesn’t think the argument resolves the problem, and, indeed, the fact that he spent the rest of his life trying to work it out shows that he didn’t have a solid answer. But he did have an approach, whether it was consciously chosen or not.

Recall the argument Wallace makes about linguistics in “Authority and American Usage.” His claim is that the “Descriptivist” argument advocating “the abandonment of ‘artificial’ rules and conventions” must result in “a literal Babel,” and this is why prescriptive language rules are necessary. Recall further that Wallace makes this claim while discussing Wittgenstein’s argument against private language. We can now finally understand why Wallace makes the bizarre move from “language is purely social” to “arbitrary usage rules are required for understanding.” It’s because he was between a rock and a hard place. From the perspective of the Tractatus argument, language refers to real things, but it can’t actually be used to communicate our internal thoughts and feelings. Whereas under the Investigations argument, “everything is permitted”; language can be used for any purpose, but it has no grounding, so we can never really know what’s being said. Note that this latter argument is exactly the same as Wallace’s objection to irony: when someone is being ironic, they can say anything, but you can never really know what they mean.

And this is precisely the dilemma that Wallace was attempting to overcome as a writer: the issue of how to really communicate what he wants to say. Which means we’ve finally come to the heart of the matter. This is the central issue which all of Wallace’s work was an attempt to resolve. The approach that I’ve previously identified – Wallace’s sublimation of his own feelings into intellectual argument – was his attempted solution (though of course he never actually felt that he had succeeded).

Hence Wallace’s conclusion in “Authority and American Usage.” He accepts that language is fundamentally social and not a formal system, but he still thinks arbitrary usage rules are required. What he’s getting at here is illustrated best by his position with regard to African American Vernacular English. Wallace, unlike most “prescriptivists,” is aware that AAVE is a fully functional dialect and not a “degraded” version of “normal” English. The fact that “Standard” English and not AAVE is the prestige dialect in our society is entirely arbitrary (I mean, it’s the result of white supremacy, obviously, but it’s “arbitrary” in the academic sense). Wallace knows this, and yet he still demands that his students fully embrace Standard English. Why? Because the existence of a formalized standard dialect resolves his dilemma: having a single rigorously defined means of communication allows us to express ourselves such that we can be absolutely understood. This is the root of the prescriptivist anxiety against “ambiguity,” at least for Wallace. What matters to Wallace is not that his dialect specifically is the prestige dialect, but rather that there is a prestige dialect at all, regardless of which dialect that is.

The reason for this is that Wallace feels this is the only way for humans to really be able to communicate. It evades the Wittgensteinian Scylla and Charybdis (words are great) by preserving the social aspect of language that allows us to express ourselves, while providing a solid foundation that allows us to be unambiguously understood.

And it is this same approach that Wallace took in his writing and argumentation. He could have merely expressed himself as an ideological writer a la Dostoevsky, but then he would have been giving up on making sure that other people understood him (the Investigations approach). Or, if he had gone the academic route and made purely intellectual arguments, then he wouldn’t have been expressing himself at all; it would be as though he didn’t really exist (the Tractatus approach). Instead, he attempted to navigate a middle path through the double bind: he took his own feelings and anxieties, and “rigorously defined” them as intellectual arguments, such that everyone else could understand them.

Okay! So, you remember that all of this is the position that I’m arguing against, right? Yeah. Because this doesn’t actually work. It’s a con. And it wouldn’t be nearly as much of a problem if it weren’t for the fact that everyone fell for it.

Philosophers’ Error

The reason all of this matters is that Wallace’s work has almost universally been read in exactly the wrong way. He’s been accepted as an avatar of the current American situation, his earnest confusion and noncommittal intellectualism taken as guidelines. A.O. Scott’s remembrance of Wallace in the New York Times portrays the problem quite vividly:

“The moods that Mr. Wallace distilled so vividly on the page — the gradations of sadness and madness embedded in the obsessive, recursive, exhausting prose style that characterized both his journalism and his fiction — crystallized an unhappy collective consciousness. And it came through most vividly in his voice. Hyperarticulate, plaintive, self-mocking, diffident, overbearing, needy, ironical, almost pathologically self-aware (and nearly impossible to quote in increments smaller than a thousand words) — it was something you instantly recognized even hearing it for the first time. It was — is — the voice in your own head.”

This is a rare example of damning with fulsome praise. This is not how the “shock of recognition” you get from great art is supposed to work. If reflecting what’s already in your own head were all that writing could accomplish, what would be the point? The strength of writing is obviously its ability to capture the sense of internal monologue, but the point ought to be that that monologue is someone else’s, one you couldn’t otherwise hear in your own head. The “recognition” you feel ought to be that of “making the strange familiar,” of not merely encountering an alien perspective but feeling it deeply, such that it becomes a new part of yourself.

What’s critical to note here is the way that Scott recapitulates Wallace’s mistake. He’s aware that Wallace’s work had “his personality . . . stamped on every page,” but then goes on to claim that it “crystallized an unhappy collective consciousness.” This is exactly wrong: it crystallized Wallace’s own unhappiness. And given that Wallace’s unhappiness was in fact the result of serious-fucking-business clinical depression, we have less than no reason to interpret it as a general symptom of society.

Of course, a lot of this kid-gloves treatment has to do with the fact that Wallace was a white male. Most people don’t have the luxury of speaking generally; most people are preemptively confined to their own perspectives. People like Wallace are getting an undeserved pass. Again, Scott is aware of this but fails to recognize its significance: he compares Wallace to a bunch of other authors who he refers to as “itchy late- and post-boomer white guys,” but somehow fails to account for the fact that other types of people exist (including other types of white men; not all of us are hopelessly confuddled by phantom postmodernism). These writers, including Wallace, are mapping out one small corner of human experience and not defining a “generational crisis.”

(And I get that Scott is writing an obituary and he’s obviously not going to criticize Wallace here. But the particular way in which he praises Wallace is what makes the point.)

And this is also why it’s so wrong to revere Wallace as some sort of great intellect (I mean, aside from how much of a front the whole “genius” thing was). Everyone’s aware of his personal problems by now, but these have been framed as foibles, evidence that he was a “flawed human being,” a doomed genius. Again, exactly wrong: Wallace’s problems are evidence that he was normal, that he was doing exactly what all the rest of us are doing: trying to make sense of a senseless universe using whatever shoddy tools we happen to have at hand. And this is why the limits of his perspective and the errors that resulted from those limits must be kept in full view.

This is not currently happening. Consider this deeply unfortunate individual, who is terribly interested in what Wallace’s opinion on selfie sticks would have been, had he only lived to tell us. Truly a shame, right? Again, this person has learned exactly the wrong lesson from Wallace: that a rigorous intellectual analysis of her own collection of trivial personal confusions contains the answers to the great questions about meaning and society. Wallace’s projection of his own problems onto the world has encouraged others to make the same mistake.

But the fact that David Foster Wallace was wrong about everything doesn’t mean that his work doesn’t have value. The limited nature of any one perspective is far from a new problem – and it’s far from insoluble. This is actually one of the things that humanity already has a handle on, though perhaps an unwitting one.

Playing the Wicked Game

Here’s the thing: not only is Wallace’s approach not a solution to his problem, it’s actually the only crime: arguing in bad faith. Bad faith is the thing that actually does to communication what Wallace thought irony did: it makes it impossible to tell where someone is coming from. This is why Wallace was able to, for example, write an entire article about John McCain’s candidacy, the point of which was to harangue young people for not having political convictions anymore, and get through the entire thing without ever betraying the slightest hint of what his own political beliefs may or may not have been.

But the reason Wallace couldn’t find a solution isn’t because there isn’t one; rather, it’s because there isn’t a problem. And, ~ironically~, we know this from the very source that sent Wallace tumbling down the rabbit hole in the first place: Wittgenstein’s Philosophical Investigations.

Like I said, the whole “language games” thing is really obvious on the surface – of course we communicate in functional ways that don’t actually refer to anything. The trick is to follow this argument all the way through. If we accept that all communication consists of “language games,” then the obstacles to communication that so vexed Wallace reveal themselves as phantoms: they are merely different games. One retains the option of playing by their rules, or of choosing a different game.

Thus, Wallace’s insistence on one “correct” method of communication is essentially cheating: refusing to abide by the rules of any one game, he takes the rules of one game and applies them to another. But the truth is, just as language naturally resolves itself into mutual comprehensibility without anyone policing it, each language game serves its own purposes just fine, as long as you don’t expect one game to be able to do everything.

Consider Wallace’s criticism of John Updike. Wallace’s claim is that Updike is a “narcissist,” and while this is again a misuse of a technical term, it’s a common one, so it’s clear what Wallace meant. He meant that Updike only ever talked about himself, which he highlights via the following Updike quote:

“Of nothing but me . . . I sing, lacking another song.”

But that isn’t actually what this means. That is, I’m not familiar with Updike, so I have no idea what he meant by it, but I’ll tell you what I mean by it. What this quote refers to is the fact that none of us actually has access to anything other than our own subjective experience. So everything a person writes ultimately comes out of nowhere but their own head; even when they’re writing about experiences that are totally alien to their own, they’re still writing about their own experiences hearing about those experiences (or making them up). This is what it means to “lack another song.”

The catch is that this is fine. I mean, it has to be, because there’s no alternative. We actually are each trapped inside our own perspective, but that doesn’t stop us from communicating, as long as we don’t expect perfection. Updike can only talk about himself, but we understand this, and we take it into account when we read his work, and this allows us to derive our own insights from Updike’s perspective, or to gain an understanding of the particular type of person that he is, or even to read him entirely critically as an example of what not to do. All of these things are valuable. (Again, I have no idea whether Updike’s work is actually worth reading by this standard, but we’re talking about the principle here. Here’s a pretty good blog post that makes this argument with regard to Updike specifically.)

And all of this applies just as strongly to Wallace’s work, despite his attempts to dodge the issue. Even on a totally naive reading of Wallace, isn’t it pretty obvious that he consistently “sings of himself”? Like, are we supposed to think that all that shit about tennis was just a coincidence? “Uncritical” self-absorption is preferable to a self-absorption that pretends to universality.

So that’s one game: subjectivity. Another game is objectivity. Consider “Consider the Lobster,” where Wallace invokes Peter Singer as support for the argument against meat eating. Wallace brings up Singer only in passing, on the way to his own quietist conclusion. But he couldn’t have reached this conclusion if he had actually taken Singer seriously, because Singer’s argument is part of a moral framework that doesn’t really give you the option to just “worry” about the issue.

The famous example that defines Singer’s approach goes as follows: You emerge from the tailor’s, having just purchased a very nice outfit for $1,000. As you walk down the street, you see a child drowning in the river. No one else is around to help. You’re a strong enough swimmer to save the child easily, but doing so will completely ruin your expensive new outfit. Do you save the child? Obviously, the answer is “yes.” But now consider that, instead of seeing a child drowning, you arrive home and find a letter from a charity asking for a $1,000 donation to save the life of a child in some far away country you’ve never heard of. Do you make the donation? Singer claims that the moral calculus is exactly the same in these two situations, and yet, most of us do not make these sorts of donations whenever we can. According to Singer, we ought to, and this results in a broad obligation to consider the moral effects of our spending choices from a utilitarian perspective. It’s the same deal with meat eating: just as $1,000 is not worth a child’s life, the taste of a good burger is not worth an animal’s life.

The point isn’t whether this argument is right or wrong, the point is that it imposes an obligation. In “Consider the Lobster,” Wallace goes well out of his way to make sure his argument doesn’t impose any obligations on anyone, and this is where he fails as an intellectual. Ideas aren’t toys; accepting an important idea ought to obligate you to change your life. But for Wallace, an idea is just an opportunity to reflect his own confusion. He avoids taking precisely the out that ideas are capable of providing: they can provide us with a framework from which to discuss an issue without having to rely on our own personal idiosyncrasies. “True objectivity” is impossible, obviously. (Singer certainly has his own ideological biases). But so what? That’s no excuse to not do our best.

And again, all of this applies just as strongly to Wallace’s work, despite his attempts to dodge the issue. Wallace does make explicit intellectual arguments which can be accepted or refuted on their own terms. I’ve done a little bit of this already, but let’s stick with the meat eating thing just for simplicity. The conclusion Wallace draws, that “it all still seems to come down to individual conscience,” is the one conclusion that is absolutely invalid. Eating animals is either morally permissible or it is not. If it’s not, you are obligated to avoid it to the best of your ability. If it is, then you don’t have to wring your hands about it. And given the current state of things, the latter is the conclusion that Wallace’s argument actually results in, making his position self-refuting. One does not have the option to stand still on an escalator.

So, you’re getting what’s happened here, right? Wallace succeeded despite his best efforts. That’s the amazing upshot of the language games argument: there is no such thing as intellectual isolation. Establishing a connection to other humans isn’t a prize you get for using language really well, it’s a prerequisite to language use in the first place. The mere use of language in any context is necessarily a connection to the broader human enterprise.

Wallace thought that expressing himself entailed this huge burden, but it’s actually impossible not to express yourself, as long as your audience is aware of what game you’re playing. And the problem with Wallace is precisely that he fooled his audience into avoiding this awareness. His approach makes it seem like he’s not playing games, like he’s just a really smart guy doing his best to figure things out, like he’s “the best mind of his generation.” But none of those three things actually exist.

And we’re not beholden to Wallace’s framework; we’re entirely within our rights to fix his mistakes. We don’t have to pretend like he was some kind of generational oracle and discuss him on that basis. We don’t have to play along with his attempted universalization of his own perspective. We can find what’s worthwhile in his work and apply it as needed, whether as insight, counterexample, or cautionary tale.

Once again, Wallace’s argument for linguistic prescriptivism acts as a microcosm of his overall approach. Let’s say you manage to successfully establish some arbitrary usage rule. Great. So what? Why does anybody have to care? Are you actually going to stop people from ignoring your rule whenever they feel like it? You’re not, because you can’t. Whether a person is understood when they speak isn’t up to you. It’s up to the world. And the meaning of Wallace’s work isn’t up to him, either. It’s up to us.

Nietzsche contra Wallace

Here’s a revealing aside from the Dostoevsky essay:

“Nietzsche would take Dostoevsky’s insight and make it the cornerstone of his own devastating attack on Christianity, and this is ironic: in our own culture of ‘enlightened atheism’ we are very much Nietzsche’s children, his ideological heirs, and without Dostoevsky there would have been no Nietzsche, and yet Dostoevsky is among the most profoundly religious of all writers.”

Okay, first of all, this is totally wrong. I’m supposed to be done with the debunking part here, but I can’t let this one slide. Nietzsche only discovered Dostoevsky in 1887 – too late to have influenced the major works that most defined his philosophy, Beyond Good and Evil and On the Genealogy of Morals (in fact, the two men were nearly contemporaries – Dostoevsky’s last work was written in 1880, Nietzsche’s only 8 years later. Nietzsche was never able to read The Brothers Karamazov because it had not yet been translated). It is true that Nietzsche was smitten with Dostoevsky after discovering him; during his final frenzy of work in 1888, Nietzsche repeatedly makes significant use of the word “idiot.” While this is entirely adorable, it’s a far cry from Dostoevsky being one of Nietzsche’s major influences. Moreover, while this is a bit much to get into here, seeing this connection as “ironic” is awfully superficial, as though Dostoevsky could be summed up as merely “Christian” and Nietzsche as merely “anti-Christian.” Nietzsche was, after all, a profound moralist – just not a Christian one.

That aside, it is precisely not the case that “we” are Nietzsche’s children. Most people are not atheists in either the literal or the metaphorical sense. As usual, Wallace is pretending like his own perspective amounts to a comprehensive explanation. What’s actually going on here is that one specific person is Nietzsche’s child: David Foster Wallace.

It might seem like you couldn’t find two more opposite personalities. Nietzsche, the unrepentant elitist, bombastic and reckless, guided by the past while reaching desperately into the future. Wallace, the determined populist, cautious and humble, embedded deeply in the present. Nietzsche was ignored in his own day due to being “untimely,” while Wallace was revered for his (alleged) ability to tap into the zeitgeist.

But the thing about opposites is that they’re two ends of one spectrum. Both men were engaged in a desperate struggle against what they saw as the creeping nihilism of their own time. Nietzsche saw a great void left behind by Christianity’s fading moral authority, a vast, flat plain on which only the “smallest” could survive. Of course, nature abhors a vacuum, even in morals, so the situation in Wallace’s day was quite different. Wallace saw a glut of meaning created by the rise of extreme pluralism, a great cacophony of noise through which no signal could be discerned.

The big difference is that Nietzsche was deeply self-aware in a way that Wallace was not. Nietzsche was explicit about the fact that his proposed new morality was based entirely on his own standards; indeed, that was the point. Wallace, while trying to be egalitarian, stumbled into the same territory unwittingly by universalizing his own particulars.

Nietzsche would not have been surprised:

“Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir; also that the moral (or immoral) intentions in every philosophy constituted the real germ of life from which the whole plant had grown.”

Nobody’s confused about the fact that Wallace was speaking from his own perspective. But people still talk about his arguments as though they have some kind of formal, universal validity. What Nietzsche is saying here is that this is never the case. It isn’t just the blatantly personal stuff: the supposedly analytical aspects of Wallace’s work are also only expressions of the type of person that he is.

As mentioned, the real trick here is that this isn’t a problem. It isn’t a problem for Nietzsche’s work, which is still valuable despite all the stuff he was blatantly wrong about, and despite the fact that we can no longer countenance his conclusions. And it isn’t a problem for Wallace’s work, because Nietzsche, the ailing diagnostician, has the cure for his crimes:

“The philosopher supposes that the value of his philosophy lies in the whole, in the structure; but posterity finds its value in the stone which he used for building, and which is used many more times after that for building – better. Thus it finds the value in the fact that the structure can be destroyed and nevertheless retains value as building material.”

This is why Wallace’s structure needs to be destroyed: so we can build better.

Building Better

Of course, it would be irresponsible to stop here without at least getting started on the whole rebuilding thing. This has hardly been a comprehensive overview of Wallace’s work (his fiction is a whole other topic), but we’ve been through enough to draw some basic conclusions.

The first and most important should be obvious by now: don’t try to universalize your own idiosyncrasies. In fact, as soon as you find yourself trying to make a big statement about something like “American culture” or “televisual irony,” it’s probably a good idea to just slow your roll. It’s commonly said that great art takes the particular and makes it universal, but that saying doesn’t mean what people usually take it to mean. It means that by expressing yourself creatively you provide other people with something that they can use to make connections that neither you nor they could have anticipated. You can’t force it like Wallace tried to. All you can do is express yourself within your limitations and trust your audience to meet you halfway. You have to play the wicked game.

When Wallace tried to simultaneously work from his own perspective and be objective, he was trying to avoid being “ideological.” But this is impossible; the point of the term “ideology” is precisely that everyone has one. It’s clear from the way Wallace deploys the term (which he does frequently) that he didn’t understand this. And the solution here is pretty straightforward: we always need to be cognizant of our own ideological assumptions, and we can’t let people like Wallace pretend like they aren’t arguing ideologically.

The second is to interrogate your damn frameworks. Wallace never did this and it always cost him. When he tried to talk about the politics of language, he shoehorned the whole thing into the “liberal/conservative” divide, because that’s all he knew about politics. When he talked about TV, his whole argument was based on the notion that TV was “ironic,” because that’s what everyone always says about it. Of course, this failure is what made his writing so appealing: he was telling people what they already knew (and this is where Wallace’s gift as a writer was more like a curse: it made his arguments more persuasive than they deserved to be).

The last is to not underestimate ideas. This is actually closely related to one of Wallace’s own insights – maybe his best. It occurs a couple of times in Infinite Jest, and it takes the form: don’t underestimate objects. What this means is that objects aren’t just things for humans to use; they have their own aspect of being that affects the way people interact with them. This is clearer than ever with the advent of smartphones, which are objects that are pretty obviously affecting people’s behavior in unanticipated ways. A piece of software is ultimately just an object, but its particular characteristics affect the people who use it. For example, one of the reasons search engines are so effective is not because they’re so brilliantly coded, but because people have learned how to phrase their queries in ways that are easy for a piece of software to process (such as focusing on improbable keywords). More disturbingly, we may even be learning to only want to ask things that can be answered by a search engine. The cliche that can be redeemed in order to describe this phenomenon is “the things you own end up owning you.”

Wallace failed to apply this insight to his treatment of ideas. He treated them like they were toys to bounce around (this is another reason why name-dropping really is a bad thing). Crucially, he treated Wittgenstein’s philosophy as an opportunity to merely reflect on his own anxieties. But as we’ve seen, if he hadn’t done this, if he had actually taken Wittgenstein seriously and followed his argument through, it could have resolved his problems. But this could only have happened if he had been willing to let an argument take him somewhere he wasn’t looking to go.

In a discussion of the “Death of the Author” theory, Wallace defines his position as follows:

“For those of us civilians who know in our gut that writing is an act of communication between one human being and another, the whole question seems kind of arcane.”

Let’s take him at his word.

David Foster Wallace was wrong about everything

(Part 1)

(Part 2)

While it’s a moderate amount of fun to go through and debunk all of David Foster Wallace’s silly arguments, there’s a real mystery here: how he could be so serious and thoughtful and yet so fundamentally clueless. While I very much don’t buy the whole “genius” angle (either in regards to DFW or in general), he does seem to have been smart enough that he shouldn’t have failed this comprehensively without a good reason. In other words, there must have been a fundamental flaw in his general approach – one which would be worth our while to identify and correct. In order to get to the bottom of this, we need to unpack the closest thing he wrote to a mission statement: “E Unibus Pluram.” This essay is where Wallace fully articulates his stance with regard to Our Modern Culture, which stance is, in short, opposed to irony and in favor of a sort of refined banality.

As usual, Wallace is taking a pretty basic idea, padding it with vague intellectualism, and using his substantial writing talent to make it look good. The idea that society nowadays is “too ironic” and “nothing means anything anymore” is common enough to have become its own cliche. As a result, there is a significant anti-DFW contingent that is largely motivated by an instinctive skepticism of anyone making this type of argument, which is a good instinct. Banality actually is a seriously bad thing and anyone who winds up in the vicinity of advocating it really needs to watch their step.

But we can do better than merely rejecting Wallace’s arguments on these grounds. First, this line of argument merits a thorough counterargument precisely because it’s so common. Second, if we accept that Wallace was a reasonably smart person and that he put a lot of work into his arguments, then it will be at least interesting to figure out how his efforts led him here. Finally, figuring out what Wallace’s deal was will help us come to a more complete understanding of what his work was really about.

In order for any of this to make sense, we need to start with a critical correction to Wallace’s framework: we need to define “irony.” I’ve mentioned that Wallace has a bad habit of not interrogating his frameworks, which leads him to draw overly broad conclusions, but in this case it’s worse. If the claim is that “irony” is destroying our ability to create meaning, then what we mean by “irony” is the entire issue.

Despite all the conniptions that people whip themselves into over the topic, the basic definition of irony is pretty simple: irony is when you use words to express something other than what those words actually say. The simplest example is sarcasm, which is when you use tone to indicate that what you mean is the opposite of what you’re saying. But irony in general does not necessarily convey the opposite of what you’re saying, it merely conveys something different. Note also that this definition does not imply any kind of motivation or ideological stance.

My favorite example for understanding irony is the “Friends, Romans, countrymen” speech from Julius Caesar (it’s in Act 3, Scene 2). As you’ll recall, Caesar has just been murdered by Brutus and the other senators, and an angry mob has gathered in the Forum demanding some answers. Brutus gives a simple explanation that satisfies the crowd, and then, being one of literature’s great honorable morons, leaves to allow Mark Antony to deliver the eulogy. Antony famously states that “I come to bury Caesar, not to praise him,” but the key to his speech is that he’s actually there to do neither. He’s there to incite a riot. The usual sense of irony is present when he repeatedly says that “Brutus is an honorable man”; certainly, this is the opposite of what Antony believes. But that’s not the point. Antony isn’t trying to convince people that Brutus is dishonorable, he’s trying to enrage them. Furthermore, Antony is being entirely sincere here. He actually loved Caesar and he’s actually pissed about him being murdered. Irony is merely the means by which he is taking this one action to advance his cause. It’s perfectly normal for irony and sincerity to coexist, because irony is not a worldview, it is a rhetorical technique. It can be used for whatever purpose one requires.

Despite this, it’s easy to see why various other concepts such as “detachment” or “cynicism” or “apathy” have glommed onto the concept of irony. Accepting irony as a legitimate method of communication is sort of like opening Pandora’s Box: everything becomes possible. It’s possible, for example, to use irony to avoid actually saying anything, or to use it to denigrate broadly without allowing for the possibility of a better alternative, but these are only possible uses of irony. Irony itself does not imply any particular motivation, which is why it’s so silly to say, as people so often do, that we’re living in an “ironic culture” or that irony is over because of a Broadway musical or whatever.

Okay, so, what’s the big deal if Wallace used a word wrong? He was referring to something with the word “irony,” so we should just be talking about whatever that thing was, right? Perhaps Wallace specifically meant the use of irony to stay cool and detached and avoid committing oneself, and that’s what he was arguing against. Unfortunately, this doesn’t work. The problem is that Wallace and the other cultural critics who lament our “ironic” society vastly overestimate the amount of irony that is actually present, because they lump together everything but the most po-faced sincerity under the “irony” label. This is ultimately the same old problem of Wallace using a broad brush to paint over the cracks in his actual analysis. If we go through Wallace’s arguments with a more rigorous understanding of what irony is and what it can do, we can both fix his conclusion and figure out where the flaws crept in.

So let’s talk about TV.

TV is My Friend

Wallace’s basic charge against TV is that it’s created a pervasively ironic culture through its combination of ubiquity and self-reference. Briefly: once TVs wound up in everybody’s homes and became a normal part of human life, TV programs then had to incorporate TV itself into their own content in order to maintain verisimilitude. One generation later, this self-incorporation has itself become part of everyone’s life experience, so now TV has to refer to itself referring to itself. Note that this is as far as it goes; there’s not an infinite number of possible layers of reference because at this point the Ouroboros has caught its own tail. If you try to add another layer you’ll still just have TV referring to itself referring to itself, which is the same as the third layer. Also note that the timeline for this checks out: TV first becomes ubiquitous during the naive 1950s, gains its first level of detachment one generation later, in the cynical 1970s, and achieves its true form of black-hole postmodernism in the nihilistic 1990s.

This is wrong. That is, all of this stuff did sort of happen in a basic sense, but it’s wrong to accept this as a complete explanation of American culture, which is exactly what Wallace is doing in this essay. The simple fact is that it’s a big world out there and there’s tons of other shit going on. Part of the problem with Wallace is that he’ll say one thing that’s correct in a limited way, leading people to accept his argument, but then go on to draw an unacceptably broad conclusion from it.

Wallace evokes the pervasiveness of TV with the statistic that “television is watched over six hours a day in the average American household.” He describes the situation as follows:

“Let’s for a second imagine Joe Briefcase as now just an average U.S. Male, relatively lonely, adjusted, married, blessed with 2.3 apple-cheeked issue, utterly normal, home from hard work at 5:30, starting his average six-hour stint in front of the television.”

Now, this is obviously a rhetorical description. Wallace is aware that an average is not a quota. But these sorts of clever flourishes are dangerous precisely because of their ability to smuggle in unintended assumptions. That’s why it matters that the situation Wallace is describing here is totally impossible.

If we assume that Joe here works for 8 hours a day, sleeps for 8 hours, and spends 2 hours on commuting/eating/errands/etc. (a significant underestimate), the six-hours-a-day statistic then implies that he spends 100% of his free time watching TV. Also, in order for six hours to be the average, some people would have to watch more than that – which, for anyone on Joe’s schedule, means watching more hours of TV than they have free hours in the day, which is mathematically impossible. So, what does the six-hours-a-day statistic actually mean? For Wallace, its only significance is that it’s a big number. But if we consider the actual circumstances required for it to be true, we come to a very different conclusion: most TV watching occurs in the background.
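For what it’s worth, the arithmetic here is simple enough to check directly. This is just a back-of-the-envelope sketch using the round numbers above, which are the essay’s assumptions rather than real survey data:

```python
# Joe Briefcase's daily time budget (assumed round numbers, not survey data)
hours_in_day = 24
work, sleep, errands = 8, 8, 2  # the errands figure is a deliberate underestimate

free_time = hours_in_day - work - sleep - errands
tv_average = 6  # "television is watched over six hours a day"

print(free_time)               # hours of free time left to Joe: 6
print(tv_average / free_time)  # fraction of free time TV would consume: 1.0
```

Since the average alone already consumes 100% of Joe’s free time, anyone above the average with the same schedule would need more TV hours than free hours – hence the conclusion that much of that “watching” must be happening while people do other things.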

This is fatal to everything that Wallace goes on to argue. If people are mostly watching TV in the background, then they are precisely not obsessively analyzing it and drawing deep philosophical conclusions in the way that Wallace needs them to be in order for his analysis to be applicable. The person who is doing that is, of course, Wallace himself. This is what I mean about smuggling in assumptions. Wallace is trying to create a “general” description of the TV-watching experience so that his argument can apply to everyone. But of course, there is no “everyone.” Each person has their own circumstances and personality, and as such, will interpret the same content in a different way; this is not the kind of thing that can be generalized. I’m sure that, for Wallace, the experience of watching TV was a deep source of existential anxiety in precisely the way he describes. But Wallace has no justification for projecting his own experiences onto everyone else.

In short, the content of TV or any other medium cannot cause the kind of broad social pandemic Wallace is attempting to diagnose here, because everyone will have their own idiosyncratic reaction to it. There is, of course, something that can cause broad social effects: structural conditions, which do affect everyone in the same way. This is what it means to say of TV that “the medium is the message.”

To understand this, let’s consider the current televisual situation and what it means for Wallace’s arguments. In this regard, “E Unibus Pluram” is seriously dated; it was written in 1990, and time has staggered on quite a bit since then. But this is actually useful to our analysis: because both the content and the situation of TV are a bit different now than they were in the early 90s, anything that’s the same between then and now cannot be explained by TV in the way that Wallace argues.

The structural change is that the rise of on-demand TV (first via DVDs, now via streaming) means that it is no longer “background viewing.” Quite the contrary, the current M.O. of the TV audience is “binge-watching,” which you can tell is a new thing because we had to make up a term for it. The result is that the TV experience is now more analytical and fannish, rather than merely fodder for small talk. The evidence for this is quite apparent: first, geek culture, using the definition that a “geek” is someone who’s a little too into a niche cultural product, is now mainstream. The current run of superhero movies, for example, is starting to rely on its audience having the sort of obsessive in-knowledge that used to be the province of only the geekiest subcultures. Second, there’s about a billion thinkpieces all over the internet analyzing any new TV show that achieves any kind of popularity.

And third, the content itself has changed to better fit this new reality. Rather than sitcoms with recognizable premises and easy-to-get jokes, long-form dramas with complicated plots are now the order of the day, precisely because viewers can now be counted on not only to be capable of sorting such stories out, but to want to. This also means that TV shows tend to be less “zany” and more substantive; the sort of self-reflexive irony that Wallace opposed is no longer in vogue, precisely because TV shows now need to engage viewers for a long commitment and get them to talk the shows up to others, rather than merely flatter them with a cheap sense of recognition.

Furthermore, TV’s new angle is merely one aspect of a broader cultural shift away from the “ironic” 90s. Fannishness and hyper-engagement are the new normal, not just for TV but in general. The proliferation of spammy listicles and hyperbolic headlines demonstrates that the internet is replete with an aggressively (which is to say intentionally) naive sincerity. Our disdain is now reserved not for earnestness and candor but for hot takes and “negative” criticism. A lot of this is of course due to the structure of the internet itself, which allows people to coalesce around niche interests and choose not to read things that make them feel bad (or challenged, or like they might be wrong about something important). But in a sense it’s also just a mere trend, just like the whole “dark and edgy” thing in the 90s was a mere trend.

But isn’t this exactly what Wallace predicted, that the next trend in media would be one against irony? It would be – if Wallace were talking about trends. But he’s not, he’s attempting to diagnose a pandemic: the modern trap of informed meaninglessness. Thus, the question to ask is: now that the alleged virus is gone, what of the patient? Do we live in a just society that provides everyone with the opportunity to find meaning in their own lives? Are we no longer haunted by a vague sense of anxiety and guilt over our position in the world? Is image now less important than substance?

Indeed, the fact that Wallace’s analysis is still popular and people are still looking to him for guidance answers these questions all by itself. Irony was never the problem, and Wallace’s argument can, fittingly, be reduced to a cliche: he mistook the symptoms for the disease.

It’s TV’s Fault Why I Am This Way

To understand how Wallace got this wrong, let’s take a closer look at some of the examples he uses. By identifying the errors in his specific arguments, we can move toward a correction of his overall approach.

One of the central arguments in the essay is Wallace’s analysis of a Pepsi commercial. This is kind of an own goal all by itself, but I’m going to go ahead and take it seriously. The commercial is your basic “crowd of attractive young people having fun” type of deal, which Wallace analyzes as follows:

“There’s about as much ‘choice’ at work in this commercial as there was in Pavlov’s bell-kennel. The use of the word ‘choice’ here is a dark joke. In fact the whole 30-second spot is tongue-in-cheek, ironic, self-mocking . . . In contrast to a blatant Buy This Thing, the Pepsi commercial pitches parody. The ad is utterly up-front about what TV ads are popularly despised for doing, viz. using primal, flim-flam appeals to sell sugary crud to people whose identity is nothing but mass consumption. This ad manages simultaneously to make fun of itself, Pepsi, advertising, advertisers, and the great U.S. watching consuming crowd.”

The Pavlov reference is apt, if obvious, but how does this make the commercial ironic? If it’s “utterly up-front” about what it’s doing, doesn’t that make it entirely sincere? Given that all advertisements necessarily have the same purpose, isn’t a “parody” of an ad actually just another ad? Certainly, one can discern a contradiction between the preaching of “choice” and the fundamentally coercive nature of advertising, but is the word “choice” doing any actual work here other than supplying a vaguely positive connotation? Is this supposed contradiction actually relevant to what the ad is doing?

In fact, when we’re talking about commercials, we should be looking at the most superficial interpretation, because that’s the one that the vast majority of people are going to pick up on. In this case, there’s a bunch of attractive young people having fun and drinking Pepsi, so the message is pretty obviously that Pepsi equals fun party times. No analysis required.

And this is exactly how commercials actually work. The point of a commercial is very much not to act as some sort of intellectual Rubik’s Cube; the point is to throw a brand name at you along with some positive images to create an association in your mind between the brand and whatever the images connote (fun, adventure, sex, whatever; usually sex), such that the next time you see the brand in a store you’re subconsciously more inclined to buy it. This is why commercials are such a rich source of social stereotypes: they can’t afford to portray anything that isn’t instantly recognizable.

And even if you do accept an ironic reading of a commercial, it’s ultimately beside the point, because the functional purpose of a commercial is to move product. Nobody really cares what you think about it. Companies aren’t spending billions of dollars a year producing these stupid things as some kind of grad school art project. They’re doing it because it works.

Consider what Wallace claims is a typical reaction to this commercial (he does this by once again invoking “Joe Briefcase” from above – keep this guy in mind, because he’s going to turn out to be pretty important):

“The commercial invites Joe to ‘see through’ the manipulation the beach’s horde is rabidly buying. The commercial invites a complicity between its own witty irony and veteran viewer Joe’s cynical, nobody’s-fool appreciation of that irony. It invites Joe into an in-joke the Audience is the butt of. It congratulates Joe Briefcase, in other words, on transcending the very crowd that defines him. And entire crowds of Joe B.’s responded: the ad boosted Pepsi’s market share through three sales quarters.”

But that last line is a non-sequitur: how do we know that this analysis is why the commercial was effective (also, it’s one data point out of about a billion, but let’s stay focused)? If Wallace’s reading is correct, if you actually saw the commercial and then felt this way, why on Earth would this convince you to go out and buy a Pepsi? Wouldn’t you simply pat yourself on the back for getting the joke, and then continue to express your superiority by not buying the product that you’re supposedly laughing at? On the contrary, if the ad was successful, it can only be because it operated in the usual way: by creating a subconscious positive association in the viewer’s mind. This is why it doesn’t matter what your intellectual analysis of a commercial is: because commercials operate below the level of conscious analysis. Watching a commercial automatically creates an association in your mind, and that’s it.

So that’s the first half of the problem: any one line of intellectual analysis can only get you so far; sometimes it’s just not applicable. The second half is that, as mentioned, irony is a lot more versatile than Wallace makes it out to be.

Wallace’s claim is that TV’s approach functions as an “ironic permission slip” for harmful behavior. That is, it criticizes from a safe distance, allowing the viewer to accept the criticism of their own behavior without actually feeling the need to change it. Since the issue has been addressed but the viewer hasn’t really been challenged with anything, they’re free to resume their old habits, only now they’re able to pretend that they have a real justification for doing so.

You may recall that this is precisely the accusation I’ve made against Wallace’s work: that an essay such as “Consider the Lobster” gives the impression of having addressed an important issue but ultimately allows the reader to accept the argument and then keep doing whatever they want. Given that Wallace was not being ironic, it’s clear that irony itself is not the relevant distinction; just as easily as one can be glibly ironic, one can be glibly direct. We can complete the argument by noting that the inverse is also true: a truly challenging argument can be made directly, or it can be made by using irony. Wallace’s own examples of “ironic” television prove this point.

Here’s Wallace’s list of examples of TV’s patriarchal authority figures, meant to illustrate a decline into ironic shallowness:

“Compare television’s treatment of earnest authority figures on pre-ironic shows – The FBI‘s Erskine, Star Trek‘s Kirk, Beaver‘s Ward, The Partridge Family‘s Shirley, Hawaii Five-0‘s McGarrett – to TV’s depiction of Al Bundy on Married . . . with Children, Mr. Owens on Mr. Belvedere, Homer on The Simpsons, Daniels and Hunter on Hill Street Blues, Jason Seaver on Growing Pains, Dr. Craig on St. Elsewhere.”

Well, that certainly is a lot of names. So Wallace must be right, right? Unless, you know, these examples actually aren’t all the same thing.

Let’s compare two contemporaries: Al Bundy and Homer Simpson. The point of Al’s character is precisely that he’s a terrible person, and we therefore enjoy pointing at him and laughing. We may even feel a smug sense of superiority, knowing that, however much we may suck sometimes, at least we’re better than this asshole.

This is not at all how we feel about Homer. The fundamental difference is that we’re rooting for Homer, even as we recognize that his problems are often his own stupid fault. Indeed, Homer’s foolishness is presented in such a way that we actually identify with it; the fact that Homer’s annoyed grunt has become a universal expression of self-directed frustration demonstrates our collective recognition that there’s a little Homer in all of us.

Furthermore, using Homer as an example of a subverted authority figure is well off the mark, because Homer is rarely presented in this context. We most often see him being kicked around by the uncaring forces of the broader world, in which he is merely one more fork-and-spoon operator in Sector 7-G. In fact, the Simpson family unit is actually the one place where the show’s usually unsparing satire balks. Not only does the family always stick together, but it is specifically as a father that Homer is able to achieve the few victories available to him in life. The ironic angle of The Simpsons does not prevent us from caring, as Wallace would have it. It’s precisely the opposite: by portraying a recognizably broken world and showing us the kind of moral victories that can be realistically achieved in such a world, The Simpsons makes caring a plausible option.

Next, here’s a list of examples of, uh, something related to “postmodern irony,” I guess:

“The commercials for Alf‘s Boston debut in a syndicated package feature the fat, cynical, gloriously decadent puppet (so much like Snoopy, like Garfield, like Bart, like Butt-Head) advising me to ‘Eat a whole lot of food and stare at the TV.’”

Seriously, I have no idea what all these characters are supposed to have in common. It’s times like this that the accusation of “name-dropping” is more than just an easy diss; it’s nice that you can think of a bunch of supposed examples of whatever it is you’re talking about, but guess what, you still need to actually support your argument.

Anyway, this is wrong, again. The counterexample, obviously, is Bart Simpson. Bart’s mischief is not an expression of decadent nihilism, it’s his attempt to be a person in a society that is trying to turn him into a robot. The fact that The Simpsons “ironically” validates Bart’s bad behavior (to an extent; there are also counterexamples, such as “Bart vs. Thanksgiving,” where the show clearly intends us to understand that he’s gone over the line) isn’t supposed to make us feel comfortable with it; it’s actually an incisive criticism of the society that has produced him. Contra Wallace, the purpose of the parody is not to allow the audience to laugh at the situation from a safe, comfortable distance. The purpose is to make the bizarre world of Springfield seem all too real.

So hey, did you notice that Wallace used The Simpsons as an example in both lists, and that in both cases it was wrong in the same way, and that if we consider this example more comprehensively it completely undoes his argument? You did, right?


Teacher, Mother, Secret Lover

The Simpsons is a fundamentally ironic show. The setup is actually a specific parody of the stereotypical 80s family sitcom, though this is somewhat difficult to understand from a modern standpoint, as said sitcoms have largely ended up in the dustbin of history. The point is, the setting and characters are basically all explicit stereotypes, and, per Wallace’s argument, we as the audience are expected to understand this “ironically,” that is, to recognize the stereotypes and see through them. Wallace’s claim is that the function of this sort of irony is to merely criticize without committing to a real position, such that anything the show actually tries to say can be dismissed as just being “part of the joke.” But this is deeply incorrect in both ways: first, the function of irony on The Simpsons is not merely to criticize; second, and more importantly, the show’s irony does not prevent it from making sincere statements.

While I could probably make this argument by just picking random episodes, let’s try to identify some of the more provocative examples. “Itchy & Scratchy & Marge” is a good fit, since it’s one of the classic “ironic take on social issues” episodes. Marge fulfills the role of the stereotypical “concerned housewife” when she organizes a boycott of The Itchy & Scratchy Show due to its violent influence on children. This episode lambasts both the priggish moralists of the censorship campaign and the hack corporate cartoonists who just want to be left alone to churn out their mindless program in peace. Wallace’s claim is that this sort of setup allows us to laugh at the issue from a safe distance without actually engaging with it. But this is not so: the purpose of the episode’s ironic tone is precisely to engage with the issue in a deeper way than by merely taking one side of it.

Consider the scene where Roger Meyers, Jr., the cynical, cigar-chomping executive behind Itchy & Scratchy, argues that cartoon violence can’t be a real problem because violence already existed before cartoons were invented. We’re meant to read this argument ironically: to recognize both that it has a basic validity and that it’s fundamentally fallacious, and also to understand why Meyers is making it. This ironic angle actually draws us in to the argument; we think: “well, of course TV didn’t invent violence, but that doesn’t mean it has no effect on anything.” Furthermore, it’s clear that Meyers doesn’t actually care and is just making the easiest, most self-serving argument he can come up with. Meyers is the villain here, and not in an “ironic” way: he’s actually a bad person for not caring about the social effects of his program. This directly challenges the prejudices of the viewers, who are naturally expected to be receptive to the anti-censorship argument.

On the other side, consider how Marge’s protest ultimately fails because she’s unwilling to go along with her histrionic supporters in boycotting Michelangelo’s David. The relationship is supposedly that these are both issues about “freedom of expression,” but we can see that this is absurd. Marge has a specific grievance against Itchy & Scratchy; she started the protest because the show actually caused Maggie to attack Homer with a mallet. That’s the actual issue, and the fact that the episode uses irony to deconstruct the standard protest narrative without ignoring the human aspect at the heart of it shows us that there’s a better way to handle the issue than to engage in tired arguments about “censorship.” The point of the irony is that the framework we use to talk about this issue is flawed. This episode is not mere criticism; it encourages us to look beyond the usual rhetoric and focus on the things that actually matter.

The Simpsons is also full of entirely sincere moments, one of the deeper ones occurring in “Bart Sells His Soul.” The episode begins with a fairly standard ironic take on religion, mocking both Reverend Lovejoy’s cynical approach to pastoring and Milhouse’s naive acceptance of all manner of pseudo-religious nonsense. Bart takes the expected position of the viewer: totally unmoved and entirely willing to give up what little remains of his spirituality for $5. But it’s Bart’s position that the episode goes on to attack; while nothing that happens is dramatic enough to really disprove Bart’s argument, the little details of his situation add up to a deep feeling of wrongness. We come to feel that, while Bart’s position may be a smart one to take, it’s ultimately not a wise one.

By the end of the episode, the ironic angle is totally gone, and the show, through Lisa, ends up making a straightforward philosophical argument:

“But you know, Bart, some philosophers believe that nobody is born with a soul, that you have to earn one, through suffering, and thought, and prayer, like you did last night.”

But this statement is only meaningful in the context of the episode’s previous disdain for religion as it is normally conceived. It is precisely through this criticism that the show is able to suggest to us that there may actually be something there, deeper than where we normally look (certain modern atheists could perhaps learn something here). Furthermore, while Bart appears to ignore Lisa’s philosophizing, he does so while eating the piece of paper symbolizing his soul, implying that, even without accepting the explicit argument, Bart has internalized something significant through his experience.

Finally, let’s consider a counterexample. “The Principal and the Pauper” is precisely the kind of thing Wallace is complaining about: an episode whose self-referential irony prevents it from saying anything about anything other than itself. I actually have more sympathy for this episode than most people; I recognize what it’s trying to do, and I can understand why someone writing for Season 9 of The Simpsons would be interested in making an episode like this. But the fact remains that it’s fundamentally hollow, and compared to the show’s prior greatness, it’s no surprise that this episode came as a bitter disappointment.

And that’s precisely the point: this episode is universally reviled. Not only is meaningless self-referential irony not taken for granted, it isn’t even expected. This episode is an outlier in terms of up-its-own-ass-ness (or at least it was, back when the show was worth talking about). And people responded to it in exactly the opposite way from what Wallace is claiming: they didn’t pat themselves on the back for being in on the joke, they were fucking pissed. The vehemence of the reaction was enough that the writer, future Futurama stalwart Ken Keeler, used the episode’s DVD commentary as an opportunity to try to explain himself.

And while The Simpsons is unique in many ways, it’s far from the only counterexample to Wallace’s argument. Along the same lines as “The Principal and the Pauper,” Family Guy is widely hated for using shallow irony to avoid being meaningful in any way whatsoever. Shows that used irony constructively include The Daily Show and The Colbert Report, which increased political engagement and helped make liberalism cool again (how effective this was at actually changing anything is a separate issue). Consider also the environmental episodes of Futurama, which use satire to make the vital point that there’s no such thing as short-term environmentalism. Shows that are enjoyed specifically for their lack of irony include Last Week Tonight, which is popular due to the fact that it actually explains political issues, and Friends, a still-beloved show which surely ranks among humanity’s most painfully earnest creations.

[update: when I wrote this I actually had no idea how popular Friends still was. Turns out it’s like uber whoa. Where’s all that irony when you need it?]

Furthermore, ironic self-reference is not confined to TV, and it’s hardly a modern invention anyway; stories about stories are as old as, like, stories. Examples can be found even in the work of a writer whom Wallace upholds as a shining example of how to address serious issues in fiction: Fyodor Dostoevsky.

The Brothers Karamazov begins with a “From the Author” note, which is in fact not from the author, but from the narrator of the story. This is interesting because the narrator is not actually a character in the story. He sometimes refers to himself as “I” and references his own location and observations, but we never actually meet the man. At other times, his voice completely dissolves and the book defaults to standard third-person omniscient narration; many sections are about the private thoughts of the characters when they are alone. Yet when the narrator’s voice does emerge, we see that he has his own quirks and is in fact a bit of an overwriter – the reader (that is, the reader of late-1800s Russia) is expected to notice this and to understand it as a deliberate choice on Dostoevsky’s part, rather than as bad writing. Furthermore, the opening note itself actually expresses an opinion on the story and suggests an interpretation, one that the narrator himself admits we might not agree with.

In this way, the line between fiction and reality – between the narrator, the characters, and the actual author of the actual book – becomes blurred. This is exactly the kind of thing that people would describe as “postmodern irony” if someone like David Foster Wallace were doing it (Wallace’s use of narrative voice is actually quite straightforward by comparison). So, doesn’t Wallace’s critique also apply to Dostoevsky? Isn’t the function of devices like these to distance us from the story, to let us experience it from a safe remove without actually grappling with its ideas? As another example, Dostoevsky has a habit of mocking his real-life political opponents by placing their arguments in the mouths of his more ridiculous characters; doesn’t this allow us to merely laugh at these ideas rather than engaging with them?

This line of thinking is obviously silly, because Dostoevsky is a Serious Writer whom we are required to Take Seriously. And we are correct to do so; when we’re talking about someone like Dostoevsky, we understand quite well that his artifice is a crucial part of how he achieves his intended (or unintended) effects. Yet when we’re talking about TV shows, the name of the game is to find a way to dismiss any possible importance in the actual content as quickly as possible. In this way, we can see that by ascribing an improbable amount of influence to TV, Wallace is in fact not taking it seriously; wrapping up the whole enterprise in a box labelled “irony” is a way to avoid engaging with the many things that are actually going on (The Simpsons being merely one of them, even in the 1990s). Wallace implicitly assumes that, unlike real art, TV is just a thing, and is therefore susceptible to a simple explanation of its one ideology and the one effect that ideology has on society. This becomes even clearer when we realize that Wallace’s argument treats TV shows and advertising in the same way, as though they were the same thing just because they’re located in the same place. This is as foolish as trying to come up with one single thing that all of “Russian literature” means and explaining all of Russian society on that basis.

Finally, Wallace’s argument that we’re all trapped in Ironic Purgatory is actually self-refuting. If we were, how could any of us understand what Wallace was saying? On the contrary, the fact that his charge against irony was met with such a fervently positive reception (viz. that fucking commencement speech) proves precisely that we are not all entangled in a morass of ironic self-reference, we are not content to be merely “in on the joke,” and we can quite easily recognize genuine emotion when we encounter it.

The truth is that there is no irony problem, and the reason I spent forever getting here is that this myth just won’t die. Irony is a tool, it has a variety of uses, and the idea of “ending irony” is as nonsensical as it is undesirable. Wallace paints a provocative picture of “postmodern” paranoia, but the truth is he’s tilting at windmills. He’s Don Quixote in reverse: so entranced by the mythology of “postmodern irony” that he is unable to see the basic nobility of the real world.

Reading is Fundamental

Wallace’s excursion into TV land is actually the lead-in to a point about literature. Specifically, Wallace claims that the self-referential irony of TV has spread out to infect avant-garde literary fiction. As an example, he cites Some Book, by Some Guy, which appears to be one big meaningless ironic pastiche of consumer culture, or something. Here’s the thing. I could obviously check what book Wallace is talking about and try to assess his argument, I’ve got the essay right here, but I have no reason to actually care, because I’ve never heard this book or its author referenced in any other context. I’m not the most clued-in when it comes to cutting-edge literary fiction, but when Wallace talks about people like Pynchon or DeLillo, I know what he’s referring to; indeed, Wallace is able to easily explain the influence of these authors in his essay. By contrast, when he gets around to talking about Leyner’s book (the guy’s name is Leyner), he analyzes it convincingly enough, but he fails to do the one thing that’s required for his argument to actually matter: demonstrate that Leyner is actually representative of, like, anything at all. As it stands he’s just one guy who wrote a goofy book.

If we think about why Wallace chose this example, the mists begin to clear: Wallace is talking about literary fiction because it’s his genre, and he’s worried about the use of irony because that’s the problem that Wallace himself was trying to deal with when it came to his own work. The rationale Wallace gives for talking about Leyner’s book is that it was apparently “the biggest thing for campus hipsters since The Fountainhead” (I don’t understand this joke); all this means is that it was popular in Wallace’s milieu. But the fact that this is a particular concern of Wallace’s is precisely why he does not have license to portray it as evidence of a broad cultural malaise.

So, why is this a problem? Wallace is just talking about an area of his own personal experience, right? He isn’t making a broad argument about American culture, he’s just talking about one particular use of irony and one particular response to it, so I’m totally missing the point here, right? Except not even, because Wallace totally is claiming to make a comprehensive argument that applies to all of TV, all of literature, and therefore all of America. Remember good old Joe Briefcase, and how Wallace presents him as a generic American in order to make a completely general argument, and how this is a huge problem because it allows Wallace to project his own assumptions onto everyone else without justification? Observe how he initially sets the stage:

“Every lonely human I know watches way more than the average U.S. six hours a day. The lonely, like the fictive, love one-way watching. For lonely people are usually lonely not because of hideous deformity or odor or obnoxiousness – in fact there exist today support- and social groups for persons with precisely these attributes. Lonely people tend, rather, to be lonely because they decline to bear the psychic costs of being around other humans. They are allergic to people. People affect them too strongly. Let’s call the average U.S. lonely person Joe Briefcase.”

First of all, Wallace’s argument here is self-contradictory. If “lonely people” are well above the average in terms of TV watching, then the broader population of non-lonely people must be well below the average. But if this is the case, TV should be catering to this broader group of people, on account of there’s way more of them and TV is not a niche interest, meaning all of the conclusions Wallace draws about TV on the basis of what “lonely people” are like are wrong.

But the real significance of this quote is that people who “decline to bear the psychic costs of being around other humans” are not necessarily “lonely” – they are specifically introverts, and, as painful as it is for me to admit this, most people actually do have the particular mental disorder that allows them to be at ease around other humans. The reason Wallace focuses on introverts here is, of course, that he himself is an introvert. But given that he fails to explain this, he seems not to understand that this is something that makes him different from most people. That is, I’m sure Wallace started from the point of wondering why he felt differently than everyone else seemed to, but he then went on to, through projection and overgeneralization, explain his own problems as problems of the world.

With this in mind, we can see what Wallace is doing as he continues his setup:

“Joe Briefcase fears and loathes the strain of the special self-consciousness which seems to afflict him only when other real human beings are around, staring, their human sense-antennae abristle. Joe B. fears how he might appear, come across, to watchers. He chooses to sit out the enormously stressful game of U.S. appearance poker.”

Note not only the way Wallace is attributing specific characteristics to what is supposed to be a generic example character, but the evocativeness of this description of “Joe’s” feelings. Isn’t it terribly obvious that Wallace can only be describing the way that he himself feels? (I’ll vouch that this is a pretty decent expression of what moderate to severe introversion feels like). But why doesn’t he just say so? Why, indeed, does his example person appear to be designed precisely to be as unlike Wallace himself as possible? Consider that Wallace was certainly not the briefcase-carrying, 9-to-5 sort, and consider the earlier description of Joe as the head of a stereotypical nuclear family – also very much unlike Wallace.

The move that Wallace makes here is crucially important: he starts by describing his own feelings, invents an example character to embody those feelings, and then goes on to use this character as a fully generic example of whatever he feels like talking about at the moment. In this way, Wallace fools himself into thinking that his own feelings apply to everyone else, allowing him to draw broad conclusions through mere introspection. And this is not a con job: Wallace is not conscious of the fact that he’s doing this. Examples:

“We are the Audience, megametrically many, though most often we watch alone”

“One reason fiction writers seem creepy in person is that by vocation they really are voyeurs”

“by 1830, de Tocqueville had already diagnosed American culture as peculiarly devoted to easy sensation and mass-marketed entertainment”

“We,” “They,” “American culture.” So yes, Wallace does think that when he talks about TV he is talking about the TV audience in general, when he talks about “Image-Fiction” he is talking about literature in general, and when he talks about culture he is talking about America in general.

And he is, of course, wrong to be doing this. Being stuck in a rut of over-self-consciousness is Wallace’s problem, not TV’s. Being unable to work through modern meta-irony in order to say something meaningful is Wallace’s problem, not fiction’s. And the dearth of meaning in our semantically overcrowded society is . . . everyone’s problem, obviously, but Wallace’s entire explanation of how and why we got here is completely wrong, because the whole time he was only talking about himself.

David Foster Wallace Was Wrong About Everything

This realization recontextualizes the essay quite a bit. In order to correct Wallace’s fundamental error, we must not only avoid his generalizations, we must comprehensively edit his “we” to an “I,” his “U.S. Culture” to “my subculture,” and his “Joe Briefcase” to “David Foster Wallace.” We must understand Wallace’s work not as analytical or investigational or even observational, but as confessional.

The key, finally, is that this applies to everything Wallace wrote. Sometimes this is unproblematic: when Wallace gives his thoughts on Kafka or David Lynch, for example, he’s performing straightforward criticism; we understand that these are his arguments. But Wallace evidences this kind of restraint only rarely. When Wallace celebrates a new usage guide that supposedly resolves the deep political problems of language, it’s because said guide resolves the problem of Wallace’s own relationship with language; he assumes everyone else feels the same way; he’s wrong. When Wallace gets excited about McCain’s candidacy, it’s because McCain is providing what Wallace wants out of politics; he assumes everyone else wants the same thing; he’s wrong.

But it’s not enough to just reinterpret Wallace as a personal writer, because it is the specific move he makes of starting from a disguised version of his own prejudices and sublimating them into an intellectual argument that makes him actually wrong. Because he starts on unsteady footing, he stumbles with each step. The best place to see how this works is in “Authority and American Usage,” as this is both Wallace’s most comprehensive and most personal argument.

When Wallace finally gets around to addressing the actual “Descriptivist” linguistic argument – that languages have a set of real rules about how they actually function and several sets of fake rules that people make up for various reasons – this is how he begins his counterargument:

“A listener can usually figure out what I mean when I misuse infer for imply or say indicate for say, too. But many of these solecisms – or even just clunky redundancies like “The door was rectangular in shape” – require at least a couple extra nanoseconds of cognitive effort, a kind of rapid sift-and-discard process, before the recipient gets it.”

This is literally an unbelievably weak argument. Wallace actually has to say “nanoseconds,” because if he had phrased this in the usual way and said “seconds,” he would be making a stronger claim – one that is obviously wrong. But by softening his claim, he makes the argument worthless, because a) we can’t possibly determine the average cognitive “work” of an utterance down to the nanosecond and b) if it actually is just a nanosecond, then it’s not worth the effort to correct it. Ergo, since this argument is not plausible, it must not be Wallace’s actual argument.

But of course, Wallace is not making a linguistic argument at all; he is merely expressing himself, which is to say venting his own prejudices. The feeling that Wallace identifies as “extra work” is in fact nothing but his own feeling of irritation. This insight explains everything that is so odd about that essay. It explains why Wallace doesn’t properly engage with the work of professional linguists, the people who actually study the things he’s talking about. It explains why he meanders so broadly through so many different perspectives and ideas without properly connecting them to his main point. It explains why he chides others for making shallow, self-serving arguments and then makes even shallower, more self-serving arguments himself. And it explains why he gets an issue that he’s so passionate about so fundamentally wrong.

And so, when Wallace laments the “dead end” of irony, he’s merely addressing his own limits as a thinker. Consider his conclusion:

“The next real literary ‘rebels’ in this country might well emerge as some weird bunch of anti-rebels, born oglers who dare somehow to back away from ironic watching, who have the childish gall actually to endorse and instantiate single-entendre principles. . . . The new rebels might be artists willing to risk the yawn, the rolled eyes, the cool smile, the nudged ribs, the parody of gifted ironists, the ‘Oh how banal‘”

Think about what Wallace is actually saying here. He’s seriously claiming that anyone making a sincere argument is necessarily subject to ridicule and eyerolling. He has to be, because otherwise he has no argument. If this is just something that happens sometimes, then the “problem” is just that some people are jerks. But the idea that there are literally no sincere statements anymore, that ironic parody is just so devastating that no one’s willing to “risk” making them, is ridiculous. Again, what’s happening here is that Wallace is making a broad cultural pattern out of his own anxieties. It is Wallace who is constantly afraid that someone will laugh at him if he comes across as too sincere. Most people do not have this problem.

Indeed, the idea that irony has a vastly increased prevalence in modern times is itself highly debatable; given that the concept dates back to at least Socrates, I’m pretty sure people have had a handle on it for a while now. Frankly, the whole thing about Vietnam/Watergate/whatever being some kind of crucial turning point for cultural values is basically one big Kids These Days rant. Which, temporally speaking, you’d think would be over by now, but it seems to have stuck for some reason. The fact is that the overwhelming majority of Americans are still conventionally patriotic, even those who are “cynical” about politics, and the number of us who actually want to fundamentally change the structure of society, as opposed to “reforming” it to curb its “excesses,” is statistically insignificant.

Second, even if we assume that we are “more ironic” now, this really means that we’re better at communicating. We can understand complex arguments at various levels of remove. We are less easily fooled by the stated beliefs and motivations of deceivers. And, of course, we can use irony ourselves in order to say things in more effective ways than merely blurting them out and hoping for the best. By realizing that irony is a tool rather than an ideology, we can actually use it to express our sincere feelings more effectively.

Thus, Wallace’s core point, that we’re all lost in the labyrinth of irony and we need to find our way back into the daylight of sincerity, is ultimately an expression of his own discomfort with the conclusions we’ve drawn from the events of the 20th century – and with the realization of where we need to head next.

(Part 4)

David Foster Wallace as reactionary

(Part 1)

Last time I explained how David Foster Wallace’s approach to the politics of language in “Authority and American Usage” ended up backing him into a reactionary position. That essay can’t be considered a representative example, though, because Wallace obviously had some personal issues re: grammar snobbery that contributed to the muddling of his argument there. Once is an anomaly, twice is a coincidence, but three times is a pattern. Today I’m going to argue that the issues with Wallace’s general approach were not coincidental.

I’m somewhat embarrassed to admit that I was unable to finalize my argument here until encountering this Reddit thread, which is the only place I’ve seen DFW’s work described as reactionary. That was the last piece of the puzzle. I had been unable to make sense of Wallace’s consistent errors because I was assuming that he was “on my side.” Most discussion of Wallace seems to suffer from this same problem. People assume he was a good guy who was doing his best but had some problems, when in fact he was fundamentally misguided. Taking a genuinely adversarial approach to his work clarified everything for me, but more than that, I think this approach actually makes his work much more valuable. At the very least, it’s pretty clear by now that the DFW-as-self-help-guru approach is a dead end, so we ought to try something new.

In summary, Wallace’s general approach to political issues in his nonfiction writing was:

  1. Generalize and simplify the issue by imposing a commonly understood framework on it (e.g. liberals vs. conservatives, authority vs. anarchy, irony vs. sincerity). This results in both an overly broad approach and a dismissal of radical opinions, even when they’re directly evident in the subject matter. It also makes readers comfortable by allowing them to start from a framework that they already take for granted.
  2. Intellectualize the issue by bringing in as many ideas as possible, but fail to draw a strong conclusion, or even any conclusion at all. This confuses the issue and makes it look like there’s no real solution. Again, this makes readers feel comfortable, because Wallace isn’t “pretending like he has the answer,” and his writing doesn’t push anyone into making any real commitments.
  3. Fall back on a basic reactionary position, usually either traditional authority or individualistic who’s-to-say-ism. Step 2 makes this seem like the only possible option, and Step 1 allows this conclusion to seem much more broadly applicable than would be warranted even if it were justified.

There’s one very important tactic that Wallace uses constantly to support both steps 1 and 2, which is Both Sides-ing. This is basically the argument to moderation, but used to denigrate rather than support a position. Rather than arguing that a position must be right because it is moderate (Wallace never gets around to actually arguing in favor of any position), Both Sides-ing simply argues that anyone with an “extreme” position on either side of an issue must be wrong simply because they’re extreme. Ironically, this is the favorite tactic of precisely the type of modern thinkers that Wallace was deeply opposed to: those who believe that ridiculing a position is the same as arguing against it (namely, South Park Republicans). Naturally, Both Sides-ing is an inherently conservative tactic, since it denigrates any position that might actually change something.

To be clear, none of this has anything to do with what Wallace’s explicit political opinions were. The issue is not that he was secretly a conservative and was therefore a bad person or whatever. The issue is precisely that he was trying to be a good liberal, but his approach turned him around so consistently that he ended up defending banality.

For Your Consideration

The easiest place to start is “Consider the Lobster,” which tackles a relatively straightforward moral issue and leaves little room for complications. The ostensible purpose of this article is to cover the Maine Lobster Festival for Gourmet magazine, but Wallace, commendably, uses the opportunity to question the morality of meat eating. As Wallace puts it, the question is pretty simple: “is it all right to boil a sentient creature alive just for our own gustatory pleasure?” The problem is that the answer to this question as phrased is pretty obviously “no,” but Wallace spends the entire essay avoiding this conclusion. (Notably, he starts backtracking immediately, before even beginning to consider the actual issue: “Is the previous question irksomely PC or sentimental?” We certainly wouldn’t want to be sentimental about a moral issue.)

Wallace goes into a lot of detail about lobster biology, which isn’t totally irrelevant. The question of whether a creature has moral status actually is dependent on things like whether it feels pain, and grounding the issue in practical reality is much more effective than appealing to vague principles about Mother Earth’s Creatures or whatever. He also brings in the philosophical backing of Peter Singer’s famous utilitarian argument against meat-eating in Animal Liberation. The thing is, as Wallace frames the issue, the only argument on the side of meat-eating is one specific variety of pleasure, which means animals win as long as they have any moral status whatsoever, and it doesn’t take long to get to the conclusion that lobsters do. Furthermore, Wallace draws a contrast between the way we deal with “uncute” animals like lobsters as opposed to cows (he points out that, in contrast to the actually-existing World’s Largest Lobster Cooker, the idea of some state hosting the World’s Largest Killing Floor is totally implausible), demonstrating that we implicitly give animals like cows much greater moral status. But he totally fails to bring out the obvious implication of this: if eating lobsters is at all questionable, then eating cows is almost certainly immoral.

The other important thing Wallace does here is to Both Sides the issue by presenting both the festival’s glib sponsors and PETA as ideologues who are refusing to really consider the issue the way Wallace is. While it is trivially easy to make PETA look like a bunch of clowns, Wallace never actually presents a counterargument to the claim that killing things for no reason other than your own enjoyment is immoral. Wallace’s dismissive attitude towards PETA is indicative of a very basic lack of intellectual seriousness. Arguments are right or wrong on their own merits, regardless of how “fanatical” the people espousing them are. This is part of what it means to actually take a side: committing to the issue itself regardless of what a bunch of jackasses it ends up allying you with.

In a footnote, Wallace conveniently provides a perfect summary of the way in which he uses intellectualism to advance a radically anti-intellectual conclusion:

“Suffice it to say that both the scientific and philosophical arguments on either side of the animal-suffering issue are involved, abstruse, technical, often informed by self-interest or ideology, and in the end so totally inconclusive that as a practical matter, in the kitchen or restaurant, it all still seems to come down to individual conscience, going with (no pun) your gut.”

Sure, you could try to actually think about the issue, but everything is just sooo complicated, plus all those so-called “scientists” are just self-interested ideologues anyway, so you might as well just do whatever you feel like. This is literally the reactionary impulse dressed up as insight, literally Bill O’Reilly in a lab coat.

And so, Wallace ends with little more than a shrug of his shoulders. After amassing all the information necessary to draw a real conclusion, Wallace remains “concerned not to come off as shrill or preachy when what [he really is] is more like confused” and throws the question to his readers as an open issue. I mentioned that it was commendable for Wallace to have broached the issue in this forum, but his good intentions are completely undone by this conclusion. The piece as a whole allows its readers to feel like they’ve deeply considered all the facets of the issue, while in effect giving them the license to continue doing whatever they feel like, because how can any of us know what’s really right? If someone as smart as DFW can’t figure it out, wouldn’t it just be arrogant for the rest of us to pretend we have an answer? A commitment to continue considering the issue might not be such a bad conclusion in a different type of society, but in the world we actually live in, the slaughterhouses are going to keep churning out death until we actually do something about it. They aren’t going to wait around while we ponder difficult moral conundrums. Passivity is acquiescence.

(Not so incidentally, while Wallace does bring up the issue of factory farming and how it makes meat-eating immoral even if killing animals is not immoral, he never – and by “never” I mean in the entirety of his written output as far as I’m aware – actually brings up the issue of capitalism. He does talk about easier things like “commercialism” or “consumerism,” but I don’t recall him ever using the real c-word. I’m actually not sure what to make of this. It’s implausible that he was unfamiliar with Marx et al., but it’s also implausible that someone so concerned about the problem of meaninglessness in American culture could so thoroughly ignore the obvious culprit.)

All Aboard the Straight Talk Express

The other obvious place to go for Wallace’s political approach is his one essay actually addressing electoral politics: “Up, Simba,” his account of John McCain’s 2000 primary campaign. In retrospect, this essay is terribly easy to make fun of, now that McCain has actually had his shot and blown it about as hard as humanly possible. But despite his sympathy for McCain, the point of Wallace’s essay is not that he’s necessarily a great guy who should be president. It’s about what we actually want out of politics, and why we’re not getting it.

Unfortunately, what Wallace actually wants out of politics doesn’t seem to be anything that would actually help anyone. McCain’s policies are casually rattled through at the beginning; one might expect that the point of Wallace’s focus on McCain would be to ask how exactly a supposedly honorable straight-talking kind of guy arrives at these sorts of positions, but in fact policy never comes up again. Instead we’re treated to a whirlwind tour of the McCain campaign’s buses and ad strategies and hotel arrangements, with constant condescending lectures from Wallace directed at those Young Voters who, for some reason, don’t care about politics.

The deep irony of this essay is that, for all his finger-wagging, Wallace is actually behind his “apathetic” targets. Wallace thinks the problem is that no politicians are honest anymore, that the government is “corrupt,” that there’s nothing to believe in. Wallace is afraid he’s “too cynical,” when in reality, he has only scratched the surface. Just as the problem with capitalism is not the morality of its participants, but its inherent structure, so too is the problem with the ruling class not that it consists of criminals and morons, but that it is a ruling class. This is what makes the article’s obsessive detailing of the shenanigans of the McCain campaign so deeply ridiculous. Wallace is concerned that we don’t care about politics anymore because it’s all just a bunch of clowns, and his response to this problem is to give us a tour of the circus.

It’s actually worse than that, though, thanks to the fact that John McCain is one of the few politicians who has actually been through some serious shit. Wallace uses his considerable skill as a writer to detail McCain’s harrowing experience as a POW in Vietnam, and it’s impossible not to feel some real sympathy here. The problem is that, in doing this, Wallace isn’t actually leaving the circus. As mentioned, the question of how McCain’s personal experiences led to his largely revolting political positions could have been really interesting. But for Wallace, the point is merely that McCain has some sort of abstract moral authority that we should respect for some unspecified reason. The problem here isn’t hard to see: McCain is among the biggest warmongers in the U.S. government, which is really saying something. How exactly does the experience of having been a POW legitimize advancing the sort of policy that creates POWs? Not only does Wallace not have an answer, he doesn’t even seem to realize there’s a question here.

Wallace summarizes his own political outlook as follows:

“Even in AD 2000, who among us is so cynical that he doesn’t have some good old corny American hope way down deep in his heart, lying dormant like a spinster’s ardor, not dead but just waiting for the right guy to give it to?”

Okay, “lying dormant like a spinster’s ardor” is some fucking killer writing, but as a simile, it’s exactly as wrong as the general argument. The spinster rejects romance not because she’s loveless, but because she refuses to play a game that she knows to be rigged against her. In the same way, Wallace’s Young Voters reject electoral politics not because they’re “jaded,” but because they have accurately assessed the situation and concluded that voting will not get them what they need. Wallace, meanwhile, is unable to conceive of any political progress that does not involve electing a Big Important Man to be the boss of America.

The thing is, despite all of his hand-wringing, Wallace is more than willing to blithely dismiss people who actually do believe in things:

“There are, of course, some groups of Young Voters who are way, way into modern politics. There’s Rowdy Ralph Reed’s far-Right Christians for one, and then out at the other end of the spectrum there’s ACT UP and the sensitive men and angry womyn of the PC Left. It is interesting, though, that what gives these small fringe blocs such disproportionate power is the simple failure of most mainstream Young Voters to get off their ass and vote.”

As a sensitive male leftist, I now regret never having had the opportunity to tell Wallace to go fuck himself. Seriously though, this is textbook Both Sides-ism: if only the Normal People would vote, we could get rid of all those crazy extremists and just have a nice, normal society where nobody ever complained about anything or made anyone else uncomfortable. (Also, in what universe do feminists have “disproportionate” political power – that is, in the direction that Wallace is implying?)

(Relatedly, Wallace refers to Rolling Stone’s politics as “ur-liberal,” which, first of all, is not what that prefix means, and second, fucking lol.)

This is also a great example of Wallace’s habit of leaning on tired tropes when he has no idea what he’s talking about. By 2000, third-wave feminism had happened, and the term “womyn” was way the hell out of vogue. The name “riot grrrl” was actually an explicit parody of the idea that spelling a word differently was politically meaningful. Also, ACT UP? Was about dealing with fucking AIDS. Which is really what the problem is here: we’re looking at a thought process that, while trying to find a way to make politics meaningful again, sees preventing people from dying as a “fringe” project of the “PC Left.”

Only You Can Prevent Forest Fires

Interestingly, Wallace’s dismissiveness of AIDS as a political issue has a precedent: the 1996 article “Back in New Fire” (I’m not seeing this one online). It appears in the posthumous collection Both Flesh and Not, and is, from a moral perspective, the worst thing Wallace ever wrote (Both Flesh and Not is actually valuable specifically because it contains much of Wallace’s worst writing). It is literally a defense of AIDS as a new source of meaning after the alleged vapidity of the sexual revolution.

I don’t think I need to explain what’s wrong with this; moreover, the extent to which the existence of this essay means that Wallace was a bad person is irrelevant. What is relevant is the fact that this argument, which is among the worst possible arguments that a person can make, is a direct result of the flaws in Wallace’s approach that we’ve been discussing. It starts by taking as a given the common framing of the sexual revolution as something that “cheapened” sexuality by making it too “easy” (hint: this is wrong), then addresses the issue on an intellectual level that ignores both the fact that people were and are fucking dying and the fact that their deaths were a political choice. The AIDS epidemic was ignored for the very basic reason that it primarily affected gay, black, and poor people. If you’re looking for meaning in AIDS, this is it: it’s a disgustingly vivid demonstration of how this society of ours actually works. But Wallace’s purely philosophical approach to the issue makes him totally blind to this important truth, so instead he winds up arguing that “the casual knights of my own bland generation [ed: speak for yourself] might well come to regard AIDS as blessing, a gift perhaps bestowed by nature to restore some critical balance.”

What’s really interesting/sad about this essay is that Wallace was close to getting it right. Regarding the virus itself, he points out that “natural things just are; the only good and bad things are people’s various choices in the face of what is.” Exactly. People, both those with power and those without, did make choices about what to do about AIDS, and those choices say something very important about the very issues that so concerned Wallace: what sense we can make of the sort of society we live in, and what we ought to do about it.

Love Me, I’m a Liberal

This dismissiveness of basic political issues in favor of lofty intellectual meandering was in fact one of Wallace’s running themes. Wallace never met an instance of racism or sexism that he couldn’t reanalyze as something that didn’t have to make anybody uncomfortable. When considered as a pattern, this actually starts to get deeply annoying, so you’ll have to forgive me for having a little fun here.

In “Host,” Wallace profiles right-wing radio host John Ziegler, whose primary political opinion seems to be that he hates OJ Simpson. Wallace details how Ziegler has been fired from several jobs due to his inability to refrain from publicly deploying every white bigot’s favorite word at every opportunity, and then analyzes the situation as follows: “John Ziegler does not appear to be a racist as ‘racist’ is generally understood. What he is is more like very, very insensitive,” raising the perplexing question of what exactly Wallace thought the word “racist” meant. Similarly, regarding Ziegler’s attitude toward women: “Mr. Z is consistently cruel, both on and off the air, in his remarks about women. He seems unaware of it. There’s no clear way to explain why [ed: ?], but one senses that his mother’s death hurt him very deeply [ed: ???].”

Wallace’s essay on the porn industry, “Big Red Son,” is potentially the record holder for Longest Sustained What About The Menz-ing.

“Feminists of all different stripe oppose the adult industry for reasons having to do with pornography’s putative effect on women. Their arguments are well-known and in some respects persuasive. But certain antiporn arguments in the 1990s are now centered on adult entertainment’s alleged effects on the men who consume it.”

Once again, Wallace doesn’t know the facts and relies on a soundbite understanding of the issue. “Feminists of all different stripe” is exactly incorrect: pro-porn feminism is specifically a thing, and at the time this essay was written (again, Wallace is unaware that third-wave feminism happened), it was probably more popular than the alternative. Also, second-wave anti-porn arguments were very much about porn’s effect on men, the effect being that it caused them to beat and rape women. But this isn’t the sort of thing that Wallace is interested in. He opens the essay describing men who have castrated themselves, allegedly because “their sexual urges had become a source of intolerable conflict and anxiety.” Some people might be interested in preventing rape and murder, but if a man somewhere is confused and anxious, Wallace is all over it.

One might expect that Wallace’s critical reading of John Updike, “Certainly The End of Something or Other, One Would Sort of Have to Think,” would be a great place for him to finally get around to some motherfucking feminism, but Wallace’s approach is nothing if not consistent. One odd thing about this essay is that people seem to identify the quotation “just a penis with a thesaurus” (referring to Updike, obviously) with Wallace, when in fact Wallace presents this and other dismissive quotes specifically to distance himself from them. What he says about them is the following:

“There are, of course, some obvious explanations for part of this dislike – jealousy, iconoclasm, PC backlash, and the fact that many of our parents revere Updike and it’s easy to revile what your parents revere. But I think the deep reason so many of my generation dislike Updike and the other GMNs [ed: Great Male Narcissists – note how Wallace assumes his own conclusion by using this term] has to do with these writers’ radical self-absorption, and with their uncritical celebration of this self-absorption both in themselves and in their characters.”

Sure, you might think you dislike Updike because he’s a blatant misogynist, but you’re just jealous, or maybe you have daddy issues. I, however, know the real, deep reason why you think you feel that way.

Given how blatantly insulting stuff like this is, it’s clear that a lot of what people see in Wallace is what they want to see. Of course, Wallace’s refusal to ever take a hard stand on anything makes this easy, but it doesn’t account for the motivation. There seems to be a very specific need that people really wish Wallace was filling.

This next one’s mostly for fun. This is an anecdote rather than an argument, but I’m counting it because it’s both representative and hilarious. It’s from “The (As It Were) Seminal Importance of Terminator 2”:

“The fact that what Skynet is attempting is in effect a retroactive abortion, together with the fact that “terminate a pregnancy” is a pretty well-known euphemism, led the female [ed: really?] I first saw the movie with in 1984 to claim, over coffee and pie afterward, that The Terminator was actually one long pro-choice allegory, which I said I thought was not w/o merit but maybe a bit too simplistic to do the movie real justice, which led to kind of an unpleasant row.”

It’s not clear what level of self-awareness Wallace is bringing to this story, but either way: a woman tries to engage him on a feminist issue, and he tells her that she’s being “simplistic” and then explains what the issue is really about from his own much deeper and more informed perspective, which does “real justice” to this action movie with Schwarzenegger in it. Somehow, this ends badly.

Okay, one last example. This one’s important because it’s completely unambiguous: Wallace takes a crystal-clear issue and totally fumbles it. It’s an aside from “Authority and American Usage” which seems to have been cut from the version that was published in Harper’s (as “Tense Present”) and appears therefore to not be online in plaintext form, so I guess I’m going to have to type out the whole fucking thing.

“In this reviewer’s opinion, the only really coherent position on the abortion issue is one that is both Pro-Life and Pro-Choice.”

(Yeah, okay, I could just stop here, but I’ll be professional about this.)

“Argument: as of 4 March 1999, the question of defining human life in utero is hopelessly vexed. That is, given our best present medical and philosophical understandings of what makes something not just a living organism but a person, there is no way to establish at just what point during gestation a fertilized ovum becomes a human being. This conundrum, together with the basically inarguable soundness of the principle ‘When in irresolvable doubt, about whether something is a human being or not, it is better not to kill it,’ appears to me to require any reasonable American to be Pro-Life. At the same time, however, the principle ‘When in irresolvable doubt about something, I have neither the legal nor the moral right to tell another person what to do about it, especially if that person feels that s/he is not in doubt’ is an unassailable part of the Democratic pact we Americans all make with one another, a pact in which each adult citizen gets to be an autonomous moral agent; and this principle appears to me to require any reasonable American to be Pro-Choice.

This reviewer is thus, as a private citizen and an autonomous agent, both Pro-Life and Pro-Choice. It is not an easy or comfortable position to maintain. Every time someone I know decides to terminate a pregnancy, I am required to believe simultaneously that she is doing the wrong thing and that she has every right to do it. Plus, of course, I have to believe that a Pro-Life + Pro-Choice stance is the only really coherent one and to restrain myself from trying to force that position on other people whose ideological or religious convictions seem (to me) to override reason and yield a (in my opinion) wacko dogmatic position. This restraint has to be maintained even when somebody’s (to me) wacko dogmatic position appears (to me) to reject the very Democratic tolerance that is keeping me from trying to force my position on him/her; it requires me not to press or argue or retaliate even when somebody calls me Satan’s Minion or Just Another Shithead Male, which forbearance represents the really outer and tooth-grinding limits of my own personal Democratic Spirit.

Wacko name-calling notwithstanding, I have encountered only one serious objection to this Pro-Life + Pro-Choice position. But it’s a powerful objection. It concerns not my position per se but certain facts about me, the person who’s developed and maintained it. If this sounds to you both murky and extremely remote from anything having to do with American usage, I promise that it becomes almost excruciatingly clear and relevant below.”

Does it ever. So let’s ignore the fact that Wallace is throwing himself a spectacular pity party here and – you know what, on second thought, let’s not. Wallace is fretting and sobbing and wringing his hands over the fact that sometimes people are mean to him in arguments about his objectively stupid position. Meanwhile, women are fucking dying from a lack of reproductive health care perpetuated by a tiny minority of zealots who have devoted their lives to a psychotic combination of fear and fetishization of vaginas.

And Wallace’s argument really is objectively stupid. The abortion debate has fuckall to do with whether you think abortion is a good thing or not. The idea that some people are actually in favor of abortions as such is literally an Onion article. The debate is about whether abortion ought to be available, and there’s no middle ground on that. It’s either practically possible to get an abortion, or it’s not. Based on Wallace’s description of his own opinions, it seems he wants abortion to be available but would always advise against getting one. This position is unambiguously pro-choice.

I really don’t think I’ve ever seen a better example of someone tying themselves up in knots by over-intellectualizing what is actually a very simple issue. Furthermore, Wallace is, once again, unaware of the relevant facts. There’s actually a rather famous essay arguing that abortion is morally permissible even if the abortee is assumed to be 100% human. But for all of Wallace’s concern about the issue, he never actually bothered to engage the relevant arguments. Working the whole thing out in his head was good enough for him.

The easy conclusion here is that Wallace was “too intellectual” and ignored the facts on the ground, which is partially true but not really an explanation. An intellectual approach is entirely compatible with the drawing of strong conclusions. For example, Peter Singer, whom Wallace cites in “Consider the Lobster,” makes an unequivocal moral utilitarian argument against meat-eating with disturbingly broad implications that remain controversial. Judith Jarvis Thomson’s essay on abortion, mentioned just above, is another example of using intellectual argument to support a radical position. Furthermore, Wallace was actually quite attentive to the facts on the ground; one of the things his work is known for is the way he vacuums up as much practical detail as possible to feed into his arguments. The problem is that he was often looking at the wrong facts; that is, facts that, correct or not, weren’t relevant.

As is hopefully obvious by now, all of this stuff isn’t just Wallace slipping up. The systematic errors in his approach were consistent, they had a cause, and that cause was Wallace’s tragic flaw.

(Part 3)

Tense future

As a person who reads things on the internet, you may or may not be aware that David Foster Wallace wrote a thing about grammar one time. The linguistic aspects of the piece have gotten their share of attention, on account of the fact that people just freak the fuck out about language for whatever reason, but Wallace’s fundamental claim is that language is inherently political; as such, the linguistic argument he makes here is also a political argument. Note in particular that, as published in Consider the Lobster, the title of this essay is “Authority and American Usage.” The important word there is “Authority.”

Structurally, the essay, which is ostensibly a review of a usage guide written by Bryan Garner, begins with an overview of the relevant linguistic issues, transitions into a political analogy for those issues, and then moves back into linguistics to discuss what Wallace feels is the importance of Garner’s book. The political analogy here is important because it means that Wallace’s conclusion about Garner is simultaneously a political argument. The purpose of this post is to explain what this argument is and why it’s wrong.

To start with, there are a number of factual and argumentative errors in Wallace’s essay that need to be discussed insofar as they inform his political analogy. Wallace starts by dividing language-carers up into “Descriptivists” and “Prescriptivists” (he consistently capitalizes both terms to make them seem like real affiliations that he’s not making up, so I’m going to be just as much of a dick and consistently put them in scare quotes). The immediate problem with this is that neither of these groups actually exists. “Descriptivists” are actually “linguists,” that is, people who study language and try to figure out what’s going on with it. The fact that they take a “descriptive” approach is a matter of necessity, not ideology; it’s just kind of how you have to approach things if you want to learn about them.

“Prescriptivists,” meanwhile, don’t exist for the opposite reason: they’re just people who have opinions about language; there isn’t anything specific that unites them. Everyone has opinions on language; a “Prescriptivist” is just anyone with enough privilege to get their opinion in the paper.

Wallace frames “Descriptivism” as a “rejection of conventional usage rules in English,” but this can’t actually be the case, since language obviously had to exist in studiable form before people could start applying explicit rules to it. In fact, the situation is the opposite of what Wallace claims: it’s prescriptive rules that only make sense in the context of an already-existing language; if people weren’t already doing things “wrong,” there wouldn’t be anything to complain about.

And obviously no language is ever created out of “conventional usage rules” in the first place, despite Wallace actually claiming that “language was invented to serve certain very specific purposes.” I’m going to give him the benefit of the doubt here and assume this is a convenient shorthand for “language arose in response to certain necessities” (so, uh, not that much of a shorthand, I guess), but it still indicates a fundamentally flawed way of looking at the issue.

Wallace’s claims here are ultimately based on the idea that the publication of Webster’s Third in 1961 was “the Fort Sumter of the contemporary usage wars,” due to the fact that its introduction set forth the “basic edicts” of “Descriptivism” for the first time. Nothing about this is correct. As this article explains, Wallace’s facts here are wrong in every important way. First, the text in question was not published in Webster’s Third. Second, it was not written by Philip Gove, the editor of Webster’s Third; it was quoted by him from the National Council of Teachers of English. Third, these were not new principles; they were endorsed by the Council precisely because they represented an existing consensus. Thus, it is not true that “Descriptivism” emerged as an “attack” on traditional grammar; to the extent that it qualifies as an ideology at all, it was merely an academic consensus that gradually spread out to influence society in general.

Wallace analogizes “Descriptivists” to political liberals and “Prescriptivists” to political conservatives, which is where the above errors start to become significant. His claim that “Descriptivism” “quickly and thoroughly took over US culture” is, by analogy, equivalent to the familiar conservative argument that liberalism is a recent phenomenon that emerged to attack traditional American values. In fact, Wallace makes this analogy explicit: he claims that “the ideological principles that informed the original Descriptivist revolution” were “the rejections of traditional authority (born of Vietnam) and of traditional inequality (born of the civil rights movement)” (bonus curmudgeon points: he says this in the middle of a complaint about “Political Correctness,” again, capitalized).

The political argument here is wrong in precisely the same way as the analogous linguistic argument. Power accumulation in human societies is a natural phenomenon; it’s only after a particular power structure has been established that it gets framed as “traditional” for the purpose of justification. Furthermore, as long as there’s been arbitrary authority, there’s been resistance. The concept of a “traditional” past that allows the struggle for justice to be framed as a recent aberration rather than a fundamental part of human history is a myth propagated by the ruling class, for obvious reasons.

Having established his framework, Wallace spends quite a lot of time debunking a phantom version of “Descriptivism” that allegedly claims that literally every utterance made by any speaker in any circumstances is 100% correct English, i.e. “there are no rules.” Nobody actually believes this, yet Wallace still feels the need to make arguments like “you can’t actually observe and record every last bit of every last native speaker’s ‘language behavior’.” No shit. Isn’t it pretty obvious that no one is actually trying to do this? General advice: if you feel the need to make an extremely obvious argument like this, you probably don’t understand the position you’re arguing against.

Again, this is analogous to the conservative talking point that anyone against any kind of existing authority is a radical who just wants to destroy society. I should probably clarify here that I’m not arguing that Wallace is himself a political conservative; he’s obviously not, and he implicitly allies himself with liberalism in this essay. The issue is precisely that it’s odd that his argument so closely mirrors the basic mythology of U.S. conservatives. As we’ll see, the problem is not Wallace’s explicit opinions, it’s his approach.

Wallace eventually gets around to the “Descriptivist” argument that’s actually relevant, which is that language consists of a set of actual rules which all fluent speakers instinctively follow (and therefore don’t require explicit elucidation from newspaper columnists), plus a set (actually several sets) of optional rules based on dialect, setting, etc., which may be followed or not without general loss of comprehensibility. Nobody is going to actually misunderstand you if you end a sentence with a preposition. The rest of the essay is a refutation of this argument; specifically, Wallace’s claim is that the “arbitrary” rules that “Prescriptivists” propound are not elitist window-dressing but are actually super important.

I’ve seen some people claiming that Wallace’s argument in this article actually ends up supporting “Descriptivism,” so it’s worth being explicit about his position here. He refers to the first (fake) version of “Descriptivism” as “Methodological Descriptivism” and the second (real) version of “Descriptivism” as “Philosophical Descriptivism,” and then claims that “This argument [“Philosophical Descriptivism”] is not the barrel of drugged trout that Methodological Descriptivism was [ed: yeah, arguments that you make up for the purpose of discrediting your opponents tend to be pretty weak], but it’s still vulnerable to objections,” before moving into “a more serious rejoinder to Philosophical Descriptivism.” The point of all this is that Wallace sets up “Philosophical Descriptivism” as the good kind of “Descriptivism” in order to argue against it. His ultimate conclusion about Garner’s usage guide is that it’s great because it makes a good argument for “Prescriptivism.” For example, when Wallace claims that Garner is “cunning” because he “likes to take bits of Descriptivist rhetoric and use them for very different ends,” he’s making it pretty clear whose side he’s on. Wallace’s explicit purpose in this essay is to defend “Prescriptivism.”

Now, as part of arguing against the good kind of “Descriptivism,” Wallace introduces a very interesting/bizarre argument involving – wait for it – Wittgenstein (if you were hoping that this discussion of David Foster Wallace was not going to end up involving Wittgenstein, I sincerely apologize). This is a multi-stage argument that’s going to take a little bit of work to get through, but it’s important because, as mentioned, the problem with this essay is Wallace’s approach, and his path through this argument makes that problem particularly clear.

The argument actually starts off rather insultingly. Wallace buries it in a footnote, claiming that it’s “lengthy and involved and rather, umm, dense,” and suggesting that “you’d maybe be better off simply granting the truth of the proposition.” As fucking though. This sort of fake homeyness where Wallace tries to act all casual and friendly while also implying that he’s so much smarter than you that you couldn’t possibly follow his argument is by far his most annoying characteristic.

And in fact, the argument isn’t hard to follow at all. It begins with a discussion of the fact that each of us experiences only our own mental states, and that we have no access to anyone else’s. For example, I’m aware of the particular sensation I have upon seeing the color red, and you’re aware of the particular sensation you have in response to the same stimulus, but between us, we have no way to confirm that we’re actually having the same sensation. There is no evidence that can be collected for or against the proposition that the sensation I have upon seeing red is the same sensation that you have upon seeing green. Oddly, Wallace presents this argument as the ridiculous fantasy of a stoned teenager, when in fact it’s just obviously correct. And it’s not merely correct, it’s part of one of the major problems of philosophy: the hard problem of consciousness. Wallace claims that this is a “solipsistic conceit,” but it’s no such thing; far from denying that other minds exist, it relies on the assumption that other people have mental states comparable to one’s own.

What makes all this sloppiness on Wallace’s part especially odd is that it has nothing to do with his actual argument, which is Wittgenstein’s argument against private language. He claims that the idea of “private colors” is the same as the idea of “private language,” but that’s clearly wrong. The problem of private experience is that we can’t verify each other’s experiences, but we obviously can verify each other’s language choices; that’s what the entire essay is about.

And it’s also what the argument against private language is about. Now, I’ll admit that I haven’t read Wittgenstein, and I’m not claiming to know all the issues and implications here, but I think I can get through the basic argument without butchering it too badly. Basically, the only way we have to verify that our language usage is correct is to test it against other people. For example, if you and I are listening to a piece of music and I start talking about “arpeggios”, and you respond by talking about the same part of the song that I was thinking about, then I know that we’re talking about the same thing. If I were just talking to myself, I could pick an arbitrary aspect of the music and call it an “arpeggio,” but I would have no basis to call this a correct definition. Specifically, if I came up with a set of criteria that defined an arpeggio, and then picked part of a song and tried to decide whether or not it contained arpeggios, it would be impossible for me to be wrong regardless of which way I decided.

The only thing that can “disprove” my definition is another person. For example, if I start talking about the arpeggios in a song with someone else, and they say “what the hell are you talking about, that’s not an arpeggio, it’s a trill,” then maybe I’m using the wrong word. Of course, it’s also possible for the other person to be wrong, which we could find out by verifying it with more people and eventually achieving a common consensus about what the word refers to. Thus, language, as they say, takes a village.

And this is precisely where Wallace goes with this argument: “if the rules can’t be subjective, and they’re not actually ‘out there’ floating around in some kind of metaphysical hyperreality . . . then community consensus is really the only plausible option left.”

Now, this is starting to seem pretty strange, right? Wallace claims that “community consensus” is the only way that correctness in language can be determined, but he presents this as a refutation of “Descriptivism,” despite the fact that the idea that consensus rather than authority is what determines correctness is precisely what “Descriptivism” is. This, presumably, is what has fooled some people into thinking that Wallace is making a “Descriptivist” argument here. But, as mentioned, Wallace is arguing in favor of “Prescriptivism” as embodied by the particular usage guide that he’s favorably reviewing. There’s only one way to reconcile this, and it’s fairly disturbing: Wallace thinks that authority is the best way to establish consensus.

This argument obviously has implications that go far beyond whether splitting infinitives is cool or not, which is where the political aspect of all this comes in. But before getting into that, I want to make it absolutely clear that this is in fact the argument that Wallace is making, since at first blush it seems kind of implausible, not to mention borderline fascist.

Wallace’s ultimate conclusion in this essay is that Garner’s usage guide is awesome because: “Garner structures his judgements very carefully to avoid . . . elitism and anality.” “His personality is oddly effaced, neutralized.” “Garner’s lexical persona kept me from ever asking where the guy was coming from or what particular agendas or ideologies were informing [his judgements].” “Garner, in other words, casts himself as an authority not in an autocratic sense but in a technocratic sense. And the technocrat is not only a thoroughly modern and palatable image of authority but also immune to the charges of elitism/classism that have hobbled traditional Prescriptivism.”

Question: why is any of this a good thing? Why is it good for Garner’s persona to be “effaced” and “palatable”? Why is Wallace happy that a person with an unknown agenda is trying to tell him what to do? Why is it a good thing that Garner is immune to charges of elitism? What if such charges would be correct; what if he actually is an elitist? What if traditional “Prescriptivism” was justly hobbled?

The reason these things are good for Wallace is that they prevent Garner from being undermined as an authority. The question of whether or not Garner deserves to be treated as an authority is, for Wallace, irrelevant. Wallace explicitly celebrates Garner’s book as “basically a rhetorical accomplishment,” meaning that its actual correctness is beside the point. This is the core of Wallace’s support of “Prescriptivism”: the conviction that there must be an authority, no matter what.

(Incidentally, Wallace recommends Garner as a “technocrat” on the basis that he’s a lawyer. Is there any dystopia more nightmarish than a technocracy run by lawyers?)

Wallace claims that authority must be earned, but he doesn’t back that shit up. While he does allow that “Prescriptivists” can be wrong, his only actual objection to “Prescriptivism” as an enterprise is that it has a bad public image, hence his celebration of Garner’s purely rhetorical accomplishment. The only thing that “Prescriptivism” ever needed was an image enhancement, and that’s what Garner has supposedly provided.

To be clear, there’s nothing wrong with giving advice on language use. In fact, I’ll give you some right now: when using a pronoun to refer to a person of unspecified gender, you should use “they” in all situations where it’s not completely confusing, and use “she” otherwise. The reason for using “they” is that it already has some traction as a gender-neutral singular pronoun, and since it’s basically impossible to artificially introduce a new word as a basic part of speech like this, “they” is our only shot at a gender-neutral singular pronoun. The reason for defaulting to “she” is that literally everything else in society is male-default, so this is one easy thing you can do to help turn the tide, even if it’s a purely symbolic gesture. You can’t just always use “she,” though; if the gender of the referent is specified, then you need to use the appropriate pronouns. Even if you’re a gender abolitionist, denying the gender identity of actually existing people is actively harmful.

What you’ll notice about the above advice is that I have an agenda. If you disagree with my political goals (e.g. you don’t support absolute gender equality), or you agree with my goals but think my tactics are misguided (e.g. you don’t think gender-neutral language has any practical effect on equality, or you think you have a better way of going about it), then you have reason to disregard my advice. If, on the other hand, I had presented my advice as merely being a “rule” which you are obligated to follow in order to write “correctly,” then your only grounds to object would be “nuh-uh.”

Thus, by supporting this second type of advice, and by supporting Garner’s “invisible” persona, what Wallace is actually supporting is arbitrary authority: correctness without values.

Let’s back up a bit and look at how Wallace gets here. First, he establishes an oversimplified framework for the basic issue using commonplace ideas that everyone’s already comfortable with (“Descriptivists” = “liberals” = “rules are bad” vs. “Prescriptivists” = “conservatives” = “rules are good”). This makes us feel like we know where we’re standing and prevents any deeper aspects of the issues involved from rising to the surface. As one example, the post-war equality movements contained a radical approach to language criticism that went well beyond merely rejecting prescriptive rules. A number of feminist dictionaries were produced that explicitly challenged the patriarchal assumptions embedded in everyday language. This is the sort of real political challenge that demands to be addressed directly, but Wallace is having none of that. He’d much rather survey the general landscape from 10,000 feet in the air.

Furthermore, Wallace actually does recognize that his liberal/conservative split is an oversimplification, but he fails to address the implications of this. Specifically, he claims that “Political Correctness” (there aren’t scare quotes big enough) is an example of “liberal Prescriptivism,” which is entirely correct. As in my example above, “Political Correctness” is “Prescriptivism” with an agenda. But Wallace goes on to merely issue the standard complaints that “Political Correctness” represents “a whole new kind of Language Police” (this is getting to be painful to type), without realizing what this means for his argument. What it means is that “Prescriptivism,” like any other type of statement about what’s “right” and “wrong,” is always based on a value system. Thus, the kind of disinterested “Prescriptivist” that Wallace sees in Garner cannot exist. Everyone works from an ideology, and Garner is no exception. The fact that Garner seems not to have an agenda is precisely the danger: an invisible ideology is one that cannot be challenged.

An important aspect of Wallace’s oversimplification is the way it allows him to pay lip service to inconvenient arguments without actually following through on their implications. Significantly, Wallace admits that “traditional English is conceived and perpetuated by Privileged WASP Males,” but note the cutesy way he phrases this, and the way he puts the admission that this claim “is in fact true” in a footnote, as though this were an extraneous point. In fact, it is the very heart of the issue at hand: since we know for a fact that existing authority is fundamentally oppressive, Wallace needs to do a damn sight better than recommending an authority on a merely rhetorical basis. This realization should have caused Wallace to rethink his entire argument, but presenting it as a side note allows him to glibly glide by.

Second, Wallace busts out his trademark intellectual fireworks by bringing up as many Big Ideas as he can, as fast as he can, which naturally results in all of them being underanalyzed and unconvincingly connected to the actual issue at hand. In the Wittgenstein example above, we saw Wallace segue between unconnected ideas without justification, fail to place those ideas in their proper philosophical context, and finally draw a conclusion entirely at odds with the arguments that were supposedly used to support it. Importantly, Wallace’s conclusion is the result of smuggling in an unstated ideological premise: that authority is required for establishing a consensus. This premise is precisely what Wallace needs to argue for here in order for his conclusion (that an “expert” usage guide is required to resolve the “usage crisis”) to hold, but his fancy footwork allows him to get away without even mentioning it.

Finally, having avoided the implications of the relevant arguments while sneaking in his own unconscious preferences, Wallace is free to drift down onto a comfortable conclusion. Sure, the issues of language may be “complexly political,” but all that’s really needed to resolve them is for a Reasonable Man to write a new usage guide that everyone can agree on. Isn’t it comforting to know that things really are that easy?

The political implications of all this should be pretty clear by now. Wallace’s proposed solution to the “authority crisis” created by the post-war equality movements is, by analogy, the same as his solution to the “usage crisis” created by “Descriptivism”: the reestablishment of a credible authority – any credible authority. This lends a rather dark irony to Wallace’s concluding statement that Garner’s approach is “about as Democratic as you’re going to get these days.”

Wallace is, of course, wrong. The solution to the “usage crisis” is for people like Wallace and Garner to get off their fucking high horses already. Which, recall, does not mean that they aren’t allowed to give advice. It means that they have to stop pretending that their preferences are rules and open themselves up to ideological challenges based on the values they hold that inform those preferences. Analogously, the solution to the “authority crisis” in politics is not to establish a kinder, gentler ruling class. The solution is to finish what’s been started.

(Part 2)