The wicked game

(Part 1)

(Part 2)

(Part 3)

As I recall, the first time I encountered David Foster Wallace’s work was the cruise ship essay. This seems to be a common starting point; it’s certainly one of his more accessible and funnier pieces. For me, though, it had a certain personal significance. I had recently been on a cruise myself (not of my own volition), and my precise feelings about the experience were: never again. It was terribly gratifying to find someone who was not only taking an axe to one of society’s more ridiculous sacred cows, but was doing so in a way that was both comprehensively intelligent and appealingly human.

I think this explains a lot of DFW’s appeal. Most of us are deeply uncomfortable with various aspects of our absurd society, but since most of these things are taken for granted, we can rarely express our feelings in a way that’s understandable to other people. In Wallace, we found someone who not only felt the same way, but was sharp and observant enough to express those feelings in a way that really brought the issues to life. It’s not hard to see why we yearn for a sort of intellectual-yet-human role model to help us navigate a confusing world. As it turns out, though, that person didn’t exist. We had to invent him.

I consider it a bit of a personal failing that it took me so long to figure out what Wallace’s problem was. Naturally, this would have been a lot more useful back when his mythologization was still a work-in-progress rather than a fait accompli. The truth is, I was so excited to find someone who seemed to be speaking to my concerns in an intelligent way that I failed to listen to my own feelings. The way that Wallace felt about his cruise was actually not at all the way that I felt about mine. The beginning was the end.

For Wallace, the problem with cruises is that they’re too good. Contrary to the typical American view of luxury as the goal of life, Wallace considers the experience of luxury to be an insidious form of nihilism – he pointedly highlights the cruise’s promise that patrons will finally be able to do “Absolutely Nothing.” If the struggle to manage and fulfill one’s desires is what constitutes the actual experience of life, then a situation in which all of one’s desires are met automatically without even any thought being involved is basically the same as having no desires, which is basically the same as not existing (if this is all sounding a bit Nietzschean, you’re going to have to hold your horses).

The basic problem with this is pretty obvious: not only are cruises not “too good,” they aren’t even good. Cruises suck. In addition to the fact that a cruise ship is basically just a crowded, extra-nausea-inducing hotel and the fact that the “entertainment” is all pathetic summer-camp-for-adults garbage, a cruise experience is fundamentally unenjoyable due to the way it smacks you in the face with its own exploitative nature. Maybe I’m a freak, but I don’t find the experience of being waited on to be any fun, particularly when it manifests itself as an army of servile brown people standing around every corner, sporting plastered-on smiles while waiting on pins and needles for the chance to be ordered around by some pompous Hawaiian-shirt-wearing tourist jackass.

The same effect is in play once the ship reaches one of its Remote Island Destinations. You get funneled off the ship directly into an ersatz strip mall full of chintzy tourist trash. The fact that this setup is so wildly incongruous with its location makes unavoidable the realization that it is there because of you, that the money you paid for the cruise is funding exploitation, that your presence on the island is white supremacy in action. The usual way to understand cruising is that it’s the closest middle-class people can get to being upper-class, but it’s actually more like the closest that alienated office workers can get to being imperialists.

In his typically annoying way, Wallace makes an incisive observation about this while also completely dismissing its importance:

“the ethnic makeup of the Nadir‘s crew is a melting-pot melange . . . it at first seems like there’s some basic Eurocentric caste system in force: waiters, bus-boys, beverage waitresses, sommeliers, casino dealers, entertainers and stewards seem mostly to be Aryans, while the porters and custodians and swabbies tend to be your swarthier types – Arabs and Filipinos, Cubans, West Indian blacks. But it turns out to be more complex than that”

no it doesn’t shut up stop talking. Christ. Racism is not a god damn intellectual puzzle for smart white people to thoughtfully pore over. It’s physical oppression, and you’d think that experiencing it in such close quarters would make that obvious. It did for me, anyway. Contra Wallace, the cruise experience did not make me feel pampered. It made me feel like I was on a plantation.


Double Bound

The reason it’s so difficult to figure out where DFW stands is that he pretty much never puts his foot down. He always leaves himself an out. For example, in an earlier post I accused him of advocating for a “kinder, gentler” ruling class. Not only does Wallace preemptively head off this accusation, he uses the exact same reference:

“Besides, the rise of Reagan/Bush/Gingrich showed that hypocritical nostalgia for a kinder, gentler, more Christian pseudo-past is no less susceptible to manipulation in the interests of corporate commercialism and PR image. Most of us will still take nihilism over neanderthalism.”

This is part of the argument in “E Unibus Pluram,” and he has to say this here because his anti-irony conclusion seems to lead pretty obviously to a re-adoption of traditional values. But note that this is not actually an argument, because Wallace obviously does not “take nihilism.” So, is Wallace merely talking about appearances, saying that the Reagan option would be appealing if it were presented better? Or is this rejection based on principles, and in favor of a third option? Given the essay’s conclusion, isn’t an accusation of “neanderthalism” precisely the sort of “risk” we might expect an “anti-rebel” to take? Meaning Wallace actually is in favor of this? He doesn’t say. All he does here is defend himself against the expected charge of political conservatism.

So, what I want to say now is that Wallace was “impossible to pin down,” but guess what:

“And make no mistake: irony tyrannizes us. The reason why our pervasive cultural irony is at once so powerful and so unsatisfying is that an ironist is impossible to pin down.”

Emphasis Wallace’s. Implying of course that Wallace himself is against this position and therefore can be pinned down. At this point, what I’m interested in is why Wallace constantly argues in this manner. Because these aren’t flukes: this sort of automatic backpedaling is an intrinsic part of his M.O., to the extent that it often occurs in the space of a single argument.

For example, part of “E Unibus Pluram” is devoted to debunking a utopian argument that asserts that improvements in technology will resolve the social problems with TV and turn it into a vector for liberation. It’s a transparently silly argument, and Wallace gives it the usual treatment. But then he produces the following paragraph:

“Oh God, I’ve just reread my criticisms of Gilder. That he is naive. That he is an ill-disguised apologist for corporate self-interest. That his book has commercials. That beneath its futuristic novelty it’s just the same old American same-old that got us into this televisual mess. That Gilder vastly underestimates the intractability of the mess. Its hopelessness. Our gullibility, fatigue, disgust. My attitude, reading Gilder, has been sardonic, aloof, depressed. I have tried to make his book look ridiculous (which it is, but still). My reading of Gilder is televisual. I am in the aura.”

Set aside whatever counterarguments you may be considering, and instead ask yourself: why did Wallace write this paragraph? If he actually thought this was a valid criticism of his argument, one expects that he would have, you know, fixed his argument before publishing it. It can only be that Wallace felt that he had to make his argument the way he did, and then do the best he could to stifle the “televisual” aspect of it with this disclaimer. But that’s obviously wrong; it was well within his abilities to have made a straightforward factual argument without ridiculing his target. Rather, then, this paragraph exists because Wallace wants to believe that this is the only possible way of doing things, that he can’t escape “the aura.”

This same dynamic occurs even more provocatively in Wallace’s essay on Dostoevsky. The theme of this essay is that Dostoevsky is an important role model for modern Americans due to the fact that his work is both highly artistic and deeply moral. Naturally, this argument is part of Wallace’s overall claim that we’re “too ironic” nowadays and we don’t know how to be “sincere” anymore.

It is in this context that Wallace writes the following:

“Frank’s bio prompts us to ask ourselves why we seem to require of our art an ironic distance from deep convictions or desperate questions, so that contemporary writers have to either make jokes of them or else try to work them in under cover of some formal trick like intertextual quotation or incongruous juxtaposition, sticking the really urgent stuff inside asterisks as part of some multivalent defamiliarization-flourish or some such shit.”

That bit about the asterisks refers to this essay itself, throughout which Wallace interpolates Big Moral Questions in a manner like such as the following:

“** Is the real point of my life simply to undergo as little pain and as much pleasure as possible? My behavior sure seems to indicate that this is what I believe, at least a lot of the time. But isn’t this kind of a selfish way to live? Forget selfish – isn’t it awful lonely? **”

So again, what Wallace is doing here is explicitly criticizing his own approach rather than trying to fix it. This is a very odd move, because it results in Wallace neither having his cake nor eating it. He could have presented these issues directly, allowing them to stand on their own, or he could have gone deeper into his criticism, attempting to figure out a better way to ask these questions. As it is, half-assing it and then calling himself out like this blunts the immediacy of his questions, making the whole thing feel like little more than a parlor game – exactly the result that Wallace was so afraid of.

So, at this point, there’s only one possibility. Given how central this mistake is to Wallace’s argument, he can’t be doing it by accident. It must be the case that Wallace has a specific positive motivation to present his arguments in this way, to force himself into this apparent double bind. Recall also the way that Wallace’s sloppy arguments in his journalism just so happen to overlap perfectly with his ideological concerns. For example, Wallace thinks that the “Descriptivist” position on linguistics is that “there are no rules,” just like “televisual” irony supposedly means that “nothing means anything anymore,” just like the problem with politics is that “young voters” “don’t believe in anything anymore.” What we’re looking at here is a case of motivated reasoning.


The Howling Fantods

David Foster Wallace’s great fear was what he referred to as “solipsism.” The most vivid expression of this is the fate of Hal Incandenza that opens Infinite Jest: living a rich inner life while being completely unable to communicate with the outside world.

“’There is nothing wrong,’ I say slowly to the floor. ‘I’m in here.’

I’m raised by the crutches of my underarms, shaken toward what he must see as calm by a purple-faced Director: ‘Get a grip, son!’

DeLint at the big man’s arm: ‘Stop it!’

‘I am not what you see and hear.’

Distant sirens. A crude half nelson. Forms at the door. A young Hispanic woman holds her palm against her mouth, looking.

‘I’m not,’ I say.”

The main characters of Infinite Jest, Hal Incandenza and Don Gately, are of course transparently based on Wallace himself; hence, Wallace is here expressing what he sees as the dangers of his own personality and habits. This dramatization of Hal’s terrible fate is precisely Wallace’s expression of his own fear.

As everyone knows by now, one of DFW’s big influences was Wittgenstein. He’s explicit about this in exactly one place: “The Empty Plenum,” his review of David Markson’s Wittgenstein’s Mistress. Unfortunately, this is early DFW, before he had gotten his bad habits of pseudo-academic-ese, intrusive name-dropping, and pointlessly convoluted phraseology under control. It’s badly written, his argument is unclear, and he spends a lot of time on some really weak gender analysis that I’m going to ignore. Still, one does what one must.

Wallace’s central claim is that the philosophy espoused by Wittgenstein’s Tractatus Logico-Philosophicus amounts to advocacy of solipsism. Very, very briefly: the Tractatus claims that facts about the physical world are the only things that we can meaningfully talk about (“The world is everything that is the case,” and “What is the case, the fact, is the existence of atomic facts.”). But “facts” themselves exist only in our minds; thus, Wallace’s worry here is that:

“This latter possibility – if internalized, really believed – is a track that makes stops at skepticism & then solipsism before heading straight into insanity.”

In other words, all we have are our own experiences, and those are unreliable, so if we take this seriously, what we actually have is nothing, pure chaos.

This doesn’t really have much to do with what Wittgenstein was actually talking about. For him, the importance of this argument was that statements about ethics or metaphysics become meaningless. It was a philosophical problem, not a personal one. Still, it would seem that Wallace’s fear has some justification. We know how hard it is to get through to other people just based on everyday experience; it’s not too much of a stretch to take this difficulty seriously.

But “solipsism” is the wrong term for this issue. The whole point of solipsism from a philosophical standpoint is that there’s no way to actually tell the difference between Solipsist World and Non-Solipsist World. It’s a purely philosophical problem, but what Wallace is talking about is the experience and the feeling of not being able to get through to other people.

The other term Wallace likes to use for this is “loneliness,” which is closer to what he’s talking about, but still not totally precise. Wallace was, after all, a well-known author with plenty going on in his life. He wasn’t “lonely” in the obvious sense of the word (one can, of course, be lonely without being alone). So instead, the term I’m going to use is “intellectual isolation,” the fear that nothing that goes on inside our heads can ever really get out into the real world (and, perhaps even scarier, vice versa). Wallace’s specific fear was that, no matter what he said or did, he could never really express himself to another human.

When we understand the issue in this way, it becomes quite clear that this was the underlying impulse that motivated much of Wallace’s work. One of the notable things about Wallace’s oeuvre is how much he wrote about how to write – not in the technical sense, but in the philosophical sense, i.e. how one ought to write. This was his attempt to resolve the problem of intellectual isolation.

And while Wallace was pretty obviously projecting his own concerns onto Wittgenstein’s philosophy, it just so happens that Wittgenstein got around to addressing this problem as well. As Wallace mentions, Wittgenstein performed a dramatic about-face after the Tractatus, such that his second major work, the Philosophical Investigations, amounts to a direct refutation of the argument in the Tractatus. In the Investigations, Wittgenstein turns his approach around completely: rather than trying to determine the basis for language, he looks at language as it actually exists and is used. What he finds out here is one of those simple insights that has deep implications. Rather than language being purely referential (that is, only communicating physical facts), language is actually not referential at all; it is purely functional (that is, it’s a tool for social interaction).

The reason for this is that language is how people interact with each other, not how one person interacts with the world. If you were alone and looking at a tree, you wouldn’t point at it and say “look at that tree.” But you would do so if you were with another person and you wanted to draw their attention to the tree. You might even do so if there were no tree at all, and you were trying to trick the person. In such a case, your utterance obviously doesn’t refer to anything in the real world. Rather, it performs the function of making the other person look.

This is clearest in highly contrived situations, such as a job interview. The standard sort of exchange like Q: “What is your greatest weakness?” A: “Oh gosh, I don’t know, I guess I’m a bit of a perfectionist” isn’t meant to elicit any real information; it’s just for the interviewer to get a feel for the candidate. Most of the interview is really contentless; it’s a sort of “test” to verify that the candidate can respond to situations in the appropriate manner. Wittgenstein calls this sort of situation a “language game,” and each utterance of this sort can be thought of as one possible “move” in the game.

The insight, though, is that, on a fundamental level, all communication is like this. There are only language games, and every possible utterance is a move in whatever game we’re playing at the moment. A “private language” that could be used by only one person to refer directly to the physical world is an impossibility. The connection between language and the physical world is entirely mediated by other humans.

When Wallace gets to the part of “The Empty Plenum” where he explains the argument from the Philosophical Investigations, he doesn’t follow it through to its conclusion like he does with the argument from the Tractatus. This can only be because he doesn’t think the argument resolves the problem, and, indeed, the fact that he spent the rest of his life trying to work it out shows that he didn’t have a solid answer. But he did have an approach, whether it was consciously chosen or not.

Recall the argument Wallace makes about linguistics in “Authority and American Usage.” His claim is that the “Descriptivist” argument advocating “the abandonment of ‘artificial’ rules and conventions” must result in “a literal Babel,” and this is why prescriptive language rules are necessary. Recall further that Wallace makes this claim while discussing Wittgenstein’s argument against private language. We can now finally understand why Wallace makes the bizarre move from “language is purely social” to “arbitrary usage rules are required for understanding.” It’s because he was between a rock and a hard place. From the perspective of the Tractatus argument, language refers to real things, but it can’t actually be used to communicate our internal thoughts and feelings. Whereas under the Investigations argument, “everything is permitted”; language can be used for any purpose, but it has no grounding, so we can never really know what’s being said. Note that this latter argument is exactly the same as Wallace’s objection to irony: when someone is being ironic, they can say anything, but you can never really know what they mean.

And this is precisely the dilemma that Wallace was attempting to overcome as a writer: the issue of how to really communicate what he wants to say. Which means we’ve finally come to the heart of the matter. This is the central issue which all of Wallace’s work was an attempt to resolve. The approach that I’ve previously identified – Wallace’s sublimation of his own feelings into intellectual argument – was his attempted solution (though of course he never actually felt that he had succeeded).

Hence Wallace’s conclusion in “Authority and American Usage.” He accepts that language is fundamentally social and not a formal system, but he still thinks arbitrary usage rules are required. What he’s getting at here is illustrated best by his position with regard to African American Vernacular English. Wallace, unlike most “prescriptivists,” is aware that AAVE is a fully functional dialect and not a “degraded” version of “normal” English. The fact that “Standard” English and not AAVE is the prestige dialect in our society is entirely arbitrary (I mean, it’s the result of white supremacy, obviously, but it’s “arbitrary” in the academic sense). Wallace knows this, and yet he still demands that his students fully embrace Standard English. Why? Because the existence of a formalized standard dialect resolves his dilemma: having a single rigorously defined means of communication allows us to express ourselves such that we can be absolutely understood. This is the root of the prescriptivist anxiety against “ambiguity,” at least for Wallace. What matters to Wallace is not that his dialect specifically is the prestige dialect, but rather that there is a prestige dialect at all, regardless of which dialect that is.

The reason for this is that Wallace feels this is the only way for humans to really be able to communicate. It evades the Wittgensteinian Scylla and Charybdis (words are great) by preserving the social aspect of language that allows us to express ourselves, while providing a solid foundation that allows us to be unambiguously understood.

And it is this same approach that Wallace took in his writing and argumentation. He could have merely expressed himself as an ideological writer a la Dostoevsky, but then he would have been giving up on making sure that other people understood him (the Investigations approach). Or, if he had gone the academic route and made purely intellectual arguments, then he wouldn’t have been expressing himself at all; it would be as though he didn’t really exist (the Tractatus approach). Instead, he attempted to navigate a middle path through the double bind: he took his own feelings and anxieties, and “rigorously defined” them as intellectual arguments, such that everyone else could understand them.

Okay! So, you remember that all of this is the position that I’m arguing against, right? Yeah. Because this doesn’t actually work. It’s a con. And it wouldn’t be nearly as much of a problem if it weren’t for the fact that everyone fell for it.


Philosophers’ Error

The reason all of this matters is that Wallace’s work has almost universally been read in exactly the wrong way. He’s been accepted as an avatar of the current American situation, his earnest confusion and noncommittal intellectualism taken as guidelines. A.O. Scott’s remembrance of Wallace in the New York Times portrays the problem quite vividly:

“The moods that Mr. Wallace distilled so vividly on the page — the gradations of sadness and madness embedded in the obsessive, recursive, exhausting prose style that characterized both his journalism and his fiction — crystallized an unhappy collective consciousness. And it came through most vividly in his voice. Hyperarticulate, plaintive, self-mocking, diffident, overbearing, needy, ironical, almost pathologically self-aware (and nearly impossible to quote in increments smaller than a thousand words) — it was something you instantly recognized even hearing it for the first time. It was — is — the voice in your own head.”

This is a rare example of damning with fulsome praise. This is not how the “shock of recognition” you get from great art is supposed to work. If reflecting what’s already in your own head were all that writing could accomplish, what would be the point? The strength of writing is obviously its ability to capture the sense of internal monologue, but the point ought to be that that monologue is someone else’s, one you couldn’t otherwise hear in your own head. The “recognition” you feel ought to be that of “making the strange familiar,” of not merely encountering an alien perspective but feeling it deeply, such that it becomes a new part of yourself.

What’s critical to note here is the way that Scott recapitulates Wallace’s mistake. He’s aware that Wallace’s work had “his personality . . . stamped on every page,” but then goes on to claim that it “crystallized an unhappy collective consciousness.” This is exactly wrong: it crystallized Wallace’s own unhappiness. And given that Wallace’s unhappiness was in fact the result of serious-fucking-business clinical depression, we have less than no reason to interpret it as a general symptom of society.

Of course, a lot of this kid-gloves treatment has to do with the fact that Wallace was a white male. Most people don’t have the luxury of speaking generally; most people are preemptively confined to their own perspectives. People like Wallace are getting an undeserved pass. Again, Scott is aware of this but fails to recognize its significance: he compares Wallace to a bunch of other authors who he refers to as “itchy late- and post-boomer white guys,” but somehow fails to account for the fact that other types of people exist (including other types of white men; not all of us are hopelessly confuddled by phantom postmodernism). These writers, including Wallace, are mapping out one small corner of human experience and not defining a “generational crisis.”

(And I get that Scott is writing an obituary and he’s obviously not going to criticize Wallace here. But the particular way in which he praises Wallace is what makes the point.)

And this is also why it’s so wrong to revere Wallace as some sort of great intellect (I mean, aside from how much of a front the whole “genius” thing was). Everyone’s aware of his personal problems by now, but these have been framed as foibles, evidence that he was a “flawed human being,” a doomed genius. Again, exactly wrong: Wallace’s problems are evidence that he was normal, that he was doing exactly what all the rest of us are doing: trying to make sense of a senseless universe using whatever shoddy tools we happen to have at hand. And this is why the limits of his perspective and the errors that resulted from those limits must be kept in full view.

This is not currently happening. Consider this deeply unfortunate individual, who is terribly interested in what Wallace’s opinion on selfie sticks would have been, had he only lived to tell us. Truly a shame, right? Again, this person has learned exactly the wrong lesson from Wallace: that a rigorous intellectual analysis of her own collection of trivial personal confusions contains the answers to the great questions about meaning and society. Wallace’s projection of his own problems onto the world has encouraged others to make the same mistake.

But the fact that David Foster Wallace was wrong about everything doesn’t mean that his work doesn’t have value. The limited nature of any one perspective is far from a new problem – and it’s far from insoluble. This is actually one of the things that humanity already has a handle on, though perhaps an unwitting one.


Playing the Wicked Game

Here’s the thing: not only is Wallace’s approach not a solution to his problem, it’s actually the only crime: arguing in bad faith. Bad faith is the thing that actually does to communication what Wallace thought irony did: it makes it impossible to tell where someone is coming from. This is why Wallace was able to, for example, write an entire article about John McCain’s candidacy, the point of which was to harangue young people for not having political convictions anymore, and get through the entire thing without ever betraying the slightest hint of what his own political beliefs may or may not have been.

But the reason Wallace couldn’t find a solution isn’t because there isn’t one; rather, it’s because there isn’t a problem. And, ~ironically~, we know this from the very source that sent Wallace tumbling down the rabbit hole in the first place: Wittgenstein’s Philosophical Investigations.

Like I said, the whole “language games” thing is really obvious on the surface – of course we communicate in functional ways that don’t actually refer to anything. The trick is to follow this argument all the way through. If we accept that all communication consists of “language games,” then the obstacles to communication that so vexed Wallace reveal themselves as phantoms: they are merely different games. One retains the options of playing by their rules, or choosing a different game.

Thus, Wallace’s insistence on one “correct” method of communication is essentially cheating: refusing to abide by the rules of any one game, he takes the rules of one game and applies them to another. But the truth is, just as language naturally resolves itself into mutual comprehensibility without anyone policing it, each language game serves its own purposes just fine, as long as you don’t expect one game to be able to do everything.

Consider Wallace’s criticism of John Updike. Wallace’s claim is that Updike is a “narcissist,” and while this is again a misuse of a technical term, it’s a common one, so it’s clear what Wallace meant. He meant that Updike only ever talked about himself, which he highlights via the following Updike quote:

“Of nothing but me . . . I sing, lacking another song.”

But that isn’t actually what this means. That is, I’m not familiar with Updike, so I have no idea what he meant by it, but I’ll tell you what I mean by it. What this quote refers to is the fact that none of us actually has access to anything other than our own subjective experience. So everything a person writes ultimately comes out of nowhere but their own head; even when they’re writing about experiences that are totally alien to their own, they’re still writing about their own experiences hearing about those experiences (or making them up). This is what it means to “lack another song.”

The catch is that this is fine. I mean, it has to be, because there’s no alternative. We actually are each trapped inside our own perspective, but that doesn’t stop us from communicating, as long as we don’t expect perfection. Updike can only talk about himself, but we understand this, and we take it into account when we read his work, and this allows us to derive our own insights from Updike’s perspective, or to gain an understanding of the particular type of person that he is, or even to read him entirely critically as an example of what not to do. All of these things are valuable. (Again, I have no idea whether Updike’s work is actually worth reading by this standard, but we’re talking about the principle here. Here’s a pretty good blog post that makes this argument with regard to Updike specifically.)

And all of this applies just as strongly to Wallace’s work, despite his attempts to dodge the issue. Even on a totally naive reading of Wallace, isn’t it pretty obvious that he consistently “sings of himself”? Like, are we supposed to think that all that shit about tennis was just a coincidence? “Uncritical” self-absorption is preferable to a self-absorption that pretends to universality.

So that’s one game: subjectivity. Another game is objectivity. Consider “Consider the Lobster,” where Wallace invokes Peter Singer as support for the argument against meat eating. Wallace brings up Singer only in passing, on the way to his own quietist conclusion. But he couldn’t have reached this conclusion if he had actually taken Singer seriously, because Singer’s argument is part of a moral framework that doesn’t really give you the option to just “worry” about the issue.

The famous example that defines Singer’s approach goes as follows: You emerge from the tailor’s, having just purchased a very nice outfit for $1,000. As you walk down the street, you see a child drowning in the river. No one else is around to help. You’re a strong enough swimmer to save the child easily, but doing so will completely ruin your expensive new outfit. Do you save the child? Obviously, the answer is “yes.” But now consider that, instead of seeing a child drowning, you arrive home and find a letter from a charity asking for a $1,000 donation to save the life of a child in some faraway country you’ve never heard of. Do you make the donation? Singer claims that the moral calculus is exactly the same in these two situations, and yet, most of us do not make these sorts of donations whenever we can. According to Singer, we ought to, and this results in a broad obligation to consider the moral effects of our spending choices from a utilitarian perspective. It’s the same deal with meat eating: just as $1,000 is not worth a child’s life, the taste of a good burger is not worth an animal’s life.

The point isn’t whether this argument is right or wrong, the point is that it imposes an obligation. In “Consider the Lobster,” Wallace goes well out of his way to make sure his argument doesn’t impose any obligations on anyone, and this is where he fails as an intellectual. Ideas aren’t toys; accepting an important idea ought to obligate you to change your life. But for Wallace, an idea is just an opportunity to reflect his own confusion. He avoids taking precisely the out that ideas are capable of providing: they can provide us with a framework from which to discuss an issue without having to rely on our own personal idiosyncrasies. “True objectivity” is impossible, obviously. (Singer certainly has his own ideological biases). But so what? That’s no excuse to not do our best.

And again, all of this applies just as strongly to Wallace’s work, despite his attempts to dodge the issue. Wallace does make explicit intellectual arguments which can be accepted or refuted on their own terms. I’ve done a little bit of this already, but let’s stick with the meat eating thing just for simplicity. The conclusion Wallace draws, that “it all still seems to come down to individual conscience,” is the one conclusion that is absolutely invalid. Eating animals is either morally permissible or it is not. If it’s not, you are obligated to avoid it to the best of your ability. If it is, then you don’t have to wring your hands about it. And given the current state of things, the latter is the conclusion that Wallace’s argument actually results in, making his position self-refuting. One does not have the option to stand still on an escalator.

So, you’re getting what’s happened here, right? Wallace succeeded despite his best efforts. That’s the amazing upshot of the language games argument: there is no such thing as intellectual isolation. Establishing a connection to other humans isn’t a prize you get for using language really well, it’s a prerequisite to language use in the first place. The mere use of language in any context is necessarily a connection to the broader human enterprise.

Wallace thought that expressing himself entailed this huge burden, but it’s actually impossible not to express yourself, as long as your audience is aware of what game you’re playing. And the problem with Wallace is precisely that he fooled his audience into avoiding this awareness. His approach makes it seem like he’s not playing games, like he’s just a really smart guy doing his best to figure things out, like he’s “the best mind of his generation.” But none of those three things actually exist.

And we’re not beholden to Wallace’s framework; we’re entirely within our rights to fix his mistakes. We don’t have to pretend like he was some kind of generational oracle and discuss him on that basis. We don’t have to play along with his attempted universalization of his own perspective. We can find what’s worthwhile in his work and apply it as needed, whether as insight, counterexample, or cautionary tale.

Once again, Wallace’s argument for linguistic prescriptivism acts as a microcosm of his overall approach. Let’s say you manage to successfully establish some arbitrary usage rule. Great. So what? Why does anybody have to care? Are you actually going to stop people from ignoring your rule whenever they feel like it? You’re not, because you can’t. Whether a person is understood when they speak isn’t up to you. It’s up to the world. And the meaning of Wallace’s work isn’t up to him, either. It’s up to us.


Nietzsche contra Wallace

Here’s a revealing aside from the Dostoevsky essay:

“Nietzsche would take Dostoevsky’s insight and make it the cornerstone of his own devastating attack on Christianity, and this is ironic: in our own culture of ‘enlightened atheism’ we are very much Nietzsche’s children, his ideological heirs, and without Dostoevsky there would have been no Nietzsche, and yet Dostoevsky is among the most profoundly religious of all writers.”

Okay, first of all, this is totally wrong. I’m supposed to be done with the debunking part here, but I can’t let this one slide. Nietzsche only discovered Dostoevsky in 1887 – too late to have influenced the major works that most defined his philosophy, Beyond Good and Evil and On the Genealogy of Morals (in fact, the two men were nearly contemporaries – Dostoevsky’s last work was written in 1880, Nietzsche’s only 8 years later. Nietzsche was never able to read The Brothers Karamazov because it had not yet been translated). It is true that Nietzsche was smitten with Dostoevsky after discovering him; during his final frenzy of work in 1888, Nietzsche repeatedly makes significant use of the word “idiot.” While this is entirely adorable, it’s a far cry from Dostoevsky being one of Nietzsche’s major influences. Moreover, while this is a bit much to get into here, seeing this connection as “ironic” is awfully superficial, as though Dostoevsky could be summed up as merely “Christian” and Nietzsche as merely “anti-Christian.” Nietzsche was, after all, a profound moralist – just not a Christian one.

That aside, it is precisely not the case that “we” are Nietzsche’s children. Most people are not atheists in either the literal or the metaphorical sense. As usual, Wallace is pretending like his own perspective amounts to a comprehensive explanation. What’s actually going on here is that one specific person is Nietzsche’s child: David Foster Wallace.

It might seem like you couldn’t find two more opposite personalities. Nietzsche, the unrepentant elitist, bombastic and reckless, guided by the past while reaching desperately into the future. Wallace, the determined populist, cautious and humble, embedded deeply in the present. Nietzsche was ignored in his own day due to being “untimely,” while Wallace was revered for his (alleged) ability to tap into the zeitgeist.

But the thing about opposites is that they’re two ends of one spectrum. Both men were engaged in a desperate struggle against what they saw as the creeping nihilism of their own time. Nietzsche saw a great void left behind by Christianity’s fading moral authority, a vast, flat plain on which only the “smallest” could survive. Of course, nature abhors a vacuum, even in morals, so the situation in Wallace’s day was quite different. Wallace saw a glut of meaning created by the rise of extreme pluralism, a great cacophony of noise through which no signal could be discerned.

The big difference is that Nietzsche was deeply self-aware in a way that Wallace was not. Nietzsche was explicit about the fact that his proposed new morality was based entirely on his own standards; indeed, that was the point. Wallace, while trying to be egalitarian, stumbled into the same territory unwittingly by universalizing his own particulars.

Nietzsche would not have been surprised:

“Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir; also that the moral (or immoral) intentions in every philosophy constituted the real germ of life from which the whole plant had grown.”

Nobody’s confused about the fact that Wallace was speaking from his own perspective. But people still talk about his arguments as though they have some kind of formal, universal validity. What Nietzsche is saying here is that this is never the case. It isn’t just the blatantly personal stuff; the supposedly analytical aspects of Wallace’s work are also only expressions of the type of person that he is.

As mentioned, the real trick here is that this isn’t a problem. It isn’t a problem for Nietzsche’s work, which is still valuable despite all the stuff he was blatantly wrong about, and despite the fact that we can no longer countenance his conclusions. And it isn’t a problem for Wallace’s work, because Nietzsche, the ailing diagnostician, has the cure for his crimes:

“The philosopher supposes that the value of his philosophy lies in the whole, in the structure; but posterity finds its value in the stone which he used for building, and which is used many more times after that for building – better. Thus it finds the value in the fact that the structure can be destroyed and nevertheless retains value as building material.”

This is why Wallace’s structure needs to be destroyed: so we can build better.


Building Better

Of course, it would be irresponsible to stop here without at least getting started on the whole rebuilding thing. This has hardly been a comprehensive overview of Wallace’s work (his fiction is a whole other topic), but we’ve been through enough to draw some basic conclusions.

The first and most important should be obvious by now: don’t try to universalize your own idiosyncrasies. In fact, as soon as you find yourself trying to make a big statement about something like “American culture” or “televisual irony,” it’s probably a good idea to just slow your roll. It’s commonly said that great art takes the particular and makes it universal, but that’s not what that means. It means that by expressing yourself creatively you provide other people with something that they can use to make connections that neither you nor they could have anticipated. You can’t force it like Wallace tried to. All you can do is express yourself within your limitations and trust your audience to meet you halfway. You have to play the wicked game.

When Wallace tried to simultaneously work from his own perspective and be objective, he was trying to avoid being “ideological.” But this is impossible; the point of the term “ideology” is precisely that everyone has one. It’s clear from the way Wallace deploys the term (which he does frequently) that he didn’t understand this. And the solution here is pretty straightforward: we always need to be cognizant of our own ideological assumptions, and we can’t let people like Wallace pretend like they aren’t arguing ideologically.

The second is to interrogate your damn frameworks. Wallace never did this and it always cost him. When he tried to talk about the politics of language, he shoehorned the whole thing into the “liberal/conservative” divide, because that’s all he knew about politics. When he talked about TV, his whole argument was based on the notion that TV was “ironic,” because that’s what everyone always says about it. Of course, this failure is what made his writing so appealing: he was telling people what they already knew (and this is where Wallace’s gift as a writer was more like a curse: it made his arguments more persuasive than they deserved to be).

The last is to not underestimate ideas. This is actually closely related to one of Wallace’s own insights – maybe his best. It occurs a couple of times in Infinite Jest, and it takes the form: don’t underestimate objects. What this means is that objects aren’t just things for humans to use; they have their own aspect of being that affects the way people interact with them. This is clearer than ever with the advent of smartphones, which are objects that are pretty obviously affecting people’s behavior in unanticipated ways. A piece of software is ultimately just an object, but its particular characteristics affect the people who use it. For example, one of the reasons search engines are so effective is not because they’re so brilliantly coded, but because people have learned how to phrase their queries in ways that are easy for a piece of software to process (such as focusing on improbable keywords). More disturbingly, we may even be learning to only want to ask things that can be answered by a search engine. The cliche that can be redeemed in order to describe this phenomenon is “the things you own end up owning you.”
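
To make the keyword point concrete, here is a minimal sketch of rarity-weighted keyword matching. This is a toy inverse-document-frequency scorer of my own; no actual search engine works this simply, and the corpus, document names, and functions are all invented. The point is just that a query built around improbable words can be ranked correctly with no comprehension involved at all.

```python
# Toy sketch over an invented three-document corpus: rare words carry nearly
# all of the matching signal, so queries phrased around them are easy for
# software to process.
import math
from collections import Counter

docs = {
    "kafka_bio": "franz kafka prague insurance clerk fiction",
    "pest_control": "how to get rid of a cockroach infestation",
    "metamorphosis": "gregor samsa wakes up transformed kafka metamorphosis",
}

def idf(word):
    # Rarer words get larger weights; ubiquitous words get weights near zero.
    containing = sum(1 for text in docs.values() if word in text.split())
    return math.log(len(docs) / containing) if containing else 0.0

def score(query, text):
    # Sum the rarity weights of the query words that appear in the document.
    words = Counter(text.split())
    return sum(idf(w) for w in query.split() if words[w])

query = "kafka metamorphosis"  # improbable keywords: trivially matchable
ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
print(ranked)  # 'metamorphosis' ranks first on keyword rarity alone
```

Phrase your query the way the scorer expects and it looks clever; phrase it the way you would ask a person and it falls apart. That is the sense in which the object has shaped its users.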

Wallace failed to apply this insight to his treatment of ideas. He treated them like they were toys to bounce around (this is another reason why name-dropping really is a bad thing). Crucially, he treated Wittgenstein’s philosophy as an opportunity to merely reflect on his own anxieties. But as we’ve seen, if he hadn’t done this, if he had actually taken Wittgenstein seriously and followed his argument through, it could have resolved his problems. But this could only have happened if he had been willing to let an argument take him somewhere he wasn’t looking to go.


In a discussion of the “Death of the Author” theory, Wallace defines his position as follows:

“For those of us civilians who know in our gut that writing is an act of communication between one human being and another, the whole question seems kind of arcane.”

Let’s take him at his word.

How to smell a rat

I’m all for taking tech assholes down a notch (or several notches), but this kind of alarmism isn’t actually helpful:

“It struck me that the search engine might know more about my unconscious than I do—a possibility that would put it in a position not only to predict my behavior, but to manipulate it. Lose your privacy, lose your free will—a chilling thought.”

Don’t actually read that article, it’s bad. It’s a bunch of pathetic bourgeois lifestyle details spun into a conspiracy theory that’s terrifying only in its dullness, like a lobotomized Philip K. Dick plot. But it is an instructive example of how to get things about as wrong as possible.

I want to start with a point about the “free will” thing, since there are some pretty common and illuminating errors at work here. The reason that people think there’s a contradiction between determinism and free will (there’s not) is that they think determinism means that people can “predict” what you’re going to do, and therefore you aren’t really making a decision. This isn’t even necessarily true on its own: it may not be practically possible to do the calculations required to simulate a human brain fast enough for the results to be useful (that is, faster than the speed at which the universe does them. The reason we can calculate things faster than the universe can is that we abstract away all the irrelevant bits, but when it comes to something as complex as the brain, almost everything is relevant. This is why our ability to predict the weather is limited, for example. There’s too much relevant data to process in the amount of time we have to do it). But the more fundamental point is that free will has nothing to do with predictability.
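
(As an aside on that parenthetical: the “almost everything is relevant” problem shows up in miniature in any chaotic system. The sketch below is my own toy illustration, not anything from the article or a real weather model; it iterates the logistic map to show how a discrepancy out at the twelfth decimal place, exactly the kind of detail a practical model abstracts away, comes to dominate the prediction within a few dozen steps.)

```python
# Toy illustration (assumed example): chaotic dynamics amplify whatever detail
# you abstract away, which is why long-range prediction is limited no matter
# how fast the computation runs.
def logistic(x, steps, r=4.0):
    # Iterate the logistic map x -> r*x*(1-x), a standard simple chaotic system.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

exact = 0.123456789012    # the "real" initial condition
approx = round(exact, 10) # our measurement, with the fine detail discarded

for steps in (10, 30, 50):
    print(steps, abs(logistic(exact, steps) - logistic(approx, steps)))
# The discarded difference (~1e-12) roughly doubles with each step, so by
# around forty steps the two trajectories no longer have anything to do
# with each other.
```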

Imagine you’re out to dinner with a friend who’s a committed vegan. You look at the menu and notice there’s only one vegan entree. Given this, you can predict with very high accuracy what your friend is going to order. But the reason you can do this is precisely because of your friend’s free will: their predictability is the result of a choice they made. There’s only one possible thing they can do, but that’s because it’s the only thing that they want to do.

Inversely, imagine your friend instead has a nervous disorder that causes them to freeze up when faced with a large number of choices. Their coping mechanism in such situations is to quickly make a completely random choice. Here, you can’t predict at all what your friend is going to order, and in this case it’s precisely because they aren’t making a free choice. They can potentially order anything, but the one thing they can’t do is order something they actually want.

The source of the error here is that people interpret “free will” to mean “I’m a special snowflake.” Since determinism means that you aren’t special, you’re just an object like everything else, it must also mean that you don’t have free will. But this folk notion of “free will” as “freedom from constraints” is a fantasy; as demonstrated by our vegan friend, freedom, properly understood, is actually an engagement with constraints (there’s no such thing as there being no constraints; if you were floating in a featureless void there would be nothing that could have caused you to develop any actual characteristics. Practically speaking, you wouldn’t exist). Indeed, nobody is actually a vegan as such; rather, people are vegan because of facts about the real world that, under a certain moral framework, compel this choice.

This applies broadly: rather than the laws of physics preventing us from making free choices, it is only because we live in an ordered universe that our choices are real. The only two possibilities are order or chaos, and it’s obvious that chaos is precisely the situation in which there really wouldn’t be any such thing as free will.

The third alternative that some people seem to be after is something that is ordered but is “outside” the laws of physics. Let’s call this thing “soul power.” The idea is that soul power would allow a person’s will to impinge upon the laws of physics, cheating determinism. But if soul power allows you to override the laws of physics, then all that means is that we instead need laws of soul power to understand the universe; if there were no such laws, if soul power were chaotic, then it wouldn’t solve the problem. What’s required is something that allows us to use past information to make a decision in the present, i.e. the future has to be determined by the past. And if this is so, it must be possible to understand the principles by which soul power operates. Ergo, positing soul power doesn’t solve anything; the difference between physical laws and soul laws is merely an implementation detail.

Relatedly, what your desires are in the first place is also either explicable or chaotic. So, in the same way, it doesn’t matter whether your desires come from basic physics or from some sort of divine guidance; whatever the source, your desires are only meaningful if they arise from the appropriate sorts of real-world interactions. If, for example, you grow up watching your grandfather slowly die of lung cancer after a lifetime of smoking, that experience needs to be able to compel you to not start smoking. The situation where this is not the case is obviously the one in which you do not have free will. What would be absurd is if you somehow had a preference for or against smoking that was not based on your actual experiences with the practice.

Thus, these are the two halves of the free will fantasy: that it makes you a special little snowflake exempt from the limits of science, and that you’re capable of “pure” motivations that come from the deepest part of your soul and are unaffected by dirty reality. What is important to realize is that both of these ideas are completely wrong, and that free will is still a real thing.

When we understand this, we can start to focus on what actually matters about free will. Rather than conceptualizing it holistically, that is, arguing about whether humans “do” or “don’t” have free will, we can look at individual decisions and determine whether or not they are being made freely.

Okay, so, we were talking about mass data acquisition by corporations (“Big Data” is a bad concept and you shouldn’t use it). Since none of the corporations in question employ a mercenary army (yet), what we should be talking about is economic coercion. As a basic example: Amazon has made a number of power plays for the purpose of controlling as much commercial activity as possible. As a result, the convenience offered by Amazon is such that it is difficult for many people not to use it, despite it now being widely recognized that Amazon is a deeply immoral company. If there were readily available alternatives to Amazon, or if our daily lives were unharried enough to allow us to find non-readily available alternatives, we would be more able to take the appropriate actions with regard to the information we’ve received about Amazon’s employment practices. The same basic dynamic applies to every other “disruptive” company.

(Side note: how hilarious is it that “disruptive” is the term used by people who support the practice? It’s such a classic nerd blunder to be so clueless about the fact that people can disagree with their goals that they take a purely negative term and try to use it like a cute joke, oblivious to the fact that they’re giving away the game.)

The end goal of Amazon, Google, and Facebook alike is to become “company towns,” such that all your transactions have to go through them (for Amazon this means your literal financial transactions, for Google it’s your access to information and for Facebook it’s social interaction, which is why Facebook is the skeeviest one out of the bunch). Of course, another name for this type of situation is “monopoly,” which is the goal of every corporation on some level (Uber is making a play for monopoly on urban transportation, for example). But company towns and monopolies are things that actually have happened in the past, without the aid of mass data collection. So if the ubiquity of these companies is starting to seem scary (it is), it would probably be a good idea to keep our eyes on the prize.

And while the data acquisition that these companies engage in certainly makes all of this easier, it isn’t actually the cause. The cause, obviously, is the profit motive. That’s the only reason any of these companies are doing anything. I mean, a lot of this stuff actually is convenient. If we lived in a society that understood real consent and wasn’t constantly trying to fleece people, mass data acquisition would be a great tool with all sorts of socially positive uses. This wouldn’t be good for business, of course, just good for humanity.

But the people who constantly kvetch about how “spooky” it is that their devices are “spying” on them don’t actually oppose capitalism. On the contrary, these people are upset precisely because they’ve completely bought into the consumerist fantasy that their participation in the market defines them as a unique individual. This fantasy used to be required to sell people shit; it’s not like you can advertise a bottle of cancer-flavored sugar water on its merits. But the advent of information technology has shattered the illusion, revealing unavoidably that, from an economic point of view, each of us is a mere consumer. The only aspect of your being that capitalism cares about is how much wealth can be extracted from you. You are literally a number in a spreadsheet.

But destroying the fantasy ought to be a step forward, since it was horseshit in the first place. That’s why looking at the issue of mass surveillance from a consumer perspective is petty as all fuck. I actually feel pretty bad for the person who wrote that article (you remember, the one up at the top that you didn’t read), since he’s apparently living in a world where the advertisements he receives constitute a recognition of his innermost self. And, while none of us choose to participate in a capitalist society, there does come a point at which you’re asking for it. If you’re wearing one of those dumbass fitness wristbands all day long so that you can sync the data to your smartphone, you pretty much deserve whatever happens to you. Because guess what: there actually is more to life than market transactions. It is entirely within your abilities to sit down and read a fucking book, and I promise that nobody is monitoring your brainwaves to gain insight into your interpretation of Kafka.

(Actually, one of the reasons this sort of “paranoia” is so hard to swallow is that the recommendation engines and so forth that we’re talking about are fucking awful. I have no idea how anyone is capable of being spooked by how “clever” these bone-stupid algorithms are. Amazon can’t even make the most basic semantic distinctions: when you click on something, it has no idea whether you’re looking at it for yourself, or for a gift, or because you saw it on Worst Things For Sale, or because it was called Barbie and Her Sisters: Puppy Rescue and you just had to know what the hell that was. If they actually were monitoring you reading The Metamorphosis they’d probably be trying to sell you bug spray.)
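
(For what it’s worth, here is a minimal sketch of the kind of signal such an engine actually gets, using the crudest “people who clicked this also clicked that” scheme. The click log and item names are invented and real systems are more elaborate, but the input has the same shape: every click is one identical event, with the intent behind it invisible to the model.)

```python
# Toy item-to-item recommender over an invented click log: joke clicks, gift
# shopping, and sincere interest all look identical, because intent never
# enters the data.
from collections import defaultdict

clicks = [
    ("you", "the_metamorphosis"),
    ("you", "bug_spray"),            # a joke detour, but the model can't know that
    ("you", "barbie_puppy_rescue"),  # morbid curiosity, ditto
    ("someone_else", "the_metamorphosis"),
    ("someone_else", "kafka_biography"),
]

# Group clicks by user, then count co-occurrences: "people who clicked X also clicked Y."
by_user = defaultdict(set)
for user, item in clicks:
    by_user[user].add(item)

co = defaultdict(lambda: defaultdict(int))
for items in by_user.values():
    for a in items:
        for b in items:
            if a != b:
                co[a][b] += 1

# Suggestions alongside "the_metamorphosis" weight the joke clicks and the
# sincere one identically.
print(sorted(co["the_metamorphosis"].items(), key=lambda kv: -kv[1]))
```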

Forget Google, this is the real threat to humanity: the petty bourgeois lifestyle taken to such an extreme that the mere recognition of forces greater than one’s own consumption habits is enough to precipitate an existential crisis. I’m fairly embarrassed to actually have to say this, but it’s apparently necessary: a person is not defined by their browsing history, there is such a thing as the human heart, and you can’t map it out by correlating data from social media posts.

Of course, none of this means that mass surveillance is not a critical issue; quite the opposite. We’ve pretty obviously been avoiding the real issue here, which is murder. The most extreme consequences of mass surveillance are not theoretical, they have already happened to people like Abdulrahman al-Awlaki. This is why it is correct to treat conspiracy theorists like addled children: for all their bluster, they refuse to engage with the actual conspiracies that are actually killing people right now. They’re play-acting at armageddon.

There is one term that must be understood by anyone who wants to even pretend to have the most basic grounding from which to speak about political issues, and that term is COINTELPRO.

“A March 4th, 1968 memo from J Edgar Hoover to FBI field offices laid out the goals of the COINTELPRO – Black Nationalist Hate Groups program: ‘to prevent the coalition of militant black nationalist groups;’ ‘to prevent the rise of a messiah who could unify and electrify the militant black nationalist movement;’ ‘to prevent violence on the part of black nationalist groups;’ ‘to prevent militant black nationalist groups and leaders from gaining respectability;’ and ‘to prevent the long-range growth of militant black nationalist organizations, especially among youth.’ Included in the program were a broad spectrum of civil rights and religious groups; targets included Martin Luther King, Malcolm X, Stokely Carmichael, Eldridge Cleaver, and Elijah Muhammad.”

“From its inception, the FBI has operated on the doctrine that the ‘preliminary stages of organization and preparation’ must be frustrated, well before there is any clear and present danger of ‘revolutionary radicalism.’ At its most extreme dimension, political dissidents have been eliminated outright or sent to prison for the rest of their lives. There are quite a number of individuals who have been handled in that fashion. Many more, however, were ‘neutralized’ by intimidation, harassment, discrediting, snitch jacketing, a whole assortment of authoritarian and illegal tactics.”

“One of the more dramatic incidents occurred on the night of December 4, 1969, when Panther leaders Fred Hampton and Mark Clark were shot to death by Chicago policemen in a predawn raid on their apartment. Hampton, one of the most promising leaders of the Black Panther party, was killed in bed, perhaps drugged. Depositions in a civil suit in Chicago revealed that the chief of Panther security and Hampton’s personal bodyguard, William O’Neal, was an FBI infiltrator. O’Neal gave his FBI contacting agent, Roy Mitchell, a detailed floor plan of the apartment, which Mitchell turned over to the state’s attorney’s office shortly before the attack, along with ‘information’ — of dubious veracity — that there were two illegal shotguns in the apartment. For his services, O’Neal was paid over $10,000 from January 1969 through July 1970, according to Mitchell’s affidavit.”

The reason this must be understood is that COINTELPRO is what happens when the government considers something an actual threat: they shut it the fuck down. If the government isn’t attempting to wreck your shit, it’s because you don’t matter.

With regard to the suppression of political discontent in America, it’s commonly acknowledged that “things are better now,” meaning it’s been a while since we’ve had a real Kent State Massacre type of situation (which isn’t to say that the government is not busy killing Americans, only that these killings (most obviously, murders by police) are not political in the sense we’re discussing here (that is, they’re part of a system of control, but not a response to a direct threat)). But this is only because Americans are now so comfortable that no one living in America is willing to take things to the required level (consider that the police were able to quietly rout Occupy in the conventional manner, without creating any inconvenient martyrs). This is globalization at work: as our slave labor has been outsourced, so too has our discontent.

And none of this actually has anything to do with surveillance technology per se. Governments kill whoever they feel like using whatever technology happens to be available at the time. If a movement gets to be a big enough threat that the government actually feels the need to take it down the hard way, they certainly will use the data provided by tech companies to do so. But not having that data wouldn’t stop them. The level of available technology is not the relevant criterion. Power is.

It would, of course, be great if we could pass some laws preventing the government from blithely snatching up any data it can get its clumsy fingers around, as well as regulations enforcing real consent for data acquisition by tech companies. But the fact that lawmakers have a notoriously hard time keeping up with technology is more of a feature than a bug. The absence of a real legislative framework creates a situation in which both the government and corporations are free to do pretty much whatever the hell they want. As such, there’s a strong disincentive for anyone who matters to actually try to change this state of affairs.

In summary, mass surveillance is a practical problem, not a philosophical one. The actual thing keeping us out of a 1984-style surveillance situation is the fact that all the required data can’t practically be processed (as in it’s physically impossible: total surveillance generates exactly as many hours of data as there are person-hours available to review them). So what actually happens is that the data all gets hoovered up and stored on some big server somewhere, dormant and invisible, until someone makes the political choice to access it in a certain way, looking for a certain pattern – and then decides what action to take in response to their findings. The key element in this scenario is not the camera on the street (or in your pocket), but the person with their finger on the trigger.

Unless you work for the Atlantic, in which case you can write what appears to be an entire cover article on the subject without ever mentioning any of this. So when you hear these jokers going on about how “spooky” it is that their smartphones are spying on them, recognize this attitude for what it is: the expression of a state of luxury so extreme that it makes petty cultural detritus like targeted advertising actually seem meaningful.

On the verge

There’s not that much to say about Axiom Verge itself. It’s good. It’s a good Metroid clone. That’s not even a dig or anything; it’s well-designed and it’s fun. And despite the fact that “clone” is the term we use for things like this, there isn’t anything wrong with doing genre work. What’s actually interesting about Axiom Verge, though, isn’t how good of a game it is, but how good of a game it isn’t.


The first thing that happens in the game is that you’re told to go into a room and pick up a gun. You then use the gun to shoot open the door to the next area. In fact, the next three upgrades you get after that are also weapons. The game proceeds pretty much how you would expect from this introduction: most of the gameplay is shooting things, and the boss battles are all firefights.

Actually, the game is disappointing even in this regard. You get a huge number of different weapons (like seriously way too many. Pro tip: the concept of “minimalism” exists for a reason) with different firing patterns and such, but the vast majority of the time the most effective thing to do is to stick with your default weapon and just mash the fire button as fast as you can. The second boss fight is especially anti-notable in this regard: it occurs after you’ve obtained three new weapons since the first boss, and none of them are useful. You know you’ve got a problem when you’re so into shooting ’em up that you’re failing Game Design 101.

This is especially sad when you remember that what makes the Metroid series notable is precisely not the combat, it’s the exploration. Metroid-style combat that consists of merely shooting at enemies until they go away is boring, which is fine, because it’s not supposed to be the focus of the game. Of course, Axiom Verge is far from the only game to make this mistake; indeed, the Metroid series itself suffers deeply from this problem, which is why Super Metroid is still the only game in the series that’s actually worth talking about. In all this time, not a single game has actually improved upon the aspects of Super Metroid that made it great; few have even competently imitated them.


And yet, there are two items in Axiom Verge that actually hint at a path forward: the Address Disruptor, which allows you to “hack” certain enemies and objects in order to change their properties, and the Passcode Tool, which allows you to change the basic parameters of the game by discovering and entering certain passwords, which can then be turned on or off at will. Pretty interesting stuff, right? Here’s a fun idea: imagine that, instead of getting a gun first so that you can start shooting things as soon as possible, the first thing you picked up was the Address Disruptor, and instead of merely pointing it at the door and pressing “fire,” you actually had to use it to rearrange the environment in some way to be able to proceed. And then you got the Passcode Tool, and a password that, like, inverted gravity or something, and then you had an entirely different version of the game world to explore. Then imagine an entire game that followed from this introduction.

Go on, give yourself a minute to really think of some neat applications of these ideas. I’ll wait.

. . .

Did you enjoy that? I hope so, because none of the stuff you were imagining is actually in Axiom Verge.


The Address Disruptor does a couple of neat things, though not with the actual environment, where its only use is to allow traversal through certain passages by revealing platforms or removing walls, exactly as if it were a blue key that opens blue doors. Hacking enemies, though, does provide a few interesting moments. Some turn into platforms, others gain the ability to break through certain walls. There’s even one that directly drops an upgrade when killed in its hacked form. Behaviors like these provide new ways to explore the environment and search for secrets.

Ultimately, though, most of the Disruptor’s effects are combat-based. Fast-moving enemies slow down, enemies that normally chase and latch onto you will instead stay still and shoot at you, armored enemies become vulnerable to standard weaponry. Again, the problem with this is that combat is boring; since your goal is to just get rid of the enemies, their behaviors don’t really matter. You’re merely removing obstacles that are in your path. If hacking them makes them easier to deal with, fine; if it’s easier to just shoot them, that’s fine too.

At least that’s something, though. The Passcode Tool is apparently made out of some sort of alien technology that’s powered entirely by disappointment. There are exactly two types of passwords that you can find in the game: one translates some of the log entries you find, allowing you to read thought-provoking fragments that reveal intriguing details about the game’s complex backstory (sometimes I really wonder why I’m doing this to myself), and the other opens passageways in certain rooms, exactly as if it were a blue key that opens blue doors.

There’s one last point that needs to be made about the aesthetics of these items. They’re both presented as ways for you to “break the game,” and their graphical representations support this. Hacking enemies with the Address Disruptor causes them to appear “glitched,” and the Passcode Tool is basically a Game Genie (remember Game Genie? It’s back, in pog form). Even the game’s own ad copy claims that you can “break the game itself by using glitches to corrupt foes and solve puzzles in the environment.” Of course, this is exactly wrong: because these mechanics have specific, intentional effects and the game is designed around them, they precisely do not “break” the game. This may just seem like a cute reference, but what’s important is that it allows Axiom Verge to pretend to be doing more than it actually is; to make do with cuteness instead of trying for depth. This is the problem of mere cleverness.


So, since I already used the “kill yr idols” conclusion, let’s try something else. Axiom Verge is science-themed. The player character is a theoretical physicist (uh, I think. He works in a “laser lab,” anyway), and your equipment’s ability to alter reality is implied to come from the development and application of a “Theory of Everything.” What I’m going to suggest is that Axiom Verge ought to have followed its theme.

As mentioned, the game is a more combat-focused version of the basic Metroid design, “combat” in this case meaning that there are “enemies” whose only purpose is to be obstacles to your progress, and you get them out of the way by “attacking” them enough to get rid of them while avoiding their own attacks on you. This is very much not what science is like. Science (when done well) is about open-mindedness, collaboration, experimentation, careful observation, and even tedious rigor. Of course, I’m not claiming that the game should have tried to implement a complete representation of the scientific method, but I am claiming that it could easily have done better than implementing the exact opposite.

And here’s what’s interesting: the basic explorative gameplay of Metroid is actually already fairly science-like. You have to stay open-minded and look for alternative routes in order to successfully navigate the environment. You have to experiment to understand how your tools interact with the game world. You have to make careful observations to find likely locations of hidden areas. Sometimes you even have to tediously check every possible wall for a hidden passage. Axiom Verge, with its claimed ability to allow you to alter the environment via the Address Disruptor and change the basic nature of the game with the Passcode Tool, should have been able to do even better than this; it should have been a step forward. Instead, it does the easy thing and slaps a bunch more guns onto a basic design template. It retreats from the game it ought to have been.


The third boss fight provides some insight into how things could have worked. The boss is a giant, screen-filling monstrosity that throws multiple simultaneous attacks at you, and if you try to fight it via the standard dodging-and-shooting approach, you’re totally doomed. Instead, you have to use the Address Disruptor to reveal more platforms in the area that both block some of the boss’s attacks and provide you with more advantageous positions from which to attack (it’s over once you have the high ground). This is a great example of how a boss fight can rely on thought and planning rather than reflexes and button mashing. And it shows that, even with just the tools that Axiom Verge already has, there could have been an entire game that worked this way.

This is the real significance of the fact that Axiom Verge is a Metroid clone. Starting from the basic Metroid design and then adding enough “innovative” ideas to make the game “original” is exactly the wrong approach. The clearest example of this mistake is the Remote Drone, which is used to move through narrow passages in exactly the same way as Metroid’s Morph Ball. Its look and feel are slightly different, which guarantees that clueless reviewers will praise it for “originality,” but the actual function of the item is exactly the same. Certainly, when one considers the possible applications of a remote-controlled robot in the context of scientific exploration, one can easily imagine several more interesting alternatives.

The better approach, then, is to start with a theme, something that you actually want the game to convey, and then use whatever aspects of existing designs are useful for doing so. Even if this results in a pure genre game, it’ll be one that matters for its own sake, that isn’t merely a representative of its category. This is part of the deep problem that video games have with insularity: they’re only judged against themselves. A different version of a game that’s already been judged “good” is therefore necessarily also “good.” But this doesn’t give anyone who doesn’t already like this type of game any reason to care about it; indeed, it doesn’t give the game a right to exist when someone else has already done it better. The reason nobody has to make any excuses when some new band comes out sounding like the Ramones is that it’s taken for granted that music is a way to express something; we expect it to stand up to judgment on our own terms. This is not currently the case for video games.


David Foster Wallace was wrong about everything

(Part 1)

(Part 2)

While it’s a moderate amount of fun to go through and debunk all of David Foster Wallace’s silly arguments, there’s a real mystery here: how he could be so serious and thoughtful and yet so fundamentally clueless. While I very much don’t buy the whole “genius” angle (either in regards to DFW or in general), he does seem to have been smart enough that he shouldn’t have failed this comprehensively without a good reason. In other words, there must have been a fundamental flaw in his general approach – one which would be worth our while to identify and correct. In order to get to the bottom of this, we need to unpack the closest thing he wrote to a mission statement: “E Unibus Pluram.” This essay is where Wallace fully articulates his stance with regard to Our Modern Culture, which stance is, in short, opposed to irony and in favor of a sort of refined banality.

As usual, Wallace is taking a pretty basic idea, padding it with vague intellectualism, and using his substantial writing talent to make it look good. The idea that society nowadays is “too ironic” and “nothing means anything anymore” is common enough to have become its own cliche. As a result, there is a significant anti-DFW contingent that is largely motivated by an instinctive skepticism of anyone making this type of argument, which is a good instinct. Banality actually is a seriously bad thing and anyone who winds up in the vicinity of advocating it really needs to watch their step.

But we can do better than merely rejecting Wallace’s arguments on these grounds. First, this line of argument merits a thorough counterargument precisely because it’s so common. Second, if we accept that Wallace was a reasonably smart person and that he put a lot of work into his arguments, then it will be at least interesting to figure out how his efforts led him here. Finally, figuring out what Wallace’s deal was will help us come to a more complete understanding of what his work was really about.

In order for any of this to make sense, we need to start with a critical correction to Wallace’s framework: we need to define “irony.” I’ve mentioned that Wallace has a bad habit of not interrogating his framework, which leads him to draw overly broad conclusions, but in this case it’s worse. If the claim is that “irony” is destroying our ability to create meaning, then what we mean by “irony” is the entire issue.

Despite all the conniptions that people whip themselves into over the topic, the basic definition of irony is pretty simple: irony is when you use words to express something other than what those words actually say. The simplest example is sarcasm, which is when you use tone to indicate that what you mean is the opposite of what you’re saying. But irony in general does not necessarily convey the opposite of what you’re saying, it merely conveys something different. Note also that this definition does not imply any kind of motivation or ideological stance.

My favorite example for understanding irony is the “Friends, Romans, countrymen” speech from Julius Caesar (it’s in Act 3, Scene 2). As you’ll recall, Caesar has just been murdered by Brutus and the other senators, and an angry mob is at the Capitol demanding some answers. Brutus gives a simple explanation that satisfies the crowd, and then, being one of literature’s great honorable morons, leaves to allow Mark Antony to deliver the eulogy. Antony famously states that “I come to bury Caesar, not to praise him,” but the key to his speech is that he’s actually there to do neither. He’s there to incite a riot. The usual sense of irony is present when he repeatedly says that “Brutus is an honorable man”; certainly, this is the opposite of what Antony believes. But that’s not the point. Antony isn’t trying to convince people that Brutus is dishonorable, he’s trying to enrage them. Furthermore, Antony is being entirely sincere here. He actually loved Caesar and he’s actually pissed about him being murdered. Irony is merely the means by which he takes this one action to advance his cause. It’s perfectly normal for irony and sincerity to coexist, because irony is not a worldview, it is a rhetorical technique. It can be used for whatever purpose one requires.

Despite this, it’s easy to see why various other concepts such as “detachment” or “cynicism” or “apathy” have glommed onto the concept of irony. Accepting irony as a legitimate method of communication is sort of like opening Pandora’s Box: everything becomes possible. It’s possible, for example, to use irony to avoid actually saying anything, or to use it to denigrate broadly without allowing for the possibility of a better alternative, but these are only possible uses of irony. Irony itself does not imply any particular motivation, which is why it’s so silly to say, as people so often do, that we’re living in an “ironic culture” or that irony is over because of a Broadway musical or whatever.

Okay, so, what’s the big deal if Wallace used a word wrong? He was referring to something with the word “irony,” so we should just be talking about whatever that thing was, right? Perhaps Wallace specifically meant the use of irony to stay cool and detached and avoid committing oneself, and that’s what he was arguing against. Unfortunately, this doesn’t work. The problem is that Wallace and the other cultural critics who lament our “ironic” society vastly overestimate the amount of irony that is actually present, because they lump together everything but the most po-faced sincerity under the “irony” label. This is ultimately the same old problem of Wallace using a broad brush to paint over the cracks in his actual analysis. If we go through Wallace’s arguments with a more rigorous understanding of what irony is and what it can do, we can both fix his conclusion and figure out where the flaws crept in.

So let’s talk about TV.


TV is My Friend

Wallace’s basic charge against TV is that it’s created a pervasively ironic culture through its combination of ubiquity and self-reference. Briefly: once TVs wound up in everybody’s homes and became a normal part of human life, TV programs then had to incorporate TV itself into their own content in order to maintain verisimilitude. One generation later, this self-incorporation has itself become part of everyone’s life experience, so now TV has to refer to itself referring to itself. Note that this is as far as it goes; there’s not an infinite number of possible layers of reference because at this point the Ouroboros has caught its own tail. If you try to add another layer you’ll still just have TV referring to itself referring to itself, which is the same as the third layer. Also note that the timeline for this checks out: TV first becomes ubiquitous during the naive 1950s, gains its first level of detachment one generation later, in the cynical 1970s, and achieves its true form of black-hole postmodernism in the nihilistic 1990s.

This is wrong. That is, all of this stuff did sort of happen in a basic sense, but it’s wrong to accept this as a complete explanation of American culture, which is exactly what Wallace is doing in this essay. The simple fact is that it’s a big world out there and there’s tons of other shit going on. Part of the problem with Wallace is that he’ll say one thing that’s correct in a limited way, leading people to accept his argument, but then go on to draw an unacceptably broad conclusion from it.

Wallace evokes the pervasiveness of TV with the statistic that “television is watched over six hours a day in the average American household.” He describes the situation as follows:

“Let’s for a second imagine Joe Briefcase as now just an average U.S. Male, relatively lonely, adjusted, married, blessed with 2.3 apple-cheeked issue, utterly normal, home from hard work at 5:30, starting his average six-hour stint in front of the television.”

Now, this is obviously a rhetorical description. Wallace is aware that an average is not a quota. But these sorts of clever flourishes are dangerous precisely because of their ability to smuggle in unintended assumptions. That’s why it matters that the situation Wallace is describing here is totally impossible.

If we assume that Joe here works for 8 hours a day, sleeps for 8 hours, and spends 2 hours on commuting/eating/errands/etc. (a significant underestimate), the six-hours-a-day statistic then implies that he spends 100% of his free time watching TV. Also, in order for six hours to be the average, some people would have to watch even more than that – more TV, that is, than they have free hours in which to watch it. So, what does the six-hours-a-day statistic actually mean? For Wallace, its only significance is that it’s a big number. But if we consider the actual circumstances required for it to be true, we come to a very different conclusion: most TV watching occurs in the background.
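To spell out the back-of-the-envelope arithmetic (the 8/8/2 split is the rough estimate assumed above, not a census figure):

\[
24 - (8_{\text{work}} + 8_{\text{sleep}} + 2_{\text{everything else}}) = 6\ \text{free hours}
\quad\Longrightarrow\quad
6\ \text{hours of TV} = 100\%\ \text{of free time},
\]

and anyone above the six-hour average would need more free hours than the day actually leaves them.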

This is fatal to everything that Wallace goes on to argue. If people are mostly watching TV in the background, then they are precisely not obsessively analyzing it and drawing deep philosophical conclusions in the way that Wallace needs them to be in order for his analysis to be applicable. The person who is doing that is, of course, Wallace himself. This is what I mean about smuggling in assumptions. Wallace is trying to create a “general” description of the TV-watching experience so that his argument can apply to everyone. But of course, there is no “everyone.” Each person has their own circumstances and personality, and as such, will interpret the same content in a different way; this is not the kind of thing that can be generalized. I’m sure that, for Wallace, the experience of watching TV was a deep source of existential anxiety in precisely the way he describes. But Wallace has no justification for projecting his own experiences onto everyone else.

In short, the content of TV or any other medium cannot cause the kind of broad social pandemic Wallace is attempting to diagnose here, because everyone will have their own idiosyncratic reaction to it. There is, of course, something that can cause broad social effects: structural conditions, which do affect everyone in the same way. This is what it means to say of TV that “the medium is the message.”

To understand this, let’s consider the current televisual situation and what it means for Wallace’s arguments. In this regard, “E Unibus Pluram” is seriously dated; it was written in 1990, and time has staggered on quite a bit since then. But this is actually useful to our analysis: because both the content and the situation of TV are a bit different now than they were in the early 90s, anything that’s the same between then and now cannot be explained by TV in the way that Wallace argues.

The structural change is that the rise of on-demand TV (first via DVDs, now via streaming) means that it is no longer “background viewing.” Quite the contrary, the current M.O. of the TV audience is “binge-watching,” which you can tell is a new thing because we had to make up a term for it. The result is that the TV experience is now more analytical and fannish, rather than merely fodder for small talk. The evidence for this is quite apparent: first, geek culture, using the definition that a “geek” is someone who’s a little too into a niche cultural product, is now mainstream. The current run of superhero movies, for example, is starting to rely on its audience having the sort of obsessive in-knowledge that used to be the province of only the geekiest subcultures. Second, there’s about a billion thinkpieces all over the internet analyzing any new TV show that achieves any kind of popularity.

And third, the content itself has changed to better fit this new reality. Rather than sitcoms with recognizable premises and easy-to-get jokes, long-form dramas with complicated plots are now the order of the day, precisely because viewers can now be counted on not only to be capable of sorting such stories out, but to want to. This also means that TV shows tend to be less “zany” and more substantive; the sort of self-reflexive irony that Wallace opposed is no longer in vogue, precisely because TV shows now need to engage viewers for a long commitment and get them to talk the shows up to others, rather than merely flatter them with a cheap sense of recognition.

Furthermore, TV’s new angle is merely one aspect of a broader cultural shift away from the “ironic” 90s. Fannishness and hyper-engagement are the new normal, not just for TV but in general. The proliferation of spammy listicles and hyperbolic headlines demonstrates that the internet is replete with an aggressively (which is to say intentionally) naive sincerity. Our disdain is now reserved not for earnestness and candor but for hot takes and “negative” criticism. A lot of this is of course due to the structure of the internet itself, which allows people to coalesce around niche interests and choose not to read things that make them feel bad (or challenged, or like they might be wrong about something important). But in a sense it’s also just a mere trend, just like the whole “dark and edgy” thing in the 90s was a mere trend.

But isn’t this exactly what Wallace predicted, that the next trend in media would be one against irony? It would be – if Wallace were talking about trends. But he’s not, he’s attempting to diagnose a pandemic: the modern trap of informed meaninglessness. Thus, the question to ask is: now that the alleged virus is gone, what of the patient? Do we live in a just society that provides everyone with the opportunity to find meaning in their own lives? Are we no longer haunted by a vague sense of anxiety and guilt over our position in the world? Is image now less important than substance?

Indeed, the fact that Wallace’s analysis is still popular and people are still looking to him for guidance answers these questions all by itself. Irony was never the problem, and Wallace’s argument can, fittingly, be reduced to a cliche: he mistook the symptoms for the disease.


It’s TV’s Fault Why I Am This Way

To understand how Wallace got this wrong, let’s take a closer look at some of the examples he uses. By identifying the errors in his specific arguments, we can move toward a correction of his overall approach.

One of the central arguments in the essay is Wallace’s analysis of a Pepsi commercial. This is kind of an own goal all by itself, but I’m going to go ahead and take it seriously. The commercial is your basic “crowd of attractive young people having fun” type of deal, which Wallace analyzes as follows:

“There’s about as much ‘choice’ at work in this commercial as there was in Pavlov’s bell-kennel. The use of the word ‘choice’ here is a dark joke. In fact the whole 30-second spot is tongue-in-cheek, ironic, self-mocking . . . In contrast to a blatant Buy This Thing, the Pepsi commercial pitches parody. The ad is utterly up-front about what TV ads are popularly despised for doing, viz. using primal, flim-flam appeals to sell sugary crud to people whose identity is nothing but mass consumption. This ad manages simultaneously to make fun of itself, Pepsi, advertising, advertisers, and the great U.S. watching consuming crowd.”

The Pavlov reference is apt, if obvious, but how does this make the commercial ironic? If it’s “utterly up-front” about what it’s doing, doesn’t that make it entirely sincere? Given that all advertisements necessarily have the same purpose, isn’t a “parody” of an ad actually just another ad? Certainly, one can discern a contradiction between the preaching of “choice” and the fundamentally coercive nature of advertising, but is the word “choice” doing any actual work here other than supplying a vaguely positive connotation? Is this supposed contradiction actually relevant to what the ad is doing?

In fact, when we’re talking about commercials, we should be looking at the most superficial interpretation, because that’s the one that the vast majority of people are going to pick up on. In this case, there’s a bunch of attractive young people having fun and drinking Pepsi, so the message is pretty obviously that Pepsi equals fun party times. No analysis required.

And this is exactly how commercials actually work. The point of a commercial is very much not to act as some sort of intellectual Rubik’s Cube; the point is to throw a brand name at you along with some positive images to create an association in your mind between the brand and whatever the images connote (fun, adventure, sex, whatever; usually sex), such that the next time you see the brand in a store you’re subconsciously more inclined to buy it. This is why commercials are such a rich source of social stereotypes: they can’t afford to portray anything that isn’t instantly recognizable.

And even if you do accept an ironic reading of a commercial, it’s ultimately beside the point, because the functional purpose of a commercial is to move product. Nobody really cares what you think about it. Companies aren’t spending billions of dollars a year producing these stupid things as some kind of grad school art project. They’re doing it because it works.

Consider what Wallace claims is a typical reaction to this commercial (he does this by once again invoking “Joe Briefcase” from above – keep this guy in mind, because he’s going to turn out to be pretty important):

“The commercial invites Joe to ‘see through’ the manipulation the beach’s horde is rabidly buying. The commercial invites a complicity between its own witty irony and veteran viewer Joe’s cynical, nobody’s-fool appreciation of that irony. It invites Joe into an in-joke the Audience is the butt of. It congratulates Joe Briefcase, in other words, on transcending the very crowd that defines him. And entire crowds of Joe B.’s responded: the ad boosted Pepsi’s market share through three sales quarters.”

But that last line is a non-sequitur: how do we know that this analysis is why the commercial was effective (also, it’s one data point out of about a billion, but let’s stay focused)? If Wallace’s reading is correct, if you actually saw the commercial and then felt this way, why on Earth would this convince you to go out and buy a Pepsi? Wouldn’t you simply pat yourself on the back for getting the joke, and then continue to express your superiority by not buying the product that you’re supposedly laughing at? On the contrary, if the ad was successful, it can only be because it operated in the usual way: by creating a subconscious positive association in the viewer’s mind. This is why it doesn’t matter what your intellectual analysis of a commercial is: because commercials operate below the level of conscious analysis. Watching a commercial automatically creates an association in your mind, and that’s it.

So that’s the first half of the problem: any one line of intellectual analysis can only get you so far; sometimes it’s just not applicable. The second half is that, as mentioned, irony is a lot more versatile than Wallace makes it out to be.

Wallace’s claim is that TV’s approach functions as an “ironic permission slip” for harmful behavior. That is, it criticizes from a safe distance, allowing the viewer to accept the criticism of their own behavior without actually feeling the need to change it. Since the issue has been addressed but the viewer hasn’t really been challenged with anything, they’re free to resume their old habits, only now they’re able to pretend that they have a real justification for doing so.

You may recall that this is precisely the accusation I’ve made against Wallace’s work: that an essay such as “Consider the Lobster” gives the impression of having addressed an important issue but ultimately allows the reader to accept the argument and then keep doing whatever they want. Given that Wallace was not being ironic, it’s clear that irony itself is not the relevant distinction; just as easily as one can be glibly ironic, one can be glibly direct. We can complete the argument by noting that the inverse is also true: a truly challenging argument can be made directly, or it can be made by using irony. Wallace’s own examples of “ironic” television prove this point.

Here’s Wallace’s list of examples of TV’s patriarchal authority figures, meant to illustrate a decline into ironic shallowness:

“Compare television’s treatment of earnest authority figures on pre-ironic shows – The FBI’s Erskine, Star Trek’s Kirk, Beaver’s Ward, The Partridge Family’s Shirley, Hawaii Five-0’s McGarrett – to TV’s depiction of Al Bundy on Married . . . with Children, Mr. Owens on Mr. Belvedere, Homer on The Simpsons, Daniels and Hunter on Hill Street Blues, Jason Seaver on Growing Pains, Dr. Craig on St. Elsewhere.”

Well, that certainly is a lot of names. So Wallace must be right, right? Unless, you know, these examples actually aren’t all the same thing.

Let’s compare two contemporaries: Al Bundy and Homer Simpson. The point of Al’s character is precisely that he’s a terrible person, and we therefore enjoy pointing at him and laughing. We may even feel a smug sense of superiority, knowing that, however much we may suck sometimes, at least we’re better than this asshole.

This is not at all how we feel about Homer. The fundamental difference is that we’re rooting for Homer, even as we recognize that his problems are often his own stupid fault. Indeed, Homer’s foolishness is presented in such a way that we actually identify with it; the fact that Homer’s annoyed grunt has become a universal expression of self-directed frustration demonstrates our collective recognition that there’s a little Homer in all of us.

Furthermore, using Homer as an example of a subverted authority figure is well off the mark, because Homer is rarely presented in this context. We most often see him being kicked around by the uncaring forces of the broader world, in which he is merely one more fork-and-spoon operator in Sector 7-G. In fact, the Simpson family unit is actually the one place where the show’s usually unsparing satire balks. Not only does the family always stick together, but it is specifically as a father that Homer is able to achieve the few victories available to him in life. The ironic angle of The Simpsons does not prevent us from caring, as Wallace would have it. It’s precisely the opposite: by portraying a recognizably broken world and showing us the kind of moral victories that can be realistically achieved in such a world, The Simpsons makes caring a plausible option.

Next, here’s a list of examples of, uh, something related to “postmodern irony,” I guess:

“The commercials for Alf’s Boston debut in a syndicated package feature the fat, cynical, gloriously decadent puppet (so much like Snoopy, like Garfield, like Bart, like Butt-Head) advising me to ‘Eat a whole lot of food and stare at the TV.’”

Seriously, I have no idea what all these characters are supposed to have in common. It’s times like this that the accusation of “name-dropping” is more than just an easy diss; it’s nice that you can think of a bunch of supposed examples of whatever it is you’re talking about, but guess what, you still need to actually support your argument.

Anyway, this is wrong, again. The counterexample, obviously, is Bart Simpson. Bart’s mischief is not an expression of decadent nihilism, it’s his attempt to be a person in a society that is trying to turn him into a robot. The fact that The Simpsons “ironically” validates Bart’s bad behavior (to an extent; there are also counterexamples such as “Bart vs. Thanksgiving,” where the show clearly intends us to understand that he’s gone over the line) isn’t supposed to make us feel comfortable with it; it’s actually an incisive criticism of the society that has produced him. Contra Wallace, the purpose of the parody is not to allow the audience to laugh at the situation from a safe, comfortable distance. The purpose is to make the bizarre world of Springfield seem all too real.

So hey, did you notice that Wallace used The Simpsons as an example in both lists, and that in both cases it was wrong in the same way, and that if we consider this example more comprehensively it completely undoes his argument? You did, right?



Teacher, Mother, Secret Lover

The Simpsons is a fundamentally ironic show. The setup is actually a specific parody of the stereotypical 80s family sitcom, though this is somewhat difficult to understand from a modern standpoint, as said sitcoms have largely ended up in the dustbin of history. The point is, the setting and characters are basically all explicit stereotypes, and, per Wallace’s argument, we as the audience are expected to understand this “ironically,” that is, to recognize the stereotypes and see through them. Wallace’s claim is that the function of this sort of irony is to merely criticize without committing to a real position, such that anything the show actually tries to say can be dismissed as just being “part of the joke.” But this is deeply incorrect in both ways: first, the function of irony on The Simpsons is not merely to criticize; second, and more importantly, the show’s irony does not prevent it from making sincere statements.

While I could probably make this argument by just picking random episodes, let’s try to identify some of the more provocative examples. “Itchy & Scratchy & Marge” is a good fit, since it’s one of the classic “ironic take on social issues” episodes. Marge fulfills the role of the stereotypical “concerned housewife” when she organizes a boycott of The Itchy & Scratchy Show due to its violent influence on children. This episode lambasts both the priggish moralists of the censorship campaign and the hack corporate cartoonists who just want to be left alone to churn out their mindless program in peace. Wallace’s claim is that this sort of setup allows us to laugh at the issue from a safe distance without actually engaging with it. But this is not so: the purpose of the episode’s ironic tone is precisely to engage with the issue in a deeper way than by merely taking one side of it.

Consider the scene where Roger Meyers, Jr., the cynical, cigar-chomping executive behind Itchy & Scratchy, argues that cartoon violence can’t be a real problem because violence already existed before cartoons were invented. We’re meant to read this argument ironically: to recognize both that it has a basic validity and that it’s fundamentally fallacious, and also to understand why Meyers is making it. This ironic angle actually draws us in to the argument; we think: “well, of course TV didn’t invent violence, but that doesn’t mean it has no effect on anything.” Furthermore, it’s clear that Meyers doesn’t actually care and is just making the easiest, most self-serving argument he can come up with. Meyers is the villain here, and not in an “ironic” way: he’s actually a bad person for not caring about the social effects of his program. This directly challenges the prejudices of the viewers, who are naturally expected to be receptive to the anti-censorship argument.

On the other side, consider how Marge’s protest ultimately fails because she’s unwilling to go along with her histrionic supporters in boycotting Michelangelo’s David. The relationship is supposedly that these are both issues about “freedom of expression,” but we can see that this is absurd. Marge has a specific grievance against Itchy & Scratchy; she started the protest because the show actually caused Maggie to attack Homer with a mallet. That’s the actual issue, and the fact that the episode uses irony to deconstruct the standard protest narrative without ignoring the human aspect at the heart of it shows us that there’s a better way to handle the issue than to engage in tired arguments about “censorship.” The point of the irony is that the framework we use to talk about this issue is flawed. This episode is not mere criticism; it encourages us to look beyond the usual rhetoric and focus on the things that actually matter.

The Simpsons is also full of entirely sincere moments, one of the deeper ones occurring in “Bart Sells His Soul.” The episode begins with a fairly standard ironic take on religion, mocking both Reverend Lovejoy’s cynical approach to pastoring and Milhouse’s naive acceptance of all manner of pseudo-religious nonsense. Bart takes the expected position of the viewer: totally unmoved and entirely willing to give up what little remains of his spirituality for $5. But it’s Bart’s position that the episode goes on to attack; while nothing that happens is dramatic enough to really disprove Bart’s argument, the little details of his situation add up to a deep feeling of wrongness. We come to feel that, while Bart’s position may be a smart one to take, it’s ultimately not a wise one.

By the end of the episode, the ironic angle is totally gone, and the show, through Lisa, ends up making a straightforward philosophical argument:

“But you know, Bart, some philosophers believe that nobody is born with a soul, that you have to earn one, through suffering, and thought, and prayer, like you did last night.”

But this statement is only meaningful in the context of the episode’s previous disdain for religion as it is normally conceived. It is precisely through this criticism that the show is able to suggest to us that there may actually be something there, deeper than where we normally look (certain modern atheists could perhaps learn something here). Furthermore, while Bart appears to ignore Lisa’s philosophizing, he does so while eating the piece of paper symbolizing his soul, implying that, even without accepting the explicit argument, Bart has internalized something significant through his experience.

Finally, let’s consider a counterexample. “The Principal and the Pauper” is precisely the kind of thing Wallace is complaining about: an episode whose self-referential irony prevents it from saying anything about anything other than itself. I actually have more sympathy for this episode than most people; I recognize what it’s trying to do, and I can understand why someone writing for Season 9 of The Simpsons would be interested in making an episode like this. But the fact remains that it’s fundamentally hollow, and compared to the show’s prior greatness, it’s no surprise that this episode came as a bitter disappointment.

And that’s precisely the point: this episode is universally reviled. Not only is meaningless self-referential irony not taken for granted, it isn’t even expected. This episode is an outlier in terms of up-its-own-ass-ness (or at least it was, back when the show was worth talking about). And people responded to it in exactly the opposite way from what Wallace is claiming: they didn’t pat themselves on the back for being in on the joke, they were fucking pissed. The vehemence of the reaction was enough that the writer, future Futurama stalwart Ken Keeler, used the episode’s DVD commentary as an opportunity to try to explain himself.

And while The Simpsons is unique in many ways, it’s far from the only counterexample to Wallace’s argument. Along the same lines as “The Principal and the Pauper,” Family Guy is widely hated for using shallow irony to avoid being meaningful in any way whatsoever. Shows that used irony constructively include The Daily Show and The Colbert Report, which increased political engagement and helped make liberalism cool again (how effective this was at actually changing anything is a separate issue). Consider also the environmental episodes of Futurama, which use satire to make the vital point that there’s no such thing as short-term environmentalism. Shows that are enjoyed specifically for their lack of irony include Last Week Tonight, which is popular due to the fact that it actually explains political issues, and Friends, a still-beloved show which surely ranks among humanity’s most painfully earnest creations.

[update: when I wrote this I actually had no idea how popular Friends still was. Turns out it’s like uber whoa. Where’s all that irony when you need it?]

Furthermore, ironic self-reference is not confined to TV, and it’s hardly a modern invention anyway; stories about stories are as old as, like, stories. Examples can be found even in the work of a writer whom Wallace upholds as a shining example of how to address serious issues in fiction: Fyodor Dostoevsky.

The Brothers Karamazov begins with a “From the Author” note, which is in fact not from the author, but from the narrator of the story. This is interesting because the narrator is not actually a character in the story. He sometimes refers to himself as “I” and references his own location and observations, but we never actually meet the man. At other times, his voice completely dissolves and the book defaults to standard third-person omniscient narration; many sections are about the private thoughts of the characters when they are alone. Yet when the narrator’s voice does emerge, we see that he has his own quirks and is in fact a bit of an overwriter – the reader (that is, the reader of late-1800s Russia) is expected to notice this and to understand it as a deliberate choice on Dostoevsky’s part, rather than as bad writing. Furthermore, the opening note itself actually expresses an opinion on the story and suggests an interpretation, one that the narrator himself admits we might not agree with.

In this way, the line between fiction and reality – between the narrator, the characters, and the actual author of the actual book – becomes blurred. This is exactly the kind of thing that people would describe as “postmodern irony” if someone like David Foster Wallace were doing it (Wallace’s use of narrative voice is actually quite straightforward by comparison). So, doesn’t Wallace’s critique also apply to Dostoevsky? Isn’t the function of devices like these to distance us from the story, to let us experience it from a safe remove without actually grappling with its ideas? As another example, Dostoevsky has a habit of mocking his real-life political opponents by placing their arguments in the mouths of his more ridiculous characters; doesn’t this allow us to merely laugh at these ideas rather than engaging with them?

This line of thinking is obviously silly, because Dostoevsky is a Serious Writer whom we are required to Take Seriously. And we are correct to do so; when we’re talking about someone like Dostoevsky, we understand quite well that his artifice is a crucial part of how he achieves his intended (or unintended) effects. Yet when we’re talking about TV shows, the name of the game is to find a way to dismiss any possible importance in the actual content as quickly as possible. In this way, we can see that by ascribing an improbable amount of influence to TV, Wallace is in fact not taking it seriously; wrapping up the whole enterprise in a box labelled “irony” is a way to avoid engaging with the many things that are actually going on (The Simpsons being merely one of them, even in the 1990s). Wallace implicitly assumes that, unlike real art, TV is just a thing, and is therefore susceptible to a simple explanation of its one ideology and the one effect that ideology has on society. This becomes even clearer when we realize that Wallace’s argument treats TV shows and advertising in the same way, as though they were the same thing just because they’re located in the same place. This is as foolish as trying to come up with one single thing that all of “Russian literature” means and explaining all of Russian society on that basis.

Finally, Wallace’s argument that we’re all trapped in Ironic Purgatory is actually self-refuting. If we were, how could any of us understand what Wallace was saying? On the contrary, the fact that his charge against irony was met with such a fervently positive reception (viz. that fucking commencement speech) proves precisely that we are not all entangled in a morass of ironic self-reference, we are not content to be merely “in on the joke,” and we can quite easily recognize genuine emotion when we encounter it.

The truth is that there is no irony problem, and the reason I spent forever getting here is that this myth just won’t die. Irony is a tool, it has a variety of uses, and the idea of “ending irony” is as nonsensical as it is undesirable. Wallace paints a provocative picture of “postmodern” paranoia, but the truth is he’s tilting at windmills. He’s Don Quixote in reverse: so entranced by the mythology of “postmodern irony” that he is unable to see the basic nobility of the real world.


Reading is Fundamental

Wallace’s excursion into TV land is actually the lead-in to a point about literature. Specifically, Wallace claims that the self-referential irony of TV has spread out to infect avant-garde literary fiction. As an example, he cites Some Book, by Some Guy, which appears to be one big meaningless ironic pastiche of consumer culture, or something. Here’s the thing. I could obviously check what book Wallace is talking about and try to assess his argument, I’ve got the essay right here, but I have no reason to actually care, because I’ve never heard this book or its author referenced in any other context. I’m not the most clued-in when it comes to cutting-edge literary fiction, but when Wallace talks about people like Pynchon or DeLillo, I know what he’s referring to; indeed, Wallace is able to easily explain the influence of these authors in his essay. On the contrary, when he gets around to talking about Leyner’s book (the guy’s name is Leyner), he analyzes it convincingly enough, but he fails to do the one thing that’s required for his argument to actually matter: demonstrate that Leyner is actually representative of, like, anything at all. As it stands he’s just one guy who wrote a goofy book.

If we think about why Wallace chose this example, the mists begin to clear: Wallace is talking about literary fiction because it’s his genre, and he’s worried about the use of irony because that’s the problem that Wallace himself was trying to deal with when it came to his own work. The rationale Wallace gives for talking about Leyner’s book is that it was apparently “the biggest thing for campus hipsters since The Fountainhead” (I don’t understand this joke); all this means is that it was popular in Wallace’s milieu. But the fact that this is a particular concern of Wallace’s is precisely why he does not have license to portray it as evidence of a broad cultural malaise.

So, why is this a problem? Wallace is just talking about an area of his own personal experience, right? He isn’t making a broad argument about American culture, he’s just talking about one particular use of irony and one particular response to it, so I’m totally missing the point here, right? Except not even, because Wallace totally is claiming to make a comprehensive argument that applies to all of TV, all of literature, and therefore all of America. Remember good old Joe Briefcase, and how Wallace presents him as a generic American in order to make a completely general argument, and how this is a huge problem because it allows Wallace to project his own assumptions onto everyone else without justification? Observe how he initially sets the stage:

“Every lonely human I know watches way more than the average U.S. six hours a day. The lonely, like the fictive, love one-way watching. For lonely people are usually lonely not because of hideous deformity or odor or obnoxiousness – in fact there exist today support- and social groups for persons with precisely these attributes. Lonely people tend, rather, to be lonely because they decline to bear the psychic costs of being around other humans. They are allergic to people. People affect them too strongly. Let’s call the average U.S. lonely person Joe Briefcase.”

First of all, Wallace’s argument here is self-contradictory. If “lonely people” are well above the average in terms of TV watching, then the broader population of non-lonely people must be well below the average. But if this is the case, TV should be catering to this broader group of people, on account of there’s way more of them and TV is not a niche interest, meaning all of the conclusions Wallace draws about TV on the basis of what “lonely people” are like are wrong.

But the real significance of this quote is that people who “decline to bear the psychic costs of being around other humans” are not necessarily “lonely” – they are specifically introverts, and, as painful as it is for me to admit this, most people actually do have the particular mental disorder that allows them to be at ease around other humans. The reason Wallace focuses on introverts here is, of course, that he himself is an introvert. But given that he fails to explain this, he seems not to understand that this is something that makes him different from most people. That is, I’m sure Wallace started from the point of wondering why he felt differently than everyone else seemed to, but he then went on, through projection and overgeneralization, to explain his own problems as problems of the world.

With this in mind, we can see what Wallace is doing as he continues his setup:

“Joe Briefcase fears and loathes the strain of the special self-consciousness which seems to afflict him only when other real human beings are around, staring, their human sense-antennae abristle. Joe B. fears how he might appear, come across, to watchers. He chooses to sit out the enormously stressful game of U.S. appearance poker.”

Note not only the way Wallace is attributing specific characteristics to what is supposed to be a generic example character, but also the evocativeness of this description of “Joe’s” feelings. Isn’t it terribly obvious that Wallace can only be describing the way that he himself feels? (I’ll vouch that this is a pretty decent expression of what moderate to severe introversion feels like.) But why doesn’t he just say so? Why, indeed, does his example person appear to be designed precisely to be as unlike Wallace himself as possible? Consider that Wallace was certainly not the briefcase-carrying, 9-to-5 sort, and consider the earlier description of Joe as the head of a stereotypical nuclear family – also very much unlike Wallace.

The move that Wallace makes here is crucially important: he starts by describing his own feelings, invents an example character to embody those feelings, and then goes on to use this character as a fully generic example of whatever he feels like talking about at the moment. In this way, Wallace fools himself into thinking that his own feelings apply to everyone else, allowing him to draw broad conclusions through mere introspection. And this is not a con job: Wallace is not conscious of the fact that he’s doing this. Examples:

“We are the Audience, megametrically many, though most often we watch alone”

“One reason fiction writers seem creepy in person is that by vocation they really are voyeurs”

“by 1830, de Tocqueville had already diagnosed American culture as peculiarly devoted to easy sensation and mass-marketed entertainment”

“We,” “They,” “American culture.” So yes, Wallace does think that when he talks about TV he is talking about the TV audience in general, when he talks about “Image-Fiction” he is talking about literature in general, and when he talks about culture he is talking about America in general.

And he is, of course, wrong to be doing this. Being stuck in a rut of over-self-consciousness is Wallace’s problem, not TV’s. Being unable to work through modern meta-irony in order to say something meaningful is Wallace’s problem, not fiction’s. And the dearth of meaning in our semantically overcrowded society is . . . everyone’s problem, obviously, but Wallace’s entire explanation of how and why we got here is completely wrong, because the whole time he was only talking about himself.


David Foster Wallace Was Wrong About Everything

This realization recontextualizes the essay quite a bit. In order to correct Wallace’s fundamental error, we must not only avoid his generalizations, we must comprehensively edit his “we” to an “I,” his “U.S. Culture” to “my subculture,” and his “Joe Briefcase” to “David Foster Wallace.” We must understand Wallace’s work not as analytical or investigational or even observational, but as confessional.

The key, finally, is that this applies to everything Wallace wrote. Sometimes this is unproblematic: when Wallace gives his thoughts on Kafka or David Lynch, for example, he’s performing straightforward criticism; we understand that these are his arguments. But Wallace evidences this kind of restraint only rarely. When Wallace celebrates a new usage guide that supposedly resolves the deep political problems of language, it’s because said guide resolves the problem of Wallace’s own relationship with language; he assumes everyone else feels the same way; he’s wrong. When Wallace gets excited about McCain’s candidacy, it’s because McCain is providing what Wallace wants out of politics; he assumes everyone else wants the same thing; he’s wrong.

But it’s not enough to just reinterpret Wallace as a personal writer, because it is the specific move he makes of starting from a disguised version of his own prejudices and sublimating them into an intellectual argument that makes him actually wrong. Because he starts on unsteady footing, he stumbles with each step. The best place to see how this works is in “Authority and American Usage,” as this is both Wallace’s most comprehensive and most personal argument.

When Wallace finally gets around to addressing the actual “Descriptivist” linguistic argument – that languages have a set of real rules about how they actually function and several sets of fake rules that people make up for various reasons – this is how he begins his counterargument:

“A listener can usually figure out what I mean when I misuse infer for imply or say indicate for say, too. But many of these solecisms – or even just clunky redundancies like “The door was rectangular in shape” – require at least a couple extra nanoseconds of cognitive effort, a kind of rapid sift-and-discard process, before the recipient gets it.”

This is literally an unbelievably weak argument. Wallace actually has to say “nanoseconds,” because if he had phrased this in the usual way and said “seconds,” he would be making a stronger claim – one that is obviously wrong. But by softening his claim, he makes the argument worthless, because a) we can’t possibly determine the average cognitive “work” of an utterance down to the nanosecond and b) if it actually is just a nanosecond, then it’s not worth the effort to correct it. Ergo, since this argument is not plausible, it must not be Wallace’s actual argument.

But of course, Wallace is not making a linguistic argument at all; he is merely expressing himself, which is to say venting his own prejudices. The feeling that Wallace identifies as “extra work” is in fact nothing but his own feeling of irritation. This insight explains everything that is so odd about that essay. It explains why Wallace doesn’t properly engage with the work of professional linguists, the people who actually study the things he’s talking about. It explains why he meanders so broadly through so many different perspectives and ideas without properly connecting them to his main point. It explains why he chides others for making shallow, self-serving arguments and then makes even shallower, more self-serving arguments himself. And it explains why he gets an issue that he’s so passionate about so fundamentally wrong.

And so, when Wallace laments the “dead end” of irony, he’s merely addressing his own limits as a thinker. Consider his conclusion:

“The next real literary ‘rebels’ in this country might well emerge as some weird bunch of anti-rebels, born oglers who dare somehow to back away from ironic watching, who have the childish gall actually to endorse and instantiate single-entendre principles. . . . The new rebels might be artists willing to risk the yawn, the rolled eyes, the cool smile, the nudged ribs, the parody of gifted ironists, the ‘Oh how banal’”

Think about what Wallace is actually saying here. He’s seriously claiming that anyone making a sincere argument is necessarily subject to ridicule and eyerolling. He has to be, because otherwise he has no argument. If this is just something that happens sometimes, then the “problem” is just that some people are jerks. But the idea that there are literally no sincere statements anymore, that ironic parody is just so devastating that no one’s willing to “risk” making them, is ridiculous. Again, what’s happening here is that Wallace is making a broad cultural pattern out of his own anxieties. It is Wallace who is constantly afraid that someone will laugh at him if he comes across as too sincere. Most people do not have this problem.

Indeed, the idea that irony has a vastly increased prevalence in modern times is itself highly debatable in the first place; given that the concept dates back at least to Socrates, I’m pretty sure people have had a handle on it for a while now. Frankly, the whole thing about Vietnam/Watergate/whatever being some kind of crucial turning point for cultural values is basically one big Kids These Days rant. Which, temporally speaking, you’d think would be over by now, but it seems to have stuck for some reason. The fact is that the overwhelming majority of Americans are still conventionally patriotic, even those who are “cynical” about politics, and the number of us who actually want to fundamentally change the structure of society, as opposed to “reforming” it to curb its “excesses,” is statistically insignificant.

Second, even if we assume that we are “more ironic” now, this really means that we’re better at communicating. We can understand complex arguments at various levels of remove. We are less easily fooled by the stated beliefs and motivations of deceivers. And, of course, we can use irony ourselves in order to say things in more effective ways than merely blurting them out and hoping for the best. By realizing that irony is a tool rather than an ideology, we can actually use it to express our sincere feelings more effectively.

Thus, Wallace’s core point, that we’re all lost in the labyrinth of irony and we need to find our way back into the daylight of sincerity, is ultimately an expression of his own discomfort with the conclusions we’ve drawn from the events of the 20th century – and with the realization of where we need to head next.

(Part 4)