Generica

This article makes the claim that a certain type of combat-free environment-based platformer constitutes a “new subgenre” of games. This is wrong for every possible reason, and we’re going to go through all of them.

First, the claim to novelty is straightforwardly false. There are lots of games that are focused on navigating an environment and avoiding dangers rather than fighting (even Mario games are mostly like this), and indirect storytelling is actually an overused and obnoxious fad right now. Bizarrely, the author lists a bunch of counterexamples only to dismiss them for basically no reason, despite the fact that they really are counterexamples. Abe’s Oddysee is precisely a game in which you have no combat abilities and have to negotiate threats through guile and evasion, and in which the story is told through environmental setpieces. Furthermore, the presence of combat or text or narration doesn’t change the general type of thing that a game is. Combat in games is just a metaphor, after all. Pressing A at the right time might represent either attacking an enemy or jumping away from a threat, but it’s still the same action. The point of analysis is not to pick as many nits as you possibly can; it’s to figure out what’s going on underneath the individual points of interest.

Second, it’s a category error. These things aren’t genre boundaries; they’re aesthetic effects. You can use them in any type of game. RPGs, for example, are most often overwrought power fantasies glued together with piles of text, but even here there’s nothing really complicated about inserting these types of effects. Final Fantasy II opens with you getting your ass kicked by a bunch of imperial soldiers who are a billion times stronger than you. After that, your first mission is to infiltrate an occupied town in order to make contact with an informant. The same imperial soldiers are patrolling the town, and encountering any of them is basically instant death. This both portrays the nature of the situation you’re in through properties of the environment and establishes your characters as being radically underpowered compared to their adversaries. Mother 3 has a chapter where you play as an enslaved monkey (literally; that’s not a joke or anything), who is similarly extremely weak compared to the enemies in the area. When you get into a battle, you have to rely on your slaver to do most of the work for you, which deepens the relationship dynamic that defines this part of the game. Also, the first town in the game starts out as an anarchist utopia. The thing that would normally be an item shop is actually just a communal storehouse where you can take whatever you need. Later, the town becomes commercialized, and the same building becomes an actual store where you have to pay for things with currency. This portrays the development of the town’s social situation through the state of the game environment. Of course, none of these are exactly the same thing as what’s being talked about in the article, but, like, no shit. Nothing’s exactly the same thing as anything. Again, the whole point of analysis is to draw connections between things which aren’t identical but which have similarities that can provide aid in understanding them. 
Categorizing everything into a precise set of neat little boxes might make you feel smart, but it doesn’t actually do anything for anyone.

Third, it’s irrelevant. What actually changes depending on whether the answer to the question “is this a new genre” is “yes” or “no”? Nothing. The games themselves stay the same regardless of how you categorize them. Identifying what the game is doing is useful for understanding it, but once you’ve done that, slapping a label onto an arbitrary collection of “similar” games doesn’t get you anything new. You’ve already explained the part that matters.

Fourth, it’s a misunderstanding of what the concept of “genre” actually is. Genre is not a property of a work; it’s an organizational category, and you can organize things however you want. Super Metroid, for example, is a shooter, and it’s also a platformer, and it’s an open-world exploration game, and it has an upgrade system, and it uses non-explicit storytelling (except for the intro text). All of these things are properties of the game, but none of them, individually or in combination, require you to affix any particular label to it. Depending on what you’re talking about, you can refer to any or all of these effects. In fact, what actually happened, historically, is that a rough approximation of this particular combination of traits ended up acquiring the label “Metroidvania,” which, in addition to being the absolute stupidest name anyone has ever come up with for anything, just goes to show that genre categorization is completely arbitrary and you can make up a new one basically whenever you want. So, actually, the article isn’t “wrong” in the usual sense, but rather simply useless. You’re always free to pick any three similar-ish games and call them a genre, which is exactly why doing that doesn’t do anything. But the fact that genre don’t real doesn’t mean that it has no utility as a critical concept. There’s no point in using individual effects to establish a genre, but you can use genre to understand what a game is doing through its individual effects. People understand the type of thing that a platformer or a fighting game or a rhythm game is; they know what you’re talking about as soon as you bring up the term, so these understandings can provide starting points that ground further analysis. This is why trying to establish a “new genre” is pointless: genre is only useful to the extent that people already understand it.
Of course, new genre concepts do have to be established and named at some point, but this happens as a reflection of widespread understanding. “Metroidvania” became a known term once people started to gain an intuitive understanding of what type of game that was; indeed, the term itself reflects the fact that two seemingly dissimilar games “felt” the same to a lot of people. But for games that are obviously the same type of thing in the first place, there’s no point in pointing that out. You’re not telling anyone anything they don’t already know.

Fifth, the assumption that these traits are necessarily good things is unjustified. The author equivocates between the argument that this particular set of effects constitutes a genre and the argument that they’re a good thing, and in doing so avoids actually making the second argument, which is the one that matters. While video games do have a violence problem, it doesn’t follow that any game where you don’t kill things is solving that problem. The violence problem is deeper than just “killing things is bad”; it’s a problem because, to the extent that games are about interacting with an environment, violence-based interaction is a) necessarily adversarial: the environment can only hurt you, and you can’t interact with it productively, and b) reductively physical: the environment consists only of objects to be manipulated or destroyed, and contains no subjects to engage. A game where you can’t fight but you have to run through an area avoiding obstacles maintains both of these problems. The world is still just an enemy and not a place that you can actually exist in. Furthermore, setpiece-based storytelling also has the same problem: it portrays the world as just an object. Of course, this doesn’t mean these techniques are necessarily bad, either. Physical and adversarial actions are parts of the real world, and they’re worth portraying. Even explicit violence isn’t necessarily bad; you can make a game with violent acts that are actually meaningful. The real problem, rather, is the assumption that games have to do one type of thing: that you have to have combat, because otherwise it’s “not a game” – or that you have to have explicit non-violence, because otherwise it’s “not art.” The truth is that getting good requires being able to do different things as appropriate. 
In the previously mentioned Mother 3 example, you see the town transform as it becomes commercialized, but you can also talk to the residents in order to hear how they’re dealing with it and how they feel about it. Using both types of effects allows the game to communicate the situation more effectively. Picking out any individual effect, even something like non-violence that seems inherently laudable, and labeling it as a “good thing” is a reductive method of analysis that retards artistic development.

Sixth, the whole thing is a transparent ploy to smuggle in an unexamined assumption about what games are “supposed” to be like. Genre categorization is not an evaluation of quality, but the author is using genre as a means to argue that the games in question are good. Something being a platformer doesn’t make it a good or bad thing. Even if you really like platformers, there exist both good and bad games that meet the genre criteria. So you can establish a new genre, if you really want to, but you can’t establish a “good genre,” because there’s no such thing. Historically, though, there’s been a rather strong tendency in game criticism to assume that the “correct” way to do games is to put everything “in the gameplay.” (Anytime you see a derivative of the word “ludic” you’re almost certainly walking into this minefield.) This viewpoint is no more justified than the idea that movies should be strictly focused on cinematography or that lyrics aren’t a valid component of songwriting. In terms of storytelling, there are a lot of people operating under the assumption that the goal is to make storytelling in games as “interactive” as possible. There’s nothing wrong with doing work in this area, but when you assume that this is the only thing that can be done, you basically box yourself into a corner. The usual way the complaint is phrased is that a cutscene or anything else that the player doesn’t “choose” is “non-interactive” and therefore “takes you out of the action” because it’s not “gameplay.” This is an unquestioned assumption that being “in the action” is always a good thing. Actually, it’s not even an accurate description; walking through a setpiece is not being “in the action,” it’s just an unclear cutscene where you have to hold right while you’re watching it. Selecting an option from a menu in order to trigger one of two endings is also not meaningfully interactive; it’s just lazy writing.
And that’s exactly how you can tell that there’s no real line here: you can have empty interaction, where the player is pressing a button but is not actually engaged in what they’re doing in a meaningful way, or you can have something nominally non-interactive like a cutscene where the player is actually paying attention and thinking about what’s going on. What makes one choice better than another isn’t how it’s coded or how you can categorize it or what it “counts as,” but what it does. So if you insist that games are only one narrow type of thing, you will end up with games that barely do anything. One of the examples in the article is a game that portrays a Soviet-style dystopia using things like staged propaganda shots. The author is really impressed by this for some reason, even though it’s basically the least impressive thing possible. The Decaying Soviet Dystopia is a cliche; as soon as I say that you already know exactly what I’m talking about. The thing about portraying cliches is that it’s easy; you barely have to do anything other than invoke the name (that’s pretty much the definition of a cliche). So the fact that a game uses a certain technique to do literally the easiest possible thing is actually an indication that that technique is nearly useless, and the fact that this appears impressive to someone is an indication that what’s actually going on here is fetishization.

Seventh, this is all really beside the point, because this is ultimately just another by-the-numbers entry in the musty old “art” debate that yet clings to game criticism like a damp fog. The post is tagged “games as art,” which doesn’t seem to make any sense. How is categorization at all relevant to artistic merit? There are a billion different types of nails, but categorizing a new “subgenre” of nail doesn’t make nails art. Recall, though, what’s actually going on here: we’re talking about a particular set of effects that are assumed to be good things and are furthermore considered the things that games are supposed to be like, and this is what’s being called “art.” So the implicit claim here is that if you do games “right,” so that they’re “interactive” enough, then they’ll be “good enough” to “count” as art. The concept of art is being used as a standard of quality. Hence, the focus is on taking the unique aspect of games – interactivity (although not really, we’ll talk about this later) – and making it as “good” as possible, so that it reaches the level of qualifying as art. But this whole conceptualization is completely erroneous, as evidenced simply by the fact that there exists bad art. Indeed, that is exactly the situation that games are really in right now: the fact that most games are thoughtless kill-em-ups or obtuse number-crunchers or, yes, pretentious pseudo-interactive badly-written short stories doesn’t mean that they aren’t expressing things. It means they’re expressing things badly. Games are bad art, and arguing the “art” part only serves to emphasize the “bad” part. And of course there’s no point in arguing it in the first place, because you can’t convince people that games are giving them significant experiences. They’re either feeling it or they’re not. In fact, that’s the silver lining here: the reason people are talking about these things is that they feel intuitively that games are capable of real expression.
The insistence that “games are art” is how people convince themselves that what they’re feeling is real and valid. But the need to use that label is born out of defensiveness, and it doesn’t actually help to make anything any better. That’s why the only way to do this is the other way around: to make games that are good enough that nobody feels the need to discuss them “as though” they were art. That is: assume games are art. So what? There’s no point in answering the first question if you can’t answer the second one. There probably was a time when it needed to be pointed out that, yes, games are a form of art, but just pointing that out is all that really needs to be done, because if you can’t move forward from there in order to start making claims that actually matter, there was no point in making that point in the first place. What we need right now is more matter with less art.

Eighth, give me a fucking break. This is what passes for criticism these days?

Masquerade

I’m not qualified to comment on the specifics of the Elena Ferrante situation, but there’s a particular aspect of the response that I feel requires some elucidation. Rather than addressing the personal implications for Ferrante herself, or claiming that her identity is simply not relevant to a particular understanding of her work, many people seem to be going quite a bit further. They are claiming that it is wrong for this information to be available, that we ought not know it, and they are making this point in terms of criticism. They are saying that the correct critical posture is to choose to operate with less information. They would rather the information not exist at all; they would rather be lied to.

This broader issue has been in some contention recently. It’s become quotidian to hear that we’re in the middle of a “war on truth,” that the Information Age ability to “choose your own facts” is literally going to destroy the world. We’ve seen, for example, Newt Gingrich claim that it doesn’t matter what the crime statistics actually are, that as long as people feel like there’s a lot of crime, then extreme repressive measures against it are justified. Obviously, politics is a different matter than lit crit, except not really, because there isn’t all that much of a distinction here. The point of facts in policy is to come up with an approach that will affect the world in the way that it’s supposed to, and the point of facts in criticism is to come up with an interpretation that accords with reality in the way that it’s supposed to. This is why we’re not allowed to live in our own individual fantasylands; it’s what the truth is for. So if we really believe that there’s a problem here, then our only available response is to start taking the truth more seriously. This means respecting it even when it is ugly or vulgar or cruel, even when it has inconvenient consequences, and even when it is revealed by bad people with ulterior motives. The truth either matters or it doesn’t.

Now, part of what Ferrante is trying to do with herself is to react against the trend wherein, to grossly oversimplify, the conversation surrounding a work of art tends to supersede the work itself. The most important argument in favor of pseudonymity is that the work itself is what matters, and that the identity of the writer can only be a distraction, or at best the subject of a separate biographical interest. This is a half-truth. Certainly, there are writers whose status as famous writers is more important than anything they actually write, and certainly this is disgusting. The situation where, say, Dave Eggers is like “HEY GUYS I’M WRITING A BOOK ABOUT SOCIAL MEDIA” but then there’s nothing actually in the book is the worst possible situation. So if Ferrante’s work represents the inverse of that, then she’s doing a great job.

But the idea that a work can “speak for itself” in an absolute sense betrays an unjustified dedication to purity. The fear is that knowing the sausage-making details behind the words will rob them of their beauty, break the spell. This is backwards. It’s true that the only possible source of meaning for a text is the text itself, but it’s a fantasy to assume that we can just lift this meaning straight out, that context is a distraction rather than a necessary tool for doing the work of extraction. Hence, the point of identity is not that it can matter, but that it must. If there is beauty to be found, it must be found in the sausage.

And, indeed, this is already being done by the very people who are so concerned about not doing it. Ferrante’s identity and the context in which she was writing were already being taken into account. The term “anonymity” is being thrown around a lot here, but that’s not what’s going on at all. Elena Ferrante has been publishing pseudonymously and not anonymously. This may sound like a pedantic distinction, but there’s a real difference. Pseudonymity allows the reader to draw connections between works of the same author, to trace the development of themes, etc., hence there is such a thing as “an Elena Ferrante novel” regardless of her physical identity. It allows for the development of a persona. It is only under the condition of anonymity that a work speaks purely for itself – and because any work must be encountered in some kind of context (not to mention, it must be understood in terms of an existing language with a specific history), this condition is impossible to meet.

So, because Ferrante was never anonymous, people already knew that she was a woman living in present-day Italy, and this is already a lot of authorial context. If she had instead turned out to be a black man who had lived in Compton in the 1980s, that would pretty obviously have initiated a comprehensive reappraisal of her work, right? And we didn’t actually know before that that wasn’t the case. Perhaps, as things happen, Ferrante’s real identity won’t add anything to people’s estimations of her work. But if this turns out to be the case, it can only be because her readers were already making the correct assumptions – there’s no such thing as not making assumptions. In which case the revelation of her identity is still valuable information, because it confirms that what was being understood about her work was in fact correct. To take the most obvious example, if someone read one of her books and assumed she was female because they thought that a man could never have portrayed women’s social dynamics so accurately, then what that person has now learned is that they were right. They shouldn’t feel attacked; they should feel validated.

Furthermore, Ferrante has given interviews where she has explained herself and her writing process and commented on her stories, which is to say she’s done the exact thing that her supporters insist not be done. She has tied her work to an identity and made it about her. (Similarly, she seems to have deliberately chosen covers for her books in order to produce an intended effect, which is not how that normally works, and which suggests an intended interpretation.) I mean, look at this:

“Where do I start? In my childhood, my adolescence. Some of the poor Neapolitan neighborhoods were crowded, yes, and rowdy. To gather oneself, so to speak, was physically impossible. One learned very early to have the greatest concentration amid the greatest disruption. The idea that every ‘I’ is largely made up of others and by the others wasn’t theoretical; it was a reality. To be alive meant to collide continually with the existence of others and to be collided with, the results being at times good-natured, at others aggressive, then again good-natured. The dead were brought into quarrels; people weren’t content to attack and insult the living—they naturally abused aunts, cousins, grandparents, and great-grandparents who were no longer in the world.”

Definitely a case of a writer whose identity is completely unimportant to her work, right? It’s absurd to say that people “didn’t know” who Ferrante was and that they were therefore able to approach her work “without preconceptions”; rather, they were approaching it with a set of preconceptions that they had already implicitly established. And of course they were, because you can’t not do that, and even if you could not do it there isn’t anything wrong with it in the first place. It doesn’t “taint” the “pure” experience of the work because there’s no such thing as purity in the first place.

So the argument that context diminishes the value of Ferrante’s work is completely untenable, because it was already being placed into a pretty specific context and that wasn’t causing anyone any problems. And this is where the real problem starts, because people are going even further than this, and claiming that the revelation of Ferrante’s identity is an actual attack on the value of her work, that having this additional information somehow delegitimizes it. First of all, if this were actually the case, it could only be because her work wasn’t much good in the first place. Either her work is fragile and dishonest enough that the truth can destroy it, or it is robust and honest enough that the truth can only enhance it. If her work requires an aura of mystery in order to mean anything, then it precisely does not stand on its own by that very fact. So the extreme level of defensiveness on display is deeply unwarranted.

There’s a false dichotomy at work here: either an author’s work is a muddled reflection of their own life and circumstances and that’s all it is, or it’s an entirely abstract pearl of “greatness” that cannot bear contact with the grime of reality. Either a work is entirely defined by its context, or it is entirely defined by its content. Naturally, neither of these is possible. The thing about the term “context” is that there’s always a context, so there’s no such thing as “pure” textual criticism, or indeed “purity” at all. Like, I kind of thought that this was the whole point of the feminist argument against Great Male Author Syndrome, so I’m somewhat confused to see feminists arguing that the only proper way to appreciate Ferrante is to revere her as an abstract Great Author and not to understand her as a person. Surely the point of identity politics is exactly the opposite: to assert that identity must always be accounted for, that the Platonic ideal of the Great Author is a false concept, that universality does not arise despite particularity but rather emerges from it. Surely feminists are capable of working through the complications of identity rather than ignoring them.

And this isn’t really all that complicated; some simple examples should clarify the point. As a young man, Fyodor Dostoevsky was involved in some radical political activity, for which he and his co-conspirators were arrested and sentenced to death. Their sentence was commuted, but they were not informed of this until after they had actually been led out to the prison yard and placed in front of a mock firing squad. So there was a brief period during which Dostoevsky was absolutely certain that he had only minutes left to live. The experience affected him somewhat. There’s a scene in The Idiot where the protagonist, making conversation, describes in detail the final experiences of a condemned criminal. Now, certainly, interpreting this scene as merely Dostoevsky’s description of his own experiences and taking that to “explain” it is the stupid way to go about things. But it’s a far cry from there to viewing the related biographical information as useless. For starters, this information draws our attention to the scene in the first place; it suggests that it should be read as a significant part of what Dostoevsky is trying to convey with the overall novel. But it also colors our interpretation of the words themselves; it doesn’t tell us what they mean, it remains the case that only the words themselves can do that, but they do it in context, and the more information we have, the better equipped we are to establish a truth-apt context.

Because interpretation is never a simple task. The fact that art only comes into existence via the subjective experience of a reader’s engagement with a text does not mean that it is impossible for a reading to be wrong. It is very much the opposite: it is precisely because of this that the vast majority of readings really are wrong in some significant way. There is always something you’ve missed, or something you’ve misinterpreted, or something that you lack the knowledge to place into context. There are always more paths to take and more ways to walk them, and many of these combinations will turn out to be fruitless. Which is to say we always need help. Biographical information is not our only source of aid, but it’s at least better than random; it’s something that’s close to the text. So it often helps a little, and it sometimes helps a lot. The real point is, if you think you can just shut yourself up with a text and stare at it real real hard and have The Truth rise up out of it, you’re fooling yourself. Such monkish leanings are out of place in a complicated and contradictory world. More than that, if the truth is merely a resource to be marshaled into the service of our existing prejudices whenever we need it, then the truth is worthless. It cannot give us anything we do not already have. In order for the truth to matter, it must be able to attack us; as such, it is our responsibility to fail to fully maintain our guard. To care about the truth is, paradoxically, to insist on being wrong.

So a more appropriate example would be one that actually changed my mind about something. I mean, that’s the only reason any of this matters, right? I’m not an interesting person, though, so you’ll have to forgive me for using a very boring story here. Once upon a time, I encountered a writer who was attempting to make sense of the apparent senselessness of modernity. His writing was hyper-intelligent and dizzyingly fast; he took direct aim at many of the things I was concerned about and struck at them with equal parts unrelenting force and honest humility. It was exhilarating, I felt validated, and I felt certain that I had found a source of answers. His name was David Foster Wallace, and I feel somewhat differently about him now.

The first thing that happened was just that I started to wise up about a few things and therefore began to notice some of the more major holes in Wallace’s analyses. I saw what was wrong with his arguments about the “usage wars” by learning literally the absolute basics about linguistics (pro tip: Language Log is a good website), and I came to realize the deep pointlessness of his McCain profile once I started taking politics seriously. But it didn’t seem like he was just a misguided weirdo; it still felt like he had a bead on the truth. So once this happened, I had to figure out what was going on.

And, well, this is a little embarrassing, but . . . the first thing that I came up with was that he was being ironic. We were meant to understand his arguments as misguided and follow through from there to reach the truth of the situation. Yeah, I know. It kind of does seem that way sometimes, though. Like, in the McCain piece, he spends the whole essay lecturing Young People for not caring about politics, and he also spends the whole essay talking about ad campaigns and shit and avoiding any actual political issues himself (indeed, he is explicitly dismissive of people with actual political beliefs when they inconveniently intrude into his narrative), so when he ends abruptly with the laughably condescending statement “try to stay awake,” it’s hard to imagine that he’s being serious. He’s telling people to stay awake regarding precisely the matter on which he was asleep throughout the entire essay. (And even on the level of personality, everyone who’s ever known McCain says he’s a huge fuckhead, so Wallace isn’t even doing optics right.) It’s ridiculous. So it seems like that has to be the point, right? The thing we’re meant to wake up from is the essay itself – we’re meant to understand Wallace’s approach as absurd and reject it in favor of the actual substance of political engagement.

But of course Wallace also argued rather strenuously against this sort of ironic posturing, so then it must be the case that his argument against irony is itself ironic, and . . . yeah, you can see why this doesn’t work at all. Look, I didn’t really think this was a good angle, okay? It’s just that it was the only thing I could come up with. And that’s the point: I got stuck and I couldn’t come up with a real interpretation because I lacked relevant information. I was operating under the assumption that Wallace was a smart guy who knew what he was doing, that he was An Author, and that he therefore must have somehow been right in a way that I couldn’t see.

What changed wasn’t a revelation or anything, it’s just that I finally put the pieces of what I knew together and realized that Wallace himself wasn’t really a good person, and, while he was certainly talented in some ways, he didn’t have any kind of special intellectual gifts. Which makes him just like everybody else, of course; the mistakes he made were the mistakes that everybody makes, but that’s exactly it: after demystifying his work, I started to see it as coming from a particular perspective, and things started to become clear. I stopped thinking of his stuff as having been written by David Foster Wallace the Renowned Thinker and started thinking of it as having been written by Dave Wallace, a depressed, introverted, desperate human being, and once I started doing that, I was finally able to see the actual words he had put on the page and figure out what they actually said. More specifically, I realized that the reason I had initially felt like he had to be right was that he was similar in some ways to me, which is to say that I was making the same mistakes that he was, and I was taking that fact as confirmation that we both must have been right. Great minds think alike. What I was experiencing was the bad kind of validation: a reification of my own prejudices. My identification with his work obscured my understanding of it.

So the point is not that biographical details compel certain answers, it is simply that we must recognize that there is a question in the first place. This rather unfortunate New Yorker article, while attempting to make the opposite point, makes exactly this point:

“And even if Anita Raja is Elena Ferrante, what does her mother’s terrible persecution during the Holocaust have to do with the books she wrote?”

Yes, exactly! That is exactly the question! Answering that question (not specifically, but in general) is what your job is as a critic. I mean, this is really bizarre. The author calls this an “obvious question,” but the fact that it is a question at all completely negates her argument. Criticizing a revelation on the grounds that it doesn’t explain everything at once is just flat stupidity. The point is precisely that this work can now be done, that these sorts of questions have now become askable. And maybe, as things turn out, the correct answer will be “nothing,” and the whole line of inquiry will turn out to be a dead end, but we can’t know that until we’ve actually asked the questions and done the investigation. You can’t jump ahead and read the end before you start; you have to get through the whole story, as it is written.

See, it’s bizarre that Ferrante thinks she’s mitigating the Famous Author Effect by insisting on pseudonymity, because what she’s really doing is the exact opposite. Hiding the practical aspects of her identity maintains the mystique around her work rather than dispelling it. Of course she’s using a persona, but pseudonymity has nothing to do with that. Anyone who writes anything is necessarily cultivating a persona. Smoothing out your persona into that of a featureless Platonic “Writer” makes it more likely that people will see your work as some sort of emblem of what they think they need rather than taking it as it is, project themselves into it rather than looking at the actual words on the page.

Indeed, this very honest article on the subject admits to doing exactly that:

“With Ferrante’s anonymity, I do not have to feel any hesitations about the entanglement of self and art. It is okay, in essence, to make her work all about me. Without the details of her life, there is no way to know what personal experiences influenced the fiction she creates. I can project as much as I want onto her work without hesitation. In my mind, she has created work that boils down to a few major themes, and I can use those as plot points to create an image of her experiences that is convenient to me. Her work, to me, is what I see in it. And I have learned from it.”

Though her openness is commendable, it’s not really clear what the author is going for here – she seems to be aware that her position is wrong at the same time that she’s defending it – so I guess it’s my job to point out that yes, this is wrong. Fantasy is the enemy of learning, and convenience is the enemy of meaning. Projecting yourself onto art defeats its purpose; if that’s all you’re doing, it can’t give you anything you don’t already have. What relating to a work means, rather, is exactly what that word says: developing a relationship, understanding a work as something other, and then bringing yourself to that new place. The whole point of the truth is that it is outside your control.

Which is why this is wrong:

“They want to make her small, by making her a real person with a real history and real name and real background. They want to assert control over that person, and what it represents, by revealing it.”

It is the exact opposite. First of all, none of this is up to you: people are small, Ferrante does have a real background, her work is the result of a particular confluence of historical and material conditions, and the only giants are the ones in your imagination. More to the point, though, it is precisely through the void of anonymity that you can “assert control” over a work and define it however you want. A person has limitations, but limitations cut both ways: they constrain a person’s claims on the truth, and they also constrain your claims on that person. As we’ve just seen, the people resisting the fact that Ferrante has a real identity are doing so because they want total control over her work – they want it all to themselves. In the absence of limiting facts, you’re free to live in your own imaginary world. What the truth does, functionally, is to prevent you from doing this. It forces you to do what is right rather than what you want.

So yes, there is a sense here in which the truth of identity brings the author “down to Earth” and makes her “small” and “limited,” but these are good things, because that’s where the truth is. On the ground. Down here, not up there. And that’s what the truth is: things aren’t “more true” the more pompous and grandiose they are. True things can be held in your hand. None of this restricts the potential universality of anyone’s work; it’s what allows us to find it in the first place. Being a person doesn’t make you less of a writer, it makes you more of one. As Noreen Malone very succinctly puts it, “being attached to a specific, limited, actual person — rather than an airy abstraction — is only damning if you think there’s something lacking about being an actual person.” Again, it is bizarre that feminists are making the case otherwise. Surely it is feminists more than anyone else who believe that the basic experience of being a person is more important than any abstract social framework.

There is an allowance to be made here for the fact that the readers we’re talking about are mostly if not exclusively women, and we continue to exist in a society that does not really allow women to have their own experiences. It’s entirely understandable that people who have found a rare source of validation will resist any attempted imposition of a different narrative, especially when they are accustomed to such impositions being both unavoidable and wrong. But even if one accepts the value of comfort, which I don’t, comfort can never be enough. Comfort at its best enables you to get by, and if getting by is your goal, you’re a nihilist. The truth doesn’t corral you into one valid response, but it does establish a line. And you can’t claim honesty or good faith if you’re not willing to allow that line to cut through your comfort zone.

So the reason this issue can’t be left alone is that it’s a situation where cake is being had and also eaten. You can’t defend Ferrante as both a fragile human and an untouchable icon. You have to pick one or the other, and we all know what the right choice is. Indeed, that is whence cometh the defensiveness about all of this: these people know they’re wrong, and that is why they are resisting the truth. This isn’t a case where people are trying to impose their own standards on others; it’s a case where people have incoherent standards. If we take them at their word that they want the thing they say they want, then it’s only polite to inform them that they aren’t actually getting it.

On second thought, no, that isn’t the reason. I should be more honest here. I’m not doing this out of principles. I’m doing it because I’m upset. I’m upset about this:

“To fall in love with a book, in that way that I and so many others have fallen in love with Ferrante’s, is to feel a special kinship with its author, a profound sort of mutual receptivity and comprehension. The author knows nothing about you, and yet you feel that your most intimate self has been understood. The fact that Ferrante has chosen to be anonymous has become part of this contract, and has put readers and writer on a rare, equal plane. Ferrante doesn’t know the details of our lives, and doesn’t care to. We don’t know those of hers. We meet on an imaginative neutral ground, open to all.”

It’s difficult to know where to start with something like this. I suppose I’ll be polite and elide the psychological angle. The clearest flaw here is that all of this has absolute dick to do with how much you know about a writer’s personal details. You’re always doing this; in fact, nothing distinctive is actually being described here. So what exactly is the aspiration that the author feels she is being denied? Given the topic, it can only be the aspiration to avoid confounding details, to read unchallenged. This is cowardice. This demand for anonymity is a demand for a security blanket. I mean, come on. You really can’t relate to a work if there are already existing interpretations out there? You can’t feel understood other than by feeling like a special snowflake? You stop being able to relate to someone once you realize they’re different from how you imagined them? You can’t integrate uncomfortable truths into a deeper and more robust understanding? You call yourself a reader?

What aggravates me is that these people think they’re mounting some kind of brave last stand against the Famous Author Effect, when in fact they’re completely in thrall to it. They have so little ability to resist that they cannot imagine being able to relate to a work that actually makes demands of them. They’re so uncritical that they simply can’t function in the face of alternative theories. They’re so desperate for a smooth ride that they insist that a “neutral ground” be flattened out for them before they even step into the carriage.

You know what? I’m still not being honest enough. Calling this sort of thing “love” is viscerally repulsive. It makes me sick. What these people are saying is that they cannot love something that is actually real. They can only tolerate vague abstractions that allow themselves to be molded into whatever shape the “lover” desires. This is not love. It’s fetishization. It’s objectification.

The reason this is all so annoying is that we’re talking about the absolute basics here. I’m not even approaching any kind of radical critical theory. This is just the basic substance of what reading is. I can describe someone as “conniving” or “strategic” or “Machiavellian,” and these all mean different things, but that’s not because of anything that’s “in” the words themselves but because you bring a different set of associations with you to each word. Which means you might end up having to admit that, for example, you have some sort of weird idiosyncratic context that you’ve developed for a word that no one else has and therefore what you think the word is about is completely different from what everyone else thinks, and I might end up having to admit that I don’t know what words mean. There’s no way around this; there’s no “real” meaning. All text is context, and all language is a game. You can’t draw until you ante up.

And it’s actually even more annoying than that, because Ferrante’s fans were already doing this even when they didn’t know who she was, they just want to be able to deny to themselves that this is what they were doing. As mentioned, Ferrante never was any kind of abstract giant; she was always a person writing about being a person, and that’s what her fans were responding to, even as they histrionically insist the opposite. So all they’re actually doing is refusing to take on the responsibility of interpretation. They want to pretend that their initial naive impression of her work was “real.” Nothing is real. But that doesn’t mean there was necessarily anything wrong with that impression, it’s just that we shouldn’t be relying on intentional ignorance to obscure the situation for us. If we assume that the harshness of the truth is in conflict with personhood, we preemptively doom ourselves. We should be able to recognize that any public presentation is automatically a persona, and that a persona is always a shield.

And we must be able to do this and still find meaning in the work itself. The big kind of meaning, the same kind we’ve always been going for. Because that’s the thing: all those times in the past when someone encountered a work and felt the hand of god gripping them by the throat and decided that this could only be because the work was divinely inspired or the product of an inhumanly great talent, they, again, were doing this. There actually was a context and a set of biographical details that they were taking into account and sublimating into their broader understanding. They weren’t “surpassing” that context because they couldn’t; that’s impossible. They were just using a convenient fiction to make the job easier for themselves, to avoid complicating details and alternative interpretations. We can do better. We can do the job the hard way, the right way, and still get it done.

If knowing that Ferrante is a real human person fucks up your ability to relate to her work, that’s your problem. I don’t mean that as an insult, I mean it sympathetically. This is something we all have to deal with, and we have to deal with it regardless of how much specific information we have in each individual case. It doesn’t matter whether it’s something like 1984 that has already been theorized to death and back and then to death again or whether it’s unlabeled instrumental music that you found at random on a dead webpage. You always have to do your own work within the constraints of reality, and you always have to choose where to make your stand, and you always have to recognize that doing so leaves you open to attacks from all sides at all times. There’s nothing radical about any of this; it is merely the basic structure of how criticism works – how it has to work. The alternatives are phantasmic.

What’s scary about this, what people don’t want to accept, is that it’s up to you. There is no one bigger than you who can lift you up and carry you where you need to go. There are, at best, people who are just as lost as you are but who have been to different places and can suggest better directions, and a lot of the time there’s not even that. Progress requires assuming that you don’t know what you’re doing, and then of course doing it anyway. But the thing about the death of god is that god never existed in the first place. We were just pretending, and all that’s happened now is that we’ve stopped pretending. Because the truth is actually true, the only thing that a revelation can reveal is the thing that was always the case, all along. We have always lived in a world without giants, which means that the work that we’ve done has always been our own. It’s not that it’s up to you now, it’s that it’s still up to you. It has always been up to you; you have always been making this decision. Believe in yourself.

Get the cuts you need

This ranking of Tegan and Sara albums is a remarkably comprehensive argument against the idea that ranking things has any value whatsoever.

The first and most obvious issue is that The Con isn’t at the top, seeing as it’s clearly magical. When Robert Mapplethorpe was taking pictures of Patti Smith for the cover of Horses, he looked at one of the shots and said to her, “this one has the magic.” There’s nothing really remarkable about the image, it’s just a lady wearing a suit and looking at the camera, but it is, in fact, magic. Anyone who’s seen it has it permanently burned into their mind; it has a transcendent itself-ness that gives it an insistent significance; it communicates in a primal language that creates its own understanding; it’s the kind of thing that you feel in your heart before you even know what it is. The Con is the one that has the magic.

This seems like a slam-dunk argument that it’s “the best,” but it doesn’t actually hold up on reflection. The characteristic of magic is precisely that it does not obey the normal laws, so it’s incoherent to say that something is the objective best because it’s magical. On the contrary, the whole point of rankings, it would seem, is to get around vague notions of “specialness” and down to brass tacks.

So, I mean, fine. If you’re trying to rank things, you need to ignore magic and just focus on the things you can measure. This is already pretty suspicious, since we’re ignoring what seems to be the most important thing, but we can at least see if it works. The specific claim made by the listperson is that So Jealous is cohesive while The Con is all over the place. This is certainly true, and it’s also obviously intentional; you don’t follow “Hop A Plane” with “Soil, Soil” because you’re trying to provide a smooth listening experience. But there’s a critical missing step here, which is the argument that a consistent album is better than one with extreme emotional ups and downs. To me, this is clearly backwards: the album that pulls off greater emotional range and more diverse songwriting is the better one.

But the issue isn’t whether I’m right or the listperson is right; the issue is that neither of us is. That missing step is actually just nonexistent: there is no argument you can make as to why one mode of expression is “better” than another. This is professionally known as the is/ought gap: even an exhaustive description of reality does not imply any standard or metric for evaluating that reality. You can spend all day making correct, incisive observations, but everyone still has to decide on the actual value of the underlying content for themselves. So there’s all sorts of analysis to be done as to what an album does and what it’s about, but even after you’ve done that, tacking a number onto it is still completely arbitrary.

So the first two problems with rankings are that they can’t capture transcendence, and that they don’t add anything to the analysis they’re based on. You can just talk about music without sticking numbers next to it. But surely the problem is that we’re splitting hairs between great and greater albums, right? There are certainly some albums that can be said to be better than others, so ratings do have a proper and properly limited utility. For example, Under Feet Like Ours is clearly the worst Tegan and Sara album. They’re pretty much the exact opposite of the band that comes right out of the gate with a fully-formed sound and then struggles to move forward with it: their first record is jittery and awkward, all ideas and no form, and every album after it takes a huge leap in a new direction. The comparison is made extra simple by the fact that This Business Of Art contains many of the same songs reworked to be more fleshy and coherent, making it easy to see it as a strictly superior album.

But what does it actually mean to call something the “worst” album? Does it mean that you shouldn’t listen to it? I mean, maybe; it depends on how worst it is and whether Tegan and Sara are the sort of band who are good even when they’re bad (or when they want to be bad). So even if we take this ranking to be straightforwardly correct, it still gives us no relevant information. After all, we’re talking about the album that has “This Is Everything” on it, which is definitely something I would recommend experiencing (not to mention that that song is itself about transcendent value overcoming practical failure). This is the paradox of aesthetics: being worse doesn’t actually make something worse, so rankings are wrong even when they’re right.

At this point I should probably clarify that I am aware of what the actual purpose of ranking things on the internet is. Rankings are supposed to be bad and wrong, because they’re outrage bait (arbitrary lists of nonsensical rankings are a running joke on Gawker, the internet’s premier source of self-incriminating mockery). What I’m meant to be doing here is ranting about how putting Sainthood in the bottom half of basically any list about anything is blatantly pathological behavior. But the fact that rankings are guaranteed to generate strident disagreement is just more evidence that there’s no substance behind them. If, for example, I were inclined to make a (much) better version of this list, how would I handle Sainthood? The album is fucked up six ways from Monday, from the awkwardly staggered lyrics on “Paperback Head” to the way “Sentimental Tune” crescendos itself off a cliff. “Arrow” bristles with big, gaudy effects, “On Directing” piles up multiple layers of self-reference, “Red Belt” is placid to the point of pain. Are these good things or bad things? Should I just give in to idiosyncrasy and order the albums however I feel like, or should I try to restrain my own impulses as much as possible and put them in the most defensible order? Which of these approaches is less dishonest? The fact that there’s no answer to any of this reveals the fundamental problem: there is zero connection between the score someone gives an album and the actual experience they had listening to it, i.e. the thing that actually matters. The usual distinction is between subjective feeling and objective facts, but actually, neither of these things has any correspondence with rankings.

The reason this is important, the reason we shouldn’t just let these things be stupid and find something better to do (although we should also do that), is that there’s a reason they work, a real insight that underlies their cynical conception: people like rankings. It’s not just internet lists: the basic concept of giving something a numerical score necessarily implies a ranking order. So if it’s so obvious that no one cares which number some hack sticks next to something on a list, why are lists and scores and rankings all over the place?

If you’re reviewing like dishwashers or whatever, some of them actually are going to work better than others, and numerical ratings are a sensible way to represent that. The key, though, is the word “work,” which implies a defined function: the better appliance is the one that accomplishes its intended task more effectively. Some aspects of this will be subjective or situational, but by doing multiple carefully controlled tests and aggregating the results, we can arrive at an assessment that will be broadly applicable in most circumstances, one that is “objective” in the colloquial sense of the term. This process is what we generally refer to as “science.”

And you can’t apply this to art, because art isn’t functional. It is susceptible to analysis; you can focus like a microscope and make all sorts of substantive observations, but none of that is useful until you cross the is/ought gap and start valuing things. And once you’ve done that, you have left the realm of objectivity, turning any kind of rank or score you want to assign into a category error. Subjective experience isn’t just hard to get at, it’s absolutely inaccessible.

Why, then, would anyone get upset about a rating being “wrong,” given that it has nothing to do with how anyone feels about the work in question? The specific feeling at issue is invalidation. When someone gives a low “objective” score to something you care about, you feel like your experience is invalid. And the opposite feeling is the reason people like ratings in the first place: a high “objective” score means that you’re right to like something, that the way you feel is true. But such a source of validation can only be described as cowardice. Certainly, the person who embraces things only after they have been deemed permissible by the appropriate cultural gatekeepers is a coward. But the same is true of any external source of validation: the safety of objectivity is a refuge from the responsibility to determine one’s own values.

Music is one of the less extreme examples here, since the appeal of music is generally understood as idiosyncratic anyway. The most prominent example is, of course, ethics in games journalism. The extreme stupidity of that, uh, “debate” is somewhat offset by the fact that it’s drawing the battle lines very clearly. Some people think games are basically appliances that you plug yourself into in order to be “entertained,” in which case assigning them an objective numerical score according to how entertaining they are makes perfect superficial sense. Others think that the point of games is to create new experiences, which requires engagement with the real world and active acceptance of subjectivity. So the one nice thing about this is that it provides a convenient sanity check. If you find yourself on the wrong side of an argument this obvious, you’ve gotta back it up.

Subjective experience is inherently desperate. It contains within itself the understanding that it cannot be verified or transmitted, that it is a pure moment, that it seems on reflection to not exist at all, even though it’s the only thing of any actual importance. Art is largely an attempt to get around this, to turn subjectivity into something more substantive than simply raw feeling. At a live show in particular, it’s easy to imagine that everyone else in the room is feeling the same thing you are. But the attempt to reify subjective experience as something externally valid is no kind of solution, because this creates only a hollow shell, bereft of the animating spirit that made it matter in the first place. The only real option is to embrace the horror, to hold transience without shaking.

But we have to be careful not to back into reverse nihilism. If everyone just likes what they like, then nothing means anything. There’s no substantive distinction between hearing a symphony and watching paint dry. So it seems like we need a way to preserve objective orders of rank without muffling subjectivity.

The one rating system with a subjectivity-respecting justification is the two-point thumbs-up/thumbs-down system, because it implies nothing other than an answer to the actual question: should I spend time on this or not? But if you accept that, you’ve accepted the idea that there are objective standards that can be applied to art, and that would seem to open the floodgates. Any other rating system, no matter how convoluted, has the same theoretical justification. A rating of 3 stars out of 5 could mean “check this out only if you’re a fan of the genre”; 28.6 points out of 100 could mean “worth it only on a rainy day when you don’t feel like doing anything else.” So it seems like accepting the validity of analysis requires us to countenance every nonsensical rating system that anyone comes up with, and we’re stuck disagreeing with every Bad Opinion on a case-by-case basis.

This is a false dichotomy based on the idea that analysis requires a number as its output, and anything else is just personal feelings. It is not only possible to combine subjectivity and analysis, it is vitally necessary. If you only have your own subjective experience, then there’s no room for any kind of conversation or collaboration; everything is just, like, your opinion, man. And if all you have is analysis, you’ve just got a big convoluted structure that doesn’t mean anything; it might as well just be a big rock.

In fact, it’s perfectly easy to do everything valuable about analysis while ignoring the dumb numerical part. For example, consider this review of Sleater-Kinney’s discography (eMusic used to have a lot of good music writing, but it apparently got run over by the freight train of progress at some point). It has the same album-by-album format, but even as it makes judgments, it doesn’t pretend to be any kind of ranking, and this clarifies the underlying analysis. Actually, the reason I chose this example is because it’s moronic. The person who wrote it thinks that Janet Weiss joined on Call the Doctor, so he disses Lora MacFarlane’s drumming on the first album and then praises it on the second, without realizing he’s talking about the same person. The real problem, though, is that he thinks The Hot Rock was “Sleater-Kinney on an off day,” which I honestly can’t even begin to address. I lack the ability to inhabit the mental space where this is a comprehensible statement. But because the album isn’t “ranked,” there’s no fake argument about whether the ranking is “right” or whether it’s properly “objective” or what the fuck ever. We can just accept that this guy has terrible taste in music and get on with our lives.

What we need to do is to split the concept of ranking along its fault line. As mentioned, it conflates two distinct things that don’t have a real connection: there’s analysis, and then there’s putting a number on that analysis. And what putting a number on something actually means is establishing a hierarchy. Hence the phenomenon of the internet ranking list: a list of which things are better than which other things.

Hierarchy has its uses, but our society has established a hierarchy of people, and this surely ranks among the greatest possible crimes against existence. Its function is to justify the dominance of the ruling class by positing them as the “best” people, and to justify the direction of our development as progress toward a higher goal. People want to feel like they live in an ordered universe, but there are different types of order, and some of them are dispreferable to chaos. Recognizing that there is no real apotheosis, no greatest hit, reveals our society’s horrors as the chosen project of our rulers – a project that we have the freedom to oppose. This is the cure for our crimes.

There’s an old saying, which I thought was Chinese but am completely failing to source, that perfection arouses the envy of the gods. I mean, everything is an “ancient Chinese proverb,” but I genuinely thought that this was a Chinese or Japanese concept. There is, for example, the Japanese term “wabi-sabi,” which expresses the idea that flaws can make something better. But this isn’t like mystical wisdom or whatever; it’s a very practical concept that doesn’t have anything to do with spirituality or metaphysics. What it means is that perfection is impossible not because it is the asymptotic limit of quality, but because it is a self-defeating concept.

If there actually were a perfect album, something that literally every person agreed was the best, that could only mean that no one had an individual reaction to it. If one person were to feel something about it that no one else did, that opens up the possibility that someone else could have such a reaction to a different album, meaning the perfect album is not perfect. In order to actually be perfect, it would have to provide all possible experiences to all people, which is to say it would have to do nothing. The only way to speak to everyone equally is to remain silent. Actually, even that’s not true; 4’33” is one of the most divisive pieces of music in existence. Anything that actually exists is necessarily rough, flawed, divisive, incomplete, and wrong. Perfection is logically impossible.

In a more practical sense, this means that nothing can ever meaningfully be said to be “the best” of anything, even when the category is as simple as 7 albums by the same band. Even on a strictly personal level, this is why it doesn’t make sense to have favorites. Casual conversation is one thing, but seriously conceiving of your values in terms of “favorites,” as a hierarchy, is ridiculous. More than that, it’s a concession to a social schema which is actively trying to kill you, a subordination of your subjectivity, the thing that makes you actually exist, to a fake, boring god of rules and lines. America has a very particular problem with being unable to comprehend quality in any sense other than being “#1” or the “Mattress King” or whatever. Hence certain of our current political problems.

And Tegan and Sara are actually one of the best examples of why this whole idea is dumb, because they don’t make albums that are trying to be better versions of albums they’ve already made. They make different albums. Following up Sainthood with Heartthrob was a good move, even from my perspective, where Sainthood is the kind of music I like and Heartthrob is, uh, less so. The only thing worse than changing is not changing. In the words of Kathleen Hanna, we don’t want to hear you making the right decisions, we want to hear your voice. You might write something that someone might want to read, someday.