Download your emotion, baby

Why does no one understand anything about the internet? Serious question. Look at this nonsense:

Freed from the anguish of choosing, music listeners can discover all kinds of weird, nettlesome, unpleasant, sublime, sweet, or perplexing musical paths.

I honestly can’t remember the last time I encountered a howler as blatant as claiming that choice is now less important because of the multiplicity of options offered by the internet. Obviously, the opposite is true: choice is now so omnipresent as to have become tyrannical. It used to be that you were justified in just listening to whatever was on the radio, or whatever the officially licensed music weirdo at the record store recommended, or whatever bands happened to be playing at your local venues, because you didn’t really have any other options. Now you have all the options; you have to choose. At every moment of every day, you must choose the one thing out of an infinity of options that you will spend this portion of your finite human existence on, and you must do so with the full knowledge that you are damning yourself to miss out on all the things you didn’t choose, forever.

I think that much is pretty obvious. But here’s the important part. This:

These paths branch off constantly, so that by the end of a night that started with the Specials, you’re listening to Górecki’s Miserere, not by throwing a dart, but by following the quite specific imperatives of each moment’s needs, each instant’s curiosities. It is like an open-format video game, where you make the world by advancing through it.

is also wrong. (Also this is a typically terrible video game analogy made by someone who has no idea what video games are actually like, but one thing at a time here.) Having theoretical access to every song ever made (which is not actually the case, but seriously, one thing at a time) does not suddenly transport you into an unfettered wonderland of pure personal choices. In fact, the author cites a rather strong piece of evidence against himself: Spotify carries about four million songs which have never been listened to, by anyone, ever. So it is clearly not the case that people are freely venturing into heretofore unexplored terrain. Indeed, the fact that internet discourse is crammed full of nostalgia suggests that people actually aren’t seeking out new experiences at all. You may have noticed that, post-internet, pop stardom and celebrity are bigger industries than ever. The paradox of internet culture is that a practical infinity of choices makes people more likely to stick with what they already know. Except that’s not a paradox at all, because of course that’s what’s going to happen. The internet does not magically remove society’s existing constraints. On the contrary, by strengthening people’s ability to engage, the internet enables people to cleave more strongly to the things that they were already into. Ergo, Beyoncé’s Twitter mob.

This part makes the failure of analysis pretty clear:

Just five years ago, if you wanted to listen legally to a specific song, you bought it (on CD, on MP3), which, assuming finite resources, meant you had to choose which song to buy, which in turn meant you didn’t buy other songs you had considered buying. Then, a person with $10 to spend could have purchased five or six songs, or, if he was an antiquarian, an album. Now, with $10, that same person can subscribe to a streaming service for a month and hear all five or six songs he would have purchased with that money, plus 20 million or so others.

What’s missing here is very obviously the non-monetary component of opportunity cost. A person has only so many hours in the day to spend listening to music. So yes, it’s great that money is less of a constraint now, but the more important constraint, the issue of what you’re actually going to choose to do with your finite human existence, is as strong as ever. In fact, it’s stronger: there is now more nonsense to engage with, more to attend to, more demands on your attention, and hence less time to make these supposedly free choices we’ve all been gifted with.

These factual inaccuracies point us to the deeper philosophical problem, which is that choice is not simply a matter of the raw number of options you have. Having more options makes it more likely that your choice-set will include good choices, but it also makes it harder to find those choices amidst the noise.

Think of it this way: imagine all the songs on Spotify were unlabelled. All you could do was listen to songs at absolute total random out of its entire catalog. Total horrorshow, right? But this is the maximum amount of free choice: it is totally unencumbered by any kind of bias, including your own. Now imagine that the songs were all labelled, but there were no other discovery tools. This is better, because you can at least find things you’ve already heard of and check out new songs with interesting names, but it’s still pretty hard to discover stuff. Now consider the internet as it currently exists, where you’re constantly being barraged with recommendations and promotions and so forth. This is both more constrained and better than any permutation of the above examples, because you actually have stuff to go on: you can find recommenders you trust, branch out from things you already like, and so on.

What’s happened here is that our choices have gotten better as they’ve become more constrained, and the reason is that the constraints are operating in the correct direction: towards things you might actually want to listen to. There are, of course, also constraints that operate in incorrect directions; the reason most of what’s on the radio is garbage is that it’s selected based on what executives think will make money rather than what actually sounds good. So, naturally, there is a situation better than the current one, which is one where all of those recommendation engines and music bloggers and so forth don’t have ulterior motives in the areas of commercial appeal and popularity. This is, of course, an additional constraint that removes things from your search queue that got there because of advertising or whatever, and it, again, makes things better. Choice is a false idol; freedom isn’t free.

And this is a good thing, because if your choices really were totally unconstrained, they would be essentially random, which is to say chaotic, which is to say meaningless. Remember that bit above about “each instant’s curiosities”? Yeah, that’s nihilism. If you’re seriously just going off of your pure momentary whims, you’re an animal. Whereas when you do things like check out formative artists in genres you like, or explore the various bands that were part of a scene you’re interested in, you are engaging with the structure of reality and making choices that are actually connected to the things you care about. While there is a real and important distinction between coerced and uncoerced choices (and lack of options can be a form of coercion), a choice has to be based on real-world conditions in order to be meaningful; the concept of an “unconstrained” choice is oxymoronic. It’s only by being attached to contingent circumstances in the real world that your choices have any chance of being worth a shit.

Indeed, this “free choice” framing betrays a disturbing assumption: that any experience is just as good as any other. If the pure number of options you have is what’s meaningful, that can only be because the content of the options themselves is not meaningful. Which, if true, would mean that all experiences are meaningless. This, for example:

When I hear a song for the fiftieth time, I remember the wall color of my studio apartment on Mt. Vernon Street in Cambridge, Massachusetts, in 1996, and I remember how cold the awful landlady kept it, and I remember her shivering whippet scratching at my door so that he could come in and curl up in the hollows of my giant furry Newfoundland.

is nothing but the worst kind of banal egoism. If the only significance of music is that it reminds you of some arbitrary shit from your past, then music is meaningless. It might as well be a scrap of paper or an oddly-shaped rock.

Luckily, this is a lie, which becomes obvious when you consider why the following is wrong:

Play the songs you heard on February 2, 2013, in the order in which you played them, and you can recreate not just the emotions but the suspense and surprise of emotion as it changes in time.

Dude has literally never heard that a person cannot step into the same river twice. What makes music powerful is the fact that its substance emerges from particular experiences, not that it is buried within them. Contingency and temporality are what make existence meaningful; without them, you are a portrait and not a person.

Oh and by the way all of this actually has dick to do with the internet. Internet technology enables all of this, but the actual power sources behind these dynamics are political and psychological, just as they always have been. The basic failing of almost all writing about the internet is that it assumes that the same old patterns of behavior somehow assume an unprecedented radical significance now that they’re happening On The Internet.

So, okay, let’s hit this. The first part is easy: the reason people want their experiences to be permanently frozen in time and eternally retrievable is because they don’t want to die. Tough shit, friends. Your name has not been written in the book of life. You’re going to exist for a while, and then you’re going to stop existing.

Moving on, the choice fallacy is a clear outgrowth of consumerism. The idea that picking your very own very special choice from the largest possible menu of options is the ideal situation is a fantasy concocted to sell shit in supermarkets. If contingency matters, then goods are not fungible and capitalism loses its claim to meaning. Which is of course the case; even under the most charitable interpretation of capitalism, what it’s good for is producing enough goods to give people the opportunity to do things that are actually meaningful. Taking economic growth itself as a goal is a blatant capture of the ends by the means. In the same sense, to assume that having the largest number of possible options for which music or movies or books or whatever to experience is what matters is to forget what makes these things worth experiencing in the first place.

The last piece of the puzzle is why everyone constantly talks about The Internet like it actually has its own agenda, rather than simply being an amplifier (or a suppressor) for existing motivations. This is pure ideology. The internet just happened yesterday, so it’s easy to take it as an explanation for everything that’s happening right now and thereby avoid any examination of the underlying forces. Because those forces are not the lizard people and the reverse vampires; those forces are you. The actual conspiracy is the one inside your head, constantly arranging everything you experience to serve its invisible ends. Aggregating data ain’t going to get you out of this. You’ve got to fight theory with theory.

F+

Here’s a modern horror story:

“The teacher takes the girl’s paper and rips it in half. ‘Go to the calm-down chair and sit,’ she orders the girl, her voice rising sharply.

‘There’s nothing that infuriates me more than when you don’t do what’s on your paper,’ she says, as the girl retreats.

. . .

After sending the girl out of the circle and having another child demonstrate how to solve the problem, Ms. Dial again chastises her, saying, ‘You’re confusing everybody.’ She then proclaims herself ‘very upset and very disappointed.’”

Let’s briefly set aside the hilarious irony of an irate adult sending a child to the “calm-down chair,” because this is actually important. It’s not about being “mean,” or the teacher “losing control,” or whether the kids are being “terrorized” or need to “toughen up.” It’s about ideology.

That’s why this person is an idiot:

“Some parents had another view. Clayton Harding, whose son, currently in fourth grade, had Ms. Dial as a soccer coach, said: ‘Was that one teacher over the line for 60 seconds? Yeah. Do I want that teacher removed? Not at all. Not because of that. Now if you tell me that happens every single day, that’s a different thing. But no one is telling me that, and everyone is telling me about all the amazing things that she does all the other days.’”

One of the more dangerous things about the internet is that it creates the illusion that “data” just pops up out of nowhere instead of having specific contingent physical sources (also “data” is a conceptual category and not actually a type of physical thing, but that’s another story). In this case, obviously, the assistant teacher would not have been recording this incident unless they already knew that something was up, i.e. this type of thing had in fact been happening on a regular basis.

More than that, though, the fact that this is how the teacher behaves when she has a “lapse in judgment” means that the rest of the time she’s biting her tongue. What we’re seeing here is what she actually believes: that “underperforming” children deserve to be ostracized and humiliated. And the fact that the school supports her means that’s also what the school believes.

Per standard procedure, the New York Times spends the entire article wringing its hands over a bunch of nonsense, then buries the lede right at the end:

Dr. McDonald, the N.Y.U. professor, who also sits on the board of the Great Oaks Charter School on the Lower East Side, said that the behavior in the video violated an important principle of schooling.

“Because the child’s learning was still a little fragile — as learning always is initially — she made an error,” he said in his email. “Good classrooms (and schools) are places where error is regarded as a necessary byproduct of learning, and an opportunity for growth. But not here. Making an error here is a social offense. It confuses others — as if deliberately.”

Whether this is in fact a “principle of schooling” is precisely the issue. As I’m sure you’re aware, what is euphemistically referred to as “education reform” is in fact a major ideological conflict over this exact point. But even this guy doesn’t have it quite right – he’s still framing things as though a child giving an unexpected response is an “error” that needs to be “corrected,” as though “errors” are “byproducts” of growth rather than the substance of growth themselves.

Naturally, since we’re talking ideology here, this confusion is not limited to schooling. Labelling something as an “error” is a pretty obvious value judgment. What we’re actually talking about is what it means for something to be “correct” in general; what, in a practical sense, is the right thing. In the past, we had the idea that “might makes right,” that those who happened to be victorious were by that fact necessarily of superior ability or favored by god or whatever. Despite the phrase now being shorthand for barbarism, this philosophy has one major advantage: the winning party has to actually win. They have to do something to deserve it. Today, we’re enlightened enough that we don’t have to worry about reality anymore. We now live in a world where “right makes right,” where what’s right is right by virtue of it being accepted as such and for no other reason, where filling in the bubble labelled “B” on a standardized test is correct if and only if the grading rubric specifies “B” as the correct answer.

Wikipedia, for example. How do you know that the information on Wikipedia is accurate? Well, if it weren’t, someone would have corrected it.

There’s a broad misconception that people only know things that they have been explicitly taught. This is most dramatically demonstrated in the area of language. Children learn their first language (or two) without ever being explicitly taught anything about it. It’s actually not at all clear how it would even be possible to “teach” language to someone who can’t talk; it would be very much like teaching the proverbial blind person to see colors. Yet, as children age, we cling to the idea that they must be educated out of their “errors,” that a language is a big stone tablet of rules against which one checks each utterance for “correctness.” In the saddest case, a person reaches adulthood with a disorganized basket stuffed full of “rules” that they then go about waving in the face of anyone who says anything “incorrectly.”

The truth is not that verification implies correctness, but that learning implies error. Language correction is self-contradictory: the fact that you have to tell someone that they said something wrong means that there wasn’t actually anything wrong with it. If there had been, the error would have occurred organically: they would have been misunderstood. It’s clear that this is how we actually learn things about language: we fail to express ourselves, and then we try again.

(Of course, this only applies to people who are paying attention. We’re all familiar with the type of person who talks so much and so inattentively that they end up creating their own unique mishmash of noises and gesticulations, such that they are able to utter on endlessly without ever intersecting reality.)

The point is that things we are explicitly taught account for probably about 1% of our actual knowledge base. What actually happens is that we have experiences and then we try to create a framework under which those experiences make sense. As such, it’s theoretically possible to accelerate the process, to create a sort of hyper-pedagogy in which the student is constantly barraged with miniaturized interactions designed to create a specific understanding. And by “theoretically” I mean video games.

The basic framework for modern video games is the challenge/failure/retry loop. The historical-material basis for this was arcade games. Arcade games were required to eat quarters, which meant each play session had to 1) provide a dopamine jolt, 2) terminate itself (eat the quarter), and 3) provide an incentive for initiating another session (inserting another quarter). The most popular solution to this equation was something called “extra lives.” Your quarter bought you a certain number of lives (usually 3, a psychologically significant number), and then the game started trying to kill you.

The critical moment comes when you fail a challenge and then have the opportunity to try it again. If the game is at all decently designed, you’ll have some idea of what happened and what you want to try to do next time. So in your typical action game like Metal Slug or whatever, the right thing to do is to hit the enemies with your attacks and the wrong thing to do is to get hit by the enemies’ attacks. So you’ll be thinking about how to position yourself and when to attack and so forth, and you’ll want to try again in order to test these ideas out. By hitting you with this sort of scenario over and over again, the game locks you into whatever its idea of a good time is. Materially speaking, in order for the game to be as short and dopamine-intense as possible, the failure loop happens as often as possible. In other words, games are very educational.
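If you want the shape of that loop spelled out, here’s a minimal sketch in Python (the names, like attempt_challenge, are hypothetical ones of mine; this is the quarter-eating structure described above, not any actual game’s code): the session buys a few lives, failure consumes them, and the ending doubles as the sales pitch for the next quarter.

```python
import random

LIVES_PER_QUARTER = 3  # the psychologically significant number

def attempt_challenge(skill: float) -> bool:
    """One challenge in the loop: succeed with probability `skill`."""
    return random.random() < skill

def play_session(skill: float = 0.4) -> None:
    """One quarter's worth of the challenge/failure/retry loop."""
    lives = LIVES_PER_QUARTER
    stage = 1
    while lives > 0:
        if attempt_challenge(skill):
            stage += 1  # the dopamine jolt: visible progress
        else:
            lives -= 1  # the game trying to kill you
            # between failures, the player forms a theory of what to try next
    print(f"GAME OVER at stage {stage}")  # the session terminates itself...
    print("INSERT COIN TO CONTINUE")      # ...and solicits another session

if __name__ == "__main__":
    play_session()
```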

And what’s being taught is the thing that every one of these interactions has in common. You’re presented with a given situation with given rules, and there’s a “correct” set of actions to take that will result in the outcome that has been defined by the game as “success.” Executing this set of actions is the right thing to do. Anything else is the wrong thing to do. In certain extreme cases, progressing in a game will require you to do something that is obviously wrong in terms of narrative, such as aiding an enemy or falling into a trap. In such cases, the game implicitly frames doing the wrong thing as the right thing to do.

So the original problem is obviously that games have mostly been about dumb things like avoiding projectiles and jumping on turtles. And this is still largely the case; increased substance in games has been well outstripped by increased flash and pretentiousness. More fundamentally, though, the material situation has changed. Since games have stopped needing to eat quarters, the “failure” part of the loop has atrophied. Now that we pay for games once and play them until “finished,” failure becomes a mere impediment that may as well be done away with. Instead, games now give the player some actions to perform, reward them for doing so, and that’s it.

But “bringing back” failure isn’t a solution. “Failure” on its own isn’t any more significant than “success.” In fact, there’s currently a countertrend in the form of “ultra-hard” games which jam the failure loop into overdrive, and this isn’t any better. Failing the same meaningless challenge 100 times is exactly as pointless as successfully executing the same meaningless task 100 times, in exactly the same way.

What we ought to be looking for are forms of success and failure that are interesting, that cause you to reassess your situation in some way, to question your assumptions, and to gain new insights. Which is what art is supposed to do. It’s deeply sad that people are so zealous in insisting that games “count” as art, yet so blasé about actually getting them to do the things that art is good for.

Of course, not all games are derived from the arcade model. Sim games, for example, tend to lack explicit goals and thereby make room for interesting failures. SimCity allows you to explicitly sic disasters on yourself just to see what happens. Dwarf Fortress is mostly known for players’ stories of the hilarious catastrophes they’ve suffered. And of course there are pure story games, as well as games that are entirely focused on providing aesthetic experiences. But the failure loop is still at the core of how video games are generally conceived, and these exceptions are often ones that prove the rule. For example, story games often have “correct” choices that you need to make in order to get the “true” ending.

On the other hand, I’d be remiss not to mention that, throughout the history of games, players have often ignored what games are supposed to be about and created their own goals and rules of interaction. The most popular example is speedrunning, which usually involves subverting the normal progression of a game and playing it in a way it wasn’t designed for. This provides heartening evidence that structure isn’t everything, that people can find their own truths even in the midst of the labyrinth. Still, the motivation for finding these alternate paths in the first place is often the poverty of the intended experience. The fact that people can make do with garbage is hardly a justification to keep producing more. On the contrary, this sort of player creativity provides us with new vistas to set out for.

(Games that explicitly support speedrunning have entirely missed the mark in this regard: the point is not to incorporate speedrunning as a new task in the same type of game, the point is that there are different types of games to be designed.)

If connecting all of this to the current state of society seems hyperbolic, that’s probably because it is. Video games are barely doing anything right now. We’re lucky that they’re still in their infancy; the problem is that they’re enfants terribles, and they’re going to grow up. And this may in fact happen sooner rather than later.

Going back to our unfortunate charter school students: what are they actually learning? For a few minutes each day, they are presented with some facts or rules or something that they are instructed to internalize. But every second they’re at school, they are being taught a deeper lesson: that the goal of life is to respond to challenges by producing the right answers. What we’re looking at here is the mentality for which “failure is not an option.” This phrase is, first of all, a category error, because failure isn’t something you choose, but more importantly, it represents an extremely dangerous way to think. It assumes that everything’s been figured out, that our society’s assumed goals are not only correct, but worth any sacrifice.

In education, it’s unavoidable that students will say or do things that a lesson planner could never have anticipated. This is a good thing. They’re children, and they’re human. When it comes to games, inconveniences like these can be abstracted away. The player can be given no actions to perform but the “correct” one, and no tools except those needed to do so. One can then be assured that they will do the right thing. Imagining such a system applied to actual humans is obviously horrendous. And yet, those who think that the purpose of education is to train children to answer correctly are advancing precisely this dystopia – the same dystopia enjoyed by millions as their primary form of entertainment. And so it is that, by an astonishing coincidence, the rise of video games has coincided exactly with the rise of neoliberalism.

When Shakespeare said that “all the world’s a stage,” what he was actually saying was obviously “I’ve got plays on the brain 24/7.” All the world’s an anything if you’re obsessed enough with whatever that thing is. We can just as easily conceive of the world as a game: one in which we are constantly presented with tiny tasks governed by rules of interaction. Every time we act, the world reacts; we get feedback; we learn something. But our actions also create the context in which further interactions happen: we’re designers as well as players. Every second of every day, we are creating ideology, and knowing this gives us a small amount of control over the process. If all the world’s a game, it’s badly designed. But we, as the players, still have a choice. We can choose to go for the high score and unlock all the achievements, or we can choose to play a different game of our own design.

Gamed to death

My post about level ups needs an addendum, as there’s a related issue that’s somewhat more practical. That is, it’s an actual threat.

The concept of power growth can be generalized to the concept of accumulation, the difference being that accumulation doesn’t have to refer to anything. When you’re leveling up in a game, it’s generally for a reason, e.g. you need more HP in order to survive an enemy’s attack or something. Even in traditional games, though, this is not always the case. There are many RPGs where you have like twelve different stats and it’s not clear what half of them even do, yet it’s still satisfying to watch them all go up when you level. This leads many players to pursue “stat maxing” even when there’s no practical application for those stats. Thus, we see that the progression aspect of leveling is actually not needed to engage players. It is enough to provide the opportunity for mere accumulation, a.k.a. watching numbers go up. This might sound very close to literally watching paint dry, but the terrible secret of video games is that people actually enjoy it.
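To make the pointlessness concrete, here’s a toy sketch (a hypothetical stat block, not any particular RPG’s) of stat maxing where only one stat is ever read by an actual mechanic, and the leveling routine neither knows nor cares:

```python
# Hypothetical stat block: only `hp` is ever consulted by a game mechanic;
# the rest exist purely to be watched going up.
stats = {"hp": 10, "strength": 5, "luck": 3, "aura": 2, "moxie": 1}

def level_up(stats: dict) -> None:
    """Increment every stat, whether or not it does anything."""
    for name in stats:
        stats[name] += 1

def survives_hit(stats: dict, damage: int) -> bool:
    """The entirety of the 'gameplay': the one place any stat is read."""
    return stats["hp"] > damage

for level in range(2, 6):
    level_up(stats)
    print(f"Level {level}: {stats}")  # satisfying, regardless of what 'moxie' does
```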

The extreme expression of this problem would be a game that consists only of leveling up, that has no actual gameplay but merely provides the player with the opportunity to watch numbers go up and rewards their “effort” with additional opportunities to watch numbers go up. This game, of course, exists; it’s called FarmVille, it’s been immensely popular and influential and has spawned a wide variety of imitators. The terror is real.

Of course, as its very popularity indicates, FarmVille itself is not the problem. In fact, while FarmVille is often taken to be the dark harbinger of the era of smartphone games, its design can be traced directly back to the traditional games that it supposedly supplanted (the worst trait of “hardcore” gamebros is that they refuse to ever look in the damn mirror). Even in action-focused games such as Diablo II or Resident Evil 4, much of the playtime involves running around and clicking on everything in order to accumulate small amounts of currency and items. While this has a purpose, allowing you to purchase new weapons and other items that help you out during the action segments, it doesn’t have to be implemented this way. You could just get the money automatically whenever you defeat an enemy, as you do in most RPGs. But even in RPGs where this happens, there are still treasures and other collectibles littering the environment. This is a ubiquitous design pattern, and it exists for a reason: because running around and picking up vaguely useful junk is fun.

This pattern goes all the way back to the beginning. Super Mario Bros., for example, had coins; they’re one of the defining aspects of what is basically the ur-text of video games. Again, these coins actually did something (they gave you extra lives, eventually. Getting up to 100 coins in the original Super Mario Bros. is actually surprisingly hard), but again again, this isn’t the actual reason they were there. They were added for a specific design reason: to provide players with guidance. Super Mario Bros. was a brand-new type of game when it came out; the designers knew that they had to make things clear in order to prevent players from getting lost. So one of the things they did was add coins at strategic locations to encourage the player to take certain actions and try to get to certain places. And the reason this works is because collecting coins is fun on its own, even before the player figures out that they’re going to need as many extra lives as they can get.

[Image: The coins here are positioned to indicate to the player that they’re supposed to jump onto the moving platform to proceed.]

And there’s something even more fundamental than collectibles, something that was once synonymous with the concept of video games: score. Back in the days of arcade games, getting a high score was presented as the goal of most games. When you were finished playing, the game would ask you to enter your initials, and then show you your place on the scoreboard, hammering in the idea that this was the point of playing. Naturally, since arcade games were designed to not be “completable,” this was a way of adding motivation to the gameplay. But there’s more to it than that. By assigning different point values to different actions, the designers are implicitly telling the player what they’re supposed to be doing. Scoring is inherently an act of valuation.

In Pac-Man, for example, there are two ways you can use the power pellets: you can get the ghosts off your ass for a minute while you try to clear the maze, or you can hunt the ghosts down while they’re vulnerable. Since the latter is worth more points than anything else, the game is telling you that this is the way you’re supposed to be playing. The reason for this, in this case, is that it’s more fun: chasing the ghosts creates an interesting back-and-forth dynamic, while simply traversing the maze is relatively boring. Inversely, old light-gun games like Area 51 or Time Crisis often had hostages that you were penalized for shooting. In a case like this, the game is telling you what not to do; rather than shooting everything indiscriminately, you were meant to be careful and distinguish between potential targets.
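In code terms, a score table is nothing but a dictionary of designer judgments. The numbers below are illustrative, loosely modeled on the Pac-Man and light-gun examples rather than taken from either game’s actual values:

```python
# A score table is an explicit value system: each entry tells the player
# what the designers want them to do (positive) or avoid (negative).
SCORE_TABLE = {
    "clear_pellet": 10,           # traversing the maze: worth something, but boring
    "eat_vulnerable_ghost": 400,  # the real message: hunt the ghosts
    "shoot_enemy": 100,
    "shoot_hostage": -500,        # the light-gun case: what NOT to do
}

def score_run(events: list[str]) -> int:
    """Total up a session; the sum is the game's verdict on how you played."""
    return sum(SCORE_TABLE.get(event, 0) for event in events)

print(score_run(["clear_pellet"] * 20 + ["eat_vulnerable_ghost"] * 4))  # 1800
print(score_run(["shoot_enemy"] * 5 + ["shoot_hostage"]))               # 0
```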

So, in summary, the point of “points” or any other “numbers that go up” is to provide an in-game value system. What, then, does this mean for a game like FarmVille, which consists only of points? It means that such a game has no values. It’s nihilistic. It’s essentially the unironic version of Duchamp’s Fountain. The point of Fountain was that the work itself had no traditional artistic merit; it “counted” as art only because it was presented that way. Similarly, FarmVille is not what you’d normally call a “game,” but it’s presented as one, so it is one. The difference, of course, is that Duchamp was making a rather direct negative point. People weren’t supposed to admire Fountain, they were supposed to go fuck themselves. FarmVille, on the other hand, expects people to genuinely enjoy it. Which they do.

And again, the point is that FarmVille is not an aberration; its nihilism is only the most naked expression of the nihilism inherent in the way modern video games are understood. One game that made this point was Progress Quest, a ruthless satire of the type of gameplay epitomized by FarmVille. In Progress Quest, there is literally no gameplay: you run the application and it just automatically starts making numbers go up. It’s a watching paint dry simulator. The catch is that Progress Quest predates FarmVille by several years (art imitates life, first as satire, then as farce); it was not parodying “degraded” smartphone games, but the popular and successful games of its own time, such as EverQuest, which would become a major influence on almost everything within the mainstream gaming sphere. The call is coming from inside the house.
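The whole lineage boils down to something like the following sketch (an exaggeration for illustration; Progress Quest dresses its nothing up considerably more): a loop with no player verbs at all, just numbers going up on a timer.

```python
import time

# The FarmVille/Progress Quest limit case: a "game" that plays itself.
# Your only role is to watch the numbers.
gold, level = 0, 1

while level <= 5:           # capped so this demo terminates; the real thing never does
    time.sleep(0.1)         # "gameplay"
    gold += 7               # numbers
    if gold >= level * 20:  # go
        level += 1          # up
        print(f"Ding! Level {level} ({gold} gold)")
```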

Because the fact that accumulation is “for” something in a game like Diablo II ultimately amounts to no more than it does for FarmVille. You kill monsters so that you can get slightly better equipment and stats, which you then use to kill slightly stronger monsters and get slightly better equipment again, ad nauseam. It’s the same loop, only more spread out and convoluted; it fakes meaning by disguising itself. In this sense, FarmVille, like Fountain, is to be praised for revealing a simple truth that had become clouded by incestuous self-regard.

There is, of course, a real alternative, which is for games to actually have some kind of aesthetic value, and for that to be the motivation for gameplay. This isn’t hard to understand. Nobody reads a book because they get points for each page they turn; indeed, the person who reads a famous book simply “to have read it” is a figure of mockery. We read books because they offer us experiences that matter. There is nothing stopping video games from providing the same thing.

The catch is that doing this requires a realization that the primary audience for games is currently unwilling to make: that completing a goal in a video game is not a real accomplishment. As games have invested heavily in the establishment of arbitrary goals, they have taken their audience down the rabbit hole with them. Today, we are in a position where certain people actually think that being good at video games matters, that the conceptualization of games as skill-based challenges is metaphysically significant (just trust me on this one, there’s evidence for it but you really don’t want to see it). As a result, games have done an end-run around the concept of meaning. Rather than condemning Sisyphus to forever pushing his rock based on the idea that meaningless labor is the worst possible fate, we have instead convinced Sisyphus that pushing the rock is meaningful in the traditional sense; he now toils of his own volition, blissfully (I wish I could take credit for this metaphor, but this guy beat me to it).

This is an understandable mistake. As humans, limited beings seeking meaning in the raw physicality of the universe, we’ve become accustomed to looking for signs that distinguish meaningful labor from mere toil. It is far from an unusual mistake to confuse the sign for the destination. But the truth is that any possible goal (money, popularity, plaudits, power) is also something that we’ve made up. The universe itself provides us with nothing. But this realization does not have to stop us: we can insist on meaning without signs, abandon the word without losing the sense. This is the radical statement that Camus was making when he wrote that “we must imagine Sisyphus happy.” He was advising us to reject this fundamental aspect of our orientation towards reality.

We have not followed his advice. On the contrary, games have embraced their own meaninglessness. The most obvious symptom of this is achievements, which have become ubiquitous in all types of games (the fact that they’re actually built into Steam is evidence enough). Achievements are anti-goals, empty tokens that encourage players to perform tasks for no reason other than to have performed them. Many are quite explicit about this; they’re things like “do [some task] 1000 more times than you would have to do it to complete the game.” Some achievements are better than this, some even point towards interesting things that add to the gameplay experience, but the point is the principle: that players are expected to perform fully arbitrary tasks and to expect nothing else from games. In light of this, it does not matter whether a game is fun or creative or original or visually appealing. No amount of window dressing can counteract the fact that games are fundamentally meaningless.

If you want a picture of the future of games, imagine a human finger clicking a button and a human eye watching a number go up. Forever.


While renouncing games is a justifiable tactical response to the current situation, it’s not a solution. Games are just a symptom. Game designers aren’t villains, they’re just hacks. They’re doing this stuff because it works; the problem is in people.

Accumulation essentially exploits a glitch in human psychology, similar to gambling (many of these games have an explicit gambling component). It compels people to act against their reason. It’s not at all uncommon these days to hear people talk about how they kept playing a game “past the point where it stopped being fun.” I’m not exactly sure what the source of the problem is. Evolution seems unlikely, as pre-civilized humans wouldn’t have had much opportunity for hoarding-type behavior. Also, the use of numbers themselves seems to be significant, which suggests a post-literate affliction. I suppose the best guess for the culprit would probably be capitalism. Certainly, the concept of currency motivates many people to accumulate it for no practical reason.

Anyway, I promised you a threat, so here it is:

“They are told to forget the ‘poor habits’ they learned at previous jobs, one employee recalled. When they ‘hit the wall’ from the unrelenting pace, there is only one solution: ‘Climb the wall,’ others reported. To be the best Amazonians they can be, they should be guided by the leadership principles, 14 rules inscribed on handy laminated cards. When quizzed days later, those with perfect scores earn a virtual award proclaiming, ‘I’m Peculiar’ — the company’s proud phrase for overturning workplace conventions.”

(Okay real talk I actually didn’t remember the bit about the “virtual award.” I started rereading the article for evidence and it was right there in the second paragraph. I’m starting to get suspicious about how easy these assholes are making this for me.)

What’s notable about this is not that Amazon turned out to be the bad guy. We already knew that, both because of the much worse situation of their warehouse workers and because, you know, it’s a corporation in a capitalist society. What’s important is this:

“[Jeff Bezos] created a technological and retail giant by relying on some of the same impulses: eagerness to tell others how to behave; an instinct for bluntness bordering on confrontation; and an overarching confidence in the power of metrics . . .

Amazon is in the vanguard of where technology wants to take the modern office: more nimble and more productive, but harsher and less forgiving.”

What’s happening in avant-garde workplaces like Amazon is the same thing that’s happened in games. The problem with games was that they weren’t providing any real value, and the problem with work in a capitalist society is that most of it is similarly pointless. The solution in games was to fake meaning, and the solution in work is going to be the same thing.

And, just as it did in games, this tactic is going to succeed:

“[M]ore than a few who fled said they later realized they had become addicted to Amazon’s way of working.

‘A lot of people who work there feel this tension: It’s the greatest place I hate to work,’ said John Rossman, a former executive there who published a book, ‘The Amazon Way.’

. . .

Amazon has rules that are part of its daily language and rituals, used in hiring, cited at meetings and quoted in food-truck lines at lunchtime. Some Amazonians say they teach them to their children.

. . .

‘If you’re a good Amazonian, you become an Amabot,’ said one employee, using a term that means you have become at one with the system.

. . .

[I]n its offices, Amazon uses a self-reinforcing set of management, data and psychological tools to spur its tens of thousands of white-collar employees to do more and more.

. . .

‘I was so addicted to wanting to be successful there. For those of us who went to work there, it was like a drug that we could get self-worth from.’”

It’s only once these people burn out and leave that they’re able to look back and realize they were working for nothing. This is exactly the same phenomenon as staying up all night playing some hack RPG because you got sucked in to the leveling mechanism. It’s mechanical addiction to a fake goal.

The fundamental problem here, of course, is that Amazon isn’t actually trying to make anything other than money. A common apologist argument for capitalism is that economic coercion is required to motivate people to produce things, but this is pretty obviously untrue. First, people have been building shit since long before currency came into the picture; more importantly, it’s obvious just from simple everyday observation that people are motivated to try to do a good job when they feel like they’re working on something that matters, and people slack off and cut corners when they know that what they’re doing is actually bullshit. The problem with work in a capitalist society is that people aren’t fools; the reason employees have to be actively “motivated” is because they know that what they’re doing doesn’t merit motivation.

The focus with Amazon has mostly been on the fact that they’re “mean”; the Times contrasts them with companies like Google that entice employees with lavish benefits rather than psychological bullying. But this difference is largely aesthetic; the reason Google offers benefits such as meals and daycare is that it expects its employees to live at their jobs, just as Amazon does.

As always, it’s important to view the system’s cruelest symptoms not as abnormal but as extra-normative behavior. The reason Amazon does what it does is because it can: it has the kind of monitoring technology required to pull this off and its clout commands the kind of devotion from its employees required to get away with it. Amazon is currently on the cutting edge; as information technology becomes more and more anodyne, this will become less and less the case. Consider that Google’s double-edged beneficence is only possible because Google is richer than fuck, consider the kind of cost-cutting horseshit your company pulls, and then consider the kind of cost-cutting horseshit your company would pull if it had Amazon-like levels of resourcefulness and devotion.

So, while publications like the New York Times are useful for getting the sort of “average” ruling-class perspective on the issues of the day, you have to keep the ideological assumptions of this perspective in mind, which in this case is super easy: the Times assumes that Amazon’s goal of maximizing its “productivity” is a valid and even virtuous one (also, did you notice how they claimed that this is happening because “technology wants” it to happen? Classic pure ideology). All of the article’s hand-wringing is merely about whether Amazon’s particular methods are “too harsh” or “unsustainable.” The truth, obviously, is that corporate growth itself is a bad thing because corporate growth means profit growth and profits are by definition the part of the economy getting sucked out by rich fucks instead of actually being used to produce things for people. This goes double for Amazon specifically, which doesn’t contribute any original functionality of its own, but merely supersedes functionalities already being provided by existing companies in a more profitable fashion.

And this is where things get scary. With video games, the only real threat is that, by locking themselves into their Sisyphean feedback loop, games will become hyper-effective at wasting the time of the kind of people who have that kind of time to waste. Tragic, in a sense, but in another sense we’re talking about people who are making a choice and who are consequently reaping what they’ve sown. But the problem with the economy is that when rich fucks play games, the outcome affects everybody. And when those games are designed against meaning, and all of us are obligated to play in order to survive, what we’re growing is a value system, and what we’re harvesting is nihilism. Bad design is a fate worse than death.

In this vein, I strongly recommend that you get a load of this asshole:

“’In the office of the future,’ said Kris Duggan, chief executive of BetterWorks, a Silicon Valley start-up founded in 2013, ‘you will always know what you are doing and how fast you are doing it. I couldn’t imagine living in a world where I’m supposed to guess what’s important, a world filled with meetings, messages, conference rooms, and at the end of the day I don’t know if I delivered anything meaningful.’”

Can you imagine living in a world where values are determined by humans? It’s getting kind of difficult!

When the situation is this fucked, even the New York Times has its moments:

“Mr. Bohra declined to let any of his employees be interviewed. But he said the work was more focused now, which meant smaller teams taking on bigger workloads.”

You know you’re an asshole when the shit you’re pulling is so blatantly horrific that even the “paper of record” is scoring sick burns on you from behind its veil of ersatz objectivity.


The thing is, when it comes to values, “money” in society has the same function as “score” in video games: it’s a heuristic that maps only loosely onto the thing that it’s actually supposed to represent. Ideally, economic growth would represent the actual human-life-improving aspects of a society, and to an extent, it does. Despite everything, most people really are trying to make the world a decent place to live. But a capitalist society is one where “growth” is pursued for its own sake, where spending a million dollars to feed starving children is just as good as spending that money on car decals, or on incrementally faster smartphones, or on weapons.

This is why you need to watch the fuck out any time someone starts talking about “meritocracy.” The problem with “meritocracy” is the same as the problem with “utilitarianism”: you have to actually define “merit” or “utility,” and that’s the entire question in the first place. With utilitarianism this is less of a problem, since it’s more of a philosophical question and this understanding is usually part of the discussion (also, when utilitarianism was first introduced it was a revolutionary new idea in moral philosophy, it’s just that today it tends to be invoked by people who want to pretend like they’ve solved morality when they actually haven’t even started thinking about it). But the meritocracy people are actually trying to get their system implemented; indeed, they often claim that their “meritocracy” already exists.

To be explicit, the word “meritocracy” is internally inconsistent. Claiming that a society should be a “democracy,” for example, establishes a goal: a society’s rulership should be as representative of the popular will as possible (that is, assuming the word “democracy” is being used in good faith, which is rarely the case). But the concept of “merit” requires a goal in order to be meaningful. It’s trivial to say that society should favor the “best,” because the question is precisely: the best at what? The most creative, or the most efficient? The most compassionate, or the most ruthless? Certainly, our current society, including our corporations, is controlled by people who are the best at something, it’s just that that “something” isn’t what most of us want to promote.

The problem isn’t that these people are hiding their motives; they talk big but they aren’t actually that sophisticated, especially when it comes to philosophy. It’s worse: the problem is that they have no goals in the first place. For all their talk of “disruption,” they are in truth blindly following the value system implicitly established by the set of historical conditions they happen to be operating in (see also: Rand, Ayn). This is necessarily the case for anyone who focuses their life on making money, since money doesn’t actually do anything by itself; it means whatever society says it means. This is why rich fucks tend to turn towards philanthropy, or at least politics: as an attempt to salvage meaning from what they’ve done with their lives. But even then, the only thing they know how to do is to focus on reproducing the conditions of their own success. When gazing into the abyss, all they can see is themselves.

Thus far, the great hope of humanity has lain in the fact that our rulers are perpetually incapable of getting their shit together. The problem is that they no longer have to. If nuclear weapons gave them the ability to destroy the world by accident, information technology has given them the ability to destroy values just as accidentally. A blind, retarded beast is still capable of crushing through sheer weight. The reason achievements in games took off isn’t because anyone designed things that way, it’s because fake-goal-focused games appeal to people, they sell. The reason Amazon seems to be trying to design a dystopian workplace isn’t because of evil mastermindery, it’s simply because they have the resources to pursue their antigoal of corporate growth with full abandon. Indeed, what we mean by “dystopia” is not an ineffective society, it’s a society that is maximally effective towards bad ends. And if capitalists are allowed to define our values by omission, if the empty ideal of “meritocracy” is taken as common sense rather than an abdication of responsibility, if arbitrary achievement has replaced actual experience, then the rough beast’s hour has come round at last; it is slouching toward Silicon Valley to be born.

How to smell a rat

I’m all for taking tech assholes down a notch (or several notches), but this kind of alarmism isn’t actually helpful:

“It struck me that the search engine might know more about my unconscious than I do—a possibility that would put it in a position not only to predict my behavior, but to manipulate it. Lose your privacy, lose your free will—a chilling thought.”

Don’t actually read that article, it’s bad. It’s a bunch of pathetic bourgeois lifestyle details spun into a conspiracy theory that’s terrifying only in its dullness, like a lobotomized Philip K. Dick plot. But it is an instructive example of how to get things about as wrong as possible.

I want to start with a point about the “free will” thing, since there are some pretty common and illuminating errors at work here. The reason that people think there’s a contradiction between determinism and free will (there’s not) is that they think determinism means that people can “predict” what you’re going to do, and therefore you aren’t really making a decision. This isn’t even necessarily true on its own: it may not be practically possible to do the calculations required to simulate a human brain fast enough for the results to be useful (that is, faster than the speed at which the universe does them. The reason we can calculate things faster than the universe can is that we abstract away all the irrelevant bits, but when it comes to something as complex as the brain, almost everything is relevant. This is why our ability to predict the weather is limited, for example. There’s too much relevant data to process in the amount of time we have to do it). But the more fundamental point is that free will has nothing to do with predictability.

Imagine you’re out to dinner with a friend who’s a committed vegan. You look at the menu and notice there’s only one vegan entree. Given this, you can predict with very high accuracy what your friend is going to order. But the reason you can do this is precisely because of your friend’s free will: their predictability is the result of a choice they made. There’s only one possible thing they can do, but that’s because it’s the only thing that they want to do.

Inversely, imagine your friend instead has a nervous disorder that causes them to freeze up when faced with a large number of choices. Their coping mechanism in such situations is to quickly make a completely random choice. Here, you can’t predict at all what your friend is going to order, and in this case it’s precisely because they aren’t making a free choice. They can potentially order anything, but the one thing they can’t do is order something they actually want.

The source of the error here is that people interpret “free will” to mean “I’m a special snowflake.” Since determinism means that you aren’t special, you’re just an object like everything else, it must also mean that you don’t have free will. But this folk notion of “free will” as “freedom from constraints” is a fantasy; as demonstrated by our vegan friend, freedom, properly understood, is actually an engagement with constraints (there’s no such thing as there being no constraints; if you were floating in a featureless void there would be nothing that could have caused you to develop any actual characteristics. Practically speaking, you wouldn’t exist). Indeed, nobody is actually a vegan as such; rather, people are vegan because of facts about the real world that, under a certain moral framework, compel this choice.

This applies broadly: rather than the laws of physics preventing us from making free choices, it is only because we live in an ordered universe that our choices are real. The only two possibilities are order or chaos, and it’s obvious that chaos is precisely the situation in which there really wouldn’t be any such thing as free will.

The third alternative that some people seem to be after is something that is ordered but is “outside” the laws of physics. Let’s call this thing “soul power.” The idea is that soul power would allow a person’s will to impinge upon the laws of physics, cheating determinism. But if soul power allows you to circumvent the laws of physics, then all that means is that we instead need laws of soul power to understand the universe; if there were no such laws, if soul power were chaotic, then it wouldn’t solve the problem. What’s required is something that allows us to use past information to make a decision in the present, i.e. the future has to be determined by the past. And if this is so, it must be possible to understand the principles by which soul power operates. Ergo, positing soul power doesn’t solve anything; the difference between physical laws and soul laws is merely an implementation detail.

Relatedly, what your desires are in the first place is also either explicable or chaotic. So, in the same way, it doesn’t matter whether your desires come from basic physics or from some sort of divine guidance; whatever the source, your desires are only meaningful if they arise from the appropriate sorts of real-world interactions. If, for example, you grow up watching your grandfather slowly die of lung cancer after a lifetime of smoking, that experience needs to be able to compel you to not start smoking. The situation where this is not the case is obviously the one in which you do not have free will. What would be absurd is if you somehow had a preference for or against smoking that was not based on your actual experiences with the practice.

Thus, these are the two halves of the free will fantasy: that it makes you a special little snowflake exempt from the limits of science, and that you’re capable of “pure” motivations that come from the deepest part of your soul and are unaffected by dirty reality. What is important to realize is that both of these ideas are completely wrong, and that free will is still a real thing.

When we understand this, we can start to focus on what actually matters about free will. Rather than conceptualizing it holistically, that is, arguing about whether humans “do” or “don’t” have free will, we can look at individual decisions and determine whether or not they are being made freely.

Okay, so, we were talking about mass data acquisition by corporations (“Big Data” is a bad concept and you shouldn’t use it). Since none of the corporations in question employ a mercenary army (yet), what we should be talking about is economic coercion. As a basic example: Amazon has made a number of power plays for the purpose of controlling as much commercial activity as possible. As a result, the convenience offered by Amazon is such that it is difficult for many people not to use it, despite it now being widely recognized that Amazon is a deeply immoral company. If there were readily available alternatives to Amazon, or if our daily lives were unharried enough to allow us to find non-readily available alternatives, we would be more able to take the appropriate actions with regard to the information we’ve received about Amazon’s employment practices. The same basic dynamic applies to every other “disruptive” company.

(Side note: how hilarious is it that “disruptive” is the term used by people who support the practice? It’s such a classic nerd blunder to be so clueless about the fact that people can disagree with their goals that they take a purely negative term and try to use it like a cute joke, oblivious to the fact that they’re giving away the game.)

The end goal of Amazon, Google, and Facebook alike is to become “company towns,” such that all your transactions have to go through them (for Amazon this means your literal financial transactions, for Google it’s your access to information, and for Facebook it’s social interaction, which is why Facebook is the skeeviest one of the bunch). Of course, another name for this type of situation is “monopoly,” which is the goal of every corporation on some level (Uber is making a play for a monopoly on urban transportation, for example). But company towns and monopolies are things that have actually happened in the past, without the aid of mass data collection. So if the ubiquity of these companies is starting to seem scary (it is), it would probably be a good idea to keep our eyes on the prize.

And while the data acquisition that these companies engage in certainly makes all of this easier, it isn’t actually the cause. The cause, obviously, is the profit motive. That’s the only reason any of these companies are doing anything. I mean, a lot of this stuff actually is convenient. If we lived in a society that understood real consent and wasn’t constantly trying to fleece people, mass data acquisition would be a great tool with all sorts of socially positive uses. This wouldn’t be good for business, of course, just good for humanity.

But the people who constantly kvetch about how “spooky” it is that their devices are “spying” on them don’t actually oppose capitalism. On the contrary, these people are upset precisely because they’ve completely bought into the consumerist fantasy that their participation in the market defines them as a unique individual. This fantasy used to be required to sell people shit; it’s not like you can advertise a bottle of cancer-flavored sugar water on its merits. But the advent of information technology has shattered the illusion, revealing unavoidably that, from an economic point of view, each of us is a mere consumer. The only aspect of your being that capitalism cares about is how much wealth can be extracted from you. You are literally a number in a spreadsheet.

But destroying the fantasy ought to be a step forward, since it was horseshit in the first place. That’s why looking at the issue of mass surveillance from a consumer perspective is petty as all fuck. I actually feel pretty bad for the person who wrote that article (you remember, the one up at the top that you didn’t read), since he’s apparently living in a world where the advertisements he receives constitute a recognition of his innermost self. And, while none of us choose to participate in a capitalist society, there does come a point at which you’re asking for it. If you’re wearing one of those dumbass fitness wristbands all day long so that you can sync the data to your smartphone, you pretty much deserve whatever happens to you. Because guess what: there actually is more to life than market transactions. It is entirely within your abilities to sit down and read a fucking book, and I promise that nobody is monitoring your brainwaves to gain insight into your interpretation of Kafka.

(Actually, one of the reasons this sort of “paranoia” is so hard to swallow is that the recommendation engines and so forth that we’re talking about are fucking awful. I have no idea how anyone is capable of being spooked by how “clever” these bone-stupid algorithms are. Amazon can’t even make the most basic semantic distinctions: when you click on something, it has no idea whether you’re looking at it for yourself, or for a gift, or because you saw it on Worst Things For Sale, or because it was called Barbie and Her Sisters: Puppy Rescue and you just had to know what the hell that was. If they actually were monitoring you reading The Metamorphosis, they’d probably be trying to sell you bug spray.)
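If you want to see why nobody should be spooked, here’s a toy sketch of the intent-blind, click-counting logic being described. Every name and item here is made up for illustration; this is obviously not Amazon’s actual code, just the general shape of a naive co-click recommender:

```python
from collections import Counter

# A click log with no notion of *why* anyone clicked anything.
click_log = [
    ("alice", "the_metamorphosis"),
    ("alice", "barbie_puppy_rescue"),  # clicked out of sheer disbelief
    ("alice", "bug_spray"),            # a gift, nothing to do with her tastes
    ("bob", "the_metamorphosis"),      # bob is just reading Kafka
]

def recommend(user, log, top_n=3):
    """Recommend whatever co-occurs with the user's clicks. A click is a click."""
    mine = {item for u, item in log if u == user}
    # Anyone who clicked anything I clicked counts as a "similar" user.
    neighbors = {u for u, item in log if item in mine and u != user}
    # Tally everything those users clicked that I haven't.
    tally = Counter(item for u, item in log
                    if u in neighbors and item not in mine)
    return [item for item, _ in tally.most_common(top_n)]

print(recommend("bob", click_log))
# -> ['barbie_puppy_rescue', 'bug_spray']
```

bob reads Kafka and gets offered Barbie and bug spray, because the system has no way to tell alice’s rubbernecking and gift-shopping from genuine interest. That’s the “spying” everyone is so scared of.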

Forget Google, this is the real threat to humanity: the petty bourgeois lifestyle taken to such an extreme that the mere recognition of forces greater than one’s own consumption habits is enough to precipitate an existential crisis. I’m fairly embarrassed to actually have to say this, but it’s apparently necessary: a person is not defined by their browsing history, there is such a thing as the human heart, and you can’t map it out by correlating data from social media posts.

Of course, none of this means that mass surveillance is not a critical issue; quite the opposite. We’ve pretty obviously been avoiding the real issue here, which is murder. The most extreme consequences of mass surveillance are not theoretical, they have already happened to people like Abdulrahman al-Awlaki. This is why it is correct to treat conspiracy theorists like addled children: for all their bluster, they refuse to engage with the actual conspiracies that are actually killing people right now. They’re play-acting at armageddon.

There is one term that must be understood by anyone who wants to even pretend to have the most basic grounding from which to speak about political issues, and that term is COINTELPRO.

“A March 4th, 1968 memo from J Edgar Hoover to FBI field offices laid out the goals of the COINTELPRO – Black Nationalist Hate Groups program: ‘to prevent the coalition of militant black nationalist groups;’ ‘to prevent the rise of a messiah who could unify and electrify the militant black nationalist movement;’ ‘to prevent violence on the part of black nationalist groups;’ ‘to prevent militant black nationalist groups and leaders from gaining respectability;’ and ‘to prevent the long-range growth of militant black nationalist organizations, especially among youth.’ Included in the program were a broad spectrum of civil rights and religious groups; targets included Martin Luther King, Malcolm X, Stokely Carmichael, Eldridge Cleaver, and Elijah Muhammad.”

“From its inception, the FBI has operated on the doctrine that the ‘preliminary stages of organization and preparation’ must be frustrated, well before there is any clear and present danger of ‘revolutionary radicalism.’ At its most extreme dimension, political dissidents have been eliminated outright or sent to prison for the rest of their lives. There are quite a number of individuals who have been handled in that fashion. Many more, however, were ‘neutralized’ by intimidation, harassment, discrediting, snitch jacketing, a whole assortment of authoritarian and illegal tactics.”

“One of the more dramatic incidents occurred on the night of December 4, 1969, when Panther leaders Fred Hampton and Mark Clark were shot to death by Chicago policemen in a predawn raid on their apartment. Hampton, one of the most promising leaders of the Black Panther party, was killed in bed, perhaps drugged. Depositions in a civil suit in Chicago revealed that the chief of Panther security and Hampton’s personal bodyguard, William O’Neal, was an FBI infiltrator. O’Neal gave his FBI contacting agent, Roy Mitchell, a detailed floor plan of the apartment, which Mitchell turned over to the state’s attorney’s office shortly before the attack, along with ‘information’ — of dubious veracity — that there were two illegal shotguns in the apartment. For his services, O’Neal was paid over $10,000 from January 1969 through July 1970, according to Mitchell’s affidavit.”

The reason this must be understood is that COINTELPRO is what happens when the government considers something an actual threat: they shut it the fuck down. If the government isn’t attempting to wreck your shit, it’s because you don’t matter.

With regard to the suppression of political discontent in America, it’s commonly acknowledged that “things are better now,” meaning it’s been a while since we’ve had a real Kent State Massacre type of situation (which isn’t to say that the government is not busy killing Americans, only that these killings, most obviously murders by police, are not political in the sense we’re discussing here: they’re part of a system of control, not a response to a direct threat). But this is only because Americans are now so comfortable that no one living in America is willing to take things to the required level (consider that the police were able to quietly rout Occupy in the conventional manner, without creating any inconvenient martyrs). This is globalization at work: as our slave labor has been outsourced, so too has our discontent.

And none of this actually has anything to do with surveillance technology per se. Governments kill whoever they feel like, using whatever technology happens to be available at the time. If a movement gets to be a big enough threat that the government actually feels the need to take it down the hard way, they certainly will use the data provided by tech companies to do so. But not having that data wouldn’t stop them. The level of available technology is not the relevant criterion. Power is.

It would, of course, be great if we could pass some laws preventing the government from blithely snatching up any data it can get its clumsy fingers around, as well as regulations enforcing real consent for data acquisition by tech companies. But the fact that lawmakers have a notoriously hard time keeping up with technology is more of a feature than a bug. The absence of a real legislative framework creates a situation in which both the government and corporations are free to do pretty much whatever the hell they want. As such, there’s a strong disincentive for anyone who matters to actually try to change this state of affairs.

In summary, mass surveillance is a practical problem, not a philosophical one. The actual thing keeping us out of a 1984-style surveillance situation is the fact that all the required data can’t practically be processed: total surveillance generates data in real time, so reviewing all of it would consume roughly as many person-hours as it took to produce, meaning you’d need a second population just to watch the first. So what actually happens is that the data all gets hoovered up and stored on some big server somewhere, dormant and invisible, until someone makes the political choice to access it in a certain way, looking for a certain pattern – and then decides what action to take in response to their findings. The key element in this scenario is not the camera on the street (or in your pocket), but the person with their finger on the trigger.
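For a sense of scale, here’s the back-of-envelope arithmetic. Every number is a made-up round figure, deliberately generous to the watchers; none of this is anybody’s real staffing data:

```python
# Back-of-envelope: data generated vs. data reviewable, per day.
population = 330_000_000   # people being surveilled
waking_hours = 16          # surveilled person-hours per person per day
analysts = 100_000         # people employed to review the take
shift = 8                  # hours each analyst works per day
speedup = 10               # assume skimming is 10x faster than real time

generated = population * waking_hours     # person-hours of data per day
reviewable = analysts * shift * speedup   # person-hours reviewable per day

print(f"generated:  {generated:,} person-hours/day")   # 5,280,000,000
print(f"reviewable: {reviewable:,} person-hours/day")  # 8,000,000
print(f"gap:        {generated // reviewable}x")       # 660x
```

Even with wildly generous assumptions, comprehensive review is off by orders of magnitude. Hence the actual architecture: store everything, query selective slices later, and let a human decide what to do with the hits.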

Unless you work for The Atlantic, in which case you can write what appears to be an entire cover article on the subject without ever mentioning any of this. So when you hear these jokers going on about how “spooky” it is that their smartphones are spying on them, recognize this attitude for what it is: the expression of a state of luxury so extreme that it makes petty cultural detritus like targeted advertising actually seem meaningful.