Bubble babble

I’m entirely certain you’re well-acquainted with the idea that “media bubbles” are a big problem right now, spreading disinformation and perverting ideology and generally destroying society in an orgy of postmodern technological mediation. Certainly, there is cause for concern; unlike in the past, when everyone had complete, correct information that they used to make fully rational decisions, nowadays humans have somehow become closed-minded and parochial. The figure of the barely-informed loudmouth shouting his kneejerk opinions into the public square represents a truly new development in history. And now that bad things are happening in politics, which has never been the case before, it’s clear that something must have gone horribly wrong.

No, okay, so I’m super annoyed about all the hyperventilation; there’s nothing more obnoxious than small-minded arguments against small-mindedness. But there’s also a real issue here. The internet certainly is generating a world-historical amount of garbage data, and political polarization really has increased to an extreme degree. The fundamental dynamic at issue here is what pretentious people like to call “epistemic closure.” When one’s sources of information or methods for evaluating it are limited in some fundamental way, certain areas of knowledge become inaccessible – or, worse, only accessible in the wrong way, such that the formation of inaccurate ideas comes to be considered true knowledge. Fox News will never give a sympathetic hearing to an idea like universal single-payer health care, so if that’s where all your information comes from, you can never develop an informed opinion on this topic. It’s important to realize that this is an absolute constraint; it’s not that it becomes harder to get to the truth, it’s that it becomes impossible. This is the double edge of the Enlightenment ideal: since there’s no such thing as divine wisdom or whatever, you cannot form correct ideas without accurate and comprehensive information, regardless of how smart or conscientious or committed you are.

Now, one of the few positive results of the 2016 election is that no one is any longer laboring under the delusion that there’s any kind of “unbiased” source that can be relied on for complete information. “Traditional” news sources simply represent one particular set of biases. There are plenty of issues on which they’re incapable of informing you. Most obviously, an enforced centrist perspective will fail to understand a situation where the “center” is falling apart and all new growth is happening on the “extremes” (that is, it will understand the situation incorrectly, as a “breakdown of communication” or a “legitimacy crisis” or whatever). So the popular response to this is the idea of a “balanced media diet.” The worry is that the internet allows and/or forces people to self-sort into ever more polarized communities, so you have to make the effort to seek out sources that oppose your existing beliefs. The villains then become “algorithms” that deliver pre-polarized information, or “cult-like” communities that suppress dissent.

Unfortunately, it’s not that simple. The most important source of epistemic closure is our finitude as physical beings. Simply put, there are only so many hours each day you can spend reading shit, so it’s more than a little odd to argue that people should be spending more of said hours reading things they believe to be more wrong. If you could really read everything, and also spend the requisite time to analyze and distill it all, then sure, that would solve the problem. In reality, though, you have to choose what you’re going to care about, and any choice you make is going to define a particular horizon. If you’re a feminist, for example, you could spend half of your time reading feminist sources and the other half reading anti-feminist sources, and this would give you a “balanced” perspective, in the sense that you’d understand what’s going on on both sides. But this understanding will necessarily be shallower than the one you’d get by focusing your time on one side; you’ll miss deeper arguments and distinctions and internal diversity. For one thing, you might come to believe that there are only “two sides,” which is not the case. Anyone who knows a second thing about feminism knows that its herstory is coated with blood spilled by many thousands of vicious internal disagreements. One way to get over feminist dogmatism is to read more anti-feminism, but an equally effective option is to read more feminism. There isn’t one choice that “works” and one choice that doesn’t. There are different choices that have different effects. Some bubbles are bigger than others, but you can’t not be in a bubble.

This is why blaming the internet or “algorithms” or whatever misses the mark. Like, I don’t enjoy defending tech assholes, but they really just aren’t relevant to this situation. There is a sort of consumer rights issue here; people should be able to find out how their feeds and things are being customized and change them if they want to. But arguing that search results should be more “responsible” is arguing the opposite: it’s arguing for non-transparent corporations to have more control over what people read. I mean, it’s pretty obvious that most people talking about this are only thinking things through from their side. They see lots of “bad” articles floating around, and they feel like “someone should do something,” so they imagine that Google can somehow code social responsibility for them. Practically speaking, though, you can’t make that kind of a distinction in general.[1] “Misinformation” is a value judgment made by the end user. If you write an algorithm that adds more articles about global warming to the feeds of denialists, that same algorithm will necessarily also add more denialist articles to the feeds of people who believe in global warming. You can’t have it both ways. Rather, trying to have it both ways is exactly how things get fucked up. Someone at the New York Times gets it into their head that they have a “liberal bias” that needs to be corrected, so they hire an Islamophobic global warming denialist to write opinion columns. Problem solved.

People want to read things that accord with their beliefs, and – this is the important part – they have good reasons for doing so. The reason feminists, for example, disprefer reading misogynist diatribes isn’t because they’re offended or whatever, it’s because they believe feminism to be true, and they’re obviously more interested in reading things that are probably true than things that are probably false.

You don’t just automatically start understanding things once you’ve read broadly enough. You have to process the information, and how you do that – and why you’re doing it – is going to affect what conclusions you end up with. Like, there is a problem with certain types of feminists spending all of their time yelling at Bad Things and not actually developing their ideas. But if you’re one of these people, and you decide to “broaden your media diet,” all that’s going to happen is that you’re going to find more things to yell at. It’s going to strengthen your existing biases, and that’s going to happen regardless of what it is that you’re reading, and the reason for this is that it’s what you want. This isn’t even a bad thing, because the only way this is not the case is if you lack the ability to critically analyze information, which is, um, a somewhat worse situation to be in. If your goal is just to avoid being wrong, then you might as well not read anything. But if your reason for reading things and drawing conclusions is to do something with the information, then you can’t just wait around until you’re “sure,” because that’s never. In order to actually get somewhere, you have to take a stand somewhere and start moving, which will necessitate rejecting opposing ideas. Breathing underwater requires a bubble.

I’m not just applying this to my own side, either. The fact that people believe all kinds of weird conspiracy theories about the Clintons makes perfect sense, because the Clintons really are classic amoral political schemers, so if you’re opposed to them, it’s more accurate than not to assume that they’re up to some shady shit. Besides, liberals believe whatever nonsense people come up with about Trump, too. It’s the same thing. This is the normal way human communication works.

It does remain the case that the normal way human communication works is badly, and that real lies have real consequences. If you believe that Planned Parenthood is literally dismembering infants and selling their body parts to, uh, somebody (I’m not deep enough into this to know whence the nationwide demand for baby torsos supposedly originates), your advocacy on the subject is going to be somewhat more zealous. But learning the actual fact that only X% of Planned Parenthood’s expenditures go towards abortion-related services doesn’t change the moral calculus of the situation. If abortion is evil, then a little bit of it is still evil. It’s certainly worthwhile to correct lies, but you can’t fact-check your way around morality. If abortion is actually moral, then Planned Parenthood’s particular operating details don’t matter. An organization that spent 100% of its funds on abortion and sold the remains for ice cream money would be a moral organization. Focusing on the nuts and bolts here means dodging the real issue, and this is generally the case in political discussions. Even if Clinton really did use her secret email server to help the Illuminati plan Benghazi, the actual question at hand remains which policies we prefer to advance as a society. In general, misinformation does not add a unique problem to our existing difficulties in figuring out how to talk to each other. It makes things worse, but it’s not itself a crisis.

What is a crisis is when these sorts of discussions become impossible, when an enforced “healthy diet” drains the flavor from the world. When you’re stuck reading nothing but “respectable” media sources, that’s when you have a real problem, and extremism is the solution to that problem. It’s what makes new things possible. Which means that, yes, even the recent explosive growth of rightist extremism has to be understood as a positive development. InfoWars may be maximally false, but if you don’t have InfoWars, you also don’t have the truth. The fact that people have these beliefs is a bad thing, of course, but given that they do, it’s better for them to be out in the open. I mean, their agenda hasn’t actually changed, right? Reagan talked pretty on the TV, but his whole cut-services-and-fellate-corporations deal was exactly the same thing as what the current government’s up to right now. People lately have been praising Bush Jr. for talking nice about Islam, but he was doing this at the same time that his administration was turning Muslims into America’s new Great Civilizational Enemy; Trump is just picking up where he left off. Those situations were worse than the one we’re in now – rather, those situations are why we’re now in our current situation – because there was more obfuscatory rhetoric that had to be disentangled before you could get at what was really going on. This is now less of a problem; we’re getting closer to the point where people actually know what the stakes are.

It’s comforting to imagine that there’s a “middle ground” where we can all get along peaceably, but there’s not. Extremism doesn’t create disagreements, it reveals the disagreements that were already there, because people have real disagreements. Pretending this is not the case prevents anything worthwhile from ever happening. We don’t want a society where there’s “reasonable debate” about sexism, where half the time the Hyde Amendment is in place and half the time it isn’t. We want a society where sexism doesn’t exist. We want everyone trapped inside the feminism bubble, permanently.

This is the truth that must be acknowledged. All the things that people are so concerned about these days – political polarization, ideological extremism, the speed and diversity of information, the dethronement of traditionally respected sources of various kinds of authority – are the things that are, in spite of everything, going well. There’s no way to “fix” this, because it’s not broken. What was broken was the “end of history” bullshit that convinced people there were no fights left to be had, and that situation is now better. We are more confused now because we are closer to the truth – we have, in at least some sense, stopped lying. This is what has to happen. Getting the ocean without the roar of its many waters is not a real option. The real options are: retreat or advance.

 


[1] From a technical perspective, the reason this can’t work is that you have to write the code before you know what data it’s going to be run against, so you would have to be able to predict what information is going to be true or false before that information has actually been generated, meaning you can’t rely on the details of the information itself, meaning you can’t actually be making a real judgment as to whether it’s “disinformation” or not; you can only be relying on contextual coincidence. And if you try to get around this by using human intervention, all you’ve done is appointed an arbitrary, unaccountable person to act as an arbiter of truth, which is obviously several steps backwards.

People’s choice

This extremely boring controversy over Facebook’s topic sorter algorithm or whatever it is is extremely boring, but it’s at least good for one thing: it’s clarifying how people implicitly view Facebook, and, correspondingly, what kind of society they think they live in.

Now, the whole thing has obviously been ginned up by the Right-Wing Scandal Generator, which at this point seems to have self-actualized and gone Skynet. It’s essentially conspiracy theorist Mad Libs: take any liberal-ish group or any government agency except the military, slap on a charge about converting kittens to Satanism or saying something mean about white people, and see if it has legs. Which it usually does, since these people are operating under a severe case of epistemic closure.

Anyway, for the rest of us, the newsworthy bit was that Facebook actually has people deciding which stories are popular instead of blind algorithms. Of course, in practice, there’s no difference. Algorithms are written by people, and they carry whatever implicit or explicit biases went into their creation. The point, though, is that people were assuming Facebook didn’t have its fingers in the pie, and they were upset to find out that it did. This has happened before. When Facebook ran its emotional manipulation experiment, for example, there wasn’t any practical consequence anyone could point to, but people didn’t like the idea of Facebook picking and choosing what they saw instead of letting it happen “naturally.”

What makes this all not make sense is the fact that Facebook is a corporation. Corporations obviously have their own interests and biases. In fact, we expect them to; we understand corporations as actors, if not persons. This is why we expect them to do things like withdraw advertising from bigoted programs or support charities, and why we get mad when they outsource jobs or use stereotypes to sell products. It’s also why we talk about pointless things like corporate “greed” or “corruption” instead of focusing on the actual structure that causes them to act the way they do. So if people thought of Facebook in this way, there wouldn’t be anything untoward about its behavior. Of course Facebook, staffed largely by young liberals (or at least tech libertarians), is not going to be interested in promoting Racist Grandpa’s email forwards. Accusing Facebook of censoring conservative stories makes exactly as much sense as accusing Fox News of censoring liberal stories. And remember, it’s the small-government fetishists who are getting mad about this, which, yeah, it’s opportunism, but it’s not even a sensible claim unless you assume that Facebook has a general public responsibility. After all, these same people are currently engaged in a deathly struggle to save private corporations from such scourges as having to sign contraception coverage waiver forms and having to bake cakes for gay people.

So what this means is that people don’t think of Facebook as a corporation. And this makes total sense, because Facebook doesn’t do any of the things that corporations are supposed to be for. It doesn’t create a product that people buy, or create content supported by advertising. It’s not even something like Google’s search engine where it feels like a utility but is still a tool with an actual function. Facebook is a bulletin board. It allows people to do things with it rather than doing anything itself. Sure, it’s a piece of software that requires development and maintenance, but in terms of function, Facebook is essentially a park. It’s a public space where people come to interact with each other. It’s a commons. The only reason demands for neutrality in its operation are comprehensible is that everyone implicitly understands that it doesn’t make sense for anyone to be profiting off of it.

The funny thing about capitalism as a world-defining ideology is that nobody actually believes in it. We expect corporations to be good people rather than to follow the incentives that define their existence in the first place. And we expect the commons to be respected and maintained rather than privatized and pillaged. Despite the much-vaunted “cynicism” of the American public, people actually go around assuming they’re living in a much better society than they actually are – one that basically works for people, and whose problems are the result of bad actors rather than the necessary consequences of the systems that constitute it. A world of bad actors is quite a lot better than a world of bad systems, because a world of bad actors can be fixed by getting rid of the bad actors. But a world of bad systems will go wrong no matter how the people in it act, and we haven’t yet figured out how to reliably change systems for the better. One assumes there’s a way, but one also doesn’t get one’s hopes up.

In the meantime, if you really want a neutral platform, there’s only one reasonable course of action. Nationalize Facebook.

Gamed to death

My post about level ups needs an addendum, as there’s a related issue that’s somewhat more practical. That is, it’s an actual threat.

The concept of power growth can be generalized to the concept of accumulation, the difference being that accumulation doesn’t have to refer to anything. When you’re leveling up in a game, it’s generally for a reason, e.g. you need more HP in order to survive an enemy’s attack or something. Even in traditional games, though, this is not always the case. There are many RPGs where you have like twelve different stats and it’s not clear what half of them even do, yet it’s still satisfying to watch them all go up when you level. This leads many players to pursue “stat maxing” even when there’s no practical application for those stats. Thus, we see that the progression aspect of leveling is actually not needed to engage players. It is enough to provide the opportunity for mere accumulation, a.k.a. watching numbers go up. This might sound very close to literally watching paint dry, but the terrible secret of video games is that people actually enjoy it.

The extreme expression of this problem would be a game that consists only of leveling up, one that has no actual gameplay but merely provides the player with the opportunity to watch numbers go up and rewards their “effort” with additional opportunities to watch numbers go up. This game, of course, exists; it’s called FarmVille, and it has been immensely popular and influential, spawning a wide variety of imitators. The terror is real.
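To make the emptiness concrete, here’s a minimal sketch of the pure-accumulation loop in the abstract. Everything here is invented for illustration (the class and method names are mine, not any actual game’s code); the point is that the entire “game” reduces to one verb that increments a counter and one “reward” that makes the counter increment faster.

```python
# A "pure accumulation" game: no decisions, no skill, no content.
# All names invented for illustration; this mimics the FarmVille-style
# loop in the abstract, not any real game's implementation.

class AccumulationGame:
    def __init__(self):
        self.coins = 0
        self.level = 1
        self.next_level_cost = 10

    def click(self):
        """The only verb in the game: make a number go up."""
        self.coins += self.level  # higher level -> bigger increments

    def try_level_up(self):
        """The only 'reward': the opportunity to accumulate faster."""
        if self.coins >= self.next_level_cost:
            self.coins -= self.next_level_cost
            self.level += 1
            self.next_level_cost *= 2  # treadmill: each level costs more
            return True
        return False

game = AccumulationGame()
for _ in range(100):  # "playing"
    game.click()
    game.try_level_up()
print(game.level, game.coins)
```

Note that the loop body never branches on anything the player chose; the doubling cost is the whole design, stretching the same non-activity out indefinitely.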

Of course, as its very popularity indicates, FarmVille itself is not the problem. In fact, while FarmVille is often taken to be the dark harbinger of the era of smartphone games, its design can be traced directly back to the traditional games that it supposedly supplanted (the worst trait of “hardcore” gamebros is that they refuse to ever look in the damn mirror). Even in action-focused games such as Diablo II or Resident Evil 4, much of the playtime involves running around and clicking on everything in order to accumulate small amounts of currency and items. While this has a purpose, allowing you to purchase new weapons and other items that help you out during the action segments, it doesn’t have to be implemented this way. You could just get the money automatically whenever you defeat an enemy, as you do in most RPGs. But even in RPGs where this happens, there are still treasures and other collectibles littering the environment. This is a ubiquitous design pattern, and it exists for a reason: because running around and picking up vaguely useful junk is fun.

This pattern goes all the way back to the beginning. Super Mario Bros., for example, had coins; they’re one of the defining aspects of what is basically the ur-text of video games. Again, these coins actually did something (they gave you extra lives, eventually; getting up to 100 coins in the original Super Mario Bros. is actually surprisingly hard), but again again, this isn’t the actual reason they were there. They were added for a specific design reason: to provide players with guidance. Super Mario Bros. was a brand-new type of game when it came out; the designers knew that they had to make things clear in order to prevent players from getting lost. So one of the things they did was add coins at strategic locations to encourage the player to take certain actions and try to get to certain places. And the reason this works is because collecting coins is fun on its own, even before the player figures out that they’re going to need as many extra lives as they can get.

The coins here are positioned to indicate to the player that they're supposed to jump onto the moving platform to proceed.

And there’s something even more fundamental than collectibles, something that was once synonymous with the concept of video games: score. Back in the days of arcade games, getting a high score was presented as the goal of most games. When you were finished playing, the game would ask you to enter your initials, and then show you your place on the scoreboard, hammering in the idea that this was the point of playing. Naturally, since arcade games were designed to not be “completable,” this was a way of adding motivation to the gameplay. But there’s more to it than that. By assigning different point values to different actions, the designers are implicitly telling the player what they’re supposed to be doing. Scoring is inherently an act of valuation.

In Pac-Man, for example, there are two ways you can use the power pellets: you can get the ghosts off your ass for a minute while you try to clear the maze, or you can hunt the ghosts down while they’re vulnerable. Since the latter is worth more points than anything else, the game is telling you that this is the way you’re supposed to be playing. The reason for this, in this case, is that it’s more fun: chasing the ghosts creates an interesting back-and-forth dynamic, while simply traversing the maze is relatively boring. Inversely, old light-gun games like Area 51 or Time Crisis often had hostages that you were penalized for shooting. In a case like this, the game is telling you what not to do; rather than shooting everything indiscriminately, you were meant to be careful and distinguish between potential targets.
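The claim that scoring is an act of valuation can be stated almost literally in code. This is a toy sketch with invented point values (loosely modeled on the Pac-Man and light-gun examples above, not taken from either game): the designer’s value system is entirely contained in the weight table, because an optimizing player will simply do whatever maximizes the sum.

```python
# A toy scoring table (numbers invented for illustration). The "design"
# lives entirely in these weights: an optimal player does whatever
# maximizes the total, so the weights ARE the game's value system.
SCORE_TABLE = {
    "clear_pellet": 10,     # traversing the maze: small reward
    "eat_ghost": 200,       # hunting ghosts: what you're "supposed" to do
    "shoot_enemy": 100,
    "shoot_hostage": -500,  # negative points say "do not do this"
}

def score(actions):
    """Total score for a sequence of player actions."""
    return sum(SCORE_TABLE[a] for a in actions)

# Two play styles with the same number of actions:
cautious = ["clear_pellet"] * 4
aggressive = ["eat_ghost"] * 4
assert score(aggressive) > score(cautious)  # the game "prefers" hunting
```

The same mechanism runs in both directions: a large positive weight tells the player what the game is about, and a large negative weight (the hostage) tells them what it is against.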

So, in summary, the point of “points” or any other “numbers that go up” is to provide an in-game value system. What, then, does this mean for a game like FarmVille, which consists only of points? It means that such a game has no values. It’s nihilistic. It’s essentially the unironic version of Duchamp’s Fountain. The point of Fountain was that the work itself had no traditional artistic merit; it “counted” as art only because it was presented that way. Similarly, FarmVille is not what you’d normally call a “game,” but it’s presented as one, so it is one. The difference, of course, is that Duchamp was making a rather direct negative point. People weren’t supposed to admire Fountain, they were supposed to go fuck themselves. FarmVille, on the other hand, expects people to genuinely enjoy it. Which they do.

And again, the point is that FarmVille is not an aberration; its nihilism is only the most naked expression of the nihilism inherent in the way modern video games are understood. One game that made this point was Progress Quest, a ruthless satire of the type of gameplay epitomized by FarmVille. In Progress Quest, there is literally no gameplay: you run the application and it just automatically starts making numbers go up. It’s a watching paint dry simulator. The catch is that Progress Quest predates FarmVille by several years (art imitates life, first as satire, then as farce); it was not parodying “degraded” smartphone games, but the popular and successful games of its own time, such as EverQuest, which would become a major influence on almost everything within the mainstream gaming sphere. The call is coming from inside the house.

Because the fact that accumulation is “for” something in a game like Diablo II ultimately amounts to no more than it does for FarmVille. You kill monsters so that you can get slightly better equipment and stats, which you then use to kill slightly stronger monsters and get slightly better equipment again, ad nauseam. It’s the same loop, only more spread out and convoluted; it fakes meaning by disguising itself. In this sense, FarmVille, like Fountain, is to be praised for revealing a simple truth that had become clouded by incestuous self-regard.

There is, of course, a real alternative, which is for games to actually have some kind of aesthetic value, and for that to be the motivation for gameplay. This isn’t hard to understand. Nobody reads a book because they get points for each page they turn; indeed, the person who reads a famous book simply “to have read it” is a figure of mockery. We read books because they offer us experiences that matter. There is nothing stopping video games from providing the same thing.

The catch is that doing this requires a realization that the primary audience for games is currently unwilling to make: that completing a goal in a video game is not a real accomplishment. As games have invested heavily in the establishment of arbitrary goals, they have taken their audience down the rabbit hole with them. Today, we are in a position where certain people actually think that being good at video games matters, that the conceptualization of games as skill-based challenges is metaphysically significant (just trust me on this one, there’s evidence for it but you really don’t want to see it). As a result, games have done an end-run around the concept of meaning. Rather than condemning Sisyphus to forever pushing his rock based on the idea that meaningless labor is the worst possible fate, we have instead convinced Sisyphus that pushing the rock is meaningful in the traditional sense; he now toils of his own volition, blissfully (I wish I could take credit for this metaphor, but this guy beat me to it).

This is an understandable mistake. As humans, limited beings seeking meaning in the raw physicality of the universe, we’ve become accustomed to looking for signs that distinguish meaningful labor from mere toil. It is far from an unusual error to mistake the sign for the destination. But the truth is that any possible goal (money, popularity, plaudits, power) is also something that we’ve made up. The universe itself provides us with nothing. But this realization does not have to stop us: we can insist on meaning without signs, abandon the word without losing the sense. This is the radical statement that Camus was making when he wrote that “we must imagine Sisyphus happy.” He was advising us to reject this fundamental aspect of our orientation towards reality.

We have not followed his advice. On the contrary, games have embraced their own meaninglessness. The most obvious symptom of this is achievements, which have become ubiquitous in all types of games (the fact that they’re actually built into Steam is evidence enough). Achievements are anti-goals, empty tokens that encourage players to perform tasks for no reason other than to have performed them. Many are quite explicit about this; they’re things like “do [some arbitrary action] 1000 more times than you would have to do it to complete the game.” Some achievements are better than this, some even point towards interesting things that add to the gameplay experience, but the point is the principle: that players are expected to perform fully arbitrary tasks and to expect nothing else from games. In light of this, it does not matter whether a game is fun or creative or original or visually appealing. No amount of window dressing can counteract the fact that games are fundamentally meaningless.

If you want a picture of the future of games, imagine a human finger clicking a button and a human eye watching a number go up. Forever.


While renouncing games is a justifiable tactical response to the current situation, it’s not a solution. Games are just a symptom. Game designers aren’t villains, they’re just hacks. They’re doing this stuff because it works; the problem is in people.

Accumulation essentially exploits a glitch in human psychology, similar to gambling (many of these games have an explicit gambling component). It compels people to act against their reason. It’s not at all uncommon these days to hear people talk about how they kept playing a game “past the point where it stopped being fun.” I’m not exactly sure what the source of the problem is. Evolution seems unlikely, as pre-civilized humans wouldn’t have had much opportunity for hoarding-type behavior. Also, the use of numbers themselves seems to be significant, which suggests a post-literate affliction. I suppose the best guess for the culprit would probably be capitalism. Certainly, the concept of currency motivates many people to accumulate it for no practical reason.

Anyway, I promised you a threat, so here it is:

“They are told to forget the ‘poor habits’ they learned at previous jobs, one employee recalled. When they ‘hit the wall’ from the unrelenting pace, there is only one solution: ‘Climb the wall,’ others reported. To be the best Amazonians they can be, they should be guided by the leadership principles, 14 rules inscribed on handy laminated cards. When quizzed days later, those with perfect scores earn a virtual award proclaiming, ‘I’m Peculiar’ — the company’s proud phrase for overturning workplace conventions.”

(Okay real talk I actually didn’t remember the bit about the “virtual award.” I started rereading the article for evidence and it was right there in the second paragraph. I’m starting to get suspicious about how easy these assholes are making this for me.)

What’s notable about this is not that Amazon turned out to be the bad guy. We already knew that, both because of the much worse situation of their warehouse workers and because, you know, it’s a corporation in a capitalist society. What’s important is this:

“[Jeff Bezos] created a technological and retail giant by relying on some of the same impulses: eagerness to tell others how to behave; an instinct for bluntness bordering on confrontation; and an overarching confidence in the power of metrics . . .

Amazon is in the vanguard of where technology wants to take the modern office: more nimble and more productive, but harsher and less forgiving.”

What’s happening in avant-garde workplaces like Amazon is the same thing that’s happened in games. The problem with games was that they weren’t providing any real value, and the problem with work in a capitalist society is that most of it is similarly pointless. The solution in games was to fake meaning, and the solution in work is going to be the same thing.

And, just as it did in games, this tactic is going to succeed:

“[M]ore than a few who fled said they later realized they had become addicted to Amazon’s way of working.

‘A lot of people who work there feel this tension: It’s the greatest place I hate to work,’ said John Rossman, a former executive there who published a book, ‘The Amazon Way.’

. . .

Amazon has rules that are part of its daily language and rituals, used in hiring, cited at meetings and quoted in food-truck lines at lunchtime. Some Amazonians say they teach them to their children.

. . .

‘If you’re a good Amazonian, you become an Amabot,’ said one employee, using a term that means you have become at one with the system.

. . .

[I]n its offices, Amazon uses a self-reinforcing set of management, data and psychological tools to spur its tens of thousands of white-collar employees to do more and more.

. . .

‘I was so addicted to wanting to be successful there. For those of us who went to work there, it was like a drug that we could get self-worth from.’”

It’s only once these people burn out and leave that they’re able to look back and realize they were working for nothing. This is exactly the same phenomenon as staying up all night playing some hack RPG because you got sucked into the leveling mechanism. It’s mechanical addiction to a fake goal.

The fundamental problem here, of course, is that Amazon isn’t actually trying to make anything other than money. A common apologist argument for capitalism is that economic coercion is required to motivate people to produce things, but this is pretty obviously untrue. First, people have been building shit since long before currency came into the picture; more importantly, it’s obvious just from simple everyday observation that people are motivated to try to do a good job when they feel like they’re working on something that matters, and people slack off and cut corners when they know that what they’re doing is actually bullshit. The problem with work in a capitalist society is that people aren’t fools; the reason employees have to be actively “motivated” is that they know that what they’re doing doesn’t merit motivation.

The focus with Amazon has mostly been on the fact that they’re “mean”; the Times contrasts them with companies like Google that entice employees with lavish benefits rather than psychological bullying. But this difference is largely aesthetic; the reason Google offers benefits such as meals and daycare is because it expects its employees to live at their jobs, just as Amazon does.

As always, it’s important to view the system’s cruelest symptoms not as abnormal but as extra-normative behavior. The reason Amazon does what it does is that it can: it has the kind of monitoring technology required to pull this off and its clout commands the kind of devotion from its employees required to get away with it. Amazon is currently on the cutting edge; as information technology becomes more and more commonplace, this will become less and less the case. Consider that Google’s double-edged beneficence is only possible because Google is richer than fuck, consider the kind of cost-cutting horseshit your company pulls, and then consider the kind of cost-cutting horseshit your company would pull if it had Amazon-like levels of resources and devotion.

So, while publications like the New York Times are useful for getting the sort of “average” ruling-class perspective on the issues of the day, you have to keep the ideological assumptions of this perspective in mind, which in this case is super easy: the Times assumes that Amazon’s goal of maximizing its “productivity” is a valid and even virtuous one (also, did you notice how they claimed that this is happening because “technology wants” it to happen? Classic pure ideology). All of the article’s hand-wringing is merely about whether Amazon’s particular methods are “too harsh” or “unsustainable.” The truth, obviously, is that corporate growth itself is a bad thing because corporate growth means profit growth and profits are by definition the part of the economy getting sucked out by rich fucks instead of actually being used to produce things for people. This goes double for Amazon specifically, which doesn’t contribute any original functionality of its own, but merely supersedes functionalities already being provided by existing companies in a more profitable fashion.

And this is where things get scary. With video games, the only real threat is that, by locking themselves into their Sisyphean feedback loop, games will become hyper-effective at wasting the time of the kind of people who have that kind of time to waste. Tragic, in a sense, but in another sense we’re talking about people who are making a choice and who are consequently reaping what they’ve sown. But the problem with the economy is that when rich fucks play games, the outcome affects everybody. And when those games are designed against meaning, and all of us are obligated to play in order to survive, what we’re growing is a value system, and what we’re harvesting is nihilism. Bad design is a fate worse than death.

In this vein, I strongly recommend that you get a load of this asshole:

“’In the office of the future,’ said Kris Duggan, chief executive of BetterWorks, a Silicon Valley start-up founded in 2013, ‘you will always know what you are doing and how fast you are doing it. I couldn’t imagine living in a world where I’m supposed to guess what’s important, a world filled with meetings, messages, conference rooms, and at the end of the day I don’t know if I delivered anything meaningful.’”

Can you imagine living in a world where values are determined by humans? It’s getting kind of difficult!

When the situation is this fucked, even the New York Times has its moments:

“Mr. Bohra declined to let any of his employees be interviewed. But he said the work was more focused now, which meant smaller teams taking on bigger workloads.”

You know you’re an asshole when the shit you’re pulling is so blatantly horrific that even the “paper of record” is scoring sick burns on you from behind its veil of ersatz objectivity.


The thing is, when it comes to values, “money” in society has the same function as “score” in video games: it’s a heuristic that maps only loosely onto the thing that it’s actually supposed to represent. Ideally, economic growth would represent the actual human-life-improving aspects of a society, and to an extent, it does. Despite everything, most people really are trying to make the world a decent place to live. But a capitalist society is one where “growth” is pursued for its own sake, where spending a million dollars to feed starving children is just as good as spending that money on car decals, or on incrementally faster smartphones, or on weapons.

This is why you need to watch the fuck out any time someone starts talking about “meritocracy.” The problem with “meritocracy” is the same as the problem with “utilitarianism”: you have to actually define “merit” or “utility,” and that’s the entire question in the first place. With utilitarianism this is less of a problem, since it’s more of a philosophical question and this understanding is usually part of the discussion (also, when utilitarianism was first introduced it was a revolutionary new idea in moral philosophy, it’s just that today it tends to be invoked by people who want to pretend like they’ve solved morality when they actually haven’t even started thinking about it). But the meritocracy people are actually trying to get their system implemented; indeed, they often claim that their “meritocracy” already exists.

To be explicit, the word “meritocracy” is internally incoherent. Claiming that a society should be a “democracy,” for example, establishes a goal: a society’s rulership should be as representative of the popular will as possible (that is, assuming the word “democracy” is being used in good faith, which is rarely the case). But the concept of “merit” requires a goal in order to be meaningful. It’s trivial to say that society should favor the “best,” because the question is precisely: the best at what? The most creative, or the most efficient? The most compassionate, or the most ruthless? Certainly, our current society, including our corporations, is controlled by people who are the best at something, it’s just that that “something” isn’t what most of us want to promote.

The problem isn’t that these people are hiding their motives; they talk big but they aren’t actually that sophisticated, especially when it comes to philosophy. It’s worse: the problem is that they have no goals in the first place. For all their talk of “disruption,” they are in truth blindly following the value system implicitly established by the set of historical conditions they happen to be operating in (see also: Rand, Ayn). This is necessarily the case for anyone who focuses their life on making money, since money doesn’t actually do anything by itself; it means whatever society says it means. This is why rich fucks tend to turn towards philanthropy, or at least politics: as an attempt to salvage meaning from what they’ve done with their lives. But even then, the only thing they know how to do is to focus on reproducing the conditions of their own success. When gazing into the abyss, all they can see is themselves.

Thus far, the great hope of humanity has lain in the fact that our rulers are perpetually incapable of getting their shit together. The problem is that they no longer have to. If nuclear weapons gave them the ability to destroy the world by accident, information technology has given them the ability to destroy values just as accidentally. A blind, retarded beast is still capable of crushing through sheer weight. The reason achievements in games took off isn’t because anyone designed things that way, it’s because fake-goal-focused games appeal to people; they sell. The reason Amazon seems to be trying to design a dystopian workplace isn’t because of evil mastermindery, it’s simply because they have the resources to pursue their antigoal of corporate growth with full abandon. Indeed, what we mean by “dystopia” is not an ineffective society, it’s a society that is maximally effective towards bad ends. And if capitalists are allowed to define our values by omission, if the empty ideal of “meritocracy” is taken as common sense rather than an abdication of responsibility, if arbitrary achievement has replaced actual experience, then the rough beast’s hour has come round at last; it is slouching toward Silicon Valley to be born.

How to smell a rat

I’m all for taking tech assholes down a notch (or several notches), but this kind of alarmism isn’t actually helpful:

“It struck me that the search engine might know more about my unconscious than I do—a possibility that would put it in a position not only to predict my behavior, but to manipulate it. Lose your privacy, lose your free will—a chilling thought.”

Don’t actually read that article, it’s bad. It’s a bunch of pathetic bourgeois lifestyle details spun into a conspiracy theory that’s terrifying only in its dullness, like a lobotomized Philip K. Dick plot. But it is an instructive example of how to get things about as wrong as possible.

I want to start with a point about the “free will” thing, since there are some pretty common and illuminating errors at work here. The reason that people think there’s a contradiction between determinism and free will (there’s not) is that they think determinism means that people can “predict” what you’re going to do, and therefore you aren’t really making a decision. This isn’t even necessarily true on its own: it may not be practically possible to do the calculations required to simulate a human brain fast enough for the results to be useful (that is, faster than the speed at which the universe does them. The reason we can calculate things faster than the universe can is that we abstract away all the irrelevant bits, but when it comes to something as complex as the brain, almost everything is relevant. This is why our ability to predict the weather is limited, for example. There’s too much relevant data to process in the amount of time we have to do it). But the more fundamental point is that free will has nothing to do with predictability.

Imagine you’re out to dinner with a friend who’s a committed vegan. You look at the menu and notice there’s only one vegan entree. Given this, you can predict with very high accuracy what your friend is going to order. But the reason you can do this is precisely because of your friend’s free will: their predictability is the result of a choice they made. There’s only one possible thing they can do, but that’s because it’s the only thing that they want to do.

Inversely, imagine your friend instead has a nervous disorder that causes them to freeze up when faced with a large number of choices. Their coping mechanism in such situations is to quickly make a completely random choice. Here, you can’t predict at all what your friend is going to order, and in this case it’s precisely because they aren’t making a free choice. They can potentially order anything, but the one thing they can’t do is order something they actually want.
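The two dinner scenarios can be sketched as a toy simulation (all names and menu items here are hypothetical, purely for illustration): the principled chooser is perfectly predictable because the choice follows from a value they hold, while the coping-mechanism chooser is unpredictable precisely because no preference is being expressed.

```python
import random

MENU = ["seitan curry", "steak frites", "roast chicken", "pork belly"]
VEGAN_DISHES = {"seitan curry"}

def committed_vegan(menu):
    # A free choice: the decision follows deterministically from a
    # value the person actually holds, so it's perfectly predictable.
    return next(dish for dish in menu if dish in VEGAN_DISHES)

def frozen_chooser(menu, rng):
    # A coping mechanism, not a choice: the outcome is unpredictable
    # precisely because no preference is being expressed.
    return rng.choice(menu)

# The vegan orders the same thing every time -- because of their
# will, not despite it.
assert all(committed_vegan(MENU) == "seitan curry" for _ in range(100))

# The random chooser scatters across the menu with no pattern.
rng = random.Random(0)
orders = {frozen_chooser(MENU, rng) for _ in range(100)}
print(orders)
```

The point the sketch makes concrete: predictability tracks the presence of a will, not its absence.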

The source of the error here is that people interpret “free will” to mean “I’m a special snowflake.” Since determinism means that you aren’t special, you’re just an object like everything else, it must also mean that you don’t have free will. But this folk notion of “free will” as “freedom from constraints” is a fantasy; as demonstrated by our vegan friend, freedom, properly understood, is actually an engagement with constraints (there’s no such thing as there being no constraints; if you were floating in a featureless void there would be nothing that could have caused you to develop any actual characteristics. Practically speaking, you wouldn’t exist). Indeed, nobody is actually a vegan as such; rather, people are vegan because of facts about the real world that, under a certain moral framework, compel this choice.

This applies broadly: rather than the laws of physics preventing us from making free choices, it is only because we live in an ordered universe that our choices are real. The only two possibilities are order or chaos, and it’s obvious that chaos is precisely the situation in which there really wouldn’t be any such thing as free will.

The third alternative that some people seem to be after is something that is ordered but is “outside” the laws of physics. Let’s call this thing “soul power.” The idea is that soul power would allow a person’s will to impinge upon the laws of physics, cheating determinism. But if soul power allows you to circumvent the laws of physics, then all that means is that we instead need laws of soul power to understand the universe; if there were no such laws, if soul power were chaotic, then it wouldn’t solve the problem. What’s required is something that allows us to use past information to make a decision in the present, i.e. the future has to be determined by the past. And if this is so, it must be possible to understand the principles by which soul power operates. Ergo, positing soul power doesn’t solve anything; the difference between physical laws and soul laws is merely an implementation detail.

Relatedly, what your desires are in the first place is also either explicable or chaotic. So, in the same way, it doesn’t matter whether your desires come from basic physics or from some sort of divine guidance; whatever the source, your desires are only meaningful if they arise from the appropriate sorts of real-world interactions. If, for example, you grow up watching your grandfather slowly die of lung cancer after a lifetime of smoking, that experience needs to be able to compel you to not start smoking. The situation where this is not the case is obviously the one in which you do not have free will. What would be absurd is if you somehow had a preference for or against smoking that was not based on your actual experiences with the practice.

Thus, these are the two halves of the free will fantasy: that it makes you a special little snowflake exempt from the limits of science, and that you’re capable of “pure” motivations that come from the deepest part of your soul and are unaffected by dirty reality. What is important to realize is that both of these ideas are completely wrong, and that free will is still a real thing.

When we understand this, we can start to focus on what actually matters about free will. Rather than conceptualizing it holistically, that is, arguing about whether humans “do” or “don’t” have free will, we can look at individual decisions and determine whether or not they are being made freely.

Okay, so, we were talking about mass data acquisition by corporations (“Big Data” is a bad concept and you shouldn’t use it). Since none of the corporations in question employ a mercenary army (yet), what we should be talking about is economic coercion. As a basic example: Amazon has made a number of power plays for the purpose of controlling as much commercial activity as possible. As a result, the convenience offered by Amazon is such that it is difficult for many people not to use it, despite it now being widely recognized that Amazon is a deeply immoral company. If there were readily available alternatives to Amazon, or if our daily lives were unharried enough to allow us to find non-readily available alternatives, we would be more able to take the appropriate actions with regard to the information we’ve received about Amazon’s employment practices. The same basic dynamic applies to every other “disruptive” company.

(Side note: how hilarious is it that “disruptive” is the term used by people who support the practice? It’s such a classic nerd blunder to be so clueless about the fact that people can disagree with their goals that they take a purely negative term and try to use it like a cute joke, oblivious to the fact that they’re giving away the game.)

The end goal of Amazon, Google, and Facebook alike is to become “company towns,” such that all your transactions have to go through them (for Amazon this means your literal financial transactions, for Google it’s your access to information and for Facebook it’s social interaction, which is why Facebook is the skeeviest one out of the bunch). Of course, another name for this type of situation is “monopoly,” which is the goal of every corporation on some level (Uber is making a play for monopoly on urban transportation, for example). But company towns and monopolies are things that actually have happened in the past, without the aid of mass data collection. So if the ubiquity of these companies is starting to seem scary (it is), it would probably be a good idea to keep our eyes on the prize.

And while the data acquisition that these companies engage in certainly makes all of this easier, it isn’t actually the cause. The cause, obviously, is the profit motive. That’s the only reason any of these companies are doing anything. I mean, a lot of this stuff actually is convenient. If we lived in a society that understood real consent and wasn’t constantly trying to fleece people, mass data acquisition would be a great tool with all sorts of socially positive uses. This wouldn’t be good for business, of course, just good for humanity.

But the people who constantly kvetch about how “spooky” it is that their devices are “spying” on them don’t actually oppose capitalism. On the contrary, these people are upset precisely because they’ve completely bought into the consumerist fantasy that their participation in the market defines them as a unique individual. This fantasy used to be required to sell people shit; it’s not like you can advertise a bottle of cancer-flavored sugar water on its merits. But the advent of information technology has shattered the illusion, revealing unavoidably that, from an economic point of view, each of us is a mere consumer. The only aspect of your being that capitalism cares about is how much wealth can be extracted from you. You are literally a number in a spreadsheet.

But destroying the fantasy ought to be a step forward, since it was horseshit in the first place. That’s why looking at the issue of mass surveillance from a consumer perspective is petty as all fuck. I actually feel pretty bad for the person who wrote that article (you remember, the one up at the top that you didn’t read), since he’s apparently living in a world where the advertisements he receives constitute a recognition of his innermost self. And, while none of us choose to participate in a capitalist society, there does come a point at which you’re asking for it. If you’re wearing one of those dumbass fitness wristbands all day long so that you can sync the data to your smartphone, you pretty much deserve whatever happens to you. Because guess what: there actually is more to life than market transactions. It is entirely within your abilities to sit down and read a fucking book, and I promise that nobody is monitoring your brainwaves to gain insight into your interpretation of Kafka.

(Actually, one of the reasons this sort of “paranoia” is so hard to swallow is that the recommendation engines and so forth that we’re talking about are fucking awful. I have no idea how anyone is capable of being spooked by how “clever” these bone-stupid algorithms are. Amazon can’t even make the most basic semantic distinctions: when you click on something, it has no idea whether you’re looking at it for yourself, or for a gift, or because you saw it on Worst Things For Sale, or because it was called Barbie and Her Sisters: Puppy Rescue and you just had to know what the hell that was. If they actually were monitoring you reading The Metamorphosis they’d probably be trying to sell you bug spray.)
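The semantic blindness described above is easy to demonstrate with a toy co-occurrence recommender (the sessions and item names are invented for illustration; this is a sketch of the general technique, not of any company's actual system). The algorithm sees a click as a click: it cannot distinguish “shopping for myself” from “gift” from “morbid curiosity,” so a few curiosity clicks dominate the recommendation.

```python
from collections import Counter, defaultdict

# Each session is just a list of item clicks -- one bit of "interest"
# per item, with all intent stripped away.
sessions = [
    ["kafka_metamorphosis", "bug_spray"],        # curiosity click
    ["kafka_metamorphosis", "kafka_the_trial"],  # actual reader
    ["kafka_metamorphosis", "bug_spray"],        # another curiosity click
]

# Count how often each pair of items appears in the same session.
co_clicks = defaultdict(Counter)
for session in sessions:
    for item in session:
        for other in session:
            if other != item:
                co_clicks[item][other] += 1

def recommend(item):
    # Recommend whatever co-occurs most often, intent be damned.
    return co_clicks[item].most_common(1)[0][0]

print(recommend("kafka_metamorphosis"))  # bug_spray
```

Two joke clicks outvote one genuine reader, and The Metamorphosis gets you bug spray.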

Forget Google, this is the real threat to humanity: the petty bourgeois lifestyle taken to such an extreme that the mere recognition of forces greater than one’s own consumption habits is enough to precipitate an existential crisis. I’m fairly embarrassed to actually have to say this, but it’s apparently necessary: a person is not defined by their browsing history, there is such a thing as the human heart, and you can’t map it out by correlating data from social media posts.

Of course, none of this means that mass surveillance is not a critical issue; quite the opposite. We’ve pretty obviously been avoiding the real issue here, which is murder. The most extreme consequences of mass surveillance are not theoretical, they have already happened to people like Abdulrahman al-Awlaki. This is why it is correct to treat conspiracy theorists like addled children: for all their bluster, they refuse to engage with the actual conspiracies that are actually killing people right now. They’re play-acting at armageddon.

There is one term that must be understood by anyone who wants to even pretend to have the most basic grounding from which to speak about political issues, and that term is COINTELPRO.

“A March 4th, 1968 memo from J Edgar Hoover to FBI field offices laid out the goals of the COINTELPRO – Black Nationalist Hate Groups program: ‘to prevent the coalition of militant black nationalist groups;’ ‘to prevent the rise of a messiah who could unify and electrify the militant black nationalist movement;’ ‘to prevent violence on the part of black nationalist groups;’ ‘to prevent militant black nationalist groups and leaders from gaining respectability;’ and ‘to prevent the long-range growth of militant black nationalist organizations, especially among youth.’ Included in the program were a broad spectrum of civil rights and religious groups; targets included Martin Luther King, Malcolm X, Stokely Carmichael, Eldridge Cleaver, and Elijah Muhammad.”

“From its inception, the FBI has operated on the doctrine that the ‘preliminary stages of organization and preparation’ must be frustrated, well before there is any clear and present danger of ‘revolutionary radicalism.’ At its most extreme dimension, political dissidents have been eliminated outright or sent to prison for the rest of their lives. There are quite a number of individuals who have been handled in that fashion. Many more, however, were “neutralized” by intimidation, harassment, discrediting, snitch jacketing, a whole assortment of authoritarian and illegal tactics.”

“One of the more dramatic incidents occurred on the night of December 4, 1969, when Panther leaders Fred Hampton and Mark Clark were shot to death by Chicago policemen in a predawn raid on their apartment. Hampton, one of the most promising leaders of the Black Panther party, was killed in bed, perhaps drugged. Depositions in a civil suit in Chicago revealed that the chief of Panther security and Hampton’s personal bodyguard, William O’Neal, was an FBI infiltrator. O’Neal gave his FBI contacting agent, Roy Mitchell, a detailed floor plan of the apartment, which Mitchell turned over to the state’s attorney’s office shortly before the attack, along with ‘information’ — of dubious veracity — that there were two illegal shotguns in the apartment. For his services, O’Neal was paid over $10,000 from January 1969 through July 1970, according to Mitchell’s affidavit.”

The reason this must be understood is that COINTELPRO is what happens when the government considers something an actual threat: they shut it the fuck down. If the government isn’t attempting to wreck your shit, it’s because you don’t matter.

With regard to the suppression of political discontent in America, it’s commonly acknowledged that “things are better now,” meaning it’s been a while since we’ve had a real Kent State Massacre type of situation (which isn’t to say that the government is not busy killing Americans, only that these killings (most obviously, murders by police) are not political in the sense we’re discussing here (that is, they’re part of a system of control, but not a response to a direct threat)). But this is only because Americans are now so comfortable that no one living in America is willing to take things to the required level (consider that the police were able to quietly rout Occupy in the conventional manner, without creating any inconvenient martyrs). This is globalization at work: as our slave labor has been outsourced, so too has our discontent.

And none of this actually has anything to do with surveillance technology per se. Governments kill whoever they feel like using whatever technology happens to be available at the time. If a movement gets to be a big enough threat that the government actually feels the need to take it down the hard way, they certainly will use the data provided by tech companies to do so. But not having that data wouldn’t stop them. The level of available technology is not the relevant criterion. Power is.

It would, of course, be great if we could pass some laws preventing the government from blithely snatching up any data it can get its clumsy fingers around, as well as regulations enforcing real consent for data acquisition by tech companies. But the fact that lawmakers have a notoriously hard time keeping up with technology is more of a feature than a bug. The absence of a real legislative framework creates a situation in which both the government and corporations are free to do pretty much whatever the hell they want. As such, there’s a strong disincentive for anyone who matters to actually try to change this state of affairs.

In summary, mass surveillance is a practical problem, not a philosophical one. The actual thing keeping us out of a 1984-style surveillance situation is the fact that all the required data can’t practically be processed (as in it’s physically impossible, since there’s exactly as much data as total theoretically available person-hours). So what actually happens is that the data all gets hoovered up and stored on some big server somewhere, dormant and invisible, until someone makes the political choice to access it in a certain way, looking for a certain pattern – and then decides what action to take in response to their findings. The key element in this scenario is not the camera on the street (or in your pocket), but the person with their finger on the trigger.
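The processing claim above survives a back-of-envelope check (every number here is an illustrative assumption, not a measurement): recording everyone’s day produces exactly one watched-day of footage per person-day, so even an absurdly large corps of human reviewers could ever examine a sliver of it.

```python
# Back-of-envelope: reviewing footage in real time costs one
# watcher-hour per watched-hour. All figures are illustrative.
population = 300_000_000          # people being recorded (assumed)
hours_recorded_per_day = 24

footage_hours_per_day = population * hours_recorded_per_day

# Even a wildly implausible army of full-time human reviewers...
reviewers = 1_000_000
review_hours_per_day = reviewers * 8

coverage = review_hours_per_day / footage_hours_per_day
print(f"fraction of footage a human could ever watch: {coverage:.6f}")
# ...covers roughly a tenth of a percent -- which is why the data sits
# dormant until someone decides to query it for a specific pattern.
```

Hence the conclusion: the dangerous component isn’t the camera, it’s the person who decides which query to run.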

Unless you work for the Atlantic, in which case you can write what appears to be an entire cover article on the subject without ever mentioning any of this. So when you hear these jokers going on about how “spooky” it is that their smartphones are spying on them, recognize this attitude for what it is: the expression of a state of luxury so extreme that it makes petty cultural detritus like targeted advertising actually seem meaningful.