Face down

We all had a good laugh when Apple decided that the future of technology was making you unlock your phone by wiggling it in front of your face, every time you need to use it, in public. But the thing about extremely stupid ideas is that they have real underlying causes, which is why the funniest things are often simultaneously the most serious. This is no exception, and the real issue here is a particularly ugly one.

We should start by admitting an oft-ignored truth, which is that passwords are good. They’re the correct form of security at the level of the individual user, and the reason for this is that they are a proper technical implementation of consent. The problem is that, when a system gets a request to provide access to an account, it has no idea why or from where the request is coming in; it just has the request itself. So the requirement is that access is provided if and only if the person associated with the account wants it to be provided. The way you implement this is by establishing an unambiguous communication signal. This works just like a safe word in a BDSM scene: you take a signal that would normally never occur and assign a fixed meaning to it, so that when it does occur, you know exactly what it means. That’s what a password is, and that’s why it works. “Security questions,” on the other hand, are precisely how passwords don’t work, because anything personally associated with you is not a low-frequency signal. Anyone who knows that information can just send it in, so it doesn’t accord with user consent. All those celebrities who got hacked were actually compromised through their security questions, because of course they were, because personal information about celebrities is publicly available. They would have been perfectly fine had their email systems simply relied on ordinary passwords.
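
To make the mechanics concrete, here’s a minimal sketch in Python. The names and parameters are mine, purely illustrative, not any real system’s API; the point is just the shape of the thing: the server stores a derived check value rather than the signal itself, and access is granted if and only if a request reproduces the agreed-upon signal.

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    # The server never stores the signal itself, only a salted derivation of it.
    salt = os.urandom(16)
    check = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, check

def verify(attempt: str, salt: bytes, check: bytes) -> bool:
    # The server knows nothing about where the request came from or why;
    # it grants access if and only if the request reproduces the signal.
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, check)

salt, check = register("kittensarecute")
assert verify("kittensarecute", salt, check)   # the signal: consent, access
assert not verify("fluffy1998", salt, check)   # personal trivia: no consent, no access
```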

Furthermore, none of the alleged problems with passwords are real problems. The reason for all the stupid alternate-character requirements on passwords is supposedly that they increase complexity, but this doesn’t actually matter. The only thing that matters is that the signal is low frequency, and the problem with a password like “password123” isn’t that it lacks some particular combination of magic characters,1 but is simply that it’s high frequency. But anything that wouldn’t be within a random person’s top 100 guesses is, for practical purposes, zero frequency, so a password like “kittensarecute” or “theboysarebackintown” is essentially 100% secure. There’s no actual reason to complicate it any further, and in fact several reasons not to, because forgetting your password or having to write it down are real security threats.
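
To put illustrative numbers on “low frequency” (back-of-envelope assumptions, not measurements): an attacker who only ever gets $k$ guesses at the login prompt succeeds with probability at most

$$
P(\text{guessed}) \le \frac{k}{N},
$$

where $N$ is the size of the space their guessing strategy has to cover. If your password isn’t on their top-$k$ list, that probability is zero outright, and even a phrase built from a modest everyday vocabulary makes $N$ enormous: three words drawn from 2,000 common words already gives $2000^3 = 8 \times 10^9$ possibilities.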

Literally the only problem with simple passwords like this is that they can be hacked; that is, a computer program can derive them from a fixed pattern. If your password is a combination of dictionary words, then a “dictionary attack” can derive it from all the possible combinations of all the words in the dictionary in a relatively short amount of time, because that’s actually not all that much data. The frequency isn’t low enough. But the thing about this is that it’s portrayed as an end-user problem when it isn’t one at all; it’s a server problem. A user can’t actually guess how their password is going to be hacked; the attacker might use a dictionary attack, or they might pick a different pattern that happens to match the one you used in an attempt to evade a dictionary attack. The real way to prevent this is for the server to disallow it – the server shouldn’t allow a frequency of attempts high enough to convert a low-frequency signal into a high-frequency one. Preventing this isn’t the user’s job, because they can’t actually do anything about it. The server can.
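
Here’s roughly what that looks like on the server side – a minimal sketch with made-up policy numbers, not anybody’s production code:

```python
import time
from collections import defaultdict, deque

MAX_ATTEMPTS = 10        # illustrative policy, not a recommendation
WINDOW_SECONDS = 3600    # per hour, per account

_recent: dict[str, deque] = defaultdict(deque)

def attempt_allowed(account: str) -> bool:
    """Refuse to even check a password once an account has seen too many
    recent attempts. Keeping the guess frequency low is the server's job."""
    now = time.time()
    window = _recent[account]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()            # forget attempts outside the window
    if len(window) >= MAX_ATTEMPTS:
        return False                # low-frequency signal stays low-frequency
    window.append(now)
    return True
```

With a cap like that, even the $8 \times 10^9$-phrase space from the example above would take on the order of ninety thousand years to traverse through the login form, while an offline attacker running $10^{10}$ guesses per second clears it in under a second. Same password, same attack; the only variable is whether the server keeps the attempt frequency low.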

And of course no one is ever actually going to hack your password. You don’t matter enough for anyone to care. What actually happens, as one hears about constantly in the news, is that a company’s server gets breached and all the passwords on it are compromised from the back end. When this happens, the strength and secrecy of your password are completely irrelevant, because the attacker already has your credentials, no matter what form they’re in. Again, this is not a problem with passwords. The passwords are doing their job; it’s the server that’s failing.

So the thing about biometrics is that they’re worse than passwords, because they don’t implement consent. At best, they implement identity, but that’s not what you want. If the police arrest you and want to snoop through your phone without a warrant, they have your identity, so if your phone is secured through biometrics, they have access to it without your consent. But they don’t have your password unless you give it to them. Similarly, the ability of passwords to be changed when needed is a strength. It’s part of the implementation of consent: if the situation changes such that the previously agreed-upon term no longer communicates the thing it’s supposed to communicate, you have to be able to change it. In BDSM terms, if your safe word is “lizard,” but then you want to do a scene about, y’know, lizard people or something, then the word isn’t going to convey the right thing anymore, so you have to come up with a new one. This is the same thing that happens in a data breach: because someone else knows your password, it no longer communicates consent – but precisely because you can change it, it can continue to perform its proper function. Whereas if someone steals your biometric data, you’re fucked forever. So when Apple touts the success rate or whatever of their face-scanning thing, they’ve completely missed the point. It doesn’t matter how accurate it is, because it implements the wrong thing.2

So, given all of this, why would a major company expend the amount of resources required to implement biometrics? We’ve already seen the answers. First, passwords look bad from the end-user perspective, because they feel insecure – unless you’re forced to use a random jumble of characters, in which case they feel obnoxious. And in either case you have to manage multiple passwords, which can be genuinely difficult. Biometrics, by contrast, feel secure, even though they’re not, and they’re very easy to use. They also feel “future-y,” allowing companies to sell them like some big new fancy innovation, when they’re actually a step backwards. In short, they’re pretty on the outside. At the risk of putting too fine a point on it, Apple is invested in the conceptualization of technology as magic.

More than that, though, biometrics demonstrate a focus on the appearance of security at the expense of its actuality – that is, they’re security theater. What all those data breaches in the news indicate is that, for all the ridiculous security paraphernalia that gets foisted on us, companies don’t actually bother much with security on their end. They don’t want to spend the money, so they make you do it, and because you can’t do it, because you don’t actually have the necessary means, the result is actual insecurity. Thus, the appearance of security, mediated by opaque technology that most people don’t understand, provides these companies with cover for their own incompetence. The only function being performed here by “technology” is distraction.3

What this means, then, is that technology isn’t technology. That is, the things that we talk about when we talk about “tech” aren’t actually about tech. Indeed, “tech companies” aren’t even tech companies.4 Google and Facebook make their money through advertising; they’re ad companies. The fact that they use new types of software to sell their ads is only relevant to their business model in that it provides a shimmery sci-fi veneer to disguise their true, hideous forms. Amazon is not actually a website; it’s a big-box retailer in exactly the same vein as Target and Wal-Mart. A lot of people thought it was “ironic” when Amazon started opening physical stores, but that’s only the case if you assume that Amazon has some kind of ideological commitment to online ordering. What Amazon has an ideological commitment to is capturing market share, and they’re going to keep doing that using whatever technological means are available to them. Driving physical retailers out of business and then filling the vacuum with their own physical stores is precisely in line with how Amazon operates – it’s what you should expect them to do, if you actually understand what type of thing they are. Uber is only an “app” in the sense that the app mediates their actual business model, which is increasing the profits of taxi services by evading regulations and passing costs on to the drivers. (Uber’s business model doesn’t account for the significant maintenance costs incurred by constantly operating a vehicle, because those costs are borne by the drivers, who aren’t Uber’s employees. But Uber still takes the same cut of the profits regardless.) Apple is the closest, since they actually develop new technology, but even then they mostly make money by selling hardware (after having it manufactured as cheaply as possible), meaning they’re really just in the old-fashioned business of commodity production.

So if you try to understand these companies in terms of “tech,” you’re going to get everything wrong. There isn’t a design reason why Apple makes the choices it does; there’s a business reason. Nobody actually wanted an iPhone without a headphone port, but Apple relies on their sleek, minimalist imagery to move products, so they had to make the phone slimmer, even if it meant removing useful functionality. And of course no one is ever going to be interested in a solid-glass phone that shatters into a million pieces when you sneeze at it, but Apple had to come up with something that looked impressive to appease the investors and the media drones, so that’s what we got.

But this isn’t even limited to just these “new” companies; it’s the general dynamic by which technology relates to economics. There’s been a recent countertrend of elites pointing out that, actually, modern society is pretty great from a historical perspective, but they’re missing the point that this is despite our system of social organization, not because of it. That is, barring extreme disasters along the lines of the bubonic plague or the thing that we’re currently running headlong into, it would be incomprehensibly bizarre for the general standard of living not to increase over time. As long as humans are engaged in any productive activity at all, things are going to continuously get better, because things are being produced. The fact that we’re not seeing this – that real wages have been stagnant for decades and people are more stressed and have less leisure time than ever – indicates that we are in the midst of precisely such a disaster. Our current economic system is a world-historical catastrophe on par with the Black Death.

Do I even need to explicitly point out that this is why global warming is happening? It isn’t because of technology, it’s because rich fucks have decided they’d rather destroy the world for a short-term profit than be slightly less rich. It’s somewhat unfortunate that the physics are such that everyone is going to die, but the decision itself was made a long time ago. If it wasn’t greenhouse gases, it would be something else. There’s always nuclear war or mass starvation or what have you. The fact of the matter is that we’ve chosen a social configuration that doesn’t support human life. That’s the whole story.

To address this technically, it’s certainly true that the age of capitalism has seen a vast increase in worldwide standards of living, but it’s not capitalism that caused that. It’s actually the opposite: trade and industrialization created the conditions for capitalism to become possible in the first place. Capitalism is not the cause of industrialization or globalization, it’s the response to these things. It is the determination of how the results of these things will be applied, and what actually happens is that it ensures that the gains will always be pointed in the wrong direction. The fact of globalization has nothing to do with any of the problems attributed to it; the problems reside entirely in how globalization is happening: who’s managing it, what their priorities are, and where the results are going. Like, it’s really amazing to consider how much potential productivity is being wasted right now. All the people employed in advertising, or in building yachts, or in think tanks, or on corporate advisory boards, or in failed attempts at “regime change,” or designing new gadgets that are less functional than the old ones, or all those dumbass “internet-connected” kitchen appliances, all of that, all of the time and energy and resources being spent on all of that stuff and far more, is all pure waste. Imagine the kind of society we could have if all of that potential were actually being put to productive use.

And it’s deeply hilarious how committed everyone is to misunderstanding this as thoroughly as possible. Like, the actual word we have for someone who negatively fetishizes technology is “Luddite,” but the Luddites were precisely people who cared about the practical results of technology – they cared about the fact that their livelihoods were being destroyed. They attacked machines because those machines were killing them. Every clueless takemonger inveighing about how globalization is leaving people behind or social media is dividing us or smartphones are alienating us is completely failing to grasp the basic point that the Luddites instinctively understood. The results of technological developments are not properties of the technology itself; they arise from political choices. The technology is simply the means by which those choices are implemented. In just the same way, attacking technology is not merely a symptom of incomprehension or phobia or lifestyle. It is also a political choice.

An engine doesn’t tell you where to go or how to travel. It just generates kinetic energy. It can take you past the horizon, but if you instead point it into a ditch, it will be equally happy to drive you straight into the dirt. There’s nothing counterintuitive about that; the function of technology is no great mystery. It just obeys the rules – not only the physical ones, but the social ones as well. All of the problems that people attribute to technology (excepting things like software glitches that are actual implementation failures) are actually problems with the rules. The great lesson of the age of technology is that technology doesn’t matter; as long as society continues on in its present configuration, everything will continue to get worse.



  1. The way you can tell that complexity requirements are bullshit is that they’re all different. There are plenty of nerds available to run the numbers on this, so if there really were a particular combination of requirements that resulted in “high security,” it would have been figured out by now and the same solution would have been implemented everywhere. But because the actual solution is contextual – that is, it’s the thing that no one else is guessing, which also means it’s unstable – you can’t implement it as a fixed list of requirements. The reason it feels like each website’s requirements are just some random ideas that some intern thought sounded “secure” enough is that that’s exactly what they are.
  2. I mean, face-scanning can’t actually work the way they say it does, because of identical twins. If the scan can distinguish between identical twins, that means it’s using contextual cues such as hair and expression, which means there are cases when these things would cause it to fail for an individual user, and if it can’t distinguish between identical twins (or doppelgangers), then that’s also a failure. I’d also be curious to know how much work the engineers put into controlling for makeup, because that’s a pretty common and major issue, and I’m guessing the answer is not much. 
  3. The real situation is significantly more dire than this. It isn’t just that Equifax, for example, sucks at security, it’s that Equifax should not exist in the first place. Taking the John Oliver Strategy and making fun of Equifax for being a bunch of dummies completely misses what’s really going on here. 
  4. I’m not giving up my “tech assholes” tag though, it’s too perfect. 

See no evil

My last post requires an addendum. I mentioned that expecting social media companies to filter out bad political content is a fool’s errand, because all you’d be doing is shackling yourself to someone else’s biases. So there’s that, but there’s also a deeper, category-level confusion that has become increasingly prevalent and that pretty much nobody is picking up on.

Some time ago, Google changed its search interface to add little boxes and things for “recommended” results. This is supposed to make it easier to find answers to direct questions without having to go through a whole page of links. But people have been noticing that this approach leads to a lot of untoward results; for example, queries regarding the Holocaust used to produce Holocaust denial pages in the boxy results. It’s easy to understand why this happens: most people accept the occurrence of the Holocaust as a historical fact, so the only people who actually input queries along the lines of “did the Holocaust really happen?” are denialists (or at least budding denialists), who then click through to denialist sites. So the Google algorithm is just performing its usual function of showing people the most popular results correlated with their input.

There isn’t actually a way around this. As long as there are Holocaust denial sites on the internet, there will exist some query that directs you to them. I mean, if there wasn’t, Google wouldn’t be much of a search engine, right? But that doesn’t mean that there’s no problem here. Rather, the problem is specifically with the boxes that pick out some of the results and stamp them with the imprimatur of officiality. As long as that’s happening, Google actually is recommending those results. So the only sensible option here is to get rid of the boxy results. Google’s job is to show you what’s on the internet and nothing else.

Importantly, there is a technical reason why this is the correct solution. It is impossible for Google’s boxy results feature to work “correctly,” because it is internally contradictory. It is intended to be both a dynamically-generated response based on the most relevant data currently present on the internet and an Official Correct Answer. You can’t do both of those things at once. You have to pick one. Furthermore, picking the second one is also impossible, because the number of potential questions is literally infinite. What the boxy results actually are is an illusion. They look like a recommendation when they are actually no different than anything else that happens to come up in the list of results. The reason that boxy results specifically reflect badly on Google is that they are lies. It is correct to say in this case that Google is lying to you, even though the results are completely unintentional, because Google has constructed its interface to look like something that it is not, and is thereby conveying false information.1 So the only logically viable option is for Google to quit fucking around and just be a search engine, which, you might recall, was the whole thing it was good at in the first place.2

People seem to be having a certain amount of difficulty understanding this. Naturally, there’s always a performative moral crisis when something like this happens, but in this case the complaints are almost universally targeted at the same, specific, exactly wrong place. Consider this article, which correctly points out that the problem is specifically with the boxy results:

For most of its history, Google did not answer questions. Users typed in what they were looking for and got a list of web pages that might contain the desired information. Google has long recognized that many people don’t want a research tool, however; they want a quick answer. Over the past five years, the company has been moving toward providing direct answers to questions along with its traditional list of relevant web pages.

Type in the name of a person and you’ll get a box with a photo and biographical data. Type in a word and you’ll get a box with a definition. Type in “When is Mother’s Day” and you’ll get a date. Type in “How to bake a cake?” and you’ll get a basic cake recipe. These are Google’s attempts to provide what Danny Sullivan, a journalist and founder of the blog SearchEngineLand, calls “the one true answer.” These answers are visually set apart, encased in a virtual box with a slight drop shadow. According to MozCast, a tool that tracks the Google algorithm, almost 20 percent of queries — based on MozCast’s sample size of 10,000 — will attempt to return one true answer.

Unfortunately, not all of these answers are actually true.

and then immediately descends into psychotic gibberish:

Google needs to invest in human experts who can judge what type of queries should produce a direct answer like this, Shulman said. “Or, at least in this case, not send an algorithm in search of an answer that isn’t simply ‘There is no evidence any American president has been a member of the Klan.’ It’d be great if instead of highlighting a bogus answer, it provided links to accessible, peer-reviewed scholarship.”

. . .

The fastest way for Google to improve its featured snippets is to release them into the real world and have users interact with them. Every featured snippet comes with two links in the footnote: “About this result,” and “Feedback.” The former explains what featured snippets are, with guidelines for webmasters on how to opt out of them or optimize for them. The latter simply asks, “What do you think?” with the option to respond with “This is helpful,” “Something is missing,” “Something is wrong,” or “This isn’t useful,” and a section for comments.

This is all nonsense. The problem is that Google gives some of its results a false sense of authority, so the solution is for it to give a different set of its results even more of a false sense of authority, while also soliciting comments from everyone and putting in 3,000 different links allowing people to leave 30,000 different layers of feedback, because then the results won’t be confusing anymore.

Again, there are a literally infinite number of possible queries and results, which is the whole reason you write a search engine in the first place. Putting in custom results for specific queries both breaks the functionality of what Google is supposed to be doing, and is a futile game of whack-a-mole, a drop of water in a sea of bullshit. Furthermore, when you go down this road you’re trusting Google to provide the “right” results, which is a task at which it has absolutely no institutional competence. Is there seriously anyone who still hasn’t noticed that nerds are generally extremely bad at anything outside of their direct area of expertise? (That’s kind of the definition of “nerd,” actually.) To precisely the extent that you have a curated system, you do not have a search engine. You have some nerd’s journal.

Again, again, Google can either be a search engine or a source of direct information. It can’t be both things, and the practical effect of “solutions” like this is to transform Google into an extremely shitty direct information source. Think about this for literally five seconds: if the problem is that the web has a bunch of shitty content on it, then how is soliciting more information from the same place going to change anything? Are we seriously assuming that Holocaust deniers are going to be above gaming these sorts of things? The idea that individual people can change Google results by yelling at the company loudly enough is not any kind of solution; it’s properly horrifying. It means that search results are constantly subject to the random whims and biases of the people who are the best at yelling about things on the internet. This isn’t order; it’s chaos.

You may recall that the internet already has a source for crowdsourced direct information. It’s called Wikipedia. And, indeed, the problem that a lot of people are having here is that they are expecting Google to be the same thing as Wikipedia. In other words, they are incapable of understanding that a search engine and a source of information are different types of things, and thus, when one of them doesn’t behave like the other, they see it as a “problem” that needs to be “fixed”:

This is a really remarkable comment, especially coming from a guy with a fucking book emoji in his name. There’s not even an argument here, there’s just a completely unexamined assumption that Google and Wikipedia are directly comparable on some kind of “information quality” level or something and that one of them is “better” than the other. This is as far from intellectualism as it’s possible to get. (Don’t even get me started on the pathetic haughtiness of “do better,” as though it were any kind of meaningful statement (as though it imparted any semantic content at all), let alone a solution.)

Since I know I have to say this explicitly, I am absolutely not arguing that there is any such thing as a “neutral” platform or algorithm or that Google is not completely fucked up and deserving of excoriation. This isn’t about “neutrality” and “bias,” this is about what type of thing a thing is. What I am arguing is that things need to be criticized for what they are actually doing. It is correct for people to give Wikipedia shit about, for example, how it addresses trans people, because what’s on Wikipedia was put there by a specific person and approved by other specific people. Wikipedia’s “neutral point of view” thing is largely bullshit, because you can’t actually do that, but it is correct for it to attempt to stick to the facts and avoid editorializing. There’s no point in complaining that Wikipedia doesn’t promote your own personal political philosophy hard enough. But when it comes to something like which gender you use to refer to a trans person, there isn’t a “neutral option,” and the issue can’t be avoided. You have to make a choice, and that choice merits criticism.

So, as mentioned, the part of the Google results that is actually wrong is the boxy results, and they’re wrong in general, not just when they display “wrong” answers. Aspiring detectives may have noticed that I lied earlier. The Holocaust denial thing didn’t actually come up in one of the boxy results, it was just at the top of the normal list. So the people complaining there actually were full of shit. More specifically, they were full of shit insofar as they were directing their complaints at Google. The existence of the site is the problem, not the fact that Google’s algorithm noticed that it was on the internet and displayed it to the people to whom it calculated it was probably relevant.

This does not mean the algorithm is “neutral.”3 There’s no such thing. There are a lot of different methods you can use to find and display search results. They can be based on the site’s overall popularity, or on how many people clicked through from a given source, or on how well the content appears to match the search parameters regardless of traffic patterns. You can even switch this around; you could, for example, specifically promote less popular sites when they match certain search criteria. This would distribute traffic more equally and advance less popular opinions, though it might also increase the bullshit ratio. Hell, you could even take all the valid results and just display them randomly – this would actually have the positive effect of promoting previously unknown sources (hi), even though it would certainly increase the bullshit ratio, perhaps by quite a lot (depending on the extent to which “authoritative” sources are actually bullshit in the first place).
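
To make the menu of options concrete, here’s a hedged sketch of those policies in Python – invented data structure and scoring, obviously not Google’s actual ranking code. Each of these is a perfectly functional search engine; they just make different choices:

```python
import random
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    popularity: float    # overall site traffic
    clickthrough: float  # how often people click it from a given source
    relevance: float     # how well the content matches the search parameters

def by_popularity(results: list[Page]) -> list[Page]:
    return sorted(results, key=lambda p: p.popularity, reverse=True)

def by_clickthrough(results: list[Page]) -> list[Page]:
    return sorted(results, key=lambda p: p.clickthrough, reverse=True)

def by_content_match(results: list[Page]) -> list[Page]:
    # ignores traffic patterns entirely
    return sorted(results, key=lambda p: p.relevance, reverse=True)

def promote_obscure(results: list[Page]) -> list[Page]:
    # switch it around: boost less popular sites that match the criteria
    return sorted(results, key=lambda p: p.relevance / (1 + p.popularity), reverse=True)

def shuffled(results: list[Page]) -> list[Page]:
    # take all the valid results and just display them randomly
    return random.sample(results, len(results))
```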

These are the real choices Google has to make even if it stops lying, and any choice made here is going to have political results. Pushing all the results towards the New York Times center is just as much of a political action as promoting fringe sites. So criticism of the behavior of Google’s algorithm is in fact within bounds here, as long as that’s actually what you’re criticizing. Pointing out that one bad result appears in one place is not a real argument, because nobody actually put it there. In order to make that argument, you have to argue against the general behavior that results in that particular output, and when you do that, you are implicitly arguing against all of the behavior that results from the parameters you’re selecting for criticism. You can coherently make the argument that Google should be promoting more “authoritative” results, but only if you’re willing to accept that non-authoritative results that you happen to agree with will also get downgraded. And the reason I’m claiming that people are full of shit here is that I don’t think anyone actually believes this. What people actually want is for the bad results to just not be there, because their existence is actively immoral. Which is an entirely praiseworthy opinion, but you can’t just wish them away. You have to think about how you actually want these things to be determined, because the consequences are going to be far greater than the one or two bad results you happen to encounter. I mean, if you really do want only “officially approved” sources displayed when you perform a general internet search, I’m within my rights to conclude that you’re an authoritarian.

There’s a reason this is happening, though. Google is not trying to act as a search engine and failing; it is choosing to promote itself as a source of information and is doing so dishonestly. The reason it is making this choice is that it is what people want. People don’t actually want to know what’s out there on the internet. They want a magic box to give them the right answer. That’s the only possible explanation for the proliferation of those stupid talking internet cylinders. My ability to comment intelligently on this aspect of the problem is somewhat limited, as I cannot for the life of me imagine why anyone would a) pay to b) put a robot in the middle of their house that c) talks at them and d) constantly monitors them in order to e) sell them shit, all for the sake of f) an inferior version of the functionality that you already have on your desktop and know how to use, because you ordered the thing off of Amazon in the first place. That is literally my idea of hell. Anyway, the reason people buy these things, one supposes, is that they want to be able to yell indistinctly at a robot and have the robot give them the magical Correct Answer. In other words, they want to be lied to. In order to respond to this desire, Google has to be dishonest, because it’s not possible to honestly create an incoherent system.

Pressuring Google to censor “bad” search results one at a time doesn’t solve a real problem.4 I don’t actually object to Holocaust denial sites being delisted (good riddance, obviously), but I do object to intentional delusion. I object to people who think that removing unpleasant things from their field of vision is the same as improving material conditions for living humans. Indeed, what we’re really talking about here is removing unpleasant truths, because it is a real fact that these sites really exist, and that their existence accurately reflects the fact that large numbers of people sincerely believe these things. This is real news. All obscuring it does is make liberals feel better because now they don’t have to see the bad things. You may recall that this dynamic has resulted in some problems recently.

The true fact of the matter is that the world is a disgusting place. This should neither be accepted nor ignored. But not ignoring it also means not fooling yourself about where things are coming from. It means choosing high-value targets and not easy ones. It means understanding how the things you are yelling at work so that you can yell at them accurately. It means taking actions that actually move the world in a better direction instead of the ones that merely move you into a more comfortable chair. Above all, it means keeping your eyes open to the things that are the most disgusting to look at. The only option for interacting with reality is to learn how to navigate the sea of bullshit.

It is for this reason that category errors matter. If you can’t tell the difference between a racist website written by a person and the racist output of an algorithm, you are not actually perceiving reality. Even though those things are both wrong – even though algorithms can be just as blameworthy as individual people – they’re wrong for different reasons, and they require different responses. There’s a reason we have different names for different things. Different things are different. A search engine is not the same thing as a news site. Treating different things as though they were the same thing is called stupidity. It makes you wrong about things.

We also have a name for the desire to retreat from a complicated world into a simplistic shell of officially-verified Correct Answers. It’s called cowardice.



  1. So, strictly speaking, this is a UX problem and not an algorithm problem. The extent to which a program’s interface determines its functionality both apart from and synergistically with its back-end code is kind of a whole other thing, though. 
  2. In case you’re wondering, AI, in addition to not being a solution, is not even a unique issue here. An actually intelligent AI would actually be intelligent, i.e. it would be a person. A practical AI that is not intelligent is just a fancy executable. This is actually another category error: the kind of AIs we have right now are just really complicated single-function computer programs; the sci-fi type of AI is an actual agent with human-like general reasoning capabilities (or perhaps not-so-human-like, but at least functionally similar). No matter how impressive the former is, it’s not the same type of thing as the latter. People are constantly getting this wrong and freaking out over really simple programs displaying barely surprising behavior; frankly, I don’t understand why people are so eager to leap to the completely unsupported conclusion that robots are about to take over the world. Anyway, the point is that we ought to be using two different terms for these things, because they are in fact different things. 
  3. You might want to note that a search engine is actually an object – it’s a fixed block of executable code. Objects aren’t neutral, but that doesn’t make them the same type of thing as subjects.5 Objects do not (non-metaphorically) have things like “desires” or “goals.” They have inputs that they accept, internal calculations that they perform, and outputs that they generate. (This applies just as well to ordinary physical objects. When you throw a rock, the input is force, the internal calculations involve weight and wind resistance and ductility and so forth, and then the output is force again.)
  4. Also, this isn’t even the half of it. Google is up to way shadier shit than this; specifically, Google’s advertising monopoly – the fact that it both sells ads and controls and extracts money from ad blockers, meaning it is effectively selling ads to itself – is a book-length problem with serious implications for how the internet is going to work. This is exactly why we have (or are supposed to have) antitrust regulation. Google shouldn’t be allowed to be both things. The extent to which this is a bigger problem than racist websites showing up sometimes cannot be overstated.
  5. The big plot twist is that, even though objects and subjects are distinctly different types of things, living in a material world means being a material girl. Er, it means that all people (subjects) are also objects. They’re physical bodies existing in physical space. Importantly, though, a person is not an object in addition to being a subject, but is rather one thing that is both an object and a subject at the same time, in the same mode of being. Reconciling this apparent paradox is one of the Great Problems. 

Endless talk

Given recent developments in Nazis, this is probably a good time for some real talk on the whole free speech thing. While this topic has been discussed to death, it’s attracted a truly staggering amount of dullardry in the process, so I feel the need for boring philosophical clarity.

First, there is no such position as free speech absolutism. You cannot begin understanding the issue until you understand this. We like to talk about “rights” as though they are unlimited, but that’s not how the concept works. In terms of moral philosophy, a right is something that you don’t violate for utilitarian purposes. There are times when killing someone might actually result in the best overall outcome, but you still don’t kill people in those cases, because people have the right not to be killed.1 But it’s for this same reason that you can and indeed have to violate rights in order to preserve other rights. In the real world, rights conflict, so you can’t always preserve all of them.2 This isn’t a novel interpretation, it’s just how rights work. Even Second Amendment zealots don’t argue that individuals ought to be able to own and operate intercontinental ballistic missiles.

When it comes to speech, there are already plenty of laws on the books restricting it on this basis. Ordering an assassination is not “protected speech,” because it violates the target’s right to life. And the restrictions aren’t only for extreme cases; lots of practical, everyday speech acts are prohibited in the same way. Credible death threats are illegal because they violate the target’s right to basic security. Shouting “fire” in a crowded theater is illegal because it causes direct physical harm. Libel is illegal because you have the right not to suffer harmful consequences based on falsehoods (of course, you do not have the right to avoid the consequences of truths, which is why only falsehoods qualify as libel. In other words, this is a specific instance of the right to due process). There’s even a legal category that’s actually called “fighting words,” referring to speech that directly precipitates harm or illegal action. The decision referenced in that link clearly conveys the balance of interests required in making these determinations:

It has been well observed that such utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.

Furthermore, speech not only conflicts with other rights, it also conflicts with itself. One of the problems with libel is that preemptive damage to the target’s reputation prevents them from being able to correct the record – one person’s speech restricts another’s. Similarly, highly provocative speech can prevent a discussion from taking place, and certain types of intellectual climates make certain ideas inadmissible. You can’t respond to these types of situations simply by picking the side with “more speech.” There can be valid speech on both sides, and you have to decide which side you value more.

In a broader and more important sense, this is the real problem with hate speech. It’s not that it hurts people’s feelings or even that it’s “harmful” in general. It’s okay and frequently desirable for harmful things to happen. Racists get their feelings hurt when people call them racists, but this is a good thing, because it’s correct for your feelings to hurt when people call you out for doing bad things. The problem is that hate speech is detrimental to overall human expression.3 Arguing that black people are inferior to white people necessarily reduces their effective ability to speak. The argument itself does this, even before anyone accepts it, because refuting the argument becomes a prerequisite to listening to black people. If you spend all your time arguing about whether black people’s ideas deserve to be taken seriously, you spend none of your time actually taking black people’s ideas seriously. This is exactly why the affected groups often try to shut these discussions down: because they have to, or they will never be able to say anything else.

So if you call yourself a “free speech absolutist” and refuse to make any determinations on the issue, all you’re actually doing is allowing existing forces to make those determinations on their own. The real world has a variety of conditions and constraints that allow certain types of expression to happen and disallow others, and a “hands off” approach means tacit agreement with the results. So you are not in fact an “absolutist” at all, you’re just a naive censor.

This also means that “maximizing speech” (as in “the solution to bad speech is more speech”) is not a coherent goal, because some ideas crowd out others. The idea that black people are inferior to white people and the idea that black people should be equal participants in society cannot just float around abstractly without affecting each other. They conflict on the basis of their inherent content. To the extent that one of those ideas is expressed more, the other is expressed less.

This is compounded by the fact that there is a limit on how much speech can actually exist. We are finite beings living in a finite world, so we can never inhabit a situation in which we are expressing and considering “all” ideas. (When someone says “all options are on the table,” what they really mean is that they’re arguing for an option that would make them look bad if they argued for it directly.) The space of potential ideas is infinite, and choosing which are worthy of consideration is a large portion of what it means to be an intelligent lifeform. Not all expression is of equal quality. Putting forth an argument that has been widely rebutted is inferior to a new version of the same argument that takes the rebuttals into account, or to an entirely new argument. Substituting one of the latter options for the former increases overall quality of expression. The way that the 24-hour news cycle effectively forces some new thing to become the Most Important Thing every day is anti-free-speech behavior, because it restricts the ability to distinguish between levels of real importance. Furthermore, context matters. It matters that the New York Times has completely godawful op-ed columnists because lots and lots of people read the New York Times and take it seriously just because it’s the New York Times. The fact that better ideas are free to exist elsewhere doesn’t cancel this out. Ideas being expressed in more prominent venues matter more.

I’m being pedantic; this is really all just the basic stuff we do when we communicate: we try to understand things and make useful contributions to the discussion and say things that are right instead of wrong. We try to get useful ideas expressed in the places where people can actually hear them. We criticize the elevation of trash, not because we think people don’t know better, but because there are better uses of our limited resources.

Obviously, we do not want to respond to this situation by censoring any idea that someone deems “not good enough.” But that’s exactly the point: the only question here is how we’re going to manage speech. In terms of what we want to accomplish, increasing the overall quality of ideas expressed is the only thing that makes sense. We don’t want a “robust discussion” about fascism, we want a discussion where nobody is arguing for fascism.

There is, of course, a specific meaning to the term “free speech,” which is that the government should not be able to restrict expression on the basis of its content. But this is still not an absolute condition. Here’s Ludwig von Mises being completely wrong about this:

But whoever is ready to grant to the government this power would be inconsistent if he objected to the demand to submit the statements of churches and sects to the same examination. Freedom is indivisible. As soon as one starts to restrict it, one enters upon a decline on which it is difficult to stop. If one assigns to the government the task of making truth prevail in the advertising of perfumes and tooth paste, one cannot contest it the right to look after truth in the more important matters of religion, philosophy, and social ideology.

Of course you can. I think it’s pretty obvious that the slope between banning poisonous toothpaste and banning political opinions is not particularly slippery. There are specific reasons why the government is (potentially) competent at the former but not the latter. First, the government has an obvious bias regarding which political ideas get expressed, which makes it an incompetent judge of which ideas deserve suppression. As the entity that manages power distribution, the government has the strongest possible vested interest in regulating ideas. But the government is just as capable as any other entity of running tests to determine what’s poisonous, and it has no vested interest in the results.4 So the problem here has nothing to do with “big government,” it’s simply a matter of competent discrimination.

Second, because the government is the entity with a monopoly on the legitimate use of force, ideas prohibited by the government are absolutely prohibited. It’s okay to ban one brand of toothpaste, because that’s not a significant restriction on anyone’s choices (if the toothpaste really is significantly harmful, it’s actually an enhancement of people’s choices, because it prevents them from accidentally making a decision they never would have made if they’d had real information). But ideas are more complicated. Even an obviously bad idea might have positive effects through clarifying arguments or inspiring counterpoints. So, unlike being poisoned, which is something you never want, bad ideas are not absolute negatives. You might want to restrict them in particular times and at particular places, but you don’t want them absolutely restricted. Since people obviously disagree about ideas, discrimination is properly applied on the level of voluntary groups – that is, organizations can decide individually which ideas are worthwhile for them and which are not.5 And while there are in fact ideas that deserve complete eradication (again, fascism), this has to be done organically. Ideas are not magic; they have physical causes. If you try to banish an idea without addressing why it came about in the first place, it’s inevitably going to regenerate at some point. That’s exactly what’s happening right now: everyone thought we had gotten over fascism, when in reality all we had done was to shove it into the category of “Bad Things” without doing anything about its real causes. But once you’ve processed an idea and moved into a new situation where it no longer applies, artificially preserving it restricts speech. It prevents you from moving on to the next stage of discussion.

So these are the two actual criteria that matter for assessing speech restrictions. The first is accurate judgment: whether the idea is being restricted on its own merits or out of other motivations such as prejudice or political interest. The second is breadth: whether the restriction is being applied at the correct level. It’s fine for one explicitly capitalist magazine to disallow socialist opinions, because that’s not what anyone reading it is there for. It’s not okay for a larger entity to disallow the creation of any other types of magazines. But banning death threats throughout all of society is the correct level of applicability for that case, because death threats affect all humans.

Understanding the issue in terms of these criteria shifts the terrain of the debate considerably. The main point here is that speech restrictions have to be considered in context and not as absolutes, so I’m not going to try to formulate any kind of rules about what’s good and what’s bad. But since this issue has attracted such an unfortunate amount of misdirected chatter, I will work through a few examples to show how this works.

An extremely important story that has not received nearly enough attention is a recent change Google made to its search algorithm to promote more “authoritative” results. Naturally, this resulted in traffic drops for a variety of “alternative” news sources. This isn’t the kind of thing that normally gets discussed as a free speech violation. After all, none of the affected websites have actually been “censored,” and there are other search engines available. But the result is the same, because it fails both criteria. It’s improper discrimination because it’s intended to improve the “quality” of results, but all it actually does is impose a particular political viewpoint on them, based on Google’s collective internal assumption as to what counts as “fake news.” And it’s also overly broad, because it affects everyone who goes looking for information on any topic, regardless of what their individual desires are. If you’re trying to find alternative news sources, this change will prevent you from doing so, and there’s no way to opt out of it. And of course Google doesn’t tell you how it’s filtering its results, and it’s constantly changing things without telling anyone, so you don’t know whether there really is something else out there or not. Furthermore, Google is entrenched enough that it’s more accurate than not to say that this affects “everybody,” even though there are technically alternatives available. In other words, Google users constitute an involuntary group that has not consented to this restriction. If this were just one explicit, publicly understood option among many – if, say, it were one search engine marketing itself as an “authoritative news source” or something – then there wouldn’t be a breadth problem. The people who chose to use it would know what they were getting.

This applies just as much to the general movement to get social media companies to “do something” about “fake news.” Again, this isn’t an absolute condition; there’s no such thing as a “neutral” platform. But the criteria still apply. Scams and death threats are examples of things that social media companies can (potentially) accurately identify and which merit prohibition. Banning Twitter users who make “jokes” about putting people into ovens is more free-speech-friendly than not doing so. People who pull that shit are specifically trying to intimidate others out of speaking. And this does actually bleed into politics somewhat: if your ideas cannot be expressed without direct dehumanization and death threats, then it is correct for them to be suppressed. When it comes to actually discriminating based on ideology, though, giving Facebook the ability to decide which ideas are worthy of expression means conducting public discourse from inside Mark Zuckerberg’s head, which is clearly the worst possible outcome.

As mentioned, the big issue is Nazis, and unfortunately there isn’t a trivial solution here. If we’re talking specifically about literal Nazis, then censorship is probably fine. We can be as certain as we are of anything that Nazism is not a viable political option, and removing it from the public discourse doesn’t prevent people from cosplaying as Nazis on their own time. But of course there is no actual Nazi Party anymore; the entire issue is identifying which ideologies are really dangerous. Trump was widely condemned as a white supremacist for equivocating after Charlottesville, but all the mainstream Republicans who denounced him are also white supremacists. In fact, they’re more effective white supremacists, because, unlike Trump, they’re actually capable of closing deals. Declaring only overt Nazism beyond the pale sets the paling far too far to the right.

The thing that’s being called the “alt-right” is not one thing. It’s an umbrella term that covers a lot of different ideas and reactions. We can assume they’re all wrong, but even then, they’ve come up for real reasons, in response to real problems. Trying to sweep this stuff under the rug is exactly how you get surprised by someone like Trump. Dealing with these problems for real requires creating a society that fixes them, and developing that blueprint requires engaging with the underlying ideas. Expecting the government to take care of the bad guys is not going to accomplish this. In fact, it’s the opposite: the government is on the side of the fascists more than it is on yours.

Importantly, though, “engaging” here does not mean restricting yourself to the realm of cable-friendly “rational debate.” It means having a real fight. Making group efforts to deny fascists the use of social resources meets both free speech criteria. Such efforts can only come to fruition when there is widespread, non-idiosyncratic agreement as to what’s going on, and shutting down individual gatherings is not equivalent to censorship. People making the collective decision to disallow certain types of speech from the platforms over which they have influence is pro-free-speech activity, because it allows better ideas (by the standards of the involved parties) to be expressed. So shutting down fascists is indeed the right thing to do, but it only works if you do it yourself. Anyone who claims to be doing it for you is actually just fattening you up so they can eat you.

Also, violence is not a unique problem. The problem with violence is simply that it violates the criteria: it discriminates on the basis of who’s better at fighting rather than which ideas are better, and it completely prohibits expression rather than singling out particular ideas. But in situations where this isn’t the case, or where violence is already being applied, there’s no case for rejecting violence as such. Like, it’s pretty ridiculous to get all huffy about individual acts of defensive violence when they only stand out because you’re living in a cocoon created by the greatest purveyor of offensive violence in world history. Violence is generally a bad thing, but, given the current situation, a lot of the time it’s less bad than doing nothing.

(By the way, antifa has nothing to do with any of this, because they don’t start shit. As Cornel West and others have testified, their whole thing is defending people against fascist violence. From what I understand, they will actually escort neo-Nazis out of danger in order to defuse violent situations. Fretting about “violence” here, especially in the face of fascists who come armed to what they intend to be public confrontations, is nothing but typical anti-leftist bogeymanning.)

On a lighter note, the whole thing about university speakers being protested is a perfect example of something that is not a real problem. First, such protests can only happen through mass mobilization on the part of the affected constituency, which is proper discrimination. Second, being denied a speaking slot at a university has basically no other repercussions. Your ideas are still out there for people to engage with. Even the specific students at that university can look them up if they want to. In fact, the direction of suppression here is exactly the opposite of how it’s normally portrayed. It is the granting of the speaking slot in the first place that is suppressive behavior. If a group of college students wants to create a discursive climate in which trans people are not bullied, giving Milo Yiannopoulos a speaking slot censors that political opinion.

To be honest, none of this is particularly relevant. Invoking “free speech” is almost always a dodge away from discussing actual political issues. It’s a way for people who don’t have the guts to take a meaningful stand to pretend like they’re principled when they really just want to avoid the discomfort of genuine values conflicts. The real problem is the fact that it works. As long as “free speech” is thought to be at issue, everyone has to spend all their time preemptively defending themselves instead of making real arguments. In other words, talking about free speech is a means of suppressing speech.



  1. You might want to keep in mind that this is all highly theoretical, because of course the government kills people and commits other rights violations for utilitarian reasons all the time, so talking about any kind of “pure” standard here is fantasyland from the getgo. 
  2. This applies broadly. If you try to take just one right and treat it as absolute, you run into internal contradictions. For example, treating the right to life as absolute and sacrificing everything else to it leads to the Repugnant Conclusion: valuing only life destroys the things that make life valuable in the first place. 
  3. So, if you hadn’t noticed, “hate speech” is a complete misnomer. Hatefulness has nothing to do with anything. The problem is with dehumanizing speech. This is actually why a lot of people get confused: they think they’re looking for “hate,” so when they don’t see it, they assume there’s no problem. Having a level of conceptual organization beyond “bad things are bad” matters. 
  4. There can, of course, be other interests at work: the relevant agency might be in the pocket of Big Toothpaste (this is called “regulatory capture”), or the government might want to direct poisoning at a specific undesirable community (this is called “environmental racism”). But these aren’t arguments against regulation, they’re arguments for good regulation. 
  5. This doesn’t work for involuntary groups. You can’t argue both that people need to work to eat and that their employers should be able to restrict their political opinions, unless you’re willing to accept that people with the wrong kind of ideas ought to be murdered. Either work is involuntary and employment is protected, or working is not a prerequisite for staying alive and associations can be fully voluntary. 

Bigmouth strikes again

[image: google_memo_guy_be_like]

The Memo of Doom has occasioned more commentary than anyone requires regarding anything, and that’s actually the main problem. Parsing individual claims in a situation like this tends to involve ignoring what the text actually does as a rhetorical action. So I think it would help to review some of the general principles at work here.


Implausible deniability

Various strains of activism in the recent past have succeeded wildly at inculcating the idea that “equality” is a good thing and “discrimination” is a bad thing. It’s actually difficult to remember that this is a very modern idea; for most of human history it was exactly the opposite: the idea was that everyone had their divinely-ordained place in the world and the right thing to do was to treat everyone according to their formal status. Of course, it’s much easier to get people to mouth positive-sounding platitudes than it is to actually change their minds (let alone their behaviors), so the practical result of this is that everyone, up to and including literal Klansmen, always says that they “don’t have a racist bone in their body” and they’re “all in favor of diversity” and they’re “just being realistic” and etc. In fact, this effect is so strong that even unrelated arguments get cast in the same language; for example, conservatives will complain of “discrimination” against their “minority viewpoints” which reduces “diversity.” The fact that such statements are always present all the time means that they do not discriminate between disparate situations. Both racists and anti-racists are equally likely to say that they oppose racism, so a statement of opposition to racism proffers no information about which kind of person you’re talking to. So, given that such statements encode no information, the only rational thing to do is to ignore them completely.


Pure ideology

Anyone who uses the word “efficiency” is trying to sell you a bridge. Efficiency is a technical concept referring to a process’s ratio of inputs to outputs. An alternative process that costs half as much but delivers the same results is more efficient, despite being no more productive; an alternative with twice the cost and three times the output is more efficient despite being more costly. But in order to make such a determination, you must first specify which inputs and outputs you’re looking at. If you care about reducing pollution, then the question is which options give you the greatest reduction for a given cost. If you care about getting a product to market quickly, then the question is which options reduce your production time by the greatest amount. If you care about preserving a particular rare resource, then the question is which process uses the least of that resource, regardless of other costs. “Efficiency” doesn’t mean anything until you’ve made such a specification.
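
To make that concrete, here’s a minimal sketch (hypothetical numbers and process names, nothing measured from anywhere): the same two processes trade places depending on which specification you pick.

```python
# Toy illustration with made-up numbers: two hypothetical processes.
processes = {
    "A": {"dollars": 100, "widgets": 100, "tons_co2": 10},
    "B": {"dollars": 200, "widgets": 300, "tons_co2": 120},
}

def efficiency(name, output, inp):
    """A ratio: the output you care about per unit of the input you care about."""
    p = processes[name]
    return p[output] / p[inp]

# Widgets per dollar: B is "more efficient" (1.5 vs 1.0).
print(efficiency("A", "widgets", "dollars"), efficiency("B", "widgets", "dollars"))

# Widgets per ton of CO2: A is "more efficient" (10.0 vs 2.5).
print(efficiency("A", "widgets", "tons_co2"), efficiency("B", "widgets", "tons_co2"))
```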

Due to the nature of the society that we live in, it’s common to talk about efficiency in terms of monetary expenditures and corporate profits. When people talk about whether something is “efficient” or “effective” or “a good idea” or any number of other vague references, they are often implicitly talking about corporate productivity. This is usually an unexamined assumption: people don’t consider the fact that not everything has to be discussed in terms of what’s good for rich fucks. So people often argue that diversity1 is more “efficient,” meaning it helps corporations make more money. Certain types of people will argue that engineering is actually about communication and problem-solving, and diverse opinions and traditionally feminine skillsets are more valuable in that endeavor – in other words, they’re better for the company. This may be true (though I don’t think you can really make a general determination about this sort of thing), but if you actually care about equality, it’s a bad faith argument. Anti-discrimination is the thing you care about; it’s your output. The question is not which amount of diversity results in the greatest profits, but rather which structures most effectively reduce discrimination.

And there’s even another layer on top of that, which is that corporate productivity isn’t one thing either. You could, hypothetically, design a facial recognition system that works really well on Europeans and not very well on Asians, or you could design one that works passably well on everybody. You can’t “compute” which of these is better, you have to make a values-based judgment as to which one you prefer. If Google adopts a particular set of policies and thereby becomes super productive while also being super discriminatory, that’s perfectly “efficient,” but it’s also a bad outcome for everyone except Google. Seeing as our current social system rewards monetary success2 at the expense of all other metrics, this is the kind of thing we need to be on guard against.


Manifesto Syndrome

Everyone thinks that their opinions are “thoughtful” and “nuanced” and “fact-based,” and that anyone who disagrees with them is a shallow ideologue who hasn’t done their homework. It’s tempting, then, to express this by writing something extremely long. I mean, if you write 10,000 words about something, it has to be nuanced, right? It can’t just be a simplistic expression of unexamined prejudices. Anyone who dismisses it on that basis clearly didn’t read the whole thing.

So obviously writing a ton of words isn’t the same thing as actually saying something worthwhile, but it goes even farther than that: the act of writing a big long manifesto is itself a statement about the underlying topic. It’s the statement that there exists a big long manifesto’s worth of discussion to be had, when that is not necessarily the case. I talked about this earlier with regard to rape apologetics. Trying to “cover the whole story” by including a detailed examination of the rapist’s perspective makes the implicit statement that that perspective is valid. And this can then become a defensive gesture that prevents you from reassessing your own argument, because anyone dismissing you just “doesn’t appreciate the complexity of the topic.” That might be true, but they might be right anyway.

The fact that you sometimes need a certain level of detail to make a point does not entail that anything with that level of detail is necessarily making the same kind of point. When you fail to examine your assumptions, it’s possible to make a lengthy, nuanced argument that says nothing.


Act like it

Speech is a physical phenomenon that occurs in the real world. In no case is the content of speech ever a “pure idea”; it is always an action that has a particular effect, which is partly (and often mostly) determined by the context in which the action takes place. You can’t “neutrally” argue about whether black people have lower IQs than white people in a society with a history (and present) of using intellectualism as a vector for dehumanization. Whining about how it’s “unfair” that you can’t just have a “reasonable discussion” doesn’t change that. I mean, it really is unfair that you can’t bring up certain topics without engaging with racism, but tough shit. You have to decide whether you care more about racism or more about masturbating over bell curves.

One of the major problems that this results in is the idea of “proving” things. After all, “proof” is undeniably objective, so it has to be valid in any possible context, right? But arguing within this framework in the first place necessarily imposes an extremely high standard on whatever it is that “requires” proof, while also slipping in underlying assumptions that are not only not proven, but not even argued for, because you can’t start the discussion without some kind of grounding. With regards to global warming, for example, the underlying assumption is that we have to have capitalism, and the debate is only about whether the negative consequences have been “proven” to a high enough standard to require us to do anything about it. The idea of it being the other way around – of the potential environmental impact preemptively discrediting capitalism – is not a permissible line of argument. And since the future is indeterminate, the more responsible standard is one of risk mitigation: to the extent that our current system of production has possible negative consequences, we should be working to make them less possible. Insisting on “proof” biases the potential responses heavily toward not doing anything, because you’re never going to be completely sure about what’s going to happen. It’s important to remember that the popularly-cited 2° target is not the “everything’s okay” threshold; it’s the catastrophe threshold. If the living standards of humanity in general were really what we cared about, we would have been taking major steps long before Armageddon became a visible possibility, without requiring any sort of “scientific consensus.”

The big catch is that responding to these sorts of shenanigans carries the same caveats. A point-by-point refutation might seem like the most “thorough” way of debunking a claim, but as an action, it implicitly concedes the very point under discussion: that it’s all very complicated and we ultimately just don’t know whether women are good enough to deserve equality. If you’re writing a scientific rebuttal to something, you’re validating the point that scientific debate is the right way to handle it – doing that constitutes having that debate. And while science is all well and good, its modern prominence tends to function as a dodge away from moral issues. You can’t ever “conclude” scientifically that women are definitely being discriminated against, but you can make the moral case that certain behaviors are harmful to human development and ought to be combated. When you don’t do that, you leave people’s existing ideological assumptions in place, which generally means that people reading the discussion will see a bunch of charts on one side and a bunch of charts on the other and go on believing what they already believed anyway.

In order to deal with the amount of noise we all have to deal with these days, you have to remember the basics. You have to figure out what your actual priorities are rather than just accepting the parameters of whatever discussion you happen to be having at the time, and you have to take the specific actions that will advance those priorities rather than just saying the thing that seems like the right answer. Failure to do this is one of the reasons why, despite the wild open-endedness of the internet, everything feels stuck. And it’s why, despite the outcry and the rebuttals and the firing, the true goal of the Google memo has already been accomplished: we’re still having this discussion.

 


  1. For the sake of conciseness here I’m conceding to the use of “diversity” as an imprecise blanket term for various forms of social equality; I trust we’re all capable of keeping the problems with this in mind while focusing on the main argument. 
  2. And it’s actually even worse than that, because financial capitalism has decoupled economic productivity from monetary reward, so current “successful” companies are the ones that are the best at extracting money from investors rather than the ones that actually make things that help people. We are all Juicero now. 

Bubble babble

I’m entirely certain you’re well-acquainted with the idea that “media bubbles” are a big problem right now, effecting disinformation and perverting ideology and generally destroying society in an orgy of postmodern technological mediation. Certainly, there is cause for concern; unlike in the past, when everyone had complete correct information that they used to make fully rational decisions, nowadays humans have somehow become closed-minded and parochial. The figure of the barely-informed loudmouth shouting his kneejerk opinions into the public square represents a truly new development in history. And now that bad things are happening in politics, which has never been the case before, it’s clear that something must have gone horribly wrong.

No, okay, so I’m super annoyed about all the hyperventilation, there’s nothing more obnoxious than small-minded arguments against small-mindedness, but there’s also a real issue here. The internet certainly is generating a world-historical amount of garbage data, and political polarization really has increased to an extreme degree. The fundamental dynamic at issue here is what pretentious people like to call “epistemic closure.” When one’s sources of information or methods for evaluating it are limited in some fundamental way, certain areas of knowledge become inaccessible – or, worse, only accessible in the wrong way, such that the formation of inaccurate ideas comes to be considered true knowledge. Fox News will never give a sympathetic hearing to an idea like universal single-payer health care, so if that’s where all your information comes from, you can never develop an informed opinion on this topic. It’s important to realize that this is an absolute constraint; it’s not that it becomes harder to get to the truth, it’s that it becomes impossible. This is the double edge of the Enlightenment ideal: since there’s no such thing as divine wisdom or whatever, you cannot form correct ideas without accurate and comprehensive information, regardless of how smart or conscientious or committed you are.

Now, one of the few positive results of the 2016 election is that no one is any longer laboring under the delusion that there’s any kind of “unbiased” source that can be relied on for complete information. “Traditional” news sources simply represent one particular set of biases. There’s plenty of issues on which they’re incapable of informing you. Most obviously, an enforced centrist perspective will fail to understand a situation where the “center” is falling apart and all new growth is happening on the “extremes” (that is, it will understand the situation incorrectly, as a “breakdown of communication” or a “legitimacy crisis” or whatever). So the popular response to this is the idea of a “balanced media diet.” The worry is that the internet allows and/or forces people to self-sort into ever more polarized communities, so you have to make the effort to seek out sources that oppose your existing beliefs. The villains then become “algorithms” that deliver pre-polarized information, or “cult-like” communities that suppress dissent.

Unfortunately, it’s not that simple. The most important source of epistemic closure is our finitude as physical beings. Simply put, there are only so many hours each day you can spend reading shit, so it’s more than a little odd to argue that people should be spending more of said hours reading things they believe to be more wrong. If you could really read everything, and also spend the requisite time to analyze and distill it all, then sure, that would solve the problem. In reality, though, you have to choose what you’re going to care about, and any choice you make is going to define a particular horizon. If you’re a feminist, for example, you could spend half of your time reading feminist sources and the other half reading anti-feminist sources, and this would give you a “balanced” perspective, in the sense that you’d understand what’s going on on both sides. But this understanding will necessarily be shallower than the one you’d get by focusing your time on one side; you’ll miss deeper arguments and distinctions and internal diversity. For one thing, you might come to believe that there are only “two sides,” which is not the case. Anyone who knows a second thing about feminism knows that its herstory is coated with blood spilled by many thousands of vicious internal disagreements. One way to get over feminist dogmatism is to read more anti-feminism, but an equally effective option is to read more feminism. There isn’t one choice that “works” and one choice that doesn’t. There are different choices that have different effects. Some bubbles are bigger than others, but you can’t not be in a bubble.

This is why blaming the internet or “algorithms” or whatever misses the mark. Like, I don’t enjoy defending tech assholes, but they really just aren’t relevant to this situation. There is a sort of consumer rights issue here; people should be able to find out how their feeds and things are being customized and change them if they want to. But arguing that search results should be more “responsible” is arguing the opposite: it’s arguing for non-transparent corporations to have more control over what people read. I mean, it’s pretty obvious that most people talking about this are only thinking things through from their side. They see lots of “bad” articles floating around, and they feel like “someone should do something,” so they imagine that Google can somehow code social responsibility for them. Practically speaking, though, you can’t make that kind of a distinction in general.1 “Misinformation” is a value judgment made by the end user. If you write an algorithm that adds more articles about global warming to the feeds of denialists, that same algorithm will necessarily also add more denialist articles to the feeds of people who believe in global warming. You can’t have it both ways. Rather, trying to have it both ways is exactly how things get fucked up. Someone at the New York Times gets it into their head that they have a “liberal bias” that needs to be corrected, so they hire an Islamophobic global warming denialist to write opinion columns. Problem solved.
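
To sketch the symmetry (purely hypothetical code, not any actual feed system): any rule of the form “show users more of what they disagree with” is direction-agnostic by construction, because truth never appears as a variable.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    stance: str  # e.g. "accepts_warming" or "denies_warming"

def diversify_feed(feed, pool, user_stance, n=3):
    """Append up to n articles whose stance differs from the user's.

    Note what's missing: the rule has no concept of which stance is
    true, only of which stances differ. So it treats "correcting
    misinformation" and "injecting it" as the same operation."""
    opposing = [a for a in pool if a.stance != user_stance]
    return feed + opposing[:n]

# For a denialist user this injects climate science into the feed;
# for everyone else, the very same rule injects denialism.
```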

People want to read things that accord with their beliefs, and – this is the important part – they have good reasons for doing so. The reason feminists, for example, disprefer reading misogynist diatribes isn’t because they’re offended or whatever, it’s because they believe feminism to be true, and they’re obviously more interested in reading things that are probably true than things that are probably false.

You don’t just automatically start understanding things once you’ve read broadly enough. You have to process the information, and how you do that – and why you’re doing it – is going to affect what conclusions you end up with. Like, there is a problem with certain types of feminists spending all of their time yelling at Bad Things and not actually developing their ideas. But if you’re one of these people, and you decide to “broaden your media diet,” all that’s going to happen is that you’re going to find more things to yell at. It’s going to strengthen your existing biases, and that’s going to happen regardless of what it is that you’re reading, because it’s what you want. This isn’t even a bad thing, because the only way this is not the case is if you lack the ability to critically analyze information, which is, um, a somewhat worse situation to be in. If your goal is just to avoid being wrong, then you might as well not read anything. But if your reason for reading things and drawing conclusions is to do something with the information, then you can’t just wait around until you’re “sure,” because that’s never. In order to actually get somewhere, you have to take a stand somewhere and start moving, which will necessitate rejecting opposing ideas. Breathing underwater requires a bubble.

I’m not just applying this to my own side, either. The fact that people believe all kinds of weird conspiracy theories about the Clintons makes perfect sense, because the Clintons really are classic amoral political schemers, so if you’re opposed to them, it’s more accurate than not to assume that they’re up to some shady shit. Besides, liberals believe whatever nonsense people come up with about Trump, too. It’s the same thing. This is the normal way human communication works.

It does remain the case that the normal way human communication works is badly, and that real lies have real consequences. If you believe that Planned Parenthood is literally dismembering infants and selling their body parts to, uh, somebody (I’m not deep enough into this to know whence the nationwide demand for baby torsos supposedly originates), your advocacy on the subject is going to be somewhat more zealous. But learning the actual fact that only X% of Planned Parenthood’s expenditures go towards abortion-related services doesn’t change the moral calculus of the situation. If abortion is evil, then a little bit of it is still evil. It’s certainly worthwhile to correct lies, but you can’t fact-check your way around morality. If abortion is actually moral, then Planned Parenthood’s particular operating details don’t matter. An organization that spent 100% of its funds on abortion and sold the remains for ice cream money would be a moral organization. Focusing on the nuts and bolts here means dodging the real issue, and this is generally the case in political discussions. Even if Clinton really did use her secret email server to help the Illuminati plan Benghazi, the actual question at hand remains which policies we prefer to advance as a society. In general, misinformation does not add a unique problem to our existing difficulties in figuring out how to talk to each other. It makes things worse, but it’s not itself a crisis.

What is a crisis is when these sorts of discussions become impossible, when an enforced “healthy diet” drains the flavor from the world. When you’re stuck reading nothing but “respectable” media sources, that’s when you have a real problem, and extremism is the solution to that problem. It’s what makes new things possible. Which means that, yes, even the recent explosive growth of rightist extremism has to be understood as a positive development. InfoWars may be maximally false, but if you don’t have InfoWars, you also don’t have the truth. The fact that people have these beliefs is a bad thing, of course, but given that they do, it’s better for them to be out in the open. I mean, their agenda hasn’t actually changed, right? Reagan talked pretty on the TV, but his whole cut-services-and-fellate-corporations deal was exactly the same thing as what the current government’s up to right now. People lately have been praising Bush Jr. for talking nice about Islam, but he was doing this at the same time that his administration was turning Muslims into America’s new Great Civilizational Enemy; Trump is just picking up where he left off. Those situations were worse than the one we’re in now – rather, those situations are why we’re now in our current situation – because there was more obfuscatory rhetoric that had to be disentangled before you could get at what was really going on. This is now less of a problem; we’re getting closer to the point where people actually know what the stakes are.

It’s comforting to imagine that there’s a “middle ground” where we can all get along peaceably, but there’s not. Extremism doesn’t create disagreements, it reveals the disagreements that were already there, because people have real disagreements. Pretending this is not the case prevents anything worthwhile from ever happening. We don’t want a society where there’s “reasonable debate” about sexism, where half the time the Hyde Amendment is in place and half the time it isn’t. We want a society where sexism doesn’t exist. We want everyone trapped inside the feminism bubble, permanently.

This is the truth that must be acknowledged. All the things that people are so concerned about these days – political polarization, ideological extremism, the speed and diversity of information, the dethronement of traditionally respected sources of various kinds of authority – are the things that are, in spite of everything, going well. There’s no way to “fix” this, because it’s not broken. What was broken was the “end of history” bullshit that convinced people there were no fights left to be had, and that situation is now better. We are more confused now because we are closer to the truth – we have, in at least some sense, stopped lying. This is what has to happen. Getting the ocean without the roar of its many waters is not a real option. The real options are: retreat or advance.

 


  1. From a technical perspective, the reason this can’t work is that you have to write the code before you know what data it’s going to be run against, so you would have to be able to predict what information is going to be true or false before that information has actually been generated, meaning you can’t rely on the details of the information itself, meaning you can’t actually be making a real judgment as to whether it’s “disinformation” or not; you can only be relying on contextual coincidence. And if you try to get around this by using human intervention, all you’ve done is appointed an arbitrary, unaccountable person to act as an arbiter of truth, which is obviously several steps backwards. 

People’s choice

This extremely boring controversy over Facebook’s topic sorter algorithm or whatever it is is extremely boring, but it’s at least good for one thing: it’s clarifying how people implicitly view Facebook, and, correspondingly, what kind of society they think they live in.

Now, the whole thing has obviously been ginned up by the Right-Wing Scandal Generator, which at this point seems to have self-actualized and gone Skynet. It’s essentially conspiracy theorist Mad Libs: take any liberal-ish group or any government agency except the military, slap on a charge about converting kittens to Satanism or saying something mean about white people, and see if it has legs. Which it usually does, since these people are operating under a severe case of epistemic closure.

Anyway, for the rest of us, the newsworthy bit was that Facebook actually has people deciding which stories are popular instead of blind algorithms. Of course, in practice, there’s no difference. Algorithms are written by people, and they carry whatever implicit or explicit biases went into their creation. The point, though, is that people were assuming Facebook didn’t have its fingers in the pie, and they were upset to find out that it did. This has happened before. When Facebook ran its emotional manipulation experiment, for example, there wasn’t any practical consequence anyone could point to, but people didn’t like the idea of Facebook picking and choosing what they saw instead of letting it happen “naturally.”

What makes this all not make sense is the fact that Facebook is a corporation. Corporations obviously have their own interests and biases. In fact, we expect them to; we understand corporations as actors, if not persons. This is why we expect them to do things like withdraw advertising from bigoted programs or support charities, and why we get mad when they outsource jobs or use stereotypes to sell products. It’s also why we talk about pointless things like corporate “greed” or “corruption” instead of focusing on the actual structure that causes them to act the way they do. So if people thought of Facebook in this way, there wouldn’t be anything untoward about its behavior. Of course Facebook, staffed largely by young liberals (or at least tech libertarians), is not going to be interested in promoting Racist Grandpa’s email forwards. Accusing Facebook of censoring conservative stories makes exactly as much sense as accusing Fox News of censoring liberal stories. And remember, it’s the small-government fetishists who are getting mad about this, which, yeah, it’s opportunism, but it’s not even a sensible claim unless you assume that Facebook has a general public responsibility. After all, these same people are currently engaged in a deathly struggle to save private corporations from such scourges as having to sign contraception coverage waiver forms and having to bake cakes for gay people.

So what this means is that people don’t think of Facebook as a corporation. And this makes total sense, because Facebook doesn’t do any of the things that corporations are supposed to be for. It doesn’t create a product that people buy, or create content supported by advertising. It’s not even something like Google’s search engine where it feels like a utility but is still a tool with an actual function. Facebook is a bulletin board. It allows people to do things with it rather than doing anything itself. Sure, it’s a piece of software that requires development and maintenance, but in terms of function, Facebook is essentially a park. It’s a public space where people come to interact with each other. It’s a commons. The only reason demands for neutrality in its operation are comprehensible is that everyone implicitly understands that it doesn’t make sense for anyone to be profiting off of it.

The funny thing about capitalism as a world-defining ideology is that nobody actually believes in it. We expect corporations to be good people rather than to follow the incentives that define their existence in the first place. And we expect the commons to be respected and maintained rather than privatized and pillaged. Despite the much-vaunted “cynicism” of the American public, people actually go around assuming they’re living in a much better society than they actually are – one that basically works for people, and whose problems are the result of bad actors rather than the necessary consequences of the systems that constitute it. A world of bad actors is quite a lot better than a world of bad systems, because a world of bad actors can be fixed by getting rid of the bad actors. But a world of bad systems will go wrong no matter how the people in it act, and we haven’t yet figured out how to reliably change systems for the better. One assumes there’s a way, but one also doesn’t get one’s hopes up.

In the meantime, if you really want a neutral platform, there’s only one reasonable course of action. Nationalize Facebook.

Gamed to death

My post about level ups needs an addendum, as there’s a related issue that’s somewhat more practical. That is, it’s an actual threat.

The concept of power growth can be generalized to the concept of accumulation, the difference being that accumulation doesn’t have to refer to anything. When you’re leveling up in a game, it’s generally for a reason, e.g. you need more HP in order to survive an enemy’s attack or something. Even in traditional games, though, this is not always the case. There are many RPGs where you have like twelve different stats and it’s not clear what half of them even do, yet it’s still satisfying to watch them all go up when you level. This leads many players to pursue “stat maxing” even when there’s no practical application for those stats. Thus, we see that the progression aspect of leveling is actually not needed to engage players. It is enough to provide the opportunity for mere accumulation, a.k.a. watching numbers go up. This might sound very close to literally watching paint dry, but the terrible secret of video games is that people actually enjoy it.

The extreme expression of this problem would be a game that consists only of leveling up, that has no actual gameplay but merely provides the player with the opportunity to watch numbers go up and rewards their “effort” with additional opportunities to watch numbers go up. This game, of course, exists; it’s called FarmVille, it’s been immensely popular and influential and has spawned a wide variety of imitators. The terror is real.

Of course, as its very popularity indicates, FarmVille itself is not the problem. In fact, while FarmVille is often taken to be the dark harbinger of the era of smartphone games, its design can be traced directly back to the traditional games that it supposedly supplanted (the worst trait of “hardcore” gamebros is that they refuse to ever look in the damn mirror). Even in action-focused games such as Diablo II or Resident Evil 4, much of the playtime involves running around and clicking on everything in order to accumulate small amounts of currency and items. While this has a purpose, allowing you to purchase new weapons and other items that help you out during the action segments, it doesn’t have to be implemented this way. You could just get the money automatically whenever you defeat an enemy, as you do in most RPGs. But even in RPGs where this happens, there are still treasures and other collectibles littering the environment. This is a ubiquitous design pattern, and it exists for a reason: because running around and picking up vaguely useful junk is fun.

This pattern goes all the way back to the beginning. Super Mario Bros., for example, had coins; they’re one of the defining aspects of what is basically the ur-text of video games. Again, these coins actually did something (they gave you extra lives, eventually. Getting up to 100 coins in the original Super Mario Bros. is actually surprisingly hard), but again again, this isn’t the actual reason they were there. They were added for a specific design reason: to provide players with guidance. Super Mario Bros. was a brand-new type of game when it came out; the designers knew that they had to make things clear in order to prevent players from getting lost. So one of the things they did was add coins at strategic locations to encourage the player to take certain actions and try to get to certain places. And the reason this works is because collecting coins is fun on its own, even before the player figures out that they’re going to need as many extra lives as they can get.

The coins here are positioned to indicate to the player that they're supposed to jump onto the moving platform to proceed.

And there’s something even more fundamental than collectibles, something that was once synonymous with the concept of video games: score. Back in the days of arcade games, getting a high score was presented as the goal of most games. When you were finished playing, the game would ask you to enter your initials, and then show you your place on the scoreboard, hammering in the idea that this was the point of playing. Naturally, since arcade games were designed to not be “completable,” this was a way of adding motivation to the gameplay. But there’s more to it than that. By assigning different point values to different actions, the designers are implicitly telling the player what they’re supposed to be doing. Scoring is inherently an act of valuation.

In Pac-Man, for example, there are two ways you can use the power pellets: you can get the ghosts off your ass for a minute while you try to clear the maze, or you can hunt the ghosts down while they’re vulnerable. Since the latter is worth more points than anything else, the game is telling you that this is the way you’re supposed to be playing. The reason for this, in this case, is that it’s more fun: chasing the ghosts creates an interesting back-and-forth dynamic, while simply traversing the maze is relatively boring. Inversely, old light-gun games like Area 51 or Time Crisis often had hostages that you were penalized for shooting. In a case like this, the game is telling you what not to do; rather than shooting everything indiscriminately, you were meant to be careful and distinguish between potential targets.
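
You can read a score table as a value system directly. A minimal sketch, using point values that match Pac-Man’s scoring as I remember it (treat the exact numbers as illustrative):

```python
# The table itself is the designers' value statement: one good ghost
# hunt is worth more than every dot in the maze.
POINTS = {
    "dot": 10,
    "power_pellet": 50,
    "ghost_1": 200,   # value doubles for each ghost caught on one pellet
    "ghost_2": 400,
    "ghost_3": 800,
    "ghost_4": 1600,
}

def score(events):
    """Total up a session; maximizing this is playing 'as intended'."""
    return sum(POINTS[e] for e in events)

print(score(["dot"] * 240))  # clearing the maze's 240 dots: 2400
print(score(["power_pellet", "ghost_1", "ghost_2", "ghost_3", "ghost_4"]))  # one hunt: 3050
```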

So, in summary, the point of “points” or any other “numbers that go up” is to provide an in-game value system. What, then, does this mean for a game like FarmVille, which consists only of points? It means that such a game has no values. It’s nihilistic. It’s essentially the unironic version of Duchamp’s Fountain. The point of Fountain was that the work itself had no traditional artistic merit; it “counted” as art only because it was presented that way. Similarly, FarmVille is not what you’d normally call a “game,” but it’s presented as one, so it is one. The difference, of course, is that Duchamp was making a rather direct negative point. People weren’t supposed to admire Fountain, they were supposed to go fuck themselves. FarmVille, on the other hand, expects people to genuinely enjoy it. Which they do.

And again, the point is that FarmVille is not an aberration; its nihilism is only the most naked expression of the nihilism inherent in the way modern video games are understood. One game that made this point was Progress Quest, a ruthless satire of the type of gameplay epitomized by FarmVille. In Progress Quest, there is literally no gameplay: you run the application and it just automatically starts making numbers go up. It’s a watching paint dry simulator. The catch is that Progress Quest predates FarmVille by several years (art imitates life, first as satire, then as farce); it was not parodying “degraded” smartphone games, but the popular and successful games of its own time, such as EverQuest, which would become a major influence on almost everything within the mainstream gaming sphere. The call is coming from inside the house.

Because the fact that accumulation is “for” something in a game like Diablo II ultimately amounts to no more than it does for FarmVille. You kill monsters so that you can get slightly better equipment and stats, which you then use to kill slightly stronger monsters and get slightly better equipment again, ad nauseam. It’s the same loop, only more spread out and convoluted; it fakes meaning by disguising itself. In this sense, FarmVille, like Fountain, is to be praised for revealing a simple truth that had become clouded by incestuous self-regard.

There is, of course, a real alternative, which is for games to actually have some kind of aesthetic value, and for that to be the motivation for gameplay. This isn’t hard to understand. Nobody reads a book because they get points for each page they turn; indeed, the person who reads a famous book simply “to have read it” is a figure of mockery. We read books because they offer us experiences that matter. There is nothing stopping video games from providing the same thing.

The catch is that doing this requires a realization that the primary audience for games is currently unwilling to make: that completing a goal in a video game is not a real accomplishment. As games have invested heavily in the establishment of arbitrary goals, they have taken their audience down the rabbit hole with them. Today, we are in a position where certain people actually think that being good at video games matters, that the conceptualization of games as skill-based challenges is metaphysically significant (just trust me on this one, there’s evidence for it but you really don’t want to see it). As a result, games have done an end-run around the concept of meaning. Rather than condemning Sisyphus to forever pushing his rock based on the idea that meaningless labor is the worst possible fate, we have instead convinced Sisyphus that pushing the rock is meaningful in the traditional sense; he now toils of his own volition, blissfully (I wish I could take credit for this metaphor, but this guy beat me to it).

This is an understandable mistake. As humans, limited beings seeking meaning in the raw physicality of the universe, we’ve become accustomed to looking for signs that distinguish meaningful labor from mere toil. It is far from an unusual mistake to confuse the sign for the destination. But the truth is that any possible goal (money, popularity, plaudits, power) is also something that we’ve made up. The universe itself provides us with nothing. But this realization does not have to stop us: we can insist on meaning without signs, abandon the word without losing the sense. This is the radical statement that Camus was making when he wrote that “we must imagine Sisyphus happy.” He was advising us to reject this fundamental aspect of our orientation towards reality.

We have not followed his advice. On the contrary, games have embraced their own meaninglessness. The most obvious symptom of this is achievements, which have become ubiquitous in all types of games (the fact that they’re actually built-in to Steam is evidence enough). Achievements are anti-goals, empty tokens that encourage players to perform tasks for no reason other than to have performed them. Many are quite explicit about this; they’re things like “[do X] 1000 more times than you would have to do it to complete the game.” Some achievements are better than this, some even point towards interesting things that add to the gameplay experience, but the point is the principle: that players are expected to perform fully arbitrary tasks and to expect nothing else from games. In light of this, it does not matter whether a game is fun or creative or original or visually appealing. No amount of window dressing can counteract the fact that games are fundamentally meaningless.

If you want a picture of the future of games, imagine a human finger clicking a button and a human eye watching a number go up. Forever.


While renouncing games is a justifiable tactical response to the current situation, it’s not a solution. Games are just a symptom. Game designers aren’t villains, they’re just hacks. They’re doing this stuff because it works; the problem is in people.

Accumulation essentially exploits a glitch in human psychology, similar to gambling (many of these games have an explicit gambling component). It compels people to act against their reason. It’s not at all uncommon these days to hear people talk about how they kept playing a game “past the point where it stopped being fun.” I’m not exactly sure what the source of the problem is. Evolution seems unlikely, as pre-civilized humans wouldn’t have had much opportunity for hoarding-type behavior. Also, the use of numbers themselves seems to be significant, which suggests a post-literate affliction. I suppose the best guess for the culprit would probably be capitalism. Certainly, the concept of currency motivates many people to accumulate it for no practical reason.

Anyway, I promised you a threat, so here it is:

“They are told to forget the ‘poor habits’ they learned at previous jobs, one employee recalled. When they ‘hit the wall’ from the unrelenting pace, there is only one solution: ‘Climb the wall,’ others reported. To be the best Amazonians they can be, they should be guided by the leadership principles, 14 rules inscribed on handy laminated cards. When quizzed days later, those with perfect scores earn a virtual award proclaiming, ‘I’m Peculiar’ — the company’s proud phrase for overturning workplace conventions.”

(Okay real talk I actually didn’t remember the bit about the “virtual award.” I started rereading the article for evidence and it was right there in the second paragraph. I’m starting to get suspicious about how easy these assholes are making this for me.)

What’s notable about this is not that Amazon turned out to be the bad guy. We already knew that, both because of the much worse situation of their warehouse workers and because, you know, it’s a corporation in a capitalist society. What’s important is this:

“[Jeff Bezos] created a technological and retail giant by relying on some of the same impulses: eagerness to tell others how to behave; an instinct for bluntness bordering on confrontation; and an overarching confidence in the power of metrics . . .

Amazon is in the vanguard of where technology wants to take the modern office: more nimble and more productive, but harsher and less forgiving.”

What’s happening in avant-garde workplaces like Amazon is the same thing that’s happened in games. The problem with games was that they weren’t providing any real value, and the problem with work in a capitalist society is that most of it is similarly pointless. The solution in games was to fake meaning, and the solution in work is going to be the same thing.

And, just as it did in games, this tactic is going to succeed:

“[M]ore than a few who fled said they later realized they had become addicted to Amazon’s way of working.

‘A lot of people who work there feel this tension: It’s the greatest place I hate to work,’ said John Rossman, a former executive there who published a book, ‘The Amazon Way.’

. . .

Amazon has rules that are part of its daily language and rituals, used in hiring, cited at meetings and quoted in food-truck lines at lunchtime. Some Amazonians say they teach them to their children.

. . .

‘If you’re a good Amazonian, you become an Amabot,’ said one employee, using a term that means you have become at one with the system.

. . .

[I]n its offices, Amazon uses a self-reinforcing set of management, data and psychological tools to spur its tens of thousands of white-collar employees to do more and more.

. . .

‘I was so addicted to wanting to be successful there. For those of us who went to work there, it was like a drug that we could get self-worth from.’”

It’s only once these people burn out and leave that they’re able to look back and realize they were working for nothing. This is exactly the same phenomenon as staying up all night playing some hack RPG because you got sucked in to the leveling mechanism. It’s mechanical addiction to a fake goal.

The fundamental problem here, of course, is that Amazon isn’t actually trying to make anything other than money. A common apologist argument for capitalism is that economic coercion is required to motivate people to produce things, but this is pretty obviously untrue. First, people have been building shit since long before currency came into the picture; more importantly, it’s obvious just from simple everyday observation that people are motivated to try to do a good job when they feel like they’re working on something that matters, and people slack off and cut corners when they know that what they’re doing is actually bullshit. The problem with work in a capitalist society is that people aren’t fools; the reason employees have to be actively “motivated” is that they know that what they’re doing doesn’t merit motivation.

The focus with Amazon has mostly been on the fact that they’re “mean”; the Times contrasts them with companies like Google that entice employees with lavish benefits rather than psychological bullying. But this difference is largely aesthetic; the reason Google offers benefits such as meals and daycare is that it expects its employees to live at their jobs, just as Amazon does.

As always, it’s important to view the system’s cruelest symptoms not as abnormal but as extra-normative behavior. The reason Amazon does what it does is because it can: it has the kind of monitoring technology required to pull this off and its clout commands the kind of devotion from its employees required to get away with it. Amazon is currently on the cutting edge; as information technology becomes more and more anodyne, this will become less and less the case. Consider that Google’s double-edged beneficence is only possible because Google is richer than fuck, consider the kind of cost-cutting horseshit your company pulls, and then consider the kind of cost-cutting horseshit your company would pull if it had Amazon-like levels of resourcefulness and devotion.

So, while publications like the New York Times are useful for getting the sort of “average” ruling-class perspective on the issues of the day, you have to keep the ideological assumptions of this perspective in mind, which in this case is super easy: the Times assumes that Amazon’s goal of maximizing its “productivity” is a valid and even virtuous one (also, did you notice how they claimed that this is happening because “technology wants” it to happen? Classic pure ideology). All of the article’s hand-wringing is merely about whether Amazon’s particular methods are “too harsh” or “unsustainable.” The truth, obviously, is that corporate growth itself is a bad thing because corporate growth means profit growth and profits are by definition the part of the economy getting sucked out by rich fucks instead of actually being used to produce things for people. This goes double for Amazon specifically, which doesn’t contribute any original functionality of its own, but merely supersedes functionalities already being provided by existing companies in a more profitable fashion.

And this is where things get scary. With video games, the only real threat is that, by locking themselves into their Sisyphean feedback loop, games will become hyper-effective at wasting the time of the kind of people who have that kind of time to waste. Tragic, in a sense, but in another sense we’re talking about people who are making a choice and who are consequently reaping what they’ve sown. But the problem with the economy is that when rich fucks play games, the outcome affects everybody. And when those games are designed against meaning, and all of us are obligated to play in order to survive, what we’re growing is a value system, and what we’re harvesting is nihilism. Bad design is a fate worse than death.

In this vein, I strongly recommend that you get a load of this asshole:

“’In the office of the future,’ said Kris Duggan, chief executive of BetterWorks, a Silicon Valley start-up founded in 2013, ‘you will always know what you are doing and how fast you are doing it. I couldn’t imagine living in a world where I’m supposed to guess what’s important, a world filled with meetings, messages, conference rooms, and at the end of the day I don’t know if I delivered anything meaningful.’”

Can you imagine living in a world where values are determined by humans? It’s getting kind of difficult!

When the situation is this fucked, even the New York Times has its moments:

“Mr. Bohra declined to let any of his employees be interviewed. But he said the work was more focused now, which meant smaller teams taking on bigger workloads.”

You know you’re an asshole when the shit you’re pulling is so blatantly horrific that even the “paper of record” is scoring sick burns on you from behind its veil of ersatz objectivity.


The thing is, when it comes to values, “money” in society has the same function as “score” in video games: it’s a heuristic that maps only loosely onto the thing that it’s actually supposed to represent. Ideally, economic growth would represent the actual human-life-improving aspects of a society, and to an extent, it does. Despite everything, most people really are trying to make the world a decent place to live. But a capitalist society is one where “growth” is pursued for its own sake, where spending a million dollars to feed starving children is just as good as spending that money on car decals, or on incrementally faster smartphones, or on weapons.

This is why you need to watch the fuck out any time someone starts talking about “meritocracy.” The problem with “meritocracy” is the same as the problem with “utilitarianism”: you have to actually define “merit” or “utility,” and that’s the entire question in the first place. With utilitarianism this is less of a problem, since it’s more of a philosophical question and this understanding is usually part of the discussion (also, when utilitarianism was first introduced it was a revolutionary new idea in moral philosophy, it’s just that today it tends to be invoked by people who want to pretend like they’ve solved morality when they actually haven’t even started thinking about it). But the meritocracy people are actually trying to get their system implemented; indeed, they often claim that their “meritocracy” already exists.

To be explicit, the word “meritocracy” is internally inconsistent. Claiming that a society should be a “democracy,” for example, establishes a goal: a society’s rulership should be as representative of the popular will as possible (that is, assuming the word “democracy” is being used in good faith, which is rarely the case). But the concept of “merit” requires a goal in order to be meaningful. It’s trivial to say that society should favor the “best,” because the question is precisely: the best at what? The most creative, or the most efficient? The most compassionate, or the most ruthless? Certainly, our current society, including our corporations, is controlled by people who are the best at something, it’s just that that “something” isn’t what most of us want to promote.
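
In code terms (a throwaway sketch, hypothetical people and traits): “the best” is an argmax, and an argmax is undefined until you supply the key function, which is exactly the part “meritocracy” leaves blank.

```python
candidates = [
    {"name": "A", "creativity": 9, "efficiency": 4, "ruthlessness": 2},
    {"name": "B", "creativity": 3, "efficiency": 8, "ruthlessness": 9},
]

# max() over these dicts won't even run without a key function;
# "best" is meaningless until you choose what to maximize.
print(max(candidates, key=lambda c: c["creativity"])["name"])    # A
print(max(candidates, key=lambda c: c["ruthlessness"])["name"])  # B
```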

The problem isn’t that these people are hiding their motives; they talk big but they aren’t actually that sophisticated, especially when it comes to philosophy. It’s worse: the problem is that they have no goals in the first place. For all their talk of “disruption,” they are in truth blindly following the value system implicitly established by the set of historical conditions they happen to be operating in (see also: Rand, Ayn). This is necessarily the case for anyone who focuses their life on making money, since money doesn’t actually do anything by itself; it means whatever society says it means. This is why rich fucks tend to turn towards philanthropy, or at least politics: as an attempt to salvage meaning from what they’ve done with their lives. But even then, the only thing they know how to do is to focus on reproducing the conditions of their own success. When gazing into the abyss, all they can see is themselves.

Thus far, the great hope of humanity has lain in the fact that our rulers are perpetually incapable of getting their shit together. The problem is that they no longer have to. If nuclear weapons gave them the ability to destroy the world on accident, information technology has given them the ability to destroy values just as accidentally. A blind, retarded beast is still capable of crushing through sheer weight. The reason achievements in games took off isn’t because anyone designed things that way, it’s because fake-goal-focused games appeal to people, they sell. The reason Amazon seems to be trying to design a dystopian workplace isn’t because of evil mastermindery, it’s simply because they have the resources to pursue their antigoal of corporate growth with full abandon. Indeed, what we mean by “dystopia” is not an ineffective society, it’s a society that is maximally effective towards bad ends. And if capitalists are allowed to define our values by omission, if the empty ideal of “meritocracy” is taken as common sense rather than an abdication of responsibility, if arbitrary achievement has replaced actual experience, then the rough beast’s hour has come round at last; it is slouching toward Silicon Valley to be born.

How to smell a rat

I’m all for taking tech assholes down a notch (or several notches), but this kind of alarmism isn’t actually helpful:

“It struck me that the search engine might know more about my unconscious than I do—a possibility that would put it in a position not only to predict my behavior, but to manipulate it. Lose your privacy, lose your free will—a chilling thought.”

Don’t actually read that article, it’s bad. It’s a bunch of pathetic bourgeois lifestyle details spun into a conspiracy theory that’s terrifying only in its dullness, like a lobotomized Philip K. Dick plot. But it is an instructive example of how to get things about as wrong as possible.

I want to start with a point about the “free will” thing, since there are some pretty common and illuminating errors at work here. The reason that people think there’s a contradiction between determinism and free will (there’s not) is that they think determinism means that people can “predict” what you’re going to do, and therefore you aren’t really making a decision. This isn’t even necessarily true on its own: it may not be practically possible to do the calculations required to simulate a human brain fast enough for the results to be useful (that is, faster than the speed at which the universe does them. The reason we can calculate things faster than the universe can is that we abstract away all the irrelevant bits, but when it comes to something as complex as the brain, almost everything is relevant. This is why our ability to predict the weather is limited, for example. There’s too much relevant data to process in the amount of time we have to do it). But the more fundamental point is that free will has nothing to do with predictability.

Imagine you’re out to dinner with a friend who’s a committed vegan. You look at the menu and notice there’s only one vegan entree. Given this, you can predict with very high accuracy what your friend is going to order. But the reason you can do this is precisely because of your friend’s free will: their predictability is the result of a choice they made. There’s only one possible thing they can do, but that’s because it’s the only thing that they want to do.

Inversely, imagine your friend instead has a nervous disorder that causes them to freeze up when faced with a large number of choices. Their coping mechanism in such situations is to quickly make a completely random choice. Here, you can’t predict at all what your friend is going to order, and in this case it’s precisely because they aren’t making a free choice. They can potentially order anything, but the one thing they can’t do is order something they actually want.

The source of the error here is that people interpret “free will” to mean “I’m a special snowflake.” Since determinism means that you aren’t special (you’re just an object like everything else), it must also mean that you don’t have free will. But this folk notion of “free will” as “freedom from constraints” is a fantasy; as demonstrated by our vegan friend, freedom, properly understood, is actually an engagement with constraints. (There’s no such thing as there being no constraints; if you were floating in a featureless void, there would be nothing that could have caused you to develop any actual characteristics. Practically speaking, you wouldn’t exist.) Indeed, nobody is actually a vegan as such; rather, people are vegan because of facts about the real world that, under a certain moral framework, compel this choice.

This applies broadly: rather than the laws of physics preventing us from making free choices, it is only because we live in an ordered universe that our choices are real. The only two possibilities are order or chaos, and it’s obvious that chaos is precisely the situation in which there really wouldn’t be any such thing as free will.

The third alternative that some people seem to be after is something that is ordered but is “outside” the laws of physics. Let’s call this thing “soul power.” The idea is that soul power would allow a person’s will to impinge upon the laws of physics, cheating determinism. But if soul power allows you to override the laws of physics, then all that means is that we instead need laws of soul power to understand the universe; if there were no such laws, if soul power were chaotic, then it wouldn’t solve the problem. What’s required is something that allows us to use past information to make a decision in the present, i.e. the future has to be determined by the past. And if this is so, it must be possible to understand the principles by which soul power operates. Ergo, positing soul power doesn’t solve anything; the difference between physical laws and soul laws is merely an implementation detail.

Relatedly, what your desires are in the first place is also either explicable or chaotic. So, in the same way, it doesn’t matter whether your desires come from basic physics or from some sort of divine guidance; whatever the source, your desires are only meaningful if they arise from the appropriate sorts of real-world interactions. If, for example, you grow up watching your grandfather slowly die of lung cancer after a lifetime of smoking, that experience needs to be able to compel you to not start smoking. The situation where this is not the case is obviously the one in which you do not have free will. What would be absurd is if you somehow had a preference for or against smoking that was not based on your actual experiences with the practice.

Thus, these are the two halves of the free will fantasy: that it makes you a special little snowflake exempt from the limits of science, and that you’re capable of “pure” motivations that come from the deepest part of your soul and are unaffected by dirty reality. What is important to realize is that both of these ideas are completely wrong, and that free will is still a real thing.

When we understand this, we can start to focus on what actually matters about free will. Rather than conceptualizing it holistically, that is, arguing about whether humans “do” or “don’t” have free will, we can look at individual decisions and determine whether or not they are being made freely.

Okay, so, we were talking about mass data acquisition by corporations (“Big Data” is a bad concept and you shouldn’t use it). Since none of the corporations in question employ a mercenary army (yet), what we should be talking about is economic coercion. As a basic example: Amazon has made a number of power plays for the purpose of controlling as much commercial activity as possible. As a result, the convenience offered by Amazon is such that it is difficult for many people not to use it, despite it now being widely recognized that Amazon is a deeply immoral company. If there were readily available alternatives to Amazon, or if our daily lives were unharried enough to allow us to find non-readily available alternatives, we would be more able to take the appropriate actions with regard to the information we’ve received about Amazon’s employment practices. The same basic dynamic applies to every other “disruptive” company.

(Side note: how hilarious is it that “disruptive” is the term used by people who support the practice? It’s such a classic nerd blunder to be so clueless about the fact that people can disagree with their goals that they take a purely negative term and try to use it like a cute joke, oblivious to the fact that they’re giving away the game.)

The end goal of Amazon, Google, and Facebook alike is to become “company towns,” such that all your transactions have to go through them (for Amazon this means your literal financial transactions, for Google it’s your access to information, and for Facebook it’s social interaction, which is why Facebook is the skeeviest one of the bunch). Of course, another name for this type of situation is “monopoly,” which is the goal of every corporation on some level (Uber is making a play for a monopoly on urban transportation, for example). But company towns and monopolies are things that have actually happened in the past, without the aid of mass data collection. So if the ubiquity of these companies is starting to seem scary (it is), it would probably be a good idea to keep our eyes on the prize.

And while the data acquisition that these companies engage in certainly makes all of this easier, it isn’t actually the cause. The cause, obviously, is the profit motive. That’s the only reason any of these companies are doing anything. I mean, a lot of this stuff actually is convenient. If we lived in a society that understood real consent and wasn’t constantly trying to fleece people, mass data acquisition would be a great tool with all sorts of socially positive uses. This wouldn’t be good for business, of course, just good for humanity.

But the people who constantly kvetch about how “spooky” it is that their devices are “spying” on them don’t actually oppose capitalism. On the contrary, these people are upset precisely because they’ve completely bought into the consumerist fantasy that their participation in the market defines them as a unique individual. This fantasy used to be required to sell people shit; it’s not like you can advertise a bottle of cancer-flavored sugar water on its merits. But the advent of information technology has shattered the illusion, revealing unavoidably that, from an economic point of view, each of us is a mere consumer. The only aspect of your being that capitalism cares about is how much wealth can be extracted from you. You are literally a number in a spreadsheet.

But destroying the fantasy ought to be a step forward, since it was horseshit in the first place. That’s why looking at the issue of mass surveillance from a consumer perspective is petty as all fuck. I actually feel pretty bad for the person who wrote that article (you remember, the one up at the top that you didn’t read), since he’s apparently living in a world where the advertisements he receives constitute a recognition of his innermost self. And, while none of us choose to participate in a capitalist society, there does come a point at which you’re asking for it. If you’re wearing one of those dumbass fitness wristbands all day long so that you can sync the data to your smartphone, you pretty much deserve whatever happens to you. Because guess what: there actually is more to life than market transactions. It is entirely within your abilities to sit down and read a fucking book, and I promise that nobody is monitoring your brainwaves to gain insight into your interpretation of Kafka.

(Actually, one of the reasons this sort of “paranoia” is so hard to swallow is that the recommendation engines and so forth that we’re talking about are fucking awful. I have no idea how anyone is capable of being spooked by how “clever” these bone-stupid algorithms are. Amazon can’t even make the most basic semantic distinctions: when you click on something, it has no idea whether you’re looking at it for yourself, or for a gift, or because you saw it on Worst Things For Sale, or because it was called Barbie and Her Sisters: Puppy Rescue and you just had to know what the hell that was. If they actually were monitoring you reading The Metamorphosis they’d probably be trying to sell you bug spray.)
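For a sense of just how little these systems “know,” here’s a minimal sketch of the item-to-item co-occurrence approach that click-based recommenders are broadly built on (toy Python with invented data; not anyone’s actual production code):

    from collections import defaultdict
    from itertools import combinations

    # Every click is logged as the same undifferentiated event. There is
    # no field for "gift," "morbid curiosity," or "had to know what the
    # hell that was."
    sessions = [
        ["the_metamorphosis", "bug_spray"],
        ["the_metamorphosis", "kafka_biography"],
        ["bug_spray", "ant_traps"],
    ]

    # Count how often each pair of items shows up in the same session.
    cooccurrence = defaultdict(int)
    for session in sessions:
        for a, b in combinations(set(session), 2):
            cooccurrence[frozenset((a, b))] += 1

    def recommend(item):
        # Rank other items by raw co-click counts. That's the whole
        # "insight into your unconscious."
        scores = defaultdict(int)
        for pair, count in cooccurrence.items():
            if item in pair:
                (other,) = pair - {item}
                scores[other] += count
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("the_metamorphosis"))  # ['bug_spray', 'kafka_biography'] (ties break arbitrarily)

The “model” is just counting; why you clicked is simply not in the data, which is exactly how you end up with bug spray recommendations next to Kafka.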

Forget Google, this is the real threat to humanity: the petty bourgeois lifestyle taken to such an extreme that the mere recognition of forces greater than one’s own consumption habits is enough to precipitate an existential crisis. I’m fairly embarrassed to actually have to say this, but it’s apparently necessary: a person is not defined by their browsing history, there is such a thing as the human heart, and you can’t map it out by correlating data from social media posts.

Of course, none of this means that mass surveillance is not a critical issue; quite the opposite. We’ve pretty obviously been avoiding the real issue here, which is murder. The most extreme consequences of mass surveillance are not theoretical, they have already happened to people like Abdulrahman al-Awlaki. This is why it is correct to treat conspiracy theorists like addled children: for all their bluster, they refuse to engage with the actual conspiracies that are actually killing people right now. They’re play-acting at armageddon.

There is one term that must be understood by anyone who wants to even pretend to have the most basic grounding from which to speak about political issues, and that term is COINTELPRO.

“A March 4th, 1968 memo from J Edgar Hoover to FBI field offices laid out the goals of the COINTELPRO – Black Nationalist Hate Groups program: ‘to prevent the coalition of militant black nationalist groups;’ ‘to prevent the rise of a messiah who could unify and electrify the militant black nationalist movement;’ ‘to prevent violence on the part of black nationalist groups;’ ‘to prevent militant black nationalist groups and leaders from gaining respectability;’ and ‘to prevent the long-range growth of militant black nationalist organizations, especially among youth.’ Included in the program were a broad spectrum of civil rights and religious groups; targets included Martin Luther King, Malcolm X, Stokely Carmichael, Eldridge Cleaver, and Elijah Muhammad.”

“From its inception, the FBI has operated on the doctrine that the ‘preliminary stages of organization and preparation’ must be frustrated, well before there is any clear and present danger of ‘revolutionary radicalism.’ At its most extreme dimension, political dissidents have been eliminated outright or sent to prison for the rest of their lives. There are quite a number of individuals who have been handled in that fashion. Many more, however, were ‘neutralized’ by intimidation, harassment, discrediting, snitch jacketing, a whole assortment of authoritarian and illegal tactics.”

“One of the more dramatic incidents occurred on the night of December 4, 1969, when Panther leaders Fred Hampton and Mark Clark were shot to death by Chicago policemen in a predawn raid on their apartment. Hampton, one of the most promising leaders of the Black Panther party, was killed in bed, perhaps drugged. Depositions in a civil suit in Chicago revealed that the chief of Panther security and Hampton’s personal bodyguard, William O’Neal, was an FBI infiltrator. O’Neal gave his FBI contacting agent, Roy Mitchell, a detailed floor plan of the apartment, which Mitchell turned over to the state’s attorney’s office shortly before the attack, along with ‘information’ — of dubious veracity — that there were two illegal shotguns in the apartment. For his services, O’Neal was paid over $10,000 from January 1969 through July 1970, according to Mitchell’s affidavit.”

The reason this must be understood is that COINTELPRO is what happens when the government considers something an actual threat: they shut it the fuck down. If the government isn’t attempting to wreck your shit, it’s because you don’t matter.

With regard to the suppression of political discontent in America, it’s commonly acknowledged that “things are better now,” meaning it’s been a while since we’ve had a real Kent State Massacre type of situation. (This isn’t to say that the government is not busy killing Americans, only that these killings, most obviously murders by police, are not political in the sense we’re discussing here: they’re part of a system of control, not a response to a direct threat.) But this is only because Americans are now so comfortable that no one living in America is willing to take things to the required level (consider that the police were able to quietly rout Occupy in the conventional manner, without creating any inconvenient martyrs). This is globalization at work: as our slave labor has been outsourced, so too has our discontent.

And none of this actually has anything to do with surveillance technology per se. Governments kill whoever they feel like using whatever technology happens to be available at the time. If a movement gets to be a big enough threat that the government actually feels the need to take it down the hard way, they certainly will use the data provided by tech companies to do so. But not having that data wouldn’t stop them. The level of available technology is not the relevant criterion. Power is.

It would, of course, be great if we could pass some laws preventing the government from blithely snatching up any data it can get its clumsy fingers around, as well as regulations enforcing real consent for data acquisition by tech companies. But the fact that lawmakers have a notoriously hard time keeping up with technology is more of a feature than a bug. The absence of a real legislative framework creates a situation in which both the government and corporations are free to do pretty much whatever the hell they want. As such, there’s a strong disincentive for anyone who matters to actually try to change this state of affairs.

In summary, mass surveillance is a practical problem, not a philosophical one. The actual thing keeping us out of a 1984-style surveillance situation is that all the required data can’t practically be processed. This isn’t just expensive, it’s physically impossible: surveilling everyone produces data at roughly the rate at which the population lives, so there is about as much data to review as there are person-hours available to review it. So what actually happens is that the data all gets hoovered up and stored on some big server somewhere, dormant and invisible, until someone makes the political choice to access it in a certain way, looking for a certain pattern – and then decides what action to take in response to their findings. The key element in this scenario is not the camera on the street (or in your pocket), but the person with their finger on the trigger.
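The arithmetic behind that claim is worth spelling out. A back-of-envelope sketch (round hypothetical numbers in Python, not real statistics):

    # Why total surveillance can't be totally watched.
    population = 300_000_000       # people under surveillance
    surveilled_hours_per_day = 16  # waking hours of data each person generates

    data_hours_per_day = population * surveilled_hours_per_day

    analyst_hours_per_day = 8      # one analyst's working day
    analysts_needed = data_hours_per_day / analyst_hours_per_day

    print(f"{analysts_needed:,.0f} analysts")  # 600,000,000 -- twice the population

However you tune the numbers, watching everyone requires on the order of everyone, which is why the binding constraint is, and will remain, the person deciding what to look for.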

Unless you work for the Atlantic, in which case you can write what appears to be an entire cover article on the subject without ever mentioning any of this. So when you hear these jokers going on about how “spooky” it is that their smartphones are spying on them, recognize this attitude for what it is: the expression of a state of luxury so extreme that it makes petty cultural detritus like targeted advertising actually seem meaningful.