I’m all for taking tech assholes down a notch (or several notches), but this kind of alarmism isn’t actually helpful:
“It struck me that the search engine might know more about my unconscious than I do—a possibility that would put it in a position not only to predict my behavior, but to manipulate it. Lose your privacy, lose your free will—a chilling thought.”
Don’t actually read that article, it’s bad. It’s a bunch of pathetic bourgeois lifestyle details spun into a conspiracy theory that’s terrifying only in its dullness, like a lobotomized Philip K. Dick plot. But it is an instructive example of how to get things about as wrong as possible.
I want to start with a point about the “free will” thing, since there are some pretty common and illuminating errors at work here. The reason that people think there’s a contradiction between determinism and free will (there’s not) is that they think determinism means that people can “predict” what you’re going to do, and therefore you aren’t really making a decision. This isn’t even necessarily true on its own: it may not be practically possible to do the calculations required to simulate a human brain fast enough for the results to be useful, that is, faster than the speed at which the universe does them. The reason we can calculate things faster than the universe can is that we abstract away all the irrelevant bits, but when it comes to something as complex as the brain, almost everything is relevant. (This is why our ability to predict the weather is limited, for example: there’s too much relevant data to process in the amount of time we have to do it.) But the more fundamental point is that free will has nothing to do with predictability.
Imagine you’re out to dinner with a friend who’s a committed vegan. You look at the menu and notice there’s only one vegan entree. Given this, you can predict with very high accuracy what your friend is going to order. But the reason you can do this is precisely because of your friend’s free will: their predictability is the result of a choice they made. There’s only one possible thing they can do, but that’s because it’s the only thing that they want to do.
Inversely, imagine your friend instead has a nervous disorder that causes them to freeze up when faced with a large number of choices. Their coping mechanism in such situations is to quickly make a completely random choice. Here, you can’t predict at all what your friend is going to order, and in this case it’s precisely because they aren’t making a free choice. They can potentially order anything, but the one thing they can’t do is order something they actually want.
The source of the error here is that people interpret “free will” to mean “I’m a special snowflake.” Since determinism means that you aren’t special, you’re just an object like everything else, it must also mean that you don’t have free will. But this folk notion of “free will” as “freedom from constraints” is a fantasy; as demonstrated by our vegan friend, freedom, properly understood, is actually an engagement with constraints. (There’s no such thing as there being no constraints; if you were floating in a featureless void, there would be nothing that could have caused you to develop any actual characteristics. Practically speaking, you wouldn’t exist.) Indeed, nobody is actually a vegan as such; rather, people are vegan because of facts about the real world that, under a certain moral framework, compel this choice.
This applies broadly: rather than the laws of physics preventing us from making free choices, it is only because we live in an ordered universe that our choices are real. The only two possibilities are order or chaos, and it’s obvious that chaos is precisely the situation in which there really wouldn’t be any such thing as free will.
The third alternative that some people seem to be after is something that is ordered but is “outside” the laws of physics. Let’s call this thing “soul power.” The idea is that soul power would allow a person’s will to impinge upon the laws of physics, cheating determinism. But if soul power allows you to override the laws of physics, then all that means is that we instead need laws of soul power to understand the universe; if there were no such laws, if soul power were chaotic, then it wouldn’t solve the problem. What’s required is something that allows us to use past information to make a decision in the present, i.e. the future has to be determined by the past. And if this is so, it must be possible to understand the principles by which soul power operates. Ergo, positing soul power doesn’t solve anything; the difference between physical laws and soul laws is merely an implementation detail.
Relatedly, what your desires are in the first place is also either explicable or chaotic. So, in the same way, it doesn’t matter whether your desires come from basic physics or from some sort of divine guidance; whatever the source, your desires are only meaningful if they arise from the appropriate sorts of real-world interactions. If, for example, you grow up watching your grandfather slowly die of lung cancer after a lifetime of smoking, that experience needs to be able to compel you to not start smoking. The situation where this is not the case is obviously the one in which you do not have free will. What would be absurd is if you somehow had a preference for or against smoking that was not based on your actual experiences with the practice.
Thus, these are the two halves of the free will fantasy: that it makes you a special little snowflake exempt from the limits of science, and that you’re capable of “pure” motivations that come from the deepest part of your soul and are unaffected by dirty reality. What is important to realize is that both of these ideas are completely wrong, and that free will is still a real thing.
When we understand this, we can start to focus on what actually matters about free will. Rather than conceptualizing it holistically, that is, arguing about whether humans “do” or “don’t” have free will, we can look at individual decisions and determine whether or not they are being made freely.
Okay, so, we were talking about mass data acquisition by corporations (“Big Data” is a bad concept and you shouldn’t use it). Since none of the corporations in question employ a mercenary army (yet), what we should be talking about is economic coercion. As a basic example: Amazon has made a number of power plays for the purpose of controlling as much commercial activity as possible. As a result, the convenience offered by Amazon is such that it is difficult for many people not to use it, despite it now being widely recognized that Amazon is a deeply immoral company. If there were readily available alternatives to Amazon, or if our daily lives were unharried enough to allow us to find non-readily available alternatives, we would be more able to take the appropriate actions with regard to the information we’ve received about Amazon’s employment practices. The same basic dynamic applies to every other “disruptive” company.
(Side note: how hilarious is it that “disruptive” is the term used by people who support the practice? It’s such a classic nerd blunder to be so clueless about the fact that people can disagree with their goals that they take a purely negative term and try to use it like a cute joke, oblivious to the fact that they’re giving away the game.)
The end goal of Amazon, Google, and Facebook alike is to become “company towns,” such that all your transactions have to go through them (for Amazon this means your literal financial transactions, for Google it’s your access to information and for Facebook it’s social interaction, which is why Facebook is the skeeviest one out of the bunch). Of course, another name for this type of situation is “monopoly,” which is the goal of every corporation on some level (Uber is making a play for monopoly on urban transportation, for example). But company towns and monopolies are things that actually have happened in the past, without the aid of mass data collection. So if the ubiquity of these companies is starting to seem scary (it is), it would probably be a good idea to keep our eyes on the prize.
And while the data acquisition that these companies engage in certainly makes all of this easier, it isn’t actually the cause. The cause, obviously, is the profit motive. That’s the only reason any of these companies are doing anything. I mean, a lot of this stuff actually is convenient. If we lived in a society that understood real consent and wasn’t constantly trying to fleece people, mass data acquisition would be a great tool with all sorts of socially positive uses. This wouldn’t be good for business, of course, just good for humanity.
But the people who constantly kvetch about how “spooky” it is that their devices are “spying” on them don’t actually oppose capitalism. On the contrary, these people are upset precisely because they’ve completely bought into the consumerist fantasy that their participation in the market defines them as a unique individual. This fantasy used to be required to sell people shit; it’s not like you can advertise a bottle of cancer-flavored sugar water on its merits. But the advent of information technology has shattered the illusion, revealing unavoidably that, from an economic point of view, each of us is a mere consumer. The only aspect of your being that capitalism cares about is how much wealth can be extracted from you. You are literally a number in a spreadsheet.
But destroying the fantasy ought to be a step forward, since it was horseshit in the first place. That’s why looking at the issue of mass surveillance from a consumer perspective is petty as all fuck. I actually feel pretty bad for the person who wrote that article (you remember, the one up at the top that you didn’t read), since he’s apparently living in a world where the advertisements he receives constitute a recognition of his innermost self. And, while none of us choose to participate in a capitalist society, there does come a point at which you’re asking for it. If you’re wearing one of those dumbass fitness wristbands all day long so that you can sync the data to your smartphone, you pretty much deserve whatever happens to you. Because guess what: there actually is more to life than market transactions. It is entirely within your abilities to sit down and read a fucking book, and I promise that nobody is monitoring your brainwaves to gain insight into your interpretation of Kafka.
(Actually, one of the reasons this sort of “paranoia” is so hard to swallow is that the recommendation engines and so forth that we’re talking about are fucking awful. I have no idea how anyone is capable of being spooked by how “clever” these bone-stupid algorithms are. Amazon can’t even make the most basic semantic distinctions: when you click on something, it has no idea whether you’re looking at it for yourself, or for a gift, or because you saw it on Worst Things For Sale, or because it was called Barbie and Her Sisters: Puppy Rescue and you just had to know what the hell that was. If they actually were monitoring you reading The Metamorphosis they’d probably be trying to sell you bug spray.)
Forget Google, this is the real threat to humanity: the petty bourgeois lifestyle taken to such an extreme that the mere recognition of forces greater than one’s own consumption habits is enough to precipitate an existential crisis. I’m fairly embarrassed to actually have to say this, but it’s apparently necessary: a person is not defined by their browsing history, there is such a thing as the human heart, and you can’t map it out by correlating data from social media posts.
Of course, none of this means that mass surveillance is not a critical issue; quite the opposite. We’ve pretty obviously been avoiding the real issue here, which is murder. The most extreme consequences of mass surveillance are not theoretical, they have already happened to people like Abdulrahman al-Awlaki. This is why it is correct to treat conspiracy theorists like addled children: for all their bluster, they refuse to engage with the actual conspiracies that are actually killing people right now. They’re play-acting at armageddon.
There is one term that must be understood by anyone who wants to even pretend to have the most basic grounding from which to speak about political issues, and that term is COINTELPRO.
“A March 4th, 1968 memo from J Edgar Hoover to FBI field offices laid out the goals of the COINTELPRO – Black Nationalist Hate Groups program: ‘to prevent the coalition of militant black nationalist groups;’ ‘to prevent the rise of a messiah who could unify and electrify the militant black nationalist movement;’ ‘to prevent violence on the part of black nationalist groups;’ ‘to prevent militant black nationalist groups and leaders from gaining respectability;’ and ‘to prevent the long-range growth of militant black nationalist organizations, especially among youth.’ Included in the program were a broad spectrum of civil rights and religious groups; targets included Martin Luther King, Malcolm X, Stokely Carmichael, Eldridge Cleaver, and Elijah Muhammad.”
“From its inception, the FBI has operated on the doctrine that the ‘preliminary stages of organization and preparation’ must be frustrated, well before there is any clear and present danger of ‘revolutionary radicalism.’ At its most extreme dimension, political dissidents have been eliminated outright or sent to prison for the rest of their lives. There are quite a number of individuals who have been handled in that fashion. Many more, however, were ‘neutralized’ by intimidation, harassment, discrediting, snitch jacketing, a whole assortment of authoritarian and illegal tactics.”
“One of the more dramatic incidents occurred on the night of December 4, 1969, when Panther leaders Fred Hampton and Mark Clark were shot to death by Chicago policemen in a predawn raid on their apartment. Hampton, one of the most promising leaders of the Black Panther party, was killed in bed, perhaps drugged. Depositions in a civil suit in Chicago revealed that the chief of Panther security and Hampton’s personal bodyguard, William O’Neal, was an FBI infiltrator. O’Neal gave his FBI contacting agent, Roy Mitchell, a detailed floor plan of the apartment, which Mitchell turned over to the state’s attorney’s office shortly before the attack, along with ‘information’ — of dubious veracity — that there were two illegal shotguns in the apartment. For his services, O’Neal was paid over $10,000 from January 1969 through July 1970, according to Mitchell’s affidavit.”
The reason this must be understood is that COINTELPRO is what happens when the government considers something an actual threat: they shut it the fuck down. If the government isn’t attempting to wreck your shit, it’s because you don’t matter.
With regard to the suppression of political discontent in America, it’s commonly acknowledged that “things are better now,” meaning it’s been a while since we’ve had a real Kent State Massacre type of situation. (Which isn’t to say that the government is not busy killing Americans, only that these killings, most obviously murders by police, are not political in the sense we’re discussing here: they’re part of a system of control, not a response to a direct threat.) But this is only because Americans are now so comfortable that no one living in America is willing to take things to the required level (consider that the police were able to quietly rout Occupy in the conventional manner, without creating any inconvenient martyrs). This is globalization at work: as our slave labor has been outsourced, so too has our discontent.
And none of this actually has anything to do with surveillance technology per se. Governments kill whoever they feel like using whatever technology happens to be available at the time. If a movement gets to be a big enough threat that the government actually feels the need to take it down the hard way, they certainly will use the data provided by tech companies to do so. But not having that data wouldn’t stop them. The level of available technology is not the relevant criterion. Power is.
It would, of course, be great if we could pass some laws preventing the government from blithely snatching up any data it can get its clumsy fingers around, as well as regulations enforcing real consent for data acquisition by tech companies. But the fact that lawmakers have a notoriously hard time keeping up with technology is more of a feature than a bug. The absence of a real legislative framework creates a situation in which both the government and corporations are free to do pretty much whatever the hell they want. As such, there’s a strong disincentive for anyone who matters to actually try to change this state of affairs.
In summary, mass surveillance is a practical problem, not a philosophical one. The actual thing keeping us out of a 1984-style surveillance situation is the fact that all the required data can’t practically be processed: watching everyone all the time would consume roughly as many person-hours as are being recorded in the first place. So what actually happens is that the data all gets hoovered up and stored on some big server somewhere, dormant and invisible, until someone makes the political choice to access it in a certain way, looking for a certain pattern, and then decides what action to take in response to their findings. The key element in this scenario is not the camera on the street (or in your pocket), but the person with their finger on the trigger.
Unless you work for the Atlantic, in which case you can write what appears to be an entire cover article on the subject without ever mentioning any of this. So when you hear these jokers going on about how “spooky” it is that their smartphones are spying on them, recognize this attitude for what it is: the expression of a state of luxury so extreme that it makes petty cultural detritus like targeted advertising actually seem meaningful.