We all had a good laugh when Apple decided that the future of technology was making you unlock your phone by wiggling it in front of your face, every time you need to use it, in public. But the thing about extremely stupid ideas is that they have real underlying causes, which is why the funniest things are often simultaneously the most serious. This is no exception, and the real issue here is a particularly ugly one.
We should start by admitting an oft-ignored truth, which is that passwords are good. They’re the correct form of security at the level of the individual user, and the reason for this is that they are a proper technical implementation of consent. The problem is that, when a system gets a request to provide access to an account, it has no idea why or from where the request is coming; it just has the request itself. So the requirement is that access is provided if and only if the person associated with the account wants it to be provided. The way you implement this is by establishing an unambiguous communication signal. This works just like a safe word in a BDSM scene: you take a signal that would normally never occur and assign a fixed meaning to it, so that when it does occur, you know exactly what it means. That’s what a password is, and that’s why it works. “Security questions,” on the other hand, are precisely how passwords don’t work, because anything personally associated with you is not a low-frequency signal. Anyone who knows that information can just send it in, so it doesn’t accord with user consent. All those celebrities who got hacked were actually compromised through their security questions, because of course they were, because personal information about celebrities is publicly available. They would have been perfectly fine had their email systems simply relied on ordinary passwords.
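If you want to see how little is actually going on here, the entire logic of a password check, from the server’s point of view, fits in a few lines. Here’s a minimal sketch (hypothetical names; a real server would store a salted hash rather than the secret itself): grant access if and only if the request carries the pre-agreed signal, and consult nothing else about the request.

```python
import hmac

# Hypothetical in-memory store: account -> the pre-agreed signal.
# (A real server would store a salted hash rather than the secret itself;
# the shape of the gate is the same either way.)
SECRETS = {"alice": "kittensarecute"}

def grant_access(account: str, submitted: str) -> bool:
    """Grant access if and only if the request carries the pre-agreed
    signal for this account. Nothing else about the request (origin,
    motive, claimed identity) is consulted."""
    stored = SECRETS.get(account)
    if stored is None:
        return False
    # Constant-time comparison, so the check itself doesn't leak anything.
    return hmac.compare_digest(stored.encode(), submitted.encode())
```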
Furthermore, none of the alleged problems with passwords are real problems. The supposed reason for all the stupid character-class requirements on passwords is that they increase complexity, but this doesn’t actually matter. The only thing that matters is that the signal is low frequency, and the problem with a password like “password123” isn’t that it lacks some particular combination of magic characters,1 but simply that it’s high frequency. But anything that wouldn’t be within a random person’s top 100 guesses is, for practical purposes, zero frequency, so a password like “kittensarecute” or “theboysarebackintown” is essentially 100% secure. There’s no actual reason to complicate it any further, and in fact several reasons not to, because forgetting your password or having to write it down are real security threats.
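To put rough numbers on “low frequency” (purely illustrative parameters, not a claim about any particular password): a handful of random common words already creates a guess space on the same order as a character-soup password, and either one is unreachable at the attempt rates a server should be permitting anyway.

```python
import math

# Back-of-the-envelope guess-space comparison (illustrative numbers only).
WORDLIST = 7776        # size of a Diceware-style wordlist
WORDS = 4              # a passphrase of four random common words
CHARSET = 95           # printable ASCII characters
JUMBLE_LEN = 8         # a typical "complex" password

passphrase_space = WORDLIST ** WORDS      # ~3.7e15 combinations (~2^52)
jumble_space = CHARSET ** JUMBLE_LEN      # ~6.6e15 combinations (~2^53)

print(f"passphrase guess space: ~2^{math.log2(passphrase_space):.0f}")
print(f"character-jumble guess space: ~2^{math.log2(jumble_space):.0f}")

# At an online rate of, say, 10 guesses per second against the server,
# exhausting either space takes millions of years. The binding constraint
# is the attempt rate the server permits, not the character classes.
years = passphrase_space / 10 / (60 * 60 * 24 * 365)
print(f"time to exhaust passphrase space at 10 guesses/sec: ~{years:.1e} years")
```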
Literally the only problem with simple passwords like this is that they can be hacked; that is, a computer program can derive them from a fixed pattern. If your password is a combination of dictionary words, then a “dictionary attack” can derive it from all the possible combinations of all the words in the dictionary in a relatively short amount of time, because that’s actually not all that much data. The frequency isn’t low enough. But the thing about this is that it’s portrayed as an end-user problem when it isn’t one at all; it’s a server problem. A user can’t actually guess how their password is going to be hacked; the attacker might use a dictionary attack, or they might pick a different pattern that happens to match the one you used in an attempt to evade a dictionary attack. The real way to prevent this is for the server to disallow it – the server shouldn’t allow a frequency of attempts high enough to convert a low-frequency signal into a high-frequency one. Preventing this isn’t the user’s job, because they can’t actually do anything about it. The server can.
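What “the server disallowing it” might look like, as a minimal sketch (hypothetical policy numbers; `check` stands in for whatever actually verifies the signal, e.g. something like `grant_access` above): every failed attempt on an account doubles the wait before the next one, so the guess rate a dictionary attack needs simply isn’t available.

```python
import time
from collections import defaultdict

# Minimal throttling sketch (hypothetical policy numbers). Each failed
# attempt on an account doubles the wait before the next attempt is even
# considered, so high-frequency guessing against that account is impossible.
_failures = defaultdict(int)        # account -> consecutive failed attempts
_next_allowed = defaultdict(float)  # account -> earliest permitted retry (epoch seconds)

BASE_DELAY = 1.0  # seconds of lockout after the first failure

def attempt_login(account: str, password: str, check) -> bool:
    """check(account, password) -> bool is whatever verifies the signal
    (e.g. grant_access from the earlier sketch)."""
    now = time.time()
    if now < _next_allowed[account]:
        return False  # still locked out; don't even evaluate the guess
    if check(account, password):
        _failures[account] = 0
        _next_allowed[account] = 0.0
        return True
    _failures[account] += 1
    # Exponential backoff: 1s, 2s, 4s, 8s, ... between failed attempts.
    _next_allowed[account] = now + BASE_DELAY * 2 ** (_failures[account] - 1)
    return False
```

Real deployments layer on more than this (per-IP limits, alerting, and so on), but the point stands: the frequency cap has to live on the server, because that’s the only place it can live.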
And of course no one is ever actually going to hack your password. You don’t matter enough for anyone to care. What actually happens, as one hears about constantly in the news, is that a company’s server gets breached and all the passwords on it are compromised from the back end. When this happens, the strength and secrecy of your password are completely irrelevant, because the attacker already has your credentials, no matter what form they’re in. Again, this is not a problem with passwords. The passwords are doing their job; it’s the server that’s failing.
So the thing about biometrics is that they’re worse than passwords, because they don’t implement consent. At best, they implement identity, but that’s not what you want. If the police arrest you and want to snoop through your phone without a warrant, they have your identity, so if your phone is secured through biometrics, they have access to it without your consent. But they don’t have your password unless you give it to them. Similarly, the ability of passwords to be changed when needed is a strength. It’s part of the implementation of consent: if the situation changes such that the previously agreed-upon term no longer communicates the thing it’s supposed to communicate, you have to be able to change it. In BDSM terms, if your safe word is “lizard,” but then you want to do a scene about, y’know, lizard people or something, then the word isn’t going to convey the right thing anymore, so you have to come up with a new one. This is the same thing that happens in a data breach: because someone else knows your password, it no longer communicates consent – but precisely because you can change it, it can continue to perform its proper function. Whereas if someone steals your biometric data, you’re fucked forever. So when Apple touts the success rate or whatever of their face-scanning thing, they’ve completely missed the point. It doesn’t matter how accurate it is, because it implements the wrong thing.2
So, given all of this, why would a major company expend the amount of resources required to implement biometrics? We’ve already seen the answers. First, passwords look bad from the end-user perspective, because they feel insecure – unless you’re forced to use a random jumble of characters, in which case they feel obnoxious. And in either case you have to manage multiple passwords, which can be genuinely difficult. Biometrics, by contrast, feel secure, even though they’re not, and they’re very easy to use. They also feel “future-y,” allowing companies to sell them like some big new fancy innovation, when they’re actually a step backwards. In short, they’re pretty on the outside. At the risk of putting too fine a point on it, Apple is invested in the conceptualization of technology as magic.
More than that, though, biometrics demonstrate a focus on the appearance of security at the expense of its actuality – that is, they’re security theater. What all those data breaches in the news indicate is that, for all the ridiculous security paraphernalia that gets foisted on us, companies don’t actually bother much with security on their end. They don’t want to spend the money, so they make you do it, and because you can’t do it, because you don’t actually have the necessary means, the result is actual insecurity. Thus, the appearance of security, mediated by opaque technology that most people don’t understand, provides these companies with cover for their own incompetence. The only function being performed here by “technology” is distraction.3
What this means, then, is that technology isn’t technology. That is, the things that we talk about when we talk about “tech” aren’t actually about tech. Indeed, “tech companies” aren’t even tech companies4. Google and Facebook make their money through advertising; they’re ad companies. The fact that they use new types of software to sell their ads is only relevant to their business model in that it provides a shimmery sci-fi veneer to disguise their true, hideous forms. Amazon is not actually a website; it’s a big-box retailer in exactly the same vein as Target and Wal-Mart. A lot of people thought it was “ironic” when Amazon started opening physical stores, but that’s only the case if you assume that Amazon has some kind of ideological commitment to online ordering. What Amazon has an ideological commitment to is capturing market share, and they’re going to keep doing that using whatever technological means are available to them. Driving physical retailers out of business and then filling the vacuum with their own physical stores is precisely in line with how Amazon operates – it’s what you should expect them to do, if you actually understand what type of thing they are. Uber is only an “app” in the sense that the app mediates their actual business model, which is increasing the profits of taxi services by evading regulations and passing costs on to the drivers. (Uber’s business model doesn’t account for the significant maintenance costs incurred by constantly operating a vehicle, because those costs are borne by the drivers, who aren’t Uber’s employees. But Uber still takes the same cut of the profits regardless.) Apple is the closest, since they actually develop new technology, but even then they mostly make money by selling hardware (after having it manufactured as cheaply as possible), meaning they’re really just in the old-fashioned business of commodity production.
So if you try to understand these companies in terms of “tech,” you’re going to get everything wrong. There isn’t a design reason why Apple makes the choices it does; there’s a business reason. Nobody actually wanted an iPhone without a headphone port, but Apple relies on their sleek, minimalist imagery to move products, so they had to make the phone slimmer, even if it meant removing useful functionality. And of course no one is ever going to be interested in a solid-glass phone that shatters into a million pieces when you sneeze at it, but Apple had to come up with something that looked impressive to appease the investors and the media drones, so that’s what we got.
But this isn’t even limited to just these “new” companies; it’s the general dynamic by which technology relates to economics. There’s been a recent countertrend of elites pointing out that, actually, modern society is pretty great from a historical perspective, but they’re missing the point that this is despite our system of social organization, not because of it. That is, barring extreme disasters along the lines of the bubonic plague or the thing that we’re currently running headlong into, it would be incomprehensibly bizarre for the general standard of living not to increase over time. As long as humans are engaged in any productive activity at all, things are going to continuously get better, because things are being produced. The fact that we’re not seeing this – that real wages have been stagnant for decades and people are more stressed and have less leisure time than ever – indicates that we are in the midst of precisely such a disaster. Our current economic system is a world-historical catastrophe on par with the Black Death.
Do I even need to explicitly point out that this is why global warming is happening? It isn’t because of technology, it’s because rich fucks have decided they’d rather destroy the world for a short-term profit than be slightly less rich. It’s somewhat unfortunate that the physics are such that everyone is going to die, but the decision itself was made a long time ago. If it wasn’t greenhouse gasses, it would be something else. There’s always nuclear war or mass starvation or what have you. The fact of the matter is that we’ve chosen a social configuration that doesn’t support human life. That’s the whole story.
To address this technically, it’s certainly true that the age of capitalism has seen a vast increase in worldwide standards of living, but it’s not capitalism that caused that. It’s actually the opposite: trade and industrialization created the conditions for capitalism to become possible in the first place. Capitalism is not the cause of industrialization or globalization, it’s the response to these things. It is the determination of how the results of these things will be applied, and what actually happens is that it ensures that the gains will always be pointed in the wrong direction. The fact of globalization has nothing to do with any of the problems attributed to it; the problems reside entirely in how globalization is happening: who’s managing it, what their priorities are, and where the results are going. Like, it’s really amazing to consider how much potential productivity is being wasted right now. All the people employed in advertising, or in building yachts, or in think tanks, or on corporate advisory boards, or in failed attempts at “regime change,” or designing new gadgets that are less functional than the old ones, or all those dumbass “internet-connected” kitchen appliances, all of that, all of the time and energy and resources being spent on all of that stuff and far more, is all pure waste. Imagine the kind of society we could have if all of that potential were actually being put to productive use.
And it’s deeply hilarious how committed everyone is to misunderstanding this as thoroughly as possible. Like, the actual word we have for someone who negatively fetishizes technology is “Luddite,” but the Luddites were precisely people who cared about the practical results of technology – they cared about the fact that their livelihoods were being destroyed. They attacked machines because those machines were killing them. Every clueless takemonger inveighing about how globalization is leaving people behind or social media is dividing us or smartphones are alienating us is completely failing to grasp the basic point that the Luddites instinctively understood. The results of technological developments are not properties of the technology itself; they arise from political choices. The technology is simply the means by which those choices are implemented. In just the same way, attacking technology is not merely a symptom of incomprehension or phobia or lifestyle. It is also a political choice.
An engine doesn’t tell you where to go or how to travel. It just generates kinetic energy. It can take you past the horizon, but if you instead point it into a ditch, it will be equally happy to drive you straight into the dirt. There’s nothing counterintuitive about that; the function of technology is no great mystery. It just obeys the rules – not only the physical ones, but the social ones as well. All of the problems that people attribute to technology (excepting things like software glitches that are actual implementation failures) are actually problems with the rules. The great lesson of the age of technology is that technology doesn’t matter; as long as society continues on in its present configuration, everything will continue to get worse.
1. The way you can tell that complexity requirements are bullshit is that they’re all different. There are plenty of nerds available to run the numbers on this, so if there really were a particular combination of requirements that resulted in “high security,” it would have been figured out by now and the same solution would have been implemented everywhere. But because the actual solution is contextual – that is, it’s the thing that no one else is guessing, which also means it’s unstable – you can’t implement it as a fixed list of requirements. The reason it feels like each website’s requirements are just some random ideas that some intern thought sounded “secure” enough is because that’s actually what they are.
2. I mean, face-scanning can’t actually work the way they say it does, because of identical twins. If the scan can distinguish between identical twins, that means it’s using contextual cues such as hair and expression, which means there are cases when these things would cause it to fail for an individual user, and if it can’t distinguish between identical twins (or doppelgangers), then that’s also a failure. I’d also be curious to know how much work the engineers put into controlling for makeup, because that’s a pretty common and major issue, and I’m guessing the answer is not much.
3. The real situation is significantly more dire than this. It isn’t just that Equifax, for example, sucks at security, it’s that Equifax should not exist in the first place. Taking the John Oliver Strategy and making fun of Equifax for being a bunch of dummies completely misses what’s really going on here.
4. I’m not giving up my “tech assholes” tag though, it’s too perfect.