My last post requires an addendum. I mentioned that expecting social media companies to filter out bad political content is a fool’s errand, because all you’d be doing is shackling yourself to someone else’s biases. So there’s that, but there’s also a deeper, category-level confusion which has been occurring with increasing prevalence and which pretty much nobody is picking up on.
Some time ago, Google changed its search interface to add little boxes and things for “recommended” results. This is supposed to make it easier to find answers to direct questions without having to go through a whole page of links. But people have been noticing that this approach leads to a lot of untoward results; for example, queries regarding the Holocaust used to produce Holocaust denial pages in the boxy results. It’s easy to understand why this happens: most people accept the occurrence of the Holocaust as a historical fact, so the only people who actually input queries along the lines of “did the Holocaust really happen?” are denialists (or at least budding denialists), who then click through to denialist sites. So the Google algorithm is just performing its usual function of showing people the most popular results correlated with their input.
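To make the mechanism concrete, here’s a toy sketch of the dynamic (invented pages, scores, and weights; this is emphatically not Google’s actual algorithm, just the popularity-plus-relevance logic described above):

```python
# Toy sketch: rank pages by how well their text matches the query plus how
# often past users of that query clicked through. A page that phrases the
# question the way denialists phrase it wins on both counts.

def score(query, page_text, click_throughs):
    query_terms = set(query.lower().split())
    page_terms = set(page_text.lower().split())
    overlap = len(query_terms & page_terms) / len(query_terms)
    return overlap + 0.1 * click_throughs  # 0.1 is an arbitrary toy weight

# (text, click-throughs from this query) -- all invented for illustration
pages = {
    "encyclopedia": ("the holocaust was the genocide of european jews", 5),
    "denial_site": ("did the holocaust really happen no it did not", 40),
}

query = "did the holocaust really happen"
ranked = sorted(pages, key=lambda p: score(query, *pages[p]), reverse=True)
# The denial site comes out on top: its text echoes the query verbatim, and
# the only people asking the question are the ones who click through to it.
```

Nothing here is malicious; the scorer is just faithfully reporting which pages correlate with which queries.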
There isn’t actually a way around this. As long as there are Holocaust denial sites on the internet, there will exist some query that directs you to them. I mean, if there weren’t, Google wouldn’t be much of a search engine, right? But that doesn’t mean that there’s no problem here. Rather, the problem is specifically with the boxes that pick out some of the results and stamp them with the imprimatur of officiality. As long as that’s happening, Google actually is recommending those results. So the only sensible option here is to get rid of the boxy results. Google’s job is to show you what’s on the internet and nothing else.
Importantly, there is a technical reason why this is the correct solution. It is impossible for Google’s boxy results feature to work “correctly,” because it is internally contradictory. It is intended to be both a dynamically-generated response based on the most relevant data currently present on the internet and an Official Correct Answer. You can’t do both of those things at once. You have to pick one. Furthermore, picking the second one is also impossible, because the number of potential questions is literally infinite. What the boxy results actually are is an illusion. They look like a recommendation when they are actually no different than anything else that happens to come up in the list of results. The reason that boxy results specifically reflect badly on Google is that they are lies. It is correct to say in this case that Google is lying to you, even though the results are completely unintentional, because Google has constructed its interface to look like something that it is not, and is thereby conveying false information.1 So the only logically viable option is for Google to quit fucking around and just be a search engine, which, you might recall, was the whole thing it was good at in the first place.2
People seem to be having a certain amount of difficulty understanding this. Naturally, there’s always a performative moral crisis when something like this happens, but in this case the complaints are almost universally targeted at the same, specific, exactly wrong place. Consider this article, which correctly points out that the problem is specifically with the boxy results:
For most of its history, Google did not answer questions. Users typed in what they were looking for and got a list of web pages that might contain the desired information. Google has long recognized that many people don’t want a research tool, however; they want a quick answer. Over the past five years, the company has been moving toward providing direct answers to questions along with its traditional list of relevant web pages.
Type in the name of a person and you’ll get a box with a photo and biographical data. Type in a word and you’ll get a box with a definition. Type in “When is Mother’s Day” and you’ll get a date. Type in “How to bake a cake?” and you’ll get a basic cake recipe. These are Google’s attempts to provide what Danny Sullivan, a journalist and founder of the blog SearchEngineLand, calls “the one true answer.” These answers are visually set apart, encased in a virtual box with a slight drop shadow. According to MozCast, a tool that tracks the Google algorithm, almost 20 percent of queries — based on MozCast’s sample size of 10,000 — will attempt to return one true answer.
Unfortunately, not all of these answers are actually true.
and then immediately descends into psychotic gibberish:
Google needs to invest in human experts who can judge what type of queries should produce a direct answer like this, Shulman said. “Or, at least in this case, not send an algorithm in search of an answer that isn’t simply ‘There is no evidence any American president has been a member of the Klan.’ It’d be great if instead of highlighting a bogus answer, it provided links to accessible, peer-reviewed scholarship.”
. . .
The fastest way for Google to improve its featured snippets is to release them into the real world and have users interact with them. Every featured snippet comes with two links in the footnote: “About this result,” and “Feedback.” The former explains what featured snippets are, with guidelines for webmasters on how to opt out of them or optimize for them. The latter simply asks, “What do you think?” with the option to respond with “This is helpful,” “Something is missing,” “Something is wrong,” or “This isn’t useful,” and a section for comments.
This is all nonsense. The problem is that Google gives some of its results a false sense of authority, so the solution is for it to give a different set of its results even more of a false sense of authority, while also soliciting comments from everyone and putting in 3,000 different links allowing people to leave 30,000 different layers of feedback, because then the results won’t be confusing anymore.
Again, there is a literally infinite number of possible queries and results, which is the whole reason you write a search engine in the first place. Putting in custom results for specific queries both breaks the functionality of what Google is supposed to be doing and is a futile game of whack-a-mole, a drop of water in a sea of bullshit. Furthermore, when you go down this road you’re trusting Google to provide the “right” results, which is a task at which it has absolutely no institutional competence. Is there seriously anyone who still hasn’t noticed that nerds are generally extremely bad at anything outside of their direct area of expertise? (That’s kind of the definition of “nerd,” actually.) To precisely the extent that you have a curated system, you do not have a search engine. You have some nerd’s journal.
Again, again, Google can either be a search engine or a source of direct information. It can’t be both things, and the practical effect of “solutions” like this is to transform Google into an extremely shitty direct information source. Think about this for literally five seconds: if the problem is that the web has a bunch of shitty content on it, then how is soliciting more information from the same place going to change anything? Are we seriously assuming that Holocaust deniers are going to be above gaming these sorts of things? The idea that individual people can change Google results by yelling at the company loudly enough is not any kind of solution; it’s properly horrifying. It means that search results are constantly subject to the random whims and biases of the people who are the best at yelling about things on the internet. This isn’t order; it’s chaos.
You may recall that the internet already has a source for crowdsourced direct information. It’s called Wikipedia. And, indeed, the problem that a lot of people are having here is that they are expecting Google to be the same thing as Wikipedia. In other words, they are incapable of understanding that a search engine and a source of information are different types of things, and thus, when one of them doesn’t behave like the other, they see it as a “problem” that needs to be “fixed”:
This is a really remarkable comment, especially coming from a guy with a fucking book emoji in his name. There’s not even an argument here, there’s just a completely unexamined assumption that Google and Wikipedia are directly comparable on some kind of “information quality” level or something and that one of them is “better” than the other. This is as far from intellectualism as it’s possible to get. (Don’t even get me started on the pathetic haughtiness of “do better,” as though it were any kind of meaningful statement (as though it imparted any semantic content at all), let alone a solution.)
Since I know I have to say this explicitly, I am absolutely not arguing that there is any such thing as a “neutral” platform or algorithm or that Google is not completely fucked up and deserving of excoriation. This isn’t about “neutrality” and “bias,” this is about what type of thing a thing is. What I am arguing is that things need to be criticized for what they are actually doing. It is correct for people to give Wikipedia shit about, for example, how it addresses trans people, because what’s on Wikipedia was put there by a specific person and approved by other specific people. Wikipedia’s “neutral point of view” thing is largely bullshit, because you can’t actually do that, but it is correct for it to attempt to stick to the facts and avoid editorializing. There’s no point in complaining that Wikipedia doesn’t promote your own personal political philosophy hard enough. But when it comes to something like which gender you use to refer to a trans person, there isn’t a “neutral option,” and the issue can’t be avoided. You have to make a choice, and that choice merits criticism.
So, as mentioned, the part of the Google results that is actually wrong is the boxy results, and they’re wrong in general, not just when they display “wrong” answers. Aspiring detectives may have noticed that I lied earlier. The Holocaust denial thing didn’t actually come up in one of the boxy results, it was just at the top of the normal list. So the people complaining there actually were full of shit. More specifically, they were full of shit insofar as they were directing their complaints at Google. The existence of the site is the problem, not the fact that Google’s algorithm noticed that it was on the internet and displayed it to the people to whom it calculated it was probably relevant.
This does not mean the algorithm is “neutral.”3 There’s no such thing. There are a lot of different methods you can use to find and display search results. They can be based on the site’s overall popularity, or on how many people clicked through from a given source, or on how well the content appears to match the search parameters regardless of traffic patterns. You can even switch this around; you could, for example, specifically promote less popular sites when they match certain search criteria. This would distribute traffic more equally and advance less popular opinions, though it might also increase the bullshit ratio. Hell, you could even take all the valid results and just display them randomly – this would actually have the positive effect of promoting previously unknown sources (hi), even though it would certainly increase the bullshit ratio, perhaps by quite a lot (depending on the extent to which “authoritative” sources are actually bullshit in the first place).
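As a toy sketch of how each of these methods is a distinct, deliberate choice rather than a neutral default (all site names, traffic figures, and match scores invented):

```python
import random

# Each ranking strategy below is a different, equally non-neutral answer to
# "what should come first?" -- the point being that there is no default.

def by_popularity(results):
    # Overall traffic wins, regardless of how well the content matches.
    return sorted(results, key=lambda r: r["traffic"], reverse=True)

def by_match(results):
    # Content match wins, regardless of traffic patterns.
    return sorted(results, key=lambda r: r["match"], reverse=True)

def boost_the_obscure(results):
    # Among results that match well enough, promote the LEAST popular --
    # distributes traffic more equally, may raise the bullshit ratio.
    good = [r for r in results if r["match"] > 0.5]
    return sorted(good, key=lambda r: r["traffic"])

def coin_flip(results):
    # Display all valid results randomly: promotes unknown sources,
    # definitely raises the bullshit ratio.
    out = list(results)
    random.shuffle(out)
    return out

# Invented example data.
results = [
    {"site": "establishment-paper.com", "traffic": 900, "match": 0.7},
    {"site": "fringe-blog.net", "traffic": 3, "match": 0.9},
    {"site": "spam-farm.biz", "traffic": 50, "match": 0.2},
]
```

Feed the same three results through each function and you get a different front page every time; picking among these functions is the actual editorial decision.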
These are the real choices Google has to make even if it stops lying, and any choice made here is going to have political results. Pushing all the results towards the New York Times center is just as much of a political action as promoting fringe sites. So criticism of the behavior of Google’s algorithm is in fact within bounds here, as long as that’s actually what you’re criticizing. Pointing out that one bad result appears in one place is not a real argument, because nobody actually put it there. In order to make that argument, you have to argue against the general behavior that results in that particular output, and when you do that, you are implicitly arguing against all of the behavior that results from the parameters you’re selecting for criticism. You can coherently make the argument that Google should be promoting more “authoritative” results, but only if you’re willing to accept that non-authoritative results that you happen to agree with will also get downgraded. And the reason I’m claiming that people are full of shit here is that I don’t think anyone actually believes this. What people actually want is for the bad results to just not be there, because their existence is actively immoral. Which is an entirely praiseworthy opinion, but you can’t just wish them away. You have to think about how you actually want these things to be determined, because the consequences are going to be far greater than the one or two bad results you happen to encounter. I mean, if you really do want only “officially approved” sources displayed when you perform a general internet search, I’m within my rights to conclude that you’re an authoritarian.
There’s a reason this is happening, though. Google is not trying to act as a search engine and failing; it is choosing to promote itself as a source of information and is doing so dishonestly. The reason it is making this choice is that it is what people want. People don’t actually want to know what’s out there on the internet. They want a magic box to give them the right answer. That’s the only possible explanation for the proliferation of those stupid talking internet cylinders. My ability to comment intelligently on this aspect of the problem is somewhat limited, as I cannot for the life of me imagine why anyone would a) pay to b) put a robot in the middle of their house that c) talks at them and d) constantly monitors them in order to e) sell them shit, all for the sake of f) an inferior version of the functionality that you already have on your desktop and know how to use, because you ordered the thing off of Amazon in the first place. That is literally my idea of hell. Anyway, the reason people buy these things, one supposes, is that they want to be able to yell indistinctly at a robot and have the robot give them the magical Correct Answer. In other words, they want to be lied to. In order to respond to this desire, Google has to be dishonest, because it’s not possible to honestly create an incoherent system.
Pressuring Google to censor “bad” search results one at a time doesn’t solve a real problem.4 I don’t actually object to Holocaust denial sites being delisted (good riddance, obviously), but I do object to intentional delusion. I object to people who think that removing unpleasant things from their field of vision is the same as improving material conditions for living humans. Indeed, what we’re really talking about here is removing unpleasant truths, because it is a real fact that these sites really exist, and that their existence accurately reflects the fact that large numbers of people sincerely believe these things. This is real news. All obscuring it does is make liberals feel better because now they don’t have to see the bad things. You may recall that this dynamic has resulted in some problems recently.
The true fact of the matter is that the world is a disgusting place. This should neither be accepted nor ignored. But not ignoring it also means not fooling yourself about where things are coming from. It means choosing high-value targets and not easy ones. It means understanding how the things you are yelling at work so that you can yell at them accurately. It means taking actions that actually move the world in a better direction instead of the ones that merely move you into a more comfortable chair. Above all, it means keeping your eyes open to the things that are the most disgusting to look at. The only option for interacting with reality is to learn how to navigate the sea of bullshit.
It is for this reason that category errors matter. If you can’t tell the difference between a racist website written by a person and the racist output of an algorithm, you are not actually perceiving reality. Even though those things are both wrong – even though algorithms can be just as blameworthy as individual people – they’re wrong for different reasons, and they require different responses. There’s a reason we have different names for different things. Different things are different. A search engine is not the same thing as a news site. Treating different things as though they were the same thing is called stupidity. It makes you wrong about things.
We also have a name for the desire to retreat from a complicated world into a simplistic shell of officially-verified Correct Answers. It’s called cowardice.
1. So, strictly speaking, this is a UX problem and not an algorithm problem. The extent to which a program’s interface determines its functionality both apart from and synergistically with its back-end code is kind of a whole other thing, though. ↩
2. In case you’re wondering, AI, in addition to not being a solution, is not even a unique issue here. An actually intelligent AI would actually be intelligent, i.e. it would be a person. A practical AI that is not intelligent is just a fancy executable. This is actually another category error: the kind of AIs we have right now are just really complicated single-function computer programs; the sci-fi type of AI is an actual agent with human-like general reasoning capabilities (or perhaps not-so-human-like, but at least functionally similar). No matter how impressive the former is, it’s not the same type of thing as the latter. People are constantly getting this wrong and freaking out over really simple programs displaying barely surprising behavior; frankly, I don’t understand why people are so eager to leap to the completely unsupported conclusion that robots are about to take over the world. Anyway, the point is that we ought to be using two different terms for these things, because they are in fact different things. ↩
3. You might want to note that a search engine is actually an object – it’s a fixed block of executable code. Objects aren’t neutral, but that doesn’t make them the same type of thing as subjects.5 Objects do not (non-metaphorically) have things like “desires” or “goals.” They have inputs that they accept, internal calculations that they perform, and outputs that they generate. (This applies just as well to ordinary physical objects. When you throw a rock, the input is force, the internal calculations involve weight and wind resistance and ductility and so forth, and then the output is force again.) ↩
4. Also, this isn’t even the half of it. Google is up to way shadier shit than this; specifically, Google’s advertising monopoly – the fact that it both sells ads and controls and extracts money from ad blockers, meaning it is effectively selling ads to itself – is a book-length problem with serious implications for how the internet is going to work. This is exactly why we have (or are supposed to have) antitrust regulation. Google shouldn’t be allowed to be both things. The extent to which this is a bigger problem than racist websites showing up sometimes cannot be overstated. ↩
5. The big plot twist is that, even though objects and subjects are distinctly different types of things, living in a material world means being a material girl. Er, it means that all people (subjects) are also objects. They’re physical bodies existing in physical space. Importantly, though, a person is not an object in addition to being a subject, but is rather one thing that is both an object and a subject at the same time, in the same mode of being. Reconciling this apparent paradox is one of the Great Problems. ↩