Why Conspiracy Theories Work So Well on Facebook

Credit: Hiroshi Watanabe/DigitalVision/Getty

Panic attacks were common. Employees joked about suicide, had sex in the stairwells, and smoked weed on break. A disturbing investigative report by The Verge last week revealed that some of Facebook’s contract moderators, who are tasked with keeping content like beheadings, bestiality, and racism out of your news feed, have turned to extreme coping mechanisms to get through the workday. Some contractors have been profoundly affected by the content they’re exposed to, which may have implications for the rest of us who have grown accustomed to scrolling past sketchy links in our news feeds.

For some workers, repeatedly viewing conspiracy videos and memes became a gateway to embracing conspiracy theories themselves. Contractors told The Verge’s Casey Newton that some of their peers would “embrace fringe views” after days spent combing through conspiracy videos and memes. “One auditor walks the floor promoting the idea that the Earth is flat,” Newton wrote.

Because most misinformation isn’t banned by the platform’s rules, Facebook generally won’t remove it outright, arguing instead that it should be countered with factual reports. Such efforts have not been entirely successful: a recent article in The Guardian found that “search results for groups and pages with information about vaccines were dominated by anti-vaccination propaganda” on the social network.

If the average social media user is akin to a casual smoker, moderators are consuming the equivalent of two packs a day.

Of course, this isn’t solely Facebook’s problem. A QAnon conspiracy book, abetted by Amazon’s algorithms, is rapidly climbing the site’s sales charts. YouTube announced Friday that it will withhold ads from all videos featuring Momo, a ghoulish figure that some parents needlessly worry will instruct their children to self-harm or kill themselves in a viral “challenge.” That hasn’t stopped all kinds of Momo content from spreading, of course.

What causes an otherwise logical person to accept any of this? If Facebook’s moderators, the ones supposedly trained to view this troubling material, are so viscerally affected, what hope is there for the rest of us? A few psychological factors may be at play.

People tend to develop a preference for ideas they are familiar with, a social science phenomenon known as the mere-exposure effect, or the familiarity principle. Ideas, in this framing, work a bit like infectious viruses. “The more often you see it, the more familiar something is, and the more familiar something is, the more believable it is,” says Jeff Hancock, a communication professor and founding director of the Stanford Social Media Lab.

Conspiracy content is engineered to be persuasive. People accept these theories because they help make sense of a world that feels random; even if they seem far-fetched to the rest of us, they can offer some sense of comfort or security. And seeing those theories repeatedly pop up in your Facebook news feed “starts to undermine the sense that they are fringe,” says James Grimmelmann, a professor at Cornell Law School who studies internet law and social networks. While a Facebook user may be outraged the first time an objectionable video appears in their feed, they are unlikely to muster an equally strong negative reaction when that experience is repeated again and again. With enough viewings, they may find the messaging palatable or, at the very least, less jarring. Open the door to your mind a crack, and conspiracy can slink in.

“Repeated exposure could also make the moderators think that the conspiracy theory is more prevalent than it is actually in the population and therefore makes them slowly adhere to it,” says Dominique Brossard, a professor in the Department of Life Sciences Communication at the University of Wisconsin-Madison. “This is why social norms are so powerful.”

Should those beliefs become instilled, people will work to reinforce them, simply because we tend to defend the views we already hold. “Once you start to believe, confirmation bias kicks in,” says Hancock, referring to the human tendency to seek out information and news sources that support preexisting beliefs. A smaller contributing factor may be the psychological phenomenon known as emotional contagion, in which feelings or emotions spread from person to person. You may not know the term, but you know the feeling: unconsciously mimicking the facial expression or mirroring the mood of someone you’re spending time with are both examples of emotional contagion.

“Conspiracies exonerate them for their own misfortune.”

It could be that conspiracy videos instill or spread fear in viewers, who, after mirroring that emotion, seek to attribute that unexplained fear to a tangible source. “‘I’m feeling weird and gross and scared. Why is that?’” asks Hopkins as a hypothetical. “‘That’s probably because the government is against us.’ You’re rationalizing why you’re upset,” even though what made you upset in the first place was the conspiracy content.

While Facebook moderators’ extreme psychological distress is likely linked to the sheer quantity of disturbing content they consume on the job (one moderator who spoke with The Verge sifted through up to 400 posts a day), Facebook may still serve conspiratorial or simply false videos, news, and memes to the rest of us. What does our collective exposure to conspiracy content on Facebook mean for public mental health at large? Are we all just clicks away from identifying as “flat-Earthers,” or worse?

Joseph Walther, director of the Center for Information Technology and Society at the University of California, Santa Barbara, cautioned against extrapolating from the extreme experience of Facebook moderators. If the average social media user is akin to a casual smoker, moderators are consuming the equivalent of two packs a day.

“The moderators you describe, they’re clearly under duress,” Walther tells OneZero. “Their level of exposure to these messages isn’t normal. Their experience doesn’t map onto the rest of us.” The ultimate difference between individuals who are unaffected by conspiracy content and those who become indoctrinated may come down to what’s happening to them offline. A major blow to an individual’s ego, such as being fired from a job or rejected by a romantic partner, can draw some people toward racism or conspiracy theories, which Walther says are often intertwined. When they’re hurting, humans like to find a scapegoat.

“If they’re already prone to racist ideology, and perceive some threat to the status or purity or privilege of their own race or sex, and then experience a particular vulnerability, conspiracies could be especially appealing,” he says. “Conspiracies exonerate them for their own misfortune.”

While the general public may be less at risk of conspiracy theory indoctrination than Facebook moderators are, that doesn’t mean a public health risk doesn’t exist. Petter Tornberg, a sociologist at the University of Amsterdam in the Netherlands, compares the spread of misinformation in a social network to a wildfire. “An echo chamber has the same effect as a dry pile of tinder in the forest,” Tornberg tells OneZero. “It provides the fuel for an initial small flame, which can spread to larger sticks, branches, trees, to finally engulf the forest.”

That social networks like Facebook tend to create an echo chamber (or what Eli Pariser famously called a “filter bubble”) is no accident. It’s the inevitable result of the attention economy. “Since online platforms are advertisement-based, they are operating in an environment in which they are competing for attention,” Tornberg says. Outrage and anger are efficient ways of maximizing viewers’ attention; people interact with and share sensationalist coverage to such an extent that a cottage industry of spammers has cropped up on social media, where “junk news” can drive ad impressions. There’s little incentive for platforms like Facebook to discourage users from gravitating toward small interest groups where outrageous material is shared.
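Tornberg’s attention-economy point can be made concrete with a deliberately crude sketch. The code below is purely illustrative: a toy feed ranker with invented posts, engagement numbers, and weights, not Facebook’s actual ranking system. It shows only that ordering a feed by predicted engagement alone will reliably float the most outrage-baiting item to the top.

```python
# Toy illustration (not any platform's real ranking code): when a feed is
# ordered purely by predicted engagement, the post engineered to provoke
# outrage wins, because outrage reliably draws clicks, shares, and comments.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float    # hypothetical engagement signals
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    # Invented weighting: shares and comments count more than clicks
    # because they generate further impressions.
    return post.predicted_clicks + 2 * post.predicted_shares + 3 * post.predicted_comments

feed = [
    Post("Local council approves new bike lane", 120, 5, 10),
    Post("THEY are hiding the truth about vaccines", 300, 90, 150),
    Post("Quarterly town budget summary", 80, 2, 4),
]

# Ranking purely by predicted engagement puts the outrage-bait post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post)), post.title)
```

Swap in any weighting you like; as long as outrage provokes the most interaction, it tops the ranking, which is the dynamic Tornberg describes.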

“This is just capitalism,” he says. “There are unpleasant jobs, like cleaning sewage. We want the kinds of things that those jobs support.”

“Social media do not necessarily have to be designed in a way that pushes us toward tribalization and self-segregation,” says Tornberg. “They could also be designed to help form the foundation for a common world.” As an example, he points to Reddit’s “Change My View” community, where users post opinions they accept may be flawed in hopes of having a discussion and understanding other perspectives. “But that would require different incentives for the companies behind them.” And right now, social networks like Facebook, which made $22 billion in 2018, an increase of nearly 40 percent over the previous year, are doing a very effective job of capitalizing on the current economic incentives.

Automating content moderation would spare humans from the potentially damaging effects of viewing high quantities of traumatic videos and conspiracy theory posts. But while artificial intelligence may seem like an obvious solution for reducing moderators’ and the public’s exposure to conspiracy theories (albeit at the ultimate cost of many of those moderators’ jobs), Hancock says the technology isn’t there yet.

“Humans are developing and producing this content,” he says. “For a good chunk of time, humans will be the only ones who can make these judgments.” This is in part because flagged posts on Facebook are, by definition, atypical, and current A.I. does not do well with the atypical. “It’s hard to anticipate a lot of the kinds of content that they see,” Hancock says. “Part of it is because it’s fringe stuff.” Fringe content surprises us because it is unusual, and computers are trained to react to situations they have encountered before. Training a machine to react to yet-to-be-imagined conspiracies is practically impossible at this point; the toy sketch below illustrates the limitation.

For Grimmelmann, the existence of content moderation as a profession is a necessary evil for protecting the mental health of the country at large, at least as long as the country remains addicted to social media in its current form. “This is just capitalism,” he says. “There are unpleasant jobs, like cleaning sewage. We want the kinds of things that those jobs support.” Just because an unpleasant job is socially necessary, however, doesn’t mean that employers should do nothing to ameliorate working conditions. Grimmelmann believes the public should pressure platforms like Facebook to commit to keeping conspiracy theories out of users’ news feeds and to making sure their content moderators have the support they need to protect the rest of us, including a reduced workload, more emotional support, and better pay. (Per The Verge, Cognizant’s contractors make $28,800 a year.)

And it means that more people, not fewer, will need to take part in content moderation work. “Unfortunately this is going to have to be a lot bigger industry,” says Hancock. What we’re seeing now is just the beginning. Facebook alone says it employs 30,000 trust and safety workers, including contractors; half of them work in content moderation, and investment in content review will almost certainly continue to swell. “We’ve more than doubled this team each of the last two years, adding thousands of people to these efforts,” Justin Osofsky, Facebook’s vice president of Global Operations, wrote in a post published to Facebook on Feb. 25.
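Hancock’s point about atypical content is, in machine learning terms, an out-of-distribution problem. The snippet below is a minimal, hypothetical sketch, with invented example posts and labels and nothing resembling Facebook’s actual systems; it uses a small scikit-learn text classifier to show that a model trained only on familiar categories has no real basis for judging a post unlike anything it has seen.

```python
# Toy, purely illustrative sketch (invented examples, not a real moderation
# system): a classifier trained on familiar content categories can only
# score content that resembles what it has already seen.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "graphic violence in an uploaded video",
    "spam link promising free gift cards",
    "friendly birthday wishes for grandma",
    "spam giveaway click here to win a prize",
    "violent street fight recorded outside a bar",
    "photos from our family vacation",
]
train_labels = ["violence", "spam", "benign", "spam", "violence", "benign"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(features, train_labels)

# A brand-new fringe conspiracy post shares no vocabulary with the training
# set, so the predicted probabilities collapse to the class priors.
novel_post = ["the earth is flat and the moon landing was staged"]
probabilities = model.predict_proba(vectorizer.transform(novel_post))[0]
for label, probability in zip(model.classes_, probabilities):
    print(f"{label}: {probability:.2f}")  # about 0.33 each: a shrug, not a judgment
```

Because the novel post overlaps with nothing the model was trained on, it effectively shrugs, splitting its guess evenly across the known categories, which is one reason novel fringe content keeps landing on human moderators’ desks.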

So, just as people created these problems to begin with—by exploiting a platform to spread conspiracy theories and other disturbing material—people will have to fix them. No automated process could anticipate every permutation of junk content we’re capable of coming up with. Or, as Hancock puts it: “I don’t think humans will ever be out.”
