Inside the Team at Facebook That Dealt with the Christchurch Shooting
At 9:30 P.M. on March 14th, Jay Bagwell, a thirty-eight-year-old Facebook employee with parted hair and perpetual stubble, was sitting in his living room, in Austin, Texas. His kids were in bed, and he had just turned on a cooking show on Netflix and pulled out his work laptop to send some e-mails. On his Facebook feed, he learned that, roughly two hours before, a man had entered a mosque in Christchurch, New Zealand, and opened fire. This turned out to be the second of two shootings; in all, the gunman killed fifty people and injured another fifty before he was arrested. Unlike most people, Bagwell couldn’t dwell on his feelings of despair. “As soon as I saw the news about the attack that night, I knew immediately this was going to be something my team would be working on,” he told me.
Bagwell, whose name has been changed to protect his safety, is a lawyer by training. In 2015, he left a position working on intellectual-property operations at Facebook to run a new department known as the global-escalations team, which removes heinous images and videos from the platform. At Facebook, human content moderators, assisted by computers, spend their days sifting through posts that users have reported. These posts range from the mundane (teen-agers reporting pictures in which they think they look fat; neighbors reporting each other while squabbling over politics in a comments section) to the grotesque (a beheading by a Mexican drug cartel), the exploitative (revenge porn posted by a jilted lover), the illegal (communications about a drug deal using an invented language of numbers and emojis), and the exhaustingly hateful (threads praising 9/11 or calling for the extermination of people with autism or hereditary baldness, and a seemingly endless stream of racist vitriol). This work takes a toll on moderators, and Bagwell’s team is focussed on the most virulent content. “There’s a spiritual resiliency they need to have to do the work,” Bagwell told me. This may explain why Bagwell has the permanently tired look of a much older man.
The Internet doesn’t run on a tight nine-to-five schedule, so Facebook maintains branches of the escalations team around the world, which work eight-hour shifts that “follow the sun,” so that someone is always on call to manage a crisis. When the shooting happened, a dozen content moderators on the global-escalations team were working in Singapore, and Bagwell messaged them to get an update. The moderators have a three-step crisis-management protocol; in the first phase, “understand,” they spend as much as an hour gathering information before making any decisions. Bagwell learned that the shooter seemed to be trying to make the massacre go viral: he had posted links to a seventy-three-page manifesto, in which he espoused white-supremacist beliefs, and live-streamed one of the shootings on Facebook, in a video that lasted seventeen minutes and then remained on his profile. Bagwell forced himself to watch the video, and then to watch it again. “It’s not something I would ask others to do without having to watch it myself,” he said.
Facebook Live was launched in 2016, and we are still grappling with the possibilities of the medium. Since its early days, it has been used to share the antics of children with their grandparents, broadcast amateur cooking shows, and record academic panels, but it has also been exploited to showcase violence. In 2017, a group of teen-agers kidnapped and tortured a man with mental and developmental disabilities and streamed video of the event to their friends. Several people—sadly, many of them young—have streamed their suicides. Facebook is able to screen most photos and videos with an artificial-intelligence system, which makes sure that they haven’t been previously banned and automatically deletes those that have. But live video, by definition, has never been seen before, so it can’t be checked against a database of banned content; Facebook mostly relies on users to flag inappropriate streams for moderators. (The company has developed some technology that can detect certain themes or imagery in live video and block them immediately.) Viewers are not always civic-minded; four thousand people watched the video of the Christchurch shooting before it was taken down, but no one flagged it until twenty-nine minutes after the live stream began.
Bagwell stayed up until 4 A.M. monitoring developments. During the “understand” phase, Facebook determined that the video qualified as terrorist content under its Dangerous Individuals and Organizations policy, which bans “organizations or individuals that proclaim a violent mission or are engaged in violence.” Bagwell’s team removed the video, and, as soon as the gunman’s identity was confirmed, removed his account. But, after years of doing this work, Bagwell knew that the video would continue to spread in dark corners of Facebook, along with praise for the massacre. “We know, for example, that people will begin to create fake accounts in the killer’s name,” he told me. “We know people begin to role-play mass murders; we know we see merchandise that starts to capture this tragedy. Taking down the video is just one part.”
At 6 A.M. in Dublin, five hours after the shootings had occurred, a thirty-nine-year-old named Cormac Keenan rolled over to check his phone, saw the news, and rushed to the office. Keenan is the head of the local branch of Facebook’s market team, which, worldwide, includes a few hundred people who, collectively, speak more than eighty languages and translate the idiosyncrasies of specific cultural contexts for the company. A typical day might involve reviewing a post that uses slang from Thailand to determine whether it’s a harmless joke or a local form of hate speech. The market team often helps the escalations team, though working on the shooting was a bigger job than normal. “When I came into the office, it just took over my entire day,” Keenan said.
Understanding context is one of the most difficult aspects of content moderation. Sometimes, a post seems clearly destructive. In April, 2017, Steve William Stephens, a vocational specialist, shot and killed Robert Godwin, Sr., an elderly black man who was walking on the sidewalk near his home in Cleveland. Stephens said, bafflingly, that he had decided to kill someone because he was mad at his ex-girlfriend, and posted a video of the killing on Facebook, where it remained for two hours before the company removed it. People were horrified by how long it stayed up. “Traditional media companies have finely-wrought guidelines and policies to help them make these decisions,” Emily Dreyfuss wrote, in Wired. “But Facebook depends on us to do it. And now it might very well be time for the company to roll up its own sleeves and get to work.”
But disturbing videos may not always be damaging. In July, 2016, Philando Castile, a black school-nutrition supervisor, was shot by a police officer, who fired seven rounds into his car during a traffic stop in Minnesota. Castile’s girlfriend, Diamond Reynolds, live-streamed the aftermath, as Castile bled from his wounds and died after twenty minutes. The footage arrived amid a series of videos depicting police violence against black men but was striking because it was streamed live, which insulated it from claims that it had been edited by activists or the police department before it was released. Facebook initially removed the video, but then reinstated it with a content warning. To moderators looking at both, the videos might look similar—a grisly shooting of a black man in America—but the company eventually determined that the intentions behind the videos gave them distinct meaning: keeping up Reynolds’s video brought awareness to the systemic racism of the criminal-justice system, while taking down Stephens’s video silenced a murderer’s deranged homage to his ex-girlfriend.
Such scenarios, for better or worse, force tech companies to do the delicate work of determining whether a video of violence ultimately serves a harmful or noble purpose. In the case of the Christchurch shooting, Facebook decided that the danger of the video becoming a tool of extremist propaganda outweighed its informational value. The danger of violence spreading through imitation has long been documented. In 1978, the sociologist Mark Granovetter described how riots are contagious, growing and multiplying as people see each other joining in. A new study, conducted jointly by the Network Contagion Research Institute and the Anti-Defamation League’s Center on Extremism, finds that extremist violence works in much the same way. Far-right ideologies begin on smaller social-media sites (such as Gab, 4chan, and 8chan), spread to more mainstream outlets, and then lead to real-world violence. Depictions of the violence then get cycled back into social media, fuelling the spread of the ideologies. “Fringe social-media platforms are enabling terrorism in a way that would have been unimaginable even five or ten years ago,” Eileen Hershenov, a vice-president at the A.D.L., wrote recently. “The research demonstrates how online propaganda can feed acts of violent terror, and, conversely, how violent terror can feed and perpetuate online propaganda. In essence, these platforms serve as round-the-clock white supremacist rallies, amplifying and fulfilling their vitriolic fantasies.”
By the time Keenan got involved, Facebook had already moved on to the second phase of its protocol, “isolate,” and was trying to stop the spread of the content. “At that stage, it was six or seven hours in, so a lot of the initial responses had kicked off,” he told me. His and Bagwell’s teams tracked down and removed posts from around the world that praised the attack or urged further violence, and deleted copies of the video. This created an ethical tangle. While obvious bad actors were pushing the video on the site to spread extremist content or to thumb their noses at authority, many more posted it to condemn the attacks, to express sympathy for the victims, or because of the video’s newsworthiness. For consistency, and in deference to a request from the New Zealand government, the team deleted even these posts. The situation was a no-win for Facebook. Politicians were quick to condemn the company for the spread of extremism, and users who had posted the video in good faith felt unreasonably censored.
At the time of the shooting, Sherif Ahmed, a forty-two-year-old who works on Facebook’s Dangerous Organizations Team (and whose name has also been altered), was sitting down to a dinner meeting with his co-workers when everyone’s phone started buzzing. Ahmed’s team tries to stop terrorist organizations and hate groups from using the platform to spread propaganda, and it has been increasingly effective; last fall, the Justice Department announced that an alleged ISIS supporter was cautioning members to hijack or acquire the accounts of “legitimate users” to get around Facebook’s growing ability to detect terrorism. Ahmed helps craft the mechanisms that enforce these policies; he left the dinner, and, for much of the next twelve hours, worked on the Christchurch case. He told me that the video of the shooting was different from other extremist videos that he had seen because of the “point of view of the filming,” which forced the viewer to see the scene from the perspective of the killer, and because of “how incredibly brutal it was.”
To remove videos or photos, platforms use “hash” technology, which was originally developed to combat the spread of child pornography online. Hashing works like fingerprinting for online content: whenever authorities discover, say, a video depicting sex with a minor, they take a unique set of pixels from it and use that to create a numerical identification tag, or hash. The hash is then placed in a database, and, when a user uploads a new video, a matching system automatically (and almost instantly) screens it against the database and blocks it if it’s a match. Beyond child pornography, hashing is also used to prevent the unauthorized use of copyrighted material, and over the past two and a half years it has increasingly been used to respond to the viral spread of extremist content, such as ISIS-recruitment videos or white-nationalist propaganda, though advocates concerned with the threat of censorship complain that tech companies have been opaque about how posts get added to the database.
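To make the mechanics concrete, here is a minimal sketch, in Python, of how hash-based screening of an upload might work. It is an illustration under stated assumptions, not Facebook’s actual system: the fingerprint is a generic “average hash” of a single image, and the database is just an in-memory set. The names (average_hash, BANNED_HASHES, screen_upload) and the distance threshold are invented for the example.

```python
# A toy version of hash-based screening: fingerprint an image, then
# check new uploads against a database of known banned fingerprints.
from PIL import Image

def average_hash(path, size=8):
    """Shrink the image, gray-scale it, and record which pixels are
    brighter than the mean: a crude 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count the bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")

BANNED_HASHES = set()  # in practice, a shared, industry-wide database

def screen_upload(path, threshold=5):
    """Block the upload if its fingerprint sits within a few bits of
    any known banned image, tolerating re-encoding and small edits."""
    h = average_hash(path)
    return any(hamming(h, banned) <= threshold for banned in BANNED_HASHES)
```

Matching within a few bits, rather than exactly, is what lets such a system catch re-encoded copies almost instantly; the looser the threshold, the greater the risk of blocking innocent look-alikes.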
By the time the handling of the Christchurch video switched to teams in the United States, some twelve hours after the shooting, moderators discovered a problem that they hadn’t encountered before at such a scale. When they tried to build a hash database for the shooter’s video, users began purposefully or accidentally manipulating the video, creating slightly blurred or cropped versions whose hashes no longer matched and which could slip past Facebook’s filters. Ahmed decided to try a new kind of hash technology that took a fingerprint from a different signal in the video, its audio track, which was likely to remain the same across different versions. This technique, combined with others, worked: in the first twenty-four hours, one and a half million copies of the video were removed from the site, with 1.2 million of those blocked at the point of upload.
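The audio technique can be sketched in the same spirit. The version below is hypothetical: it assumes an uncompressed WAV soundtrack and hashes a crude per-frame pattern of loud frequency bands, far simpler than any production audio-matching system; audio_fingerprint and similarity are invented names.

```python
# A toy audio fingerprint: the soundtrack survives visual edits such
# as blurring or cropping, so its hash can still match altered videos.
import numpy as np
from scipy.io import wavfile

def audio_fingerprint(path, frame=4096, bands=16):
    """Hash the audio frame by frame into small bit patterns."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # mix stereo down to mono
    prints = []
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        # Record which coarse frequency bands are louder than the
        # frame's median band: a compact, edit-tolerant signature.
        energies = [band.sum() for band in np.array_split(spectrum, bands)]
        median = np.median(energies)
        prints.append(sum(1 << i for i, e in enumerate(energies) if e > median))
    return prints

def similarity(fp_a, fp_b):
    """Fraction of aligned frames whose band patterns match exactly."""
    matches = sum(a == b for a, b in zip(fp_a, fp_b))
    return matches / max(min(len(fp_a), len(fp_b)), 1)
```

As the article notes, audio was only one signal among several: a soundtrack can be stripped or re-recorded just as a frame can be cropped, so fingerprints from different signals are combined rather than relied on alone.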
The Christchurch attacks were terrifying not only for their violence but because of how they unfolded in the digital sphere. “[A] surprising thing about it is how unmistakably online the violence was, and how aware the shooter on the videostream appears to have been about how his act would be viewed and interpreted by distinct internet subcultures,” Kevin Roose wrote, in the Times, following the attack. “In some ways, it felt like a first—an internet-native mass shooting.” In the aftermath of the shooting, Facebook faced widespread criticism for failing to take down the videos fast enough, for failing to provide transparency in how it handles violent extremism, and for not doing more to combat terrorism and hate speech online. Similar complaints have been made against other social-media companies, including Reddit, which recently removed groups devoted to videos depicting human death, and Twitter, which has long fostered trolls and harassers. After the attack, Representative Bennie Thompson, a Democrat from Mississippi, sent a letter to several major tech companies urging them to crack down on extremist content. “If you are unwilling to do so, Congress must consider policies to ensure that terrorist content is not distributed on your platforms,” he wrote.
Facebook and other tech companies have not always done enough to address hate speech. Many early policies took an absolutist view of the importance of free speech, which let racism or misogyny proliferate, making the platforms hostile places for women and people of color and accelerating the spread of extremist views. This Easter Sunday, a little over a month after Christchurch, hundreds of people were killed in Sri Lanka, in a series of coördinated suicide bombings at Christian churches and luxury hotels. Government officials cited Facebook’s role in spreading misinformation and blocked the platform within the country’s borders following the attacks. For the last few years, however, in the face of significant criticism, almost every major platform has tried to crack down on these issues, giving users new features for privacy and security, hiring armies of moderators, and consulting with outside academics to create better policies. At this point, Facebook wants to keep violence off its site as much as we do, if only to avoid scandal; in the twenty-four hours after the shooting, hundreds of people worked to keep the video from spreading. No doubt, more can be done. But the Christchurch shooting may demonstrate that, as long as social media exists, some amount of horror is bound to slip through the cracks. Ultimately, what likely disturbs us most about moments like Christchurch is that this kind of content exists and, perhaps worse, that there are bad people trying to make it spread.
Thirty-six hours after the shooting, the escalations team had moved into its third phase: “enforcement.” Moderators had mostly contained the video, and were monitoring to make sure that it didn’t reëmerge. The next day was Sunday, and Bagwell, a lifelong University of Texas fan, took his young son to his first baseball game—a rivalry game between U.T. and Texas Tech. The game was close, and, in the top of the ninth, Tech looked poised to take the lead. But U.T. brought in a relief pitcher who retired the next three batters, ending the game. “It doesn’t happen every day,” Bagwell told me, “but sometimes the good guys win.”