Facebook hopes to make its livestreaming tools safer—and protect its audience—with a new “one strike” policy.
The company faced heavy criticism after a gunman in New Zealand broadcast a violent attack on a mosque. Social media companies have long struggled to police violent speech on their platforms, but live video presents unique problems for engineers trying to keep audiences safe.
Now, elected officials in New Zealand, led by Prime Minister Jacinda Ardern, are calling on government and tech leaders to do more to limit the spread of messages from hate organizations and terrorist groups.
In a New York Times opinion column Saturday, Ardern wrote of the balance that must be struck: “Social media connects people. And so we must ensure that in our attempts to prevent harm that we do not compromise the integral pillar of society that is freedom of expression. But that right does not include the freedom to broadcast mass murder.”
Officials from the U.S., Canada and Britain are expected to attend a summit in Paris on online extremism, along with Twitter CEO Jack Dorsey and staff from Facebook, Amazon and Google, The Washington Post reports.
A number of nations are expected to sign the Christchurch Call, the Times reports, but the U.S. is not among them, citing concerns about free speech.
Facebook announced in a blog post that the new policy will restrict access to livestreaming tools for users who break certain rules, such as posting content that violates the platform’s community standards.
Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate. As a direct result, starting today, people who have broken certain rules on Facebook — including our Dangerous Organizations and Individuals policy — will be restricted from using Facebook Live.
Tackling these threats also requires technical innovation to stay ahead of the type of adversarial media manipulation we saw after Christchurch when some people modified the video to avoid detection in order to repost it after it had been taken down. This will require research driven across industry and academia. To that end, we’re also investing $7.5 million in new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.
The company acknowledges that past policies haven’t done enough to police the discourse on its platform and curb violent rhetoric and hate speech. Facebook says a new “one strike” rule will help crack down on bad behavior and more quickly weed out unscrupulous or malicious users.
Today we are tightening the rules that apply specifically to Live. We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.
We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.
We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook. Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.
Facebook also plans to implement more safeguards and invest in new technology to detect content that violates its policies more quickly.
It also said it would fund research at three universities on techniques to detect manipulated media, which Facebook’s systems struggled to spot in the aftermath of the attack.
Facebook has said it removed 1.5 million videos globally that contained footage of the attack in the first 24 hours after it occurred. It said in a blog post in late March that it had identified more than 900 different versions of the video.
Facebook’s actions come as the platform’s role in spreading hate speech remains under intense scrutiny. Weeks after the Christchurch attacks, New Zealand’s Privacy Commissioner John Edwards called Facebook “morally bankrupt pathological liars” who “facilitate foreign undermining of democratic institution” in since-deleted tweets.
“Facebook cannot be trusted,” he added.
Still, Facebook has found some political leaders willing to praise its first steps in addressing the crisis.
“I’ve spoken to Mark Zuckerberg directly twice now, and actually we’ve had good ongoing communication with Facebook,” New Zealand Prime Minister Jacinda Ardern told CNN’s Christiane Amanpour Monday. “The last time I spoke to him a matter of days ago, he did give Facebook’s support to this call to action.”