

Facebook Live Announces Immediate 'One Strike Bans' -- No More Warnings


"Today," Facebook announced on Tuesday night, "we are tightening the rules that apply specifically to [Facebook] Live. We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses."

As ever, policy change at Facebook comes when pressure is applied, and that pressure is certainly being applied now. And so, from now on, Facebook explained, "someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time."

Facebook Live has been at the heart of the backlash against social media content for two reasons. First, the live streaming and subsequent sharing of the mosque attacks in New Zealand was the trigger for the current tidal wave of regulation hitting around the world. And second, the real-time, large-scale nature of live video makes it by far the hardest content for Facebook to police; in the wake of Christchurch, there were many calls for the service to be pulled for exactly that reason.

Last week, Facebook CEO Mark Zuckerberg met with President Emmanuel Macron of France for discussions that covered the policing of damaging and dangerous content. And this announcement, by far the most stringent restrictions the company has ever placed on its flagship Live service, is timed to coincide with New Zealand's Prime Minister Jacinda Ardern meeting President Macron on Wednesday. The leaders are expected to sign the "Christchurch Call," a non-binding agreement that places expectations on social media companies to better monitor and report on the toxic material published on their platforms.

"I've spoken to Mark Zuckerberg directly twice now, and actually we've had good ongoing communication with Facebook," Jacinda Ardern told CNN on Monday. "The last time I spoke to him a matter of days ago, he did give Facebook's support to this call to action."

The terrorist attacks in Christchurch have been a clarion call for regulation around the world. At the time of the attacks, Facebook and its leadership were lambasted for their (lack of) response. The live stream was broadcast without being picked up, then widely edited and shared. In the aftermath, the company blamed a lack of AI training data and essentially tried to shrug the incident off, underestimating the global response that was about to hit. The company even argued against adding time delays to feeds, on the grounds that a delay would compromise the "immediacy" of the Facebook Live experience.

"Following the horrific terrorist attacks in New Zealand," the company said in announcing this latest policy change, "we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate."

It took two weeks after the event for a Facebook executive to publicly acknowledge the issue, with COO Sheryl Sandberg writing an open letter to the New Zealand Herald, accepting that "we have heard feedback that we must do more – and we agree. In the wake of the terror attack, we are taking three steps: strengthening the rules for using Facebook Live, taking further steps to address hate on our platforms, and supporting the New Zealand community."

On Tuesday, the company admitted this issue again. "One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People — not always intentionally — shared edited versions of the video, which made it hard for our systems to detect," the company wrote, adding that "although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research."
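Facebook does not detail how its "video and audio matching technology" works, but a common building block for this kind of matching is perceptual hashing: reduce each frame to a compact fingerprint and flag uploads whose fingerprints sit close to a known-bad video's. The minimal Python sketch below uses the simple "average hash" technique to illustrate the idea; the 8x8 hash size, the gradient image standing in for a video frame, and the notion of a distance threshold are illustrative assumptions here, not Facebook's actual pipeline.

```python
# Minimal sketch of perceptual ("average") hashing, one simple technique
# behind the kind of image/video matching Facebook describes. Illustrative
# only: the hash size and the crop "edit" below are assumptions, not a
# description of Facebook's real system.
from PIL import Image
import numpy as np

def average_hash(image: Image.Image, hash_size: int = 8) -> np.ndarray:
    """Shrink to hash_size x hash_size grayscale; each bit = pixel > mean."""
    small = image.convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = np.asarray(small, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of fingerprint bits that differ between two hashes."""
    return int(np.count_nonzero(h1 != h2))

# Stand-in for a video frame, plus a lightly edited variant (crop + rescale).
frame = Image.linear_gradient("L").resize((640, 360))
edited = frame.crop((20, 10, 620, 350)).resize((640, 360))

d = hamming_distance(average_hash(frame), average_hash(edited))
print(f"Hamming distance: {d} of 64 bits")  # small distance -> likely a match
# Heavier edits (overlays, mirroring, re-filming a screen) push the distance
# past any fixed threshold, which is why edited variants evade naive matching.
```

The same logic explains the cat-and-mouse game the company describes: any fixed similarity threshold can be beaten by a sufficiently aggressive edit, which is why variants of the video kept resurfacing.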

And so Facebook has also announced new partnerships with the University of Maryland, Cornell University and the University of California, Berkeley, to research AI techniques to "detect manipulated media across images, video and audio, and to distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs."

Facebook acknowledged in a blog post shortly after Christchurch that "people have asked why artificial intelligence didn’t detect the video from last week’s attack automatically. AI systems are based on 'training data', which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video." And so the company has to rely on moderators and user reports, yet "during the entire live broadcast, we did not get a single user report."

"Tackling these threats," the company said on Tuesday, "requires technical innovation to stay ahead of the type of adversarial media manipulation we saw after Christchurch when some people modified the video to avoid detection in order to repost it after it had been taken down. This will require research driven across industry and academia. To that end, we’re also investing $7.5 million in new research partnerships with leading academics from three universities, designed to improve image and video analysis technology."

This is very much a step in the right direction. Given the scale of data at Facebook, opening the platform to academic research into content monitoring on large-scale live feeds should yield results.

"This work," they explain, "will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack."

As reported by Fast Company shortly after launch, "Live has been touted as Mark Zuckerberg’s pet project, one he’s 'obsessed' with. Some believe Live is the key to Facebook’s future—a resource that will help it compete against broadcast television."

But, somewhat unsurprisingly, giving the entire world a platform to broadcast themselves live to anyone, anywhere has not proven such a great idea without rigorous enforcement in place. This policy change, albeit belated, was wholly inevitable. Although "the overwhelming majority of people use Facebook Live for positive purposes," Facebook said, "Live can be abused and we want to take steps to limit that abuse."

The changes at Facebook are far from done. We are at the very earliest stages of the company and the broader social media sector working to stave off truly revolutionary change, such as the break-up of the companies that Facebook's own co-founder Chris Hughes called for last week. And it is far from clear that they will succeed.
