Facebook Says Internal System Was Unprepared for New Zealand Shooting Livestream

First-person livestream of the shooting “was a type of video we had not seen before,” Facebook policy director Neil Potts tells U.K. lawmakers


Facebook’s artificial intelligence tools were unable to quickly scan and remove live footage of the New Zealand terror attack, in which 50 Muslims were shot and killed last month in Christchurch, because its system was unprepared to detect such a violent attack, a company executive said Wednesday.

Neil Potts, Facebook’s public policy director, said the first-person livestream of the shooting “was a type of video we had not seen before,” according to Bloomberg. The footage, recorded with a GoPro camera mounted on the shooter’s head, resembled nothing the company’s AI system had been trained on, leaving it unable to flag the video, he explained. Potts made the remark while testifying before a British parliamentary committee investigating hate crimes.

The explanation comes after Facebook was criticized by New Zealand Prime Minister Jacinda Ardern for not moving fast enough to remove the livestreamed attack. The livestream, which captured anguished cries piercing the brief moments between gunshots, remained up for nearly 20 minutes before New Zealand police alerted Facebook that the attack was being broadcast. Shortly after the attack, Ardern said Facebook and other major tech companies bear some “responsibility” for the video.

But entirely eradicating the video is immensely difficult, if not impossible. An attack like this isn’t something that can be proactively blocked from social media. Instead, the shooting, which was livestreamed using Facebook Live and then recirculated on other platforms, forces human moderators and artificial intelligence tools to race to block each new copy. And that reaction isn’t instantaneous or flawless: while platforms like Facebook and YouTube are busy deleting posts, some users are working just as quickly to re-share the footage. It’s the digital equivalent of capping a busted fire hydrant.

Facebook’s internal AI tools, while unable to immediately spot the livestreamed attack, were able to scan the original attack video and flag similar footage, allowing the system to remove most subsequent uploads, a company rep explained last month. Facebook deputy general counsel Chris Sonderby said the livestream was viewed 200 times and wasn’t reported by anyone watching it; Facebook received its first user complaint 29 minutes after the broadcast started, or about 12 minutes after the livestream ended. The video was ultimately viewed about 4,000 times, according to Sonderby, and copies of it were removed 1.5 million times in the 24 hours after the attack, including 1.2 million uploads that were blocked automatically.
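Facebook hasn’t published the details of how that matching works, but the general approach, often called perceptual hashing, is well understood: the platform computes compact fingerprints from the known video’s frames and compares every new upload against them, tolerating small differences so that re-encoded or lightly edited copies still match. What follows is a minimal, illustrative sketch in Python; every function name and the toy frame data are hypothetical, not Facebook’s actual system, and real fingerprints are far more robust (Facebook has said it also used audio matching to catch edited variants).

```python
# Illustrative sketch of perceptual-hash matching for re-uploaded video.
# All names and data here are hypothetical examples, not Facebook's system.

from typing import List

def average_hash(frame: List[List[int]]) -> int:
    """Fingerprint one grayscale frame (a 2D list of 0-255 pixel values):
    each bit records whether a pixel is brighter than the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_video(frame_hashes: List[int],
                        banned_hashes: List[int],
                        max_distance: int = 4) -> bool:
    """Flag an upload if any of its frame fingerprints falls within
    max_distance bits of a fingerprint from the banned video. The
    tolerance is what lets re-encodes and minor edits still match."""
    return any(hamming(f, b) <= max_distance
               for f in frame_hashes for b in banned_hashes)

# Toy demo: a 4x4 "frame" and a slightly brightened re-encode of it.
original = [[10, 200, 10, 200]] * 4
reencoded = [[12, 205, 12, 205]] * 4  # same structure, shifted brightness

banned = [average_hash(original)]
print(matches_known_video([average_hash(reencoded)], banned))  # True
```

A scheme like this explains both numbers Sonderby cited: copies close to the original fingerprint can be blocked automatically at upload, while heavily edited variants slip past the distance threshold and still need human reports to catch.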
