As terrorist groups like ISIS continue to leverage social media as a recruiting tool, Facebook is using artificial intelligence to weed out terrorist activity on its platform, according to a report from USA Today.
“Sophisticated algorithms” are helping the social media giant scan for extremist videos, posts, and pictures in an effort to curb terrorist propaganda. “Hashes” — a form of digital fingerprinting — help Facebook spot extremist content before it’s even posted.
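As a rough illustration of how hash-based fingerprinting can work, the sketch below checks an upload against a blocklist of known fingerprints before it is published. This is an assumption-laden simplification: production systems like Facebook's reportedly use perceptual hashes that survive re-encoding and cropping, while this example uses exact-match SHA-256 hashing, and the blocklist contents and function names are hypothetical.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known extremist media.
# Real matching systems use perceptual hashes (robust to re-encoding
# or cropping); SHA-256 here is purely for illustration.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example known-bad media bytes").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

def should_block(content: bytes) -> bool:
    """Check an upload against the blocklist before it is published."""
    return fingerprint(content) in KNOWN_BAD_HASHES

print(should_block(b"example known-bad media bytes"))   # matches blocklist
print(should_block(b"harmless vacation photo bytes"))   # no match
```

The key property is that matching happens at upload time, which is how previously identified content can be stopped "before it's even posted."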
“Just as terrorist propaganda has changed over the years so have our enforcement efforts. We are now really focused on using technology to find this content so that we can remove it before people are seeing it,” Monika Bickert, a former federal prosecutor helping Facebook’s efforts, told USA Today.
AI cannot carry the heavy burden alone, though. Facebook has a team of 150 workers — including counterterrorism experts — dedicated to finding and removing extremist posts. The company is also working with researchers specializing in the social media tactics of terrorist groups.
Facebook and other platforms have been criticized by politicians for not doing enough to crack down on extremist content. Following the recent atrocities in Manchester and London, British Prime Minister Theresa May said Facebook and Google give terrorists a “safe space needed to breed.”
In response, Simon Milner, Facebook’s director of policy, said, “We do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists.”
Still, the amount of manpower that would be needed to track extremist content manually is overwhelming. AI helps Facebook spot trends in posts flagged for terrorist activity and block terrorists from creating new profiles, helping the company avoid an endless game of “whack-a-mole” with extremists.
In February, Mark Zuckerberg shared a roughly 6,000-word manifesto that touched on how Facebook is using AI to disrupt terrorist activity.
“Artificial intelligence can help provide a better approach,” he wrote. “We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”