Facebook, in its ongoing battle against the spread of fake news, will start checking the authenticity of photos and video, the company announced on Thursday.
The social network has built internal tracking tools to flag potentially edited content, which it will share with its 27 third-party fact-checkers for review. The fact-checkers will leverage “experts, academics, or government agencies” to verify the photos and videos.
“People share millions of photos and videos on Facebook every day. We know that this kind of sharing is particularly compelling because it’s visual. That said, it also creates an easy opportunity for manipulation by bad actors,” Facebook said in a blog post.
After months of research, Facebook said questionable photos and videos fall into three categories: 1) “manipulated or fabricated” media, 2) media presented out of context, or 3) fabricated text or audio claims.
Thursday’s announcement is the latest attempt by Facebook to curb misinformation campaigns ahead of the 2018 U.S. midterm elections. Facebook was skewered by critics for its inability to weed out Kremlin-funded fake news during the 2016 presidential race. The company shared thousands of fake ads with Congress, many of which included manipulated pictures of President Obama, Hillary Clinton and Donald Trump.
“Obama was always a mere pawn in the hands of Arabian Sheikhs,” one ad read. “His latest orders are just proving it. All these refugees, which we are about to take in, are soldiers with one simple goal. They are going to try to terrorize the nation.”
CEO Mark Zuckerberg has said he’s “dead serious” about this issue, and Facebook has introduced several measures to curtail fake news over the past year. Earlier this year, the social network began requiring advertisers to verify their location, and it added a tab to political ads letting users know who paid for them. And last month, Facebook removed hundreds of Iran- and Russia-tied accounts for coordinated misinformation campaigns.