Facebook is pushing back on a Washington Post report on Tuesday saying that the social network gives its users a “trustworthiness score” in a new attempt to combat the spread of misinformation on the platform.
To account for users deliberately flagging real news as fake, Facebook has started rating users based on their reporting history, according to the Post. Per the report, users are scored on a 0 to 1 scale that shifts according to how often they flag stories that are later deemed truthful by Facebook’s third-party fact-checkers, such as Snopes.
But in a statement to TheWrap, Facebook disputed both the accuracy of the Post’s headline, “Facebook is rating the trustworthiness of its users on a scale from zero to one,” and details of the report.
“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading,” said a Facebook spokesperson. “What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.”
The company clarified that it doesn’t have a universal rating system, but that the score is one of many signals its moderators use to vet news stories. Facebook didn’t dispute the 0-to-1 scoring system reported by the Post, and it wouldn’t share further details with TheWrap on what goes into the ratings. The scoring system has been in place for about a year, according to the Post.
Based on the company’s response, the system appears to be Facebook’s safeguard against “The Boy Who Cried Wolf.” Facebook’s fact-checking team is swamped daily with stories to review, and pegging a score to users lets the review team expedite or deprioritize a flagged story based on the flagger’s history.
Facebook started letting users flag news they believed was incorrect in 2015. But the tool has been abused by users reporting news they merely dislike, according to the social network.
It isn’t “uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Facebook product manager Tessa Lyons told the Post.
The rating comes to light at a time when Silicon Valley — and Facebook in particular — is working to stop the spread of fake news during the 2018 U.S. midterms. Facebook said it spotted 32 “inauthentic accounts” looking to politically manipulate users on both Facebook and Instagram last month.