“Everything Facebook is doing as a company should be interpreted in the context of its own survival and self-interest,” one expert tells TheWrap
The future may be private, as Facebook CEO Mark Zuckerberg declared on Tuesday, but experts are divided on whether the social media giant’s new redesign will help the company move beyond the security and misinformation issues that have plagued it in recent years.
The redesign was “a masterstroke by Facebook,” BlitzMetrics chief technology officer Dennis Yu said, because “they a) addressed much of the privacy concerns that have been nailing them and b) they’re also able to grow their business.”
But Jen King, director of consumer privacy at The Center for Internet and Society at Stanford University, said she remains “fairly skeptical” because the site’s facelift fails to tackle the “fundamental question of who can access your profile data.”
This was the issue that rocked Facebook during the spring of 2018, when the company admitted political data firm Cambridge Analytica had harvested the personal information of 87 million users. Several other embarrassing privacy concerns were unearthed in the months to follow; Facebook said in December it had given major tech platforms like Spotify and Netflix access to millions of private user messages, confirming key details from an illuminating report from The New York Times.
“Most of the problems we’ve seen on the privacy front were not so much from users invading other users’ privacy,” King said. “It was about the company and the third-party relationships it’s been enabling.”
Still, many see a positive in Facebook shifting its emphasis from its News Feed to groups since that should help address a major headache: fake news. “This is the most difficult problem on the internet, which is being able to spot and moderate fake news,” Yu said. “No one has solved this, and I don’t think Facebook will be able to solve this.”
But now Facebook users are being encouraged to discuss topics within dedicated groups rather than having bogus articles circulate on their News Feed. The company will even begin to recommend groups for users to join based on subjects in which they’ve shown an interest. The shift should add another level of protection, quarantining users inside communities and letting them decide if they’re comfortable with what is and isn’t being shared. And, conveniently for Facebook, it allows the company to take less of an active role in policing content.
“One of the underlying aspects of this is the content moderation problem is huge. However many people they’ve hired, it’ll never be enough,” King said.
After initially shrugging off how Russian trolls leveraged Facebook to spread misinformation before and after the 2016 U.S. election, Facebook has spent the last two years hiring thousands of moderators and developing internal tools to combat fake news. The company has since thumped its chest on several occasions in the last year after removing Russian and Iranian misinformation efforts. Still, this has become a game of whack-a-mole for Facebook.
“They need orders of magnitude more if they’re going to be much more proactive and timely in trying to assess content,” King continued. “It’s not just a problem saying ‘Oh, we’ll throw it all in front of the AI and let it solve this.’ AI can’t solve this problem. It may be able to help escalate things for humans to check, but ultimately it’s a huge mess.”
Then there’s the matter of Facebook’s inconsistent track record in addressing privacy concerns. While Zuckerberg announced end-to-end encryption across all messaging platforms and other new privacy protections, many see the shift as a cynical, preemptive strike against government regulation.
“There are looming threats that are existential to (Facebook), just as in the same way there were for Microsoft,” Alexander Howard, a D.C.-based digital governance expert, said. “Everything Facebook is doing as a company should be interpreted in the context of its own survival and self-interest.”
Currently, Facebook is staring at a potential $5 billion in fines from the Federal Trade Commission over its privacy issues. It also faces stricter rules recently enacted by the European Union. American politicians have so far shown reluctance to hammer Facebook, but that may be changing, too. Sen. Elizabeth Warren’s plan to break up tech giants could impact Facebook, which also owns Instagram and WhatsApp, in the same way antitrust regulations hit Microsoft in the late ’90s.
Facebook’s renewed emphasis on user privacy is, in part, an effort to “appease” government officials before they bring the pain, Yu said. Regulate yourself so the government doesn’t do it for you. Facebook likely sees regulation as just as big a threat to its bottom line as a user exodus; indeed, even with a string of ugly headlines in the last year, Facebook continued to add users at a healthy clip, hitting 2.38 billion monthly users during the first quarter of 2019.
“Facebook appears to be preparing for a future where its vast ability to mine user data is reined in by greater data protection laws, and platforms lose their safe harbor for the content they feature,” tech ethicist David Ryan Polgar said. “Right now there is a drum beat that much of the business model of Facebook crosses ethical boundaries, but the law has not caught up. Eventually it will, and tech companies know this.”
And beyond the threat of government fines and regulation, Facebook has given its critics reason to question its newfound privacy-first mantra. Most glaringly, the company has failed to release its “Clear History” feature, which would let users delete the information the company collects from their visits to other websites and apps, a full year after announcing it.
“If the icy relationship between Reagan and Gorbachev was noted for Reagan’s famous ‘Trust but verify’ quip, users’ frosty relationship with Facebook has evolved into a ‘Verify before trusting’ mentality,” Polgar said.
Howard echoed that sentiment. “The most important rule with this company, as it is with most others, is to listen to what they have to say and judge how likely it is to be accurate based on their past statements,” he said. “And then watch really closely what they do.”
The shift toward groups could also be critical for Facebook’s bottom line, Yu said, noting that Facebook now has 2.38 billion monthly users worldwide and could simply be running out of new ones to add. The company is already scrambling to bring high-speed internet to every corner of the world, including Africa and India. “The only way to grow is to get deeper in those relationships,” Yu said.
By fostering group conversations, while safeguarding those messages, Facebook aims to give its users a reason to spend more time on the platform. That, in turn, will allow Facebook to hit its users with more ads and keep the ad revenue rolling in.
“I’m cynical about the whole thing. That said, I do believe most of what they’re doing is legitimately good,” Yu said. “But I believe it’s because they’re being forced. It’s like the kid saying ‘Sorry, I hit my brother.’ No, you’re sorry because you got caught and are being forced to apologize.”
But at least the little brother might not have a sore arm tonight.