Instead of moderation, platforms "should focus on limiting virality," Kantrowitz says
2020 has stood out for plenty of reasons. But perhaps the most notable development in tech this year has been the willingness of Facebook and Twitter to increasingly moderate content, from purging fake news on COVID-19 to adding warning labels to President Trump's posts. Have those measures gone too far?
That's what TheWrap's Sean Burch and Alex Kantrowitz, publisher of the Big Technology newsletter, debated on this week's episode of Tech Talk.
Kantrowitz outlined the growing tension between the free speech absolutists, who are against Facebook and Twitter censoring content, and those in favor of "thoughtful moderation," where tech giants are proactive in moderating their platforms. That tension has only increased in recent weeks, following Twitter and Facebook's decision to censor reports from the New York Post on Hunter Biden, son of President-elect Joe Biden.
Now, after deciding they'd take a more hands-on approach to policing their users, Twitter and Facebook have found themselves playing a game of "whack-a-mole," Kantrowitz said, where the moment you take action against one piece of content, another problem springs up. Ultimately, Kantrowitz argued, there's a better way for these social media giants to spend their time -- improving their platforms. To hear his full answer, though -- as well as an in-depth look at content moderation and Facebook's antitrust lawsuit -- you'll need to watch the full conversation.
Below is part of their talk -- and be sure to watch the full conversation above.
Sean: The main takeaway I've had this year is that the big social media companies -- Twitter, Facebook, even Google-owned YouTube -- have really stepped up their moderation policies. There's a question whether that's a good thing for consumers. What's your opinion?
Alex: It's really interesting, given where these platforms began. Just a few years ago, Twitter used to call itself the "free speech wing of the free speech party." Facebook said it didn't want anything to do with political content. YouTube let Alex Jones and a bunch of other wing-nuts build empires on its service until cracking down. So in recent years, we've definitely seen a crackdown.
And there's been this debate between free speech absolutists and the folks who are for thoughtful moderation. And the free speech absolutists are basically like, "If you start taking stuff down, you're going to end up taking down stuff you probably shouldn't. You might be excessive in your desire to remove content, and then really change the world in negative ways."
I think some of them might point to the action Facebook and Twitter took on the Hunter Biden story. It was run by the New York Post based on what we know were illicitly obtained files from (Hunter Biden's) laptop. Both of them were like, "We don't want a repeat of 2016," where foreign governments had exploited their services and impacted the election. Who knows if they changed the outcome but it certainly had some impact on it. They were on the alert for foreign governments to be doing something similar; they saw the Post story, thought it might be something similar, and removed it.
The free speech absolutists are saying we don't even want a chance of this happening, and the people for thoughtful moderation believe these platforms are changing society; they're having negative impacts in a number of different areas, and they do need to moderate.
I'd love to hear where you come down on the divide between free speech absolutists and thoughtful moderation. I find myself coming down more and more on the side of free speech absolutism. It worries me a bit when you have nameless, faceless people working at these companies, making decisions on what is and isn't OK to post. And on the decision to censor the New York Post's Hunter Biden story: I'm surprised these companies don't have more egg on their face this week.
We need to view these stories through two lenses: There are the outputs, the stuff they spit out, and the machine. Why is the machine spitting out stuff the way it is? I don't think there's any debate over the fact social media platforms encourage outrage. Misinformation spreads like wildfire. And they become outposts for the fringe, not because of the merits of the fringe ideas, but because the fringe benefits from spending energy on social platforms in the way mainstream [outlets] do not.
Let's take an example: Everyone who's saying the coronavirus is a hoax spends their day posting on Facebook about how it's a lie. Dr. Fauci isn't going to do that, right? On social media, the level-headed sources tend to be outnumbered by lunatics. And I'm not saying there's a universal source of truth, but it's just the dynamic on the platform.
So I think you have an option. Let's start by saying content moderation is never going to be perfect when you decide to take stuff down. It's just never going to be perfect. So what I think we need to do is focus on the machinery and say, "Can we create a platform where there's less outrage, less misinformation, and we can sort of even out the lunacy that exists on these platforms with good information?"
We need the platforms, instead of focusing their efforts on moderation, to focus on limiting virality -- where some of the wing-nut stuff can stay on the platform, but it doesn't have an advantage to spread further than everything else, which is the way it is now.
Watch the full conversation in the video above.