How Facebook and Twitter’s Content Moderation Could Open a Legal ‘Pandora’s Box’


“It opens them up to liability for information that’s published on their platform,” Matt Bilinsky, a tech and media-focused attorney in Los Angeles, says


In an ironic twist, a push by Silicon Valley’s internet giants to curb undesirable speech could potentially undercut their legal immunity. Legislation introduced last month would revamp Section 230 of the Communications Decency Act — the legal shield that safeguards tech stalwarts such as Facebook, Twitter, Google and YouTube against being sued for what their users post.

Sen. Josh Hawley, R-Missouri, one of tech’s loudest critics in Washington, D.C., has proposed a bill that would essentially strip these companies of their protection unless their content moderation is deemed “politically neutral.” The consequences of curtailing Section 230 would be dire for Facebook, Twitter and the like. Hawley did not respond to TheWrap’s request for comment.

“It’s a very simple explanation: It opens them up to liability for information that’s published on their platform,” Matt Bilinsky, a tech- and media-focused attorney in Los Angeles, told TheWrap.

Today, if a user defames another on Facebook, erroneously claiming that person gave them a venereal disease, for instance, the social network is under no legal threat. As Bilinsky explained, the law differentiates between publishers and platforms.

“But if you remove Section 230 protection, it opens up Pandora’s box. [Facebook and other tech giants] are going to start getting those lawsuits,” Bilinsky continued. “And where they land, who knows? The expense for Facebook and Twitter to find out would be costly.”

The new legislation would task the Federal Trade Commission with certifying that tech companies approach moderation in a neutral fashion. It would apply to any company with more than 30 million monthly active users in the U.S., 300 million monthly active users globally or $500 million in global revenue. Certification would require a supermajority vote of the commission and would have to be renewed every two years; a company that failed to obtain it would lose its immunity and be exposed to legal liability.

How’d we get here? Section 230 dates back to 1996 — the Dark Age of the internet. Prior to its enactment, “there was an interpretation of the rules that, if you exercised any editorial control over content, you incurred more liability for it,” Jeff Kosseff, a law professor at the U.S. Naval Academy and author of “The Twenty-Six Words That Created the Internet,” a new book on Section 230, told TheWrap.

Legislators, according to Kosseff, adopted Section 230 to give sites the “breathing room” to moderate content rather than shy away from it for fear of being slapped with a lawsuit.

“Congress passed it because they did not want the platforms to be hands-off,” Kosseff continued. “They gave platforms the tremendous flexibility to moderate objectionable content. That was a policy choice that Congress made.”

But rather than encourage heavy regulation, the act was intended to foster growth, Bilinsky said. And for two decades, it did just that, allowing companies like Facebook and Google-owned YouTube to flourish on user-generated content without fear they’d be sued for something a user said. It was taken as “gospel,” Bilinsky said, “that more was better: more sharing, more speech, more information.”

Fast forward 23 years, and that gospel isn’t as readily accepted by the tech faithful. Now, moderation and fighting “hate speech” — a term the major platforms can’t agree on how to define — have become a way to build trust with an increasingly wary audience.
In May, Facebook banned a number of far-right commentators, including Alex Jones and Milo Yiannopoulos, as well as Nation of Islam leader Louis Farrakhan, for violating its policies on “dangerous individuals and organizations.” The company’s policy says it “does not allow any organizations or individuals that proclaim a violent mission or are engaged in violence” to have Facebook accounts. But Facebook did not share the specific infractions that led to the bans — making the decision appear capricious.

Twitter, too, has spent much of the last year purging its platform, after chief executive Jack Dorsey said the company needed to improve the “health” of public conversation. But the company’s arbitrary enforcement of its arcane rules has led to confusion on both the left and the right. When pressed by Joe Rogan on his podcast earlier this year, Dorsey said on several occasions that he didn’t know the “exact specifics” behind a number of bans.

The result is that no one is satisfied with the current arrangement.

“On one side you have the Ted Cruz, Josh Hawley argument that there’s too much moderation of particular points of view,” Kosseff said. “Then you also have a whole other criticism, which is that the platforms don’t moderate enough.”

He continued: “The platforms really have two different, almost polar opposite criticisms of them, and they’re caught in the middle.”

This was especially clear last month, when YouTube decided to demonetize, but not ban, conservative commentator Steven Crowder after Vox reporter Carlos Maza shared a video compilation of Crowder calling him a “lispy queer” and “gay Latino,” among other slights, while critiquing his political opinions. (Maza said many of Crowder’s fans had also harassed him, texting him demands that he debate the commentator.)

Crowder, who hosts a right-leaning comedy show, had his entire channel demonetized for a “pattern of egregious actions” that “harmed the broader community,” according to the Google-owned video service, even though YouTube said he had not violated its policies. That decision came a day after YouTube had initially said it wouldn’t take action against Crowder because he hadn’t broken its rules.

Ultimately, YouTube’s move did little to placate Maza, who tweeted that the company “made this problem worse than it already was” by not banning Crowder altogether.

Republicans — including President Trump on several occasions — have ridiculed Silicon Valley’s attempts to moderate speech. And now, with Sen. Hawley’s Ending Support for Internet Censorship Act, lawmakers are at least considering yanking tech’s ability to lean on Section 230 when making content decisions.

“With Section 230,” Hawley said in a statement last month, “tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship. Unfortunately, and unsurprisingly, big tech has failed to hold up its end of the bargain.”

This is where it gets hairy for Silicon Valley. Operating under “commonly understood and reasonable community guidelines” is completely kosher under Section 230, Bilinsky explained, but moderation in moderation is key.

“The more extensive you make the guidelines, the more selectively you enforce them, the more discretion you start to exercise over your platform, you start to move away from what seems like a platform and more like what looks like a publisher,” Bilinsky said.
That drift is what has put Section 230 up for debate — and companies like Facebook, Twitter and Google at increased legal risk. Without their current protections, these companies would face ballooning legal fees and, potentially, the abandonment of the user-generated content that built them up in the first place.
