Instagram Uses Artificial Intelligence to Eliminate ‘Toxic’ Comments

“We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them,” the company says

Instagram is the latest platform to turn to artificial intelligence — machines trained to carry out human-level functions — to combat spam and hateful comments.

Last October, the popular photo-sharing app started using DeepText, an AI system developed by its parent company, Facebook, to weed out unwanted comments, according to a report from Wired. DeepText leverages “word embeddings,” vector representations of words and phrases that capture their context, to determine whether a comment warrants deletion.

For example, the words “white” and “black” can be used both innocuously and in a mean-spirited way. DeepText is trained to decipher the difference and flag comments that violate the company’s Community Guidelines.
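The idea behind word embeddings is that each word becomes a vector, and the meaning of an ambiguous word like “white” can be read off the vectors of the words around it. The following is a minimal sketch of that intuition, not Instagram’s system: the toy three-dimensional vectors, the `TOXIC_DIRECTION` axis, and the averaging rule are all illustrative assumptions (real models like DeepText learn hundreds of dimensions from huge comment corpora).

```python
import math

# Toy hand-made "embeddings" -- purely illustrative, not learned vectors.
EMBEDDINGS = {
    "white": [0.9, 0.1, 0.2],
    "black": [0.8, 0.2, 0.2],
    "shirt": [0.7, 0.0, 0.1],  # innocuous clothing context
    "hate":  [0.1, 0.9, 0.8],  # hostile context
}

# Hypothetical "hostility" axis in the same toy space.
TOXIC_DIRECTION = [0.0, 1.0, 1.0]

def cosine(a, b):
    """Cosine similarity between two vectors (0 when either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def comment_score(comment):
    """Average the known word vectors, then measure how far the
    result leans toward the hostile direction (higher = more hostile)."""
    vectors = [EMBEDDINGS[w] for w in comment.lower().split() if w in EMBEDDINGS]
    if not vectors:
        return 0.0
    mean = [sum(col) / len(vectors) for col in zip(*vectors)]
    return cosine(mean, TOXIC_DIRECTION)

print(comment_score("nice white shirt"))  # low: "white" in a clothing context
print(comment_score("white hate"))        # higher: same word, hostile context
```

The same word, “white,” produces a very different score depending on its neighbors, which is the property that lets an embedding-based classifier tell an innocuous use from an attack.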

“We want to foster a positive, diverse community. We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages,” the Instagram guidelines state.

“It’s never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases,” the company continues. “When hate speech is being shared to challenge it or to raise awareness, we may allow it. In those instances, we ask that you express your intent clearly.”

Instagram initially used the technology to eradicate spam on its app last fall. It then had its moderators feed a mountain of comment data into DeepText to help it build an algorithm that filters unwanted comments.

DeepText also takes other factors into account, such as whether a commenter knows the person they’re messaging, as well as the commenter’s history. It now churns out a score between 0 and 1 for each comment, and if the score exceeds an undisclosed threshold, the comment gets blasted into the ether.
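The decision rule described above can be sketched in a few lines. To be clear about assumptions: the article says only that the score runs from 0 to 1 and that the cutoff is undisclosed, so the `THRESHOLD` of 0.8 and the leniency adjustment for commenters who know the author are hypothetical values chosen for illustration.

```python
THRESHOLD = 0.8  # the real cutoff is undisclosed; 0.8 is an assumption

def moderate(score, knows_author=False):
    """Decide whether to hide a comment given its toxicity score.

    Mirroring the article, a comment between people who know each
    other gets a little more leeway -- the 0.9 multiplier is an
    illustrative adjustment, not Instagram's actual rule.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if knows_author:
        score *= 0.9
    return "hide" if score > THRESHOLD else "keep"

print(moderate(0.05))                     # keep: clearly benign
print(moderate(0.93))                     # hide: clears the cutoff
print(moderate(0.85, knows_author=True))  # keep: 0.85 * 0.9 falls below 0.8
```

The point of the sketch is that the model itself only produces a number; the product decision lives entirely in where the threshold is set.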

In a blog post on Thursday morning, Instagram co-founder and CEO Kevin Systrom explained how the company is targeting “toxic comments.”

“Powered by machine learning, today’s filters are our latest tools to keep Instagram a safe place,” Systrom said. “Our team has been training our systems for some time to recognize certain types of offensive and spammy comments so you never have to see them.”

If this feels a little too Big Brother-ish for you, though, there’s good news: You can turn off Instagram’s comment filter by going into your app’s settings and flipping the switch on its “hide offensive comments” tab. You’re then free to enjoy the seedier, more depressing version of the app.