“It scares me,” Neal Baer, an MD and showrunner of Netflix’s “Designated Survivor,” said of USC’s AI ratings tool
Can technology and human beings become effective partners in rating Hollywood movies?
And, more importantly, should they?
Researchers at USC’s Viterbi School of Engineering think artificial intelligence has a future place in Hollywood’s ratings process. A new AI tool
is intended to rate a movie’s content at the script stage, allowing screenwriters and producers to adjust the screenplay before the movie is shot in order to achieve the desired rating from the Motion Picture Association’s ratings board.
The AI project focused on the presence of violence, drug abuse and sexual content, areas that traditionally play a role in a film’s rating. The tool, researchers say, has the potential to provide instant feedback to storytellers and decision-makers.
Researchers told TheWrap they are currently testing the technology with various partners to make the AI tool available soon to the general public.
Shrikanth Narayanan, a USC engineering professor on the research team, and lead researcher Victor Martinez, a doctoral candidate in computer science at Viterbi, acknowledged the tool could save filmmakers time and money by flagging certain content at the script stage rather than in the editing room, giving screenwriters the chance to make changes to achieve the desired rating (G, PG, PG-13, R or NC-17).
However, the researchers told TheWrap they hope the technology may have future use beyond ratings prediction as a tool for writers to screen for racial, cultural and gender bias in movie scripts, TV programs and other media.
“We are all interested in unconscious bias and stereotypes,” Martinez said. “Our next goal is to understand if we can see (bias) is portrayed in a systemic fashion.”
The research was conducted with no input from the MPA (formerly the MPAA), and the organization declined to comment on the report, released in November. The association uses a ratings board made up of eight to 13 parents at a given time who view completed films after postproduction to determine the rating.
But after reading a summary of the research and its possible applications, screenwriters and movie critics shared their reservations about involving technology in the creative process at the script stage.
“It scares me,” Neal Baer, an MD and showrunner of the 2016-19 Netflix series “Designated Survivor,” told TheWrap. “This notion that somehow AI is objective is ridiculous. We know AI and the algorithm that it relies on is created by human beings … I just don’t want somebody creating an algorithm that is making decisions about my writing.”
Baer added that numerous organizations, including GLAAD, monitor bias and offer resources to combat stereotyping on the screen. And in 2019, the popular screenwriting tool Final Draft introduced its Inclusivity Tool to help writers address diversity issues in their work.
Narayanan does not see artificial intelligence eliminating humans from the creative process. “It’s just a tool,” he said. “Just like we use tools to write, tools to correct spelling. It’s just another way of thinking about a complex process, a creative process where many people are involved.”
Martinez added the tool provides information akin to story notes from a network or studio executive.
The tool is based on language, with the AI model finding a relation between what characters say and expert content ratings from Common Sense Media, a nonprofit resource that provides entertainment and technology recommendations for families and schools.
Researchers used 992 movie scripts that included violence, substance abuse and sexual content to determine the patterns between written language and what the researchers call “risk behavior content” ratings from Common Sense Media.
“For example, the model picks up on the way in which Mr. Bond orders his usual alcoholic drink, when there are characters plotting to kill someone, or when they curse in a sexually explicit manner,” said a USC statement on the project. “The model does this by attending to both what is being said (semantics) and independently, how something is being said (sentiment).” Once the model has been trained in the process, it can apply the analysis to a script in less than two minutes. Classic films analyzed as part of the research included “From Russia With Love,” “The Exorcist” and “Pulp Fiction.”
At this point, the AI tool gives three ratings to a film. In the case of the three movies mentioned, here’s how the AI tool’s ratings looked compared to the MPA’s:
“From Russia With Love” (MPA: PG); AI: violence HIGH, sexual MED, substances LOW
“The Exorcist” (MPA: R); AI: violence HIGH, sexual MED, substances MED
“Pulp Fiction” (MPA: R); AI: violence HIGH, sexual HIGH, substances HIGH
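For readers curious what a script-level content scorer looks like in principle, here is a minimal illustrative sketch. It is emphatically not the USC model, which is a trained machine-learning system attending to dialogue semantics and sentiment; this toy version just counts hypothetical keywords per “risk behavior” category and buckets the totals into LOW/MED/HIGH, the same output format the researchers report. The keyword lists and thresholds are invented for illustration.

```python
import re

# Hypothetical keyword lists for each "risk behavior" category.
# These are illustrative stand-ins, not the features the USC model learns.
KEYWORDS = {
    "violence": {"kill", "gun", "blood", "fight", "shoot"},
    "sexual": {"kiss", "seduce", "affair"},
    "substances": {"drink", "martini", "drugs", "smoke"},
}

def score_script(lines):
    """Count keyword hits per category and bucket totals into LOW/MED/HIGH."""
    counts = {cat: 0 for cat in KEYWORDS}
    for line in lines:
        # Lowercase and strip punctuation so "martini." matches "martini".
        words = set(re.findall(r"[a-z]+", line.lower()))
        for cat, vocab in KEYWORDS.items():
            counts[cat] += len(words & vocab)
    def bucket(n):
        # Arbitrary illustrative thresholds.
        return "HIGH" if n >= 4 else "MED" if n >= 2 else "LOW"
    return {cat: bucket(n) for cat, n in counts.items()}

script = [
    "Shaken, not stirred, Bond says, ordering his martini.",
    "They plot to kill the agent before dawn.",
    "A gun fires; blood on the floor after the fight.",
]
print(score_script(script))
# → {'violence': 'HIGH', 'sexual': 'LOW', 'substances': 'LOW'}
```

A real system like the one described in the research would replace the keyword counts with a model trained on labeled scripts (here, the 992 scripts rated by Common Sense Media), capturing context such as who is speaking and how, rather than bare word matches.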
Despite researchers’ insistence that the tool is not intended to replace the human factor in the ratings process, film critics and creatives are wary of the analysis it may provide. As a TV producer, Baer would prefer to rely on recommendations from a traditional broadcast standards and practices department rather than an AI tool. “I just think it’s frankly quite silly, like, leave us alone,” he said with a laugh. “Let me talk to a person and negotiate with them. I’ve found when I (worked at) NBCUniversal, CBS and Netflix that the person at Standards & Practices that I was negotiating with was always quite supportive.”
In 2011, The Weinstein Company, producers of Best Picture Oscar winner “The King’s Speech,” sparked cries of censorship by agreeing to create an alternative version of the film with fewer F-words to earn a PG-13 rating from the MPA. An informal 2015 study conducted by TheWrap found that R-rated films returned about $42 million at the box office on average in 2014, while the average hauls of PG ($82 million) and PG-13 movies ($79 million) were roughly double that amount.
Despite such statistics, Baer said a writer or producer is unlikely to have a financial incentive to rewrite a script for a PG or PG-13 rating in today’s movie landscape. “I don’t know any writers that want to get a PG versus an R, so that they can sell more tickets,” he said. “This is not the world of ‘Midnight Cowboy,’ which started as an X-rated film. It would probably be PG-13 right now.”
The MPA originally gave 1969’s “Midnight Cowboy” an R rating, then changed it to an X for its depictions of prostitution and homosexuality. The movie went on to become the only X-rated film to win the Academy Award for Best Picture. The ratings organization later changed the rating back to an R.
Former studio story analyst and screenwriter Mitchell Levin sees little use for the AI tool. “I don’t really get how this thing would benefit a writer,” he wrote in an email. “I mean, it’s pretty clear what a movie will be rated when you write it — e.g., more than one F-bomb and it’s an R.”
Gil Robertson, president of the African American Critics Association, said he saw the value of the technological scriptwriting assessment tool as long as it remains an adjunct to human creativity and judgment. “We are getting into a space as a society where there is a lot of policing going on,” he said. “I don’t want to sound like I’m haunted by Big Brother, but there is some element of that in this.”
Chicago Tribune movie critic Michael Phillips offered a similar view, saying he finds the tool “sort of ‘Clockwork Orange’-y.”
“I find that I’m intrigued by the study, and I’m all for any analysis that tells us more about what patterns seem to emerge in screenplays that we may not have considered,” Phillips said. “But I don’t love the idea of taking the human element out of deciding what scripts are going to get made.”