The Sports Illustrated article about volleyball was weird.
“Volleyball is one of the most popular sports in the world, and for good reason,” it began. “It’s fast-paced, has a high skill ceiling, and is generally an exciting sport to both play and watch.” It then pivoted awkwardly to: “Volleyball can be a little tricky to get into, especially without an actual ball to practice with.”
The article, deleted after its authorship was challenged as being AI-generated, is a recent example of how publications like Sports Illustrated and USA Today have begun experimenting with the technology, part of an ongoing shift at news publications driven by the breakneck speed at which AI is being developed, implemented and embraced.
Pressure from Wall Street, led by Big Tech, is forcing media companies to take a position on the use of AI in their newsrooms. Already some of the biggest media outlets have softened their early opposition to using machines to generate news and have partnered with AI companies to avoid falling behind their competitors — or to prevent the tech companies from using their content without permission.
The choice news organizations are struggling with is whether “to make money or to inform the public,” Safiya Noble, a professor of Information Studies at UCLA, told TheWrap. “If shareholders maximizing profit every quarter is the only imperative that is valued, then we can say that journalism is going to turn into entertainment. Or [it’s] a race to the bottom.”
That has implications for the employment of journalists as newsrooms go through more layoffs, which this year have reached 19,000 job cuts as of October, compared to 3,000 in the same period in 2022, according to a report from Challenger, Gray & Christmas, an outplacement services firm.
Legacy media organizations are opting for partnerships with AI companies to prevent missing out on technological advancements for the industry. In July, the Associated Press said it was licensing part of its massive archive of news stories to OpenAI under a deal intended to explore generative AI’s usage in news. As part of the partnership, the AP also gains access to OpenAI’s technology and product expertise.
The financial terms of the deal have not been disclosed and the AP has yet to fully explain where the technology will be used in its reporting. In a statement, the news organization noted it had used AI technology for nearly a decade to automate certain “rote tasks and free up journalists to do more meaningful reporting” and that the AP does not currently use generative AI in its news stories.
“News organizations must have a seat at the table… so that newsrooms large and small can leverage this technology to benefit journalism,” said the AP’s senior vice president and chief revenue officer Kristin Heitmann.
News Corp., which owns and operates the New York Post and The Wall Street Journal, is one of a growing number of news organizations that appear to be shifting their positions on the potential use of AI in newsrooms.
In May, News Corp CEO Robert Thomson warned the rise of AI could “fatally undermine” journalism. Thomson reiterated his concerns in September. “The danger is, it’s rubbish in, rubbish out, rubbish all about,” he said. “Instead of the insight that AI can potentially bring, what it will evolve into, essentially, is maggot-ridden mind mold.”
Then earlier this month, Thomson appeared to change his tune during News Corp’s fiscal first-quarter 2024 earnings call, announcing “advanced discussions with a range of digital companies that we anticipate will bring significant revenue in return for the use of our unmatched content sets.”
The News Corp. CEO added that “generative AI engines are only as sophisticated as their inputs and need constant replenishing to remain relevant. And we are proud to partner with responsible purveyors of AI products and their prescient leaders.”
Even respected new media startups are starting to reflect a shift in posture on AI. When it formed in 2017, Axios offered a “Bill of Rights” that promised: “Every item will be written or produced by a real identity. There will be NO AI-written stories. NO bots. NO fake accounts.” But more recently, Axios appended a note to the bottom of that document: “We are currently evaluating generative AI tools to explore how they might augment our journalism. We presently do not use any AI in content creation. We will update this manifesto if this changes.” (Axios did not respond to a request for comment.)
An existential crisis in newsrooms
In the year since OpenAI debuted ChatGPT, its powerful chatbot capable of producing high-level prose in seconds, journalists have descended into an existential crisis, fearing that media organizations would soon choose to replace them with AI-generated content.
ChatGPT’s success spurred a flurry of competitors and put additional pressure on media companies to find cost savings amid declining advertising revenues and news consumption that is rapidly shifting from digital sites to social media platforms like TikTok.
Early experiments with AI-generated content are already creating friction in newsrooms.
In the latest example, Sports Illustrated repeatedly published articles with bylines attributed to entirely AI-generated author profiles without disclosing that the content was created by the technology. SI would even occasionally replace the AI-generated author profiles with different avatars. Each time an author was replaced, the articles supposedly written by them would be reattributed to the new avatar with no explanation.
Staffers responded with disgust. In a statement signed by “The Humans of SI Union,” they said, “If true, these [AI] practices violate everything we believe in about journalism. We deplore being associated with something so disrespectful to our readers.”
Some took to social media to express their contempt with management: “SI used my name and face in an email to sell subscriptions last week. To think that they were simply *making up names and faces* while making this kind of sales pitch in the name of ‘independent journalism’ from ‘the most trusted name in sports’ is beyond infuriating,” SI staff writer Emma Baccellieri wrote on Bluesky.
SI publisher The Arena Group laid responsibility for the AI-generated content — and authors sprouting up across the website — on a third-party company, AdVon Commerce.
“A number of AdVon’s e-commerce articles ran on certain Arena websites,” The Arena Group said in a statement. “We continually monitor our partners and were in the midst of a review when these allegations were raised. AdVon has assured us that all of the articles in question were written and edited by humans.”
On Wednesday, in the wake of the scandal, The Arena Group fired two top executives: Andrew Kraft, its chief operating officer, and Rob Barrett, its president of media. The company said in a statement that the dismissals were unrelated to the AI controversy and were instead part of a move to “improve efficiency and revenue.”
This isn’t the first time AdVon has been blamed for suspicious AI-generated content showing up on digital sites without editorial knowledge or approval.
In October, Gannett-owned USA Today was accused of publishing AI-generated shopping articles through a deal with AdVon to drive paid search-engine traffic. Gannett denied the content was created with assistance from AI but conceded that the articles “did not meet our affiliate standards,” according to the Washington Post. Yet AdVon’s LinkedIn page explicitly advertises “AI solutions for E Commerce.”
When reached for comment, a spokesperson for Gannett denied any involvement with generative technology saying: “Reviewed content was created by third-party freelancers hired by a marketing agency partner, not AI. The pages were deployed without the accurate affiliate disclaimers and did not meet our editorial standards. Updates have since been published.”
The Sports Illustrated situation comes less than six months after G/O Media acknowledged it was experimenting with AI-generated content for some of its sites including Deadspin, The Onion, The A.V. Club and The Root. Editorial director Merrill Brown, who has since been fired for unknown reasons, assured employees in June that the AI experiment would produce only a few stories “basically built around lists and data,” and that “these features aren’t replacing work currently done by writers and editors.”
The A.V. Club published a list of upcoming 2023 movie releases, with the editorial note: “This article is based on data from IMDb. Text was compiled by an AI engine that was then reviewed and edited by the editorial staff.”
News site Futurism, which first reported SI’s AI-generated content, noted that the publication had not disclosed it as such. One former G/O staffer who spoke to TheWrap criticized that oversight. AI-generated content “should absolutely not be deployed unilaterally by media company management,” the former staffer said. “It’s a perversion of why they got into this business, allegedly.”
“For Sports Illustrated and USA Today, it is not their intent to improve journalism,” the former staffer said, noting that not even the top editors at the publications were informed of the content going up on the sites ahead of time… “[AI] is transparently there to get cheap content up on their sites to replace human labor.”
AI companies lobbying the media industry
For their part, AI companies are also lobbying news organizations both as potential clients and as sources of data to train their chatbots. In July, OpenAI said it would give $5 million to venture philanthropy firm American Journalism Project to determine how artificial intelligence can best be used to support local journalism.
“We proudly support the American Journalism Project’s mission to strengthen our democracy by rebuilding the country’s local news sector,” OpenAI CEO Sam Altman said in the announcement. “This collaboration underscores our mission and belief that AI should benefit everyone and be used as a tool to enhance work.”
Media watchdog groups, including News Media Alliance, which represents nearly 2,000 U.S. outlets, have expressed concern that AI companies are using news organizations’ content without permission.
“The research and analysis we’ve conducted shows that AI companies and developers are not only engaging in unauthorized copying of our members’ content to train their products, but they are using it pervasively and to a greater extent than other sources,” Danielle Coffey, chief executive of the News Media Alliance, said in November when the group published its research.
For Noble, the existential threat for journalism is not that far in the future. While trained journalists with professional ethics and standards “have a broader moral commitment to the public” than “cheap, low cost text generation software,” she believes journalism may eventually disappear — or become “extremely narrow” in scope, where only the most well-resourced magazines and newspapers pay to hire journalists and send them to investigate deeply and report back to the public.
“And then most people will just be subject to propaganda machines or complete falsehoods that get generated by text machines,” she said. “And that seems incredibly dangerous.”
Editor’s Note: This post has been updated with news about the firing of two executives at The Arena Group.