The Federal Trade Commission is investigating OpenAI, the company behind ChatGPT, to determine whether the popular chatbot harmed consumers through its vast data-collection practices and its creation and circulation of false information.
The FTC sent OpenAI a 20-page document demanding records and details of how the company keeps its language models, the technological framework underpinning ChatGPT, in check.
In its Civil Investigative Demand, one of the FTC’s requests for information is that OpenAI “describe in detail the policies and procedures that the company follows in order to assess risk and safety before the company releases a new Large Language Model Product.”
The CID also asks the company to detail what steps it takes to prevent its language models from gathering personal information, or information that could become personally identifiable. The CID inquires about the role of human reviewers in OpenAI’s processes and seeks details on what steps the company has taken to prevent ChatGPT from producing “false, misleading or disparaging” statements.
An FTC representative declined TheWrap’s request for comment. An OpenAI representative pointed to recent tweets by the company’s CEO, Sam Altman. In the tweets, Altman states that his company will work with the FTC, is transparent about the limits of its technology and designs its systems so they do not target private individuals.
The Washington Post first reported the news of the FTC investigation.
The FTC isn’t the only regulator scrutinizing OpenAI’s practices and products; EU regulators are also on the company’s radar. Altman hinted that if complying with EU rules became too burdensome, his company would have to consider pulling out of Europe. He quickly walked back those comments.
OpenAI leadership has made it clear they’d rather not be regulated for products below a “significant capability threshold,” though that qualifier remains open to interpretation. It’s unclear what may come of the FTC’s investigation, but given the mounting legal concerns surrounding AI companies’ activities, serious attempts at regulatory measures in the near future seem likely.
Just recently, Sarah Silverman and two novelists filed a lawsuit against OpenAI for copyright infringement stemming from its AI models scraping their books for training data. Google also faces a lawsuit over using vast quantities of data to fuel its AI endeavors. In response to the suit, Google confirmed it uses data for these purposes but denied that it has done anything illegal in the process.