Google, Microsoft, OpenAI and Anthropic have jointly established the Frontier Model Forum to develop “a public library of solutions to support industry best practices and standards” for AI ecosystems.
The forum seeks to advance AI safety research, minimize risks and identify industry best practices for keeping AI model development safe and responsible.
The forum also aims to work with policymakers and academics to share knowledge, and plans to support the development of applications that address societal challenges (examples cited include climate change mitigation and early cancer detection technologies).
“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” said Brad Smith, president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
The forum will be a means for organizations to discuss and take action on AI responsibility and safety. Organizations beyond the initial four are eligible to join, so long as they develop relevant models, demonstrate a commitment to safety and are willing to participate in joint forum initiatives.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation,” said Kent Walker, president of global affairs at Alphabet and Google. “We’re all going to need to work together to make sure AI benefits everyone.”
You can read the full press release here. Anthropic did not immediately respond to TheWrap’s request for comment. A Google representative pointed to the aforementioned release and declined to comment further. An OpenAI representative linked to the company’s blog post on the matter. A Microsoft representative declined to comment further.
The forum is being established as OpenAI faces a lawsuit accusing it of widespread copyright infringement, as well as an FTC investigation examining its products’ capacity for consumer harm. At the same time, Google is being sued for allegedly using Americans’ data for AI training purposes (Google denies any wrongdoing). And though Microsoft doesn’t have as many high-profile, public-facing headaches as those two companies, its close ties to OpenAI are hard to ignore while the latter is under such intense scrutiny.