Big Tech Tells Biden It Will ‘Self Police’ to Protect the Public from AI Threats

Amazon, Google and Microsoft were among the companies that met with the president Friday to address “insidious” dangers of artificial intelligence

President Joe Biden (Photo Credit: Getty Collection)

Leaders of seven top companies developing artificial intelligence software agreed in a meeting with President Joe Biden and Vice President Kamala Harris on Friday to self-police the industry to curb the potential dangers of AI.

While the agreement between the companies and the Biden administration is not legally binding, the White House said the companies were cooperating with the president, who is preparing an executive order and brokering bipartisan legislation in Congress to rein in the use of AI to steal information, images, ideas and artwork over the internet.

The companies meeting with Biden and Harris on Friday included OpenAI, Google, Meta, Amazon, Microsoft, Inflection and Anthropic, whose representatives gave verbal agreements to protect the public from theft and other threats from AI.

In a statement Friday, the Biden administration said these companies agreed to “internal and external security testing of their AI systems” before releasing these systems to customers and the internet.

“This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects,” the White House statement said.

In October 2022, the White House also released the “Blueprint for an AI Bill of Rights,” directing federal agencies to “root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination.”

The president is also asking these companies for transparency, to let the public know when content is AI-generated, and to avoid “the dangers of fraud and deception.”

“The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception,” the White House said.

The White House also requested that these companies guard against “societal risks” from AI software and make it a priority to prevent “harmful bias and discrimination” and invasion of privacy, which the White House claims is happening regularly right now.

“The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them,” the White House said.

Earlier this year, the National Science Foundation announced a commitment of $140 million to establish seven new National AI Research Institutes, bringing the total to 25 institutions in the U.S.