Henry Kissinger Is Scared of ‘Unstable’ Artificial Intelligence

Building ethical machines should be “high national priority,” says former secretary of state


Henry Kissinger must be watching the latest season of “Westworld.”

The former U.S. secretary of state is warning against the threat of “unstable” artificial intelligence in a new essay in The Atlantic — fearing the rapid rise of machines could lead to questions humanity is not ready to tackle.

“What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?” asked Kissinger in the piece.

So far, AI is reported to have reached only a “limited” stage: machines have mastered chess and other complex games, yet those same machines would be useless at Monopoly. Even this, according to Kissinger, shows the intimidating nature of AI. AlphaZero, a computer program designed by DeepMind, Google’s AI wing, “achieved a level of skill that took human beings 1,500 years to attain” just a few hours after learning chess, said Kissinger. Reaching “superintelligence,” Oxford professor Nick Bostrom’s term for machines that surpass humans across a range of domains, could spell disaster. “If AlphaZero was able to achieve this mastery so rapidly, where will AI be in five years?” asked Kissinger.

“AI is inherently unstable,” added Kissinger. “AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. Through this process, artificial intelligence develops an ability previously thought to be reserved for human beings.”

For Kissinger, there are three areas in particular that should worry humans: “unintended results,” where an AI’s goals diverge from its creators’ intentions (the classic example: an AI tasked with removing email spam concludes the best solution is erasing humans); building an ethical AI; and whether AI will be able to explain its objectives to its creators.

Kissinger isn’t the only one worried about these issues. Elon Musk has called uncontrolled AI mankind’s “biggest existential threat” and “potentially more dangerous than nukes.” And perhaps unaware that developers debate the moral implications of AI every day, Kissinger urges the budding industry to consider the ethical questions raised in his piece. He concludes his warning by pushing for a government-led initiative to steer AI development.

“The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”

Read the full essay in The Atlantic here.
