Elon Musk and Mark Zuckerberg are on opposite ends of the artificial intelligence spectrum.
This much is obvious, after Musk shot back on Twitter on Tuesday, saying that Zuckerberg’s “understanding of the subject is limited.” This came after the Facebook CEO said AI pessimists are misguided.
“I think that people who are naysayers and kind of try to drum up these doomsday scenarios — I just don’t understand it,” said Zuckerberg on Facebook Live, responding to a question on AI that mentioned Musk. “I think it’s really negative and in some ways I actually think it is pretty irresponsible.”
Musk’s concerns boil down to AI achieving “superintelligence” — a state in which machines have advanced beyond human-level intelligence and may pursue objectives not in line with those of their creators. The term was coined by Oxford Professor Nick Bostrom in his 2014 book of the same name, which warns that such AI could replace humans as the dominant species on Earth. Musk has called this possibility mankind’s “biggest existential threat” and “potentially more dangerous than nukes.” That’s, uh, scary.
Others invested in the future of AI — like Zuckerberg — aren’t as worried, though.
“Overall [Musk is] definitely too negative,” said Sameer Singh, Associate Professor of Computer Science at the University of California, Irvine, in an interview with TheWrap. “I’m much more positive about it and so are a lot of AI researchers — otherwise we wouldn’t be working in this area if we thought it was going to spell doom for the society.”
Singh, whose research focuses on machine learning, believes the human-AI dynamic will remain a master-and-servant relationship for years to come.
“We’re quite far from any kind of sentience or any kind of consciousness for these AI beings — so far that it’s not worth thinking about at this time,” said Singh. “It’s probably not going to happen in our lifetime.”
Still, Musk has been working to put safeguards in place against AI going off the rails. In 2015 he co-founded OpenAI, a research company aiming to build “safe AI” through a collaborative effort rather than an arms race between individual companies. The thinking: the only thing worse than hostile AI would be a single country or company holding a monopoly on the technology.
Jack Clarke, OpenAI’s Strategy and Communications Director, told TheWrap the worst-case scenario of superintelligent robots developing consciousness and becoming our overlords is unlikely.
“I think that’s kind of a) very far off and b) we don’t necessarily know that [it’s possible]. There may be constraints we’re unaware of,” said Clarke. “It’s not logical this is where we end up.”
Clarke, along with Singh, is bullish on AI’s short-term prospects. Both pointed to AI advancements in healthcare (think robot assistants in the operating room) and driving as areas to keep an eye on.
Zuckerberg echoed those sentiments in his Facebook Live broadcast.
“In the next five to 10 years, AI is going to deliver so many improvements to our lives,” said Zuckerberg. “If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents, and you’re arguing against being able to better diagnose people when they are sick.”
At the same time, Clarke believes the best defense against an AI backlash is continuing to foster a collaborative research environment.
“In terms of unsafe AI, how it might get built is if you have people who are reckless in how they build it and develop it as a science, then it can have qualities that can be dangerous,” Clarke told TheWrap. “And that’s one reason we’re set up — to help create safe AI and to also help promulgate methods and research that are also safe as well.”
Taking a step back, there appears to be some overlap: The research Musk is funding aims to avert an “AI apocalypse” while pushing machine learning in the same direction Zuckerberg sees it headed. Maybe the Silicon Valley heavyweights can agree on that.