Recently, I reached out to 17 thought leaders — AI experts, computer engineers, roboticists, physicists, and social scientists — with a single question: “How worried should we be about artificial intelligence?”
There was no consensus. The experts disagreed broadly, both about the appropriate level of concern and about the very nature of the problem. Some consider AI an urgent danger; many more believe the fears are exaggerated or misplaced.
Here is what they told me.
Don’t freak out
I am infinitely excited about artificial intelligence and not worried at all. Not in the slightest. AI will free us humans from highly repetitive, mindless office work, and give us much more time to be truly creative. I can’t wait. — Sebastian Thrun, computer science professor, Stanford University
We should worry a lot about climate change, nuclear weapons, antibiotic-resistant pathogens, and reactionary and neo-fascist political movements. We should worry some about the displacement of workers in an automating economy. We should not worry about artificial intelligence enslaving us. — Steven Pinker, psychology professor, Harvard University
AI offers the potential for tremendous societal benefits. It will reshape medicine, transportation, and nearly every other aspect of our lives. Any technology that powerful calls for some care in setting policies for how best to make use of it, and how to constrain it. It would be foolish to ignore the dangers of AI entirely, but when it comes to technology, a “threat-first” mindset is rarely the right approach. — Margaret Martonosi, computer science professor, Princeton University
Worrying about evil killer AI today is like worrying about overpopulation on the planet Mars. Perhaps it’ll be a problem someday, but we haven’t even landed on the planet yet. This hype has been unnecessarily distracting everyone from the much bigger problem AI creates, which is job displacement. — Andrew Ng, VP and chief scientist of Baidu; co-chair and co-founder of Coursera; adjunct professor, Stanford University
But take fears about AI seriously
The transition to machine superintelligence is a very grave matter, and we should take seriously the possibility that things could go radically wrong. This should motivate some top talent in mathematics and computer science to research the problems of AI safety and AI control. — Nick Bostrom, director of the Future of Humanity Institute, Oxford University
If [AI] contributed to the capacity of Russian hacking, to the campaigns for Brexit or the US presidential election, or to campaigns’ ability to manipulate voters into not bothering to vote based on their social media profiles, or if it’s part of the socio-technological forces that have driven increases in wealth inequality and political polarization like those of the late 19th and early 20th centuries, which brought us two world wars and a Great Depression, then we should be very afraid.
Which is not to say we should panic, but rather that we should all be working very, very hard to navigate and govern our way out of these hazards. Hopefully AI is also helping make us smart enough to do that. — Joanna Bryson, computer science professor, University of Bath; affiliate at Princeton’s Center for Information Technology Policy
One obvious risk is that we fail to specify objectives correctly, resulting in behavior that is undesirable and has irreversible impact on a global scale. I think we will probably figure out decent solutions for this “accidental value misalignment” problem, although it may require some rigid enforcement.
My current guesses for the most likely failure modes are twofold: first, the gradual enfeeblement of human society as more knowledge and know-how resides in and is transmitted through machines, and fewer humans are motivated to learn the hard stuff in the absence of real need; and second, the loss of control over intelligent malware and/or the deliberate misuse of unsafe AI for nefarious ends. — Stuart Russell, computer science professor, UC Berkeley
Read the source article at Vox.com.