Bruce Schneier is scary smart. The things he talks about – AI weaponization, remote hacking of commercial airliners and self-driving cars, malicious alteration of medical records – are scarier.
The author of 13 books – including the cleverly titled Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World – and hundreds of articles and academic papers, Schneier is a public intellectual whom The Economist called a “security guru.” He’s testified before Congress; he’s a Harvard fellow and lecturer, a member of the boards of two technology organizations, and a successful entrepreneur. You get the idea. Schneier knows his stuff.
Schneier asserts that advances in AI give us plenty to worry about, but not from Elon Musk’s robots-will-kill-us perspective; long before we get to “super-intelligent” AI, machine learning in its current state is already edging toward major dangers.
“I don’t worry about the risks of AIs taking over the world,” Schneier said during a Q&A discussion at the AI World conference in Boston this week. “I worry about risks much sooner, the near-term precursor risks.”
Schneier discussed a range of cybersecurity issues, painting a good-guys-vs.-bad-guys picture that is both alarming and, assuming the good guys stay ahead of the innovation curve, encouraging. His core point: as AI and its associated technologies evolve to protect information assets and networks, so does the opportunity to use AI to attack those same systems.
One of his themes, echoed by the U.S. Department of Defense, is the emerging “AI arms race,” the competition among countries and non-state actors to “leapfrog” each other militarily by adopting AI and machine learning for cybersecurity and weaponry. We wrote about this earlier in the year when Russian President Vladimir Putin declared that “the one who becomes the leader in this sphere will be the ruler of the world.” We’ve also written about “algorithms at war,” DoD’s work in AI-based military systems.
The same day Schneier spoke in Boston, the federal government IT publication MeriTalk published an unsettling story that casts doubt on whether the DoD, or Congress, is “putting enough of its money where its mouth is,” citing a recent Govini report.
“The U.S. military can either lead the coming revolution, or fall victim to it,” declared the report’s author, former Deputy Secretary of Defense Robert Work. “This stark choice will be determined by the degree to which (DoD) recognizes the revolutionary military potential of AI and advanced autonomous systems…, advanced computing, artificial neural networks, computer vision, natural language processing, big data, machine learning, and unmanned systems and robotics….”
Read the source article at Enterprise Tech.