Corporations Face Off with Hackers Around AI Cybersecurity

The mantra of modern technology is to improve and innovate continuously. That makes sense: we are always looking for better ways to get processes, actions and activities done.

Automation and machine learning, for instance, are already used across many industries to streamline basic processes and remove repetition from a worker’s routine. Machines also tend to be more efficient and less resource intensive: a robotic or automated system keeps working at its set performance level, never tiring, growing hungry or getting burnt out.

As we create more innovative solutions as a society, do we also set ourselves up for harder, more damaging falls? With AI, for example, do we open the gates to more dangerous and more frequent attacks?

It’s no secret that the technology at our disposal can be used for both good and ill; it depends on who controls the necessary systems. With AI, who is truly in control? Is it possible that hackers and other unscrupulous parties will take advantage of it to create havoc and trouble for the rest of us?

Does modern AI pose a cybersecurity risk?
A recent study, authored by 25 technical and public policy researchers from Cambridge, Oxford and Yale alongside privacy and military experts, highlights the potential for misuse of AI by rogue states, criminals and other unscrupulous parties. The threats it lists carry digital, physical and political ramifications, depending on how the systems and tools are leveraged, used and structured.

The study focuses specifically on plausible, reality-based developments that could happen over the next five years. Rather than a “what if” scenario, the question becomes one of “when” over the coming decade.

There’s no reason to be alarmed just yet: the paper doesn’t say AI is inherently dangerous or that it will definitely be used to harm modern society, only that a series of risks is evident.

In fact, Miles Brundage, a researcher at Oxford’s Future of Humanity Institute, said: “We all agree there are a lot of positive applications of AI.”

He went on to note that “there was a gap in the literature around the issue of malicious use.” The situation isn’t dire; the paper should instead serve as a warning. If we intend to use AI more widely in the future, which we certainly do, then we need to develop more advanced security and privacy measures to protect organizations, citizens and devices.

Take self-driving vehicles, which are controlled primarily by computer-based AI: hackers could conceivably gain access to a vehicle in motion and take control. It takes no stretch of the imagination to picture them sending vehicles careening off the road, disengaging locks and other features, or doing much worse.

Imagine commercial and military drones being turned into remote-access weapons by shadowy parties and criminals.

These are, of course, worst-case scenarios, and they will only come to pass if administrators and developers fail to build robust security and protections into the foundation of these devices.

Read the source article at InfoSecurity Magazine.