Research: Automation to be turned into devastation by maliciously used AI

With the rapid advancement of artificial intelligence, the risk of being hacked is also rising, as attackers can use these technologies to launch malicious attacks, top researchers warned in a report released on Wednesday.

The report, titled “The Malicious Use of Artificial Intelligence,” cautions against several security threats posed by the misuse of AI.

Researchers from universities such as Oxford, Cambridge, and Yale, and from organizations like the Elon Musk-backed OpenAI, say that hackers could use AI to turn consumer drones and autonomous vehicles into potential weapons.

For instance, self-driving cars could be tricked into misinterpreting a stop sign, causing road accidents, while a swarm of drones controlled by an AI system could be used for surveillance or for launching quick, coordinated attacks.

Intelligent machines can also lower the cost of carrying out cyberattacks by automating certain labor-intensive tasks and more effectively scoping out potential targets.

The report points to the example of “spear phishing,” in which attackers send messages personalized to each target in order to steal sensitive information or money.

“If some of the relevant research and synthesis tasks can be automated, then more actors may be able to engage in spear phishing,” the researchers say.


On the political front, AI could also be used for surveillance, for creating more targeted propaganda, and for spreading misinformation. For instance, advances in image and audio processing could be used to produce “highly realistic videos” of state leaders appearing to make inflammatory comments they never actually made.

The researchers also expect novel attacks that exploit an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data.

Artificial intelligence can already be used to create fake videos by superimposing one person's face onto another person's body.

The researchers say the scenarios highlighted in the report are not definitive predictions of how AI will be maliciously used: some might not be technically possible within the next five years, while others are already occurring in limited form.

The report, published on Wednesday, does not offer specific ways to stop the malicious use of AI.

However, it does make recommendations, including closer collaboration between policymakers and researchers, and it calls for more stakeholders to be involved in tackling the malicious use of AI.

The technology is still nascent, but billions of dollars are being spent on developing artificially intelligent systems. Last year, International Data Corporation predicted that global spending on cognitive and AI systems could reach $57.6 billion by 2021.

AI is expected to be so transformative that Google CEO Sundar Pichai has said it could have a more profound impact than electricity or fire.
