Artificial intelligence raises the risk of hacking attacks, as malicious actors could exploit the technology for misuse. Experts worry that hackers could use AI for criminal activities such as causing driverless car crashes or turning commercial drones into targeted weapons, according to a report titled The Malicious Use of Artificial Intelligence.
The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, warned that rogue states, terrorists and criminals could use artificial intelligence (AI) in the future for crimes, terror attacks and to manipulate public opinion.
Because AI raises the risk of hacking attacks, designers of AI systems will need to do more to reduce future misuse of their technology as cybercrime rapidly increases, the report suggests.
The technology is already used in major applications such as smartphones, autonomous vehicles and digital assistants. The report states that AI could fuel the growth of cybercrime through attacks including automated hacking, finely targeted spam emails using information scraped from social media, exploitation of vulnerabilities in AI systems themselves, and speech synthesis used to impersonate targets.
“This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this,” said Dr Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and one of the co-authors.