Malicious Use Of Artificial Intelligence In InfoSec


Heading into 2018, some of the most prominent voices in information security predicted a ‘machine learning arms race’ in which adversaries and defenders frantically work to gain the edge in machine learning capabilities. Despite advances in machine learning for cyber defense, “adversaries are working just as furiously to implement and innovate around them.” This looming arms race points to a larger narrative: artificial intelligence (AI) and machine learning (ML), as tools of automation in any domain and in the hands of any user, are dual-use in nature and can be used to disrupt the status quo. Like most technologies, AI and ML not only provide greater convenience and security for consumers, but can also be exploited by nefarious actors.

A joint publication released today by researchers from Oxford, Cambridge, and other organizations in academia, civil society, and industry (including Endgame) outlines “the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposes ways to better forecast, prevent, and mitigate these threats.” Unfortunately, there is no easy solution for preventing and mitigating the malicious uses of AI, since the tools are ultimately directed by willful actors. While the report touches on physical, political, and digital security, we’d like to provide some additional context around the potential malicious use of machine learning by attackers in information security, and to highlight takeaways for defenders.

Mohamed Mohideen is the Marketing Manager at Spire Solutions.