
New Report Discusses A.I. As A Tool For Malicious Intent

Following a two-day workshop at Oxford, a new report has been released by 26 authors across 14 institutions, highlighting the threats A.I. could pose as well as possibilities for mitigation and prevention. As might be expected, the ideas and concepts it expresses come from experts across a wide array of fields, from academia to industry. The document centers on several recommendations for responding to the constantly changing threat landscape surrounding A.I. The report suggests that the threat of A.I. as a tool for malicious actors needs to be addressed from the top down, beginning with policymakers, who should collaborate closely with researchers to investigate how to prevent and mitigate malicious uses of A.I. Researchers, for their part, need to identify best practices and take a proactive approach to A.I.-related security, preventing misuse of the technology wherever possible. Finally, everyone involved should bring more stakeholders and domain experts into discussions about the challenges at hand.

Those guidelines are intended to begin addressing threats presented by the malicious use of A.I. across three categories identified by the report: digital, physical, and political security. What each of those covers is fairly self-explanatory, pertaining respectively to threats against data, against physical infrastructure or wellbeing, and against surveillance and privacy, but the threats themselves may be less well understood. That hasn't stopped the report's authors from outlining the ways they think A.I. could negatively impact security. For starters, the use of A.I. will almost certainly expand existing threats in scale, sophistication, and the rate of attack. Those attacks could become much more precisely targeted and able to exploit inherent vulnerabilities in A.I. systems themselves. Moreover, it will become more difficult to attribute attacks to their respective actors. Finally, A.I. may be used to introduce entirely new and unforeseen threats, simply because those attacks would not previously have been possible with only humans perpetrating them.

In the meantime, the report also suggests that researchers and the industry as a whole need to work through four primary issues together. The first pertains to the openness of research, since the companies and organizations involved in A.I. and machine learning may be able to find security solutions more readily if they work together, and the cybersecurity community should play an integral role in that effort. Beyond that, policies need to be built out to promote privacy, protection, and the use of A.I. for public security, in addition to policies for monitoring the use of A.I.-related technologies. Education plays a key role here as well, since a substantial portion of the general populace is likely to make use of those technologies; promoting education and responsibility among users, and ethics and standards among researchers and organizations, is a pivotal aspect of security. At 101 pages, the report goes into far more detail than could be outlined in a single article. Thankfully, it is also free to download, accessible via the button below, for anybody interested in a more in-depth look.

Check Out The Full Report