Over the past few years, the rise of artificial intelligence (AI) has made a huge positive difference in cybersecurity – allowing organisations to analyse large volumes of risk data for faster response times, augment under-resourced cybersecurity teams and mitigate cyber threats more effectively.
Nevertheless, AI is merely a tool, and like any other tool, how it is used depends largely on the motives of the person using it. Looking at today’s increasingly volatile cyber threat landscape, CSA reached out to those in the know in the industry to find out whether it’s possible that AI could (if it hasn’t already) be used by cyber adversaries to improve the effectiveness of the various tools they have at their disposal to wreak digital havoc.
While we’re not talking about Skynet-level AI destruction that could spell the end of humanity as we know it, the damage that cybercrime can inflict on people, businesses and governments is growing, with severe real-world consequences. AI, in the wrong hands, could make the cyber threat situation exponentially worse for all.
Orli Gan, Head of Products for Threat Detection at Check Point Software Technologies, believes that while we have yet to witness attacks directly using AI tools, AI is inevitably going to be a powerful tool that cybercriminals will use. AI tools are readily available today as open source and represent a significant advancement in the ability to make decisions based on collected data.
Boris Cipot, Senior Sales Engineer at Synopsys Software Integrity Group, agreed, noting that there are already indications that AI-driven malware and attacks could well plague the future of the cyber threat landscape.
Boris explained, “There are currently signs of ‘smart’ malware that is capable, to a certain point, of mimicking user and system behaviours. This could lead to potential network penetration or device infection by waiting for the right moment to strike. However, these would still be hacker-driven attacks rather than fully AI-driven methods. This small advancement does mean that development is moving in an AI-focused direction.”
The Threat is Here, Now
In contrast, when we asked Ian Yip, CTO of McAfee, he said that AI-based cyber attacks are very much a threat right now, adding that, “There is already an entire underground economy where hackers can access new services and products to help them fly under the radar, including those powered by AI. With the barrier to entry lower than it has ever been, the threat will inevitably heighten in the near future.”
|Ian Yip, APAC Chief Technical Officer, McAfee|
Ian mentioned that in recent years, we have seen malware using evasion techniques to bypass machine learning engines. For example, in 2017, the Cerber ransomware dropped legitimate files on systems to trick the engine that classifies files, and in 2018, PyLocky ransomware used InnoSetup to package the malware and avoid machine learning detection.
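To see why dropping legitimate files can fool a detection engine, consider a toy illustration. This is a hypothetical sketch, not Cerber’s actual technique or any real vendor’s engine: imagine a naive classifier that scores a system by averaging per-file “suspicion” scores. Planting benign decoy files alongside the payload dilutes the average below the detection threshold.

```python
# Toy illustration (hypothetical): a naive system-level classifier that
# averages per-file "suspicion" scores. Dropping benign decoy files next
# to the malware dilutes the average below the detection threshold.

def average_suspicion(file_scores):
    """Score a system as the mean suspicion of the files observed on it."""
    return sum(file_scores) / len(file_scores)

THRESHOLD = 0.5  # hypothetical detection cut-off

malware_only = [0.9]             # the payload alone is flagged
with_decoys = [0.9] + [0.1] * 8  # same payload plus eight benign decoys

print(average_suspicion(malware_only) > THRESHOLD)  # True  -> detected
print(average_suspicion(with_decoys) > THRESHOLD)   # False -> evades
```

Real engines score files individually and use far richer features, which is why this class of evasion targets the contextual or statistical layers of a detection pipeline rather than the per-file verdict itself.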
Additionally, the proliferation of Internet of Things (IoT) systems and devices – for both consumers and enterprises – over the past years has led to an exponential rise in volumes of data available. “For cybercriminals, this is a lot of new data to take advantage of – and AI assists them in making sense of it all – not just online, but offline as well.”
As to what makes AI such an effective tool for those on the wrong side of the cyber-law, Jeffrey Kok, Vice President of Solution Engineers, Asia Pacific and Japan, CyberArk, commented, “Like automation, AI enables attackers to reduce the time and effort spent on some parts of the attack chain, and focus their attention and capabilities on higher functions in the attack chain. It enables cybercriminals to target victims more proficiently, which allows them to disrupt operations, control computing environments and steal data.”
|Jeffrey Kok, VP of Solution Engineers Asia Pacific and Japan, CyberArk|
He added, “With AI, it is becoming easier for hackers to discover and exploit vulnerable corporate networks, infiltrate IT infrastructures and stay undetected by security teams for extended periods. As we have seen from recent incidents, this advantage gives cybercriminals time to learn about the environments they’ve entered and take control of the entire system.”
Orli Gan of Check Point Software Technologies summed it up nicely when she said that there are two ways that cyber attackers can leverage AI for their own benefit:
1. As a tool to bring about new, sophisticated attacks. This can be in the form of easier ways to find vulnerabilities and exploit methods in existing software packages, or in finding effective ways to escape detection by existing cyber defence tools.
2. As a target. The growing use of AI-based algorithms, replacing human beings in making critical decisions, offers a new opportunity for cyber attackers, whereby they can manipulate those decisions by tampering with the AI algorithm.
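The second point, AI as a target, can be sketched with a minimal adversarial-example attack. This is an illustrative toy, assuming a simple linear classifier with known weights (real models are far larger, but the principle is the same): an attacker nudges each input feature in the direction that lowers the model’s score, flipping its decision without meaningfully changing the input.

```python
# Hypothetical sketch of tampering with an AI-driven decision: an attacker
# who knows (or can estimate) a model's weights perturbs an input just
# enough to flip the verdict -- the core of an adversarial-example attack.
# The linear "model" and numbers here are illustrative only.

def predict(weights, bias, x):
    """Linear classifier: True (e.g. 'malicious') if the score is positive."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return score > 0

def sign(v):
    return 1.0 if v > 0 else -1.0

def adversarial_perturb(weights, x, epsilon):
    """Move each feature against its weight's sign so the score drops,
    pushing a 'malicious' verdict toward 'benign' (an FGSM-style step)."""
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

weights, bias = [0.6, -0.2, 0.4], -0.5
sample = [1.0, 0.5, 0.8]                           # classified malicious
evasive = adversarial_perturb(weights, sample, epsilon=0.4)

print(predict(weights, bias, sample))   # True  -> flagged
print(predict(weights, bias, evasive))  # False -> decision flipped
```

The unsettling part is that each feature moved by only 0.4, yet the model’s decision reversed entirely. When such models replace humans in critical decisions, small, deliberate input tampering becomes a viable attack surface.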
In the coming segments, we will explore in depth the different ways cyber adversaries could put AI to nefarious use.