PART II - Nefarious AI and AI-Based Cyber Attacks: Should We Be Afraid?

AI-Enabled Malware
 

Boris Cipot, Senior Sales Engineer, Synopsys Software Integrity Group

While bypassing artificial intelligence engines is apparently already on the criminal to-do list, criminals can also implement artificial intelligence in their malicious software. In fact, Boris Cipot of Synopsys concurred that one of the most common ways in which cybercriminals could use AI for their own benefit is through malware.

“Imagine what a piece of malware could do to a large enterprise if it could adapt to the systems and situations it is in, mimic user behaviour, and understand when to execute commands or send data undetected by anomaly detection systems. Such malware could go undetected for years. Vast amounts of data being sent could bring about catastrophic damage to the company, while greatly benefiting the cybercriminal.”

Malwarebytes Labs’ compelling report titled “When Artificial Intelligence Goes Awry” discussed several realistic (and frightening) possibilities for AI-enabled malware, including:

  • Worms capable of avoiding detection – By learning from each detection event, the worm can avoid, in subsequent infection attempts, the behaviours or characteristics that gave it away, allowing it to propagate continuously through networks.

  • Self-replicating trojans – Trojans can use AI to improve the methods by which they create new file versions of themselves, a tactic already observed in variants such as Swizzor.

  • A new breed of smarter malware – AI could be combined with contemporary malware techniques to form a new breed of malware that is stealthy and able to avoid detection until the precise moment it recognises a specific target. As a proof of concept, IBM Research developed an AI-powered attack tool called DeepLocker, which produced malware that was very difficult to find and almost impossible to reverse engineer (a minimal sketch of the underlying idea appears below).

The report also stated that other possibilities include malware that changes its behaviour and characteristics based on its environment, deletes itself when it suspects it is being analysed, and deploys malicious activities only on specific systems.
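
To make the target-specific concealment idea concrete, below is a minimal, purely illustrative Python sketch in the spirit of DeepLocker. Everything here is hypothetical: the hostname and username are invented, a plain hash stands in for the neural network IBM’s tool used to recognise its target, and a toy XOR routine stands in for real encryption. The point is the structure: the decryption key is derived from attributes of the victim’s environment, so on any other machine (including an analyst’s sandbox) the payload never decrypts.

```python
import getpass
import hashlib
import socket

def derive_key(hostname: str, username: str) -> bytes:
    # A plain hash stands in for DeepLocker's neural network: the key
    # exists only as a function of the target's attributes, never as a
    # constant an analyst could lift out of the binary.
    return hashlib.sha256(f"{hostname}|{username}".encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" to keep the sketch dependency-free; a real
    # implementation would use an authenticated cipher such as AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# -- Packing (attacker side, target attributes known in advance) ---------
PLAINTEXT = b"malicious_payload_goes_here"        # placeholder payload
target_key = derive_key("FINANCE-PC-07", "cfo")   # hypothetical target
blob = xor_cipher(PLAINTEXT, target_key)
expected = hashlib.sha256(PLAINTEXT).digest()     # integrity check value

# -- Unpacking (runs on whatever machine the sample lands on) ------------
local_key = derive_key(socket.gethostname(), getpass.getuser())
candidate = xor_cipher(blob, local_key)
if hashlib.sha256(candidate).digest() == expected:
    print("Environment matches the intended target: payload unlocked.")
else:
    print("Wrong environment: payload stays encrypted noise.")
```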
 
Artificially Intelligent Social Engineering
One interesting point from the Malwarebytes report is that “Cybercriminals are opportunistic. If they don’t need to develop sophisticated attacks to ensnare their victims, they will not. Instead, they often rely on tried and true methods, such as sending spam emails with malicious attachments, which require little technical prowess but are nonetheless effective.”

This is why highly effective (but relatively low-tech) social engineering methods such as fraud, spoofing and phishing are so popular among cybercriminals. With AI, adversaries will be able to greatly improve their social engineering and reconnaissance, scouring for information at greater scale and speed.

Would-be adversaries could gather as much information as possible about an organisation and the people within it from the internet and social media, looking for exploitable weak points – a form of malicious, data-driven intelligence gathering, if you will. This, in turn, allows them to produce more sophisticated, personalised and targeted threats, more convincing phishing attacks, and more believable fake news and clickbait.

There are already AI-driven methods of creating fake videos of real people and mimicking a person’s voice. Such technology could make phishing and business email compromise (BEC) attacks far more persuasive: imagine an employee receiving what sounds like a direct call from the company CEO asking for an urgent bank transfer or sensitive company information.

The report states that impersonation attacks are already on the rise; about two-thirds of businesses saw an increase in impersonation attacks in the last 12 months, and of those attacked, 73% suffered a direct loss.

Beyond threats to digital security, such as data or credential theft, the proliferation of IoT and smart devices means the threat can cross into the real world, with computer-generated attacks deployed against systems critical to daily life. An attack on any of these systems – from autonomous vehicles and smart city solutions to critical utilities and healthcare infrastructure – can put human lives at risk.

It can even help traditional forms of criminal activity “go digital”, as Ian Yip from McAfee explained, “Machine learning can aid a burglar in identifying patterns and behaviour from data of households’ energy consumption, which could potentially inform them of an optimal time to conduct a burglary. A time window when power usage is lowest, for instance, hints at the vacancy of a residence.”
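
To illustrate how little sophistication Yip’s scenario requires, here is a minimal Python sketch using invented hourly smart-meter readings. Even naive averaging, well short of actual machine learning, surfaces a likely vacancy window; a real adversary could apply clustering or classification to the same data for more reliable patterns.

```python
from statistics import mean

# Hypothetical week of hourly smart-meter readings (kWh): 7 days x 24 hours.
# In Yip's scenario these would come from leaked or intercepted meter data.
readings = [
    [0.2 if 9 <= h <= 17 else 0.9 for h in range(24)]  # weekdays: empty 9-17
    for _ in range(5)
] + [
    [0.8] * 24 for _ in range(2)                       # weekend: someone home
]

# Average consumption for each hour of the day across the week.
hourly_avg = [mean(day[h] for day in readings) for h in range(24)]

# The hour with the lowest average draw hints at when the home is empty.
quietest_hour = min(range(24), key=lambda h: hourly_avg[h])
print(f"Lowest average usage at {quietest_hour:02d}:00 "
      f"({hourly_avg[quietest_hour]:.2f} kWh) - likely vacancy window")
```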

He added that criminal cyber gangs, state-sponsored attackers and ideological hackers could use AI to better scale and expand their attacks, and with the increased connectivity of systems and devices, perpetrators have an even bigger pool of data to potentially exploit.
 
Training Malicious ML Algorithms
When it comes to training AI and making it as accurate as possible, the most vital ingredient is quality data, and lots of it. Orli Gan of Check Point Software Technologies explained that the data fed into the algorithms would differ depending on what the attacker is trying to train it for.

She continued, “If, for example, looking to find exploitable vulnerabilities in code, then data exists in the form of software packages and previously found vulnerabilities. If looking to figure out ways to evade detection, data can be collected on the go, when the attacker auto-generates variations of his attack and then runs them through cyber defence systems to learn which variations manage to fool these tools. By doing so, over time his tools are likely to develop expertise in bypassing detection.”
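
A small Python sketch can illustrate the feedback loop Gan describes. The detector below is a toy stand-in (it does not reflect any real vendor’s engine), and the mutation is deliberately crude: the attacker queries the defence as a black-box oracle, keeps the variants that slip through, and over many rounds learns which features trigger detection.

```python
import random

def detector(sample: bytes) -> bool:
    # Toy signature engine standing in for a defender's black-box tool:
    # flags anything containing this byte string.
    return b"EVIL" in sample

def mutate(sample: bytes) -> bytes:
    # Toy mutation: flip one random byte. Real tooling would apply
    # semantics-preserving transforms (packing, padding, reordering).
    i = random.randrange(len(sample))
    return sample[:i] + bytes([sample[i] ^ 0xFF]) + sample[i + 1:]

sample = b"HEADER EVIL PAYLOAD"
evasive = []
for _ in range(1000):
    variant = mutate(sample)
    if not detector(variant):      # feedback from the black-box oracle
        evasive.append(variant)    # keep the variants that evade detection

print(f"{len(evasive)} of 1000 random variants evaded the toy detector")
```
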
But the question is, how readily available are these tools, and do the criminals have the necessary data to build good “bad AI”?

What’s truly frightening is that there is now an entire economy of advanced malicious tools being brokered on the dark web – and all one really needs is access to these sites, which can be learned about on many open forums and even on standard sites that many of us use daily. Malware and identity theft kits, for instance, are easy to find and inexpensive to buy on dark web exchanges; Phishing-as-a-Service subscriptions, for example, go for as little as USD 4.23 a month.
 

Vicky Ray, Principal Researcher, Unit 42 Asia-Pacific at Palo Alto Networks

To add to that, most of the tools for developing ML and AI solutions are readily available as open-source software, and anyone with the right knowledge and skillset can build AI-based solutions. However, according to Vicky Ray, Principal Researcher, Unit 42, Palo Alto Networks, “With the high volume of data sets required for the learning models, this may not be easily achieved by threat actors at present.”

In that sense, perhaps those in the cybersecurity industry, with access to global threat intelligence and millions upon millions of user, endpoint and live system data points, are at an advantage: they have better data and training sets with which to feed their machine learning algorithms, and the bad guys will have a harder time building models that are as advanced.

Nevertheless, as the Malwarebytes report points out, it is only a matter of time: it takes just one smart cybercriminal to successfully develop and deploy malicious AI in an attack for the method to catch on, especially if AI-enabled cyberattacks open up a new avenue for profit.

 