Kaspersky's Cybersecurity Weekend Sparks Intense Debate on the Intersection of AI and Cybersecurity

Recently, Kaspersky hosted its annual cybersecurity weekend event at the Grand Hyatt Hotel in Kuala Lumpur. The event was centred around the evolving challenges of modern cybercrime, with a particular focus on the transformative role of Artificial Intelligence (AI) in cybersecurity and the associated risks and opportunities. The event boasted presentations from key figures within Kaspersky and included a panel discussion featuring prominent figures from both local and international sectors of the cybersecurity industry.

The first speaker, Mohamad Amin Hasbini, Head of Research Centre at Kaspersky (Middle East), discussed how AI is now accessible to everyone, including cybercriminals. According to Amin Hasbini, by utilising the generative AI advanced Language Learning Model (LLM), cybercriminals are now capable of generating convincing phishing emails in several different languages, enabling them to launch mass phishing email attacks more frequently and widely.

Amin Hasbini then elucidated that it has become increasingly difficult for cybersecurity teams to identify phishing emails without AI assistance. To remediate the situation, cybersecurity teams are now adopting AI content detection to combat phishing email attacks effectively. The AI content detection will analyse two major characteristics of an email to determine whether it is a phishing email or not:

  • Perplexity:
    The system assesses the perplexity of the email's language. Low perplexity suggests that the email may have been authored by an AI, while high perplexity indicates human composition.

  • Burstiness:
    Additionally, the system evaluates the variability in sentence length and structure. Human-authored emails typically exhibit greater burstiness, characterised by fluctuations in style and structure, whereas AI-generated emails tend to maintain a more consistent pattern.

Example of phishing emails imitating the Dubai Police Department and Emirates Post Group, shared by Mohamed Amin Hasbini during the presentation.

Concluding his presentation, Amin Hasbini shared statistics on phishing emails from the previous year. According to Amin Hasbini, AI-generated phishing emails experienced a surge in frequency during the last few months of 2023. This increase in frequency can be attributed to the festive season sales, such as Cyber Monday, Black Friday and Christmas/New Year’s Sales that were happening then—times when consumers are most vulnerable and susceptible to any product sales-related emails. Cybercriminals took advantage of this fact to generate phishing emails disguised as product sales or promotion emails to entice excited consumers into sharing their financial credentials.

The image is courtesy of Mohamed Amin Hasbini, Head of Research Centre at Kaspersky (Middle East).

The second speaker for the morning session was Vladislav Tushkanov, Lead Data Scientist at Kaspersky, who expounded on the risks and dangers lurking behind generative AI. Tushkanov explained that once generative AI technology sees prevalent application and usage in daily life, it will inadvertently increase vulnerability points.

He then presented a speculative scenario envisioning a future where robots seamlessly integrate into every household, handling diverse tasks like shopping, cooking, and cleaning. However, this optimistic vision quickly transformed into a nightmare scenario. In this dystopian narrative, a robot's AI fell victim to hacking, corrupting its logic engine, and causing it to disregard allergens in recipes. Consequently, the compromised robot unwittingly prepared a meal containing ingredients that could trigger severe allergic reactions.

Tushkanov shared another example that hits close to home, where a young individual, while browsing the Internet for materials to write an essay, inadvertently stumbled upon a website hosting a chatbot. This chatbot had been stealthily programmed to entice unsuspecting visitors into sharing their credit card and banking details.

These threats and vulnerabilities arise as a consequence of the widespread application and reliance on AI tools, giving rise to a new cybersecurity subfield. Key areas of concern include:

  • Prompt injection.

  • Jailbreak.

  • Prompt extraction.

  • Perplexity testing.

  • Token smuggling.

  • Prompt sandwiching.

  • LLM defence evaluation.

Tushkanov added that based on Kaspersky’s research, LLM-based chatbots are vulnerable to prompt injections and jailbreaks. Given the nature of these threats being LLM-based, he recommends that today’s cybersecurity professionals start familiarising themselves with the intricacies of AI and LLM-based threats. This will enable them to effectively identify and counter such threats, which are expected to become increasingly commonplace soon.

  • Businesses intending to utilise LLM-based chatbots should implement moderate inputs and outputs, and limit autonomy. Additionally, they should adhere to established guidelines such as OWASP, MITRE, etc.

  • Stay informed about the latest threats. Users can keep abreast of developments by regularly checking Kaspersky Threat Intelligence and Digital Footprint Intelligence.

  • Exercise caution when engaging with chatbots. Users should refrain from immediately complying with requests that involve sharing financial or other sensitive information. It's essential to verify the legitimacy and trustworthiness of the source before divulging such details.

Following the presentations, a panel discussion ensued featuring prominent figures from the digital and cybersecurity sectors: Ts Mohd Zabri Adil Talib (Acting Head of Division, Cyber Security Responsive Services, CyberSecurity Malaysia), Shazad Shahid (CEO at TPS Worldwide), and Vladislav Tushkanov (Lead Data Scientist at Kaspersky).

From the left: Ts Mohd Zabri Adil Talib (Acting Head of Division, Cyber Security Responsive Services, CyberSecurity Malaysia), Shazad Shahid (CEO at TPS Worldwide), Genie Gan (Senior Accredited Board Director. Government Affairs & Public Policy) and Vladislav Tushkanov (Lead Data Scientist at Kaspersky).

The panel discussion predominantly revolved around topics of data privacy, transparency, and user safety in light of the extensive integration of AI. However, Shahid's bold stance on the issue injected a moment of fervour into the discussion. He staunchly advocates for granting individuals the authority not only to retrieve the data they provided consensually but also to request its deletion.

Shahid's perspective on data transparency and privacy caught Tushkanov's attention, prompting him to craft counterarguments. Tushkanov clarified that while Shahid's suggestions may be feasible for data stored in traditional databases, they pose a different set of challenges when applied to Machine Learning (ML) models like ChatGPT. He emphasised that ML models such as ChatGPT not only store data but also learn and internalise it. Requesting the removal of data from such models is akin to asking them to forget the acquired knowledge.

Tushkanov stressed the necessity of ethical frameworks to uphold data integrity and privacy for all parties involved. He highlighted Kaspersky's initiative at the Internet Governance Forum in Kyoto, where they introduced six cybersecurity principles for the responsible use of AI.

Following Tushkanov's counterarguments, Mohd Zabri echoed similar sentiments. He advocates for government collaboration with the private sector to expedite the implementation of laws and regulations ensuring the ethical and responsible use of AI technology.

Mohd Zabri emphasised the importance of talent development to address the existing talent gap in Malaysia. He believes that by resolving this issue, the government can then work in tandem with the private sector to accelerate the formulation of regulations and enhance cybersecurity capabilities, thus promoting ethical AI use in Malaysia.

You might also like
Most comment
share us your thought

0 Comment Log in or register to post comments