How CISOs Can Defend Against the Rise of AI-Powered Cybercrime

Cybercriminals are wielding AI as a force multiplier, turning once-complex attacks into cheap, scalable and automated threats.

In 2025, the Computer Emergency Response Team of Ukraine (CERT-UA) identified the ‘LameHug’ malware, which used a large language model (LLM) to generate commands for execution on compromised Windows systems.

Later that same year, the generative AI (GenAI) firm Anthropic reported that threat actors had used Claude Code, its coding assistant, to conduct cyber-attacks.

Group-IB’s January 2026 report, Weaponized AI: Inside the Criminal Ecosystem Fueling the Fifth Wave of Cybercrime, warns that GenAI and dark LLMs are democratising cybercrime, lowering the barrier to entry for low-skilled threat actors while supercharging the sophistication of phishing, deepfake fraud and malware-as-a-service schemes.

According to Check Point, VoidLink, one of the latest malware frameworks targeting Linux-based cloud servers, was likely generated almost entirely by AI.

These examples show that AI is no longer a peripheral tool for cybercriminals: it is becoming an operational component of real-world attacks.

For defenders, traditional security measures struggle to keep pace with AI-driven attacks that adapt in real time, evade detection and exploit trust at scale.

How CISOs and IT Leaders Can Mitigate AI Threats

Drawing on several cybersecurity reports, Infosecurity has gathered five concrete steps defenders can take to counter the AI threat wave:

  • Make AI central to security strategy: Organisations should integrate AI across detection, response and fraud prevention systems, leveraging its speed and scalability to match attackers
  • Layer defences against synthetic threats: Traditional safeguards (like spelling-error checks or static know-your-customer processes) are unlikely to be effective against AI-generated deepfakes and personalised lures. Organisations should instead prioritise multi-layered defences combining biometric verification, device and session analysis, behavioural risk scoring and AI-powered fraud detection (see the risk-scoring sketch after this list)
  • Evolve employee training: Cybersecurity firms advise organisations to pivot their security awareness programmes from spotting typos to recognising contextual manipulation, such as AI-crafted urgency, fabricated authority or hyper-personalised social engineering
  • Adopt ‘AI vs. AI’ defence tactics: Organisations should leverage automated threat hunting, AI-driven incident response and predictive modelling to pre-empt attacks before they launch (see the anomaly-detection sketch further below)
  • Collaborate across borders and industries: More cross-sector intelligence sharing and public-private partnerships can help disrupt AI-powered criminal ecosystems
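
To make the behavioural risk scoring in the second step concrete, here is a minimal Python sketch that combines several session signals into a single score and maps it to an action tier. Every signal name, weight and threshold below is a hypothetical assumption for illustration; a real deployment would calibrate them against labelled fraud data.

    # A minimal sketch of multi-signal behavioural risk scoring.
    # Every signal, weight and threshold here is a hypothetical
    # assumption; real systems calibrate these against fraud data.
    from dataclasses import dataclass

    @dataclass
    class SessionSignals:
        device_known: bool             # device fingerprint seen before
        biometric_confidence: float    # 0.0-1.0 from a liveness check
        typing_cadence_anomaly: float  # 0.0-1.0 deviation from baseline
        geo_velocity_flag: bool        # impossible-travel between logins

    def risk_score(s: SessionSignals) -> float:
        """Combine independent signals into a single 0-1 risk score."""
        score = 0.0
        if not s.device_known:
            score += 0.30
        score += 0.35 * (1.0 - s.biometric_confidence)
        score += 0.20 * s.typing_cadence_anomaly
        if s.geo_velocity_flag:
            score += 0.15
        return min(score, 1.0)

    def decide(score: float) -> str:
        """Map risk to an action tier instead of a hard allow/deny."""
        if score < 0.30:
            return "allow"
        if score < 0.60:
            return "step-up"  # e.g. request an extra verification factor
        return "deny"

    session = SessionSignals(device_known=False,
                             biometric_confidence=0.55,
                             typing_cadence_anomaly=0.40,
                             geo_velocity_flag=False)
    s = risk_score(session)
    print(f"risk={s:.2f} -> {decide(s)}")  # risk=0.54 -> step-up

The design point is that no single signal blocks a session outright: a mid-range score triggers step-up verification rather than a hard deny, keeping friction low for legitimate users while still catching layered anomalies.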

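As one illustration of the ‘AI vs. AI’ tactics in the fourth step, the sketch below uses an unsupervised anomaly detector (scikit-learn’s IsolationForest) to flag sessions whose telemetry deviates from a learned baseline. The three telemetry features and the synthetic data are assumptions chosen for illustration, not a recommended schema.

    # A minimal sketch of unsupervised anomaly detection for automated
    # threat hunting, using scikit-learn's IsolationForest. The three
    # telemetry features (events/min, distinct hosts contacted, bytes
    # out) and the synthetic data are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Baseline: normal workstation sessions clustered at low values.
    normal = rng.normal(loc=[20, 3, 5_000],
                        scale=[5, 1, 1_500], size=(500, 3))

    # A few sessions resembling scripted, high-volume activity.
    suspect = rng.normal(loc=[200, 40, 80_000],
                         scale=[20, 5, 9_000], size=(5, 3))

    model = IsolationForest(contamination=0.01,
                            random_state=42).fit(normal)

    telemetry = np.vstack([normal[:5], suspect])
    labels = model.predict(telemetry)            # +1 inlier, -1 anomaly
    scores = model.decision_function(telemetry)  # lower = more anomalous

    for row, label, score in zip(telemetry, labels, scores):
        flag = "ANOMALY" if label == -1 else "ok"
        print(f"{flag:7s} score={score:+.3f} features={np.round(row, 1)}")

In practice a detector like this would feed a triage queue or an automated response playbook rather than block anything directly, since unsupervised models trade precision for coverage.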

Conclusion

As cybercriminals increasingly weaponise AI to automate, scale and personalise their attacks, organisations can no longer rely on legacy controls or traditional training approaches.

The examples emerging from 2025 and 2026 make it clear that AI-enabled threats are not theoretical: they are actively reshaping the cybercrime landscape.

To keep pace, CISOs and cybersecurity leaders must treat AI as a foundational component of modern defence, investing in layered protections, adaptive risk scoring and intelligence-driven response.

The organisations that move fastest to operationalise AI within their security posture will be best positioned to withstand this new era of AI-accelerated threats.

