Infosecurity Europe
3-5 June 2025
ExCeL London

How is AI used in Cybersecurity?

Cybersecurity practitioners are using AI – but which techniques exactly?

The emergence of commercial generative AI tools in 2022 led many cybersecurity providers to launch novel AI-powered solutions and services in 2023.

While generative AI is bringing new capabilities, this is not the first type of artificial intelligence that has been integrated into security products.

AI techniques have been leveraged by cybersecurity vendors for years now – and with many benefits. Threat hunting, anomaly detection and user and entity behaviour analytics (UEBA), for instance, are a few cyber domains where AI has proven remarkably effective.

According to IBM’s August 2023 Cost of a Data Breach global survey, extensive use of AI and automation saved organisations nearly $1.8m in data breach costs and accelerated data breach identification and containment by over 100 days, on average, over the past year.

Infosecurity selected some of the AI algorithms most useful for cybersecurity. 

Most-Used AI Algorithms in Cybersecurity

Some of the most-used AI algorithms in cybersecurity can be classified as machine learning (ML) algorithms. Machine learning refers to a branch of AI and computer science that focuses on using data to imitate how humans learn, gradually improving its accuracy.

These are:

  • Classifiers: these are used to filter emails, identify malware, and detect other suspicious activity. For example, a classifier could be trained to identify phishing emails by looking for common patterns in the emails, such as the sender's email address, the subject line, and the body of the email.
  • Regression models: these are used to predict the risk of a cyber-attack. For example, a regression model could be trained to predict the risk of a data breach based on the organisation's industry, size, and security practices.
  • Clustering algorithms: these are used to identify groups of data points that are similar to each other. This can be useful for identifying groups of compromised hosts or groups of malicious IP addresses.
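
To make the classifier idea concrete, here is a minimal, illustrative sketch of a naive Bayes email classifier in pure Python. The training phrases and the "phish"/"ham" labels are invented for demonstration – real products train on large labelled corpora, not six examples:

```python
import math
from collections import Counter

def train_naive_bayes(examples):
    """Count words per class from (text, label) pairs.

    Returns per-class word counters, per-class document counts, and the vocabulary.
    """
    counts = {"phish": Counter(), "ham": Counter()}
    class_totals = Counter()
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        counts[label].update(words)
        class_totals[label] += 1
        vocab.update(words)
    return counts, class_totals, vocab

def classify(text, counts, class_totals, vocab):
    """Return the most likely label using log-probabilities with add-one smoothing."""
    total_docs = sum(class_totals.values())
    best_label, best_score = None, float("-inf")
    for label in counts:
        # Prior: fraction of training emails carrying this label.
        score = math.log(class_totals[label] / total_docs)
        n_words = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the probability.
            score += math.log((counts[label][word] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data – far too small for real use.
training = [
    ("verify your account password urgent", "phish"),
    ("click here to claim your prize now", "phish"),
    ("urgent action required account suspended", "phish"),
    ("meeting agenda for tomorrow attached", "ham"),
    ("quarterly report draft for review", "ham"),
    ("lunch plans for friday", "ham"),
]

model = train_naive_bayes(training)
print(classify("urgent verify your password", *model))   # → phish
print(classify("draft agenda for the meeting", *model))  # → ham
```

Production filters use far richer features than raw words – sender reputation, URLs, headers – but the probabilistic scoring principle is the same.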

More sophisticated AI models used in cybersecurity leverage deep learning methods, a sub-domain of machine learning where models are based on multi-layered artificial neural networks with representation learning. Some of these are:

  • Convolutional neural networks (CNNs): these are used to identify objects and patterns in images. This can be useful for classifying malware binaries rendered as images and detecting facial features in surveillance footage.
  • Recurrent neural networks (RNNs): these are used to process sequential data, such as text and network traffic. This can be useful for detecting phishing emails and analysing malware code.
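
The building block of a CNN is the convolution operation: sliding a small kernel over an input and measuring how strongly each region matches the kernel's pattern. The toy example below (pure Python, no ML framework) applies a vertical-edge kernel to a tiny synthetic "image" – a sketch of the mechanism, not a trained network:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (strictly cross-correlation, as in most DL libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A flat region on the left, then a left-to-right intensity step.
image = [
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
# This kernel responds strongly where intensity rises from left to right.
vertical_edge = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve2d(image, vertical_edge))  # → [[0, 3, 3], [0, 3, 3]]
```

The zero in each output row is the flat region; the 3s mark the edge. A real CNN learns its kernel values during training rather than using hand-picked ones.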

A third category of AI algorithms is computer vision, which includes:

  • Object detection algorithms: these are used to identify and locate objects in images and videos. This can be useful for detecting malware in images and identifying facial features in surveillance footage.
  • Anomaly detection algorithms: these are used to identify unusual or suspicious activity in images and videos. This can be useful for detecting cyber-attacks in progress.
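
The same anomaly detection principle applies to any numeric telemetry, not just images. As a deliberately simplified sketch, a z-score test flags observations far from the mean – the traffic figures below are hypothetical, and real products use far more sophisticated statistical and ML models:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical per-minute outbound bytes; the spike could indicate data exfiltration.
traffic = [120, 130, 125, 118, 122, 127, 4000, 124, 121]
print(zscore_anomalies(traffic, threshold=2.0))  # → [6]
```

Flagging index 6 (the 4,000-byte spike) is the easy part; the operational challenge is choosing a threshold that catches attacks without drowning analysts in false positives.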

Finally, cybersecurity providers had started using natural language processing (NLP) algorithms before the broad adoption of large language models (LLMs), which are a part of NLP. Two of the most common NLP algorithms before the emergence of generative AI tools include:

  • Text classification algorithms: these are used to classify text into different categories. This can be useful for detecting phishing emails, identifying spam, and analysing social media posts for threats.
  • Named entity recognition (NER): these are used to identify named entities in text, such as people, organisations, and locations. This can be useful for identifying potential targets of cyber-attacks and tracking the spread of malware.
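
Modern NER models are statistical, but a rule-based stand-in illustrates the idea: extracting indicator-like "entities" from threat-report text with regular expressions. The patterns and the sample report below are simplified and invented for demonstration:

```python
import re

# Simplified patterns for three security-relevant entity types.
PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|net|org|io)\b"),
    "cve": re.compile(r"\bCVE-\d{4}-\d{4,7}\b"),
}

def extract_entities(text):
    """Return {entity_type: [matches]} for every pattern that fires on the text."""
    found = {}
    for name, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[name] = matches
    return found

report = ("The campaign exploiting CVE-2023-12345 used the C2 server "
          "at 203.0.113.42 and the lookalike domain examp1e-login.com.")
print(extract_entities(report))
# → {'ipv4': ['203.0.113.42'], 'domain': ['examp1e-login.com'], 'cve': ['CVE-2023-12345']}
```

Trained NER models go further, recognising entities (people, organisations, malware family names) that have no fixed syntax and so cannot be captured by a regex.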

These are just a few examples of how AI algorithms are used in cybersecurity. As AI continues to develop, we can expect to see new and innovative ways to use AI to protect our systems and data from cyber-attacks.

Limitations of Using AI for Cybersecurity

Although powerful for specific use cases, AI technologies are not yet widely deployed in cybersecurity.

For instance, IBM’s latest Cost of a Data Breach survey found that most organisations (72%) have not broadly or fully deployed AI in cybersecurity operations.

One of the reasons is that AI technologies also present some limitations that prevent them from becoming mainstream security tools. Here are some of these limitations:

  • Need for accuracy: AI systems are only as good as the data they are trained on. If the data is biased or incomplete, the AI system will produce biased or incomplete results. This can lead to false positives and false negatives, which can make it challenging to identify and respond to real threats.
  • Data hunger: security companies need many different datasets of anomalies and malware samples to train an AI system. Acquiring accurate datasets can require significant time and resources (money, personnel, computing power), which some companies cannot afford.
  • Lack of transparency: AI systems can be complex and difficult to understand. This can make it difficult to understand how they make decisions and to identify potential biases. This lack of transparency can make it difficult to trust AI systems and to use them effectively.
  • Vulnerability to adversarial attacks: adversarial attacks are designed specifically to fool AI systems. For example, an attacker might create a slightly modified image that an AI system classifies as a cat, even though it is actually a picture of a dog. Adversarial attacks can be difficult to defend against, and they pose a significant threat to AI-powered cybersecurity systems.
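
The cat-versus-dog example above can be sketched on a toy linear classifier. The weights, "pixels" and the exaggerated perturbation size below are all invented for illustration – this is the spirit of a fast-gradient-sign-style attack, not an attack on a real trained model:

```python
def predict(weights, bias, x):
    """Toy linear classifier: positive score → 'cat', otherwise 'dog'."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "cat" if score > 0 else "dog"

def fgsm_perturb(weights, x, epsilon):
    """FGSM-style step: nudge each feature in the direction that raises the 'cat' score."""
    sign = lambda w: (w > 0) - (w < 0)
    return [xi + epsilon * sign(w) for w, xi in zip(weights, x)]

# Hand-picked weights and a 4-"pixel" dog image (purely illustrative, not trained).
weights = [0.5, -0.8, 0.3, -0.2]
bias = 0.0
dog_image = [0.1, 0.9, 0.2, 0.8]

print(predict(weights, bias, dog_image))  # → dog
# Epsilon is exaggerated here so the flip is obvious; real attacks use
# perturbations small enough to be invisible to humans.
adversarial = fgsm_perturb(weights, dog_image, epsilon=0.6)
print(predict(weights, bias, adversarial))  # → cat
```

Each feature moves by only epsilon, yet every nudge pushes the score the same way, so the small per-pixel changes add up to a flipped prediction – which is why such attacks are hard to spot by inspecting inputs.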

