Infosecurity Europe
3-5 June 2025
ExCeL London

Cybersecurity Considerations for ChatGPT 

Every now and then a technology bursts onto the scene that has everyone talking, and in the early part of 2023, it's ChatGPT.

What is ChatGPT? It is an artificial intelligence (AI)-powered language model from OpenAI, which has been deployed in a chatbot format.

Having burst onto the tech scene in late 2022, and with Microsoft committing $10bn of investment into the AI chatbot, the product has come under much scrutiny from the cybersecurity world. Some believe it could be used to generate malware, others are concerned about its ability to create very convincing phishing emails, while the data privacy sector is scrutinizing the information used in the AI model itself.

Check Point Software's data group manager, Omer Dembinsky, has predicted that AI tools like ChatGPT will help fuel a continued surge in attacks in 2023 by making it quicker and easier for bad actors to generate malicious code and emails.

In this article we will list the biggest security concerns ChatGPT has raised so far. 

1. Social Engineering 

ChatGPT is already being utilized by students to write college essays, so it follows that it can also be used by nefarious actors to create convincing phishing emails. These emails are a prime tactic for many threat actors and are used to fool victims into clicking links and downloading malware, ultimately stealing credentials and sensitive information in order to perform other cyber-attacks such as ransomware. One tip often touted for spotting phishing emails is poor grammar, bad English and incorrect vocabulary, but using a tool like ChatGPT to create the phishing email will eliminate these markers.

2. Malware and Ransomware 

Many security vendors have been putting ChatGPT to the test to see whether it will write malware when given text-based instructions. According to cybersecurity researchers at CyberArk, it can. This was achieved by bypassing the content filters that prevent ChatGPT from creating malicious tools, which CyberArk said was done by persistently asking the chatbot the same question. Similarly, ChatGPT is programmed not to write ransomware, but Picus Security co-founder Suleyman Ozarslan said he was able to get what he wanted simply by asking the right questions.

3. Encryption Tools 

According to Check Point Research, cyber-criminals who have little to no development skills could leverage ChatGPT to develop malicious tools and become fully-fledged cyber-criminals with technical capabilities. This includes the creation of a multi-layered encryption tool in the Python programming language. Check Point noted that on December 21, 2022, a threat actor dubbed USDoD posted a Python script, which he emphasized was the first script he had ever created. The threat actor confirmed that OpenAI had provided a "helping hand" when creating the script. The script implements a variety of functions, including generating cryptographic keys and the ability for the user to encrypt all files in a specific directory.
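Check Point did not publish the script itself, but to make the capabilities concrete, a benign sketch of those two building blocks, key generation and directory-wide file encryption, might look like the following. All function names here are illustrative, and the XOR keystream is a deliberately simplified toy construction, not production cryptography; a real tool would use a vetted library.

```python
import hashlib
import secrets
from pathlib import Path

def generate_key() -> bytes:
    """Generate a random 256-bit symmetric key."""
    return secrets.token_bytes(32)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_bytes(key: bytes, data: bytes) -> bytes:
    """XOR data with a fresh keystream; prepend the nonce for decryption."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(data))
    return nonce + bytes(a ^ b for a, b in zip(data, stream))

def decrypt_bytes(key: bytes, blob: bytes) -> bytes:
    """Recover plaintext: split off the nonce and XOR with the same keystream."""
    nonce, data = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_directory(directory: str, key: bytes) -> int:
    """Encrypt every regular file in a directory in place.

    Returns the number of files encrypted.
    """
    count = 0
    for path in Path(directory).iterdir():
        if path.is_file():
            path.write_bytes(encrypt_bytes(key, path.read_bytes()))
            count += 1
    return count
```

The point of the sketch is how little code the reported capabilities require: a few dozen lines cover key generation and bulk file encryption, which is precisely why researchers worry about low-skill actors assembling such tools with AI assistance.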

4. Data Privacy Concerns 

Some experts are questioning the very existence of OpenAI's chatbot for privacy reasons. The method OpenAI used to collect the data ChatGPT is built on ought to be scrutinized, according to some, and there are many tensions between the technology's foundational model and GDPR. As the model has likely scraped billions of data points from the internet, it has been questioned how it fits into various data protection rules globally.

OpenAI has put some content filters and rules in place to prevent the chatbot from producing nefarious content. However, clearly more needs to be done as those in the security world continue to test ChatGPT to its very limits.
