
Future-Proofing Our World: Addressing the Top 5 Futuristic Cyber-Threats

Technology continues to advance at a breakneck pace, but excitement about the benefits it offers humanity is being tempered by several risks, including those posed by cyber-threat actors.

To ensure emerging technologies such as AI and quantum computing can be used safely and securely, it is vital that the cybersecurity industry and governments develop the necessary tools, techniques and regulations to safeguard these domains now.

Here are the top five futuristic cyber-threats on the horizon and how cybersecurity professionals can address them.

Deepfakes

First, what are deepfakes? Deepfakes are manipulated or entirely fabricated audio, video and imagery created using AI and machine learning algorithms. Typically used to impersonate high-profile figures, they were initially deployed in a light-hearted manner: starting in early 2021, a series of deepfake videos purportedly showing Hollywood actor Tom Cruise undertaking a range of activities, including a magic trick, was published on TikTok, generating tens of millions of views.

However, as this technology has developed and enabled the creation of increasingly realistic impersonations, it has provided opportunities for fraud and social engineering attacks.

In the first recorded case of a deepfake scam, in 2019, the CEO of a UK-based energy company was duped into transferring $243,000 to fraudsters after receiving a phone call from someone claiming to be the chief executive of the firm’s German parent company. In fact, AI voice technology had been used to spoof the German executive’s voice.

In another case, in October 2021, court documents revealed that a Hong Kong bank had been swindled out of $35m following an elaborate deepfake plot. The fraudsters used ‘deep voice’ technology to clone the voice of a company director about to make an acquisition and asked the bank to authorize transfers worth $35m.

Research has shown that humans are unable to reliably detect deepfake speech, highlighting the need to develop effective detection tools for deepfake audio and imagery alongside education strategies.

Generative AI

OpenAI’s launch of its AI chatbot ChatGPT in November 2022 thrust generative AI into the public consciousness. Since ChatGPT’s public release, other conversational generative AI tools, such as Google’s Bard, have also become available.

While these tools offer huge potential for enhancing businesses’ performance, they also present a range of cybersecurity risks.

Researchers have demonstrated that ChatGPT can be used to support the creation of malware code, potentially lowering the bar to entry for threat actors.

Additionally, generative AI tools are being used to create more realistic social engineering campaigns, including in languages not native to the attackers. Experts believe large-scale attacks, like phishing emails or malicious messages, can be deployed more efficiently and effectively using AI chatbots.

There are also concerns about data leakage emanating from the use of these tools, which led Samsung to ban its employees from using ChatGPT and other AI-powered chatbots in May 2023.

Organizations must put appropriate guardrails in place, such as monitoring, allowing or blocking prompts and file uploads to AI chatbots, to ensure they can enjoy the productivity benefits of these technologies safely.
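As an illustration, the short Python sketch below shows one way such a guardrail might screen outbound prompts for sensitive content before they reach an external chatbot. The pattern names and rules here are hypothetical assumptions for illustration only; in practice, organizations would rely on a dedicated data loss prevention (DLP) or security gateway product rather than a handful of regular expressions.

import re

# Illustrative, assumed patterns for content an organization may not want
# leaving its network via an AI chatbot; a real deployment would use a DLP engine.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def review_prompt(prompt: str) -> tuple[bool, list[str]]:
    # Return (allowed, reasons): block the prompt if any sensitive pattern matches.
    reasons = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (len(reasons) == 0, reasons)

allowed, reasons = review_prompt("Summarise this: api_key = sk-12345, do not share")
print("allowed" if allowed else "blocked: " + ", ".join(reasons))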



The Metaverse

The metaverse promises a single, immersive world utilizing virtual technologies. This arena is developing rapidly, with tech giants Microsoft, Facebook and Google all involved in building it.

However, the expansion of the metaverse is bringing about urgent security issues that threaten to make the virtual world attractive to cyber-criminals. Experts have warned that an environment where people can hide behind avatars could become a hotbed of hacking, fraud and misinformation.

In 2022, a report by Europol highlighted the range of threats that law enforcement should be aware of regarding the metaverse. These included ransomware targeting devices such as VR headsets; identity theft through the stealing of users’ biometric details; and harassment and child abuse and exploitation, including grooming, the sharing of sexual abuse material and, potentially, the use of haptic and tactile technology to physically interact with victims.

A key component to securing the metaverse will be strong identity verification, ensuring people can’t spoof their identity in this environment.

Quantum Computing

Quantum computers are expected to be capable of breaking existing cryptographic algorithms within the next 10 years, at which point digital information protected by current encryption protocols will become vulnerable to cyber-threat actors.

Experts believe that threat actors are already extracting encrypted data in anticipation of ‘Q Day,’ in what are known as ‘harvest now, decrypt later’ attacks.

The race is on to create and roll out quantum-secure cryptographic systems ahead of Q Day. This is an area where the US has made major strides, with the National Institute of Standards and Technology (NIST) recently publishing its draft post-quantum cryptography (PQC) standards, which incorporate four encryption algorithms.

The US’ Quantum Computing Cybersecurity Preparedness Act places obligations on federal agencies to migrate their IT systems to post-quantum cryptography once the NIST standards have been finalized.
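One widely discussed approach while that migration is under way is a ‘hybrid’ scheme that combines a classical key exchange with a post-quantum one, so that data stays protected even if one component is later broken. The minimal Python sketch below illustrates only the key-combination step; the secrets are random placeholders, and in a real system they would come from an actual ECDH handshake and a PQC KEM such as ML-KEM via an appropriate library.

import hashlib
import hmac
import os

def hybrid_key(classical_secret: bytes, pqc_secret: bytes) -> bytes:
    # Combine both shared secrets with HMAC-SHA256 so the derived session key
    # remains safe as long as at least one of the two components is unbroken.
    return hmac.new(b"hybrid-kex-v1", classical_secret + pqc_secret, hashlib.sha256).digest()

# Placeholders: in a real exchange these would come from an ECDH handshake and
# a PQC KEM (e.g. ML-KEM, one of the NIST-selected algorithms) respectively.
classical_secret = os.urandom(32)
pqc_secret = os.urandom(32)
print(hybrid_key(classical_secret, pqc_secret).hex())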

Efforts are also underway in the UK to develop quantum-secure technology; the government reportedly acquired its first quantum computer in 2022, aiming to boost its research capabilities in cyber-defense and other critical areas of national security.

In addition, BT, Toshiba and EY have launched a trial of a world-first quantum-secured metro network (QSMN) in London.

The European Policy Centre has urged the EU to take more action to prepare for quantum cyber-attacks.

Space Travel

The modern world is hugely reliant on the space industry, which is essential in areas like communication, navigation, timing and weather monitoring.

However, the growth of the space economy is making it an increasingly attractive target for threat actors – both nation-states and cyber-criminals.

There are concerns, for example, that nation-state actors will target rivals’ satellites to disrupt connectivity and cause other issues. This threat was illustrated by the Russian attack on US firm Viasat’s KA-SAT satellite network in February 2022, just prior to the invasion of Ukraine.

As commercial space travel and activities increase in the future, cyber-criminals are likely to view this as a profitable domain. A particularly dystopian example is attackers taking over the flight control systems of a spacecraft carrying tourists and holding it for ransom.

One major issue with resolving future cyber-threats in space is that there is currently a lack of established rules regarding appropriate behaviour in space, making it difficult for law enforcement and governments to catch and prosecute attackers.

Global collaboration, both to develop rules to govern space and to set cybersecurity standards in this domain, will be essential going forward. International standardisation groups such as the Consultative Committee for Space Data Systems (CCSDS) and the European Cooperation for Space Standardization (ECSS) are working on such initiatives for cybersecurity in space.

Experts also believe that AI technologies will be crucial in securing space systems, with intrusion prevention systems needing to interact with physical systems. Additionally, work is ongoing around optical communications, which use light to securely carry data flows to and within space.

