
The UK’s AI Strategy: Balancing Economic Potential with Security Risks

The UK’s approach to AI security and governance has come under significant scrutiny since the start of 2025.

The Labour government, which took office in July 2024, has set out a distinct approach to AI development compared with other countries, following a series of announcements and updates in the opening months of the year.

The strategy appears to prioritise innovation and looks to unlock the economic potential of the technology – ensuring its development is not stifled by too many top-down requirements.

This represents a more limited approach to AI regulation compared to other regions, most notably the EU.

Experts have expressed concern that the UK’s apparent light-touch strategy will fail to adequately address the major security, privacy and ethical issues around the technology.

This article will examine how the UK’s approach and vision for AI is being shaped and the potential implications of this strategy.

 

AI Security and Data Privacy Risks 

Experts have highlighted the significant security and privacy risks posed by the use of AI tools. These risks have been exacerbated by the widespread availability of large language models (LLMs) such as OpenAI’s ChatGPT.

Malicious actors have been observed exploiting such tools to support cyber campaigns, for example by crafting more realistic phishing emails and developing malware.

Accidental leakage of personal and proprietary data from AI models is also a growing concern.

Additionally, significant ethical concerns have been raised around AI’s impact on individuals and wider society as it is adopted more broadly. These include the potential for bias and discrimination to occur when AI is applied in critical fields such as medicine and justice.

The recent launch of the Chinese LLM DeepSeek R1 has further highlighted these issues, with the app quickly coming under fire for multiple security weaknesses.

Such concerns have resulted in some businesses, and even countries, banning certain AI tools.

It is therefore essential that AI development and use are underpinned by strong safeguards, so that the technology’s benefits can be realised safely and securely.

UK Sets Out its AI Stall

In January 2025, the UK government published an AI Opportunities Action Plan. The plan has the core ambition of unlocking the economic benefits offered by AI and seeks to establish Britain as a global leader in the development of the technology.

The Action Plan sets out how this vision can be achieved in partnership with the private sector, including by building sustainable infrastructure and regulation, as well as pushing for cross-economy AI adoption.

While the government’s focus on AI has been welcomed by the tech community, concerns quickly emerged about the lack of discussion around security and ethical considerations in the publication.

Deryck Mitchelson, Global CISO at Check Point Software, warned: “The government’s AI action plan is ambitious, but it risks becoming another example of public sector technology promises failing to deliver. Without robust safeguards, this could result in catastrophic breaches of personally identifiable information (PII) and a further erosion of public trust in technology-driven services.”



UK and US Fail to Sign AI Agreement

These concerns have been exacerbated by the refusal of the UK and US to sign an international agreement on AI at a global summit in Paris.

This agreement pledged an "open", "inclusive" and "ethical" approach to the technology's development, and was signed by 60 countries, including France, China and India.

The UK government reportedly said it had concerns about national security and global governance in relation to the agreement.

So far, the UK has steered clear of specific legislation governing the security and privacy of AI. Instead, it has signalled its intention to pursue a lighter-touch, flexible regime for safeguarding the technology.

UK Publishes AI Code of Practice

Shortly after the AI Action Plan was published, the government announced a new AI Code of Practice, which it says will form the basis of a global standard for securing the technology.

The voluntary code contains 13 principles, covering the secure design, development, deployment, maintenance and end-of-life aspects of the AI lifecycle.

In February 2025, another AI governance announcement followed: the decision to rebrand the UK’s AI Safety Institute as the AI Security Institute. The change marked a shift in the government’s AI governance strategy towards serious AI risks with security implications. The Institute was launched in April 2023 to test the safety of emerging types of AI.

The UK’s current AI governance strategy contrasts significantly with those of other regions, most notably the EU, which passed its AI Act in 2024. The legislation sets harmonised security requirements for AI products sold on the EU market and adopts a risk-based approach to ensure safety and transparency.

The EU AI Act imposes horizontal obligations on all models, stricter requirements on ‘high-risk’ systems and outright bans on ‘unacceptable-risk’ AI practices.

Impact of UK’s Current AI Governance Strategy

The recent announcements by the UK government around AI development and governance indicate it is prioritising innovation and limiting legislative security requirements.

Jason Raeburn, Partner and Head of Intellectual Property & Technology Litigation at law firm Paul Hastings, noted: “There is clearly an opportunity to distinguish the UK from the EU in establishing a more flexible exception to the current EU regime, however, how far the Government will realistically go is yet to be seen.”

Adam Pilton, Senior Cybersecurity Consultant at CyberSmart, warned that excitement around the opportunities of AI should not lead to decisions that put the UK’s security at risk.

“Whether this be our private health care information held by the NHS or sensitive information meant for government eyes only, it is vital that throughout all the steps the UK takes in adopting and embracing AI, we embed security,” Pilton commented.

The question now is whether the UK’s voluntary, guidance-based approach creates a sufficiently strong security and privacy culture around the use of AI.

In particular, the government must ensure that AI developers and researchers have strong data security practices in place when training their AI models.

Michael Adjei, Director, Systems Engineering at Illumio, said: “No data should be handed over if a third party does not have an AI security framework or fails to meet the required standards.”

Flexible Frameworks Key to Effective AI Security

Several industries have voluntarily adopted robust AI security practices.

Dr Kjell Carlsson, Head of AI Strategy at Domino Data Lab, explained that in many high-risk industries, such as healthcare and finance, robust AI governance frameworks are used daily to validate models, ensure fairness and maintain compliance with existing data protection rules.

As such, a strategy that leverages industry best practices and maintains a flexible approach as the technology evolves could be more effective than prescriptive regulations.

“The challenge now is not defining new principles but scaling these governance capabilities across organisations and industries,” commented Carlsson.

Training and awareness will be critical to making this approach work, ensuring that consumers demand robust AI safeguards from manufacturers and that users handle the tools safely.

Andrew Rose, CSO at SoSafe, said: “The priority should be on awareness, education and training for both users, their families and your customers to minimise the possibility of a successful attack against any of your extended population base.”

Conclusion

AI regulation is in its early stages, with governments around the world still grappling with how best to safeguard against the technology’s risks without stifling innovation.

Currently, the UK appears to be taking a relatively light-touch approach to AI regulation compared with the EU, relying on guidance and the promotion of industry best practices rather than prescriptive rules that could hold back the technology’s potential.

This strategy is likely to be kept under review, and the government could come under heavy pressure to introduce legislation mandating strong security and ethical practices from developers if the technology’s risks are deemed too high.

