The Rise of AI Regulation: What You Need to Know

The emergence of generative AI tools, such as chatbots based on large language models (LLMs) like ChatGPT, Bard and Claude, as well as image generators like DALL-E and Midjourney, has prompted a global movement towards the regulation of consumer AI.

Stanford University's 2023 AI Index shows that a record-breaking 37 AI-related bills were passed into law globally in 2022.

However, regulators worldwide are adopting different approaches: some favour strict rules governing how these AI tools are built and used, while others prefer to coordinate with innovators. Here are Infosecurity’s top five recent AI regulation developments you need to know about.

1. EU Set to Pioneer Regulation (Again)

Do you remember the concerns over online personal data violations a few years ago? EU authorities were the first to respond with a strict, binding regulation requiring organisations operating within EU borders to respect personal data protection principles. The result was the General Data Protection Regulation (GDPR), which came into force in 2018 and celebrated its fifth anniversary in 2023.

The EU is now set to pioneer AI regulation with its AI Act, driven by fears of privacy breaches and security threats posed by the new technology.

Introduced in April 2021, the text even pre-dated the surge in generative AI adoption that began in late 2022. Its latest draft, approved by the Internal Market Committee and the Civil Liberties Committee on May 11, introduced new measures to control “foundation models.”

These measures include regulating ‘high-risk’ AI practices and banning those that pose an ‘unacceptable risk.’

Just like GDPR, the AI Act imposes heavy fines for non-compliance: the proposed bill mentions penalties of up to €30m ($32m) or 6% of global annual turnover.

This time, however, EU representatives hope other countries will follow them. In April, 12 EU lawmakers working on AI legislation called for a global summit to find ways to control the development of advanced AI systems. They urged US President Joe Biden and European Commission President Ursula von der Leyen to convene a meeting of world leaders.

2. US States and Localities Move Quicker than the Federal Government

At the time of writing, the US federal government has yet to propose such a law.

Instead, the Biden administration has published a few non-binding guidance documents, including the October 2022 Blueprint for an AI Bill of Rights, which addresses concerns about AI misuse and provides recommendations for safely using AI tools in the public and private sectors.

Regulatory work in US states and localities appears to have outpaced efforts at the federal level. In 2022, 15 states and localities proposed or passed legislation concerning AI – some focused on regulating AI tools in the private sector, while others set standards for public-sector AI use. Other states, like Colorado and Vermont, have created task forces to study potentially harmful AI applications, such as facial recognition.

Interestingly, New York City was the first US locality to adopt a law on AI use. The law on “automated employment decision tools” aims to prevent AI bias in the employment process. It became effective in January 2023.



3. UK and Canada Take a ‘Pro-Innovation’ Stance

The UK is also diverging from the EU approach. In March, the UK government said it was taking “a pro-innovation approach to AI regulation.” It published a white paper setting out its plan: no new legislation and no new regulatory body for AI. Instead, responsibility will be handed to existing regulators in the sectors where AI is applied.

In April, the UK announced it would invest £100m ($125m) to launch a Foundation Model Taskforce, which it hopes will spur the development of AI systems and boost the nation's GDP.

Canada, meanwhile, chose to introduce new legislation, the AI and Data Act, as part of the federal Bill C-27 on Digital Charter Implementation, proposed in June 2022.

While very little is known at this stage, a companion paper published in March indicates that Canada will not ban automated decision-making tools, even in critical areas. Instead, the Canadian government plans to encourage AI developers to implement harm-prevention measures, such as creating mitigation plans to reduce risks and increasing transparency when AI is used in high-risk systems.

4. China Has Started Its Regulatory Journey

On April 11, 2023, the Cyberspace Administration of China (CAC) released the first draft of an AI law, the Administrative Measures for Generative Artificial Intelligence Services, and opened it to public consultation.

The measures outlined apply to “research and development into, as well as the use of, generative AI” offered to “the public” in mainland China. They cover various issues, including data protection, non-discrimination, bias and the quality of the data used to train generative AI models.

In the text, the Chinese government also encourages “the indigenous development of generative AI technology, and encourages companies to adopt secure and trustworthy software, tools, computing and data resources to that end.”

The draft suggests that penalties could be imposed in cases of non-compliance.

5. AI Promoters Welcome Regulation… Or Do They?

Michael Schwarz, the chief economist at Microsoft, ChatGPT’s main backer, said during the World Economic Forum Growth Summit on May 3 that regulators “should wait until we see [any] harm before we regulate [generative AI models].”

The next day, US Vice President Kamala Harris met with the CEOs of four American companies developing generative AI – Alphabet, Anthropic, Microsoft and OpenAI – as the Biden administration announced new measures to promote ‘responsible’ AI.

On May 16, Sam Altman, OpenAI’s CEO, testified before a US Senate committee. He said AI companies like his should be independently audited and given licences to operate by a dedicated government agency.

In May, Altman, along with representatives from Google DeepMind and Microsoft and hundreds of AI researchers and leaders, also signed an open letter published by the Center for AI Safety calling “for further regulations to prevent humanity’s extinction.”

At the same time, Altman said during a visit to London on May 25 that he had “many concerns” about the EU’s planned AI Act, warning that OpenAI’s services might have to cease operating in EU markets.
