Europe’s proposed new law for AI development and deployment: What’s under the hood?

Research Money
December 20, 2023

The European Commission, the EU Council, and the European Parliament reached a provisional agreement on the details of the European Union Artificial Intelligence Act, two and a half years after the proposed law was first introduced.

The new law aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field.

The law would not apply to AI systems used exclusively for military or defense purposes or to AI systems used for the sole purpose of research and innovation.

The rules establish obligations for AI based on its potential risks and level of impact. Under the law, prohibited AI applications include:

  • biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • emotion recognition in the workplace and educational institutions
  • social scoring based on social behaviour or personal characteristics
  • AI systems that manipulate human behaviour to circumvent people's free will
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

The agreement includes a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and restricted to strictly defined lists of crimes. “Post-remote” RBI would be used only in the targeted search of a person convicted or suspected of having committed a serious crime, in targeted searches of victims (abduction, trafficking, sexual exploitation), and in preventing a specific and present terrorist threat. RBI’s use would be limited in time and location.

However, the agreement “continues to have carveouts that could lead to potential abuse and create the conditions for AI to be used for mass surveillance purposes,” said Konstantinos Komaitis, a nonresident fellow with the Democracy + Tech Initiative of the U.S. think tank Atlantic Council’s Digital Forensic Research Lab. Although the law’s intention is to ban emotion recognition in the workplace and schools, its use could still be allowed by law enforcement, he wrote in a piece for the Atlantic Council. “The same goes for biometric categorization, where there is a general prohibition but there are some narrow exemptions for law enforcement.”

The agreement provides for a fundamental rights impact assessment to be conducted before a high-risk AI system is placed on the market by its developers. The agreement also gives citizens the right to launch complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.

General-purpose AI (GPAI) systems, such as ChatGPT and other generative systems, and the models they are based on will have to adhere to transparency requirements as initially proposed by the European Parliament. These include drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries of the content used for training.

For high-impact general-purpose models that carry systemic risk, the obligations are more stringent. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.

Agreement includes mechanisms to implement and enforce the new law

Following the new rules on general-purpose AI models and the need for their enforcement at the EU level, an AI Office within the European Commission will be set up to oversee these most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states.

A scientific panel of independent experts will advise the AI Office about GPAI models, by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.

The AI Board, which would comprise member states’ representatives, will remain a coordination platform and an advisory body to the Commission, and will give member states an important role in the implementation of the regulation, including the design of codes of practice for foundation models.

In addition, an advisory forum for stakeholders, such as industry representatives, SMEs, startups, civil society, and academia, will be set up to provide technical expertise to the AI Board.

The EU will need to strongly enforce the AI Act, wrote Nicole Lawler, a program assistant at the Atlantic Council’s Europe Center. “Without that, experts argue, the legislation will inevitably lack teeth, and member states could rely on weak enforcement of the AI Act to protect their interests.”

The agreement also promotes so-called regulatory sandboxes and real-world-conditions testing, established by national authorities to develop and train innovative AI before placement on the market. This provision is aimed at helping to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain.

However, some experts argue that while European firms wait for EU regulators to approve their AI system, their products risk becoming outdated in the rapidly evolving market.

Non-compliance with the EU’s new AI law can lead to fines ranging from 7.5 million euros or 1.5 per cent of a company’s global annual turnover to 35 million euros or seven per cent of turnover, depending on the infringement and the size of the company.
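To make the penalty arithmetic concrete, here is a minimal Python sketch of the two tiers for a hypothetical company with 2 billion euros in global annual turnover. The tier amounts come from the reporting above; the “whichever is higher” rule and the turnover figure are illustrative assumptions modelled on comparable EU legislation such as the GDPR, not details confirmed in the provisional agreement.

    # Illustrative sketch only. Tier amounts come from the article above;
    # the "whichever is higher" rule and the example turnover figure are
    # assumptions modelled on comparable EU law (e.g., the GDPR), not
    # details confirmed in the provisional agreement.

    def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        """Maximum possible fine for one penalty tier."""
        return max(fixed_cap_eur, turnover_eur * pct)

    TURNOVER = 2_000_000_000  # hypothetical global annual turnover, in euros

    # Most serious tier (e.g., prohibited AI practices): EUR 35M or 7%.
    print(fine_ceiling(TURNOVER, 35_000_000, 0.07))   # 140000000.0

    # Least serious tier: EUR 7.5M or 1.5%.
    print(fine_ceiling(TURNOVER, 7_500_000, 0.015))   # 30000000.0

For a company of this size, the turnover percentage sets the ceiling in both tiers; for a small firm, the fixed amounts would dominate instead.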

Both Parliament and the EU Council still need to formally adopt the agreed text, but the political deal means the law is expected to take effect in 2026.

The EU law wouldn’t affect only the European Union’s 27 member countries. Any company around the world that collects the personal data of European residents and uses a generative AI system to process that data would have to comply with the law’s requirements.

In the U.S., in the absence of federal legislation on AI, President Joe Biden at the end of October issued a sweeping new executive order establishing new standards for AI safety and security.

In Canada, the federal government’s Bill C-27, which contains the proposed Artificial Intelligence and Data Act (AIDA), is still before the House of Commons industry committee. The government has proposed several amendments to Bill C-27 in response to feedback from stakeholders and parliamentarians.

R$

