AI Act. EU countries have reached an agreement on the regulation of artificial intelligence.

On December 8, 2023, after three days of intense negotiations, the European Union countries reached a preliminary agreement on the content of the AI Act - the world's first comprehensive regulations governing the use of artificial intelligence.

Summary

  • Europe has established clear rules for the use of artificial intelligence (AI) with the AI Act, following trilateral negotiations between the European Commission, the European Parliament, and member states.
  • The Act, which was under pressure to be concluded by December 2023, aims to provide a long-term perspective resistant to rapid technological changes.
  • It categorizes AI threats into four levels: minimal, limited, high, and unacceptable risk. The first two levels require service providers to provide minimal transparency.
  • High-risk AI tools, such as recommendation systems of large internet platforms and tools used by public administration and law enforcement agencies, will be subject to rigorous restrictions and strict public monitoring.
  • Service providers will need to disclose all sources used to train an AI algorithm and, if required, demonstrate that it was developed lawfully. Non-compliance will result in removal of the application or a fine of up to 7% of the company's global annual turnover.
  • The AI Act has faced controversy, particularly around "unacceptable risk" systems used for surveillance, criminal prediction, and biometric profiling. These tools have been banned in the EU, but the law includes exceptions for unexpected terrorist threats, searching for missing persons, and prosecuting serious crimes.
  • France and Germany expressed concerns that strict regulations could hinder industry development, but agreement on the proposed solutions was eventually reached.
  • The AI Act will now go through formal adoption by the European Parliament and the Council of the EU. If approved, it could come into effect no earlier than 2025.

AI Act – an agreement of EU countries

"Historic moment! Europe has become the first continent to set clear rules for the use of artificial intelligence" - wrote on the evening of December 8 on platform X Thierry Breton, EU Commissioner for the Internal Market and one of the initiators of the AI Act law. He was echoed by European Commission President Ursula von der Leyen:

"The EU's AI Act is the first such document in the world. These are unique legal frameworks for the development of artificial intelligence that you can trust. And for safety and fundamental rights of people and companies. A commitment we made in our political guidelines - and we fulfilled it. I warmly welcome today's political agreement", wrote von der Leyen on X.

AI Act – the road to agreement

Trilateral negotiations between representatives of the European Commission, the European Parliament and the member states lasted a total of 37 hours. The pressure to conclude them by early December 2023 was extremely high because of next year's European Parliament elections, which will most likely reshape the composition of the EU institutions. Failure to reach an agreement this year would have meant postponing the AI Act indefinitely and nullifying the work done so far.

As a reminder, the preliminary approval of the draft AI Act took place on June 14, 2023, after two years of work. It was a groundbreaking event not only because of the unprecedented nature of the document, but also because it is one of the rare cases in which the law tries to keep up with a new technology before it fully enters the mainstream.

– Artificial intelligence raises many social, ethical and economic concerns. But now is not the time to press the "stop" button. On the contrary, we need to act quickly and take responsibility for it – said Thierry Breton in a press statement at the time.

And these were not empty declarations. The drafters of the AI Act approached the matter as universally as possible, trying to build into the document a long-term perspective resistant to rapid technological change.

– The essence of good law is an attempt to capture the essence of the social or economic phenomenon one wants to regulate, so that the law remains current even if the phenomenon itself evolves. And that is what we tried to achieve: regardless of how AI technology changes, the value system embodied in the law will remain the same – explained Dragos Tudorache, a Romanian lawyer, Member of the European Parliament and one of the main initiators of the AI Act, on Euronews.

According to Cecilia Bonefeld-Dahl, Director General of Digital Europe, an organization working to create a fair legal environment for new technologies, in practice this means that although the scope of the law has been significantly expanded compared to the original 2021 proposal, the document itself has become much more precise. The European Parliament focused not so much on regulating artificial intelligence itself as on identifying and minimizing the social threats posed by its various applications.

AI Act – how Europe will regulate artificial intelligence

Legislators divided the threats associated with AI into four categories: minimal, limited, high and unacceptable risk. The first two levels cover tools and programs with a small – according to the EP – potential for social harm, including spam filters and video games (minimal risk) as well as chatbots and deepfakes (limited risk). For these tools, service providers will only need to ensure minimal transparency, i.e. clearly inform users that they are dealing with a system based on artificial intelligence.

Programs and applications classified into the remaining two threat categories, on the other hand, are subject to much more rigorous restrictions. The list of high-risk tools primarily includes the recommendation systems of large internet platforms, systems used to deliberately influence voters, and tools used by public administration and law enforcement agencies, for example to assess the legal status of detainees.

These are systems that can directly affect people's lives, and as such they will above all be subject to a strict transparency requirement. Among other things, service providers will have to disclose all scientific, artistic and media sources used to train the artificial intelligence algorithm and, if necessary, demonstrate that the algorithm was developed in accordance with the law.

High-risk systems will also be subject to strict monitoring by public institutions and the requirement to register in a separate database. In addition, entities implementing such systems will be obliged to assess the impact of their products on fundamental civil rights. Failure to comply with any of the above restrictions will result in the immediate removal of the application or a fine of up to 7% of the company's global annual turnover.
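
To make the tiered structure easier to follow, the minimal Python sketch below maps the four risk categories described in this section to the obligations the article associates with each of them. It is purely illustrative: the names (RiskTier, Obligations, OBLIGATIONS_BY_TIER) and the way the duties are grouped are assumptions made for the example, not terminology or structure taken from the regulation itself.

```python
# Illustrative sketch only: the AI Act's risk tiers as described in this
# article, paired with the obligations the article lists for each tier.
# All identifiers are hypothetical, not drawn from the regulation.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters, video games
    LIMITED = "limited"            # e.g. chatbots, deepfakes
    HIGH = "high"                  # e.g. platform recommendation systems
    UNACCEPTABLE = "unacceptable"  # e.g. real-time biometric profiling


@dataclass
class Obligations:
    transparency_notice: bool = False        # tell users they face an AI system
    disclose_training_sources: bool = False  # scientific, artistic, media sources
    register_in_eu_database: bool = False    # entry in a separate public database
    fundamental_rights_assessment: bool = False
    banned: bool = False                     # allowed only under narrow exceptions


OBLIGATIONS_BY_TIER = {
    RiskTier.MINIMAL: Obligations(transparency_notice=True),
    RiskTier.LIMITED: Obligations(transparency_notice=True),
    RiskTier.HIGH: Obligations(
        transparency_notice=True,
        disclose_training_sources=True,
        register_in_eu_database=True,
        fundamental_rights_assessment=True,
    ),
    RiskTier.UNACCEPTABLE: Obligations(banned=True),
}

if __name__ == "__main__":
    for tier, duties in OBLIGATIONS_BY_TIER.items():
        print(tier.value, duties)
```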

– By subjecting those who deploy artificial intelligence systems to detailed public scrutiny, we can finally shed light on how algorithms affect people and society. This is a great success and an important step towards democratic control of the use of artificial intelligence in our society – says Angela Müller, head of the Policy & Advocacy team at AlgorithmWatch, a non-governmental organization monitoring the ethical use of modern technologies and algorithms.

AI Act – controversy around the law

From the very beginning, some provisions of the AI Act were controversial. The initial version of the document left legal loopholes in the last, most critical category of AI-based tools: "unacceptable risk" systems, which are used for surveillance, crime prediction and real-time biometric profiling of individuals based on their appearance and movement characteristics.

Such tools pose a direct threat to the foundations of democracy, civil society and European values, and their use has been banned in the European Union; the European Parliament therefore refused to accept the loopholes from the start. In the end, Thierry Breton announced that the collection of biometric data by law enforcement services had been deemed unacceptable. The law does, however, provide for three exceptions:

  • an unexpected threat of a terrorist attack,
  • the need to search for victims or missing persons,
  • the need to prosecute the perpetrators of serious crimes.

"In the final stretch, the European People's Party tried to push through an amendment that would allow governments to introduce real-time biometric surveillance. This dangerous change was blocked. However, it was not possible to convince MEPs to ban the use of such systems completely. Biometric surveillance will be allowed – but only in exceptional situations and with the consent of a court" – write Filip Konopczyński and Anna Obem of the Panoptykon Foundation in their report.

Some member states, primarily France and Germany, also expressed doubts about the shape of the AI Act. According to their representatives, strict regulation of the commercial use of artificial intelligence could excessively hinder the industry's development. The proposed solution was a code of good practice and, as Carme Artigas, the Spanish Secretary of State for Artificial Intelligence, reported, France and Germany ultimately agreed to the proposed solutions as well.

When the AI Act will come into effect

The successful conclusion of the trilateral negotiations between the European Parliament, the European Commission and the member states is only half the battle. The AI Act must now be formally adopted by the European Parliament and the Council of the EU. If adopted, it could come into effect no earlier than 2025.