Finding a balance between protecting fundamental rights and promoting innovation, and drafting laws that remain responsive to artificial intelligence (AI) innovation: these are two of the challenges faced by MEPs Dragoș Tudorache and Brando Benifei, who co-led the EU AI Act through Parliament.
On 13 March, the European Parliament approved the AI Act. The vote was overwhelming: 523 votes in favor, 46 against, and 49 abstentions.
The regulation is scheduled to come into force in May 2024 and is already a historic landmark. Although generative AI is already part of our lives and has made headlines in international media, the European Union (EU) is the first body to approve a set of laws that regulate AI systems while protecting people's fundamental rights.
To better understand this ground-breaking regulation, in this special episode of Euronews Tech Talk we bring you the experiences of the two MEPs who co-led the AI file in the Strasbourg parliament: Dragoș Tudorache and Brando Benifei. The former is Romania's former Minister of the Interior and a member of the liberal Renew Europe group. The latter belongs to the socialist group and was one of the youngest Italian MEPs ever elected.
How did they work together? What challenges did they face?
Origins of the EU AI Act
The story of the EU AI Act began long before the popular tool ChatGPT was launched in 2022. “At the European Commission, before 2019 there was a high-level expert group on AI, its applications and potential risks,” Tudorache explains. “Then, in 2019, President von der Leyen announced that there would be a legislative proposal on AI. In 2020, a special committee was formed, and in 2021 we began to define the concept of AI,” he added.
From the first stages of negotiations, the main challenge was clear: ensuring that the rapid and unpredictable development of AI technology does not outpace the regulation. To address this, lawmakers devised a strategy. “We needed to make sure the obligations were technology-neutral. If we define a transparency obligation, that obligation will remain relevant no matter how complex the algorithms become,” the liberal MEP underlined. Alongside this methodology came a legal technique: “Whatever new uses, with new risks, may arise can be added at any time to Annex III, the list of high-risk uses of artificial intelligence, which was intentionally left open,” Tudorache added.
More precisely, the AI Act follows a risk-based approach: the riskier the AI application, the more scrutiny it receives.
“We limit the use of real-time biometric cameras in public places to tracking suspects of very serious crimes, and we also ban emotion recognition in workplaces and schools,” the MEP added.
But how can we ensure that the protection of human rights does not hinder technological development?
“The purpose of this law is not to stifle innovation, but rather to build trust. Our model, in which consumers are strongly protected and human rights are central, will not be undermined by this disruptive technology. Instead, it is integrated into the development of AI models,” says Benifei.