Europe’s Artificial Intelligence Act takes a new step forward with the approval of the European Parliament report on the proposal for a regulation to establish a legal framework for artificial intelligence in the European Union.
The report was approved with 87 votes in favour, 7 against and 12 abstentions at the joint meeting of the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees.
The report tabled by co-rapporteurs Brando Benifei (S&D) and Dragoş Tudorache (Renew Europe) must now be voted on at the next plenary session of the Eurochamber, scheduled for 12–15 June, ahead of inter-institutional negotiations with the EU Council.
The aim is to give the green light by the end of the parliamentary term (in spring 2024) to the world’s first horizontal and wide-ranging legislation on artificial intelligence, which will regulate one of the most crucial aspects of the EU’s dual digital and green transition.
A difficult balance between rights and innovation
“We can be very proud of what we have achieved in these intense months of fruitful discussions. It is the first attempt in the world to regulate artificial intelligence in a horizontal way. We had to explore new concepts, as well as complex definitional issues and rapid market developments,” said the co-rapporteur for the European Parliament, Brando Benifei, on the eve of the vote on the Artificial Intelligence Act.
In a separate vote, MEPs also approved the compromise in the EU Parliament’s position on the AI Act to ban the use of biometric recognition programmes in public places: a permanent ban on the use of biometric details, such as fingerprints, DNA, voice and gait, to identify people in publicly accessible spaces.
Tougher obligations for foundation AI models
At the end of April, after months of negotiations between MEPs, the European Parliament reached a provisional political agreement on the Artificial Intelligence Act, the world’s first regulation on artificial intelligence.
The EU Parliament confirmed the Commission’s proposals to impose stricter obligations on foundation models, a category of general-purpose AI that also includes ChatGPT. With regard to generative AI, Strasbourg decided that these systems must be designed in compliance with EU law and fundamental freedoms.
The Parliament also extended the ban on biometric identification software, which had previously been banned only for real-time use: ex post use is now allowed only for serious crimes and only after authorisation by a judge.
Furthermore, the use of emotion recognition software is banned in the areas of law enforcement, border management, labour and education.
The ban on predictive policing has been extended from criminal to administrative offences, prompted by the Dutch child benefit scandal, in which thousands of families were wrongly accused of fraud by an algorithm.
The Commission’s proposal defined high-risk systems as those applied, for example, to critical networks, employment, education and training, and essential public services.
MEPs introduced an additional layer, also classifying as high-risk those systems that may cause harm to health, safety or fundamental rights.
Significant risk means ‘the result of the combination of its severity, intensity, likelihood of occurrence and duration of its effects, and its ability to affect an individual, a plurality of persons or a particular group of persons’.
Finally, the recommendation systems of large online platforms, as defined by the Digital Services Act, are also considered high-risk.
Protections for sensitive data have also been strengthened, with tighter controls on how providers of high-risk systems may process sensitive data, such as sexual orientation or political and religious beliefs. In practice, such information may only be processed when bias cannot be detected through the processing of synthetic, anonymised, pseudonymised or encrypted data.
The processing must take place in a controlled environment, and the data must not be transmitted and must be deleted after the bias assessment. Providers must also document the reasons for the processing.