European Union’s AI Act continues its journey in Brussels: what it provides for

Elizabeth Smith

The AI Act, the European bill to regulate the use of artificial intelligence (AI) and a world first, continues to make progress. The European Parliament has given the measure its first green light and approved a series of amendments, including precise limits on facial recognition technologies, considered one of the greatest threats to the fundamental rights of consumers and citizens.

Progress on the AI Act

With 84 votes in favour, 7 against and 12 abstentions, MEPs gave the go-ahead to the EU Parliament’s position on the AI Act, which aims to establish European standards regulating how artificial intelligence is used and, above all, how it influences people’s daily lives.

The text was approved during the joint meeting of the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees of the EU Parliament.

The plenary vote is now expected between 12 and 15 June, after which the trilogues, i.e. the inter-institutional negotiations with the EU Council, can begin.

The aim is to complete the process by spring 2024, which also coincides with the end of the parliamentary term and the European elections.

The AI Act risk scale

Besides proposing common rules for the Twenty-Seven, the AI Act identifies the risks that AI can pose when it comes into contact with human beings. Not all risks, however, are treated equally.

Some risks are considered higher by the regulation, such as those concerning fundamental rights, health or security. For these, manufacturers of AI systems are required to follow certification procedures covering, among other things, verification of data quality, human oversight and the explainability of the algorithms.

Then there are systems whose uses are considered unacceptable risks and are therefore banned, as in the case of biometric recognition cameras in public spaces, predictive policing and the use of AI for emotion recognition in certain areas.

For Sofo, too, the AI Act must be a tool that sets limits where technology violates privacy and the right to the protection of personal data, or enables social scoring, i.e. the social credit system devised by China to rank the reputation of its citizens.

Sofo also gave the example of what is already happening in the Metaverse, where the use of data and the creation of avatars make it possible to study behaviour and obtain predictive analyses even of political opinions, with obvious consequences for personal freedom and the manipulation of people.

The AI risk pyramid

To classify risks on a common scale, the bill provides for a pyramid with four levels:

  • low (AI-enabled video games and spam filters);

  • limited (chatbots);

  • high (scoring of school and professional examinations, sorting of CVs, assessment of the reliability of evidence in court, robot-assisted surgery);

  • unacceptable (anything that poses a ‘clear threat to people’s security, livelihood and rights’, such as the awarding of a ‘social score’ by governments).

For low-risk systems there is no intervention; for limited-risk ones there are transparency requirements. High-risk technologies, on the other hand, are to be regulated, and those at a level considered unacceptable are banned.


What about research?

Regulating AI, however, does not mean shutting down innovation and progress, which is why MEPs have provided exemptions for research activities and for AI components supplied under open-source licences.

Benifei spoke of ‘limited limits’ for research, since the regulation does not concern the research phase but the final product that is placed on the market and interacts with human beings. He instead called for more joint action at European level, not least to be able to compete with powers such as the United States and China.

On this point, Sofo also said that a balance must be struck between the protection of freedom and fundamental rights and the research and development of a technology that will in any case move forward in the world, and that will otherwise be imported if Europe lacks strategic autonomy.
