Leading lawmakers pitch extending scope of AI rulebook to the metaverse


The key lawmakers proposed extending the scope of the AI Act to metaverse environments that meet certain conditions. The latest amendments also covered risk management, data governance and documentation for high-risk systems.

The European Parliament’s co-rapporteurs Dragoş Tudorache and Brando Benifei circulated two new batches of compromise amendments, seen by EURACTIV, on Wednesday (28 September), ahead of the technical discussion with the other political groups on Friday.

These latest batches introduce significant changes to the regulation’s scope, subject matter and obligations for high-risk AI systems concerning risk management, data governance and technical documentation.

Scope

A new article has been added to extend the regulation’s scope to AI system operators in specific metaverse environments that meet several cumulative conditions.

These criteria are that the metaverse requires an authenticated avatar, is built for interaction on a large scale, allows social interactions similar to the real world, enables real-world financial transactions and poses health or fundamental rights risks.

The scope has been expanded from AI providers to any economic operators placing an AI system on the market or putting it into service.

The text specifies that the regulation does not prevent national laws or collective agreements from introducing stricter obligations intended to protect workers’ rights when employers use AI systems.

At the same time, AI systems intended solely for scientific research and development are excluded from the scope.

The question of whether any AI system likely to interact with or impact children should be considered high-risk, as some MEPs have requested, has been postponed to a later date.

In addition, the amendment from centre-right lawmakers that would restrict the scope for AI providers or users in a third country has also been kept for future discussions, as it is linked to the definition, according to a note in the document’s margin.

Subject matter

The rules laid out in the regulation are intended to cover not only the placing of AI systems on the market, but also their development. The objectives of harmonising the rules for high-risk systems and supporting innovation have also been added.

The amendment from centre-left MEPs, led by Benifei, to introduce principles applicable to all AI systems has been ‘parked’, according to a comment in the margin of the text. Similarly, the discussion on the governance model, namely whether it should be an EU agency or an enhanced version of the European Artificial Intelligence Board, was also put on hold.

Requirements for high-risk AI

The compromise amendments state that high-risk AI systems should comply with the AI Act’s requirements throughout their lifetime, taking into account the state of the art and relevant technical standards.

The question of considering the foreseeable uses and misuses of the system in the compliance process has been parked, as it will be addressed together with the topic of general-purpose AI, large models that can be adapted to a variety of tasks.

Regarding the risk management system, the lawmakers clarified that it could be integrated with existing procedures established under sectoral legislation, as is the case in the financial sector, for instance.

Risk management

The risk management system would have to be updated every time there is a significant change to the high-risk AI “to ensure its continuing effectiveness.”

The list of elements that risk management would have to consider has been extended to health, legal and fundamental rights, impact on specific groups, the environment and the amplification of disinformation.

If, after the risk assessment, the AI providers consider there are still relevant residual risks, they should provide a reasoned judgement to the user on why these risks can be considered acceptable.

Data governance

The compromise amendments mandate that, for high-risk AI, techniques such as unsupervised learning and reinforcement learning that do not use validation and testing datasets must be developed on the basis of training datasets that meet a specific set of criteria.

The intention is to prevent the development of biases, and it is reinforced by the requirements to consider potential feedback loops.

Moreover, the text indicates that validation, testing and training datasets must all be separate, and the legality of the data sources must be verified.

Technical documentation

Wording has been introduced to give SMEs more latitude in complying with the obligation to maintain technical documentation for high-risk systems, subject to approval from the national authorities.

The list of technical information has been significantly extended to include information such as the user interface, how the AI system works, expected inputs and outputs, cybersecurity measures, and the carbon footprint.

Source: EURACTIV