AI Risk classification in Europe

Preliminary remarks by Carme ARTIGAS BRUGAL, State Secretary for Digitalisation and Artificial Intelligence of Spain, during the press conference following the Artificial Intelligence Act Trilogue on 9 December 2023 in Brussels.


🔴🔊 Everyone attentive as the EU has just established the world's first rules for Artificial Intelligence. 🌐

The EU's proposed regulatory framework for AI is a pioneering legal initiative aimed at managing AI risks while positioning Europe as a global leader in this field. The framework, intended to guide AI developers, deployers, and users, focuses on clear requirements for AI use, especially for high-risk applications. It emphasizes reducing administrative and financial burdens, particularly for SMEs. This proposal is part of a larger AI package that includes the updated Coordinated Plan on AI, ensuring safety and fundamental rights while fostering AI investment and innovation. The framework categorizes AI risks into four levels, from unacceptable to minimal, with stringent obligations for high-risk AI systems. It also proposes a governance structure at both European and national levels, ensuring AI's trustworthiness and addressing its specific challenges. The proposal outlines a future-proof approach that can adapt to AI's rapid technological advances, and is currently in a transitional phase towards implementation.

=> The AI Act should apply two years after its entry into force, with some exceptions for specific provisions.


“The provisional agreement also clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states’ competences in national security or any entity entrusted with tasks in this area. Furthermore, the AI act will not apply to systems which are used exclusively for military or defence purposes. Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons.”

CLASSIFICATION INTO FOUR AI RISK LEVELS.

The regulatory framework classifies AI risks into four levels: unacceptable, high, limited, and minimal.

Unacceptable-risk AI, such as social scoring and voice-assisted toys that encourage dangerous behaviour, will be banned.

High-risk AI includes technology used in critical infrastructure, education, healthcare, employment, services, law enforcement, and justice. These will face strict requirements such as risk assessment, data quality, traceability, documentation, user information, oversight, and security. Remote biometric systems, especially for law enforcement, have stringent rules and exceptions.

Limited risk AI demands transparency, while minimal-risk AI, like video games or spam filters, is freely usable.
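As an illustrative sketch only (the tier names follow the article, but the example mapping and obligation summaries below are hypothetical simplifications, not taken from the regulation's text), the four-level classification can be modelled as a simple lookup from use case to risk tier:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers described above, with a one-line summary each."""
    UNACCEPTABLE = "banned"
    HIGH = "strict requirements (risk assessment, traceability, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "freely usable"

# Hypothetical example mapping drawn from the article's examples;
# the real Act classifies systems by detailed legal criteria, not keywords.
EXAMPLES = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "critical infrastructure": RiskLevel.HIGH,
    "recruitment screening": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a short summary of the obligations attached to a use case."""
    level = EXAMPLES[use_case]
    return f"{use_case}: {level.name} risk -> {level.value}"

print(obligations("spam filter"))
# spam filter: MINIMAL risk -> freely usable
```

The point of the sketch is the structure, not the mapping: obligations attach to the tier, so classifying a system is the single legal question that determines everything downstream.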
