
I. INTRODUCTION

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted on 17 May 2024 by the Committee of Ministers of the Council of Europe at its 133rd Session in Strasbourg. It will be opened for signature at the Conference of Ministers of Justice in Vilnius, Lithuania, on 5 September 2024. The Framework Convention, negotiated by the Committee on Artificial Intelligence (CAI), is the first binding international treaty on AI and now awaits signature and ratification by States. The negotiating parties aimed to ensure that existing protections for human rights, democracy, and the rule of law apply to AI-related challenges, without creating new substantive human rights or undermining existing protections.

The treaty was designed as a global instrument, with participation from the 46 Council of Europe member States and several non-European countries, including Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, Uruguay, and the United States, along with the European Union. The negotiation process was inclusive, involving many Council of Europe bodies, other intergovernmental organisations such as the OECD, the OSCE and UNESCO, and around 70 representatives from civil society, business, and the technical and academic community, who actively participated and submitted comments and text proposals until the final day of negotiations.

Negotiating a binding instrument in such an inclusive process within a short deadline was intense and challenging. The parties had to bridge differences between States' legal systems and traditions, including differences in the interpretation of some human rights, and manage expectations about developing a legal instrument that would impact AI governance globally. After intense negotiations, representatives from 57 States agreed on the result.

The Framework Convention establishes fundamental principles and rules that safeguard human rights, democracy, and the rule of law while promoting progress and technological innovation. It complements existing international standards and aims to fill legal gaps resulting from rapid technological advances. The Convention is technology-neutral: it addresses activities within the lifecycle of AI systems rather than the technology as such. Its implementation follows a graduated and differentiated approach based on the severity and probability of adverse impacts on human rights, democracy, and the rule of law. The Convention applies to both the public and private sectors, with limited exemptions for national security and for research and development. Matters relating to national defense are excluded from the Convention's scope, in line with the Statute of the Council of Europe.

"The Framework Convention imposes a requirement on all future Parties to manage the risks associated with activities across the lifecycle of AI conducted by both public and private entities. It emphasizes considering the distinct roles and responsibilities of all stakeholders, offering flexibility to Parties in fulfilling agreed-upon obligations within their national legal and institutional frameworks, and aligning with their international human rights commitments. The treaty and its implementation framework will also create opportunities for collaboration with all stakeholders, including States that may not have ratified it yet, thereby enhancing its potential for global impact. To effectively govern AI now and in the future, our societies must establish a comprehensive set of technical, legal, and socio-cultural norms suitable for the diverse applications of AI in different societal and economic contexts worldwide. This effort parallels the historical development of norms over the past two centuries to regulate the use of engines in various vehicles and machines for different purposes. Thus, the Framework Convention will not function independently but will represent a crucial milestone in establishing a governance framework for AI. This framework aims to ensure that all members of our societies benefit from AI systems and participate in innovative societies and economies, while upholding and reinforcing existing safeguards for human rights, democracy, and the rule of law."

II. HOW WAS THE FRAMEWORK CONVENTION ELABORATED

In 2019, efforts began with the establishment of the ad hoc Committee on Artificial Intelligence (CAHAI) to assess the feasibility of creating a Framework Convention. This committee was succeeded in 2022 by the Committee on Artificial Intelligence (CAI), which was responsible for drafting and negotiating the treaty text. The Framework Convention was crafted collaboratively by 46 member states of the Council of Europe, along with observer states such as Canada, Japan, Mexico, the Holy See, and the United States of America, as well as the European Union. Additionally, several non-member states including Australia, Argentina, Costa Rica, Israel, Peru, and Uruguay participated actively. Following the Council of Europe's tradition of engaging multiple stakeholders, the development process included 68 international representatives from civil society, academia, and industry, along with contributions from various other international organizations.

III. WHAT DOES THE FRAMEWORK CONVENTION REQUIRE STATES TO DO

Fundamental Principles

The Framework Convention mandates states to ensure that activities involving AI systems adhere to the following core principles:

  1. Human Dignity and Individual Autonomy: Respect and uphold the inherent worth and independence of every person.
  2. Equality and Non-Discrimination: Ensure fairness and impartiality, preventing any form of bias or unequal treatment.
  3. Respect for Privacy and Personal Data Protection: Safeguard individuals' private information and personal data.
  4. Transparency and Oversight: Maintain openness and allow for monitoring and scrutiny.
  5. Accountability and Responsibility: Ensure that entities using AI are answerable for their actions and decisions.
  6. Reliability: Guarantee that AI systems are dependable and function as intended.
  7. Safe Innovation: Promote advancements in technology while ensuring safety.

Remedies, Procedural Rights & Safeguards

  • Record pertinent information about AI systems and their usage, and provide this information to affected individuals.
  • Ensure that the information is adequate for affected individuals to contest decisions made by or significantly influenced by AI systems, and to challenge the use of the systems themselves.
  • Offer a viable avenue for affected individuals to file complaints with competent authorities.
  • Guarantee effective procedural protections, safeguards, and rights to affected individuals when AI systems have a substantial impact on human rights and fundamental freedoms.
  • Provide notification that individuals are interacting with an AI system rather than a human being.

Risk and Impact Management Requirements

  1. Conduct assessments of risks and impacts on human rights, democracy, and the rule of law, in an iterative process.
  2. Implement sufficient measures for prevention and mitigation based on these assessments.
  3. Empower authorities to impose bans or moratoriums on certain uses of AI systems ("red lines").

IV. WHO IS COVERED BY THE FRAMEWORK CONVENTION

The Framework Convention applies to the use of AI systems by governmental entities, including private actors acting on their behalf, and also encompasses private entities. It provides Parties with two options to comply with its principles and obligations when regulating the private sector: Parties may choose to directly follow the Convention's applicable provisions, or alternatively, they can implement other measures to meet the treaty's requirements while fully respecting their international obligations concerning human rights, democracy, and the rule of law. Parties are not obliged to extend the provisions of the treaty to activities related to protecting their national security interests, but they must ensure that such activities adhere to international law and uphold democratic institutions and processes. The Framework Convention does not extend to matters of national defense and excludes research and development activities, unless the testing of AI systems has the potential to affect human rights, democracy, or the rule of law.

V. HOW IS THE IMPLEMENTATION OF THE FRAMEWORK CONVENTION MONITORED

The Framework Convention sets up a monitoring mechanism called the Conference of the Parties. This body consists of official representatives of the Parties to the Convention. Their role is to assess how well the provisions of the Convention are being put into practice. The findings and recommendations of the Conference of the Parties are crucial for ensuring that States adhere to the Framework Convention and for maintaining its effectiveness over time. Additionally, the Conference of the Parties will promote cooperation with relevant stakeholders, including by organizing public hearings on important aspects of the Convention's implementation.

VI. ARTIFICIAL INTELLIGENCE & HUMAN RIGHTS

Artificial Intelligence (AI) is rapidly transforming various aspects of our lives, influencing how we access information, make decisions, and interact within society. As AI technologies continue to evolve, their impact on governance, public institutions, and citizen engagement in democratic processes is expected to grow significantly. While AI presents numerous benefits, it also poses serious risks that must be addressed to protect fundamental human rights.

The Council of Europe plays a crucial role in ensuring that human rights, democracy, and the rule of law are upheld in the digital landscape. It is imperative that AI is developed and utilized in a manner that aligns with these core values. The organization has a history of establishing innovative standards that often set the stage for global norms. In line with this tradition, the Council is actively addressing the challenges posed by AI through a collaborative approach that involves various stakeholders, including international organizations, civil society, businesses, and academic institutions.

A significant milestone in this effort is the recent adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law by the Committee of Ministers of the Council of Europe. This pioneering legal instrument is the first of its kind globally, aimed at ensuring that AI systems adhere to established standards concerning human rights and democratic principles. The Convention seeks to mitigate the risks associated with AI technologies that could potentially undermine these values.

Key aspects of the Framework Convention include the promotion of transparency, accountability, and fairness in AI systems. It emphasizes the need for robust safeguards to protect individuals from discrimination and violations of their rights. Additionally, the Convention advocates for the involvement of affected individuals in decision-making processes related to AI, ensuring that their voices are heard and their rights are respected.

Moreover, the Convention outlines the importance of conducting thorough assessments of the potential impacts of AI on human rights and democracy. This includes evaluating both the positive and negative consequences of AI applications, allowing for informed decision-making and the implementation of necessary preventive measures. By establishing clear guidelines and responsibilities for AI developers and users, the Convention aims to foster a culture of responsibility and ethical conduct in the deployment of AI technologies.

As AI continues to shape our world, it is essential to prioritize the protection of human rights and democratic values. The Council of Europe's Framework Convention represents a significant step toward achieving this goal, providing a comprehensive framework for the responsible development and use of AI. By adhering to these principles, we can harness the potential of AI while safeguarding the rights and freedoms that are fundamental to our societies.

VII. ARTIFICIAL INTELLIGENCE ACT

Article 1 - Subject matter

1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of artificial intelligence systems (AI systems) in the Union, and to support innovation.

2. This Regulation lays down:

  • harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;
  • prohibitions of certain AI practices;
  • specific requirements for high-risk AI systems and obligations for operators of such systems;
  • harmonised transparency rules for certain AI systems;
  • harmonised rules for the placing on the market of general-purpose AI models;
  • rules on market monitoring, market surveillance governance and enforcement;
  • measures to support innovation, with a particular focus on SMEs, including startups.
Article 2 - Scope

1. This Regulation applies to:

  • providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
  • deployers of AI systems that have their place of establishment or are located within the Union;
  • providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
  • importers and distributors of AI systems;
  • product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  • authorised representatives of providers, which are not established in the Union;
  • affected persons that are located in the Union.

2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) and (2) related to products covered by the Union harmonisation legislation listed in section B of Annex I, only Article 112 applies. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.

3. This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences.

This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

4. This Regulation applies neither to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.

5. This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services as set out in Chapter II of Regulation (EU)

6. This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

7. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to the arrangements provided for in Article 10(5) and Article 59 of this Regulation.

8. This Regulation does not apply to any research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. The testing in real world conditions shall not be covered by that exclusion.

9. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

10. This Regulation does not apply to the obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

11. This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.

12. This Regulation applies to AI systems released under free and open source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.

PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES

Article 5

Prohibited AI Practices

1. The following AI practices shall be prohibited:

(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of, materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;

(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(c) the placing on the market, the putting into service or the use of AI systems for the purpose of the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

  1. detrimental or unfavourable treatment of certain natural persons or whole groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
  2. detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;

(d) the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the likelihood of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;

(e) the placing on the market, the putting into service for this specific purpose, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

(g) the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement;

(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as searching for missing persons;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;

(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation, prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
Article 6

Classification rules for high-risk AI systems

1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

  (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
  (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.

3. By derogation from paragraph 2, an AI system shall not be considered to be high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This shall be the case where one or more of the following conditions are fulfilled:

  (a) the AI system is intended to perform a narrow procedural task;
  (b) the AI system is intended to improve the result of a previously completed human activity;
  (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
  (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.

4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than … [18 months from the date of entry into force of this Regulation], provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

6. The Commission shall adopt delegated acts in accordance with Article 97 to amend the conditions laid down in paragraph 3, first subparagraph, of this Article. The Commission may adopt delegated acts in accordance with Article 97 in order to add new conditions to those laid down in paragraph 3, first subparagraph, or to modify them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.

The Commission shall adopt delegated acts in accordance with Article 97 in order to delete any of the conditions laid down in paragraph 3, first subparagraph, where there is concrete and reliable evidence that this is necessary for the purpose of maintaining the level of protection of health, safety and fundamental rights in the Union.

Any amendment to the conditions laid down in paragraph 3, first subparagraph, shall not decrease the overall level of protection of health, safety and fundamental rights in the Union.

When adopting the delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and shall take account of market and technological developments.
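
To make the two-limb classification logic of Article 6 easier to follow, here is a minimal, illustrative Python sketch of how a provider might encode the decision procedure in an internal triage tool. The field names and the simplified handling of the Annex III derogation are assumptions for demonstration, not terms defined by the Regulation.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Simplified, hypothetical description of an AI system for triage purposes."""
    is_annex_i_safety_component: bool      # Art. 6(1)(a): safety component of / product under Annex I
    requires_third_party_assessment: bool  # Art. 6(1)(b): third-party conformity assessment required
    listed_in_annex_iii: bool              # Art. 6(2): use case listed in Annex III
    performs_profiling: bool               # Art. 6(3), last subparagraph: profiling stays high-risk
    derogation_conditions_met: bool        # Art. 6(3)(a)-(d): e.g. narrow procedural task only

def is_high_risk(profile: AISystemProfile) -> bool:
    """Apply the Article 6 classification rules to a system profile."""
    # Article 6(1): both conditions must be fulfilled.
    if profile.is_annex_i_safety_component and profile.requires_third_party_assessment:
        return True
    # Article 6(2) and (3): Annex III systems are high-risk unless a derogation applies,
    # but profiling of natural persons is always considered high-risk.
    if profile.listed_in_annex_iii:
        if profile.performs_profiling:
            return True
        return not profile.derogation_conditions_met
    return False

# Example: an Annex III system performing only a narrow procedural task, without profiling.
example = AISystemProfile(False, False, True, False, True)
print(is_high_risk(example))  # False - but Art. 6(4) still requires documenting this assessment
```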

Requirements for high-risk AI systems

Article 8

Compliance with the requirements

  1. High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purposes as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
  2. Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation.

In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I.

Article 9

Risk management system

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:

(a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;

(c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;

(d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).

3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.

4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.

5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. In identifying the most appropriate risk management measures, the following shall be ensured:

(a) elimination or reduction of identified and evaluated risks pursuant to paragraph 2 as far as technically feasible through adequate design and development of the high-risk AI system;

(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;

(c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers.

With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.

6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.

7. Testing procedures may include testing in real-world conditions in accordance with Article 60.

8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.

9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other groups of vulnerable persons.

10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.
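
Article 9 describes the risk management system as a continuous, iterative, documented process. The Python sketch below shows one hypothetical way a provider might keep that loop as a living risk register; the structure, field names and severity scales are illustrative assumptions, not anything prescribed by the Regulation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Risk:
    """One identified risk to health, safety or fundamental rights (Art. 9(2)(a)-(b))."""
    description: str
    severity: int        # e.g. 1 (negligible) to 5 (critical) - scale is an assumption
    probability: int     # e.g. 1 (rare) to 5 (frequent)
    mitigations: List[str] = field(default_factory=list)   # Art. 9(2)(d) measures
    residual_acceptable: bool = False                       # Art. 9(5) judgement

@dataclass
class RiskRegister:
    """Documented and maintained risk management record (Art. 9(1))."""
    system_name: str
    reviews: List[date] = field(default_factory=list)
    risks: List[Risk] = field(default_factory=list)

    def review(self, today: date) -> List[Risk]:
        """Regular systematic review (Art. 9(2)): return risks still needing measures."""
        self.reviews.append(today)
        return [r for r in self.risks if not r.residual_acceptable]

register = RiskRegister("credit-scoring assistant")
register.risks.append(Risk("biased refusal rates for a protected group", severity=4, probability=3))
open_items = register.review(date.today())
print(f"{len(open_items)} risk(s) still require mitigation or acceptance")
```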

Article 10

Data and data governance

1. High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.

2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular:

  (a) the relevant design choices;
  (b) data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection;
  (c) relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation;
  (d) the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent;
  (e) an assessment of the availability, quantity and suitability of the data sets that are needed;
  (f) examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations;
  (g) appropriate measures to detect, prevent and mitigate possible biases identified according to point (f);
  (h) the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed.

3. Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof.

4. Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used.

5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, all the following conditions shall apply in order for such processing to occur:

  • the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data;
  • the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state of the art security and privacy-preserving measures, including pseudonymisation;
  • the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons with appropriate confidentiality obligations have access to those personal data;
  • the personal data in the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties;
  • the personal data in the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first;
  • the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data.

6. For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 to 5 apply only to the testing data sets.
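
Article 10(2), points (f) and (g), require examining data sets for possible biases and taking measures to detect and mitigate them. Purely as a hedged illustration, the snippet below computes a simple selection-rate disparity between two groups in a toy data set; the group labels, records and the 0.8 threshold (the informal "four-fifths rule") are assumptions, and real examinations would use domain-appropriate metrics and, where special categories of personal data are involved, the safeguards of Article 10(5).

```python
# Toy example: compare positive-outcome rates between two groups in a labelled data set.
records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1}, {"group": "A", "outcome": 0},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]

def selection_rate(group: str) -> float:
    """Share of records in the group with a positive outcome."""
    members = [r for r in records if r["group"] == group]
    return sum(r["outcome"] for r in members) / len(members)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # illustrative threshold, not set by the Regulation
    print("possible bias flagged for review and mitigation (cf. Art. 10(2)(f)-(g))")
```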

Article 12

Record-keeping

1. High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) over their lifetime.

2. In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for:

  (a) identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification;
  (b) facilitating the post-market monitoring referred to in Article 72; and
  (c) monitoring the operation of high-risk AI systems referred to in Article 26(6).

3. For high-risk AI systems referred to in point 1 (a) of Annex III, the logging capabilities shall provide, at a minimum:

  • recording of the period of each use of the system (start date and time and end date and time of each use);
  • the reference database against which input data has been checked by the system;
  • the input data for which the search has led to a match;
  • the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
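
For the logging capabilities described in Article 12(3), the sketch below shows a minimal, assumption-laden record structure for one use of a remote biometric identification system (Annex III, point 1(a)); the field names and the JSON format are illustrative choices, not specified by the Regulation.

```python
import json
from datetime import datetime, timezone

def log_use_event(reference_database: str, matched_inputs: list[str], verifiers: list[str],
                  started_at: datetime, ended_at: datetime) -> str:
    """Build one automatic log entry covering the minimum items of Article 12(3)."""
    entry = {
        "use_started": started_at.isoformat(),     # period of each use (start date and time)
        "use_ended": ended_at.isoformat(),          # period of each use (end date and time)
        "reference_database": reference_database,   # database against which input data was checked
        "matched_input_data": matched_inputs,       # input data for which the search led to a match
        "verifying_persons": verifiers,             # natural persons verifying the results (Art. 14(5))
    }
    return json.dumps(entry)

now = datetime.now(timezone.utc)
print(log_use_event("watchlist-v3", ["frame_0142"], ["officer_17", "officer_23"], now, now))
```
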
Article 13

Transparency and provision of information to deployers

1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3.

2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.

3. The instructions for use shall contain at least the following information:

  (a) the identity and the contact details of the provider and, where applicable, of its authorised representative;
  (b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
    (i) its intended purpose;
    (ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
    (iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2);
    (iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;
    (v) when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;
    (vi) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;
    (vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;

(c) the changes to the high-risk AI system and its performance which have been predetermined by the provider at the moment of the initial conformity assessment, if any;

(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;

(e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;

(f) where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12.
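
Because Article 13(3) enumerates the minimum content of the instructions for use, a provider could maintain that information as structured data and render it into documentation. The sketch below is a hypothetical, simplified template covering only a subset of the required items; it is not an official schema, and the field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InstructionsForUse:
    """Subset of the Article 13(3) items, held as structured data (illustrative only)."""
    provider_identity: str                   # point (a): identity and contact details
    intended_purpose: str                    # point (b)(i)
    accuracy_metrics: Dict[str, float]       # point (b)(ii): tested and expected levels
    known_risk_circumstances: List[str]      # point (b)(iii)
    human_oversight_measures: List[str]      # point (d)
    maintenance_measures: List[str] = field(default_factory=list)  # point (e)

doc = InstructionsForUse(
    provider_identity="Example Provider, compliance@example.org",
    intended_purpose="Triage of incoming support tickets",
    accuracy_metrics={"macro_f1": 0.91},
    known_risk_circumstances=["degraded accuracy on non-English tickets"],
    human_oversight_measures=["all rejections reviewed by a human agent"],
)
print(doc.intended_purpose)
```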

Article 14

Human oversight

1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.

2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

3. The oversight measures shall be commensurate to the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:

(a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.

4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the following circumstances:

(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

(b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;

(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;

(e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.

5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
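
Paragraph 5 of Article 14 requires that, for the remote biometric identification systems of Annex III point 1(a), no action or decision is taken on an identification unless it has been separately verified by at least two natural persons with the necessary competence, training and authority. The snippet below is a deliberately simple illustration of such a gate; the class, names and in-memory approach are assumptions for demonstration only.

```python
class MatchVerification:
    """Collects separate confirmations before a biometric match may be acted upon (Art. 14(5))."""

    def __init__(self, match_id: str, required_confirmations: int = 2):
        self.match_id = match_id
        self.required = required_confirmations
        self.confirmed_by: set[str] = set()

    def confirm(self, person_id: str) -> None:
        """Record one person's confirmation; repeat confirmations by the same person do not count twice."""
        self.confirmed_by.add(person_id)

    def may_act(self) -> bool:
        """True only once at least the required number of distinct persons have confirmed."""
        return len(self.confirmed_by) >= self.required

verification = MatchVerification("match-0142")
verification.confirm("officer_17")
print(verification.may_act())   # False - a single confirmation is not enough
verification.confirm("officer_23")
print(verification.may_act())   # True - two distinct persons have separately verified
```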

Article 15

Accuracy, robustness and cybersecurity

  1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
  2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.
  3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
  4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (‘feedback loops’), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
  5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (‘data poisoning’), or pre-trained components used in training (‘model poisoning’), inputs designed to cause the AI model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws.
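
Robustness under Article 15(4) can be probed, among other ways, by checking how a model's outputs behave under small perturbations of its inputs. The sketch below does this for a toy linear scoring function using only the standard library; it is a hedged illustration of the idea, not a method mandated by the Regulation and not a substitute for proper adversarial testing.

```python
import random

# Toy "model": a fixed linear score over three input features (illustrative only).
WEIGHTS = [0.4, -0.2, 0.7]

def score(features: list[float]) -> float:
    return sum(w * x for w, x in zip(WEIGHTS, features))

def decision(features: list[float], threshold: float = 0.5) -> bool:
    return score(features) >= threshold

def perturbation_flip_rate(features: list[float], epsilon: float = 0.05, trials: int = 1000) -> float:
    """Fraction of small random perturbations that flip the decision - a crude robustness probe."""
    baseline = decision(features)
    flips = 0
    for _ in range(trials):
        noisy = [x + random.uniform(-epsilon, epsilon) for x in features]
        if decision(noisy) != baseline:
            flips += 1
    return flips / trials

sample = [0.6, 0.1, 0.5]
print(f"decision flips under ±0.05 input noise in {perturbation_flip_rate(sample):.1%} of trials")
```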

Obligations of providers and deployers of high-risk AI systems and other parties

Article 16

Obligations of providers of high-risk AI systems

Providers of high-risk AI systems shall:

(a) ensure that their high-risk AI systems are compliant with the requirements set out in Section 2;

(b) indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, and the address at which they can be contacted;

(c) have a quality management system in place which complies with Article 17;

(d) keep the documentation referred to in Article 18;

(e) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19;

(f) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service;

(g) draw up an EU declaration of conformity in accordance with Article 47;

(h) affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48;

(i) comply with the registration obligations referred to in Article 49(1);

(j) take the necessary corrective actions and provide information as required in Article 20;

(k) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2;

(l) ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives.

Article 25

Responsibilities along the AI value chain

1. Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:

(a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations therein are allocated otherwise;

(b) they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;

(c) they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.

2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.

3. In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances:

(a) the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer;

(b) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market.

4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open licence.

The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format.

5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law.

Article 40

Harmonised standards and standardisation deliverables

1. High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter IV of this Regulation, to the extent that those standards cover those requirements or obligations.

2. The Commission shall issue standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, obligations set out in Chapter IV of this Regulation, in accordance with Article 10 of Regulation (EU) No 1025/2012, without undue delay. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models.

When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the advisory forum. When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that AI systems or AI models placed on the market or put into service in the Union meet the relevant requirements laid down in this Regulation.

The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of Regulation (EU).

3. The participants in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as the competitiveness and growth of the Union market, and shall contribute to strengthening global cooperation on standardisation and taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and shall enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU).

TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS

Article 50

Transparency obligations for providers and deployers of certain AI systems

  1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence.
  2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state-of-the-art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.
  3. Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in compliance with Union law.
  4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
  5. The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements.
  6. Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be without prejudice to other transparency obligations laid down in Union or national law for deployers of AI systems.
  7. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The Commission is empowered to adopt implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 56(6), (7) and (8). If it deems the code is not adequate, the Commission is empowered to adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2).
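
By way of illustration only, part of the marking obligation in Article 50(2) could, for image outputs, be supported through machine-readable provenance metadata. The Python sketch below is a simplified, hypothetical example assuming the third-party Pillow library and an invented "ai_generated" metadata field; the Regulation does not prescribe any particular technique and instead expects effective, interoperable solutions aligned with technical standards.

```python
"""Minimal sketch of a machine-readable 'AI-generated' marker in image metadata.

Assumes the third-party Pillow package (pip install Pillow). The metadata keys
and the generator identifier are illustrative assumptions, not requirements of
the AI Act.
"""
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_marker(image: Image.Image, path: str) -> None:
    """Embed a simple provenance flag as a PNG text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")  # hypothetical identifier
    image.save(path, pnginfo=meta)

def is_marked(path: str) -> bool:
    """Check whether the provenance flag is present and set."""
    with Image.open(path) as im:
        return im.info.get("ai_generated") == "true"

if __name__ == "__main__":
    synthetic = Image.new("RGB", (64, 64), color="grey")  # stand-in for model output
    save_with_marker(synthetic, "output.png")
    print(is_marked("output.png"))  # True
```

In practice, plain metadata can be stripped when content is re-encoded or shared, so robust watermarking or signed content credentials would typically be needed alongside an approach of this kind.
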
Article 56

Codes of practice

  1. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches.
  2. The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following issues: (a) means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in the light of market and technological developments; (b) the adequate level of detail for the summary about the content used for training; (c) the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate; (d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in the light of the possible ways in which such risks may emerge and materialise along the AI value chain.
  3. The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process.
  4. The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives, and that they take due account of the needs and interests of all interested parties, including affected persons, at Union level.
  5.  The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes, including as measured against the key performance indicators as appropriate. Key performance indicators and reporting commitments shall reflect differences in size and capacity between various participants.
  6. The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, as well as the issues listed in paragraph 2 of this Article, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes of practice. The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
  7. The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For providers of general-purpose AI models not presenting systemic risks this adherence may be limited to the obligations provided for in Article 53, unless they declare explicitly their interest to join the full code.
  8. The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in the light of emerging standards. The AI Office shall assist in the assessment of available standards.
  9. Codes of practice shall be ready at the latest by … [nine months from the date of entry into force of this Regulation]. The AI Office shall take the necessary steps, including inviting providers pursuant to paragraph 7. If, by ... [12 months from the date of entry into force], a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

MEASURES IN SUPPORT OF INNOVATION

Article 57

AI regulatory sandboxes

1. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by … [24 months from the date of entry into force of this Regulation]. That sandbox may also be established jointly with the competent authorities of one or more other Member States. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes. The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox in so far as that participation provides an equivalent level of national coverage for the participating Member States.

2. Additional AI regulatory sandboxes at regional or local level, or established jointly with the competent authorities of other Member States may also be established.

3. The European Data Protection Supervisor may also establish an AI regulatory sandbox for Union institutions, bodies, offices and agencies, and may exercise the roles and the tasks of national competent authorities in accordance with this Chapter.

4. Member States shall ensure that the competent authorities referred to in paragraphs 1 and 2 allocate sufficient resources to comply with this Article effectively and in a timely manner. Where appropriate, national competent authorities shall cooperate with other relevant authorities, and may allow for the involvement of other actors within the AI ecosystem. This Article shall not affect other regulatory sandboxes established under Union or national law. Member States shall ensure an appropriate level of cooperation between the authorities supervising those other sandboxes and the national competent authorities.

5. AI regulatory sandboxes established under paragraph (1) shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the prospective providers and the competent authority. Such regulatory sandboxes may include testing in real world conditions supervised in the sandbox.

6. Competent authorities shall provide, as appropriate, guidance, supervision and support within the AI regulatory sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, testing, mitigation measures, and their effectiveness in relation to the obligations and requirements of this Regulation and, where relevant, other Union and Member State law supervised within the sandbox.

7. Competent authorities shall provide providers and prospective providers using the AI regulatory sandbox with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in this Regulation. Upon request of the provider or prospective provider of the AI system, the competent authority shall provide a written proof of the activities successfully carried out in the sandbox. The competent authority shall also provide an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers may use such documentation to demonstrate their compliance with this Regulation through the conformity assessment process or relevant market surveillance activities. In this regard, the exit reports and the written proof provided by the national competent authority shall be taken positively into account by market surveillance authorities and notified bodies, with a view to accelerating conformity assessment procedures to a reasonable extent.

8. Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective provider, the Commission and the Board shall be authorised to access the exit reports and shall take them into account, as appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national competent authority explicitly agree, the exit report may be made publicly available through the single information platform referred to in this Article.

9. The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives:

  • improving legal certainty to achieve regulatory compliance with this Regulation or, where relevant, other applicable Union and national law;
  • supporting the sharing of best practices through cooperation with the authorities involved in the AI regulatory sandbox;
  • fostering innovation and competitiveness and facilitating the development of an AI ecosystem;
  • contributing to evidence-based regulatory learning;
  • facilitating and accelerating access to the Union market for AI systems, in particular when provided by SMEs, including start-ups.

10. National competent authorities shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national or competent authorities are associated with the operation of the AI regulatory sandbox and involved in the supervision of those aspects to the extent of their respective tasks and powers.

11. The AI regulatory sandboxes shall not affect the supervisory or corrective powers of the competent authorities supervising the sandboxes, including at regional or local level. Any significant risks to health and safety and fundamental rights identified during the development and testing of such AI systems shall result in an adequate mitigation. National competent authorities shall have the power to temporarily or permanently suspend the testing process, or the participation in the sandbox if no effective mitigation is possible, and shall inform the AI Office of such decision. National competent authorities shall exercise their supervisory powers within the limits of the relevant law, using their discretionary powers when implementing legal provisions in respect of a specific AI sandbox project, with the objective of supporting innovation in AI in the Union.

12. Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. To the extent that other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law.

13. The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate cross-border cooperation between national competent authorities.

14. National competent authorities shall coordinate their activities and cooperate within the framework of the Board.

15. National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox, and may ask it for support and guidance. The AI Office shall make publicly available a list of planned and existing AI sandboxes and keep it up to date in order to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation.

16. National competent authorities shall submit annual reports to the AI Office and to the Board, starting one year after the establishment of the AI regulatory sandbox and every year thereafter until its termination, and a final report. Those reports shall provide information on the progress and results of the implementation of those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation, including its delegated and implementing acts, and on the application of other Union law supervised by the competent authorities within the sandbox. The national competent authorities shall make those annual reports or abstracts thereof available to the public, online. The Commission shall, where appropriate, take the annual reports into account when exercising its tasks under this Regulation.

17. The Commission shall develop a single and dedicated interface containing all relevant information related to AI regulatory sandboxes to allow stakeholders to interact with AI regulatory sandboxes and to raise enquiries with competent authorities, and to seek non-binding guidance on the conformity of innovative products, services, business models embedding AI technologies, in accordance with Article 62(1), point (c). The Commission shall proactively coordinate with national competent authorities, where relevant.

Article 59

Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox

1. Personal data lawfully collected for other purposes may be processed in an AI regulatory sandbox solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:

(a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas:

(i) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;

(ii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures;

(iii) energy sustainability;

(iv) safety and resilience of transport systems and mobility, critical infrastructure and networks;

(v) efficiency and quality of public administration and public services;

(b) the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data;

(c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;

(d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data;

(e) providers can further share the originally collected data only in compliance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox;

(f) any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data;

(g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;

(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law;

(i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV;

(j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.

3. Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for other purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the basis for the processing of personal data which is necessary for the purpose of developing, testing or training of innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal data.
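
Conditions (d), (g) and (h) of paragraph 1 effectively describe an isolated processing environment with access control, deletion of personal data at the end of participation, and processing logs. The Python sketch below illustrates, in deliberately simplified form, what such safeguards might look like; the class and field names are illustrative assumptions and are not terms defined by the Regulation.

```python
"""Minimal sketch of sandbox-style safeguards: access control, a processing
log, and deletion at the end of participation (cf. Article 59(1)(d), (g), (h)).

All names here are illustrative assumptions, not defined terms of the Regulation.
"""
from datetime import datetime, timezone

class SandboxDataStore:
    def __init__(self, authorised_users: set[str]):
        self._records: dict[str, dict] = {}
        self._authorised = authorised_users
        self.processing_log: list[dict] = []  # kept for the duration of participation

    def _log(self, user: str, action: str, record_id: str) -> None:
        self.processing_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user, "action": action, "record_id": record_id,
        })

    def add(self, user: str, record_id: str, record: dict) -> None:
        if user not in self._authorised:
            raise PermissionError("user not authorised for the sandbox environment")
        self._records[record_id] = record
        self._log(user, "add", record_id)

    def read(self, user: str, record_id: str) -> dict:
        if user not in self._authorised:
            raise PermissionError("user not authorised for the sandbox environment")
        self._log(user, "read", record_id)
        return self._records[record_id]

    def terminate_participation(self) -> None:
        """Delete all personal data once participation in the sandbox ends."""
        for record_id in list(self._records):
            self._log("system", "delete", record_id)
        self._records.clear()

if __name__ == "__main__":
    store = SandboxDataStore(authorised_users={"researcher_1"})
    store.add("researcher_1", "r1", {"age_band": "40-49", "outcome": "treated"})
    print(store.read("researcher_1", "r1"))
    store.terminate_participation()
    print(len(store.processing_log), "log entries kept")
```
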

Governance at Union level

Article 64

AI Office

1. The Commission shall develop Union expertise and capabilities in the field of AI through the AI Office.

2. Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation.
Article 65

Establishment and structure of the European Artificial Intelligence Board

1. A European Artificial Intelligence Board (the ‘Board’) is hereby established.

2. The Board shall be composed of one representative per Member State. The European Data Protection Supervisor shall participate as observer. The AI Office shall also attend the Board’s meetings, without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case by case basis, where the issues discussed are of relevance for them.

3. Each representative shall be designated by their Member State for a period of three years, renewable once.

4. Member States shall ensure that their representatives on the Board:

(a) have the relevant competences and powers in their Member State so as to contribute actively to the achievement of the Board’s tasks referred to in Article 66;

(b) are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account Member States’ needs, as a single contact point for stakeholders;

(c) are empowered to facilitate consistency and coordination between national competent authorities in their Member State as regards the implementation of this Regulation, including through the collection of relevant data and information for the purpose of fulfilling their tasks on the Board.

5. The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds majority. The rules of procedure shall, in particular, lay down procedures for the selection process, the duration of the mandate of, and specifications of the tasks of, the Chair, detailed arrangements for voting, and the organisation of the Board’s activities and those of its sub-groups.

6. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities about issues related to market surveillance and notified bodies, respectively. The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. Where appropriate, representatives of the advisory forum referred to in Article 67 may be invited to such sub-groups or to specific meetings of those sub-groups as observers.

7. The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its activities.

8. The Board shall be chaired by one of the representatives of the Member States. The AI Office shall provide the secretariat for the Board, convene the meetings upon request of the Chair, and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and its rules of procedure.

Article 66

Tasks of the Board

The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of this Regulation.

For this purpose, the Board may in particular:

(a) contribute to the coordination among national competent authorities responsible for the application of this Regulation and, in cooperation with and subject to the agreement of the market surveillance authorities concerned, support joint activities of market surveillance authorities referred to in Article 74(11);

(b) collect and share technical and regulatory expertise and best practices among Member States;

(c) provide advice on the implementation of this Regulation, in particular as regards the enforcement of rules on general-purpose AI models;

(d) contribute to the harmonisation of administrative practices in the Member States, including in relation to the derogation from the conformity assessment procedures referred to in Article 46, the functioning of regulatory sandboxes, and testing in real world conditions referred to in Articles 57, 59 and 60;

(e) upon the request of the Commission or on its own initiative, issue recommendations and written opinions on any relevant matters related to the implementation of this Regulation and to its consistent and effective application, including:

(i) on the development and application of codes of conduct and codes of practice pursuant to this Regulation, as well as of the Commission’s guidelines;

(ii) the evaluation and review of this Regulation pursuant to Article 112, including as regards the serious incident reports referred to in Article 73, and the functioning of the database referred to in Article 71, the preparation of the delegated or implementing acts, and as regards possible alignments of this Regulation with the legal acts listed in Annex I;

(iii) on technical specifications or existing standards regarding the requirements set out in Chapter III, Section 2;

(iv) on the use of harmonised standards or common specifications referred to in Articles 40 and 41;

(v) trends, such as European global competitiveness in AI, the uptake of AI in the Union, and the development of digital skills;

(vi) trends on the evolving typology of AI value chains, in particular on the resulting implications in terms of accountability;

(vii) on the potential need for amendment to Annex III in accordance with Article 7, and on the potential need for possible revision of Article 5 pursuant to Article 112, taking into account relevant available evidence and the latest developments in technology;

(f) support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems;

(g) facilitate the development of common criteria and a shared understanding among market operators and competent authorities of the relevant concepts provided for in this Regulation, including by contributing to the development of benchmarks;

(h) cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant Union expert groups and networks, in particular in the fields of product safety, cybersecurity, competition, digital and media services, financial services, consumer protection, data and fundamental rights protection;

(i) contribute to effective cooperation with the competent authorities of third countries and with international organisations;

(j) assist national competent authorities and the Commission in developing the organisational and technical expertise required for the implementation of this Regulation, including by contributing to the assessment of training needs for staff of Member States involved in implementing this Regulation;

(k) assist the AI Office in supporting national competent authorities in the establishment and development of regulatory sandboxes, and facilitate cooperation and information sharing among regulatory sandboxes;

(l) contribute to, and provide relevant advice on, the development of guidance documents;

(m) advise the Commission in relation to international matters on AI;

(n) provide opinions to the Commission on the qualified alerts regarding general-purpose AI models;

(o) receive opinions by the Member States on qualified alerts regarding general-purpose AI models, and on national experiences and practices on the monitoring and enforcement of AI systems, in particular systems integrating the general-purpose AI models.

Article 67

Advisory forum

  1. An advisory forum shall be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under this Regulation.
  2. The membership of the advisory forum shall represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum shall be balanced with regard to commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings.
  3. The Commission shall appoint the members of the advisory forum, in accordance with the criteria set out in paragraph 2, from amongst stakeholders with recognised expertise in the field of AI.
  4. The term of office of the members of the advisory forum shall be two years, which may be extended by up to no more than four years.
  5. The Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) shall be permanent members of the advisory forum.
  6. The advisory forum shall draw up its rules of procedure. It shall elect two co-chairs from among its members, in accordance with criteria set out in paragraph 2. The term of office of the co-chairs shall be two years, renewable once.
  7. The advisory forum shall hold meetings at least twice a year. The advisory forum may invite experts and other stakeholders to its meetings.
  8. The advisory forum may prepare opinions, recommendations and written contributions upon request of the Board or the Commission.
  9. The advisory forum may establish standing or temporary sub-groups as appropriate for the purpose of examining specific questions related to the objectives of this Regulation.
  10. The advisory forum shall prepare an annual report on its activities. That report shall be made publicly available.
Article 68

Scientific panel of independent experts

1. The Commission shall, by means of an implementing act, make provisions on the establishment of a scientific panel of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).

2. The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI necessary for the tasks set out in paragraph 3, and shall be able to demonstrate meeting all of the following conditions:

(a) having particular expertise and competence and scientific or technical expertise in the field of AI;

(b) independence from any provider of AI systems or general-purpose AI models or systems;

(c) an ability to carry out activities diligently, accurately and objectively.

The Commission, in consultation with the Board, shall determine the number of experts on the panel in accordance with the required needs and shall ensure fair gender and geographical representation.

3. The scientific panel shall advise and support the AI Office, in particular with regard to the following tasks:

(a) supporting the implementation and enforcement of this Regulation as regards general-purpose AI models and systems, in particular by:

(i) alerting the AI Office of possible systemic risks at Union level of general-purpose AI models, in accordance with Article 90;

(ii) contributing to the development of tools and methodologies for evaluating capabilities of general-purpose AI models and systems, including through benchmarks;

(iii) providing advice on the classification of general-purpose AI models with systemic risk;

(iv) providing advice on the classification of various general-purpose AI models and systems;

(v) contributing to the development of tools and templates;

(b) supporting the work of market surveillance authorities, at their request;

(c) supporting cross-border market surveillance activities as referred to in Article 74(11), without prejudice to the powers of market surveillance authorities;

(d) supporting the AI Office in carrying out its duties in the context of the safeguard clause pursuant to Article 81.

4. The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall ensure the confidentiality of information and data obtained in carrying out their tasks and activities. They shall neither seek nor take instructions from anyone when exercising their tasks under paragraph 3. Each expert shall draw up a declaration of interests, which shall be made publicly available. The AI Office shall establish systems and procedures to actively manage and prevent potential conflicts of interest.

5. The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedures and detailed arrangements for the scientific panel and its members to issue alerts, and to request the assistance of the AI Office for the performance of the tasks of the scientific panel.

Article 69

Access to the pool of experts by the Member States

  1. Member States may call upon experts of the scientific panel to support their enforcement activities under this Regulation.
  2. The Member States may be required to pay fees for the advice and support provided by the experts. The structure and the level of fees as well as the scale and structure of recoverable costs shall be set out in the implementing act referred to in Article 68(1), taking into account the objectives of the adequate implementation of this Regulation, cost-effectiveness and the necessity of ensuring effective access to experts for all Member States.
  3. The Commission shall facilitate timely access to the experts by the Member States, as needed, and ensure that the combination of support activities carried out by Union AI testing support pursuant to Article 84 and experts pursuant to this Article is efficiently organised and provides the best possible added value.
Article 74

Market surveillance and control of AI systems in the Union market

1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the effective enforcement of this Regulation:

(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Article 2(1) of this Regulation;

(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.

2. As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020, the market surveillance authorities shall report annually to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules. They shall also annually report to the Commission about the use of prohibited practices that occurred during that year and about the measures taken.

3. For high-risk AI systems related to products covered by the Union harmonisation legislation listed in Section A of Annex I, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts. By derogation from the first subparagraph, and in appropriate circumstances, Member States may designate another relevant authority to act as a market surveillance authority, provided they ensure coordination with the relevant sectoral market surveillance authorities responsible for the enforcement of the legal acts listed in Annex I.

4. The procedures referred to in Articles 79 to 83 of this Regulation shall not apply to AI systems related to products covered by the Union harmonisation legislation listed in section A of Annex I, where such legal acts already provide for procedures ensuring an equivalent level of protection and having the same objective. In such cases, the relevant sectoral procedures shall apply instead.

5. Without prejudice to the powers of market surveillance authorities under Article 14 of Regulation (EU), for the purpose of ensuring the effective enforcement of this Regulation, market surveillance authorities may exercise the powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely, as appropriate.

6. For high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by Union financial services law, the market surveillance authority for the purposes of this Regulation shall be the relevant national authority responsible for the financial supervision of those institutions under that legislation in so far as the placing on the market, putting into service, or the use of the AI system is in direct connection with the provision of those financial services.

7. By way of derogation from paragraph 6, in appropriate circumstances, and provided that coordination is ensured, another relevant authority may be identified by the Member State as market surveillance authority for the purposes of this Regulation. National market surveillance authorities supervising regulated credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by Regulation No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the prudential supervisory tasks of the European Central Bank specified in that Regulation.

8. For high-risk AI systems listed in point 1 of Annex III, in so far as the systems are used for law enforcement purposes, border management and justice and democracy, and for high-risk AI systems listed in points 6, 7 and 8 of Annex III to this Regulation, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) or Directive (EU), or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU). Market surveillance activities shall in no way affect the independence of judicial authorities, or otherwise interfere with their activities when acting in their judicial capacity.

9. Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice of the European Union acting in its judicial capacity.

10. Member States shall facilitate coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex I, or in other Union law, that might be relevant for the high-risk AI systems referred to in Annex III.

11. Market surveillance authorities and the Commission shall be able to propose joint activities, including joint investigations, to be conducted by either market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness or providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU). The AI Office shall provide coordination support for joint investigations.

12. Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (‘API’) or other relevant technical means and tools enabling remote access.

13. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled:

(a) access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and,

(b) testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient.

14. Any information or documentation obtained by market surveillance authorities shall be treated in compliance with the confidentiality obligations set out in Article 78.

Article 75

Mutual assistance, market surveillance and control of general-purpose AI systems

  1. Where an AI system is based on a general-purpose AI model, and the model and the system are developed by the same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system with obligations under this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall have all the powers of a market surveillance authority within the meaning of Regulation (EU).
  2. Where the relevant market surveillance authorities have sufficient reason to consider general-purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to this Regulation to be non-compliant with the requirements laid down in this Regulation, they shall cooperate with the AI Office to carry out compliance evaluations, and shall inform the Board and other market surveillance authorities accordingly.
  3. Where a national market surveillance authority is unable to conclude its investigation of the high-risk AI system because of its inability to access certain information related to the AI model despite having made all appropriate efforts to obtain that information, it may submit a reasoned request to the AI Office, by which access to that information shall be enforced. In that case, the AI Office shall supply to the applicant authority without delay, and in any event within 30 days, any information that the AI Office considers to be relevant in order to establish whether a high-risk AI system is non-compliant. National market surveillance authorities shall safeguard the confidentiality of the information they obtain in accordance with Article 78 of this Regulation. The procedure provided for in Chapter VI of Regulation (EU) shall apply mutatis mutandis.
Article 76

Supervision of testing in real world conditions by market surveillance authorities

1. Market surveillance authorities shall have competences and powers to ensure that testing in real world conditions is in accordance with this Regulation.

2. Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory sandbox under Article 59, the market surveillance authorities shall verify the compliance with the provisions of Article 60 as part of their supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in real world conditions to be conducted by the provider or prospective provider, in derogation from the conditions set out in Article 60(4), points (f) and (g).

3. Where a market surveillance authority has been informed by the prospective provider, the provider or any third party of a serious incident or has other grounds for considering that the conditions set out in Articles 60 and 61 are not met, it may take either of the following decisions on its territory, as appropriate:

(a) to suspend or terminate the testing in real world conditions;

(b) to require the provider or prospective provider and users to modify any aspect of the testing in real world conditions.

4. Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has issued an objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate the grounds therefor and how the provider or prospective provider can challenge the decision or objection.

5. Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it shall communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system has been tested in accordance with the testing plan.

Article 77

Powers of authorities protecting fundamental rights

  1. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation in accessible language and format when access to that documentation is necessary for effectively fulfilling their mandates within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request.
  2. By [three months after the entry into force of this Regulation], each Member State shall identify the public authorities or bodies referred to in paragraph 1 and make a list of them publicly available. Member States shall notify the list to the Commission and to the other Member States, and shall keep the list up to date.
  3. Where the documentation referred to in paragraph 1 is insufficient to ascertain whether an infringement of obligations under Union law protecting fundamental rights has occurred, the public authority or body referred to in paragraph 1 may make a reasoned request to the market surveillance authority, to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within a reasonable time following the request.
  4. Any information or documentation obtained by the national public authorities or bodies referred to in paragraph 1 of this Article pursuant to this Article shall be treated in compliance with the confidentiality obligations set out in Article 78.

Article 78

Confidentiality

1. The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union and national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:

  • the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) of the European Parliament and of the Council on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure;
  • the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;
  • public and national security interests;
  • the conduct of criminal or administrative proceedings;
  • information classified pursuant to Union or national law.

2. The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data that is strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers in compliance with this Regulation and Regulation (EU). They shall put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law.

3. Without prejudice to paragraphs 1 and 2, information exchanged on a confidential basis between the national competent authorities or between national competent authorities and the Commission shall not be disclosed without prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in point 1, 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 74(8) and (9), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof.

4. Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their relevant authorities, as well as those of notified bodies, with regard to the exchange of information and the dissemination of warnings, including in the context of cross-border cooperation, nor shall they affect the obligations of the parties concerned to provide information under criminal law of the Member States.

5. The Commission and Member States may exchange, where necessary and in accordance with relevant provisions of international and trade agreements, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.

VIII. ARTIFICIAL INTELLIGENCE: POTENTIAL DRAWBACKS AND SOLUTIONS

Artificial Intelligence (AI) has the potential to revolutionize various sectors, but it also comes with several drawbacks. Here, we will discuss the potential drawbacks of AI and the solutions provided by relevant acts and regulations.

Potential Drawbacks of AI

1. Bias and Discrimination:

  • Issue: AI systems can perpetuate and even amplify existing biases present in the data they are trained on. This can lead to discriminatory outcomes in areas like hiring, lending, and law enforcement.
  • Example: An AI hiring tool might favor male candidates over female candidates if the training data reflects historical gender biases.

2. Privacy Concerns:

  • Issue: AI systems often require large amounts of data, which can include sensitive personal information. This raises concerns about data privacy and security.
  • Example: Facial recognition technology can be used for surveillance, potentially infringing on individuals' privacy rights.

3. Job Displacement:

  • Issue: Automation and AI can lead to job displacement, particularly in sectors that rely heavily on routine and repetitive tasks.
  • Example: Manufacturing jobs being replaced by robotic automation.

4. Security Risks:

  • Issue: AI systems can be vulnerable to hacking and other cybersecurity threats, which can have severe consequences.
  • Example: Autonomous vehicles being hacked and controlled remotely.

5. Accountability and Transparency:

  • Issue: It can be challenging to determine accountability when AI systems make decisions, especially if the decision-making process is opaque.
  • Example: Determining liability in the case of an accident involving an autonomous vehicle.

SOLUTIONS

Artificial Intelligence (AI) presents several challenges, but there are various solutions and regulatory frameworks designed to mitigate these drawbacks. Here, we will delve deeper into the solutions for each major drawback of AI.

1. Bias and Discrimination

  • Solution: Implementing Fairness and Accountability Measures
  • Algorithmic Audits: Regular audits of AI algorithms to identify and correct biases (a minimal sketch of such an audit follows this list).
  • Diverse Training Data: Ensuring that training data is representative of all demographic groups to minimize biases.
  • Transparency: Making AI decision-making processes transparent so that biases can be identified and addressed.
  • Regulations: Enforcing laws like the Personal Data Protection Bill, 2019 which mandate fairness in data processing.
  • Example: Companies like IBM and Google have established internal ethics boards to review AI projects for potential biases and ethical concerns.
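
To make the notion of an algorithmic audit more concrete, the following Python sketch computes a simple demographic parity gap across groups in a set of hiring decisions. The data, metric and any threshold for flagging a disparity are illustrative assumptions rather than a prescribed audit methodology.

```python
"""Minimal sketch of an algorithmic fairness audit (demographic parity).

Hypothetical data: each record has a protected attribute ("group") and the
model's binary decision ("hired"). Names and thresholds are illustrative
assumptions, not drawn from any specific audit standard.
"""
from collections import defaultdict

def selection_rates(records):
    """Return the share of positive decisions per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["hired"]
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    sample = [
        {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
        {"group": "A", "hired": 0}, {"group": "B", "hired": 1},
        {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
    ]
    print("Selection rates:", selection_rates(sample))
    print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")  # flag if large
```
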

2. Privacy Concerns

  • Solution: Strengthening Data Protection Laws
  • Data Minimization: Collecting only the data that is necessary for the AI system to function.
  • Consent Mechanisms: Ensuring that users give explicit consent for data collection and processing.
  • Anonymization: Using techniques to anonymize data to protect individual privacy (a minimal sketch follows this list).
  • Regulations: Adhering to frameworks like the General Data Protection Regulation (GDPR) in the EU and the Personal Data Protection Bill, 2019 in India.
  • Example: The GDPR requires companies to implement data protection by design and by default, ensuring that privacy is considered at every stage of data processing.
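
As a concrete illustration of data minimisation and pseudonymisation, the sketch below keeps only the fields needed for a stated purpose and replaces the direct identifier with a salted one-way hash. The field names and technique are illustrative assumptions; on its own, pseudonymisation does not render data anonymous in the GDPR sense.

```python
"""Minimal sketch of data minimisation and pseudonymisation before training.

The record layout and salted-hash approach are illustrative assumptions,
not techniques prescribed by the GDPR or the AI Act.
"""
import hashlib
import os

SALT = os.urandom(16)  # keep secret and stored separately from the dataset

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

def minimise(record: dict) -> dict:
    """Keep only the fields needed for the stated purpose, drop the rest."""
    allowed = {"age_band", "region", "outcome"}
    cleaned = {k: v for k, v in record.items() if k in allowed}
    cleaned["subject"] = pseudonymise(record["user_id"])
    return cleaned

if __name__ == "__main__":
    raw = {"user_id": "alice@example.com", "name": "Alice",
           "age_band": "30-39", "region": "EU-West", "outcome": "approved"}
    print(minimise(raw))  # direct identifiers removed, subject pseudonymised
```
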

3. Job Displacement

  • Solution: Investing in Reskilling and Upskilling
  • Skill Development Programs: Government and private sector initiatives to reskill workers for new roles created by AI.
  • Lifelong Learning: Encouraging continuous education and training to keep up with technological advancements.
  • Social Safety Nets: Implementing policies to support workers who lose their jobs due to automation.

4. Security Risks

  • Solution: Enhancing Cybersecurity Measures
  • Advanced Encryption: Using state-of-the-art encryption techniques to protect data (a minimal sketch follows this list).
  • Regular Security Audits: Conducting frequent security assessments to identify and mitigate vulnerabilities.
  • Incident Response Plans: Developing and implementing plans to respond to cybersecurity incidents.
  • Regulations: Complying with laws like the Information Technology (IT) Act, 2000 in India, which mandate stringent security measures.
  • Example: Companies like Microsoft and Google invest heavily in cybersecurity research and development to protect their AI systems from potential threats.
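
For illustration, the sketch below applies authenticated symmetric encryption to data at rest in an AI pipeline, assuming the third-party cryptography package. Key handling is deliberately simplified; in practice the key would live in a dedicated key management service rather than in application code.

```python
"""Minimal sketch of encrypting data at rest for an AI pipeline.

Assumes the third-party 'cryptography' package (pip install cryptography).
Key management is deliberately simplified for illustration.
"""
from cryptography.fernet import Fernet

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    """Authenticated symmetric encryption of a single record."""
    return Fernet(key).encrypt(plaintext)

def decrypt_record(key: bytes, token: bytes) -> bytes:
    """Decrypt and verify a previously encrypted record."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()          # store in a secrets manager, not in code
    token = encrypt_record(key, b"sensitive training example")
    print(decrypt_record(key, token))    # b'sensitive training example'
```
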

5. Accountability and Transparency

  • Solution: Promoting Ethical AI Practices
  • Explainability: Ensuring that AI systems can explain their decision-making processes in understandable terms (a minimal sketch follows this list).
  • Accountability Mechanisms: Establishing clear lines of accountability for AI decisions.
  • Ethical Guidelines: Adopting ethical guidelines like the OECD Principles on AI, which emphasize transparency, accountability, and fairness.
  • Regulations: Implementing laws that require AI developers to provide documentation and mechanisms for redressal in case of harm.
  • Example: The European Union's Artificial Intelligence Act aims to regulate high-risk AI systems, ensuring they are transparent, safe, and accountable.
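
To illustrate one minimal form of explainability, the sketch below breaks a linear risk score into per-feature contributions that can be shown to an affected person or a reviewer. The model, feature names and weights are hypothetical; real high-risk systems would typically rely on dedicated explainability tooling and documentation.

```python
"""Minimal sketch of per-feature explanations for a linear scoring model.

The model, feature names and weights are hypothetical and for illustration only.
"""

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1

def score(features: dict) -> float:
    """Compute the overall score as a weighted sum plus a bias term."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features: dict) -> list[tuple[str, float]]:
    """Return each feature's signed contribution, largest effect first."""
    contributions = [(name, WEIGHTS[name] * value) for name, value in features.items()]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)

if __name__ == "__main__":
    applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 3.0}
    print(f"score = {score(applicant):.2f}")
    for name, contribution in explain(applicant):
        print(f"  {name:15s} {contribution:+.2f}")
```
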

IX. CONCLUSION

In conclusion, the European Parliament's enactment of the Artificial Intelligence Act marks a pivotal moment in the regulation and governance of AI technologies within the European Union. This comprehensive legislation represents a significant step towards balancing innovation with accountability, aiming to harness the potential of AI while safeguarding fundamental rights and values.

One of the Act's primary strengths lies in its risk-based approach, categorizing AI systems according to their potential risks to safety, fundamental rights, and societal values. By distinguishing between unacceptable risks and high-risk applications, the legislation ensures that stringent requirements are applied where they are most needed, such as in critical infrastructure, law enforcement, and essential public services. This targeted approach not only mitigates risks but also fosters public trust in AI technologies.

Moreover, the Act emphasizes transparency and accountability throughout the AI lifecycle. Requirements for data governance, documentation, and human oversight ensure that AI systems are developed and deployed responsibly. By promoting transparency in AI decision-making processes, including explainability and traceability, the legislation enhances accountability and facilitates recourse in cases of adverse outcomes or misuse.

The European Parliament's commitment to promoting human-centric AI is evident in the Act's provisions for ensuring fundamental rights and ethical considerations. Safeguards against discrimination, bias, and manipulation seek to prevent AI systems from perpetuating or exacerbating existing societal inequalities. By prioritizing human agency and dignity, the legislation upholds the EU's commitment to a digital future that respects and protects individuals' rights.

Furthermore, the Act sets a precedent for global AI governance standards by advocating for international cooperation and alignment. By encouraging dialogue and collaboration with international partners, the European Union aims to establish harmonized norms and frameworks that promote innovation while upholding shared values and principles. This approach not only enhances the EU's competitiveness but also strengthens the governance of emerging technologies worldwide.

Critically, the Act acknowledges the dynamic nature of AI technologies and the ongoing need for adaptive regulation. By incorporating mechanisms for continuous monitoring, evaluation, and revision, the legislation ensures that regulatory frameworks remain effective and responsive to technological advancements and societal changes. This forward-looking approach positions the European Union as a leader in AI governance, capable of navigating future challenges and opportunities in the digital age.

Nevertheless, challenges remain in implementing and enforcing the Artificial Intelligence Act effectively. Coordination among EU member states, industry stakeholders, and regulatory bodies will be crucial to ensure consistent interpretation and application of the legislation across borders. Addressing technical complexities, such as defining and assessing AI risks, will require ongoing collaboration and expertise from diverse fields including technology, law, ethics, and policy.

Ultimately, the adoption of the Artificial Intelligence Act represents a milestone in shaping the future of AI regulation globally. By promoting responsible innovation, protecting fundamental rights, and fostering international cooperation, the Act establishes a robust framework for harnessing the benefits of AI while mitigating risks. As AI continues to transform industries and societies, the European Union's commitment to human-centric AI governance offers a model for inclusive and sustainable digital development worldwide. Through continuous adaptation and collaboration, the EU is poised to lead in shaping a future where AI serves humanity while respecting our shared values and principles.

.    .    .
