The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted on 17 May 2024 by the Committee of Ministers of the Council of Europe at its 133rd Session, held in Strasbourg. It will be opened for signature at the Conference of Ministers of Justice in Vilnius, Lithuania, on 5 September 2024. Negotiated by the Committee on Artificial Intelligence (CAI), this Framework Convention is the first legally binding international treaty on AI; it now awaits signature and ratification by countries. The negotiating parties aimed to ensure that existing protections for human rights, democracy, and the rule of law apply to AI-related challenges, without creating new substantive human rights or undermining existing protections.
The treaty was designed as a global instrument, with participation from the 46 Council of Europe member States and several non-European countries, including Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, Uruguay, and the United States, along with the European Union. The negotiation process was inclusive, involving many Council of Europe bodies, other intergovernmental organisations such as the OECD, the OSCE, and UNESCO, and around 70 representatives from civil society, business, and the technical and academic community, who participated actively and submitted comments and text proposals until the final day of negotiations.
Negotiating a binding instrument in such an inclusive process within a short deadline was intense and challenging. The parties had to bridge differences between States' legal systems and traditions, including differences in the interpretation of some human rights, and manage expectations about a legal instrument that would shape AI governance globally. In the end, representatives of 57 States agreed on the result.
The Framework Convention establishes fundamental principles and rules that safeguard human rights, democracy, and the rule of law while promoting progress and technological innovation. It complements existing international standards and aims to fill legal gaps resulting from rapid technological advances. The Convention is technology-neutral: it governs activities within the lifecycle of AI systems rather than any specific technology. Its implementation follows a graduated and differentiated approach based on the severity and probability of adverse impacts on human rights, democracy, and the rule of law. The Convention applies to both the public and private sectors, with limited exemptions for national security and research and development. Matters relating to national defense are excluded from the Convention's scope, in line with the Statute of the Council of Europe.
"The Framework Convention imposes a requirement on all future Parties to manage the risks associated with activities across the lifecycle of AI conducted by both public and private entities. It emphasizes considering the distinct roles and responsibilities of all stakeholders, offering flexibility to Parties in fulfilling agreed-upon obligations within their national legal and institutional frameworks, and aligning with their international human rights commitments. The treaty and its implementation framework will also create opportunities for collaboration with all stakeholders, including States that may not have ratified it yet, thereby enhancing its potential for global impact. To effectively govern AI now and in the future, our societies must establish a comprehensive set of technical, legal, and socio-cultural norms suitable for the diverse applications of AI in different societal and economic contexts worldwide. This effort parallels the historical development of norms over the past two centuries to regulate the use of engines in various vehicles and machines for different purposes. Thus, the Framework Convention will not function independently but will represent a crucial milestone in establishing a governance framework for AI. This framework aims to ensure that all members of our societies benefit from AI systems and participate in innovative societies and economies, while upholding and reinforcing existing safeguards for human rights, democracy, and the rule of law."
In 2019, efforts began with the establishment of the Ad Hoc Committee on Artificial Intelligence (CAHAI) to assess the feasibility of creating a Framework Convention. This committee was succeeded in 2022 by the Committee on Artificial Intelligence (CAI), which was responsible for drafting and negotiating the treaty text. The Framework Convention was crafted collaboratively by the 46 member States of the Council of Europe, along with observer states such as Canada, Japan, Mexico, the Holy See, and the United States of America, as well as the European Union. Additionally, several non-member states, including Australia, Argentina, Costa Rica, Israel, Peru, and Uruguay, participated actively. Following the Council of Europe's tradition of engaging multiple stakeholders, the development process included 68 international representatives from civil society, academia, and industry, along with contributions from various other international organizations.
The Framework Convention mandates states to ensure that activities within the lifecycle of AI systems adhere to the following core principles: human dignity and individual autonomy; equality and non-discrimination; respect for privacy and personal data protection; transparency and oversight; accountability and responsibility; reliability; and safe innovation.
The Framework Convention applies to the use of AI systems by governmental entities, including private actors acting on their behalf, as well as by private entities. It provides Parties with two options to comply with its principles and obligations when regulating the private sector: Parties may choose to directly follow the Convention's applicable provisions, or alternatively, they can implement other measures to meet the treaty's requirements while fully respecting their international obligations concerning human rights, democracy, and the rule of law. Parties are not obliged to extend the provisions of the treaty to activities related to protecting their national security interests, but they must ensure that such activities adhere to international law and uphold democratic institutions and processes. The Framework Convention does not extend to matters of national defense, and it excludes research and development activities unless the testing of AI systems has the potential to affect human rights, democracy, or the rule of law.
The Framework Convention sets up a monitoring mechanism called the Conference of the Parties, composed of official representatives of the Parties to the Convention. Its role is to assess how well the provisions of the Convention are being put into practice. The findings and recommendations of the Conference of the Parties are crucial for ensuring that states adhere to the Framework Convention and for maintaining its effectiveness over time. Additionally, the Conference of the Parties will promote cooperation with relevant stakeholders, including through public hearings on important aspects of the Convention's implementation.
Artificial Intelligence (AI) is rapidly transforming various aspects of our lives, influencing how we access information, make decisions, and interact within society. As AI technologies continue to evolve, their impact on governance, public institutions, and citizen engagement in democratic processes is expected to grow significantly. While AI presents numerous benefits, it also poses serious risks that must be addressed to protect fundamental human rights.
The Council of Europe plays a crucial role in ensuring that human rights, democracy, and the rule of law are upheld in the digital landscape. It is imperative that AI is developed and utilized in a manner that aligns with these core values. The organization has a history of establishing innovative standards that often set the stage for global norms. In line with this tradition, the Council is actively addressing the challenges posed by AI through a collaborative approach that involves various stakeholders, including international organizations, civil society, businesses, and academic institutions.
A significant milestone in this effort is the recent adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law by the Committee of Ministers of the Council of Europe. This pioneering legal instrument is the first of its kind globally, aimed at ensuring that AI systems adhere to established standards concerning human rights and democratic principles. The Convention seeks to mitigate the risks associated with AI technologies that could potentially undermine these values.
Key aspects of the Framework Convention include the promotion of transparency, accountability, and fairness in AI systems. It emphasizes the need for robust safeguards to protect individuals from discrimination and violations of their rights. Additionally, the Convention advocates for the involvement of affected individuals in decision-making processes related to AI, ensuring that their voices are heard and their rights are respected.
Moreover, the Convention outlines the importance of conducting thorough assessments of the potential impacts of AI on human rights and democracy. This includes evaluating both the positive and negative consequences of AI applications, allowing for informed decision-making and the implementation of necessary preventive measures. By establishing clear guidelines and responsibilities for AI developers and users, the Convention aims to foster a culture of responsibility and ethical conduct in the deployment of AI technologies.
As AI continues to shape our world, it is essential to prioritize the protection of human rights and democratic values. The Council of Europe's Framework Convention represents a significant step toward achieving this goal, providing a comprehensive framework for the responsible development and use of AI. By adhering to these principles, we can harness the potential of AI while safeguarding the rights and freedoms that are fundamental to our societies.
Article 1 - Subject matter
1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of artificial intelligence systems (AI systems) in the Union, and to support innovation.
2. This Regulation lays down:
Article 2 - Scope
1. This Regulation applies to:
2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) and (2) related to products covered by the Union harmonisation legislation listed in section B of Annex I, only Article 112 applies. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.
3. This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences.
This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
4. This Regulation applies neither to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.
5. This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services as set out in Chapter II of Regulation (EU) 2022/2065.
6. This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
7. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to the arrangements provided for in Article 10(5) and Article 59 of this Regulation.
8. This Regulation does not apply to any research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. The testing in real world conditions shall not be covered by that exclusion.
9. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.
10. This Regulation does not apply to the obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.
11. This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.
12. This Regulation applies to AI systems released under free and open source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.
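Read together, the paragraphs of Article 2 work like an ordered decision procedure for whether the Regulation applies at all. Below is a purely illustrative Python sketch of the main exclusions; it is not legal advice, and every field name is the author's own simplification:

```python
from dataclasses import dataclass

@dataclass
class AISystemContext:
    """Hypothetical descriptor of how and where an AI system is used."""
    placed_on_eu_market: bool               # placed on the market / put into service in the Union
    military_or_national_security: bool     # Art. 2(3): defence / national security purposes
    sole_purpose_scientific_research: bool  # Art. 2(6)
    pre_market_rnd_only: bool               # Art. 2(8): pre-market research and testing
    real_world_testing: bool                # Art. 2(8): real-world testing is NOT excluded
    purely_personal_use: bool               # Art. 2(10): natural persons, non-professional use

def regulation_applies(ctx: AISystemContext) -> bool:
    """One possible ordering of the Article 2 carve-outs."""
    if ctx.military_or_national_security:
        return False
    if ctx.sole_purpose_scientific_research:
        return False
    if ctx.pre_market_rnd_only and not ctx.real_world_testing:
        return False
    if ctx.purely_personal_use:
        return False
    return ctx.placed_on_eu_market

# A provider selling a hiring tool in the Union: none of the exclusions bite.
print(regulation_applies(AISystemContext(True, False, False, False, False, False)))  # True
```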
Article 5
Prohibited AI Practices
1. The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of, materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;
(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
(c) the placing on the market, the putting into service or the use of AI systems for the purpose of the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(d) the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the likelihood of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;
(e) the placing on the market, the putting into service for this specific purpose, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
(g) the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement;
(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as searching for missing persons;
Article 6
Classification rules for high-risk AI systems
1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.
3. By derogation from paragraph 2, an AI system shall not be considered to be high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This shall be the case where one or more of the following conditions are fulfilled:
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.
4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.
5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than … [18 months from the date of entry into force of this Regulation], provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.
6. The Commission shall adopt delegated acts in accordance with Article 97 to amend the conditions laid down in paragraph 3, first subparagraph, of this Article. The Commission may adopt delegated acts in accordance with Article 97 in order to add new conditions to those laid down in paragraph 3, first subparagraph, or to modify them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.
The Commission shall adopt delegated acts in accordance with Article 97 in order to delete any of the conditions laid down in paragraph 3, first subparagraph, where there is concrete and reliable evidence that this is necessary for the purpose of maintaining the level of protection of health, safety and fundamental rights in the Union.
Any amendment to the conditions laid down in paragraph 3, first subparagraph, shall not decrease the overall level of protection of health, safety and fundamental rights in the Union.
When adopting the delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and shall take account of market and technological developments.
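The classification logic of Article 6(2) and (3) reads as a compact decision rule: an Annex III system is high-risk by default, a paragraph 3 derogation can rebut that presumption, and profiling of natural persons always overrides the derogation. A hedged sketch of that rule (the Article 6(1) product-safety route is deliberately left out):

```python
def is_high_risk(annex_iii_use_case: bool,
                 performs_profiling: bool,
                 derogation_applies: bool) -> bool:
    """Illustrative reading of Article 6(2)-(3); not a compliance tool.
    `derogation_applies` stands for one of the paragraph 3 conditions,
    e.g. the system only performs a narrow preparatory task."""
    if not annex_iii_use_case:
        return False          # the Article 6(1) product route is not modelled here
    if performs_profiling:
        return True           # last subparagraph of Article 6(3)
    return not derogation_applies

# A CV-screening system (Annex III) that profiles people stays high-risk
# even if its provider argues its task is merely "preparatory":
print(is_high_risk(True, performs_profiling=True, derogation_applies=True))  # True
```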
Article 8 - Compliance with the requirements
In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I.
Article 9 - Risk management system
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:
(a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;
(b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;
(c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;
(d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).
3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. In identifying the most appropriate risk management measures, the following shall be ensured:
(a) elimination or reduction of identified and evaluated risks pursuant to paragraph 2 as far as technically feasible through adequate design and development of the high-risk AI system;
(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;
(c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers.
With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.
6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.
7. Testing procedures may include testing in real-world conditions in accordance with Article 60.
8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other groups of vulnerable persons.
10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.
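Article 9 describes a continuous identify-estimate-evaluate-mitigate loop ending in an acceptability judgment on residual risk (paragraph 5). The sketch below models that loop under assumed conventions: a 0-to-1 severity and probability scale and a numeric acceptability threshold, none of which the Regulation prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    hazard: str
    severity: float     # assumed 0..1 scale (hypothetical convention)
    probability: float  # assumed 0..1 scale

    @property
    def score(self) -> float:
        return self.severity * self.probability

@dataclass
class RiskManagementSystem:
    """Minimal sketch of the iterative Article 9 loop."""
    acceptable_residual: float = 0.05   # invented threshold, cf. Art. 9(5)
    risks: list = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        """Step (a): record a known or reasonably foreseeable risk."""
        self.risks.append(risk)

    def mitigate(self, hazard: str, reduction: float) -> None:
        """Step (d): a targeted measure lowering a risk's probability."""
        for r in self.risks:
            if r.hazard == hazard:
                r.probability *= (1.0 - reduction)

    def residual_acceptable(self) -> bool:
        """Art. 9(5): each hazard's residual risk must be judged acceptable."""
        return all(r.score <= self.acceptable_residual for r in self.risks)

rms = RiskManagementSystem()
rms.identify(Risk("misclassification of minors", severity=0.9, probability=0.2))
rms.mitigate("misclassification of minors", reduction=0.8)
print(rms.residual_acceptable())  # True: 0.9 * 0.04 = 0.036 <= 0.05
```

The loop structure matters because paragraph 2 requires the process to be run and re-run across the whole lifecycle, not once before market placement.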
Article 10 - Data and data governance
1. High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.
2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular:
3. Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof.
4. Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used.
5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, all the following conditions shall apply in order for such processing to occur:
6. For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 to 5 apply only to the testing data sets.
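Article 10(3) and (4) name measurable properties: relevance, representativeness, freedom from errors, and completeness relative to the intended purpose. Such criteria lend themselves to automated dataset checks; here is a minimal sketch with hypothetical field names, making no claim to cover the full legal standard:

```python
import math

def dataset_quality_report(records, required_fields, expected_groups):
    """Toy checks loosely inspired by Article 10(3)-(4): completeness of
    required fields and representation of the groups the system targets."""
    total = len(records)
    complete = sum(all(r.get(f) is not None for f in required_fields)
                   for r in records)
    seen_groups = {r.get("group") for r in records}
    return {
        "completeness": complete / total if total else math.nan,
        "missing_groups": sorted(expected_groups - seen_groups),
    }

records = [{"age": 34, "label": 1, "group": "A"},
           {"age": None, "label": 0, "group": "A"}]
print(dataset_quality_report(records, ["age", "label"], {"A", "B"}))
# {'completeness': 0.5, 'missing_groups': ['B']}
```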
Article 12 - Record-keeping
1. High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) over their lifetime.
2. In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for:
3. For high-risk AI systems referred to in point 1(a) of Annex III, the logging capabilities shall provide, at a minimum:
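Article 12 requires automatic event recording over the system's lifetime but does not prescribe a log format. One plausible shape is an append-only structured log; the JSON fields below are the author's assumption, not a mandated schema:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("high-risk-ai-audit")

def record_event(system_id: str, event: str, **details) -> None:
    """Write one structured, machine-readable audit record."""
    log.info(json.dumps({
        "ts": time.time(),    # when the event occurred
        "system": system_id,  # which high-risk AI system produced it
        "event": event,       # e.g. "inference", "override", "anomaly"
        **details,
    }))

record_event("credit-scoring-v2", "inference",
             input_ref="req-881", output_score=0.73)
```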
Article 13 - Transparency and provision of information to deployers
1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3.
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.
3. The instructions for use shall contain at least the following information:
(c) the changes to the high-risk AI system and its performance which have been predetermined by the provider at the moment of the initial conformity assessment, if any;
(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;
(e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;
(f) where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12.
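The Article 13(3) items map naturally onto a machine-readable record that could accompany a system alongside its human-readable instructions. A sketch, with field names of the author's own choosing:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InstructionsForUse:
    """Hypothetical machine-readable subset of Article 13(3)."""
    provider: str                   # identity and contact details
    intended_purpose: str
    predetermined_changes: str      # point (c)
    human_oversight_measures: str   # point (d), cf. Article 14
    expected_lifetime_years: float  # point (e)
    log_access_mechanism: str       # point (f), cf. Article 12

doc = InstructionsForUse(
    provider="ExampleCorp",
    intended_purpose="triage of incoming customer-support tickets",
    predetermined_changes="quarterly retraining on newly labelled tickets",
    human_oversight_measures="confidence shown per ticket; manual review queue",
    expected_lifetime_years=3.0,
    log_access_mechanism="JSON-lines export via the admin console",
)
print(json.dumps(asdict(doc), indent=2))
```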
Article 14 - Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.
3. The oversight measures shall be commensurate to the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:
(a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
(b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.
4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the following circumstances:
(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
(b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
(c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;
(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;
(e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
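Paragraph 5 is, in effect, a gate: for remote biometric identification under point 1(a) of Annex III, the system's own match is never a sufficient basis for action, and at least two competent people must confirm it separately. A toy illustration (the confidence threshold is invented):

```python
def may_act_on_match(match_confidence: float, confirming_verifiers: set) -> bool:
    """Sketch of the Article 14(5) two-person rule; not a legal test.
    `confirming_verifiers` holds identifiers of the humans who have
    separately verified and confirmed the identification."""
    if match_confidence < 0.99:      # hypothetical abstention threshold
        return False
    return len(confirming_verifiers) >= 2

print(may_act_on_match(0.995, {"officer_a"}))               # False
print(may_act_on_match(0.995, {"officer_a", "officer_b"}))  # True
```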
Article 15 - Accuracy, robustness and cybersecurity
Article 16
Obligations of providers of high-risk AI systems
Providers of high-risk AI systems shall:
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Section 2;
(b) indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable their name, registered trade name or registered trade mark, the address at which they can be contacted;
(c) have a quality management system in place which complies with Article 17;
(d) keep the documentation referred to in Article 18;
(e) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19;
(f) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service;
(g) draw up an EU declaration of conformity in accordance with Article 47;
(h) affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48;
(i) comply with the registration obligations referred to in Article 49(1);
(j) take the necessary corrective actions and provide information as required in Article 20;
(k) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2;
(l) ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882.
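Because Article 16 is a cumulative list, providers commonly track it as a checklist before market placement. A sketch of such a self-assessment structure; the keys paraphrase points (a) to (l) and are not official terminology:

```python
# Hypothetical self-assessment mirroring Article 16(a)-(l).
PROVIDER_OBLIGATIONS = {
    "section_2_requirements_met": False,      # (a)
    "contact_details_affixed": False,         # (b)
    "quality_management_system": False,       # (c), Article 17
    "documentation_kept": False,              # (d), Article 18
    "logs_retained": False,                   # (e), Article 19
    "conformity_assessment_done": False,      # (f), Article 43
    "eu_declaration_of_conformity": False,    # (g), Article 47
    "ce_marking_affixed": False,              # (h), Article 48
    "registered_in_eu_database": False,       # (i), Article 49(1)
    "corrective_action_process": False,       # (j), Article 20
    "can_demonstrate_conformity": False,      # (k)
    "accessibility_requirements_met": False,  # (l)
}

def outstanding(checklist: dict) -> list:
    """Items still to be evidenced before placing on the market."""
    return [item for item, done in checklist.items() if not done]

print(outstanding(PROVIDER_OBLIGATIONS))
```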
Article 25 - Responsibilities along the AI value chain
1. Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
(a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations therein are allocated otherwise;
(b) they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;
(c) they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.
2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.
3. In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances:
(a) the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer;
(b) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market.
4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open licence.
The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format.
5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law.
Article 40 - Harmonised standards and standardisation deliverables
1. High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter IV of this Regulation, to the extent that those standards cover those requirements or obligations.
2. The Commission shall issue standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, obligations set out in Chapter IV of this Regulation, in accordance with Article 10 of Regulation (EU) No 1025/2012, without undue delay. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models.
When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the advisory forum. When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that AI systems or AI models placed on the market or put into service in the Union meet the relevant requirements laid down in this Regulation.
The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of Regulation (EU) No 1025/2012.
3. The participants in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as the competitiveness and growth of the Union market, and shall contribute to strengthening global cooperation on standardisation, taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and shall enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with Articles 5, 6 and 7 of Regulation (EU) No 1025/2012.
Article 50 - Transparency obligations for providers and deployers of certain AI systems
Article 56 - Codes of practice
Article 57 - AI regulatory sandboxes
1. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by … [24 months from the date of entry into force of this Regulation]. That sandbox may also be established jointly with the competent authorities of one or more other Member States. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes. The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox in so far as that participation provides an equivalent level of national coverage for the participating Member States.
2. Additional AI regulatory sandboxes at regional or local level, or established jointly with the competent authorities of other Member States may also be established.
3. The European Data Protection Supervisor may also establish an AI regulatory sandbox for Union institutions, bodies, offices and agencies, and may exercise the roles and the tasks of national competent authorities in accordance with this Chapter.
4. Member States shall ensure that the competent authorities referred to in paragraphs 1 and 2 allocate sufficient resources to comply with this Article effectively and in a timely manner. Where appropriate, national competent authorities shall cooperate with other relevant authorities, and may allow for the involvement of other actors within the AI ecosystem. This Article shall not affect other regulatory sandboxes established under Union or national law. Member States shall ensure an appropriate level of cooperation between the authorities supervising those other sandboxes and the national competent authorities.
5. AI regulatory sandboxes established under paragraph (1) shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the prospective providers and the competent authority. Such regulatory sandboxes may include testing in real world conditions supervised in the sandbox.
6. Competent authorities shall provide, as appropriate, guidance, supervision and support within the AI regulatory sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, testing, mitigation measures, and their effectiveness in relation to the obligations and requirements of this Regulation and, where relevant, other Union and Member State law supervised within the sandbox.
7. Competent authorities shall provide providers and prospective providers using the AI regulatory sandbox with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in this Regulation. Upon request of the provider or prospective provider of the AI system, the competent authority shall provide a written proof of the activities successfully carried out in the sandbox. The competent authority shall also provide an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers may use such documentation to demonstrate their compliance with this Regulation through the conformity assessment process or relevant market surveillance activities. In this regard, the exit reports and the written proof provided by the national competent authority shall be taken positively into account by market surveillance authorities and notified bodies, with a view to accelerating conformity assessment procedures to a reasonable extent.
8. Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective provider, the Commission and the Board shall be authorised to access the exit reports and shall take them into account, as appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national competent authority explicitly agree, the exit report may be made publicly available through the single information platform referred to in this Article.
9. The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives:
10. National competent authorities shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national or competent authorities are associated with the operation of the AI regulatory sandbox and involved in the supervision of those aspects to the extent of their respective tasks and powers.
11. The AI regulatory sandboxes shall not affect the supervisory or corrective powers of the competent authorities supervising the sandboxes, including at regional or local level. Any significant risks to health and safety and fundamental rights identified during the development and testing of such AI systems shall result in an adequate mitigation. National competent authorities shall have the power to temporarily or permanently suspend the testing process, or the participation in the sandbox if no effective mitigation is possible, and shall inform the AI Office of such decision. National competent authorities shall exercise their supervisory powers within the limits of the relevant law, using their discretionary powers when implementing legal provisions in respect of a specific AI sandbox project, with the objective of supporting innovation in AI in the Union.
12. Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. To the extent that other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law.
13. The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate cross-border cooperation between national competent authorities.
14. National competent authorities shall coordinate their activities and cooperate within the framework of the Board.
15. National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox, and may ask them for support and guidance. The AI Office shall make publicly available a list of planned and existing AI sandboxes and keep it up to date in order to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation.
16. National competent authorities shall submit to the AI Office and to the Board, annual reports, starting one year after the establishment of the AI regulatory sandbox and every year thereafter until its termination and a final report. Those reports shall provide information on the progress and results of the implementation of those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation, including its delegated and implementing acts, and on the application of other Union law supervised by the competent authorities within the sandbox. The national competent authorities shall make those annual reports or abstracts thereof available to the public, online. The Commission shall, where appropriate, take the annual reports into account when exercising its tasks under this Regulation.
17. The Commission shall develop a single and dedicated interface containing all relevant information related to AI regulatory sandboxes to allow stakeholders to interact with AI regulatory sandboxes and to raise enquiries with competent authorities, and to seek non-binding guidance on the conformity of innovative products, services, business models embedding AI technologies, in accordance with Article 62(1), point (c). The Commission shall proactively coordinate with national competent authorities, where relevant.
Article 59 - Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox
1. Personal data lawfully collected for other purposes may be processed in an AI regulatory sandbox solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:
(a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas:
(i) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;
(ii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures;
(iii) energy sustainability;
(iv) safety and resilience of transport systems and mobility, critical infrastructure and networks;
(v) efficiency and quality of public administration and public services;
(b) the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data;
(c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;
(d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data;
(e) providers can further share the originally collected data only in compliance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox;
(f) any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data;
(g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;
(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law;
(i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV;
(j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.
2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.
3. Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for other purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the basis for the processing of personal data which is necessary for the purpose of developing, testing or training of innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal data.
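The conditions in Article 59(1) are cumulative: processing personal data in a sandbox is permitted only if every one of (a) to (j) holds, which maps directly onto an all-of gate. The keys below paraphrase the provision and are the author's own labels:

```python
# Illustrative gate for Article 59(1); a sketch, not a compliance tool.
ARTICLE_59_CONDITIONS = {
    "a_substantial_public_interest_area": True,
    "b_non_personal_data_insufficient": True,
    "c_monitoring_and_response_mechanisms": True,
    "d_isolated_protected_environment": True,
    "e_no_sharing_outside_sandbox": True,
    "f_no_decisions_affecting_data_subjects": True,
    "g_technical_measures_and_deletion": True,
    "h_processing_logs_kept": True,
    "i_full_process_description_kept": True,
    "j_project_summary_published": False,
}

def processing_permitted(conditions: dict) -> bool:
    """All conditions are cumulative: one failure blocks the processing."""
    return all(conditions.values())

print(processing_permitted(ARTICLE_59_CONDITIONS))  # False until (j) holds
```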
Article 64 - AI Office
Article 65 - Establishment and structure of the European Artificial Intelligence Board
1. A European Artificial Intelligence Board (the ‘Board’) is hereby established.
2. The Board shall be composed of one representative per Member State. The European Data Protection Supervisor shall participate as observer. The AI Office shall also attend the Board’s meetings, without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case-by-case basis, where the issues discussed are of relevance for them.
3. Each representative shall be designated by their Member State for a period of three years, renewable once.
4. Member States shall ensure that their representatives on the Board:
(a) have the relevant competences and powers in their Member State so as to contribute actively to the achievement of the Board’s tasks referred to in Article 66;
(b) are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account Member States’ needs, as a single contact point for stakeholders;
(c) are empowered to facilitate consistency and coordination between national competent authorities in their Member State as regards the implementation of this Regulation, including through the collection of relevant data and information for the purpose of fulfilling their tasks on the Board.
5. The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds majority. The rules of procedure shall, in particular, lay down procedures for the selection process, the duration of the mandate of, and specifications of the tasks of, the Chair, detailed arrangements for voting, and the organisation of the Board’s activities and those of its sub-groups.
6. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related to market surveillance and notified bodies, respectively. The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. Where appropriate, representatives of the advisory forum referred to in Article 67 may be invited to such sub-groups or to specific meetings of those sub-groups as observers.
7. The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its activities.
8. The Board shall be chaired by one of the representatives of the Member States. The AI Office shall provide the secretariat for the Board, convene the meetings upon request of the Chair, and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and its rules of procedure.
Article 66
Tasks of the Board
The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of this Regulation. To that end, the Board may in particular:
(a) contribute to the coordination among national competent authorities responsible for the application of this Regulation and, in cooperation with and subject to the agreement of the market surveillance authorities concerned, support joint activities of market surveillance authorities referred to in Article 74(11);
(b) collect and share technical and regulatory expertise and best practices among Member States;
(c) provide advice on the implementation of this Regulation, in particular as regards the enforcement of rules on general-purpose AI models;
(d) contribute to the harmonisation of administrative practices in the Member States, including in relation to the derogation from the conformity assessment procedures referred to in Article 46, the functioning of regulatory sandboxes, and testing in real world conditions referred to in Articles 57, 59 and 60;
(e) upon the request of the Commission or on its own initiative, issue recommendations and written opinions on any relevant matters related to the implementation of this Regulation and to its consistent and effective application, including:
(i) on the development and application of codes of conduct and codes of practice pursuant to this Regulation, as well as of the Commission’s guidelines;
(ii) on the evaluation and review of this Regulation pursuant to Article 112, including as regards the serious incident reports referred to in Article 73 and the functioning of the database referred to in Article 71, the preparation of the delegated or implementing acts, and as regards possible alignments of this Regulation with the legal acts listed in Annex I;
(iii) on technical specifications or existing standards regarding the requirements set out in Chapter III, Section 2;
(iv) on the use of harmonised standards or common specifications referred to in Articles 40 and 41;
(v) on trends, such as European global competitiveness in AI, the uptake of AI in the Union, and the development of digital skills;
(vi) on trends in the evolving typology of AI value chains, in particular on the resulting implications in terms of accountability;
(vii) on the potential need for amendment to Annex III in accordance with Article 7, and on the potential need for possible revision of Article 5 pursuant to Article 112, taking into account relevant available evidence and the latest developments in technology;
(f) support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems;
(g) facilitate the development of common criteria and a shared understanding among market operators and competent authorities of the relevant concepts provided for in this Regulation, including by contributing to the development of benchmarks;
(h) cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant Union expert groups and networks, in particular in the fields of product safety, cybersecurity, competition, digital and media services, financial services, consumer protection, data and fundamental rights protection;
(i) contribute to effective cooperation with the competent authorities of third countries and with international organisations;
(j) assist national competent authorities and the Commission in developing the organisational and technical expertise required for the implementation of this Regulation, including by contributing to the assessment of training needs for staff of Member States involved in implementing this Regulation;
(k) assist the AI Office in supporting national competent authorities in the establishment and development of regulatory sandboxes, and facilitate cooperation and information sharing among regulatory sandboxes;
(l) contribute to, and provide relevant advice on, the development of guidance documents;
(m) advise the Commission in relation to international matters on AI;
(n) provide opinions to the Commission on the qualified alerts regarding general-purpose AI models;
(o) receive opinions from the Member States on qualified alerts regarding general-purpose AI models, and on national experiences and practices on the monitoring and enforcement of AI systems, in particular systems integrating general-purpose AI models.
Article 68
1. The Commission shall, by means of an implementing act, make provisions on the establishment of a scientific panel of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
2. The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI necessary for the tasks set out in paragraph 3, and shall be able to demonstrate meeting all of the following conditions:
(a) having particular expertise and competence and scientific or technical expertise in the field of AI;
(b) independence from any provider of AI systems or general-purpose AI models or systems;
(c) an ability to carry out activities diligently, accurately and objectively.
The Commission, in consultation with the Board, shall determine the number of experts on the panel in accordance with the required needs and shall ensure fair gender and geographical representation.
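Because the conditions in paragraph 2 are cumulative, eligibility maps naturally onto a simple all-of check. The following sketch is illustrative only; the field and function names are assumptions for the example, not terminology from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class ExpertCandidate:
    has_ai_expertise: bool              # condition (a): expertise in the field of AI
    independent_of_providers: bool      # condition (b): no ties to providers
    diligent_accurate_objective: bool   # condition (c): diligence, accuracy, objectivity

def eligible_for_scientific_panel(candidate: ExpertCandidate) -> bool:
    # The conditions are cumulative: every one of them must hold.
    return (candidate.has_ai_expertise
            and candidate.independent_of_providers
            and candidate.diligent_accurate_objective)
```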
3. The scientific panel shall advise and support the AI Office, in particular with regard to the following tasks:
(a) supporting the implementation and enforcement of this Regulation as regards general-purpose AI models and systems, in particular by:
(i) alerting the AI Office of possible systemic risks at Union level of general-purpose AI models, in accordance with Article 90;
(ii) contributing to the development of tools and methodologies for evaluating capabilities of general-purpose AI models and systems, including through benchmarks;
(iii) providing advice on the classification of general-purpose AI models with systemic risk;
(iv) providing advice on the classification of various general-purpose AI models and systems;
(v) contributing to the development of tools and templates;
(b) supporting the work of market surveillance authorities, at their request;
(c) supporting cross-border market surveillance activities as referred to in Article 74(11), without prejudice to the powers of market surveillance authorities;
(d) supporting the AI Office in carrying out its duties in the context of the safeguard clause pursuant to Article 81.
4. The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall ensure the confidentiality of information and data obtained in carrying out their tasks and activities. They shall neither seek nor take instructions from anyone when exercising their tasks under paragraph 3. Each expert shall draw up a declaration of interests, which shall be made publicly available. The AI Office shall establish systems and procedures to actively manage and prevent potential conflicts of interest.
5. The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedures and detailed arrangements for the scientific panel and its members to issue alerts, and to request the assistance of the AI Office for the performance of the tasks of the scientific panel.
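The conflict-of-interest duty in paragraph 4 can also be pictured as a screening step before any task is assigned. The sketch below is hypothetical: the data structure, field names and example values are invented for illustration and do not come from the Regulation.

```python
from dataclasses import dataclass, field

@dataclass
class PanelExpert:
    name: str
    # Publicly available declaration of interests, e.g. names of AI
    # providers the expert has ties to (cf. Article 68(4)).
    declared_interests: set = field(default_factory=set)

def may_be_assigned(expert: PanelExpert, providers_concerned: set) -> bool:
    """Refuse an assignment when a declared interest overlaps the task."""
    return expert.declared_interests.isdisjoint(providers_concerned)

# Example: an expert with a declared interest in a (fictional) provider
# "AcmeAI" cannot be assigned to a task concerning that provider.
expert = PanelExpert(name="Dr. Example", declared_interests={"AcmeAI"})
assert not may_be_assigned(expert, {"AcmeAI"})
assert may_be_assigned(expert, {"OtherProvider"})
```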
Article 74
1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the effective enforcement of this Regulation:
(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Article 2(1) of this Regulation;
(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.
2. As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020, the market surveillance authorities shall report annually to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules. They shall also annually report to the Commission about the use of prohibited practices that occurred during that year and about the measures taken.
3. For high-risk AI systems related to products covered by the Union harmonisation legislation listed in Section A of Annex I, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts. By derogation from the first subparagraph, and in appropriate circumstances, Member States may designate another relevant authority to act as a market surveillance authority, provided they ensure coordination with the relevant sectoral market surveillance authorities responsible for the enforcement of the legal acts listed in Annex I.
4. The procedures referred to in Articles 79 to 83 of this Regulation shall not apply to AI systems related to products covered by the Union harmonisation legislation listed in Section A of Annex I, where such legal acts already provide for procedures ensuring an equivalent level of protection and having the same objective. In such cases, the relevant sectoral procedures shall apply instead.
5. Without prejudice to the powers of market surveillance authorities under Article 14 of Regulation (EU) 2019/1020, for the purpose of ensuring the effective enforcement of this Regulation, market surveillance authorities may exercise the powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely, as appropriate.
6. For high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by Union financial services law, the market surveillance authority for the purposes of this Regulation shall be the relevant national authority responsible for the financial supervision of those institutions under that legislation in so far as the placing on the market, putting into service, or the use of the AI system is in direct connection with the provision of those financial services.
7. By way of derogation from paragraph 6, in appropriate circumstances, and provided that coordination is ensured, another relevant authority may be identified by the Member State as market surveillance authority for the purposes of this Regulation. National market surveillance authorities supervising credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by Regulation (EU) No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the prudential supervisory tasks of the European Central Bank specified in that Regulation.
8. For high-risk AI systems listed in point 1 of Annex III, in so far as the systems are used for law enforcement purposes, border management and justice and democracy, and for high-risk AI systems listed in points 6, 7 and 8 of Annex III to this Regulation, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities, or otherwise interfere with their activities when acting in their judicial capacity.
9. Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice of the European Union acting in its judicial capacity.
10. Member States shall facilitate coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex I, or in other Union law, that might be relevant for the high-risk AI systems referred to in Annex III.
11. Market surveillance authorities and the Commission shall be able to propose joint activities, including joint investigations, to be conducted by either market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness or providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office shall provide coordination support for joint investigations.
12. Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (‘API’) or other relevant technical means and tools enabling remote access.
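To make paragraph 12 more tangible: a provider might expose documentation and data-set descriptions to an authority through a remote, access-controlled, read-only interface. The sketch below is a hypothetical illustration only; the token scheme, resource names and in-memory store are all assumptions, not anything prescribed by the Regulation.

```python
import hmac

# Placeholder credential store; a real deployment would use proper
# authentication, authorisation and audit logging.
AUTHORISED_AUTHORITY_TOKENS = {"msa-example-token"}

RESOURCES = {
    "technical_documentation": "Annex IV technical documentation ...",
    "training_data_description": "Description of the training data sets ...",
    "validation_data_description": "Description of the validation data sets ...",
    "testing_data_description": "Description of the testing data sets ...",
}

def handle_authority_request(token: str, resource: str) -> str:
    """Serve documentation read-only to an authenticated authority."""
    # Security safeguard: constant-time token comparison.
    if not any(hmac.compare_digest(token, known) for known in AUTHORISED_AUTHORITY_TOKENS):
        raise PermissionError("not an authorised market surveillance authority")
    if resource not in RESOURCES:
        raise KeyError(f"unknown resource: {resource}")
    return RESOURCES[resource]
```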
13. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled:
(a) access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and,
(b) testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient.
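The gate in paragraph 13 above is a reasoned request plus two cumulative conditions, which reduces to a conjunction. A minimal sketch, with invented parameter names, might look like this:

```python
def source_code_access_permitted(reasoned_request: bool,
                                 necessary_for_conformity_assessment: bool,
                                 other_procedures_exhausted_or_insufficient: bool) -> bool:
    # A reasoned request is the entry condition; conditions (a) and (b)
    # of Article 74(13) must then both be fulfilled.
    return (reasoned_request
            and necessary_for_conformity_assessment
            and other_procedures_exhausted_or_insufficient)
```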
14. Any information or documentation obtained by market surveillance authorities shall be treated in compliance with the confidentiality obligations set out in Article 78.
Article 76
1. Market surveillance authorities shall have competences and powers to ensure that testing in real world conditions is in accordance with this Regulation.
2. Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory sandbox under Article 59, the market surveillance authorities shall verify the compliance with the provisions of Article 60 as part of their supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in real world conditions to be conducted by the provider or prospective provider, in derogation from the conditions set out in Article 60(4), points (f) and (g).
3. Where a market surveillance authority has been informed by the prospective provider, the provider or any third party of a serious incident or has other grounds for considering that the conditions set out in Articles 60 and 61 are not met, it may take either of the following decisions on its territory, as appropriate:
(a) to suspend or terminate the testing in real world conditions;
(b) to require the provider or prospective provider and users to modify any aspect of the testing in real world conditions.
4. Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has issued an objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate the grounds therefor and how the provider or prospective provider can challenge the decision or objection.
5. Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it shall communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system has been tested in accordance with the testing plan.
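The supervisory options in Article 76(3) amount to a small decision procedure: on a serious incident, or where the testing conditions are no longer met, the authority may suspend or terminate the testing, or require modifications. The sketch below is one plausible modelling of that choice; the enum, function name and the priority given to serious incidents are assumptions for the example.

```python
from enum import Enum, auto
from typing import Optional

class TestingDecision(Enum):
    SUSPEND_OR_TERMINATE = auto()   # Article 76(3), point (a)
    REQUIRE_MODIFICATION = auto()   # Article 76(3), point (b)

def review_real_world_testing(serious_incident: bool,
                              conditions_of_articles_60_61_met: bool) -> Optional[TestingDecision]:
    """Pick an intervention, if any; either option is legally available."""
    if serious_incident:
        return TestingDecision.SUSPEND_OR_TERMINATE
    if not conditions_of_articles_60_61_met:
        return TestingDecision.REQUIRE_MODIFICATION
    return None  # no grounds to intervene
```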
Article 78
1. The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union and national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
(a) the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943;
(b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;
(c) public and national security interests;
(d) the conduct of criminal or administrative proceedings;
(e) the integrity of classified information in accordance with Union or national law.
2. The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data that is strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers in compliance with this Regulation and Regulation (EU) 2019/1020. They shall put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law.
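The deletion duty in paragraph 2 is, in substance, purpose-bound retention: each piece of collected data is tied to the assessment it serves and is purged once that purpose is fulfilled. A minimal sketch, assuming a simple record structure invented for this example, follows:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class CollectedRecord:
    payload: bytes
    purpose: str                               # the risk assessment it was collected for
    purpose_fulfilled_at: Optional[datetime] = None  # set when no longer needed

def purge_no_longer_needed(records: List[CollectedRecord]) -> List[CollectedRecord]:
    """Retain only records whose collection purpose is still open."""
    return [r for r in records if r.purpose_fulfilled_at is None]
```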
3. Without prejudice to paragraphs 1 and 2, information exchanged on a confidential basis between the national competent authorities or between national competent authorities and the Commission shall not be disclosed without prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.
When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in point 1, 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 74(8) and (9), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof.
4. Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their relevant authorities, as well as those of notified bodies, with regard to the exchange of information and the dissemination of warnings, including in the context of cross-border cooperation, nor shall they affect the obligations of the parties concerned to provide information under criminal law of the Member States.
5. The Commission and Member States may exchange, where necessary and in accordance with relevant provisions of international and trade agreements, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.
Artificial Intelligence (AI) has the potential to revolutionize various sectors, but it also carries significant risks. The provisions quoted above show how the AI Act addresses those risks in practice: through coordinated governance bodies, independent scientific advice, market surveillance, supervised real-world testing, and strict confidentiality safeguards.
In conclusion, the European Parliament's enactment of the Artificial Intelligence Act marks a pivotal moment in the regulation and governance of AI technologies within the European Union. This comprehensive legislation represents a significant step towards balancing innovation with accountability, aiming to harness the potential of AI while safeguarding fundamental rights and values.
One of the Act's primary strengths lies in its risk-based approach, categorizing AI systems according to their potential risks to safety, fundamental rights, and societal values. By distinguishing between unacceptable risks and high-risk applications, the legislation ensures that stringent requirements are applied where they are most needed, such as in critical infrastructure, law enforcement, and essential public services. This targeted approach not only mitigates risks but also fosters public trust in AI technologies.
Moreover, the Act emphasizes transparency and accountability throughout the AI lifecycle. Requirements for data governance, documentation, and human oversight ensure that AI systems are developed and deployed responsibly. By promoting transparency in AI decision-making processes, including explainability and traceability, the legislation enhances accountability and facilitates recourse in cases of adverse outcomes or misuse.
The European Parliament's commitment to promoting human-centric AI is evident in the Act's provisions for ensuring fundamental rights and ethical considerations. Safeguards against discrimination, bias, and manipulation seek to prevent AI systems from perpetuating or exacerbating existing societal inequalities. By prioritizing human agency and dignity, the legislation upholds the EU's commitment to a digital future that respects and protects individuals' rights.
Furthermore, the Act sets a precedent for global AI governance standards by advocating for international cooperation and alignment. By encouraging dialogue and collaboration with international partners, the European Union aims to establish harmonized norms and frameworks that promote innovation while upholding shared values and principles. This approach not only enhances global competitiveness but also strengthens global governance of emerging technologies.
Critically, the Act acknowledges the dynamic nature of AI technologies and the ongoing need for adaptive regulation. By incorporating mechanisms for continuous monitoring, evaluation, and revision, the legislation ensures that regulatory frameworks remain effective and responsive to technological advancements and societal changes. This forward-looking approach positions the European Union as a leader in AI governance, capable of navigating future challenges and opportunities in the digital age.
Nevertheless, challenges remain in implementing and enforcing the Artificial Intelligence Act effectively. Coordination among EU member states, industry stakeholders, and regulatory bodies will be crucial to ensure consistent interpretation and application of the legislation across borders. Addressing technical complexities, such as defining and assessing AI risks, will require ongoing collaboration and expertise from diverse fields including technology, law, ethics, and policy.
Ultimately, the European Parliament's adoption of the Artificial Intelligence Act represents a milestone in shaping the future of AI regulation globally. By promoting responsible innovation, protecting fundamental rights, and fostering international cooperation, the Act establishes a robust framework for harnessing the benefits of AI while mitigating risks. As AI continues to transform industries and societies, the European Union's commitment to human-centric AI governance sets a precedent for inclusive and sustainable digital development worldwide. Through continuous adaptation and collaboration, the EU is poised to lead in shaping a future where AI serves humanity while respecting our shared values and principles.