The New Regulatory Package for Artificial Intelligence
On 21 April 2021, the European Commission released a proposal for a new regulatory package for Artificial Intelligence (AI). The proposal comprises three main instruments, each with a different purpose related to the implications of AI technology for businesses and communities.
The main components of the package are:
- Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act or The Act)
- Regulation on Machinery Products
- 2021 Coordinated Plan On Artificial Intelligence (2021 Coordinated Plan on AI)
The regulatory proposal for an Artificial Intelligence framework is built on a set of objectives set forth by the Commission: to establish minimum requirements addressing the risks and problems linked to AI. The Commission’s vision is to do this without constraining or hindering technological development or disproportionately increasing the cost of placing AI solutions on the market.
The Commission sets out the following goals:
- ensuring that AI systems placed and used on the Union market are safe and respect existing law on fundamental rights and Union values;
- providing legal certainty to facilitate investment and innovation in AI;
- enhancing governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- facilitating the development of a single market for lawful, safe and trustworthy AI applications and preventing market fragmentation.
While all three acts target the harmonisation of rules for the use of AI, they all differ in their scopes of application.
Regulation of The European Parliament and of The Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)
The Artificial Intelligence Act sets harmonised rules for the development, placement on the market, and use of AI systems in the Union following a proportionate risk-based approach. Moreover, it proposes a single future-proof definition of AI.
As article 1 mentions, the proposed Regulation lays down harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union. It also explicitly describes the prohibited artificial intelligence practices and establishes specific requirements for high-risk AI systems and obligations for operators of such systems. Furthermore, the Act harmonises transparency rules for AI systems interacting with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content. Last but not least, the Regulation shall provide rules on market monitoring and surveillance.
Based on the definition from article 3 of the Act, “artificial intelligence system (AI system)” means software that is developed with one or more techniques and approaches listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
The new Regulation, and implicitly the new AI system definition, will apply to EU providers and to providers established in a third country that place AI systems on the market or put them into service in the Union. The Act also imposes specific requirements on users of AI systems located within the Union, and on providers and users of AI systems situated in a third country where the output produced by the system is used in the Union.
In sum, the Regulation shall apply regardless of the location of the provider, the AI system or the user, as long as the AI system is placed on the market in the European Union or the system’s output is used there.
The Artificial Intelligence Act enumerates in its article 5 the systems that shall be prohibited in the European Union for use, distribution or importation.
For example, it shall be prohibited to place on the market, put into service or use an AI system that deploys subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm. This prohibition covers AI systems that use techniques impossible or very difficult for a person to consciously detect, but that can harm their physical or mental health through their use.
There are more such examples expressly provided in the proposed Regulation. However, the Act is more concerned with high-risk AI systems, specifying the categories that fall under this term, the obligations of providers, distributors and importers of such technologies, and both technical and operational requirements for high-risk AI systems.
As part of the mandatory requirements, AI-related businesses must establish, implement, document and maintain a risk management system for high-risk AI systems. However, the risk management system should not simply be implemented at the beginning of the activity and then forgotten in the following years. It shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular, systematic updating, with specific steps to be taken in the process. Moreover, to identify the most appropriate risk management measures, high-risk AI systems shall undergo testing, which ensures that they perform consistently for their intended purpose and comply with the requirements set out in the Regulation.
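To make the "continuous iterative process" concrete, the sketch below models a hypothetical risk register that is reviewed repeatedly over a system’s lifecycle. All names, the severity scale and the workflow are illustrative assumptions, not terminology or procedure prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One identified risk; the 1-5 severity scale is an illustrative assumption."""
    description: str
    severity: int
    mitigated: bool = False

@dataclass
class RiskRegister:
    """Hypothetical register backing a continuous, iterative risk-management process."""
    risks: list = field(default_factory=list)
    review_count: int = 0

    def identify(self, description: str, severity: int) -> None:
        # Step 1 of each iteration: record newly identified risks.
        self.risks.append(Risk(description, severity))

    def mitigate(self, description: str) -> None:
        # Step 2: mark a risk as addressed by a risk management measure.
        for risk in self.risks:
            if risk.description == description:
                risk.mitigated = True

    def review(self) -> list:
        # Step 3: one lifecycle iteration; return the risks still open,
        # which feed the next round of testing and updating.
        self.review_count += 1
        return [r for r in self.risks if not r.mitigated]

register = RiskRegister()
register.identify("biased training data", 4)
register.identify("model drift after deployment", 3)
register.mitigate("biased training data")
open_risks = register.review()
print(len(open_risks))  # 1 risk remains open for the next iteration
```

The point of the loop structure is that `review()` is meant to be called for as long as the system is on the market, mirroring the Regulation’s requirement that risk management run across the entire lifecycle rather than once at launch.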
Another essential requirement for a high-risk AI system shall be drawing up technical documentation before that system is placed on the market or put into service and keeping it up to date. It must thus be continuously reviewed and updated during the entire existence of the system. The purpose of the technical documentation is to demonstrate that the high-risk AI system complies with the requirements set out in the Act.
We must underline the importance of keeping high-risk AI systems as transparent as possible to users. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent for users, so that they can interpret the system’s output on their own and use it appropriately. Article 13 of the proposed Regulation expressly indicates that instructions for use must always accompany high-risk AI systems. They can be presented in an appropriate digital format or otherwise and shall include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users, such as:
- identity and contact details of the provider and its authorised representative;
- characteristics, capabilities and limitations of performance of the high-risk AI system;
- changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment;
- human oversight measures: under this requirement, high-risk AI systems must be designed and developed so that natural persons can effectively oversee them while the AI system is in use;
- the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure its proper functioning, including as regards software updates.
Businesses that provide high-risk AI systems must comply with a substantial set of rules. The Act thus indicates that, besides the above-mentioned obligations, providers of high-risk AI systems must:
- have a quality management system in place;
- draw up the technical documentation of the AI system;
- keep the logs automatically generated by their high-risk AI systems when those logs are under their control;
- ensure that the high-risk AI system undergoes the relevant conformity assessment procedure before it is placed on the market or put into service;
- comply with the registration obligations and take the necessary corrective actions if the high-risk AI system does not conform to the requirements.
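The list of provider obligations above lends itself to a simple compliance checklist. The sketch below is a minimal illustration of tracking which obligations remain outstanding; the key names paraphrase the text of the Act and are assumptions, not official terminology.

```python
# Illustrative checklist of provider obligations for high-risk AI systems,
# paraphrased from the proposed Act; the keys are not official terms.
PROVIDER_OBLIGATIONS = {
    "quality_management_system": False,
    "technical_documentation": False,
    "log_retention": False,
    "conformity_assessment": False,
    "registration": False,
    "corrective_actions_process": False,
}

def outstanding(obligations: dict) -> list:
    """Return the names of obligations not yet satisfied."""
    return [name for name, done in obligations.items() if not done]

# Example: a provider that has completed only its technical documentation.
status = dict(PROVIDER_OBLIGATIONS, technical_documentation=True)
print(outstanding(status))  # five obligations still outstanding
```

Such a checklist is of course no substitute for the conformity assessment procedure itself; it only illustrates that the Act imposes several parallel obligations that must all be satisfied before placing a system on the market.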
The Regulation also discusses when distributors, importers, users or other third parties shall be considered providers. They shall then be subject to the same obligations as the provider, mentioned above, in any of the following circumstances:
(a) they place on the market or put into service a high-risk AI system under their name or trademark;
(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they make a substantial modification to the high-risk AI system.
The AI framework supports innovation through measures such as regulatory sandboxes and specific provisions helping small-scale users and providers of high-risk AI systems comply with the new rules. As the preceding explanations show, the proposal aims to strengthen Europe’s competitiveness and industrial basis in AI.
The Artificial Intelligence Act shall be binding in its entirety and directly applicable in all Member States 20 days after its publication in the Official Journal of the European Union.