On March 13, 2024, following lengthy deliberations, the European Parliament passed the EU AI Act (“the Act”), which regulates the use of artificial intelligence in the European Union. The Act is also expected to set the tone for similar laws around the world, much as the GDPR did upon its entry into force, and it is quite possible that legislators in other countries will adopt the EU’s regulatory approach and seek to set similar standards. This is a welcome development, but what does it mean for an Israeli company? We will try to explain briefly. We emphasize that this publication does not exhaustively cover the comprehensive legislation, but highlights a number of important points in it.
1. Applicability of the law
Subject to exceptions and specific definitions, as a general rule the Act will apply when:
• an Israeli company operates an AI system in the European Union; or
• an Israeli company provides an AI system to customers in the European Union; or
• the output of an Israeli company’s AI system is used in the European Union (a definition that expands the Act’s application to Israeli companies and is intended to prevent a foreign company from escaping its reach); or
• the people affected by the AI systems are located in the European Union.
The Act also addresses providers, manufacturers, distributors, deployers and representatives, and imposes various obligations on each of them.
The Act distinguishes three categories:
• Prohibited AI systems – for example, systems intended for fraud, manipulation or deception, biometric categorization based on sensitive information, social scoring, or real-time remote biometric identification for law-enforcement purposes in public places.
• GPAI – General Purpose AI Models
A type of “general-purpose artificial intelligence”: general models used as part of AI systems that are able to “competently perform a wide range of distinct tasks regardless of the way the model is placed on the market, and which can be integrated into a variety of downstream systems or applications”. The European legislator regards these general models as the building blocks of AI systems, and their importance is therefore great. The Act sets specific guidelines for their development and use, so that users can understand the functionality and algorithms that underlie them.
• High-Risk AI Systems, which will be discussed in detail below.
We emphasize that the legislation applies to both GPAI models and High-Risk AI Systems (AI systems classified as prohibited may not be used at all).
2. Classification of AI systems and its practical effect on Israeli companies
• The classification determines the number and scope of the limitations and obligations that apply to an AI system.
• The European legislator imposes many obligations on both categories – obligations such as increased transparency, clear explanations to users and shared responsibility apply to both GPAI models and High-Risk AI Systems.
For example, a user who is “conversing” with an AI system must be informed of this, in a way that makes clear to the user that they are not conversing with a human being. There are exceptions in cases where this is self-evident or where the system is used for purposes of criminal enforcement.
In addition, outputs of AI systems must be marked as such, including outputs of GPAI models (for example, videos, images or text). In the case of deepfakes, the content must be marked as artificially created or manipulated.
In addition, if the AI system is used for emotion recognition or biometric recognition – users must be informed of this.
• The Act imposes additional duties on High-Risk AI Systems, which are subject to very extensive control, supervision, accountability and transparency obligations. As a rule, and subject to certain exceptions, these are systems in the following areas:
• Products subject to EU safety regulation;
• AI systems in critical areas, including:
administration of justice and democratic processes; law enforcement; immigration; biometric identification; education; essential services (including credit scoring, emergency services and certain types of insurance); employment; critical infrastructure.
It should be noted that there are exceptions and the definitions are complex.
The additional regulatory requirements for High-Risk AI Systems include:
• Data governance – strict adherence to the requirements applicable to the data used to train the system;
• Compliance checks;
• Accuracy, system robustness and cybersecurity;
• Implementation and use of a risk management system, including risk assessment;
• Informing users about the nature and purpose of the system;
• Human oversight to reduce risks;
• Registration of systems in a public EU database;
• Reliable data use;
• Record-keeping throughout the system’s “life cycle”;
• Transparency and clear instructions for use (which also address human oversight);
• Accurate technical documentation before the AI system is put into use.
After High-Risk AI Systems are put into use, regular monitoring is required, including:
• Implementing and documenting control measures for the systems, in accordance with a monitoring plan prepared in advance as part of the required documentation. Relevant information on the use of the AI systems must be collected, recorded and analyzed on an ongoing basis;
• Rapid reporting of serious incidents to the relevant supervisory authority where an AI system is located in the European Union, and cooperation with that authority, including through an internal investigation.
Here too, the Act imposes duties in relation to High-Risk AI Systems on the provider, distributor and deployer of the systems, with each bearing separate obligations.
3. Fines
The fines for violating the Act are expected to be high: prohibited use of AI systems may result in a fine of up to 35 million euros or up to 7% of global annual turnover (depending on the severity of the violation), while violations relating to High-Risk AI Systems may result in a fine of up to 15 million euros or up to 3% of global annual turnover (here, too, depending on the severity of the violation).
4. Entry into force of the law
The Act will enter into force 20 days after its official publication, which is expected in May or June 2024. Most of its provisions will apply two years after that publication; however, the rules on Prohibited AI Systems will apply after 6 months, the GPAI rules after 12 months, and the High-Risk AI Systems rules after 36 months.
5. If so, what should an Israeli company do at this stage?
• First, it is recommended that an Israeli company assess its potential exposure to the Act; for example, by mapping its use of AI and examining whether there is any potential impact on the European Union: whether its AI systems will be used in the European Union, whether it has end customers there, and whether its AI systems or their outputs otherwise affect the European Union.
• Second, it is recommended that an Israeli company assess how the AI system associated with it is expected to be classified.
• Third, it is recommended to seek professional advice in order to prepare for the Act’s entry into force.
We will be happy to assist you in any matter related to artificial intelligence legislation, both in the European Union and in other jurisdictions.
The authors are attorneys Rotem Perelman-Parhi, partner and head of the technology, intellectual property and privacy protection department, and Einat Goldstein of the same department.
For the avoidance of doubt, the foregoing serves as general information only and does not constitute legal advice or a substitute for legal advice.