The AI Act is coming – Why Israeli businesses should care and start preparing for it
Lengthy and fierce discussions about the European Artificial Intelligence Act (AI Act) appear to have come to an end: the Council of the European Union recently approved the AI Act, it has passed the European Parliament’s Committee on the Internal Market and Consumer Protection, and it is expected to be approved by the European Parliament in April 2024. The AI Act is the world’s first comprehensive AI regulation, setting out harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the EU.
- Why is the AI Act relevant for Israeli businesses?
There are a few reasons why the AI Act may be relevant for businesses in Israel:
- Compliance with the AI Act may be required for companies that either operate an AI system in the EU or provide AI systems to customers in the EU. The law further applies to providers and deployers of AI systems outside the EU where the output produced by the AI system is used within the EU – a provision that significantly broadens the potential scope of the AI Act and is expressly intended to prevent circumvention of the law.
- In addition, the AI Act may influence the development of AI regulations in other jurisdictions. Other lawmakers may look to the EU’s regulatory approach as a model for their own AI regulations, or might align their regulations with the EU’s standards in order to enable cross-border trade and cooperation.
- Striking a Balance Between Risk and Innovation
The AI Act follows a risk-based approach, classifying AI systems into categories according to the level of risk they pose:
- Prohibited AI: Guarding Against Manipulation and Privacy Invasion
The legislation takes a firm stance against malicious practices by prohibiting AI systems that deploy purposefully manipulative or deceptive techniques, biometric categorisation systems that categorise individuals based on sensitive characteristics, social scoring, and real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
- High-Risk AI Systems: Balancing Power and Responsibility
A large part of the AI Act is dedicated to strict and extensive regulations for high-risk AI systems. Companies involved in AI must determine whether their AI system is “high-risk” in order to comply with the law. The AI Act recognizes two types of high-risk AI systems:
1) AI systems that are products, or safety components of products, covered by specific EU legislation in sectors such as civil aviation, vehicle safety, and personal protective equipment; and
2) AI systems listed in Annex III, which includes remote biometric identification and AI used in education, employment, law enforcement, migration, and more.
- General-Purpose AI Models: Illuminating the Algorithms
General-purpose AI (GPAI) models, being the building blocks of AI systems, play a pivotal role in shaping our technological future. They are defined as AI models that display “significant generality”, are “capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market”, and “can be integrated into a variety of downstream systems or applications”. Recognizing their significance, the AI Act sets out specific transparency and documentation requirements for the development and deployment of such models, ensuring that downstream providers and users understand their capabilities and functionality.
- Requirements for High-Risk AI Systems
Providers of high-risk AI systems must meet strict requirements to ensure that their AI systems are trustworthy, transparent and accountable. This includes, among other things, conducting risk assessments, using reliable data, documenting technical and ethical choices, maintaining performance records, informing users about the nature and purpose of their systems, enabling human oversight, ensuring accuracy and resilience, addressing cybersecurity concerns, testing for compliance, and registering systems in a publicly accessible EU database.
In addition, the AI Act imposes strict obligations across the value chain of a high-risk AI system: not only the ‘provider’ of a high-risk AI system needs to be compliant, but also the ‘importer’, ‘distributor’ and ‘deployer’ of such a system. Broadly speaking, the importer needs to verify the system’s conformity by reviewing various documentation, whereas the distributor is required to verify that the system bears the required CE (conformité européenne) marking.
The deployer (in previous drafts also called the ‘user’ of the AI system) also has various obligations when it utilizes a high-risk AI system, one being the obligation to use the high-risk AI system in accordance with the provider’s instructions for use. This will be important in any potential liability discussion with the provider.
- Transparency Obligations for AI Systems and GPAIs
The AI Act puts transparency in the foreground. If a person interacts with an AI system, they need to be informed that they are interacting with an AI system rather than with another person. Exceptions apply where this is obvious from the circumstances or where the AI system is used for the detection and prosecution of criminal offences.
Similarly, the outputs of generative AI systems, including GPAI models (e.g. audio, image, video or text content), need to be marked as artificially generated or manipulated.
Where AI systems are used for emotion recognition or biometric categorization, the people exposed to such a system have to be informed of its operation.
In the case of deep fakes, the content must be labelled as having been artificially created or manipulated.
- Sanctions
The penalties under the AI Act can be very high. Engaging in a prohibited AI practice can lead to a fine of up to EUR 35 million or 7% of the company’s total worldwide annual turnover, whichever is higher. For infringements of the requirements for high-risk AI systems, the fine may be as high as EUR 15 million or 3% of the total worldwide annual turnover, whichever is higher.
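To illustrate how these caps operate (using a purely hypothetical turnover figure): for a company with a total worldwide annual turnover of EUR 1 billion, 7% amounts to EUR 70 million, which exceeds the fixed EUR 35 million amount, so the maximum fine for a prohibited practice would be EUR 70 million; the corresponding cap for a high-risk infringement would be EUR 30 million (3% of turnover).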
- Next steps
We expect that the AI Act will be published in its final form in mid-2024. The AI Act will enter into force 20 days after its publication in the Official Journal of the EU. Most of its provisions will apply after 24 months. The rules on prohibited AI systems will apply after 6 months, the provisions on GPAI models after 12 months, and the provisions regulating high-risk AI systems that are covered by existing EU product legislation after 36 months.
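By way of illustration, and assuming – hypothetically – publication in June 2024: the AI Act would enter into force in July 2024, the prohibitions would apply from around January 2025, the GPAI provisions from July 2025, most other provisions from July 2026, and the product-related high-risk provisions from July 2027.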
____________________________________________________________________________________________________________________
The review was written by Rotem Perelman-Farhi, Partner and Head of the firm’s Technology, IP & Data Department, and Dr. Laura Jelinek, Associate in the firm’s Technology, IP & Data Department.
_____________________________________________________________________________________________________________________
* This newsletter is provided for informational purposes only, is general in nature, does not constitute a legal opinion or legal advice and should not be relied on as such. If you are seeking legal advice, it is essential to review the specific facts of each case in detail with a qualified lawyer.