POLICY

The AI Liability Directive: The Business Case for DPIAs and the Need for Granular Digital Risk Assessments

3 March, 2023


Written by Jennifer Waters, Edited by Tess Buckley

The AI Liability Directive is a valuable update to existing EU liability rules, and paves the way for a more trustworthy approach to the integration of AI into public and private enterprises. The Directive ensures that those harmed by AI systems are not disproportionately affected by the time and expense of litigation. As will be discussed, the goals and measures of the new Directive complement the EU AI Act and incentivise the incorporation of its Data Protection Impact Assessments (DPIAs) for both high-risk and low-risk data processing services. The Directive's incentivisation of DPIAs is a valuable tool for promoting an informed public as well as ethical tech investments.

The Directive's emphasis on increasing societal trust of opaque AI through access to documentation like DPIAs is very beneficial. However, its final provision, which contemplates introducing strict liability rules for certain AI systems, may lead to controversial questions about which harms are not unacceptably risky, and why.

Contents of the Directive

The AI Liability Directive is a proposal to adapt EU law to the realities of AI development, specifically the rules of liability when AI systems cause damage or harm. AI systems are characteristically opaque, which makes it far more difficult (and therefore more expensive) to identify who should be held accountable for damage or harm caused by a system. Further, setting out why a human is held responsible for an AI system's harm establishes a tractable, consistent approach to accountability at a time when the autonomous, opaque characteristics of AI can obscure it. The Directive conveys a clear message: the harm of an AI system is due to nothing more than the non-compliance of the humans who operate it.

The Directive provides two modifications to existing liability rules: 1) the introduction of a 'presumption of causality' and 2) access to new kinds of proof, specifically the ability for the courts to access relevant information about high-risk or low-risk AI systems. The first modification establishes that those harmed by AI are not required to understand and reference the inner workings of the AI system itself to make the case that their harm was due to its operator's non-compliance with surrounding regulation (including the GDPR and the AI Act, but also national laws and 'duties of care'). When the presumption of causality applies, the mechanisms of the AI can be ignored: those affected only need to prove that someone did not adhere to the risk-mitigation, privacy, cybersecurity, and other procedures required of them. The presumption of causality can also be rebutted by the operators of the AI system if they can establish that they had in fact adhered to every compliance and risk-mitigation requirement.

One way operators or those harmed can establish whether the 'presumption of causality' should apply is through the second provision of the Directive, the new proof available to courts, specifically the documentation required by the AI Act. The AI Act requires increased levels of documentation for high-risk AI systems, including the procedures taken to minimise the risk of harm. The ability to review a company's required risk-mitigation procedures and pinpoint where non-compliance may (or may not) have occurred provides a consistent basis for assessing who is responsible for the harm, and the extent of any negligence.

The real-world impact of the Directive is significant. Not only does the Directive empower those harmed by AI systems (for example, job applicants who believe they have been discriminated against by a company's AI-driven application process), it also avoids an arbitrary and confusing application of the law: Member States have begun to amend their own liability rules in light of AI, but in differing ways. Cross-border businesses, for example, would need to spend money and time learning the laws of each country in which their AI system is accused of causing harm. If such fragmentation continued, investigations, and even assignments of responsibility, would vary from country to country, leading to heightened legal uncertainty and expense for everyone involved.

Understanding DPIAs

The Directive is complementary to the EU AI Act and specifically mentions the Act's documentary requirements for high-risk AI systems in its proposals, including the Data Protection Impact Assessment (DPIA).

Data Protection Impact Assessments (DPIAs)

DPIAs are a systematic description of processing operations and their purposes, an assessment of their necessity and proportionality, an assessment of the risks to the rights and freedoms of data subjects, and the measures taken to address those risks. DPIAs are a safe middle ground between data subjects' exercise of the right to explanation and a business's protected trade secrets. They are required to be understandable to the general public, not only to experts, providing an approachable tool for public understanding of, and therefore trust in, the AI systems they describe. This approachability also means that DPIAs do not reveal the trade secrets of a particular system, only its general functions, risks, and the mitigations for those risks. The main focus of a DPIA is an AI system's necessity, risk, and overall functionality, not the specific elements that make its outcomes distinct from the competition.

DPIAs in the AI Act

The AI Act only requires a DPIA where the AI system qualifies as high-risk. However, while the Directive is considered complementary to the AI Act, and claims that its provisions do not introduce the 'expectation' of documentary evidence at levels similar to those required for high-risk systems, the ability for courts to apply the presumption of causality and order the disclosure of relevant information for low-risk systems nonetheless incentivises low-risk operators to create such documentation.

The Business Case for DPIAs

If a DPIA allows legal proceedings to be carried out more quickly, and can also be referenced by the defendant to establish a rebuttal, DPIAs are valuable precautions for companies with AI systems of any risk level. The economic incentive in the Directive applies to providers of both high-risk and low-risk AI systems, especially SMEs that may lack an in-house legal team or technical expertise. While such documentation is not strictly 'expected', the possibility for judges to apply the presumption of causality to any AI system suggests that the Directive indirectly extends the documentary requirements of the AI Act to low-risk systems.

Whether prepared for high-risk or low-risk AI systems, DPIAs are valuable tools for assessing the values and efforts of a company. While DPIAs are not required to be made publicly available, companies that publicly disclose their DPIA and take that extra step towards transparency and public understanding of their systems should be commended and encouraged. For those investing ethically, the public availability and content of DPIAs are important factors to consider when selecting for high levels of corporate digital responsibility. Not only do these companies represent upstanding business practices, but if any issues arise, they will be best suited to navigate litigation involving the EU's liability rules.

Strict Liability Rules and the Need for Granular Risk Assessments

The final provision of the Directive promotes discussion, in approximately five years' time, of the potential need for strict liability rules for particular AI systems.

Where fault-based liability claims pertain to an oversight, negligence, or non-compliance of a human that led to harm, strict liability claims pertain to an inherent responsibility for harm even if those responsible were completely compliant. A common example differentiates the requirements for cases involving cattle and those involving a tiger. If damage were caused by roaming cattle, the person harmed would have to prove that the farmer's negligence in, say, repairing his fence makes him responsible for the damage and liable to cover particular expenses. If the damage were caused by a tiger, the claimant would not be required to present the condition of the farmer's fence as evidence, as it is considered inherent that the farmer's ownership of a dangerous animal leaves him responsible for any harm that animal causes, no matter how careful he was.

Introducing strict liability rules for AI systems, alongside the complementary 'mandatory insurance' advocated by the European Parliament, would be a profound step in allocating responsibility with respect to AI systems. The introduction of strict liability rules seems to establish a more granular spectrum of AI systems and the risks outlined in the AI Act. Given the variability of AI systems, a more granular, nuanced risk assessment would be extremely valuable; however, considerable explanation and justification will be needed. If particular AI systems are considered inherently dangerous to the extent that they warrant strict liability rules, what factors would differentiate the 'inherently dangerous' from the 'unacceptably risky' AI systems established in the AI Act?

The line between 'inherently dangerous' and 'clearly threatening' may be thinner than some are comfortable with, and may lead to many controversial liability cases in the years to come. For example, recent controversies involving deepfakes, specifically deepfake pornography, have led to proposals to ban non-consensual deepfake pornography. Would AI systems that facilitate deepfake pornography without asking for verification of consent qualify as unacceptably risky for their harms to real people, or would regulators instead argue that they fall under strict liability rules? Discussions surrounding any ruling would almost certainly revolve around gender-based violence, and any ruling would carry symbolic weight regarding the priority given to the risk of gender-based harm in the EU.

Conclusion

While the development of strict liability rules for AI systems continues the train of thought established by the AI Liability Directive, the Directive's other provisions serve to empower individuals and ensure a trustworthy integration of AI into multiple facets of our everyday lives. However, it must also be emphasised that the issue at the root of this Directive is unexplainable AI. As the use of DPIAs and compliance with the AI Act, the DSA and DMA package, and similar regulation become more commonplace, substantial resources and attention must continue to be devoted to the daunting task of making AI more explainable.