POLICY

Regulating AI: An Overview of Existing and Forthcoming Governance Frameworks

4 March, 2023

Summary and analysis of the GDPR, COPPA, the DMCA, the Online Safety Bill, and the EU AI Act

By: Tess Buckley

This article highlights three current laws used to govern AI: the GDPR, COPPA and the DMCA. Each law is presented with a summary of its goals, its tangible impacts, and a case study of its enforcement. The implications of reappropriating old laws, made to govern humans, for digital technologies are then discussed. Finally, incoming AI- and digitalisation-specific regulations, namely the Online Safety Bill and the EU AI Act, are outlined along with their implications and potential use cases.

General Data Protection Regulation (GDPR) (EU)

Summary: The GDPR came into effect on May 25th, 2018. It focuses on key areas of data governance, data management and data transparency. The goal of the GDPR is to provide data privacy and security measures and to allow users to control the storage and use of their own data.

Impact: This legislation provides a legal framework for keeping personal data safe by requiring companies to implement processes for handling and storing personal information securely. Companies must now abide by the ‘data protection principles’ (set out in the Data Protection Act) and strive for data minimisation, security, accuracy, and storage limitation. The key stakeholders affected by this law are data controllers and processors.

Case study: E.U. authorities determined that placing legal consent within Meta's terms of service essentially forced users to accept personalized ads. As reported: “Meta suffered a major defeat on Wednesday that could severely undercut its Facebook and Instagram advertising business after European Union regulators found it had illegally forced users to effectively accept personalized ads. The decision, including a fine of 390 million euros ($414 million), has the potential to require Meta to make costly changes to its advertising-based business in the European Union, one of its largest markets.”

Given that regulation of AI is only in its nascent stages, there are few case studies of regulatory measures designed explicitly to mitigate the multitude of ways AI can negatively impact society. Thus far, regulators have primarily adopted an approach to AI regulation that relies on reappropriating policies initially created to govern humans (citizens with personhood) or technology in general, such as COPPA and the DMCA. Governments and legislators have begun proposing enforceable AI-specific laws, which are forthcoming, such as the AI Act (EU) and the AI Bill of Rights (US).

Why is it not optimal to reappropriate old laws, made to govern humans, for digital technologies?

Firstly, AI lacks personhood, so laws that require this status (i.e., copyright) are not entirely applicable. There is no clear locus of responsibility in the complex web of relations inherent in the multiplayer model of tech teams. This responsibility gap is not accounted for when applying human law to technologies. Cases often attempt to identify a single individual responsible for an AI system, but the multistakeholder teams innate to big tech make this challenging. There are at least ten entities among the many possible stakeholders who are partially, indirectly, or temporarily involved in the invention process. These categories can overlap or remain distinct: software programmers, data suppliers, trainers/feedback suppliers, owners of the AI system, and operators of the system, to name a few.

Secondly, this is a grey area in the law. The same law can be understood in a multitude of ways, and this flexibility often proves helpful for lawyers. With a topic as divisive as AI, however, similar cases are being decided in opposing ways, skewing the ‘chain novel’ of legal precedent. There is a continual narrative that law cannot ‘keep up’ with technology, with many regulatory institutions falling more than five years behind the pace of emerging tech. Reappropriating old laws is a band-aid solution that only highlights this incompatibility of pace.

Thirdly, reappropriating laws for machines risks delegitimizing those same laws when applied to humans. AI has implications for our relationship to the law, the legal profession, and the legitimacy of the rule of law. The legitimacy of judicial institutions is founded to a large extent on their moral authority, which they command because they are seen to respect the individual and first-person subjectivity, which AI systems lack. As a result, AI decision-making may widen the gulf between machine law and human conceptions of justice, to the point where legal institutions cease to command loyalty and legitimacy.

Children’s Online Privacy Protection Act (COPPA) (USA)

Summary: COPPA was enacted in 1998. This law focuses on children's safety and parental consent. The goal of COPPA is to impose certain requirements on operators of websites and online services directed at children under thirteen, while placing parents in control of what information is collected from their children online.

Impact: COPPA requires commercial website operators and online services to provide notice of data collection, collect only the data reasonably necessary for a child to participate online, deploy and maintain security measures to protect that information, and obtain verifiable consent from a child's parents before collecting or using the child's information.

Case study quote: “Google and its YouTube subsidiary in the United States will pay a record $170 million to settle allegations by the Federal Trade Commission and the New York Attorney General that the YouTube video sharing service illegally collected personal information from children without their parents' consent. The settlement is the largest ever obtained by the FTC under a Children’s Online Privacy Protection Rule (COPPA) case in the rule’s 21-year history. The companies will pay the FTC $136 million and $34 million to New York for the alleged COPPA violations.”

Digital Millennium Copyright Act (DMCA) (USA)

Summary: The DMCA came into effect in 1998. This federal law criminalizes the production and dissemination of technology, devices, or services intended to circumvent measures that control access to copyrighted works. The goal of the DMCA is to protect copyright holders from online theft.

Impact: The DMCA protects artists from piracy by criminalizing the duplication of digital copyrighted works and their sale or free distribution, holding anyone who does so legally accountable. Under the DMCA, companies that host copyrighted digital content are responsible for removing it when notified by the copyright holder.

Case study quote: “[In] the United States District Court for the Northern District of California, artists Karla Ortiz, Kelly McKernan, and Sarah Andersen, represented by Joseph Saveri Law Firm, claim that Stability AI and DeviantArt companies have violated copyright laws by using their images, along with those of tens of thousands of other artists, to train their image generators and produce derivative works. The plaintiffs claim that these companies have infringed on 17 U.S. Code § 106, exclusive rights in copyrighted works, the Digital Millennium Copyright Act, and are in violation of the Unfair Competition law”

This article will now turn to incoming regulations designed specifically to deal with the potential harms of AI and digital technology. Namely, the Online Safety Bill (UK), considered one of the most far-reaching attempts to regulate how tech companies deal with users' content to date, and the AI Act, which aims to harmonise rules placed on AI systems in the EU.

Online Safety Bill (UK)

Summary: The Online Safety Bill was published in draft in May 2021. It claims to defend free expression online. The rules in this bill target firms that ‘host user-generated content’ or allow users to interact with one another. Ideally, the bill works to minimise the harms arising from user-generated content and search results, with a focus on protecting children and empowering adults. This regulatory framework places the responsibility for protecting citizens from harmful user-generated content on tech companies.

Use Case: This bill will force platforms to tackle and remove illegal material on their sites, highlighting material relating to terrorism and child sexual exploitation and abuse. Any platform that fails to comply faces fines of up to 10% of its revenue, with the highest penalty regulators can enforce being blocked. The bill also prioritises child users by making it the platforms' duty to protect younger people from legal yet harmful content (e.g., eating disorder or self-harm content). Further, high-risk platforms will have to address adult users' exposure to harmful material as well, including issues such as abuse or harassment (e.g., cyber flashing or epilepsy trolling), and make all these actions clear in their terms and conditions. On top of this, services will have to include ‘empowerment tools’ that give adults control over how and with whom they interact on the platform. Finally, the largest platforms must find ways to prevent harmful scam advertisements.

Implications: As detailed by Article 19, this bill imposes a duty of care on providers of user-to-user services, yet the scope of those duties varies significantly, depending on the service itself and the nature of the content in question. Although the bill addresses key duties (the duty to address illegal content, content harmful to adults, and content harmful to children), it fails to address the problematic business models of platforms, which are the root causes of the propagation of problematic content. The bill requires censorship of protected speech and outsources decisions about the illegality of users' speech to the platforms themselves. It also weakens anonymity online and gives too much power to the Secretary of State, endangering the independence of the UK regulator, Ofcom.

The AI Act (AIA) (EU)

Summary: The AI Act is likely to become binding law in late 2023 or 2024. The AIA aims to establish and enforce horizontal legislation, ensuring that the regulation of AI systems is consistent across the EU. The act sets out rules for the trustworthy and safe deployment of any product that uses AI on the EU market. It is a major regulation that clarifies AI risk categories.

Use Case: The risk categories presented by the EU AIA are unacceptable-risk applications (e.g., government-run social scoring), which are banned, and high-risk applications (e.g., CV scanning to rank applicants), which face specific requirements; applications outside these categories are largely left unregulated. This bill not only sets a precedent in the EU but is likely to be treated as a global standard that inspires AI regulation internationally. Its legal framework for AI has already inspired Brazil’s Congress.

Implications: The Future Society, the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk provided comments and suggestions on the AIA. Three of the key complaints from these institutes were the lack of flexibility in labelling a system high-risk, the need to consider society at large rather than just the individual, and the presence of loopholes in the law itself. The exceptions in the law include, for example, police use of facial recognition systems to assist in finding missing citizens. The AIA draft is a start, but these unresolved implications are among the reasons it will not be enforced until late 2023 or 2024.