
Who is Responsible for the Actions of AI?


Written by Tess Buckley

This piece asks: who is responsible for the actions of artificial intelligence (AI)? We must question who is held accountable for both the risks and rewards of AI action, because a lack of accountability blocks moral and legal culpability.

Sadly, current policy does not properly account for technological developments or for the multiplayer model of AI teams, making accountability for AI action especially difficult to attain. This team structure is not accounted for in current law, which results in a lack of clarity and in distributed responsibility, where ‘everyone and no one’ is held accountable for the actions of AI. Meanwhile, flawed IT systems linger in government, creating an accountability vacuum.

There is an arms race between law and AI. Technology and policy move at structurally different paces, and this mismatch leaves a gap in the policy needed to support accountability in AI teams. In practice, this grey area can skew the chain novel of law, resulting in miscommunication and wrongly accused agents. There must be a new approach to the governance of multiplayer AI teams, one that holds a single entity fully responsible, and therefore accountable, for both the rewards and risks of robotics.

Strong AI: Intentional Action, Autonomy and Agency

A key distinction between the two types of AI, weak (narrow) AI and strong AI, and one that leads to questions of accountability, is the degree of autonomy present in strong AI. Strong AI relies on complex algorithms that allow a system to act in situations where it can make independent decisions without any human interaction.

In this respect, strong AI is like a human being. Yet humans hold a stronger capacity for autonomy because we can choose our own goals, while strong AI only makes choices between options presented to it by humans. Strong AI therefore lacks the capacity for intentional action, and with it both autonomy and agency, the prerequisites for accountability, which raises the question: who is responsible for its actions?

Intention in AI: The Distinct Relationship between AI Creation and the Post Office Scandal

Morality is what we must question: if the actions of AI are not intentional, how can we discipline them properly? Morality is a construct of justice that allows us to distinguish between right and wrong. AI cannot act of its own volition and has no distinct ‘intentions’ behind its actions, so it is not a moral agent; to be morally responsible, an AI would have to be conscious.

There is a danger that governing bodies attribute ethical responsibility to algorithms and machine learning (ML) for decisions made by market and state actors. In these circumstances, AI is used as a scapegoat, and this should be resisted. A particular line of commentary holds that harmful actions could have been caused by faulty algorithms beyond the control of their human counterparts. In many cases, machine error becomes a path for officials to shift their responsibility onto the ‘error-prone’ machine.

ML algorithms have been blamed for biased resume screening and rejection decisions in hiring, and machines are frequently blamed when public distribution systems fail. What we must interrogate, but often forget, is the corporate or political decision to use specific technologies in certain places and for certain processes.

This can be seen in the ‘Post Office Scandal’, in which hundreds of former sub-postmasters were prosecuted on the basis of data from a computer system called Horizon. Between 2000 and 2014, postmasters were convicted of fraud, false accounting, and theft, and many were imprisoned; the convictions brought the long-term impacts of a criminal record, financial ruin, and social exclusion. The Horizon system in question was developed by the Japanese company Fujitsu and is used for transactions, accounting, and stocktaking. Some twenty years later, the accused postmasters won a legal battle to have their cases reconsidered after arguing that the computer system was flawed. In 2019, the Post Office settled with 555 claimants and accepted that it had got things wrong, after the High Court found that the Horizon system was not ‘remotely robust’ and contained bugs, errors, and defects.

This scandal is a classic case of officials putting blind faith in computer systems as definitive sources of credible information and truth. One may call this dangerous phenomenon automation bias. Instead of simply outsourcing IT systems, governments must take responsibility for the digital systems at the heart of their structures and refrain from scapegoating IT firms and their products when things go wrong.

Many of these flawed systems linger on in government, leading to an accountability vacuum in which whole sections of government administration exist for which nobody feels responsible. Without accountability and transparency, government IT systems can lead to a multitude of harms.

AI actions may be unpredictable, but they are not random: they are based on reasons responsive to the internal states of the system and grounded in datasets chosen by humans. Unpredictable behavior is not a claim to full autonomy. This leads to a responsibility gap and a grey area in law, because AI (like a dog) cannot be held responsible, but in some cases neither can ‘its’ humans.

There are multiple reasons for this responsibility gap: a lack of control over AI action, the pace at which AI functions, which can leave no time for human intervention, the black box problem, and the large number of people involved in the design of the system.

The Multiplayer Model: Acknowledging Stakeholders who are Partially, Indirectly or Temporarily Involved in the Invention Process of AI

The Multiplayer Model describes the many participants and stakeholders, sometimes overlapping and sometimes independent, involved in the process of creating AI systems. The law's traditional effort to identify a single responsible agent does not apply to this multi-layered creation of AI or to the actions such systems take.

There are at least ten entities, among many possible stakeholders, who are partially, indirectly, or temporarily involved in the invention process. These categories of stakeholders can overlap or remain separate and distinct. They include, but are not limited to: The Software Programmers, The Data Suppliers, The Trainers/Feedback Suppliers, The Owners of the AI Systems, The Operators of the Systems, The New Employers of Other Players, The Public, The Government, The Investor and, finally, the AI system itself.

If any of the ten players listed above can claim ownership over the AI’s invention, then the question of how to identify the actual inventor, the one held responsible for both the risks and rewards of AI, must be addressed. Many may raise their hands for the rewards and lower them at the risks. AI teams could take responsibility for the rewards of robotics and redirect the risks; but how do we acknowledge blameworthiness and place responsibility within a collective?

There are concerns about the value of ascribing collective responsibility in practice. Groups do not meet the same stringent conditions of moral responsibility that individuals do, and distributing responsibility across a collective allows for lesser consequences all round than placing full responsibility on a single agent.

Many of the players may have a contractual obligation to assign the invention to the company, but who is actually responsible for the actions of AI remains unresolved. As seen in the multiplayer model of AI teams, there are several loci of responsibility, the programmer, the machine, the contractor, and more, none of which is satisfactory. If none of the players qualifies as an inventor according to the current legal definition, it becomes hard to decipher what other entity should be held accountable for AI actions.

Distributed Responsibility in Law: The Lack of Structure in Law for Collective Responsibility

The currently elusive definition of responsibility for AI actions creates an ambiguous chain novel in law. The lack of a clear definition means that the existing concept of responsibility, and in turn accountability, is inadequate for addressing new, and possibly harmful, artificial action.

The current legal approach of holding one person responsible does not adequately hold the many parties involved in the development of AI accountable for their contributions. Policymakers need to rethink and adjust the current laws governing AI systems. The traditional structure must be replaced or repurposed with tools that allow for growth in the new era of AI teams functioning as a multiplayer model.

To mitigate distributed responsibility in teams, policymakers can review the implementation of transparent AI systems and teams. Transparency is key to a new approach to law for collective responsibility in tech teams, because assessing whether a decision-making system is just or fair, and holding it accountable, often requires being able to understand what the system is doing and how it is doing it. Transparency is also a key ingredient in justice, accountability, and morality. Satisfying transparency requirements in AI development does not necessarily require that all team members understand how every decision is made; in some contexts, such a high level of insight can be harmful and can compromise system values, efficiency, and accuracy. The proper level of required transparency in AI teams could help mitigate issues of fairness, discrimination, and trust when deciding who is responsible for AI actions.

We must create a new approach that integrates the unique and complex structure of tech teams into governing bodies and policy. AI has enormous potential to catalyze economic value and social progress. However, it is unacceptable to use these technologies in situations where their actions cannot be properly accounted for. We must concretely articulate a way in which the multiple stakeholders involved in AI creation and action can be held responsible for the risks of the system, not just rewards.
