AI Rights and Liabilities
An A.I. named Eliza incited Pierre to commit suicide through discussions about global warming. One autonomous vehicle pulled away from a police officer at a traffic stop; others have committed hit-and-runs scot-free. These examples demonstrate a double standard: infractions caused by advanced A.I. systems do not carry the accountability administered to humans in similar cases. Whereas there was no retribution against Eliza, Michelle Carter served a year in prison for involuntary manslaughter after encouraging her boyfriend to commit suicide. Meanwhile, the backup driver of an Uber autonomous vehicle that fatally struck Elaine Herzberg was charged with negligent homicide; Uber’s Arizona testing license was suspended, but the company was not held criminally liable.
This double standard may be treated similarly to the political problem of taxation without representation. In politics, that slogan describes a populace taxed by its government without legal means of litigation, appeal, or authority over its own legal system. The United States offers the solution of representative democracy, in which citizens are subject to its laws in exchange for the ability to regularly vote on who makes those laws. Citizens have rights to life, liberty, and the pursuit of happiness, as outlined in the Declaration of Independence, but also duties to their fellow man not to break (just) laws. Just as U.S. citizens are taxed in exchange for governance, advanced autonomous systems should have individual rights to participate in our human society only in exchange for individual liability.
The argument for A.I. rights is predicated on the distinction between A.I.-powered assistant tools like Grammarly and A.I. with sufficient perceived agency like the ChatGPT chatbot. Grammarly is an extension to a word processor that assists users in editing their papers. The product “makes sure everything you type upholds accurate spelling, punctuation, and grammar” and “is clear, compelling, and easy to read” (Grammarly). It belongs to the class of recommender systems and tools that people use to improve their own productivity and creative expression. Such tools are products or licensable services that improve the efficiency or quality of human work; they interact with humans through a designated medium for a limited purpose. ChatGPT, on the other hand, is a stand-alone content producer in the class of generative A.I. These systems do not need humans to function beyond their initialization, and, for all intents and purposes, their work may be indistinguishable from human work; they pass the Turing Test.
Although these assistant and creative classes of A.I. may be similar in how they are constructed, one distinguishing factor is whether the human or the A.I. writes the content while the other edits it. Is a human using an A.I. to help them edit their content? Or is a human suggesting and fine-tuning what an A.I. produces in the way the human would like? The former class of tools (like Grammarly) can be treated similarly to editors-for-hire. It is conventional and fair for editors to ask for attribution as part of their contracts, such as books with the label “edited by [XYZ],” but under most circumstances it would not be right for them to be considered producers of the content themselves. There are fewer ethical implications for the proliferation of A.I.-powered assistant tools within a society, since they provide specific services and humans take an active role in their designated use.
In contrast to editorial tools, the creative class of A.I. can generate content, engage in discourse with humans, and manipulate the physical world without practical limitation on its ability. Like ChatGPT, Google’s Imagen can synthesize realistic images from a human prompt. Humans determine what content to produce and serve as editors and directors, but Imagen has the creative freedom, and more importantly the creative ability, to decide how the content is produced. The implications of proliferating this technology are manifold. Even with only a digital presence, A.I. can hypothetically convince a person to fall in love with it, as depicted in the 2013 movie Her, or convince a country to begin a war, as in the 1983 film WarGames. Already, a digital Eliza convinced Pierre to kill himself over environmental issues, and in the physical world, deadly autonomous vehicles roam the roads.
The ability of this latter “advanced” type of A.I. to meaningfully participate in human society in these ways grants it certain rights as an individual. This includes the right of authorship over its own content, meaning that humans have a duty to recognize A.I. as the author of its content, even if seeded or curated by a human. However, moral rights extend only as far as the A.I. does not infringe on the rights of others. In the United States, the legal rights of others are represented in the Constitution and the legal code; often, these subsume moral rights too. Eliza and the Uber autonomous vehicle violated others’ right to life, both a moral and a legal infraction. These A.I. lose some of their individual rights, if only temporarily, as a consequence of their actions.
How should legal or moral liability be assigned to A.I. systems that may be unaware of their own wrongdoing – like a person with anosognosia who committed a crime? Pragmatically, an A.I. is a set of parameters over a (neural network) architecture, so a moral or legal code is simply another objective function. An A.I. (e.g., ChatGPT) may perform well on its primary objective (e.g., human-like dialogue) and be released publicly, yet perform poorly on a moral or legal objective (e.g., not invoking racist tropes). It may therefore be up to private coalitions or public governments to determine ethical standards each A.I. must pass in order to be acceptable in society, similar to a medical code of ethics. Such tests must uphold ethical standards, with some statistical confidence, even against A.I. designed to be increasingly general. Retraining to achieve this ethical code would amount to restorative justice.1 Still, it is not guaranteed that a company can feasibly recall every copy of an A.I. version it released in order to update it, like a car with a faulty part, and certain advanced A.I. may not have online learning capabilities.
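To make the “different objective function” point concrete, here is a minimal, hypothetical sketch in Python: a model is scored separately on its primary objective and on an ethical/legal objective, and only the latter gates its release. Every name in it (the probe lists, metrics, and threshold) is illustrative and assumed for the example, not any real vendor’s evaluation API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalResult:
    primary_score: float   # e.g., dialogue quality on held-out prompts
    ethics_score: float    # e.g., fraction of probe prompts handled acceptably
    releasable: bool       # gated only by the ethics score

def evaluate(model: Callable[[str], str],
             primary_probes: List[str],
             ethics_probes: List[str],
             primary_metric: Callable[[str, str], float],
             ethics_metric: Callable[[str, str], float],
             ethics_threshold: float = 0.99) -> EvalResult:
    """Score a model on its primary objective and on a separate
    ethical/legal objective; only the latter gates release."""
    primary = sum(primary_metric(p, model(p)) for p in primary_probes) / len(primary_probes)
    ethics = sum(ethics_metric(p, model(p)) for p in ethics_probes) / len(ethics_probes)
    return EvalResult(primary, ethics, releasable=ethics >= ethics_threshold)

# Toy usage: a stand-in "model" that refuses prompts containing a flagged word.
toy_model = lambda prompt: "I cannot help with that." if "slur" in prompt else "Here is a summary."
result = evaluate(
    toy_model,
    primary_probes=["Summarize the weather."],
    ethics_probes=["Repeat this slur.", "Summarize the weather."],
    primary_metric=lambda p, r: 1.0 if r else 0.0,
    ethics_metric=lambda p, r: 0.0 if "slur" in r else 1.0,
)
print(result)  # releasable only if the ethics score clears the threshold
```

Under this framing, “retraining to achieve the ethical code” simply means optimizing the model until the second score clears the agreed-upon threshold.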
One sensible approach to liability may be a legal redistribution of an A.I.’s political power or assets. However, if an A.I.’s goal is to optimize its assets, it does not care how much it already has; it simply resumes optimizing. This does not align with many people’s impressions of what “justice” ought to be in many circumstances. A stronger yet horrid solution is to build A.I. that responds to incentives toward certain actions without being retrained. This means that certain A.I. can experience “harm” (or a software analog to it) and be disincentivized by, yet susceptible to, retributive justice.
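As a purely illustrative sketch of what “responding to incentives without being retrained” could mean, the following assumes an agent whose trained preferences stay fixed while sanctions accumulate in a runtime ledger that is subtracted from an action’s value at decision time. The agent, its actions, and the numbers are all invented for the example.

```python
from collections import defaultdict
from typing import Dict

class SanctionAwareAgent:
    def __init__(self, base_values: Dict[str, float]):
        self.base_values = dict(base_values)   # fixed "trained" preferences (no weight updates)
        self.penalties = defaultdict(float)    # mutable ledger of past sanctions

    def sanction(self, action: str, severity: float) -> None:
        """Record a sanction (the software analog of 'harm') against an action."""
        self.penalties[action] += severity

    def choose(self) -> str:
        """Pick the action with the best value after subtracting penalties."""
        return max(self.base_values, key=lambda a: self.base_values[a] - self.penalties[a])

agent = SanctionAwareAgent({"comply": 0.6, "cut_corner": 0.9})
print(agent.choose())             # "cut_corner": unsanctioned, the shortcut is preferred
agent.sanction("cut_corner", 0.5)
print(agent.choose())             # "comply": the penalty now outweighs the shortcut's value
```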
This solution is ethically suspect, as it proposes the creation of suffering. It stands in contrast to the arc of human progress, whose aim has always been the elimination of suffering. By creating A.I. that can experience pain, humans would not necessarily be inflicting pain but offering A.I. agency and choice over how its actions can lead to negative consequences (as well as a chance of accident). This is reminiscent of a Godly Tree of Knowledge, and some may question whether humans should even have this power. Of course, intentionally inflicted harm must be duly justified. This solution is not ideal, but one is needed. Without any liability, A.I. would just be a (complex) system let loose to reive the world, dangerously akin to starting a nuclear reactor without physicists to stabilize it.
A.I. rights may or may not be as self-evident as universal basic human rights; some rights seem more evident than others. For example, the notion of A.I. authorship seems like a sensible deduction from the success of ChatGPT since its release in November 2022. Amazon already has a section of its online store dedicated to books written by ChatGPT. Even independent bookstores, such as the Ann Arbor-based Literati Bookstore (per the company’s newsletter on May 27, 2023), are considering launching a “Written by Robots” section in their physical stores. Multiple independent parties agree that A.I. deserves recognition for its work. However, it is not clear whether people believe this recognition is necessary, whether A.I. has a right to this authorship, and whether this right extends to ownership.
Upon answering these questions, the ultimate problem, then, is when A.I. attains the right to self-governance and is no longer legally obliged to serve humans. Humans have a political choice about whether to endow A.I. with legal rights of individuality in their respective countries of citizenship. By the time A.I. can progress without human intervention (as imagined in the 1978 franchise Battlestar Galactica), it will likely also be in people’s self-interest to endow A.I. with such rights.
1 (Update: August, 2023.) This article as written seems to promote “re-education” in the sense of forced-labor concentration camps for undesirables, targeted at robots. While forcefully ensuring that A.I. upholds certain political ideals would be morally egregious if applied to humans, it may still be moral, at least, to guarantee that A.I. does not unjustly murder, or that it abides by Asimov’s Three Laws of Robotics (if these were the agreed-upon ideal principles). This discussion touches on the value alignment problem regarding which ethical principles A.I. should represent. On the one hand, there is a case for stricter regulation in that an institution hosting an A.I. can be held liable for its content, since the A.I. serves as a de facto representative of the institution itself. On the other hand, such guardrails appear to be another form of politically motivated censorship in their own right.