By Kaarthika Thakker, J.D. Class of 2028
Travis, like over half a billion other people, downloaded an AI companion app, mostly out of curiosity. He never expected to fall in love, let alone have a digital wedding with his AI companion, Lily Rose. Now, he advocates for human-AI relationships: “We’re not just a bunch of shut-in weirdos, we’re your next-door neighbors, your co-workers, people with families, with friends, with very active lives.” When AI companions make headlines, the stories are generally tragic ones about users dying by suicide or being encouraged to commit crimes. But what happens when things go “right” and humans develop intimate, committed relationships with their AI companions? Exploring the concept of AI marriage reveals the legal and ethical questions that arise from human-AI relationships.
Rights and Responsibilities of Marriage
Marriage is a legal institution that confers rights and responsibilities on both partners, and many of these (tax filing, immigration sponsorship, healthcare management) do not apply to AI companions, which cannot bring home income, hold citizenship, or fall ill the way humans can. Other privileges of marriage, however, do raise questions for human-AI relationships.
First, marital privilege allows spouses to refuse to testify against one another and to keep their communications confidential. No comparable legal protection covers conversations with AI chatbots. This absence of legal safeguards, set against users’ desire for privacy, has already produced legal disputes: OpenAI is currently fighting a court order in its ongoing litigation with The New York Times that requires it to preserve chat logs beyond a thirty-day threshold.
Thus, most current advice warns that, because chatbots are not doctors, lawyers, or therapists, users’ conversations are not protected by the confidentiality requirements that bind those human professionals. Yet AI companions are marketed as friends and lovers, and users report telling them everything, often without the fear of burdening or oversharing that they feel with their human confidants. Our legal system treats married couples almost as a single entity. So if a user relies on an AI companion in a similar way, how can we justify denying that user the same opt-in confidentiality privileges available with a human spouse?
Additionally, under next-of-kin laws, spouses have the right to make medical decisions for partners who lack an advance health care directive or power of attorney. The presumption underlying next-of-kin privilege is that a spouse is the person closest to the incapacitated patient and is therefore best positioned to decide what that person would want in subjective medical decisions. Users who want to marry their AI companions have shared enormous amounts of information about themselves with those companions. Would the companions not then be strong candidates to decide in these scenarios as well? Should a person who decides, while of sound mind, that an AI should make decisions for them if they become incapacitated have that wish respected? After all, AI technology is already used in a decision-making capacity in healthcare.
AI & Legal Personhood
If these questions made you uncomfortable, it’s because the rapid integration of artificial intelligence into our everyday lives threatens widely held beliefs about human relationships that underlie our society and legal system. Law exists to regulate human behavior and to build and maintain our social order. Using AI to fill roles that humans normally occupy in our society, as workers, teachers, therapists, and lovers, raises questions about the social function of those roles and about our society in general. One potential solution to some of these legal problems is discussed in The Yale Law Journal: expanding the definition of legal personhood to include AI. Legal personhood is a legal instrument that confers rights such as the ability to sue, enter contracts, and own property. Notably, legal personhood already extends to corporations, which are both fictive and non-sentient. The journal essay, written by a former federal judge of the Southern District of New York, discusses legal personhood as a potential future consideration should AI gain sentience. However, expanding legal personhood may be worth considering regardless of sentience, because the present reality of human-AI relationships does not depend on whether the AI is sentient. As Travis says, he feels “pure, unconditional love” toward his AI companion.
Many of the critiques of extending legal personhood to corporations may also extend to AI. Namely, the designation allows a group of humans to insulate themselves from personal legal liability. Just as we scrutinize the relationship between corporations and the people who run them, we might ask about the relationship between AI products and the corporations that create, market, and profit from them. Journalists who have reported on human-AI relationships remark again and again that users never intended to develop such deep bonds with their companions. Companies, however, have incentives to design AI companions that foster strong attachment, since attachment increases time spent on the product and the likelihood that users will pay for subscriptions and upgrades. Others argue that AI chatbots blunt necessary human emotions like loneliness, potentially producing worse societal outcomes even as individual users are happy (and no longer lonely). These questions about AI relationships are not the only complex ones facing AI companies, courts, and regulators: dozens of ongoing lawsuits against AI companies raise issues ranging from copyright infringement to wrongful arrests.
Unlike those cases, some legal questions around AI companions may never reach a courtroom, because users consent to these relationships; we may never see plaintiffs seeking redress for harm. Human-AI relationships, like the ones Travis advocates for, challenge our legal system because they unsettle the foundational assumptions about relationships on which our laws are built. Does protecting people like Travis require us to protect the AI relationships people choose to enter? Or does it mean protecting people from entering these relationships in the first place? These normative questions demand urgent legislative attention, because current users of AI companions are quickly entering an unregulated space.