Cristina Pullen, J.D., Class of 2028
Artificial intelligence (AI) is quickly being integrated into every major industry in the United States. From chatbots to business automation to predictive tools, corporations and government entities want to embrace this technology as a way to boost efficiency like never before. Thus, it’s no surprise that American correctional facilities, along with the rest of the country, are also exploring ways to adopt AI to address current challenges.
Alabama, Georgia, New York, and other states commonly use speech recognition software to record, transcribe, and scan inmate calls. In Virginia, judges have harnessed AI to “score” reoffending probability in over 50,000 convictions and sentence accordingly. In response to understaffing, one county jail in Georgia is piloting six-foot-tall robots to patrol the jail floors at night.
According to Oklahoma Department of Corrections (ODOC) Executive Director Steven Harpe, “in corrections, AI tools will help us enhance security, streamline decisions, and make real-time data-driven decisions to ensure that we transform lives in a safe environment.” Ideally, AI would solve many of our administrative and resourcing problems in prisons, and even make greater rehabilitation efforts possible. In reality, however, too much is unknown about AI to guarantee productive and fair use in our correctional facilities.
Today, two areas raise particular concern: privacy in AI call monitoring and prediction bias in recidivism algorithms.
Privacy Concerns in AI Call Monitoring
Many correctional facilities partner with third-party AI companies such as LeoTech and Securus to record, transcribe, and flag inmate calls and conversations. Inmate call monitoring is a standard practice, and any non-confidential transcription can be introduced as evidence at trial. Protecting the confidentiality of attorney-client communications is therefore of the utmost importance to preparing an adequate defense. However, AI alone cannot determine whether a conversation is privileged without human input, and overreliance on the technology can lead to serious missteps.
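To illustrate the failure mode, consider a minimal, hypothetical sketch of such a pipeline. The structure, phone numbers, and watch terms below are assumptions made for illustration; they are not LeoTech's or Securus's actual design.

```python
# Hypothetical sketch of a call-monitoring pipeline. The only privilege
# check is a number lookup: if an attorney's number is missing or out of
# date, the privileged call is recorded and scanned like any other, with
# no human review in the loop.

PRIVILEGED_NUMBERS = {"555-0100"}          # facility-maintained attorney list (assumed)
FLAG_TERMS = {"contraband", "escape"}      # example watch terms (assumed)

def process_call(dialed_number: str, transcript: str) -> dict:
    if dialed_number in PRIVILEGED_NUMBERS:
        return {"recorded": False, "flags": []}
    flags = sorted(t for t in FLAG_TERMS if t in transcript.lower())
    return {"recorded": True, "flags": flags}

# An attorney's new office line is not on the list, so the call is
# recorded and its transcript retained, despite being privileged.
print(process_call("555-0199", "Let's discuss your defense strategy."))
```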
Recently, Securus faced multiple lawsuits for recording attorney-client privileged calls. Defense attorneys discovered the confidential recordings after reviewing prosecution-provided documents.
In addition to the difficulty of “catching” every privileged recording, it’s unclear how long, and where, these companies keep this call data. Hackers and looming distrust of phone calls can further exacerbate threats to a person’s Sixth Amendment right to “competent and effective legal counsel,” and to privacy overall. For example, if an inmate is afraid to be honest with their attorney over the phone and their attorney is hard to reach otherwise, the inmate is unlikely to receive the informed guidance needed to move forward.
AI can also introduce inaccurate and misleading transcriptions. A recent study of one of the most advanced AI transcription tools, OpenAI’s Whisper, found that the tool hallucinates in about 1% of transcriptions. While 1% seems small, close to 2 million people are incarcerated in our criminal legal systems. If all 2 million made a phone call tomorrow, roughly 20,000 of those calls would include a hallucination. And according to the Whisper study, when the transcription is wrong, it’s really wrong: “38% of hallucinations include explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.” Further studies measuring the accuracy of tools from companies such as LeoTech and Securus are needed to safeguard defendants against unfair trials in these circumstances.
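For scale, the arithmetic behind these figures works out as in the short calculation below, using only the numbers cited above:

```python
# Back-of-the-envelope estimate using the figures cited above
# (illustrative only; rates come from the Whisper study).

incarcerated_population = 2_000_000   # ~2 million people incarcerated
hallucination_rate = 0.01             # ~1% of transcriptions hallucinate
harmful_share = 0.38                  # ~38% of hallucinations include explicit harms

hallucinated_calls = incarcerated_population * hallucination_rate
harmful_calls = hallucinated_calls * harmful_share

print(f"Hallucinated transcriptions: {hallucinated_calls:,.0f}")  # 20,000
print(f"Of those, explicitly harmful: {harmful_calls:,.0f}")      # 7,600
```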
Prediction Bias in Recidivism Algorithms
The Fourteenth Amendment’s Equal Protection Clause guarantees incarcerated persons a fundamental right to equal treatment: correctional facility staff are prohibited from discriminating against the incarcerated on the basis of race, religion, sex, or national origin. Thus, when biased AI tools reinforce systemic injustice, AI threatens constitutional rights.
One questionable tool that many states employ is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). Judges in New York, Pennsylvania, Wisconsin, California, and Florida use COMPAS to predict an offender’s likelihood of recidivism, which can affect a past offender’s chance of parole. Much of this predictive technology remains unchecked, and empirical studies have shown that these kinds of AI tools unfairly weigh race when making predictions.
Even when companies manually remove race as a data point, there are many additional factors “hidden in the design of the algorithm” that often indirectly lead to discrimination.
Proxy variables, such as ZIP codes and income levels, are deeply correlated with the consequences of historical segregation and racial wealth gaps. They allow an algorithm to infer race and reproduce systemic bias even when the explicit “race” feature is excluded. Because these proxies introduce predictive errors in similar patterns, and because their presence is often obscured, the discriminatory outcomes they cause can be difficult to detect, or to truly understand, until it’s too late.
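To see how a “race-blind” model can still discriminate, consider a minimal sketch on entirely synthetic data. The population size, segregation rate, and biased label rates below are illustrative assumptions, not measurements of COMPAS or any real system:

```python
# Simplified illustration of proxy bias: a risk model that never sees
# race can still reproduce racial disparities when it trains on
# race-correlated features like ZIP code. All data here is synthetic.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population: due to historical segregation, group membership
# strongly predicts which ZIP code a person lives in.
group = rng.integers(0, 2, n)               # protected attribute (never given to the model)
segregated = rng.random(n) < 0.8            # 80% live in group-typical ZIP codes
zip_code = np.where(segregated, group, 1 - group)

# Historical labels reflect biased enforcement, not true behavior:
# group 1 was policed more heavily, so recorded "recidivism" is higher.
recorded_recidivism = rng.random(n) < (0.30 + 0.15 * group)

# "Race-blind" model: predict risk from ZIP code alone.
risk_by_zip = {z: recorded_recidivism[zip_code == z].mean() for z in (0, 1)}
predicted_risk = np.array([risk_by_zip[z] for z in zip_code])

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {predicted_risk[group == g].mean():.3f}")
# The model assigns systematically higher risk to group 1 despite never
# receiving race as an input: ZIP code acted as a proxy for it.
```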
How can we rely on AI for advice on how to further justice when that advice is based on decisions that, at best, we do not understand and, at worst, are rooted in discrimination?
Without further research, it’s possible for tools like COMPAS to cause irreparable harm to many by unduly influencing harsher sentences, even in the hands of a well-intentioned judge.
An Opportunity for Ethical AI Use in Corrections
Despite these challenges, AI is not all bad news for corrections. There are opportunities to use AI to improve justice for inmates and support for facility staff.
Therapy bots and wearable technology show promise for improving the mental health of incarcerated individuals by providing continuous support and progress tracking. Similarly, surveillance systems can increase safety within prison walls by keeping all parties, both inmates and staff, accountable and by providing timely intervention through smart detection tools. And increasing administrative efficiency with AI is likely to benefit everyone in the prison system without significant risk to individual rights.
Nonetheless, all AI tools should continue to be tested, researched, and deployed with the utmost care, especially in settings as sensitive to risk as correctional facilities.