[Gayathri Sindhu] 0:07
Hey everyone, welcome to yet another exciting episode of the Berkeley Technology Law Journal Podcast. I’m your host, Gayathri Sindhu. Today, we are privileged to have with us Professor Colleen Chien, an absolute powerhouse. And I would go so far as to call her a triple threat, because not only is she known for being a leading authority in intellectual property and technology law, but also for her impactful work involving AI and the criminal justice system. Professor Chien earned a BA and BS from Stanford University, and even went on to work at NASA’s Jet Propulsion Laboratory as an undergraduate. During the past decade, she has worked in the Obama White House as a senior adviser on intellectual property and innovation, and more recently, as a senior counselor to the Department of Commerce and Marian Croak Distinguished Scholar at the United States Patent and Trademark Office. She is currently a professor at our very own Berkeley Law. In this episode, we will take a deep dive into the evolving realm of AI law, a field where Professor Chien has been at the forefront, shaping discussions and educating future legal minds at Berkeley. We will explore the content of her new course, the Law and Governance of AI, and discuss what makes AI governance so crucial in our current legal landscape. Professor Chien will also share her insights from her recent study on the use of generative AI in legal aid, highlighting how these tools are increasing productivity and transforming access to justice, which I hope will be a welcome respite from all the AI doomsday theories we’ve all inevitably encountered. We will also delve into the broader societal impacts of AI, examining both the opportunities and challenges it presents. Join us as we navigate these complex topics, gaining expert insights from Professor Chien on how the legal profession and policymakers can work together to ensure that the benefits of AI are equitably distributed while mitigating risks. This is a conversation that anyone interested in the future of law and technology won’t want to miss.
[Gayathri Sindhu] 2:52
Professor, thank you so much for joining us today.
[Colleen Chien] 2:56
It’s my pleasure. Thanks for having me.
[Gayathri Sindhu] 2:59
No, it’s really our pleasure, because you’ve been such a badass in the tech law space. So we have some fun and some hard questions for you to answer.
[Colleen Chien] 3:12
Great! Looking forward to it.
[Gayathri Sindhu] 3:14
Great. So, Professor, you have played many important roles in the government, including being a member of the Biden-Harris transition team and serving as a senior adviser on IP issues in the Obama White House. So having fulfilled these roles, did you have different experiences or challenges while working under different administrations or during the transition?
[Colleen Chien] 3:41
Sure. So the first thing I’ll say is, it’s been a huge privilege to be able to work in public service under these Democratic administrations. And the roles I played were pretty different, each one. And I’m actually currently still with the Biden administration. So I’ll just walk through them a little bit, and if there’s more you want to hear about any one of them, I’m happy to answer questions. But in the first case, in the Obama administration, I worked in the White House as a senior advisor of intellectual property and innovation in a specific office. A lot of times we just think about working for "the government." But it’s important to remember that the government, in particular the executive branch, is actually organized as the White House and the political center, and then the agencies themselves. So in this case, I was in the White House. And the White House itself has a lot of different parts, all of which might have different equities, as they call them, when it comes to science and technology policy. But I was specifically in the Office of Science and Technology Policy (OSTP), in the Office of the Chief Technology Officer (CTO). So we were basically a number of lawyers in a large organization that was mostly technologists. And my portfolio was pretty broad; it covered not only intellectual property, but also innovation. So that meant playing point on a broad range of topics, ranging from things like scientific cooperation across countries, to the IP aspects of diplomatic climate policy. I was very excited to be able to travel with John Kerry, I think he might have been the Secretary at that point, when he and his team went on a large delegation to China to talk about the IP aspects of climate policy. The portfolio also covered things like open educational resources and patent assertion entities. So it was a broad portfolio, and it was in the White House. And I think just being closer to sort of the center of the political part of the administration means that you have to also react to what’s happening in terms of current news developments, or what the President might be interested in at a particular time. Now, when I was tapped for the transition team for the Biden administration, the transition itself is a very specific task. The way I’ve described it is, it’s sort of like you have one family moving out and another family moving into the White House, essentially. And in that regard, one of the big things that you want to be able to do on day one of the new administration is hit the ground running and know where all the different parts of the house are, what’s happening in them, how are people doing, and what kinds of opportunities there are to advance the political agenda within the agency. So when you’re doing a transition team task, in this case, I was on the Department of Commerce team, because the Patent Office sits within the Department of Commerce, and so I had primary responsibility for the Patent Office. And that meant working with the outgoing political appointees, as well as the rank-and-file agency bureaucrats, who are all excellent, and trying to figure out where we were with respect to the different issues, and where there might be opportunities to, very early in the administration, really advance the administration’s priorities. So that was a very specific and discrete task.
And it lasted, basically, from the time that President Biden was declared the president-elect to the time he took office. And then after that, I was asked to stay on and help with the transition before the current Patent Office Director came in; they just sort of needed some expertise. And so I was privileged to be able to work for the Department of Commerce. As soon as the new Patent Office Director, Under Secretary of Commerce Kathi Vidal, became the Director, I also started advising her. So it’s been a number of different roles, and I’ve really enjoyed all of them. I think the current Director is incredible; it’s a real privilege to be able to provide input whenever she asks for it. And it is very different, I think, being in an agency where you have staff (not me personally, but the agency itself has over 10,000 employees), where you have a very defined set of constituents, and where you have a mission that’s been given to you by Congress, as opposed to being in the White House, where you’re more of a coordinator, looking across the different agencies and trying to develop policy based on the different issues that arise. So I would say they’re pretty different roles. But again, it’s been a real pleasure to be able to work in both capacities.
[Gayathri Sindhu] 8:28
That sounds amazing and interesting. And it also sounds like you have had some experience juggling different things and transitioning through different phases in life, just as when you previously held a position at NASA’s Jet Propulsion Laboratory and then shifted to law. That was a fascinating switch. So could you tell us some more about what prompted that shift?
[Colleen Chien] 9:01
Sure. And, you know, when I was working at NASA, it was when I was a student. We grew up in the Pasadena-La Cañada area, so NASA was sort of in my backyard. After I started college, it was something natural to do to apply for a job there, and I was fortunate enough to get it. But I think that the interest in both working on technology at NASA and also doing the law was actually quite consistent. It was sort of this career interest I’ve had, since I went to college, of leveraging science and technology to, as idealistic as it might sound, make the world a better place and reach its potential to improve the human condition. So initially, I thought the best way to do that would be to work on technology itself. And so in college, I focused on energy systems engineering and clean tech. And when I was working at NASA, it was because I was able to work on solar satellites and solar cell technology. But at some point, working as a technologist during those summer internships, I realized that it wasn’t enough to make good technology that would actually have the potential to be helpful. You had to create the right economic and policy conditions for those technologies to be taken up. And so I felt that a better use of my time would be to leverage my technical knowledge, but use it really to inform and create good policies that would both incentivize and encourage new innovation to be developed, and also help those new innovations be adopted. So that transition sounds fairly straightforward, but it wasn’t. I did a lot of different things. I came in as an undergrad, I worked in engineering, but then I did a Fulbright scholarship and worked as a journalist. I worked in consulting for a couple years in business. And I found my way back to law school. I was always interested in writing, and policy became, for me, the thing that made sense in terms of thinking about how to move the technological discussion as well as the business incentives; policy was one way to really think about how to move all those levers together in the right directions.
[Gayathri Sindhu] 11:13
That sounds quite eventful, your whole career trajectory. But is that why you’re now interested in AI? Because you’re so interested in keeping policy current with socioeconomic conditions? Is that what prompted your current interest in AI and AI governance?
[Colleen Chien] 11:34
Well, I would say again that, when I describe my path, it wasn’t at all planned. And I don’t think it’s particularly linear, right? It was a lot of different things. So for any of you out there who are struggling to think about, well, what should I do next, and how is this all going to work together, I would just provide you the assurance that it will all work out. And as long as you’re following what you believe in and what you’re passionate about, it’s completely fine to have a very circuitous path like I did. Other law professors knew very early that they wanted to be a law professor and went that route. I didn’t at all; I did a lot of different things. And even when I was in law school, I never had the idea at all that I could become a law professor, much less one at my alma mater, especially working with these giants in the field. With respect to AI, I think it was actually interesting. The way that I got into it was that I had a former student of mine who was an entrepreneur. She was my research assistant when she was in law school, and she started to found companies when she left. And she said, I’d love to keep in touch, and can you advise me? But she also said, this is a really important development, this new technological development, this was maybe five or seven years ago, that’s going to change everything. It’s called artificial intelligence. And you need to teach it to your students. You need to make sure that they’re prepared when they leave law school to understand what this is about. So I basically followed my student’s advice, started to get up to speed on the topics, and then started to teach it. And I enjoyed teaching a lot at Santa Clara, where I initially was, or where I was until just recently. But it was also a challenge, because we didn’t really have any case law. There wasn’t statutory law, really. And it was trying to figure out how to fit this into the law school. But now, yes, I think AI is everywhere. It’s been incorporated into a lot of classes in different ways. But I’m fortunate to be able to have this particular space to really just focus on the developments we’re seeing in law and governance, and to try to get this set of students that I’m working with now ready to go into practice and make an impact in this area.
[Gayathri Sindhu] 13:39
Understood. And that brings us to your class on the law and governance of AI. So when you introduced this course, what were your aims, the educational aims and student objectives that you were really looking forward to achieving?
[Colleen Chien] 14:00
Right. So I mean, I think, as with other law professors, I want to provide the best educational experience to my students, and in this case, especially at Berkeley Law, to make sure that they are prepared and well positioned to make an impact in law, whether that be in practice, in chambers as clerks, or in the public sector, advancing the public interest. And I think AI is a topic that cuts across many different realms. In the class itself, we work on public law questions, right? When algorithms are deployed by the government, are they deployed in a way that is discriminatory or might be biased? We also focus on private law questions, like copyright: whose rights are implicated when you train these models on huge corpuses of information and text? And so I think the aim was really to take any student that was interested in AI, that thought it had an intersection with the main area of their practice, or that was looking for a new practice area, and to give them a good taste of the current set of legal and governance issues that are out there and get them familiar with them. And then also to teach them how to think critically about new technologies as they arise, how we can apply the frameworks we already know, and to feel empowered to apply them to these new contexts. So the class has a lot of lawyers in it like yourself, who have actually already practiced, like our LL.M. students, as well as law students. And it’s been fun just to hear the different perspectives people have brought in with respect to the different topics, and to teach everybody the basic set of tools we want them to have, right? Strong analytical skills, writing skills, skills of practice and of presentation, but applied to this new domain. I think another aim, though, is to ensure that students feel confident that they can take on a new technological area, that they can master it quickly, at least enough to know what questions to ask and what they do and don’t know, and to realize that they themselves have a lot of knowledge in the law that they can bring to bear. Because I think what’s happening now is that the market is moving so quickly, and the technology is moving so quickly, that the role of lawyers isn’t yet defined. But I think as lawyers come to understand their role in shaping the norms for these technologies, how they should and shouldn’t be developed and deployed, their voice and our role in the ecosystem will grow. So it’s really a hope to help educate this new generation of lawyers, who are going to help shape how AI ultimately impacts society.
[Gayathri Sindhu] 16:43
During the course of this class, were there any opinions or questions that really surprised you, or that were entirely new to you?
[Colleen Chien] 16:54
Oh, that’s a good question. I mean, I think that what we’ve actually done is ask people to really try to understand the perspectives of different stakeholders in the AI ecosystem. And so I have seen some students who, for example, might have come from a civil liberties background and be strong advocates of the public interest, have to put themselves in the position of having to represent, say, a tech company or a new entrant in the foundation model space. I think it’s been interesting just to see how flexible they can be in their thinking, and that they can think, this is maybe an issue I might resolve in a different way. And so it’s been fun to see the students really stretch themselves and try to understand the issue from different perspectives, whether it be the business side, or the technology side, which is distinct from the business side, or the regulator side, or the stakeholder side, sort of civil society. So that’s been fun to experiment with, and to hear how people have been able to stretch themselves.
[Gayathri Sindhu] 18:04
I love that answer. As a lawyer, pushing people out of their comfort zones is a perfectly wonderful scenario to witness, I would say.
[Colleen Chien] 18:14
Well, it seems important here, because as lawyers, we may be the one in the room saying, I’m not sure you should move forward with that new product or that new development scheme that you’re thinking about. Have you thought about this? Have you thought about that? So a lot of these are softer skills. They’re not about, let’s apply the law, which is very settled, and say that this is the legal outcome, right? It’s about managing risk. And it’s about being able to empathize with the other side and understand what pressures are on them to try to reach a conclusion. And I definitely benefited from having an AI practitioner, Karen Silverman, who’s terrific, and also a Berkeley grad. She’s actually been working closely with governments and companies to help advise them on their AI policies. And so I think having her be a contributor to the class has helped me understand that a lot of times the lawyers are not going to have case law or precedent to rely on. They’re going to have to use their own judgment. And they’re going to have to persuade the technologist, the CEO, and the regulator of a position. And so it’s important to really be able to tap into what the different forces and pressures on that person are, and how we can come up with a workable solution.
[Gayathri Sindhu] 19:35
That’s so true. I think that’s very important, especially in today’s uncertain atmosphere of where AI is going. But on that note, was there a topic that you are particularly interested in teaching? Or what was your favorite class or topic to teach?
[Colleen Chien] 19:57
Um, that’s a good question. I think there have been a lot of different areas. Certainly, copyright and AI has been really fun and very hot this year, in terms of all the lawsuits that have been brought against the various foundation model providers, and really being able to grapple with that and with what’s happening with the intellectual property system more generally. So as a person who is steeped in intellectual property, that has been a really fun area. But I think more generally, just seeing the emergence of actual law and engagement from the highest levels. That’s new, right? Because I’ve taught this class before, and there hasn’t really been any law; there haven’t been a lot of cases. And now we see the EU AI Act moving steadily towards enactment. So there’s actually something substantive to really sink our teeth into. The White House itself has also been quite engaged, putting out the executive order on safe, secure, and trustworthy AI. So that’s been very concrete, and something that we can track. I think just the news, whether it be the Taylor Swift deepfake situation or election concerns around the integrity of information that is coming out, it’s been really fun to be able to draw the connections between the class and what’s happening in the news, and to pull that in so students can really see that connection. What’s also been great, I think, is that we have such a wealth of students with deep experience. We’ve had some of our international students, for example, my Chinese LL.M. students, actually take Chinese decisions with respect to AI, translate them, and provide context in the class, to help us understand how other countries are approaching AI issues. So that’s been a really nice perspective that has been added in, and those who are more technologists in the class can speak to technology issues; that diversity has also enriched our discussion. So I think that has all been really enjoyable. And those topics, as well as thinking about the types of rights that consumers might have when they’re confronting AI decisions, rights that have been expressed through some of the regulatory agencies like the FTC, as well as the CCPA in California. There’s been, again, this emergence of a framework of public interests and consumer rights. And so that’s been also fun to teach, because I think students can really relate to that, right? Like if you get denied a job and automated algorithms were part of that, you kind of want to know why. What can I do better next time? What went into that? And so we can really have a good conversation around: well, what would you want to know? What would give you more trust in the process? What are the things that, if you were confronted with this, would make you feel you could rely upon the outcome?
[Gayathri Sindhu] 22:59
Well, those all sound like incredibly interesting topics to cover. I am definitely regretting not having attended your class. But another fun thing I heard about your class was that the students took a poll and decided that, of course, they would be upfront about AI use, which is what you’d prefer they do. But do you think students are always upfront about their AI use and disclose it promptly? Or do you think there are situations where you know that they are using it, but the student isn’t disclosing it? Or other ways in which you find out?
[Colleen Chien] 23:46
Yeah, that’s a great question. And I think it has been part of something I wanted my students to have, which is the ability to use AI in an ethical and also responsible way. And so I’ve said before that it’s not that AI is going to replace lawyers, but that lawyers who use AI to their advantage will replace those who don’t. And so my real goal, really for the long term, is for my students to understand how to incorporate this as a tool in their toolbox. At the same time, there are obviously ethical concerns with presenting work that’s not their own as their own without attribution, and violating the Honor Code. And there’s also a real concern about skills fade, or a lack of development of skills, because if you start relying on AI exclusively, or even primarily, you’re not going to develop the skills yourself to do the writing and the analysis. I know, for example, that my own direction-finding is terrible because I rely so much on Google Maps and various map-assistive technology. So this is a real concern when you’re starting out: if you don’t develop your own skills, you’re not going to have a good intuition for how to leverage the technology. So we decided as a class that the use of the technology was acceptable, except when students are actually in class and when they are presenting their work for grades on a final. And for written assignments, I just asked them to disclose what they had used the technology for. And if they were going to copy directly from the AI, they needed to attribute it, because it’s otherwise considered plagiarism. So on the first assignment I had them do, only one student disclosed her use of AI. On this last one, I’ve seen more people disclose it, but it’s still, I would say, modest disclosure, you know: I used it to correct my sentences and check my syntax. And I suspect that there’s probably more use going on than that. But my exam is going to be closed-computer, in the sense that people will not be able to use their computers for it. And so hopefully there won’t be any concerns there. But yeah, I do think it’s hard. There’s sort of the sense of a cyborg, right? We’re all becoming cyborgs as we start to use these technologies more and more as part of what we think is our own expression, because we’re sort of directing the AI. And so I do think it’s a bit challenging, a bit of a moving target. But hopefully my students feel safe to disclose how they’re using things. I’ve actually rewarded students who’ve disclosed their use, and I hope that they will also consider that if they overuse it, they’re really not getting the value of the very expensive education that they’ve been seeking, because they’re not learning as much. So I think ultimately, it’s up to the individual to take responsibility for their education and the ethical choices they’re making. But I’m trying to make it within a framework where I’m not turning a blind eye or putting my head in the sand saying don’t do it at all. I’m saying use it, but use it in a responsible way.
[Gayathri Sindhu] 27:00
The fact that you have such a positive outlook on it would really help. I think, if I were to attend your class, I would be hard pressed not to disclose it, because of how open you are about discussing it as well.
[Colleen Chien] 27:16
I was actually interested in turning the question back to you. Do you think that students are using AI? Have you seen it change over the course of the semester? And are they being truthful when they do or don’t disclose to their professors?
[Gayathri Sindhu] 27:30
I don’t think they’re being entirely truthful, including me. But as somewhat of a millennial, and not as much a Gen Z kid, I was very reluctant to use AI initially, especially because I used to think that I was so much smarter than this new thing. But that was also a very stupid thing to think. Now I am using it a lot more. But even then, I’m really shy about disclosing to what extent I use it. But having a professor who talks about this openly would essentially enable me to disclose it, I’m very sure. If I were in your class, I’m sure I would just let you know in what capacity I use AI, and try to find out what your views on it are: how much use is okay, how much use is not okay. Because as a new and shiny toy, I think AI use can be very tricky to navigate. So I would definitely benefit from a professor like you, I must say.
[Colleen Chien] 28:50
Well, I think the main thing from my perspective, when students don’t disclose, is that we also leave on the table an opportunity to learn from each other. So if a student says, hey, I used it in this way, it was really useful, and here’s how you could use it as well, then students will be able to pick that up and learn from it. So that’s the one situation in particular where I wish people would be more forthcoming, so they can learn from each other a bit more. But I think we’re all kind of figuring out how to use it and what the limits are. I’ve used it to create logos for our class; we just had a couple of simulations, and I used logos for that. I’ve used it in a number of cases for writing a difficult email, and that’s been really useful. And I use it in research quite a bit, with respect to coming up with code that might be useful for programming something or for data cleaning. Everyone just has to figure out which tasks it’s going to be useful and reliable for and which ones it isn’t. So I just want our Berkeley Law students to come out and be really ahead of the curve in using the technologies, and so I’m really encouraging people to do that. If they can be more efficient and save time, why not? And then spend more of the time thinking about their own arguments that they want to make in their writing or research projects. So I’m a little more in favor than others, perhaps.
[Gayathri Sindhu] 30:17
That is absolutely an amazing point of view to have. Because it sounds like you belong more to the faction of people who think that AI can help lawyers do their jobs better, as opposed to replacing them in their jobs. And I understand that you’ve done some research on these matters as well, so could you please share some insights from your recent research on the use of Gen AI tools in legal practice, especially how it can impact lawyer efficiency and access to justice, especially for underprivileged communities?
[Colleen Chien] 30:55
Sure. So this is a project that I took on in the fall with another Berkeley Law grad who was my roommate in law school. She’s now a partner at Munger, Tolles & Olson, and she’s a leader in the community; her name is Miriam Kim. And she chose to take her sabbatical at Berkeley Law, to come back and be in a place where she had learned so much of what has made her such an outstanding lawyer and a person of high impact in the legal profession. So she and I have thought a lot over the years, based on our own experiences at Berkeley Law doing clinical work and doing pro bono work. I think both of us have had an interest in advancing the public interest and advancing access to justice. So what we decided to do is, we noticed that even though the large law firms like hers and others have had privileged access to these tools for quite some time, we don’t really know what’s happening with respect to legal aid lawyers and those who may not have the financial means or the privileged access to these high-end technologies. And so there’s the potential for the access to justice gap to get even worse, because you have well-resourced lawyers who have access to technology against those who don’t; that could really grow the gap even more. So we decided to focus on legal aid lawyers, and we were actually approached by the State Bar of California, which said, "can you try to help us think about generative AI and what it can do in this space?" So we were able to partner with them, and also with technology companies. We got free licenses from OpenAI, as well as from Gavel, which is an automation platform, as well as CoCounsel from Casetext, which put out one of the first legal assistant technologies, now being rolled out more generally. So we were able to get these free technology licenses, give them to legal aid attorneys, and then do a pilot. We started out by surveying folks and asking, who is using this? What are your fears? I think the headlines in law and AI have mainly been about cases like the Mata case and the Cohen case, right? Where there were these hallucinated, faked citations. And so a lot of lawyers have said this technology is really not appropriate for use in law: law needs to be precise, it has to be accurate, we’re not talking about just making some funny cartoons or coming up with some images. We actually need to make sure this is accurate, and it’s just not there yet. And we thought even an imperfect technology can still be very useful. So we took a survey. We had 200 folks fill out a survey, and we learned some very interesting things through it. And then we separately recruited a number of those survey participants to participate in a trial, where they were given access to the tools for a trial period, and then we asked them what their experiences were like. We implemented this as a randomized controlled trial. Rather than just observing a correlation, where people who opt into a trial are obviously more tech-savvy and interested, so perhaps their enthusiasm and the fact that they selected into the pilot would explain any better outcomes, we were able to randomize and say, well, half of them are going to get a bit of a different treatment than the other half, and we’re going to see what the impact of that is. But that’s essentially how we structured it.
[Gayathri Sindhu] 34:33
So, at the end of the study, do you think that there are factions of society who would benefit significantly more from Gen AI tools, or do you think it’s sort of an equalizer and everybody can get a good chunk of use out of it?
[Colleen Chien] 34:58
Well, I think there were a couple of things that were surprising. One was, when we just took the organic survey, we just asked, well, who’s using this, and what are you using it for? Even though legal aid is predominantly women, about 75% of legal aid lawyers in California are women, what we found is that the technology was being predominantly taken up by men. So men were more comfortable, were already using the technology, and thought it was helpful for them. And so I would say that, organically, you can’t just put the technology out there and expect it to be an equalizer. There is a risk that historic patterns will replicate. And so you do need to be deliberate about how you ensure that everyone can benefit from the technology, and be thoughtful about how you roll it out. But once we gave the technology to folks, it was generally a very uniformly positive experience: 90% of participants thought that there was some level of productivity increase, and 75% signaled their intent to use these generative AI tools in the future. And we saw the gender gap that was present in the survey not replicated in the pilot. So basically, there was no gender gap, conditional upon people actually using the technology. Again, for the randomized controlled pilot, what we did is, for some subset, we gave them concierge services, we gave them assistance. And those folks did actually have markedly better outcomes than those who didn’t, which implies, from a causal perspective, that the concierge services made a difference. So yes, I think everyone can benefit. But there needs to be intentionality given to how we roll these tools out, and providing assistance can make a difference.
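[Editor’s note: To make the study design concrete, the following is a minimal, purely illustrative Python sketch of how a randomized pilot like the one Professor Chien describes can be analyzed. All names, numbers, and outcomes are synthetic assumptions, not the study’s actual code or data; the point is only that random assignment lets a simple difference in means be read causally.]

```python
# Illustrative sketch of a randomized controlled pilot analysis.
# Participants are randomly assigned to a "concierge support" arm or a
# control arm, and the average outcome (here, a synthetic self-reported
# productivity score) is compared across arms.
import random

random.seed(0)

participants = [f"attorney_{i}" for i in range(100)]
random.shuffle(participants)
treatment = set(participants[:50])   # received concierge services
control = set(participants[50:])     # received the tools only

def productivity_score(person: str) -> float:
    # Placeholder for a real survey outcome; synthetic numbers only.
    base = random.gauss(6.0, 1.5)
    return base + (1.0 if person in treatment else 0.0)

scores = {p: productivity_score(p) for p in participants}
treat_mean = sum(scores[p] for p in treatment) / len(treatment)
ctrl_mean = sum(scores[p] for p in control) / len(control)

# Because assignment was random, the difference in means estimates the
# causal effect of concierge support, rather than a correlation with
# participants' pre-existing tech-savviness or enthusiasm.
print(f"treatment mean: {treat_mean:.2f}")
print(f"control mean:   {ctrl_mean:.2f}")
print(f"estimated effect of concierge support: {treat_mean - ctrl_mean:.2f}")
```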
[Gayathri Sindhu] 36:43
Do you have any suggestions for how they might be rolled out so as to bridge this gap? For example, through policy or through active measures or campaigns? What do you think would be the most effective way of going forward with this?
[Colleen Chien] 37:02
I think there are two or three things to think about. One is ensuring, right now, that everyone can have equal access to the tools. The premium version of ChatGPT costs money; all these different tools cost money. And a lot of times legal aid attorneys, or smaller practices serving underserved clients, just don’t have those funds. So some equitable pricing scheme would definitely be useful, and not just for legal aid but for other applications as well, along with tiered access and some commitment to universal service. And I think the companies would say, well, we have our free version and our paid version. But especially for professionals, if they can increase their equitable access programs, that would be better, and that would ensure broader uptake. I think there’s also an opportunity for assistance to be provided, because it brings you up the curve, right? Using the technology takes time; it takes time to learn how to use it most efficiently. And I bet most of us are using it at maybe a 10% productivity level. But there are much bigger increases in productivity that can be achieved if you spend more time, right? There’s that expression: to save time, you need to waste time. To save years, you need to be able to waste days. That’s an expression in academia, right? If you allow yourself to really learn something, then you’re going to save a lot of time downstream, but you have to invest that initial upfront time. So to get over that curve, we’ve recommended that there be a national helpdesk for legal aid lawyers or pro bono lawyers or frontline agencies that might say, well, we don’t know how to adopt this, how can we get your help? Just like we have pro bono commitments where lawyers sit in the library and people can go in and get legal advice, if we had a national helpdesk, people could call in and say, how do I configure this? How do I set this up? Give me some advice on how to do these things. So those are some low-hanging fruit. On a broader level, we want to think about more systemic change, not just about augmenting legal aid or augmenting lawyers, but about fundamentally getting to the root causes of some of these issues. So for example, I have a project called Paper Prisons, and it’s really about looking at the difference between eligibility for and delivery of legal relief in the criminal justice system: how do we use algorithms and automation to help people get their expungements, right? One in three American adults has a criminal record. Those records tend to really foreclose economic opportunity, volunteering opportunities, and housing opportunities for individuals with these records, but they could expunge them. They don’t, because the process of applying for an expungement is very tedious, time consuming, and expensive. You need to get a lawyer. So one way to change this would be to say, let’s give those lawyers tools, make them more efficient, augment them and make them two times more efficient. But that would just be a drop in the bucket with respect to the vast number of people who could get this relief but aren’t getting it. So a better approach, from my perspective, is to think systematically about where the information is, and it’s mostly with the state. Can we actually change the way the state is processing information, and have it automatically determine who’s eligible for expungement and then apply that relief without individuals having to be involved?
That’s why I’m helping with the Clean Slate Initiative, which is trying to pass laws in different states to address this Second Chance Gap, or these Paper Prisons that I’ve written about. And so that’s a more systemic change, right? It’s not just about marginally increasing the efficiency of a lawyer; it’s going to the root cause. What is the real issue, the thing that needs to be done? And how do we use technology to get that done as quickly and as efficiently as possible? AI might be a big part of it, or it might be a smaller part of it. But I think focusing on the problem, rather than the solution, is ultimately how we’re going to close that justice gap.
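[Editor’s note: The following hypothetical Python sketch illustrates the kind of automated eligibility determination described above, where the state runs a rule over records it already holds rather than waiting for individuals to apply. The rule, field names, and records are invented for illustration and do not reflect any actual state’s expungement criteria.]

```python
# Hypothetical sketch: flag records eligible for automatic expungement
# by applying an invented rule to state-held record data.
from dataclasses import dataclass
from datetime import date

@dataclass
class CriminalRecord:
    person_id: str
    offense_level: str      # e.g., "misdemeanor" or "felony"
    conviction_date: date
    sentence_completed: bool

def eligible_for_auto_expungement(rec: CriminalRecord, today: date) -> bool:
    # Invented rule for illustration: misdemeanor, sentence complete,
    # and at least five years elapsed since conviction.
    years_elapsed = (today - rec.conviction_date).days / 365.25
    return (
        rec.offense_level == "misdemeanor"
        and rec.sentence_completed
        and years_elapsed >= 5
    )

# Synthetic records standing in for a state database.
records = [
    CriminalRecord("A123", "misdemeanor", date(2015, 3, 1), True),
    CriminalRecord("B456", "felony", date(2010, 6, 15), True),
]
candidates = [r.person_id for r in records
              if eligible_for_auto_expungement(r, date.today())]
print("auto-expungement candidates:", candidates)
```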
[Gayathri Sindhu] 41:09
I think uncertainty in regulatory mechanisms is what really turns down the effectiveness of these initiatives or makes people hesitant towards proposing such initiatives. So as a professor who teaches a class on the governance of AI, do you have any preferred methods or preferred practices that you would like to see in the regulation of AI?
[Colleen Chien] 41:37
Well, I think, again, that it is very domain specific; you have to think about what the problem is in each realm. So obviously, there is the regulation of new problems created by AI, and usually it’s not a completely new problem, but things like algorithmic disclosure or automated decision making. So clearly understanding what the new problems are that we need to solve is a whole set of legal questions and problems. But I would say that, where AI has a lot of potential, we want to think about regulating for positive impacts as well as trying to forestall the risks. So I would, again, think about what the essential problems are that we’re trying to solve as a society, and how we then create the right environment in which AI can flourish and not create additional risks. I think the challenge right now is that a lot of what we’re seeing in terms of regulatory momentum has been responsive. It says, oh, we’ve seen these deepfakes, we’ve seen these nonconsensual porn images proliferate, we’ve seen all these bad biases potentially come out, and so we need to regulate against them. But there’s also an opportunity to regulate for a good outcome. And I’m hoping that as we develop a more robust response to AI, we can think about these positive potentials as well.
[Gayathri Sindhu] 43:01
That makes sense. So Professor, as a woman in the tech space, there’s a question that I personally would love to ask you. Having been at the forefront of scholarship in the technology law sphere, where women are rarely in the spotlight, were there any challenges you faced on account of your gender, or any instances that prompted you to be more vocal about women’s rights and the role that women play in the tech law space or in academic spaces?
[Colleen Chien] 43:39
Well, I would say in general, I’ve been so grateful to be in a legal academy where there is a lot more gender parity overall, if not necessarily in patent law, where women are a minority. And I think that’s true both in the overall legal academy as well as in the political leadership, right? Under Kathi Vidal and Michelle Lee, who have been trailblazers as the first and second women heads of the Patent Office. And when I worked in the White House, I worked under Megan Smith, who was the first female CTO. So I’ve had a lot of strong women that I’ve been able to look up to, and I think that’s been terrific. And I’ve been able to work with my students, work with a lot of women and mentor them, and feel like we’re just part of this larger community, even if we’re not in the majority. So in that sense, I don’t feel like my gender has really put me at a disadvantage. But there has been one context I need to mention in which gender dynamics have been challenging for me, and that is when I was more junior. This has happened two times, when I had people who were senior to me, whom I respected and looked up to, behave in inappropriate ways towards me, in both physical ways and verbal comments, which was just completely shocking to me, because I had looked up to these people. I considered them mentors, collaborators, and to be objectified by them in those ways was very upsetting. And I remember, in the second case in particular, I told my friends, I told my husband, I was very upset in the moment, but I did not report this person, who was very prominent, to any authorities. I felt like it was just something I needed to shake off. And it was only later that I saw that person again. Actually, I witnessed him putting moves on a younger graduate student, and I realized that he was probably continuing to perpetuate this abusive behavior. And that’s when I felt like it was important to speak out and to tell my story. I think that other people coming forward and telling their stories was part of what eventually helped bring the situation to a close. So I’m not proud that I didn’t speak up initially, but I’m glad that I was able to do so subsequently. And I think it’s just really important to try to figure out how you can find your voice in those kinds of situations. I was glad that I had that second chance to really address what had happened.
[Gayathri Sindhu] 46:15
I’m very sorry you had to go through something like that. But thank you so much for talking about it here, because this is exactly what inspires more people to speak up, and having these conversations is what helps younger people talk about it. And I know that mentors like you have a profound effect on how people like me and other juniors behave. So thank you so much for sharing that.
[Colleen Chien] 46:43
Well, thank you for asking the question. And I think, again, to go back to that moment, I felt like, well, I can handle it; this was super inappropriate, but I’m strong, I can handle it. And I was really not thinking about the benefit to others that could come from me speaking out and trying to address the situation. Thinking about others is what eventually prompted me to take a stand. So I encourage anybody who’s on the fence to think about the others who are going to be impacted if this type of behavior continues.
[Gayathri Sindhu] 47:16
Thank you so much, Professor. Because in that moment, I know it’s a personal affront, and the first instinct is always to see whether you personally can handle it or not. Once we internalize that and we deal with it, we seldom want to look back at that moment. But it does take a lot of courage to see it happening to somebody else and still speak up. So I’m so happy you did that. I know anybody listening will gain from this experience. And now, moving on to more positive and uplifting messages: looking ahead, what do you envision would be the most significant positive impacts that AI would have on our society?
[Colleen Chien] 48:08
Well, I think that with the trajectory of AI, we’re still in very early days with respect to its social impact. But if it can reach its potential, right? To reduce risky work that humans have to engage in, tedious work, dangerous work, and really help humans be the best at what they’re good at, right? Which is being in community with others, caring for them, engaging in leisure. If we can really focus on what humans are good at, then I think there is the potential for this type of revolution with respect to how we spend our time and how we move in the world, and for reducing environmental degradation and all the kinds of trends that we’re seeing that are not positive. So I do think that the optimistic perspective is one to keep as our North Star, but these things will not happen automatically. We obviously need to create the right policies, we need to control the risks, and we need to make sure that AI doesn’t get into the wrong hands and that some of the negative scenarios don’t play out. So I do think that the improvement of the human condition, and really raising that for all people, is ultimately what we need to keep as our focus, but it will take time to get there.
[Gayathri Sindhu] 49:36
Of course, and are there any negative impacts that you are afraid of? Because I actually had a professor who pointed out in class that he was surprised none of us were afraid that AI would take over and wage war on all of us. And at the moment, I remember thinking that sounds so silly, but when you really think about it, maybe it has the potential for it. So what do you think, Professor?
[Colleen Chien] 50:03
I mean, I know that I’m definitely not fully up to speed on the capacity of AI. And so I do defer to the technologists who are at the forefront, who are signaling the risks and sounding the alarm bells. We do need to take them seriously; we need to listen to them. And that’s part of why the executive order that came out of the White House was a security order, concerned with the security of AI and with ensuring that it does not increase the potential for bioweapons and all kinds of disruption in the hands of nefarious actors. So I do think we need to guard against these risks. But ultimately, we have faced threats like this before and been able to respond to them, and I hope that’s the case here as well.
[Gayathri Sindhu] 50:52
Perfect, that sounds like the right call to action that we all need, and the right perspective we need to have. And on that note, I would just like to end with a more fun question, maybe. As you said, patent law is not something that essentially interests or tickles the fancy of a lot of people. And I’ve heard a lot of people say it’s very boring. But as somebody who does so much work in patent law, was there any specific patented invention, or an instance in patent law history that really fascinated you or gripped your attention?
[Colleen Chien] 51:35
Well, I think that, contrary to the opinion that patent law is boring, patent law has the potential to be, and in its early years really was, a force for patent prosperity and the actual democratization of the United States, in terms of thinking about who could bring their ideas forward, get them developed, and really move our young country forward. So one of the most inspiring patent inventors I can think of is Madam C.J. Walker. She was the first self-made African-American woman millionaire. And what she came up with was a set of technologies that would help people work with hair in the African-American community. She was an entrepreneur. She was able to sell her product widely, using a whole network of church saleswomen, to really solve the problems of African-American people, selling them a product that was very helpful, and to really get at this overlooked issue. And so I think the idea that everybody is a potential innovator, that everyone is looking to solve problems, that we can create a system that encourages and rewards innovation and the solving of the problems of individual communities, is really inspiring. This kind of sense of patent prosperity, and making sure that it is available to all, is something that I’m passionate about, and something I’ve been working quite a lot on with a lot of other scholars.
[Gayathri Sindhu] 53:02
Thank you so much for that answer, Professor. That was the perfect end to this episode. And thank you so much for joining us today.
[Colleen Chien] 53:11
My pleasure. Thank you for having me.
[Eric Ahern] 53:44
You’ve been listening to the Berkeley Technology Law Journal Podcast. This episode was created by Gayathri Sindhu, Lucas Wu, and Eric Ahern. The BTLJ Podcast is brought to you by editors Eric Ahern, Meg O’Neill, and Juliette Draper. Our executive producer is BTLJ Senior Online Content Editor Linda Chang. BTLJ’s Editors-in-Chief are Will Kasper and Yuhan Wu. If you enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. Write to us at BTLJpodcast@gmail.com with questions or suggestions of whom we should interview next. This interview was recorded on April 12, 2024. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.