SPEAKERS: Professor Chris Hoofnagle, Beth Calley, Lucy Huang
Podcast Transcript:
[Lucy Huang] 00:07
Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Lucy Huang and I am one of the senior editors of the podcast. Today, we are excited to share a conversation with Professor Chris J. Hoofnagle, Professor of Law at Berkeley Law. Professor Hoofnagle is a widely recognized scholar in cybersecurity, privacy law, and consumer protection, and serves as the faculty director of the Berkeley Center for Law & Technology's information privacy programs. He is the author of leading texts such as Law and Policy for the Quantum Age and Federal Trade Commission Privacy Law and Policy. In today's episode, we'll explore the rapidly evolving intersection of cybersecurity and elections. We'll discuss the foundations of cybersecurity (confidentiality, integrity, and availability) and explain how these principles apply to the complex ecosystem of American elections. We discuss the role of federal agencies like CISA, the unique strengths and weaknesses of the United States' decentralized election system, the risks posed by mis- and disinformation, and why paper ballots continue to play such a critical role in election integrity. Finally, we look toward the future, examining debates over digital identity, internet voting, and what it truly means to secure elections in a democratic society. We hope you find the conversation as illuminating as we did.
[Beth Calley] 01:26
You’re listening to the BTLJ Podcast. I’m Beth Calley, and I’m excited to be joined today by Professor Chris Hoofnagle. Professor, thank you so much for speaking with us today.
[Professor Chris Hoofnagle] 01:35
Thank you for inviting me.
[Beth Calley] 01:38
I want to start by giving our listeners a little bit of a better understanding of your own personal work in cyber law. Could you talk us through your experience and how you came to work in cybersecurity and the law?
[Professor Chris Hoofnagle] 01:50
Happily. After law school, I worked for a DC-based non-profit on privacy law, and after doing that for years and gaining expertise in privacy, I came to Berkeley to teach, originally in the clinics, but then eventually on the podium. So since 2009, I've taught privacy-, security-, and computer crime-related classes. I am a self-taught programmer, deeply interested in technology and in problems of crime, and so I got into cybersecurity after doing research and work on cybercrime networks, which led to a broader interest in the problems of cybersecurity.
[Beth Calley] 02:35
So I think the concept of cybersecurity itself is a little bit complex, and it might be a little bit opaque to some of our listeners, so I'm hoping we can break that down before we go any further in our conversation. Could you give us a broad overview of what exactly cybersecurity is?
[Professor Chris Hoofnagle] 02:54
The field is saddled with an unfortunate term, "cyber." Unfortunately, we have to live with it; that's the term that has stuck to this field. And it's a vague term. People think it means the internet or computers. It is a bit broader: it's essentially those networks of devices and systems that interact with the internet or other types of networks. So cybersecurity, maybe 10 years ago, was about the internet, but today we have to think more broadly. Cybersecurity includes almost anything connected. So anything in the telecommunications network, even telephones, anything in your home that is internet-connected, such as a smart speaker, all falls underneath the umbrella of cybersecurity. And the result is there's a tremendous number of different challenges, multifarious challenges, that these different services and devices present.
[Beth Calley] 04:00
So from the little bit of research we’ve done, we came across a couple of concepts that inform cybersecurity practices, broadly speaking, and these were confidentiality, integrity and availability. Could you break down each of those pillars of cybersecurity for our listeners?
[Professor Chris Hoofnagle] 04:20
Certainly. This is known as the CIA triad: C for confidentiality. That is, are data only made available to those who are authorized to see it? It's kind of like secrecy. Integrity is the sleeper one that people don't think about, but it might be the most important, and that is, are the data accurate for their purpose? So an integrity attack might be: I break into the bank and change people's balances, or I break into a health care record and change someone's diagnoses. That's actually far worse than a confidentiality breach. And then finally, there's availability: can you use the computer? The most notable way to think about this is the denial-of-service attack. This is where people use their computers to prevent another service from working, and as a result, you can't check your email, or you can't vote, let's say. It'd kind of be like, what if you and all your friends got together and locked arms and stood in front of the Target store, so that no one could get in? That would be an example of an availability attack on Target. The same thing can be done with computers.
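[Editor's note: a minimal sketch, added for illustration, of the integrity idea described above; the record and values here are invented. A cryptographic digest changes if the underlying data change at all, so an altered balance or diagnosis is detectable.]

```python
import hashlib

def fingerprint(record: str) -> str:
    """A digest that changes if the record changes in any way."""
    return hashlib.sha256(record.encode()).hexdigest()

# Store a digest alongside the record when it is first written.
baseline = fingerprint("patient=17;diagnosis=benign")

# Later, recompute and compare to detect an integrity attack.
assert fingerprint("patient=17;diagnosis=benign") == baseline      # intact
assert fingerprint("patient=17;diagnosis=malignant") != baseline   # tampered
```

A real system would use a keyed MAC or a digital signature, so that an attacker who can change the record cannot simply recompute the digest as well.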
[Beth Calley] 05:36
What laws govern cybersecurity in the US, and how do the concepts that we just talked about inform that framework?
[Professor Chris Hoofnagle] 05:45
The US has a high-level anti-hacking law known as the Computer Fraud and Abuse Act, and it's been passed in similar form in most of the world; that deals with illegal hacking. We also have a law that deals with identity theft, and that complements the computer intrusion statute, because a lot of computer misuse surrounds financial hacking, stealing credit card numbers, and so on. The new trend in the United States, and particularly in Europe, is to create duties of care for computer systems and products. What's emerging first in the States, in states like Massachusetts and California, are laws that say: if you're a business and you somehow touch personal information, you have to have computer security for those systems. And what we're seeing in Europe is something way broader. At the moment, the Europeans, in essence, are enacting a product-liability-type regime for everything that touches personal data and the like. So everything from the $20 smart speaker you buy at Best Buy to an enterprise router that costs $100,000 will be subject to comprehensive security rules. Europe is way ahead on the regulatory side; the US has mostly left this to industry, using things like tort standards of reasonableness.
[Beth Calley] 07:23
So now that we've walked through those three pillars of cybersecurity, and we've talked broadly about how the US regulates cybersecurity, I'm hoping that can guide our conversation about the intersection between cybersecurity and elections. So could you explain what goes into securing our elections, broadly speaking?
[Professor Chris Hoofnagle] 07:45
This is a moment where it makes sense to, you know, pan out a bit and get an idea for just how complex the problem of elections is, and just how humble we should be about it. I mean, think about an election and what goes into an election in a state like California, which has 40 million citizens. We have to have an office that figures out who those citizens are. We need to figure out which ones are over the age of 18, which ones are eligible to vote because they have registered, which ones continue to be eligible because they have not committed a serious felony; we have to track whether these people move away, whether they die, and so on. So it's a tremendous information problem in a nation like America, where we do not have a national identifier, and there is no kind of master government database of your identity or my identity. So one is this information challenge, and that rolls forward to a series of other information challenges. An obvious one is: how do you prove that you're Beth when you go to vote? And what are we going to require? Anything? Should we require you to know your home address? Should we require you to have a driver's license or something similar? Think about the enormity of the ballot complexity. This ballot has to be distributed to tens of millions of Californians, so it can't be printed on November 1. It has to be printed weeks and weeks earlier to get ready for the distribution challenge. And of course, as you know, there's invariably litigation before any election over what the ballot actually says. And then you think about the ballot itself: it has to have security safeguards so that I can't just print out ballots. Imagine a world where I could just go to Kinko's and create ballots. The ballot has to have internal security so that it can't be stained or otherwise manipulated, so that your vote is not properly collected. Then that ballot has to be distributed to all those tens of millions of people, and those people need to vote. There are security issues there. For instance, we do vote by mail in California, and not many people talk about the problem that if you live in a household with, let's say, domestic violence, you might not have a free vote at home. Your spouse might tell you, hey, you're voting this way, period. We've kind of put that out of our minds with vote by mail. Then that ballot's got to get back into a mailbox or a ballot box, and we've seen cases where those boxes end up getting sabotaged or destroyed. It has to come back to the Secretary of State. It has to be authenticated, counted, validated, and so on. So a huge number of things can go wrong in this; there's a huge number of places where people can intervene. Now, the good news is, that's a really long narrative, but the people who run the elections in the state know what they're doing. They've seen all sorts of forms of manipulation before, and there are redundant procedures in place to deal with various problems. So to talk a little bit about California and its resilience: one of the main reasons why California has a reliable election system is that we are using a paper ballot. We have not moved to a world where you type your vote in on a computer screen that could be misconfigured through just innocent error; there are many, many ways software can be messed up. Instead, we actually have this really reliable ballot. It's a scantron, and it itself is the artifact that proves your vote. So all those ballots go through a scantron machine, and you should see it. It's very, very fast. I mean, it's not as fast as computer tabulation, but it is very fast. It goes through these readers.
They're read, and the state performs an audit of 1% of the results, and statistically speaking, it is very unlikely that someone could manipulate the election and not be discovered based on that 1% audit. And in fact, that approach was invented here at the University of California by one of our colleagues, Philip Stark, who did the statistical modeling to show you don't have to count all the ballots. In fact, you can count a very small number of the ballots and infer whether there has been some type of interdiction in people's voting.
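[Editor's note: a back-of-the-envelope sketch, added for illustration, of why a small random sample catches tampering. The ballot counts below are hypothetical, and real risk-limiting audits of the kind Philip Stark developed also use the reported margin to decide how many ballots to check; this shows only the simplest intuition.]

```python
from math import exp, log

def detection_probability(total: int, tampered: int, sample: int) -> float:
    """P(a simple random sample without replacement contains at least one
    tampered ballot), computed in log space to avoid huge factorials."""
    clean = total - tampered
    if sample > clean:
        return 1.0  # every possible sample must include a tampered ballot
    # log P(miss) = sum over draws of log((clean - i) / (total - i))
    log_miss = sum(log(clean - i) - log(total - i) for i in range(sample))
    return 1.0 - exp(log_miss)

# Hypothetical statewide contest: 1,000,000 ballots, 5,000 (0.5%) altered.
# A 1% audit (10,000 ballots) makes missing every altered ballot
# astronomically unlikely, so detection is all but guaranteed.
p = detection_probability(1_000_000, 5_000, 10_000)
```

Here `p` is effectively 1.0: the chance that 10,000 random draws all miss the 5,000 altered ballots is roughly e to the minus 50.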
[Beth Calley] 12:38
So you mentioned some security issues, and you also mentioned an audit that can be performed, who’s responsible for doing that and who’s responsible for preventing potential security issues or attacks?
[Professor Chris Hoofnagle] 12:51
In the United States, for better or worse, this is disaggregated: it's a power devolved to the states. The better is that we have lots of diversity, so there's no way to hack the election of the United States. Any attack would have to be at a much more local level, and even attacking a whole state is complex in ways that are discounted by a lot of people. Could a really determined person attack a city? Oh, yeah. Or a precinct? For sure. But in aggregate, our diversity is a strength, and it makes it impossible to do a nationwide attack. So the upside is that our diversity makes it very difficult to levy any single blow. The downside is that security usually comes from centralization and standardization. Anyone who does security doesn't like things like bring-your-own-device; they hate it when people show up with a Chromebook or, God forbid, a Linux computer. They want everyone to be using the Dell that's issued by the employer, and they want everyone to be using the same phone, because then they can monitor all those devices, and if they go out of patch, that is, if a vulnerability is discovered, they can very quickly push the patch and fix everything. So that's the upside and downside of devolving this to the states, and so each Secretary of State has to figure that out, and the federal government plays an important role in helping them figure it out.
[Beth Calley] 14:50
Are there any agencies that specifically work in that kind of realm of securing our elections?
[Professor Chris Hoofnagle] 14:56
Under a series of presidential directives, the federal government figured out that the decentralized aspect of the United States made our infrastructures vulnerable to determined nation-state attackers. And it's not just elections; it's water facilities, electricity facilities, and so on. So over the years, the federal government has built more and more protections for so-called critical infrastructure, and the kind of capstone of this is the creation of the Cybersecurity and Infrastructure Security Agency, CISA. CISA is tasked with an impossible job, and that is securing all the critical infrastructures in a nation as large, as rich, and as diverse as America. It's electricity, it's water, it's shopping malls, it's all the things we need, like banking, in order to make our lives run, and elections.
[Beth Calley] 16:01
So with all of that critical infrastructure to manage, could you talk about how elections fit into how CISA governs the security of those things?
[Professor Chris Hoofnagle] 16:15
The challenge that CISA has had with all critical infrastructures is getting those critical infrastructures to cooperate. So what CISA does is essentially provide public goods that any state can use that will increase their baseline of security. A lot of what CISA does is figure out these kinds of operational issues that your average Secretary of State or your average county clerk just wouldn't know. Here's an example: there are 100 different security technologies and techniques out there. What are the top 10 you should do? If you're going to spend your million dollars on security, should you buy all 100, or should you buy five and really invest in those? CISA gives advice on this type of thing, and it ends up being the basics, the things we all know in cybersecurity. It's protecting yourself against phishing, so that typically means multi-factor authentication. And it's patch management, so you have to know all the computers and all the devices you have, and you have to make sure they're patched. Elections bring several other complexities, but the good news is that in those states that use electronic voting machines, these things are walled off from the internet. Actually, most of the time they're in a warehouse; they get rolled out for the election, and then they go back into a warehouse. So they're not devices plugged into the internet that could be attacked at any second by, let's say, the CCP or the Russian SVR. So elections are a little different, and this air gapping creates a lot of protection that, to be clear, does not exist for other critical infrastructures like electricity.
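[Editor's note: to make the multi-factor authentication advice concrete, here is a short sketch, added for illustration, of the one-time-password scheme (RFC 4226 HOTP) that underlies many authenticator apps. The secret below is the RFC's published demo key, not a real credential.]

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 one-time password: HMAC-SHA1 over a counter, truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the published demo secret.
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Time-based codes (TOTP), the kind that rotate every 30 seconds, simply use the current time interval as the counter; because the code depends on a shared secret the phisher doesn't have, a stolen password alone is not enough to log in.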
[Beth Calley] 18:10
You mentioned a little bit about potential attacks. What are the incentives of bad actors to obtain information or try to hack into election systems?
[Professor Chris Hoofnagle] 18:21
It's all the reasons that people have always manipulated elections. It doesn't necessarily have to be a foreign actor. It could be the candidate who just has bad morals. It could be the candidate whose fans have bad morals and the candidate doesn't even know about it. We've seen this as long as there have been elections: ballot boxes are burned or stolen or the like. You have to decompose every aspect of the election chain and understand that the attack could come in the registration space, in the sense that I could send a change-of-address form to the California Secretary of State saying, my name is Beth, and I have moved to Nevada, right? That would be a way of attacking Beth and depriving her of her ability to vote. So there have to be controls and authentication on that type of thing. Then there's the in-person part of the election: there has to be some type of authentication at the ballot box. Like, how do I know it's Chris at the ballot box and not someone else? And then you have the ballot space, the machine space, and the reporting space. Each one is subject to different types of integrity attacks, so you have to have procedures and redundancies for all of these things. And again, in California, we are lucky, because we ultimately can rely on that paper ballot. We'll never be in a situation where we truly don't know what happened; we should be able to open up all the ballots and do a manual recount.
[Beth Calley] 20:08
What are the potential dangers of bad actors obtaining personal information about voters or being able to access some of that other information as part of our election systems?
[Professor Chris Hoofnagle] 20:20
The attacks are diabolical, and they are only limited by the ingenuity of the human mind. So if you were a well-resourced attacker, maybe you don't even change the address of Beth or Chris. Maybe what you do is somehow communicate with Beth and Chris and tell them that Election Day is Wednesday, or send them to the wrong polling place. You don't need to do a lot to get most people not to vote; most people don't even vote. Maybe the people who go to Berkeley Law School or the people who listen to this podcast know they're going to vote; I will make a personal sacrifice to vote. But for a lot of people, a significant transaction cost means they'll say, I just couldn't make it. I feel bad, but, you know, too busy, too busy with the kids, too busy with work. So you could imagine an attacker who degrades participation by somehow imposing transaction costs on a population they disfavor. And from there, there are just lots of different ways of interfering with people's lives, of giving them bad information to get them not to vote.
[Beth Calley] 21:39
So given those dangers, how secure, from an integrity perspective, is the current voting system that we have?
[Professor Chris Hoofnagle] 21:49
I think from a data integrity perspective, we're in the best of all worlds. We've got paper ballots that you mark with a pen that's difficult to erase. Tampering is obvious. We can always go back to the ballot. So likely attacks are elsewhere, i.e., confusing the voter about the day that they can vote. Or, how about this: telling voters that police will be at the ballot station and will be running criminal background checks or something like that. You can just imagine, there are a dozen different ways to talk people out of acting as a citizen, and that can be done in many ways, including just with flyers, right? Just hang flyers in the neighborhood saying ICE will be there, or the FBI will be there, and so on. These are standard tactics that go back a long way. But what's interesting about today is that the web is fully identified, right? When you're on Instagram or TikTok, you're fully identified; everyone knows who you are. That kind of targeting can happen over the internet, which lowers transaction costs for attackers and makes it possible for attackers to not even be in the country.
[Beth Calley] 23:15
So then I want to move to availability. One key concern with election interference is that it isn't always as simple, like we've talked about, as a bad actor going in and changing or manipulating votes. So with things like mis- and disinformation around elections, like telling people the wrong day of the election or telling people that they'll face repercussions for going to election stations, how do we protect against those kinds of attacks, where it's maybe more difficult to point out who's doing it?
[Professor Chris Hoofnagle] 23:50
I have pretty strong views about the disinformation problem. I think we cannot, as a free society, treat it as a legal problem that we address. You know, the difference between misinformation and disinformation is the intent of the speaker, so we're already kind of skiing on stilts with that. And when we move into the area of disinformation, the risk we run is that it turns into a censorship regime, and no one is able to resist it. It's just so powerful to label something as disinformation and call for its censorship that the most stalwart defenders of free speech on the right will say, oh, so-and-so is disinformation, shut it down. And the same people on the left will shut down speech. If you look at the Biden administration, it believed that it was disinformation to talk about the president's disability, and quite literally, they were telling Facebook and Twitter: when people say that President Biden is not well and that he's not fit for office anymore, that is disinformation. And look how scary that is, right? You can end up in a world where we can't actually learn the truth, and we don't learn the truth until President Biden shows up for the debate, and then we all see it with our eyes, and we realize, wow, the White House has been lying to us, and not only that, they were telling the platforms, don't run this information, it's disinformation. My point here is not to pick on President Biden, but just that all parties cannot resist this ability to shut down the other side through censorship. And so I think it is super dangerous. Now, that said, I think it is possible to police some types of disinformation if you decompose it. Disinformation exists in several different categories. One is the concept of counter-will.
This is how you erode a population's desire to do something. Like, I could buy advertising in Russia to erode the will of young Russians to fight against Ukraine. It's very kind of diffuse. But there is also disinformation that's individually targeted. It's often referred to as counter-commander or counter-force disinformation, and that type of disinformation can look like a conspiracy to deprive someone of their civil rights. So, you know, suppose that the disinformation I do is that I go to certain zip codes in Oakland and I say, the police are going to be there; good luck at the polling place. That looks to me much more like, not a disagreement, not an opinion that we can argue about, but rather the subjective purpose of a person to depress voting outcomes in a particular population. So I think we could police disinformation when it is that tailored and that targeted and that purposeful, but if we pan out, the broader concept of disinformation almost always just results in censorship.
[Beth Calley] 27:38
So we've talked a little bit about how a feature of the elections process in the US, being controlled primarily by the states, is that it's decentralized. Could you go into more detail about how this potentially makes attacks less likely to be successful, or less likely to occur in the first place?
[Professor Chris Hoofnagle] 27:59
Computer security is dominated by this concept of "break once, run everywhere." And that is possible where you have homogeneity of systems: if the systems are homogeneous, break one place and you've broken everywhere. The US fundamentally doesn't have that. Now, within any given jurisdiction, there might be a break-once-run-everywhere problem, but then behind that are a series of redundancies. A great example is the provisional ballot. Suppose someone has changed your address: the procedure is to give you a provisional ballot, then you have an opportunity to prove your right to vote at some later time, and you still vote. We still have it on paper. So there are these fallback mechanisms that make us more resilient.
[Beth Calley] 28:59
Does that decentralized nature help at all with what we talked about as disinformation in terms of elections?
[Professor Chris Hoofnagle] 29:08
I'd have to think about that. I mean, it seems like yes and no. Aggregate disinformation, like counter-will disinformation, tends to be very big, like broad campaigns. But then when disinformation is tailored, you know, maybe you have a situation where you could deliver an important false message to a certain population, let's say based on their geography. You know, a lot of people have thought about this, and this is one of the reasons why the big advertising giants, like Google and so on, just won't let you advertise around an election, because they've realized that their technology is so powerful they know who everyone is and where everyone is all the time. And there have been examples of pretty aggressive use of these powers. I mean, a great example is pro-life activists using the geolocation of phone holders to deliver them messages when they visit a Planned Parenthood. So the platforms have figured this out, and I think this is one of the reasons why they are backing away from revenue that would come from targeted electioneering.
[Beth Calley] 30:36
So then I think one of the concerns, on the other side of this sort of decentralized system, is that although it may strengthen protections against wide-scale interference, attacks might be overlooked or missed, because, as we discussed, there's little federal oversight. What do you understand to be the key risks with each state running its own election process?
[Professor Chris Hoofnagle] 31:01
Well, it's only as good as the Secretary of State's office. So if you have a very professional Secretary of State, someone who knows what they're doing, you might have a fine system. And if you have a Secretary of State who maybe is elected, and maybe isn't elected for the best of reasons, maybe you have someone who doesn't know that they don't know what they're doing. The added problem there is that a lot of elections are very close, and so a smart election attacker would find those individual precincts that could be flipped in a plausible way. That takes a lot of planning. It also takes a lot of intelligence. So in essence, it's not easy for a foreign government to make an attack like that, because they have to have a lot of target intelligence, including local intelligence, and they might just not have it. I think one of the funniest disinformation campaigns I saw in recent years came from the CCP. It was essentially Chinese intelligence, and they wanted to interfere with the creation of a chip fabrication plant in Texas. So what they did is they stood up a kind of grassroots pro-environmental organization to object to this chip factory. Now, of course, that could work in California; that makes sense in California. It does not make much sense in Texas, and it was immediately spotted as a foreign influence campaign because of how poorly it rhymed with Texas politics. And that's China, right? A sophisticated adversary couldn't even figure this out, so that lack of sophistication is partly what's protecting us.
[Beth Calley] 33:08
So now that we’ve kind of talked through how cybersecurity and elections interact, I want to look towards the future and talk about what elections might look like down the line. So how should cybersecurity and election law evolve to address new forms of digital interference that maybe don’t fit within the traditional definitions of hacking, so things like dis and misinformation?
[Professor Chris Hoofnagle] 33:35
Well, I think one of the issues we're going to see in the political space is that we have a problem with the root of identity in the United States. We do not have a national identity card. The closest thing we have to that is the REAL ID card. So the modern driver's license does a better job authenticating people, but it's still a house of cards; it's still show-me-your-power-bill and so on. Most Americans do not have a passport, so those high-quality indicators of citizenship, plus identity that's tracked in careful ways with fingerprints and so on, we don't have in the US. So one battle that I think we're going to see going forward, and it's linked with digital currency, is whether there should be some type of digital identity, or some type of way to authenticate in a reliable way, let's say at the voting box. And this is going to be tremendously controversial for all the reasons, right? Progressives are going to object because there will be poor people who never got around to enrolling. Civil libertarians will object because they'll believe that they shouldn't have to carry a federal government identifier, and so on. So I think one of the areas of compromise and exploration we're going to have to deal with is the root problem of identity. That deals with, you know, do we have the right person in the box. The other issue that keeps on coming up is: why not just have an internet vote? Why do we even go through this, mailing all these heavy ballots, all this paper, and so on? Why not have an internet vote? And the advocates of it say, listen, there's going to be fraud, but the fraud we experience from it will be offset by the millions of Americans who will vote because they don't have to go to the mailbox and they don't have to go to the ballot box, right? They can just sit at home; they can use their phone.
And of course, the big problem there is that the underlying security problems of the internet are so great that it is completely implausible to have a secure internet vote. The US military does it, but the military is a special entity. It's like the Star Trek Enterprise, right? Everything works on the Star Trek Enterprise, but that's because it's a Federation warship where people have to comply with the rules. In a free society, it's much harder, and you'd have to deal with all the nuanced problems that exist with internet sites and services. So the near future of this is going to continue to be some type of in-person interaction or a paper ballot mailed. I think that's where we are, until much more fundamental problems can be resolved with the internet, and in the near term, I think there are going to be issues surrounding identity.
[Beth Calley] 37:12
I want to dive a little deeper into the prospect of Internet voting. And you’ve mentioned frequently that California elections are so secure because we have paper ballots and our election machines don’t connect directly to the internet. Could you talk through what Internet voting would even look like in the US?
[Professor Chris Hoofnagle] 37:33
We would have to start out with a stronger root of identity, and there are ways we could do that. A great example: we have post offices disaggregated all over the United States. Currently, the post office is basically the first line for your authentication. When you get a passport, you go see trusted post officers who know how to look at your ID and take your fingerprints and do all those things. So imagine a world where you opt in to getting something like a digital identity identifier. This would be a cryptographic signature that is issued to you by the government, so we can reliably prove that it is you, and then you visit some type of system where only those who have had this identity vetting, and only those who have agreed to an internet vote, get to vote. It would have to look something like that. It couldn’t be like amazon.com, right? Amazon.com is easy to use, and there’s also fraud on amazon.com. Well, why? Because we don’t care; it ends up being rounding errors. But in elections, rounding errors can change the outcome. So it would have to look more like a passport process, and it’s not feasible to do on a very large scale. So here’s an example: I think it would look more like the Clear lane, right, where there are people who are able and willing to opt into some type of background-checked biometric system, and so all you’re doing is looking at that slice, rather than the entire American population.
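To make the idea concrete: the “cryptographic signature issued to you by the government” that Professor Hoofnagle describes could be pictured as a signed credential that a voting system checks before admitting anyone. The sketch below is purely illustrative and not any real or proposed system; it uses an HMAC with a single issuer-held key as a simplified stand-in for a real public-key signature scheme, and the names (`issue_credential`, `verify_credential`) are invented for this example.

```python
import hashlib
import hmac

# Hypothetical secret held by the issuing authority. A real scheme would
# use public-key signatures (e.g., Ed25519) so that verifiers would not
# need to share the signing secret.
GOVERNMENT_KEY = b"demo-signing-key-not-for-real-use"

def issue_credential(voter_id: str) -> str:
    """Issued once, after in-person identity vetting (e.g., at a post office)."""
    return hmac.new(GOVERNMENT_KEY, voter_id.encode(), hashlib.sha256).hexdigest()

def verify_credential(voter_id: str, credential: str) -> bool:
    """The voting system checks the credential before admitting the voter."""
    expected = hmac.new(GOVERNMENT_KEY, voter_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)

cred = issue_credential("alice")
print(verify_credential("alice", cred))      # True: vetted identity accepted
print(verify_credential("alice", "forged"))  # False: forged credential rejected
```

The point of the sketch is only that vetting happens once, offline, and that everything downstream verifies cryptographically rather than by “show me your power bill.”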
[Beth Calley] 39:26
So I want to connect this back to an example of what internet elections could look like. This year, in 2025, in Nepal, there was an example of internet voting, and it was actually conducted on Discord. Do you see any world in which the US uses some kind of third-party platform like Discord to conduct elections, or is that something that would never really happen?
[Professor Chris Hoofnagle] 39:55
That’s an example of the difficult place that many middle-tier nations are in when it comes to technology. America is not a technology taker. We don’t have to settle for using Facebook or Instagram or TikTok for anything; we can make our own stuff. But because of the dominance of platforms, where basically it’s Google or Microsoft and that’s your choice, much of the world’s nations are technology takers, and they have to accept those trade-offs. I cannot foresee a situation in the US where we do that, because if you set nation-state-level hackers onto a system like Discord, they will find holes. Unquestionably, they will. Just as an aside, this past week I was coaching our undergraduate offensive cyber team. They came in second out of 10 in a contest where they have a limited number of hours to analyze a system and attack it. And what had they done? They had assembled an AI system that would enumerate holes in the system and so on. This is within the capacity of Berkeley computer science students. So just imagine the efforts that a Russia or a China would put into cracking a platform to get an edge. It’s just not feasible.
[Beth Calley] 41:35
So how do we improve public confidence, specifically, in the electoral system, after concerns about keeping our elections secure were raised in the 2016, 2020, and 2024 elections?
[Professor Chris Hoofnagle] 41:52
I would say, go volunteer and see what these systems look like in person. One of the big risks of the internet is that it turns us all into armchair experts, and in a way, I feel like it erodes our empathy and our realization that other people are smart and other people think about things. The people running elections weren’t born yesterday, and they’ve dealt with security challenges for a long time. I think a lot of the controversy surrounding this issue, including the extremism that’s resulted in quite literally violent personal attacks against election workers, would be eased if, for one, you could spend a day doing election watching. You can show up and watch and see just how professionally and carefully these systems are monitored. But then again, I’m quite radical on this. I actually favor a draft. Okay, that’s how radical I am about reintegrating people into what it means to run a society, because when you see these people in action, it becomes impossible to vilify them in the ways it’s so easy to do when you’re an armchair internet commenter.
[Beth Calley] 43:15
So speaking of being a proponent of bringing people back into the election process: do you feel like in the future there’s a possibility of people being removed entirely from the election process? In other words, is there a world in which we advance to a point where we don’t have polling workers, and it’s maybe entirely internet-based voting?
[Professor Chris Hoofnagle] 43:40
The closest analogy comes from cryptocurrency systems. Could you imagine a system where every transaction ends up getting hashed in such a way that it cannot be changed by another person? Such systems are pretty secure. Now, look at cryptocurrency: sometimes people do get robbed and their cryptocurrency is taken from them. But the interesting thing is that cryptocurrency is trackable, and the steps the thief makes can be tracked. So can you imagine doing that with voting, where your vote is cryptographically signed? Then after the vote, you could log in and verify it. Now, the downside of this is that it’s likely to reduce ballot secrecy, and that’s really something to think about. Ballot secrecy is what makes us each free to vote our own conscience and not be terrorized for how we decided. You know, who knows? Maybe you decided to vote for the wrong side in the last election, and maybe you’re rounded up because you voted the wrong way. We’d have to deal with that problem, and I don’t see that problem being solved by even state-of-the-art systems in crypto.
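The tamper-evidence idea in that answer, hashing each vote into a chain so no single record can be quietly changed, can be sketched in a few lines. This is a minimal illustration only, not a real voting design; the function names and the ledger layout are invented, and it deliberately ignores the ballot-secrecy problem Professor Hoofnagle raises, since each record here links a voter token to a choice.

```python
import hashlib
import json

def append_vote(ledger: list, voter_token: str, choice: str) -> str:
    """Chain each vote to the previous record's hash, blockchain-style.
    Returns a receipt the voter could later use to find their entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"voter": voter_token, "choice": choice, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)
    return record["hash"]

def verify_ledger(ledger: list) -> bool:
    """Recompute the chain: any altered record breaks every hash after it."""
    prev_hash = "0" * 64
    for record in ledger:
        payload = json.dumps(
            {"voter": record["voter"], "choice": record["choice"], "prev": prev_hash},
            sort_keys=True,
        ).encode()
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

ledger = []
append_vote(ledger, "token-1842", "Measure A: yes")
append_vote(ledger, "token-2951", "Measure A: no")
print(verify_ledger(ledger))   # True: chain is intact

ledger[0]["choice"] = "Measure A: no"   # someone tries to flip a vote
print(verify_ledger(ledger))   # False: the tampering is detectable
```

Note what the sketch does and does not show: the chain makes alteration detectable, but it does nothing to hide who voted for what, which is exactly the secrecy trade-off discussed above.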
[Beth Calley] 45:13
How much cybersecurity risk would you consider the US to be willing to accept for the convenience of making elections easier?
[Professor Chris Hoofnagle] 45:24
Not much. It’s inherently tied up with this political issue of democratizing the vote. So, before your time, but within my memory, we had a law called Motor Voter, and it was very controversial. The idea was that when you went and got your driver’s license, you would be automatically registered to vote, so you wouldn’t have that extra step of sending in a voter registration form. Very controversial. There were a lot of people who didn’t like that for a lot of reasons, and so movements to democratize the vote will receive resistance, and it’s going to come from both parties, depending on where it happens. I’ll just share a personal anecdote. I was interested in student activism as a college student, and I used to register students to vote in our college town, because I wanted students to vote there so they could affect the town’s politics, right? And I was astonished by how many rules there were around this activity. For instance, I wasn’t allowed to collect voter registration forms from people. That was actually a crime. I had to give them the form; they had to fill it out and put it in the mailbox. What I wanted to do was have them fill it out and give it to me, and then I would be sure to take it to the mailbox. That’s actually a crime, like a felony. So there are complex dynamics surrounding expanding the vote that we will have to work out. I’m sure security will be used as an excuse not to democratize the vote.
[Beth Calley] 47:06
Can you elaborate on that a little further? What do you understand to be some of the pushback we might face in democratizing the vote from a cybersecurity perspective?
[Professor Chris Hoofnagle] 47:18
Security, in a way, is a dreadful thing. Among the theories I teach in my cybersecurity course is critical security studies, which makes the point that security can allocate power: it can be used to deprive people or to make other people more insecure. That’s the problem with viewing issues through a security lens; you can start to shape society so that people are disempowered, or they bear more risk. I’ll give you a commercial example. When companies collect personal information from you in order to authenticate you, they increase their security, but they decrease your security. It’s a transfer, because if they have a breach, your personal information is now available in one more database. There are ways to get around these problems, but most companies don’t. So the first thing I’d say is that public policy problems don’t necessarily have to be security problems. They can be thought of in other frames: as a resource problem, an economic problem, a public health problem. When you choose the security frame, you can often justify setting the table in a way that makes life harder for some people.
[Beth Calley] 49:00
So do you foresee the solution to this being trying to move back to fully paper voting, or do we push for more advanced technology involved in our elections?
[Professor Chris Hoofnagle] 49:11
I think paper is technology. There was a time before paper, and it’s a great technology; the most advanced technology might not be the best way to deal with election security. Here’s an example. I wrote a book about quantum technologies, and there’s actually a government that uses quantum encryption to transmit its ballot results. I actually think that’s a grave error. Quantum encryption is way more complex, and it’s way easier to screw up. So we ought to think about what we can do. We should think about even paper as a technology, and we should think about its relative advantages and disadvantages. We should really resist the shiny new object.
[Beth Calley] 50:03
Are there any areas in the US? I know we’ve mentioned that California primarily relies on paper voting, but are there states that have moved more in the direction of using forms of technology other than paper?
[Professor Chris Hoofnagle] 50:18
I don’t know which states have done this, but some states have taken up computer terminal voting. Essentially, the same companies that make ATMs can also make a touchscreen voting machine that you can make selections with. Now, that’s great. The problem is, and I teach programming, okay, the problem is that you can make innocent programming errors. You can also make errors in the user interface that cause people to vote incorrectly. This can be entirely innocent, entirely accidental, so it adds a complexity that I don’t think is necessary. One of the big reasons I’m a fan of California and its Scantrons is that they can be read ultra fast. Now, it’s not as fast as computer tabulation, but here is the downside of these computers. Let’s say it’s 2025 and you buy computer systems for your state. They’re going to sit in a warehouse for two years until the next election, and then you’re going to roll them out, and then they’re going to sit in the warehouse again for another two years before you roll them out again. And what software do you think is on those systems? It’s probably an old version of Windows. It might be Windows 8. We actually see this in the military; a lot of military appliances are computer-controlled, and below the screen is actually Windows 8. So what you inherit is all the vulnerabilities of a very old operating system that you can’t update, because it’s linked inherently to the device, whether that’s a voting machine or some type of weapons device. It’s often just not worth it if you can find some other way.
[Beth Calley] 52:15
So because the software is inherently linked to the device, would you have to completely redo it, like get new devices, in order to advance to a more advanced system?
[Professor Chris Hoofnagle] 52:29
Not necessarily, but there would have to be smart people maintaining the underlying software for those years when there are no elections. See, that’s why I don’t understand how computer-based voting devices net out. I mean, this is a device you use once a year. It’s a digital device, and then it goes back into the warehouse. I don’t understand how it pencils out. If you take security seriously, you have to update the underlying software in those systems.
[Beth Calley] 53:06
So to close us out, I’m wondering if you have any broad takeaways on how cybersecurity currently interacts with elections and what the future looks like.
[Professor Chris Hoofnagle] 53:22
Well, the first thing I would say is, don’t panic. Things are actually way better than they used to be. The creation of CISA, the federal agency, has helped: even in its diminished state, even though it’s been attacked, it has produced a lot of material that, if institutions merely followed it, would make them much harder to attack. You can download these materials, and you can see that it’s actually pretty simple. It’s things like properly provisioning accounts to employees, making sure those employees understand what phishing is, and making sure those employees have multifactor authentication. It’s basic. The interesting thing is, basic cyber hygiene eliminates a lot of the attack surface. So that advice is out there, and states increasingly are following it. So I say, don’t panic. The problem is understood, and it’s better managed than it’s ever been in the past. And then think about disinformation. I think the risk there is the risk to ourselves: that we could so embrace censorship that the polity itself is damaged by our desire to secure ourselves. So, to reiterate, I’m very skeptical of disinformation as a concept. I don’t think we should be using security to deal with it; I think we should be using economics. Why is there so much disinformation? Well, it makes money, and if we figure out the monetary drivers, we could find ways of reducing it. And we also have to know it’ll never go away. There’s always going to be disinformation; there has always been disinformation. Even Julius Caesar used disinformation in his civil war.
[Beth Calley] 55:19
How would we figure out what those economic incentives are?
[Professor Chris Hoofnagle] 55:24
We have done it with respect to newspapers and magazines. As an example, there is a federal regulation where, if you use the mail to circulate your publication, you actually have to say who you are and who funds you. Every magazine has this disclosure in it, and this is why you can look at a certain newspaper that’s distributed in the Bay Area and say, wait a minute, this is the Chinese Communist Party. It’s because of this federal regulation requiring them to explain who actually pays for the newspaper. We do that with magazines; there’s no reason we couldn’t do it on the internet. We could have disclosures about who is behind what. We could have disclosures that identify bots; there are reliable technologies to detect bots and so on. So we’ve long had this problem with mail and magazines and newspapers, but we haven’t been willing to implement these solutions.
[Beth Calley] 56:31
Thank you so much, Professor Hoofnagle. I know I’ve learned a lot, and I’m sure listeners will as well.
[Professor Chris Hoofnagle] 56:36
Thanks so much for having me. It was a pleasure.
[Lucy Huang] 56:38
You’ve been listening to the Berkeley Technology Law Journal podcast. This episode was created by Beth Calley, Meryl Miralam, Ying Wei Kao and Robert Thyberg. The BTLJ podcast is brought to you from senior podcast editors Joy Fu and Lucy Huang and junior podcast editors Cameron Banks, Robert Thyberg, Paul Wood, Ellen Huang and Yijia Zhou. Our executive producer is Jesse Wong. BTLJ’s editors in chief are Yasameen Joulaee and Emily Rehmet. If you enjoyed this episode, please support us by subscribing and rating us on Apple podcasts, Spotify, or wherever you listen. You can reach us at btljpodcast@gmail.com. This interview was recorded on November 14, 2025. The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.