How Will AI Affect Democracy?
SEASON 3, EPISODE 2: Two AI experts join Governors Bredesen and Haslam to discuss the potential impact of AI on democracy
Policymakers are increasingly focused on how to regulate AI, but what impact might AI have on democracy itself? The risks AI poses to the democratic system, including misinformed voters and manipulated election processes, are becoming more evident by the day, but is it all bad news? Dr. Sarah Kreps, a political scientist and director of the Cornell Tech Policy Institute, and Bruce Schneier, a technologist and Harvard Kennedy School lecturer, join Governors Bredesen and Haslam to dig into the good, the bad, and the unknown about how AI will impact democracy.
Subscribe and follow You Might Be Right wherever you get your audio content – including Apple Podcasts and Spotify – to never miss an episode, or sign up for our email list to receive new episodes straight to your inbox each week here.
“People want empathy, at least the perception of empathy, in their politics.”
Kreps, who describes herself as a “technology optimist,” noted that the impact of AI in politics might be slower than in areas like finance and science because “where people don’t love the idea of AI is where you want empathy,” she said. “And I think people want empathy, at least the perception of empathy, in their politics. I think that’s why you might see slower integration of AI tools in politics because it will just seem like this person was not empathetic and they couldn’t even write their own speech, or they couldn’t even write their own email response.”
One area that Kreps identified as a potential positive use case was the ability of AI to help elected officials analyze sentiment and better understand the opinions of their constituents. Recounting a conversation with a former member of Congress who received thousands of emails per week, Kreps noted, “He’s trying to figure out you know, what’s the pulse of his constituency, and he can’t process it all…So what these kinds of tools can also do is categorize kind of a sentiment analysis, topic analysis, what do people think? Because you can’t, as elected officials, represent the people if you don’t know what they think.”
Harvard professor and democracy scholar Danielle Allen also addressed this topic in an earlier episode of You Might Be Right, in which she pointed to Taiwan as an example of a country that has led the way in using AI to aid in consensus building.
“The concerns are that we don’t know the concerns.”
Schneier opened his conversation with the governors by pointing out that one of the major challenges and concerns with AI is that “the speed of change is going to be faster than our speed of reacting to change.” He added, “It’s going to affect work, it’s going to affect how we communicate, it’s going to affect everything about how we live our lives. So, the concerns are that we don’t know the concerns. And we in society are pretty slow in reacting to change, like passing new laws and new regulations, or figuring out how society changes in response to technology. And these changes are going to come pretty fast, is my guess.”
Schneier noted that the rapid speed of change means that his answers at the time this episode was recorded, in August 2023, were, in at least one case, different from what his responses would have been back in March, and that his thinking may well evolve again in a few more months. “Studying this is an exercise in humility,” he said. “What you say could be wrong. I mean, have me back in December, we’ll do this again, we’ll have different answers.”
“What do we do as a country, as a society, where getting it wrong means people die?”
Kreps and Schneier both shared their thoughts on how the government should regulate AI.
“I kind of think a new regulatory agency is something we should think about,” Schneier said. He acknowledged “that kind of major change is difficult” but noted that in the past, government agencies have formed to regulate new technologies like airplanes, radio, nuclear power, and cars.
“We have very much a rights-based society,” he said. “The exceptions are where doing the thing can kill people, and that’s like airplanes, cars, and pharmaceuticals. There, we are permissions based, you can’t do the thing unless we allow it, and we do that because getting it wrong means people die. AI is going to move very quickly into that latter category. So, what do we do as a country, as a society, where getting it wrong means people die? That’s going to be moving this from rights-based to permissions-based, and no one’s going to like that…We’re all going to hate it, but it’s going to be necessary because we are going to become so powerful as individuals that we are going to need to do that.”
Kreps highlighted the need to ensure whatever regulations are put in place are sustainable as political power shifts. “I think whatever set of rules or regulations is put in place should be applicable whether or not you agree with the person in the White House,” she said. “I think often there’s a tendency to craft legislation or answers that suit the party that’s in control at the time, but I think it just needs to be incumbent upon those that are making those decisions to think one day, it might be the case that the other guy is in charge, and you need to have the same set of value structures regardless.”

Join the conversation on Twitter by following @UTBakerSchool, @PhilBredesen, and @BillHaslam.