Why AI won't destroy democracy (at least this time)
An interview with Sam Jeffers of WhoTargetsMe
The Election Tech Handbook is holding its second monthly meetup in a week’s time, on the 11th of April.
Register here: https://lu.ma/electiontechmeetup2
Sam Jeffers is the Executive Director and Co-Founder of Who Targets Me. Founded in 2017 during the UK general election to monitor the use of online political ads in real time and analyse their impact, Who Targets Me has since had its browser plug-in installed by over 30,000 users worldwide, in more than 100 countries and 20 languages.
He recently spoke at an event hosted by the UK Democracy Network and ANTIPARTY. For more events like this, subscribe to the Election Tech Handbook Calendar: https://lu.ma/electiontechevents2024
He sat down with the steward of the Election Tech Handbook, Richard Hames, to talk about why much of the fear around Generative AI is misguided.
The Election Tech Handbook already contains projects about misinformation and disinformation. You can add others!
Say No to Disinfo, https://www.saynotodisinfo.com, which designs, implements and iterates media literacy campaigns.
Fake News Debunker – https://chromewebstore.google.com/detail/fake-news-debunker-by-inv/mhccpoafgdgbhnjfhkcmgknndkeenfhe – which helps you identify fake news when it is presented to you
The Library, https://electiontechhandbook2024.uk/library, also has resources on misinformation, such as a guide for voters on how to spot misinformation, made by Carnegie Mellon University in the US: https://www.cmu.edu/block-center/responsible-ai/genai-voterguide/genai-voter-guide.html
This interview has been edited for length and clarity.
Sam Jeffers
The idea that [Generative AI] is going to destroy democracy is so pervasive. And I do think, like with any technology, that it will have some impact. But it’s not going to have an overwhelming impact.
I think we look at technology as being the problem when the reason democracy seems to be sliding is mostly that politicians just can't deliver anything. There are much bigger problems that are real and political, and I just don't see a few voice fakes making something strange happen in this election.
The odd bad incident will happen and we should stop that happening. But we certainly shouldn't be thinking that British democracy is under any real threat.
There are very few incidents that change elections but there are lots of narratives that do.
Obviously, it's much easier to get an op-ed written, if you say, “My God, we won't be able to believe what we see anymore. And the truth is going to disappear entirely.”
It's the same with fear of disinformation or Russian influence, right? These things exist. They're important, but they're not the most important thing. And I think that's where a lot of this debate is getting very hyperbolic.
And some of the research in disinformation studies now points to this Liars’ Dividend Problem [the prevalence of fakes makes it easier to dismiss genuine content as false]. People are being told they can't believe what they see, even though they should believe most of what they see. And that's a bigger problem: if you start talking about it the whole time as something that is going to destroy democracy, you make it seem that nothing is true anymore. I mean, come on.
Richard Hames
You produce the problem you're trying to defend against.
Sam Jeffers
A lot of the platform concern about AI and elections for me is quite performative. I feel like we're going to get through 2024 without really any major AI incident. And at the end of it, [the platforms are] gonna say, “Hey, look, we protected the elections from AI. We did everything we could. We learned our lessons from 2016. Look how good we are at protecting democracy." And part of that is just going to be because nothing much actually happens.
Richard Hames
I've heard people suggest that the story of existential AI risk is part of the strategy incumbent players are using to get regulatory capture. Is something like that happening here?
Sam Jeffers
There's a sort of self-defence mechanism. [People are asking themselves] “how much are we actually worried about this?” versus “how much do we need to be seen to be worried about this?” versus “if we really seem to worry about this, and then nothing happens, can we raise our hands in the air and say, ‘what a great job we did’?”
I spoke to a guy who's at X/Twitter Integrity, and he is really worried about this. But I think if you look at it from the perspective of just Twitter, that's quite understandable because the platform has lost its ability to moderate itself. But Twitter isn't everything. It's important, but it's not everything. And the other platforms have definitely tried a bit harder than that. It's much more costly for the other platforms for bad things to happen. Whereas bad things happening on Twitter is just priced in at this point.
Richard Hames
You had a really interesting phrase [in your talk for UK Democracy Network and ANTIPARTY], which was: “AI can create a moment but it can’t create a narrative.” I was thinking about the history of British and American elections and how they have turned on either ‘narratives’ or on ‘moments’.
There certainly seem to be moments where things turn decisively. The classic example is Ronald Reagan, who has some sort of witty put-down in a debate and then suddenly he wins 49 states. I don't know if that's really compelling though, or whether it’s necessary to understand a bigger context.
Sam Jeffers
Actually, I just don't think these turning moments are decisive in the end. Reagan does have a witty put-down and Walter Mondale has to laugh it off, and it's not great. But Reagan is going to win that election by 100 miles, no matter what happens.
Hillary vs Trump is super close. And that “her emails” thing is definitely a problem at the last minute. These kinds of things happen: an October surprise. On the other hand, everything that happened in that election was within the margin of polling error and Trump lost the popular vote, won a couple of states unexpectedly and won the election. You could just as easily have seen Hillary become President that evening and no one would have batted an eyelid about it.
There's this Slovakian example that's being used to illustrate AI in elections at the moment. A deepfake ran in Slovakia two days before the election, and the baddie won. Right. This is a bad incident, but he actually had a massive poll lead throughout that election.
There are very few incidents that change elections but there are lots of narratives that do. The narratives are big things like “let's get Brexit done”. Those are thematic hooks in elections. The narrative that Jeremy Corbyn was useless had taken hold by 2019. It didn't exist in 2017. The narrative that Theresa May was useless had taken hold in 2017.
I find it very hard to imagine [what you would come up with] if you sat down now with your most evil hat on and said, “What is the thing that I could try and invent to swing the British election?” You'd have to think, “What have we got on Keir Starmer? Like maybe the Savile stuff? Can I create endless Savile deep fakes about Keir Starmer?” Maybe. But then who am I? What reach do I have? There's no way the Tory party is going to do that. It would be insane for them to do that. It's not realistic, right?
It's very hard to come up with that stuff, and so that's where I'm sort of sceptical about the impact of anything on an individual election. Even if Rishi Sunak could invent a scenario now to help him win the election, where everything for the next six months goes perfectly for him: the economic figures are incredible, not a single boat crosses the Channel, schools magically repair themselves and hospitals start self-building, right? He would still lose the election. And that's my general view on it: elections are just hard to win.
We sometimes look at technology as being the problem when in fact the reason why democracy seems to be sliding is as much because politicians just can't deliver anything
Richard Hames
The other example of a decisive turning moment that springs to mind is the ‘bigoted woman’ moment in 2010 with Gordon Brown. But by then New Labour's mandate has sort of run dry, and there’s been the largest financial crisis in decades. Lots of other things have happened, but then we pick out particular moments as decisive.
I agree that the Tory party are not likely to make lots of deepfakes themselves. During the 2019 election they changed their Twitter name to ‘factcheckUK’, right? That seemed egregious at the time, and it was egregious. But even so, I agree, I don’t think they're going to be doing the deepfake stuff. But there are lots of, as you've said, superPAC-style arrangements in UK politics now. Doesn’t the ease of setting up a channel mean that it’s now possible to spread fake images without revealing who you are?
Sam Jeffers
So in 2019, these new pages pop up and they spend a bunch of money and they're not very transparent about who they are, although they do check all of the boxes that need to be checked legally. They registered with the Information Commissioner, they registered with the Electoral Commission, they submitted spending reports, they did the things you need to do. They set up a company with Companies House and spent money through it, they had directors, etc. It wasn’t totally easy to find out who these people were, but a lot of that was because they simply wouldn't answer the phone or emails or take questions about what they were doing. They were just running messages and not really doing much else.
The problem there is less how they create content than that the system doesn't really require enough transparency. Still, no one knows what their actual source of funding was. This is sketchy stuff, but the kind of stuff that we could fix with good electoral and democratic reform. You’ve just got to make it a bit more obvious who someone is and who can hold them to account. That feels like a reasonable response, right?
I've talked to some of the platforms about ways they could create more transparency. One argument is that you shouldn't be able to pop up a week before a British election and just drop 100 grand on ads, right? That feels somewhat illegitimate if you're not actually running for office, and you could potentially just be targeting a single constituency. And there's nothing really in the rules to stop that. So something like throttling people’s spending a bit, making sure that they do proper ID verification and so on. It’s sort of simple stuff, rather than needing to worry about whether we can detect AI content or anything like that.
Richard Hames
I also wanted to ask about the idea of incredibly persuasive text. Not deepfakes, not images, but words being microtargeted. When I think about the kinds of persuasive texts that have been used in elections over the last few rounds, they are literally three words long: “Take Back Control” and “Get Brexit Done”. These are things that have been persuasive. And as far as we know, AI can't do those massive slogans any better than Isaac Levido.
Sam Jeffers
There's also no channel by which this infinitely persuasive messaging can reach you either, right? Am I gonna be on Facebook one day and then a little bot is gonna pop up and be like, “Oh, hey, Sam, let's just chat about politics for ages while I try and destroy your fundamental existing views on politics and take you apart and turn you from a lefty into a righty”?
Maybe you're still sceptical of this argument, but there are just so many of these things that have to happen in a row for [generative AI] to make a difference. The whole history of political communication research just says “this ain't going to happen”.
The people who are saying it will, like Sam Altman [the CEO of OpenAI, which makes ChatGPT], just don't do political campaigns. They do marketing. And often, what they're marketing are new things about which people have no priors, so Altman and others think it's easy to just say, “We invented a thing. We can tell you about it.” And people are like, “I don't know anything about that thing. That sounds exciting.” You can't do that in politics. You can say “we just invented a new [Tory party]”. Everyone knows about it. It doesn't work. Hence my scepticism.
Richard Hames
In that Sam Altman marketing example, there’s a strangely pervasive idea that you can replace the whole of the existing political structure, including all the legal stuff, all the social norms, with a new layer of technology, and that layer of technology is so powerful that it just obliterates the rest of those things. I think that vision is deeply uncompelling.
Amongst American liberals in particular, who are very powerful in getting their voices out to the world, it speaks to a particular kind of cope, shall we say: the idea that Trump won the election because of some sort of devious technological dominance, and not because people hate the Clinton family or just really didn’t want Hillary to be President, which is a longstanding, 20-year-old narrative in American politics.
In your talk, you said that fragmentation prevents the sort of mass takeover imagined in the most worrying AI and elections narratives. And I think it might be a bit more complex than that. On the one hand, people being split from each other does mean that it's difficult to message through single big channels. But it also means that you can pick off bits of the landscape of people and persuade them, as a group, of something like a conspiracy theory. If you can pick off a big segment, it then becomes very difficult to bring them back in, because there is no clear mainstream narrative, or any single unified world. The classic example here is QAnon in America. And again, I appreciate the American and British systems are very different.
And so fragmentation seems to me to cut both ways in terms of the spread of misinformation and disinformation. I wonder what you think about that?
Sam Jeffers
If you take the QAnon segment of the American electorate, it wasn't enough, right? QAnon lost the 2020 election and, in some respects, QAnon lost Trump the 2020 election, because the Republican Party just looks incredibly mad to a lot of people when it is affiliated with things that are clearly irrational and incorrect. So it's much easier for a purple voter to say, “Joe Biden looks like he's a bit old and he's a bit wobbly, but he’s also a bit less mad than everyone else.”
I think that can be true in Britain as well, when you look at how the far right is approaching certain issues, or how bits of the Labour far left are. They create problems in all sorts of interesting ways, and potentially being the most centrist option is a good response to that. Maybe there's not enough centre, or the centre ultimately doesn't hold and fragments into a thousand pieces. And you'd certainly worry about that stuff happening in Europe, right, where the far-right parties are going to have a real field day in the European elections this year. They’re going to do very well.
It happens that Britain has had a Conservative government for a long time. Eventually governments’ mandates evaporate and someone goes in the other direction. In the UK’s case, that means it's going to go to the left and the Labour Party. In France, it's going to go from Emmanuel Macron to Marine Le Pen. In Germany, it's going to go from the SPD to the AfD.
We're going to have to look, in Britain, at how we maintain some kind of common core. One thing I think is that the Labour Party should restore BBC funding to its historical levels. The BBC could well be the dominant, trusted global news source if it had the sort of funding that something like Al-Jazeera gets from the Qatari government. You could see it doing a lot more than it currently does.
The odd bad incident will happen and we should stop that happening. But we certainly shouldn't be thinking that British democracy is under any real threat
Richard Hames
Like it did for much of the postwar period right?
Sam Jeffers
For a long, long time! Now they're having to cut back on all of the foreign correspondents and all sorts of other stuff. They haven't innovated enough on the Internet: they’ve got a website and iPlayer and not much else. It just seems to me that someone needs to think quite a lot about what that sort of common core of people's information looks like, and needs to have a bit of a plan for it. And maybe a new government will do that.
Richard Hames
What would change your mind on your general thesis here? Would Sora3 have to come out before the election and people get trapped in 3D worlds in their Apple Vision Pros or something? What technological shifts would have to happen for you to be like, “Oh, this is something real.”
Sam Jeffers
I don't think it's not real. I don't think it won't work or that it won't change things. It's just gonna be a lot slower than people forecast and I don’t see a real way that a cataclysmic, ‘October surprise’ thing is going to work.
If you look at Trump's last-minute attempt to create something about Hunter Biden in 2020, it turned out to be pretty fake. It probably had some kind of, maybe, Russian involvement. And it didn't work, right? Trump got pretty well beaten in that election. And there's actually a pretty good chance that, once people wake up to it, Trump will get pretty well beaten in the next election as well, whatever crap he comes up with. He has less media power now than he did in 2016 and 2020. He's definitely less present. A lot of his coverage has nothing to do with how he would govern America, and much more to do with how he's in legal strife and all the rest of it.
It's hard for me to think of something that could happen this year that would change my mind. It would involve a very big and close election. So America is one. It would involve a new platform or channel being really permissive, like Facebook, which is still the biggest, or TikTok.
It just seems to me that someone needs to think quite a lot about what that sort of common core of people's information looks like, and needs to have a bit of a plan for it. And maybe a new government will do that.
The one thing that could happen would be that whoever runs TikTok decides it's going to promote a load of pro-Trump, anti-Biden fake content, whether AI-generated or just generally bad content, and stick that on the For You page for days and days and days in the run-up to the election. That is a big screen that a lot of people look at a lot of the time, and it conceivably reaches an audience that doesn't get much news content from elsewhere. That could really steer the election.
Do I think that's going to happen? No, I don't. I think that is enormously unlikely to happen, but that's probably the single affordance in American democracy that could cause problems globally.