Transcript of Is AI eroding democracy ahead of the US election? | BBC News

BBC News
Published about 1 year ago
00:00:00

Welcome to AI Decoded, that time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. And with the US presidential election less than two weeks away, there are concerns AI technology is being used to make the voting process confusing. White House National Security Advisor Jake Sullivan, speaking at an event on AI earlier, said the US was making progress in identifying foreign interference in its elections. But in his words, there's a long way to go to get where we need to be. So while false information aimed at disrupting elections is nothing new, increasingly advanced AI tools could make it easier to deceive voters with video and audio that looks and sounds plausible. Which brings us to this story from NBC. A new public service campaign featuring Hollywood stars has been created to alert Americans to how not to be duped by AI-generated deepfakes in the run-up to election day. We'll be speaking to Miles Taylor, one of the organizers of the campaign. Meanwhile, the Futurism website says the Pentagon is planning to use deepfake technology to its own advantage by creating AI personas to infiltrate online chat forums in order to gather information.

00:01:16

This comes despite the US government's persistent warnings that deepfakes and other AI-generated content will deepen the misinformation crisis and lead to what they call a muddier information ecosystem for everyone. Which brings us to this article, warning that a Donald Trump win in November could potentially help unleash dangerous artificial intelligence. Wired magazine says the former President's opposition to what he called woke safety standards for AI would likely mean the dismantling of regulations. And one of those very worried over the implications of all of this is Microsoft's Bill Gates, who ranks the threat posed by artificial intelligence alongside nuclear war and bioterrorism. The multibillionaire says he finds AI both wondrous and a little bit scary, and that it's very possible malign actors will use AI in ways that could prove dangerous for humanity. So, very interesting topics to discuss. With me now to do that: our regular AI commentator and presenter, Stephanie Hare. And joining us once again is Susie Alegre, author of the book Human Rights, Robot Wrongs. As well, joining us down the line from Washington, we have Miles Taylor, AI commentator and former US government official in both the Bush and Trump administrations. Stephanie, we're less than a fortnight away from the presidential election in the United States, something we've been talking about at length.

00:02:43

The polls are telling us it is incredibly close, too close to call. So how does this provide opportunities for the misuse of AI?

00:02:53

I've just come back from a week and a half in Chicago, and I definitely felt that it was very, very tight, even in the Midwest of the United States. And one of the things that I was picking up when I was there was this discussion about misinformation, disinformation, and just people not knowing where to go to get trustworthy facts. So this brings us into this whole question of if people are going to use artificial intelligence to deceive, what are we creating in terms of a trustworthy electoral process?

00:03:25

It's a very, very big question indeed. So let's take a look now at the public service campaign aiming to alert Americans not to be duped by AI ahead of election day with a little help from Hollywood.

00:03:39

Artificial intelligence has gotten so advanced, you probably can't tell that some of us aren't real.

00:03:47

I'm definitely real.

00:03:48

That's the problem. Because this election, bad actors are going to use AI to trick you into not voting.

00:03:55

Not voting.

00:03:56

Luckily, we already know what they're going to do.

00:03:59

They'll use fake phone calls, videos, or messages to try to change when, how, or where you vote.

00:04:07

For example, a fake message saying, Voting has been extended.

00:04:11

Or, Your polling location has closed or changed due to an emergency. Or you need new documentation to vote.

00:04:20

These are all scams designed to trick you into not voting.

00:04:26

Don't fall for it.

00:04:27

Do not fall for it.

00:04:30

This threat is very real. If something seems off, it probably is.

00:04:35

Always double-check your state's official website.

00:04:38

Or go to represent.us/votesafe.

00:04:42

Voting is your right.

00:04:44

Voting is your right.

00:04:46

Don't let anyone take it from you.

00:04:48

Don't let anyone take it.

00:04:50

I love you, Amy.

00:04:59

I'm so sorry. I am not even American.

00:05:04

So sorry.

00:05:07

Artief?

00:05:09

No, I'm really here, actually. Well, yeah, here.

00:05:19

It's a great ad, and with us now, one of the creators of this campaign, Miles. Great to have you with us from the US today. Is it all about asking the public to inject some critical thinking into the content they look at? In other words, not to take everything they see, as in the case of your campaign, literally at face value.

00:05:42

That's right, Anita. I think the thing it's easy to compare this to, for folks, is what they saw with spam in their email inbox in the 1990s. At first, when you got an email account, you trusted everything that came in, because no one but people you trusted had your email address. That is, until they did. And then you got that email from the Nigerian prince offering to turn the $10,000 you wired him into $100,000, and people started to get duped. They started to get fooled. Well, this is the new spam, the next generation of spam, and it's highly sophisticated. We're going to see deepfakes emerge in a lot of different parts of our lives, and the tip of the iceberg is our elections. That's where folks are going to first start to see this materialize. And so we wanted to start getting the public acclimated to the fact that things that might seem innocuous about when, where, and how to vote could potentially be deepfake tricks trying to misdirect them. And this is something that law enforcement officials are very concerned about in the United States. They are worried about bad actors using deepfakes to try to suppress the vote and change the outcome, to get certain people not to show up at the polls, and to get others to show up at the polls without knowing they're being fooled.

00:06:55

So yes, Anita, the goal here is to try to get folks to show a little bit of critical thinking and just go check their sources and verify before voting.

00:07:05

Yeah, the spam analogy is a good one, Miles. Susie, how big a threat is this to democracy, especially as we see now in the US presidential race, where the polls are really, really tight, where relatively small numbers of voters could make the election go one way or another?

00:07:25

I mean, we're two elections away from the Cambridge Analytica scandal, where we saw parliamentarians around the world concerned about the way that social media and information systems can influence voters. I think what's key is understanding that it's often about voter suppression. It's not about changing people's political ideas; it's about making you not get up off the couch to go and vote. I think it is a really significant issue, and something that really requires very tough regulation around electoral law, and enforcement of electoral law, to deal with these issues. Because AI deepfakes, they're out there. What we need is to make sure that people involved in elections in particular, and bad actors, face consequences when they stretch the law.

00:08:13

This brings us to the point about accountability, which is: what penalties are there for people who are using artificial intelligence to try to suppress the vote? I think Susie is absolutely spot on. In the past two elections that we've seen in the United States, people were using technology to try to change how people might vote. There was a big debate about whether that was effective or not. Now, what we're seeing is that America is so polarized. People are pretty entrenched and tribal, and it's going to come down to the swing states, and even microcommunities within those swing states. The real trick is suppressing the vote, making it so people don't sign up on time, don't know where to go, et cetera. The question I would have for Miles is: are we going to see any action in the United States to actually make it so that interfering in the election in this way is criminalized?

00:09:01

Miles, briefly on that?

00:09:03

It's a great question. Unfortunately, there were a number of bills before Congress this year to introduce steeper penalties and to deter bad actors. I don't think anyone will be surprised to hear me say those did not pass the US Congress. And so we are stuck with the laws we had on the books previously. Now, I was meeting today with FBI officials on this threat, and they will say that existing laws on the books do, of course, allow them to go prosecute voter suppression, because that's illegal. But there's an additional complication with deepfakes, namely that those law enforcement authorities need to actually be able to detect that something is fake in the first place. A lot of these agencies, and a lot of state and local election officials, do not have deepfake detection tools. So if there are phone calls to voters telling them that a polling location has closed or something's changed, there aren't systems in place to detect in real time that those might be deepfake phone calls from fraudulent actors. And that's not going to be sorted in time for this election. But certainly, if something goes wrong this cycle, folks are going to wake up and say, we need those real-time detection tools to protect our systems, to protect our networks, to protect our democracy.

00:10:18

Okay, which brings us on to the next story: that the Pentagon, that's the Department of Defense in the US, its headquarters, is going to use generative AI to create fake online personas, supposed online users, that can go into chat forums and gather information. Even as the US government warns against these AI deepfakes being used for nefarious purposes. So Susie, is this a bit like the AI equivalent of a spy?

00:10:49

Absolutely. And it's essentially what's good for the goose is good for the gander. If you start using these tools as a government, then you are effectively facilitating their development in ways that are undoubtedly then going to be used back at you. And yes, it is really the AI hyper-tech version of online spying.

00:11:11

And Stephanie, are we going to see more of this happening? It's not so much if you can't beat them, join them, but join them to beat them.

00:11:18

I almost wonder if it's going to create the opposite effect. I wonder if people are eventually just going to question why we even use social media at all, if it's all just spam and junk profiles, and you might be in a conversation with a robot, with a bot, or with a spy, or with somebody who's in marketing. The whole point was it was supposed to be about authentic connection back in the day with people that you knew or maybe wanted to know. I don't know why you would go to it if it's just a social media sewer.

00:11:46

Yeah, that's a really interesting point. Miles, what do you think about that? I'm seeing and hearing more and more people complaining actually about the content that they see on whatever platform or platforms they prefer to use. Is that something that you are seeing as well?

00:12:03

For sure. I think we're seeing that across society. A lot of these platforms, look, some of them are subjected to more spam than others. We are seeing societal and partisan divisions about which platforms to use, and part of that is related to bots, although a lot of these platforms would say that one of their highest priorities is the detection of bots, because that's really bad for business. Just like it was bad for business for Google in that era when we were getting a lot of spam emails, and then that company invested a lot in spam detection. Most of it gets filtered out now. We're likely to go into an era where a lot of the personas we're engaging with online are not real, before companies develop the technical acumen to sift those out. Now, I swear, Anita, that I did not plan this in advance of this hit, but I was also at the Pentagon today talking about this broader issue, though not this issue of fake personas. I'm not privy to their plans on the fake personas, but I suspect folks over there working on AI would say: look, this is a double-edged sword problem. Yes, we don't want to see this happen, but because it is happening, if we are going to engage our adversaries out there around the world, countries that want to do the West harm, we're going to have to play that game, and we're going to have to deploy those automated bots out there to engage with theirs.

00:13:23

So it does become definitely a gray area.

00:13:26

That was a very timely visit, Miles. Miles, Susie, and Stephanie, actually, you're staying with me, but coming up after the break: what could a second Donald Trump presidential term mean for AI development and regulation? We'll get a unique insight from Miles Taylor, who previously served in his administration. And should we all be worried, like Bill Gates, who's concerned artificial intelligence could get out of control? We'll discuss all that after the break. Welcome back to AI Decoded. The global battle to regulate artificial intelligence has been raging ever since deep concerns were raised over the unpredictable nature of this super-intelligent technology. The European Union was the first to legislate, with the European Artificial Intelligence Act, which came into force last August. However, the move has been criticized by tech giant Meta, who've warned that the EU's approach to regulating AI is creating a risk that the continent could be cut off from accessing cutting-edge services. Other initiatives in the US have found themselves facing headwinds from every direction. So what's the future of controlling this technology that will transform all our lives? Well, welcome back to our regular AI commentator and presenter, Stephanie Hare, as well as Susie Alegre, author of the book Human Rights, Robot Wrongs.

00:14:42

And joining us from Washington, we have Miles Taylor, AI commentator and former US government official in both the Bush and Trump administrations. Miles, I think this is a perfect point to pick up with you, as you worked in the Trump administration. You've got an insight into his thinking. So if he is elected again as President come November's vote, what is that going to mean for regulation, especially when he has an alliance with people like Elon Musk?

00:15:15

Well, Donald Trump does not operate with a public policy scalpel. He operates with a wrecking ball. So on this issue, I think the concern folks have is that there may need to be adjustments to AI regulations on the margins. In fact, personally, I would say there absolutely need to be. We're really early in the age of AI, and we haven't gotten it completely right. But if you take a wrecking ball to the issue, you can cause a lot more damage than good. Right now on this question, the issue of bias is really what's at stake here. All AI systems are trained on the world we live in. They're trained on us. Our inherent biases, the things we say, the inappropriate things we say, things about the population that we may not like, things we may not like about ourselves, they get reflected in these models. What a lot of companies have tried to do is take those dangerous or frustrating things out, the discussions about self-harm and racism, and try to remove those biases from the models. Now, in doing that, there are secondary consequences. Sometimes, in putting your finger on the scale, you can push too hard, and so you get hallucinations.

00:16:30

You get absurd results. You have seen that highlighted often in the press: some of these platforms trying to control for these problems and creating new problems. It is like a game of Whac-A-Mole. But take a wrecking ball to the regulation in that space, and you could inadvertently lead to a spike in a lot of those biases on these platforms. I think that's what we're concerned about when looking at the possibility of a second Trump administration: an AI public policy free-for-all. In some ways good for industry; in other ways, potentially very uncertain or even harmful for aspects of society.

00:17:06

Miles, I'm curious, do you think it actually would be good for industry to remove regulation? Because you hear in the United States this idea that regulation will hinder innovation, and if we overregulate, China will win. But doesn't business actually need a clear set of rules in order to operate and create that climate of certainty and standards that everybody can agree to across the board so that the lawyers don't get too busy with lawsuits between the European Union, the United States, and elsewhere? You want everybody on the same page. What happens if we rip up the existing rule book because we think it's too woke?

00:17:44

Well, in fact, in many cases that can be very bad for industry: for there to be a patchwork of laws around the world, and for the United States to be on a substantially different page than its partners from a regulatory standpoint. That can be frustrating. That can be very expensive. That's certainly a possibility if the existing regulations on the books, or the existing discussions around AI, are thrown out the window: you could create a very complicated situation for business. Now, I will say at the same time, to be fair, AI companies have largely been self-regulating in the United States, and in some cases doing a pretty decent job of it, because their businesses demand it. Things like child sexual abuse material and harmful and violent imagery, a lot of these companies don't want that on their platforms. They don't want people generating that content using AI, and so they've strictly cracked down on it without necessarily clear requirements from the US government. But as you've often noted, Stephanie, we're moving into a much more complicated period here, where standardized, uniform regulation around the world is going to make it easier for these companies to operate and easier for law enforcement to collaborate, and any indication that Western allies especially are not on the same page is probably going to send shivers down the spine, or it should, in the C-suites of those companies, because it will get very complicated for them to deliver their products in that patchwork regime.

00:19:12

Susie, I'm going to pick up with you for our final topic on this week's segment. This is Bill Gates saying that he is really worried, as our viewers will see right now, about the impact of AI, as serious as nuclear war and bioterrorism, he says. I mean, that's his personal worry. Do you think he's right to think that way, Susie?

00:19:34

Well, what I thought was quite interesting about this story was the three topics that he chose. It was bioterrorism, nuclear war, and climate change, I think, that were his three worries, along with AI. I mean, AI potentially exacerbates the risks of all of those. We've seen this week AI companies looking to have their own small nuclear reactors in order to deal with the massive energy requirements of AI at the scale that it's coming, and also the potential for AI to create greater bioterrorism threats much more easily. In a sense, all of those concerns that he has are potentially exacerbated by AI. The worry about artificial general intelligence, I think, is a distraction from that issue of looking at how AI exacerbates other potential risks to humanity.

00:20:28

We've literally got about a minute left. A quick thought from both of you, Stephanie and Miles, on this. Stephanie?

00:20:33

I think it's weird that Bill Gates is worried now because he told Bloomberg back in July that AI would solve more problems than it creates. I'd like to know what's changed in three months for him.

00:20:42

Okay, a bit of a segue. Miles?

00:20:46

Well, that's the double-edged sword. I think every one of the threats that Bill Gates mentioned, you also could make the case that well-developed artificial intelligence could help mitigate those threats. I think that tension between the two is the fight we're going to be having for decades.

00:21:00

A great conversation with the three of you today. I hope our viewers have enjoyed it as well. I'm sure they have. Miles Taylor there in Washington, and Stephanie Hare and Susie Alegre with me in the studio. Thank you all very much.

AI Transcription provided by HappyScribe