
Transcript of Russia and Iran use AI to target US election | BBC News

BBC News
00:00:00

You are watching The Context with me, Christian Fraser. It is time for our regular Thursday feature, AI Decoded. Welcome to the program. Freely available, largely unregulated, the creative tools of generative AI are now amplifying the threat of disinformation. How do we tackle it? What can we trust? And how are our enemies using it to undermine our elections and our freedoms? This week, Governor Gavin Newsom signed a bill in California that makes it illegal to create and publish deep fakes related to the upcoming election. And from next year, the social media giants will be required to identify and remove any deceptive material. It is the first state in the nation to pass such legislation. Is it the new benchmark? Some of this stuff is obviously fake, some of it designed to poke fun. But look how these AI memes of cats and ducks powered the pet-eating rumor mill in America, with dangerous consequences. It is a problem, too, in China. How does the Communist Party retain social order in a world where the message can be manipulated? Beijing is pushing for all AI content to be watermarked, and it is putting the onus on the creators. From politics to branding: there is no bigger brand than Taylor Swift, hijacked by the former President, who shared fake images of her fans endorsing him.

00:01:33

It affects us all. With me as ever in the studio, our regular commentator and AI presenter, Stephanie Hare, is here. And from Washington, our good friend Miles Taylor, who worked in national security, advising the former Trump administration. We'll talk to them both in a second. But before we do that, we're going to show you a short film. One of the many false claims that has appeared online in recent months was a story that Kamala Harris had been involved in a hit-and-run accident in 2011. That story was created by a Russian troll farm and was one of the many inflammatory stories Microsoft intercepted. The threat analysis unit that does this work in New York is at the very forefront of defending all our elections. Our AI correspondent, Marc Cieslak, has been to see it.

00:02:19

Times Square, New York City. An unlikely location for a secure facility which monitors attempts by foreign governments to destabilize democracy. It is, however, home to MTAC, the Microsoft Threat Analysis Center. Its job is to detect, assess, and disrupt cyber-enabled influence threats to democracies worldwide. The work that's carried out here is extremely sensitive; we are the very first people that have been permitted to film inside. It's also the first time Russian, Iranian, and Chinese attempts to influence a US election have all been detected at once.

00:03:00

All three are in play, and this is the first cycle where we've had all three that we can definitely point to. Individuals from this organization serve on a special presidential committee in the Kremlin. Reports compiled by these analysts advise governments like the UK and US, as well as private companies on digital threats.

00:03:20

This team has noticed that the dramatic nature of the US election is complicating attempts at outside interference.

00:03:27

The biggest impact of the switch of President Biden for Vice President Harris has been that it's really thrown the Russians so far off their game. They really focused on Biden as somebody they needed to remove from office to get what they wanted in Ukraine.

00:03:42

Russian efforts have now pivoted to undermining the Harris-Walz campaign via a series of fake videos designed to provoke controversy. These analysts were instrumental in detecting Iranian election influence activity via a series of bogus websites. The FBI is now investigating this, as well as Iranian hacking of the Trump campaign.

00:04:05

We found in the source code for these websites that what they were doing was using AI to rewrite content from a real site, and using that for the bulk of their website.

00:04:14

Then occasionally, they would write real articles when it was a very specific political point they were trying to make.

00:04:21

The third major player in this election interference is China, using fake social media accounts to provoke a reaction in the US public. Experts are unconvinced these campaigns affect which way people actually vote, but they worry they are successful in increasing hostility on social media. Marc Cieslak, BBC News.

00:04:43

Yeah, that gives you an idea of just how quickly this is advancing. Stephanie, do you think we're almost at the point, as the technology improves, the creative technology, that we're going to be very close, very soon, to not knowing the difference between fact and fiction?

00:04:59

It's getting harder and harder to detect a lot of the deep fake imagery. Audio is particularly difficult to detect; it's a lot easier to fake. So yes, I think we're right now possibly in the last US election where it's easy to see when you're being manipulated. And the trick really is, do you want to believe it? Because what this is all about is really hijacking your emotions.

00:05:21

And watermarking, because that is often the go-to solution to this: why would that not be the answer to all the ills of generative AI?

00:05:31

I still wonder if there would be ways of manipulating even that, but it's probably a pretty good start. It's just that thing: you always feel like you're playing whack-a-mole with these technologies. You do one thing, and then it advances, and you have to catch up again. So we would probably start with watermarking, and then there would be an advance and a kickback, and we'd have to react to that, and so on and so forth. I think it's also about preparing citizens, though, to have the critical media skills that we all need: to deconstruct narratives, look at who is giving us information, and ask, does it check out against reality?
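To make the whack-a-mole point concrete, here is a minimal sketch of a naive invisible watermark: a tag hidden in an image's least significant bits. Everything here is illustrative rather than any real standard's scheme, and the last line shows how trivially such a mark can be stripped:

```python
import numpy as np

# 48-bit tag spelling "AI-GEN", unpacked to individual bits
MARK = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide the tag in the least significant bits of the first pixels."""
    flat = pixels.flatten()
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK
    return flat.reshape(pixels.shape)

def has_watermark(pixels: np.ndarray) -> bool:
    """Check whether the tag survives in the least significant bits."""
    return bool(np.array_equal(pixels.flatten()[: MARK.size] & 1, MARK))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed_watermark(img)
print(has_watermark(marked))           # True
print(has_watermark(marked // 2 * 2))  # False: LSBs zeroed, mark stripped
```

Production schemes are far more robust than this, but the dynamic Stephanie describes is the same: each embedding technique invites a new removal technique.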

00:06:05

Miles, I was saying to Stephanie, this is a good step forward, what's happening in California this week. You've got the governor there putting the onus on the social media companies and on the creative companies to do something about this, particularly around the election. Then Stephanie said to me, Well, okay, American companies regulated by American legislators: why wouldn't they just go to China?

00:06:29

Look, I think that's one of the concerns always when it comes to tech regulation. And Christian, you remember well the debate over encryption in the United States. There was the San Bernardino terrorist attack, almost 10 years ago now, where the FBI could not get into the shooter's phone. And it led to a big debate in the United States about these encrypted messaging apps like Telegram and Signal, and whether it should be legislated that those were forbidden in the United States. Opponents of those laws, though, said, Well, sure, you can outlaw them here, but someone overseas is going to create the same apps, and it's going to be really difficult to prevent people from using a version of it overseas. We face the same problem here with regulations around deep fakes and AI. It's only as far as US legislation and law enforcement can reach that those types of things can be enforced. So there is a big challenge here, but there's also a domestic challenge about the First Amendment implications and free speech implications. And of course, Governor Newsom signing that law has opened up that debate as well. So there will be a lot of contention over the next few years about how to get this right from a legislative and regulatory standpoint.

00:07:43

The other thing that occurs to me, and we talk about protecting children online all the time on this program: one of the issues the companies always come up against is finding the material and getting rid of it. If you are having to find very good deep fake material, that process becomes much more difficult, doesn't it? And how do we find a metric to hold the social media companies and the online companies to account?

00:08:10

Well, I think Stephanie said something really important here, which was the game of whack-a-mole you're playing. If you think that watermarking, basically putting a sticker on this content and saying this is fake, if you think that's a solution, it's going to be really hard to keep up. A lot of the experts I talk to in AI say that maybe that's a short-term solution. But in the longer term, you have to re-architect what's real and what's not real, to your earlier point, Christian. What do I mean by that? There's a word I want listeners to remember: provenance. There's a big discussion in technology communities about making sure, by default, when you do something like capture a picture on your iPhone, that it's cryptographically signed to say, I was taken at this place at this time, and that can't be changed. It's tied to a public ledger. Not that people can see your photos publicly, but there's a cryptographic signature that can't be broken. Eventually, all of our tech will be signed with that provenance that says, I am real, and you'll know if it's not real because it won't have that point-of-creation certification.

00:09:18

But it's years before we're there, and in the meantime, a lot of difficult conversations are going to be had.
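For readers who want the idea of provenance in code: a minimal sketch of point-of-creation signing, assuming an Ed25519 keypair held by the capture device and using the Python `cryptography` package. The public-ledger step Miles mentions is elided, and the function names are illustrative, not any real standard's API:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice this key would live in the camera's secure hardware.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_capture(image_bytes: bytes, metadata: bytes) -> bytes:
    """Sign the pixels plus capture metadata (time, place) at creation."""
    return device_key.sign(image_bytes + metadata)

def is_authentic(image_bytes: bytes, metadata: bytes, sig: bytes) -> bool:
    """Anyone holding the public key can verify; any edit breaks it."""
    try:
        public_key.verify(sig, image_bytes + metadata)
        return True
    except InvalidSignature:
        return False

photo, meta = b"...raw pixels...", b"2024-09-19T20:00Z;London"
sig = sign_capture(photo, meta)
print(is_authentic(photo, meta, sig))                # True
print(is_authentic(photo + b"tampered", meta, sig))  # False
```

Any change to the pixels after capture invalidates the signature, which is the point-of-creation certification Miles describes.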

00:09:24

It's almost a supply chain approach, or even a criminal evidence approach, where you have a chain of evidence and you need to be able to follow it all the way through, and you can't tamper with it. Or when we had mad cow disease here in the United Kingdom many years ago, people suddenly wanted to know, when they were going grocery shopping and wanted to buy some beef, what farm did it come from? And suddenly people realized they needed traceability all the way through the food chain. So I'm wondering if there's a parallel there to help people understand: all of the things that you're creating can have that encoded, so you would always be able to know. It's like following a painting through its history. When is a painting sold? It might go through 50 different hands, if it's 400 years old, before it finally ends up in the Met. Where did it come from? Was it illegally bought, et cetera? You should be able to follow data through in the same way.

00:10:13

Let's bring in someone who is working in this field, here in the studio with us: Dr. Christian Schroeder de Witt. He is a Senior Research Associate in Machine Learning at the University of Oxford. He and his team are researching how to identify some of these deepfakes using AI. Welcome to the program. We were just talking about how quickly things are advancing, to the point where, to the naked eye, it's becoming more difficult, certainly with imagery. What technology are you developing that makes that easier?

00:10:45

Yes, so, Christian, I really like this discussion. I think the solution to our problems of establishing provenance of content will involve both a lot of research and the adoption of existing technologies. In terms of research, I think the clip really brought home that AI is being used to amplify the misinformation problem, so let's use AI to solve it. Some of the research that I do is about using AI to detect misinformation.

00:11:13

So you're using the AI to track down the deepfake AI?

00:11:17

So basically, yes. What I spent this summer doing, in some research with BBC Verify and the University of Oxford, was just this: when you have a picture, for example, explaining whether it is a deep fake or not.

00:11:31

Let's bring one up. I've got one that I think you've looked at, and people will be familiar with this. It's the Pope in a puffer jacket, which actually did get into some news streams around the time that this photo came out. So although we're joking, it did actually deceive quite a lot of people. Show me what you did with this.

00:11:49

Yeah, exactly. You can see the Pope in a puffer jacket. Obviously, from the context, it's quite clear it's a deep fake, and it's probably for entertainment purposes. But a human expert, for example at the BBC, could look at this picture and could find the details that are a bit off. For example, the spectacles seem to be fused into the cheeks, or the crucifix doesn't quite attach to the chain. You see, it's very important to have these explanations as well, not just a number, this is 0.7 deep fake or not; you need to have an explanation for why it is a deep fake. We now have AI tools that can create these explanations as well.

00:12:27

Something that you put on the desktop, something that you could run a photograph through?

00:12:31

Yeah, potentially, yes. But these tools still have a lot of failure cases, and this is where we need more research.

00:12:39

Where do they fail and why? Famously, it's things like they can't get fingers, so you might get six fingers on a hand. Yes.

00:12:48

So this is a classic. On videos, for example, you have some temporal inconsistency, so an object disappears suddenly, for example. But the problem is that these tools are trained on a lot of data, and they're learning so-called features, patterns, that help them to make these decisions. Now, it can happen that sometimes these patterns are present in some images that are too far away from what it has seen during training.

00:13:12

Without getting too technical on that, can you explain that to people? Is it a pixel difference? I mean, it's not in the way the image looks, is it? The AI, presumably, is looking deeper into the image than that. Yes.

00:13:25

So the AI is actually taking an image, and then it is projecting this into some very high-dimensional space. Within this high-dimensional space, you then do a dimensionality reduction into a lower space. Then in this lower space, what you can do is you can form these features. If you have an image that it hasn't seen during training, then these features might not generalize to that image. Then you can have issues where an image evokes some impressions that are wrong. You see some reflections or something, and actually it is not a deep fake, but the AI thinks it is. Stephanie mentioned the photographs that they struggle with.
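A minimal sketch of the pipeline described here: embed, reduce dimensionality, classify, and flag inputs that sit too far from the training data to be trusted. Synthetic embeddings stand in for a real feature extractor, so this is illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for high-dimensional image embeddings from a feature extractor.
real = rng.normal(0.0, 1.0, (500, 512))
fake = rng.normal(0.5, 1.0, (500, 512))
X = np.vstack([real, fake])
y = np.array([0] * 500 + [1] * 500)  # 0 = real, 1 = deep fake

pca = PCA(n_components=16).fit(X)  # dimensionality reduction to a lower space
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X), y)
train_radius = np.linalg.norm(pca.transform(X), axis=1).max()

def score(embedding: np.ndarray) -> tuple[float, bool]:
    """Return P(fake) plus a crude out-of-distribution warning."""
    z = pca.transform(embedding.reshape(1, -1))
    p_fake = clf.predict_proba(z)[0, 1]
    ood = np.linalg.norm(z) > train_radius  # far from anything seen in training
    return float(p_fake), bool(ood)

# An embedding unlike anything in the training set: a score still comes out,
# but the flag says not to trust it.
print(score(rng.normal(5.0, 1.0, 512)))
```

The last line is the failure mode just described in miniature: an input far from the training distribution still gets a confident-looking score, which is why the out-of-distribution flag matters.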

00:14:11

I've got one here. This is Lionel Messi kissing the World Cup, much to my chagrin. This one is real, but the machine thought it was fake. Why?

00:14:25

The machine might think so just because it maybe hasn't seen an image that's close enough to this picture in its training set. We always get new images coming in; Messi winning the World Cup, for example, was a new occasion. It might think that, for example, some reflections in the trophy, or the way Messi holds his hand, or maybe the skin tone, aren't natural. The problem is we then get these explanations, and these explanations can be very, very convincing, but they are nevertheless wrong.

00:14:55

Miles, do you like this idea of AI tracking AI deep fakes?

00:15:02

I don't just like it, Christian. I love it. We've got to use AI against AI to protect ourselves. It's actually going to be our best asset. One of the things that's interesting that's happening right now is that we always focus on who's developing the technology that could be used for bad. But my fellow Oxonian there on set, and a lot of folks around the world, are now investing time and resources into building companies on deep fake detection. There are companies in the United States like Truepic and Reality Defender that are exciting. They're venture-backed. A lot of people want to go work for them. And what do those companies do? They focus solely on trying to prove what is and isn't real. And one of the things that's just become possible, really only in the past few months, is that some of these technologies are leveraging context awareness of the world to determine whether something's fake or real. So these models aren't just looking at the image and saying it looks manipulated. The models can also say, Well, the Pope has been on vacation in Italy the past couple of weeks. There's no way this photo was just taken with him wearing a puffer jacket. And they can give you a confidence score.

00:16:10

That's really exciting.

00:16:11

Are you incorporating that in your technology?

00:16:14

Absolutely. This is incorporating wider context on where the content is found and when it is found and who is depicted. So the semantic information, absolutely, yes.
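Purely as an illustration of that fusion of pixel-level and semantic signals, here is a toy sketch. The score weighting and every function here are hypothetical; real systems learn this fusion rather than hand-coding it:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    p_fake: float
    reasons: list[str] = field(default_factory=list)

def assess(image_score: float, claimed: dict, known_facts: dict) -> Verdict:
    """Fuse a pixel-level detector score with world-context checks."""
    v = Verdict(p_fake=image_score)
    if image_score > 0.5:
        v.reasons.append("visual artifacts detected")
    # Semantic check: does the claimed capture clash with what is known
    # about where the depicted person actually was?
    if claimed.get("location") != known_facts.get("subject_location"):
        v.p_fake = min(1.0, v.p_fake + 0.3)  # arbitrary illustrative bump
        v.reasons.append("subject was reportedly elsewhere at the time")
    return v

print(assess(
    image_score=0.55,
    claimed={"location": "Vatican City, outdoors"},
    known_facts={"subject_location": "on vacation elsewhere in Italy"},
))
```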

00:16:22

It strikes me that the social media companies and the online companies have a vested interest in this, because if you can't tell fact from fiction, you get what's called a liar's dividend, right? That actually you become a disruptor. You poison the well so much that actually no one believes anything. And that's not good for a social media model that makes its money from spreading news and informing people.

00:16:44

It just raises the question of what social media is for. It was quite exciting at first, when it was this new thing and you could stay in touch with your friends, and then a lot of people, journalists, would use certain tools to keep up with the news and get breaking news fast. But once it starts feeling like, actually, they're just reading your data, or you're looking for news to get it fast but it's not actually reliable, and the information ecosystem is being flooded all the time, eventually people might just turn off. That's without even going into the mental health implications of being on these sites, which we know are really harmful for people. I wonder sometimes if we might have lived through the golden age of social media and are now entering this new phase, and if it isn't cleaned up, people could just end up leaving it, or only going to it the way that you would read the National Enquirer in the United States, to read about aliens or something.

00:17:33

Are the big developers interested in what you're doing?

00:17:37

Absolutely, and they're funding it. This summer, my collaboration was with a big tech company, in fact. There is a lot of interest in these solutions. Actually, the interest goes even further. What we can do now is proactively look for deep fakes and disinformation on social media platforms using autonomous agents. I think this is where things are going. Then we can establish this situational awareness on a global scale, which Miles also mentioned.

I've also got to ask you: is this the right environment to be developing in, the right country?

00:18:04

Do you get the support for stuff like this?

00:18:06

I think so, yes. Yeah? Yeah. Generally, yes. I think the UK is a great place.
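The proactive, agent-based scanning mentioned a moment ago might look like a simple monitoring loop; `fetch_recent_posts` and `detect_deepfake` are hypothetical stand-ins for a platform API and a detection model, so treat this as shape rather than substance:

```python
import time

def fetch_recent_posts() -> list[dict]:
    """Hypothetical platform API returning newly published posts."""
    return [{"id": "p1", "image": b"\x89PNG...", "text": "BREAKING"}]

def detect_deepfake(image: bytes) -> float:
    """Hypothetical detector returning P(fake) for an image."""
    return 0.97

def monitor(threshold: float = 0.9, interval_s: float = 60.0) -> None:
    """Continuously scan new posts and queue likely fakes for human review."""
    while True:
        for post in fetch_recent_posts():
            p = detect_deepfake(post["image"])
            if p >= threshold:
                print(f"flagging {post['id']} for review (P(fake)={p:.2f})")
        time.sleep(interval_s)
```

The human-review step is deliberate: as the Messi example showed, detectors can be confidently wrong.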

00:18:11

Well, that's encouraging, isn't it? On that note: one of the problems here is not so much the deep fake news as the disinformation that is spread by conspiracy theorists, who are creating material they believe to be true. What if we could bring the conspiracy theorists out of the shadows and back into the light? Coming after the break, we'll hear about the AI chatbot that is deprogramming the people who have disappeared down the rabbit holes. We'll be right back. Stay with us. Welcome back. The Moon landings that never happened. The COVID microchip that was injected into your arm. The pizza pedophile ring in Washington. Conspiracy theories abound, often with dangerous consequences. Many have tried reasoning with the conspiracy theorists, but to no avail. How do you talk to someone so convinced of what they believe, who is equally suspicious of why you would even be challenging those beliefs? Well, researchers have set about creating a chatbot to do just that. It draws on a vast array of information to converse with these people using bespoke, fact-based arguments. The debunk bot, as it's known, is proving remarkably successful. Joining us on Zoom is the lead researcher, Dr. Thomas Costello.

00:19:30

He's an associate professor in psychology at the University of Washington. You're very welcome to the program. Tell us what the debunk bot does.

00:19:41

Yeah, sure. Thanks. I'm happy to be here. The idea is that studying conspiracy theories and trying to debunk them has been pretty hard until now, because there are so many different conspiracy theories out there in the world. You need to look across this whole corpus of information comprehensively to debunk all of them and study them in a systematic way. And large language models, these AI tools, are perfect for doing just that. So we ran an experiment where we had people come in and describe a conspiracy theory that they believed in and felt strongly about. The AI summarized it for them and they rated it. And then they entered into a conversation with this debunk bot. So it was given exactly what they believed, and it was set up to persuade them away from the conspiracy theory using facts and evidence. What we found at the end of this roughly eight-minute conversation, this back and forth, was that people, conspiracy theorists, reduced their beliefs in their chosen conspiracy by about 20% on average. Actually, one in four people came out the other end of that conversation actively uncertain about their conspiracy.
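The experiment's loop reduces to something like the sketch below. `llm_reply` is a hypothetical stand-in for a safety-tuned language model, and the 0-100 belief scale is an assumption made for illustration:

```python
def llm_reply(system_prompt: str, history: list[dict]) -> str:
    """Hypothetical call to a safety-tuned large language model."""
    return "Here is evidence that contradicts that specific claim..."

def debunk_session(claim: str, n_turns: int = 4) -> float:
    """Hold a short, tailored conversation, then re-rate the belief."""
    system_prompt = (
        "Using facts and evidence, persuade the user away from this "
        f"conspiracy theory they described: {claim}"
    )
    history = [{"role": "user", "content": claim}]
    for _ in range(n_turns):
        history.append({"role": "assistant",
                        "content": llm_reply(system_prompt, history)})
        history.append({"role": "user", "content": input("> ")})
    return float(input("Re-rate your belief (0-100): "))

before = 80.0  # participant's initial self-rated belief (assumed scale)
after = debunk_session("The Moon landings were staged.")
print(f"Belief change: {100 * (after - before) / before:+.1f}%")
```

Averaged across participants, that final number is the roughly 20% drop Dr. Costello reports.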

00:20:50

So they were newly skeptical.

00:20:51

And so is the basis that they don't know where to go to get this information, and they are suspicious of anybody who might have the answers to the things that concern them?

00:21:03

Yeah, that could be part of it. I think really it's just being provided with facts and information that's tailored to exactly what they believe.

00:21:12

How do you deploy it? Because I can't imagine that conspiracy theorists are wandering around saying, Disprove the conspiracy theory that I believe to be true.

00:21:21

Yeah, no, I mean, that's a great question. I think it's one that I'd be curious to hear others' answers about, too. In the studies, we paid people to come and do it. That said, I'm optimistic about the truth motivations of human beings in general. I think people want to know what's true. And so if there's a tool that they trust to do that, then all the better.

00:21:41

Miles, can you see a purpose for this in America?

00:21:45

Yeah, I can certainly see this principle being incorporated into a lot of technology. I mean, a lot of us already use things like ChatGPT every day. And I'll actually give you an example, Christian, of ChatGPT disproving something for me. So there's a famous Winston Churchill quote: A lie gets halfway around the world before the truth can get its pants on. No quote better describes the conversation we're having than how fast this disinformation spreads. Well, guess what? I put it into ChatGPT before I did a presentation on this subject, and it said, Hold on a second. That's actually not a quote from Winston Churchill. It's a quote from Jonathan Swift in the 1700s. So AI helped me disprove that misinformation that's been around for years. So yes, I think this is important, and it should be integrated into these technologies.

00:22:35

Christian, is this where the two worlds collide? Because presumably, there are conspiracy theorists who believe something so fervently that they put out AI-generated material as well. So if you can deal with the conspiracy theory, maybe you can stop the prevalence of fake material.

00:22:52

Yeah, potentially. I must say, though, that this study was done in laboratory conditions, so it will be very interesting to see whether these results also translate into the real world. Then also, the language models that were used were safety-finetuned. That means they were trained to tell the truth, and so on. And if that safety finetuning is not there, they could be used for something we call interactive disinformation. So they could be used to convince people of things that are not true. That's the big risk that I see here.

00:23:30

Thomas, I've got a question for you. I'm curious about how much having good information actually changes people's minds. The example I would give is smoking. We've known for decades that smoking is bad for you. Everybody agrees, we've got all the data to back it up, we put labels on it really clearly, and yet people still smoke. When you talk to a smoker and try to persuade them to give it up because you care about them, they will sometimes really entrench. It's really hard to break, not just because it's addictive, but because maybe they want to smoke. So I see this parallel, perhaps, with conspiracy theories, in terms of: we have beliefs, and information is not always enough to change them. It's not just about facts, it's about something else.

00:24:12

Yeah, that's a great point. I think in the case of smoking or other kinds of drug use, we know that it's bad for us when we start doing it. They're fundamentally not about information, whereas beliefs, and particularly conspiracy beliefs, are often descriptive. They're accounts of what went on in the world: that Al Qaeda didn't put together the 9/11 terrorist attacks, it was the government. And so dealing with claims about the world is something that I think is conducive to informational persuasion in a way that maybe nicotine use is not.

00:24:48

Yeah. I mean, Miles, we focus so much on the legislating. It's the question I always ask you: how far behind is Congress on that? What are statehouses doing about AI legislation? But what we've shown tonight is actually that it's the industry itself that is forcing the change. Maybe it's not legislation, because legislation is always one step behind.

00:25:11

Well, Christian, I'm going to give you an embarrassing admission that proves that point. So I was at dinner last night with one of the creators of ChatGPT, GPT-3, one of the earlier versions. She worked for Sam Altman. We were talking about the technology, and I complained to her. I said, I was teaching a course at the University of Pennsylvania, and I got lazy, and I was supposed to come up with a list of 25 books on a subject for my students. I said, I'm going to look it up on GPT: what are the best 25 books? It produced it, I e-mailed it out. Well, guess what? My students e-mailed me and said, All of those books are fake. GPT-3 came up with a bunch of fake books. I said this to her, and she said, Well, yeah. And that was bad, and it gave ChatGPT a bad reputation in your mind. And that's why we kept improving the models: we don't want to serve you up false content, because you won't want to work with this product. And so that may not be heartening to everyone, but certainly those industry improvements move a lot faster than legislation, because there's a business imperative to get it right.

00:26:08

Yeah, that indeed is the vested interest that I see for a lot of the online companies and, of course, the AI companies that are developing this stuff. We're out of time. It flies by, doesn't it? Just to remind you that all these episodes are on the AI Decoded playlist on YouTube. Some good ones on there as well. So have a look at those. Thank you to Dr. Schroeder de Witt, Dr. Costello, Miles, and, of course, to Stephanie. Let's do it again, same time next week. Thanks for watching.

AI Transcription provided by HappyScribe
Episode description

The United States election result will have major implications around the world, meaning many countries have an incentive to try ...