Transcript of How AI defeats humans on the battlefield | BBC News

BBC News
Published over 1 year ago 523 views
00:00:00

It is time now for our new weekly segment, AI Decoded. Welcome to AI Decoded. It is that time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. Now, last week we looked at how artificial intelligence could threaten human jobs in the future. But what about those on the battlefield? Well, The Guardian is calling it AI's Oppenheimer moment, due to the increasing appetite for combat tools that blend human and machine intelligence. This has led to an influx of money to companies and government agencies that promise they can make warfare smarter, cheaper and faster. Here in the UK, leading military contractor BAE Systems is ramping up efforts to become the first in its industry to create an AI-powered learning system meant to make military trainees mission-ready sooner. Now, our BBC AI correspondent, Marc Cieslak, went to meet those involved. We will be showing you his piece in just a moment. But with me, I'm very pleased to say, is our regular AI contributor and presenter, Priya Lakhani, who's CEO of AI-powered education company Century Tech. Now, Priya, this is a fascinating area, but perhaps one of the most controversial, and people have huge concerns about it.

00:01:31

Yeah, that's absolutely right, because this is about using AI in potentially unmanned military drones. What you're going to see in Marc's incredible piece is potentially unmanned military warcraft. Then there are all these questions: well, hang on, obviously it's great if there aren't humans being harmed out there on the field, but does that mean that war could actually escalate much more quickly? A decision is then going to be made by these AI systems, and if both parties have AI systems, what happens then? It's a race as to who can escalate further. There are all sorts of ethical considerations. But you're also going to see learning systems, and how BAE Systems is approaching using AI to improve learning in terms of training the military and soldiers. It's a fascinating area, and we'll do a bit of a deep dive into the ethics a little later in the programme.
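
Priya's escalation point can be made concrete with a toy model (entirely hypothetical, not anything from the programme): if two automated systems each respond to the other's last action with a slightly amplified counter-move, the intensity of the exchange grows exponentially, and machines trade moves in milliseconds rather than the hours a human chain of command would take.

```python
# Toy model of two automated systems reacting to each other: a hypothetical
# illustration of the escalation dynamic discussed above, not a real system.

def escalate(amplification: float, rounds: int) -> list[float]:
    """Each side responds to the other's last move, scaled up slightly."""
    moves = [1.0]  # side A's opening move, in arbitrary "intensity" units
    for _ in range(rounds):
        moves.append(moves[-1] * amplification)  # the automated counter-move
    return moves

# With each response just 20% stronger than the last, intensity grows
# exponentially: after 20 exchanges it is roughly 38x the opening move.
history = escalate(amplification=1.2, rounds=20)
print(f"opening move: {history[0]:.1f}, after 20 exchanges: {history[-1]:.1f}")
```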

00:02:20

Lots to talk about, Priya. Let's take a look, as we were just saying, at this report by Marc Cieslak. Then stay with us, because we've got lots to discuss afterwards.

00:02:33

Up, down, flying, or hovering around. For 75 years, the Farnborough Air Show has shown off aircraft, both civilian and military, often inviting pilots to put their airplanes through their paces to the delight of the assembled attendees, including plane buffs and even new prime ministers. In recent years, Farnborough has played host to a lot more of these unmanned air vehicles, or drones, as they're commonly known. Drones with military applications, with fixed wings that behave like an airplane or rotors capable of hovering like a helicopter, are in abundance. But all have something in common: a human being involved in the command and control of these aircraft at some stage, a process that's called human in the loop.

00:03:31

It's critical, from a moral and ethical point of view, to ensure that human judgment is always at the heart of the selection of the course of action.
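
As a rough sketch of what "human in the loop" means in software terms (a hypothetical illustration, not how any real military system is built): the autonomous component may only propose an action, and nothing executes until a human operator explicitly authorises it.

```python
# Hypothetical sketch of a human-in-the-loop gate: the autonomous component
# can only *propose* an action; execution requires explicit human approval.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    machine_confidence: float  # the system's own confidence, 0.0 to 1.0

def human_in_the_loop(proposal: ProposedAction) -> bool:
    """Block until a human operator approves or rejects the proposal."""
    print(f"Proposed: {proposal.description} "
          f"(machine confidence {proposal.machine_confidence:.0%})")
    answer = input("Authorise? [y/N] ").strip().lower()
    return answer == "y"

proposal = ProposedAction("reroute drone to waypoint B", machine_confidence=0.92)
if human_in_the_loop(proposal):
    print("Action authorised by operator; executing.")
else:
    print("No authorisation; action discarded.")  # the safe default is inaction
```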

00:03:44

Military application of AI is extremely controversial. Images of killer robots and the idea of AI run amok are frequent additions to stories in the press about the risks the technology poses. Nevertheless, militaries around the world are already using artificial intelligence. One area where it's particularly useful is training pilots to fly aircraft like these. Flight simulators are an integral part of a pilot's training. They save time and money, allowing prospective pilots to gain valuable skills from the comfort and safety of terra firma. Formerly with the RAF, Jim Whitworth is a pilot instructor experienced in flying military jets like the Hawk and Tornado. "As soon as you see that, I want you to just pull the stick back and set an attitude, as we discussed." This simulator rig is for a Hawk jet, the Royal Air Force's preferred trainer. What feedback have you given to the team developing this in terms of its realism? "Really, it's about the feedback from the controls. I would like it to feel as much like a Hawk as possible." Where does the AI come into the mix? "We can record everything a trainee does in this environment, in this simulator. We can give some metrics with which to measure the performance, and then score each performance."

00:05:13

And then, as we start to build up data on each trainee, artificial intelligence can start to analyze that data for us and show us where the pinch points in the syllabus are.
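
To illustrate what that analysis might look like in practice (a minimal sketch with invented exercise names and scores, not BAE Systems' actual system): once each simulator run is scored, finding the "pinch points" is essentially asking which syllabus stages trainees consistently score worst on.

```python
# Minimal sketch of "pinch point" analysis over simulator scores.
# Exercise names and scores are invented for illustration.

from statistics import mean

# Scores per syllabus exercise across several trainees, e.g. 0-100 from the rig.
scores = {
    "takeoff":       [88, 91, 85, 90],
    "formation":     [72, 64, 70, 69],
    "low-level nav": [55, 61, 58, 49],  # consistently weak across trainees
    "landing":       [83, 86, 79, 88],
}

# Rank exercises by average score; the weakest are the syllabus pinch points.
ranked = sorted(scores.items(), key=lambda kv: mean(kv[1]))
for exercise, s in ranked:
    print(f"{exercise:13s} mean={mean(s):5.1f}")

pinch_points = [ex for ex, s in ranked if mean(s) < 70]
print("pinch points:", pinch_points)  # where instructors should focus
```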

00:11:05

Joining me now are Mikey Kay, a former senior RAF military pilot, and Dr Peter Asaro from the organization Stop Killer Robots, who is also a professor at The New School in New York, where his research focuses on artificial intelligence and robotics. Thank you so much, both of you, for joining us here on AI Decoded. And Mikey, perhaps I can start with you and ask you to give us a quick rundown of your understanding of how AI technology is being used on the battlefield at the moment.

00:18:44

I think a really good example is a process called the kill chain, which is that overlay of what the ethics are, what the rules of engagement are. The scenarios are very different: prosecuting different targets in different environments, with different platforms and different weapons. I gave the example of two Islamic State snipers on the second floor of a 30-story building. The human has the ability to select a weapon through technology, through machine learning, but also to put, for example, a steel tip on the top of that weapon, called a penetrator. It can go through 28 floors to the second floor with a delayed fuse on it and just take out what's on the second floor without destroying anything else. It is a massively, massively complex procedure, and AI is learning how to do it. But my advice, and certainly the way I would approach this, is that a human in the loop right now is imperative in order to minimize that collateral damage and minimize potential mistakes.

00:19:42

Mistakes do, sadly, happen quite a lot.

00:19:45

We haven't talked about the transparency of that either, in the sense that a lot of this is classified intelligence. Would defense contractors be liable? Peter, I've got a very quick question for you, as we're running out of time: war games show that the use of machines is likely to result in conflict escalating more quickly than it would otherwise. What are your thoughts about that?

00:20:05

Well, as you said, with that speed, decision-making happens in shorter and shorter time frames. The real difficulty is that when more and more strategic decision-making, and decisions to engage a target or initiate an operation, become automated, you would actually have humans who are not in control of the overall planning; the decisions to go to war, the decisions to escalate a conflict, could all just happen automatically. And we've seen this already with online trading and the flash crashes that have occurred in stock markets, where different algorithms interact with each other and lead to a stock market crash, and they have to turn off the whole system. We don't want this happening with autonomous systems in warfare. But to the question you asked before about precision weapons: we know this is automation, and automation increases speed. It also reduces cost. By reducing the cost of bombing each individual target, you can afford to bomb a lot more targets. So if you're only killing a certain percentage of civilians with each strike, but you can now strike many, many more things, you can actually wind up having a much larger impact on the civilian population, even though you've increased precision.
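
Asaro's precision-versus-volume point can be made concrete with back-of-the-envelope numbers (invented here for illustration): if automation halves the expected civilian harm per strike but cuts the cost per strike enough to triple the number of strikes, total expected harm still rises by half.

```python
# Back-of-the-envelope illustration of the precision-vs-volume argument.
# All numbers are invented for illustration.

harm_per_strike_manual = 1.0  # expected civilian harm per strike (arbitrary units)
strikes_manual = 100

harm_per_strike_auto = 0.5    # precision halves the harm per strike...
strikes_auto = 300            # ...but lower cost triples the number of strikes

total_manual = harm_per_strike_manual * strikes_manual  # 100.0
total_auto = harm_per_strike_auto * strikes_auto        # 150.0

print(f"manual total harm: {total_manual}, automated total harm: {total_auto}")
# Per-strike harm fell by 50%, yet total expected harm rose by 50%.
```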

00:21:20

It's not automatic that these systems will improve warfare or reduce its impact on civilians.

00:21:27

Dr Peter Asaro, I'm going to have to stop you there. I'm sure we could talk about this all evening; it's an absolutely fascinating subject. We really appreciate your time. Dr Peter Asaro, Mikey Kay, thank you. And here in the studio, Priya, thank you so much for joining us. That's it, we are out of time. AI Decoded will be taking a well-deserved break for the month of August, but don't worry, we will be back in full force at the beginning of September. Do please join us then.

AI Transcription provided by HappyScribe
Episode description

An array of tools powered by artificial intelligence (AI) are under development or already in use in the defence sector. For instance ...