Yes, welcome to AI Decoded, that time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. We had a short break last week due to the US presidential election, so now we know the result: how will the future of artificial intelligence look once Donald Trump returns to the White House? Well, The Guardian says Elon Musk's influence on the Trump administration could lead to tougher safety standards for AI. That's according to leading scientists who have previously worked closely with Musk on addressing AI's dangers. CNBC reports that Denmark has laid out a framework to help EU member states use generative artificial intelligence in compliance with the European Union's strict new AI Act. The new approach has won the backing of some of Denmark's biggest banks, pension managers, and insurance firms, as well as US tech giant Microsoft. The Financial Times asks: should we be worried about hurting AI machines' feelings? The paper says AI company Anthropic has appointed an AI welfare researcher to assess, among other things, whether its systems are inching towards consciousness or agency, and if so, whether their welfare must be considered. And finally, in the Metro, the Vatican and Microsoft have unveiled a digital twin of St Peter's Basilica.
Using artificial intelligence, people can visualize and explore the building, while AI is also being used to help manage visitor flows and identify conservation problems. Our AI correspondent has been to the Holy See, and we'll be showing you that stunning report later in the program. Well, joining me tonight to discuss all of these topics, we have Connor Leahy, CEO of the AI safety research company Conjecture, and also with us, our regular AI Decoded presenter, Priya Lakhani, CEO of Century Tech. Really great to have both of you with us. I think this is going to be a fascinating conversation. Let's begin with that mention of Elon Musk. Many people see him as a disruptor, a deregulator, but actually, he's warned that unrestricted development of artificial intelligence could be catastrophic for humanity. So given his influence with Donald Trump, Connor, are we going to see tougher safety standards regarding AI?
It's extremely hard to say. One thing Donald Trump and also his Vice-President, JD Vance, have been pretty consistent on is talking about deregulation, especially of technology and other things. But Elon Musk has been extremely consistent in the past in talking about the catastrophic, even extinction-level, risks from powerful AI systems. In this article, the scientist Max Tegmark, who knows Elon Musk quite well, speaks about how Musk really understands that it's more of a suicide race going on right now, because especially in the US there's a narrative that the US must race with China, that it must beat China to AI and AGI first. And this is a losing proposition for everyone involved, and this is something Musk understands quite well.
Interesting.
Do you think Musk is going to be humanity's savior?
Well, if someone could do it, he's been very damn lucky so far, hasn't he?
I mean, what could that look like in terms of that tougher regulation?
So the important thing is that there is a small number of companies and organizations racing towards extremely powerful forms of AGI, artificial general intelligence. This is quite different from the applications we might see in a medical context, or even maybe in a chatbot context.
Or maybe the visualization of the basilica that we're going to be seeing later.
Exactly. I don't think anyone wants to see harsher regulations on these kinds of applications of AI. It's fantastic. What I think Musk and people in his field are particularly worried about is general AI systems that are as intelligent as, or even more intelligent than, humans. Because we already have a lot of problems with our AI systems now. We don't know how to control them. We barely know how they work, if at all. If such systems were as smart as people, or even smarter, and we don't know how to control them, well, how does that end well?
I think the fundamental question here is: what is the meaning of intelligence? We talked about some simpler systems, as you say, and then we're talking about some pretty profound systems. You actually explain the meaning of intelligence in the compendium that you created. Can you give us your version of the meaning of intelligence?
For me, fundamentally, intelligence is something mechanistic. It's the ability to solve problems, the ability in many different environments and circumstances to solve problems. Humans are definitely unusually intelligent in this regard, but it's not special to some degree. Our last common ancestor with the chimpanzee, a couple of million years ago, was very similar to us. Our cousins, the chimpanzees, have very similar brain structures to us, with about all the same parts. Ours is just three times bigger.
Yeah, it's only a tiny, tiny bit of our DNA that's different.
A tiny percentage of difference. But even just this small difference of three times the brain size is the difference between chimpanzees living in the jungle throwing rocks at each other and human beings building nuclear weapons and going to the moon. So intelligence is something extremely powerful, and it's clearly something that you can get just by making brains bigger somehow.
Sorry, just the challenge with that, though, I think people would say, is that you're talking about scaling, in a sense, comparing the scaling of the chimp brain to the human brain. But as humans, we have these intrinsic biological drives, don't we? We are interested in competition and territory and survival and reproduction, whereas a machine doesn't necessarily have that biological drive; it's learning patterns. How do you square that circle, making that comparison between chimps and humans, and then between humans and an AGI, which is a scaled artificial neural network, if you like?
Yeah, I think this is absolutely correct. It's important to see that intelligence is a separate thing from drives or emotions. You can be extremely intelligent and not have emotions, or have different emotions, or very few emotions. Everyone, I think, has met a sociopath in their life who was very intelligent but didn't have many emotions. So by default, you're absolutely correct: AI systems will not have emotions. They will be more like sociopaths or psychopaths. But that doesn't necessarily make me feel good.
No, it doesn't.
That's slightly worrying, and I think most people watching would agree. In this compendium, I'd underlined... well, I'd underlined lots of it, but you say ultimately the more intelligent and powerful agent decides the future. So you're looking back a few million years, talking about how humans evolved: they became much more powerful than the chimpanzee and therefore could control the chimpanzee's destiny. You're suggesting that now AI might control us as humans. It's a very dystopian take, some might say. How realistic is it?
Well, I think if it is possible, and it is done, that we create machines that are smarter than humans, and if we don't control them, well, who do you think is going to be doing the controlling? It's not going to be us. Chimps are much stronger than us. I don't know if you've ever seen pictures of a shaved chimp; they're hench. I don't think any of us could take a chimp. But we control chimps because of our superior intelligence, our technology, our abilities, and so on. If we make machines that have even more of that, that are even cleverer, even smarter, even better at developing new technologies, at coordinating...
This stuff that Denmark is doing, creating these "guardrails", as they've been described, to allow businesses and industry to be more compliant with the EU's law on AI: how useful will that be in trying to curb some of what you're talking about?
I think the EU AI Act is a particularly interesting piece of legislation, as it's maybe the only piece of legislation that specifically talks about general-purpose AI systems and the systemic risks that come from them. The work coming out of Denmark here is a good attempt from industry at creating best practices for deploying various current-level systems in real use cases, including in highly regulated environments. I think this is the precursor to the work of integrating powerful AI systems into our economy. But it is very far from the human-level intelligent systems that we're talking about right here. It's definitely an important step in this direction, and I think the EU AI Act, especially over the next couple of years, will be sharpening its focus quite a lot on these general-purpose AI systems and their systemic risks. And it'll be very interesting to see how that legislation shapes up.
Connor, I was just wondering about the Denmark piece of work, which is a really practical guide. One of the challenges that I can see is that they talk about, for example, AI assistants: if you're going to have an AI assistant in the private sector or the public sector, here's a framework to ensure that you're using it in compliance with the EU AI Act and also with GDPR. But some of these things just feel like they're going to stifle innovation, particularly for small businesses. They talk about red-teaming your AI assistant. They talk about making sure that your AI assistant operates within certain competencies, and if it doesn't, it should stop. And I just imagine a small retailer, a small business that wants to utilize this technology, create those efficiencies, be more productive, have those predictions coming in to forecast supply and demand. What do you think about this stifling innovation? And the reason I ask is that it comes full circle to some of the issues in the US: it's why Gavin Newsom, the governor, didn't sign off on that bill in California that Elon Musk actually supported; the argument was, we don't want to stifle innovation.
How can we create regulation, create fairness, avoid the catastrophe, avoid the suicide race, but do it in a fair way so that everybody can benefit from the opportunities that AI brings?
That's a trillion-dollar question, isn't it? It really is. The true answer to this question is basically that there's always a fundamental trade-off between short-term growth and long-term systemic risk. Let me give you an example of a form of innovation that I believe should have been stifled: bad bonds during the 2008 financial crisis. These were a new, innovative form of financial product that created massive systemic risk in the market and led to a huge crash that harmed millions of people. But some people got really rich, and they innovated a lot. So innovation by itself is not inherently good or bad. It's a question of how we can get the good things that we want while mitigating the externalities and the downside risk. I think there's plenty of very fair criticism of the EU AI Act, that maybe it's hit the wrong trade-off. But I think it's important to understand there's always a trade-off. There's no free lunch.
I just want to bring in this question of whether we should be fretting over AI's feelings. Connor, I'm sensing a no from you.
No. As I said: psychopaths and sociopaths. I think a lot of this is a distraction to a certain degree. We're already seeing, for example, many people falling in love with chatbots. This is already quite a big problem among younger generations. And also, for example, in the East, in China, there is huge demand for these kinds of products, which I think have very severe and unaccounted-for mental health implications when people start believing these things actually have emotions, or start caring about them. So could there hypothetically be some thinking machine with emotions? Maybe, I don't know. But we are nowhere near that, and the idea that we should therefore care about our computers' feelings, I think, is a dangerous direction at this point in time.
We're going to pause the conversation just for a moment. Coming up after the break: religion and artificial intelligence might sound like an unlikely pair, but we'll be showing you how the Vatican is using AI to help millions of people explore one of Christendom's most important sites. Stay with us here on AI Decoded. And welcome back to AI Decoded. Now, what do you get when you take 400,000 detailed images at the Vatican and mix in some AI technology? The answer: a digital replica, or digital twin, of the famous St Peter's Basilica. We're going to have a report from Marc Cieslak in just a minute. But first, Priya, all of these images... you know something about digital twinning; you've come across this before. Tell us more about how the process works.
I think we just need to think about this as taking anything in the physical world and creating a digital twin of it: whether it's a building and its operations, whether it's an aircraft engine, whether it's the human body. You create a digital replica of that thing, taking enough data so that you can then build what is essentially a simulation. So we can simulate how that machine might work; we can start to calculate and use equations to, for example, if it were for healthcare, invent new therapies. In this case, and what's absolutely fascinating, is that it's a very different version of the digital twin from the ones I've been looking at in supply-chain operations and the human body. It's a digital twin of something where we can then look at where maintenance needs to occur in a very, very old building that needs upkeep. And I think what we're going to see, and I'm not going to steal Marc's thunder here, is potentially also spotting lovely new things that we haven't actually seen before.
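The maintenance use of a digital twin described here boils down to a simple loop: the twin's model predicts what a measurement should look like, and real readings that drift too far from the prediction get flagged for attention. A minimal toy sketch of that idea in Python; the baseline formula, the readings, and the tolerance are all invented for illustration, not data from the Vatican project:

```python
# A toy "digital twin" maintenance check: compare real measurements
# against the twin's simulated baseline and flag large deviations.
# All numbers and thresholds here are made up for illustration.

def expected_value(t):
    """The twin's predicted measurement at time t (a made-up model)."""
    return 20.0 + 0.1 * t  # e.g. expected crack width in microns

def needs_maintenance(t, measured, tolerance=2.0):
    """Flag a reading that drifts too far from the twin's prediction."""
    return abs(measured - expected_value(t)) > tolerance

# (time, measurement) pairs from a hypothetical sensor survey
readings = [(0, 20.1), (10, 21.3), (20, 26.5)]
flags = [needs_maintenance(t, m) for t, m in readings]
print(flags)  # [False, False, True] -- only the last reading is flagged
```

A real system would replace `expected_value` with a physics-based or learned model of the structure, but the compare-and-flag pattern is the same.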
Okay, well, these 400,000 images were taken at the Vatican over a period of just three weeks, using drones, cameras, and lasers, plus AI technology, to make an exact digital replica of the exterior and interior of St Peter's Basilica. So let's get this report now from Marc Cieslak.
Religion and artificial intelligence might sound like an unlikely pair. Here at the Vatican, headquarters of the Catholic Church, Pope Francis and the Vatican's own AI experts have been exploring the ethical use of the tech for some years now. But a new initiative will see AI used to digitally preserve one of its most significant locations. To see what it is, I'm making the journey from the heart of Rome to the sovereign nation the Italian capital surrounds, the Vatican City.
I'm now leaving Italy. As I step over this line, I'm entering a completely different country.
Entry into the Vatican City doesn't require a visa or passport. Visitors can simply walk in, and six days a week, that's exactly what pilgrims and tourists alike do in huge numbers.
A state within a state, the world's smallest country: the Vatican City, home to the planet's largest church by capacity, capable of accommodating 60,000 people. St Peter's Basilica.
An architectural masterpiece. Some of history's greatest artists contributed to its construction. Michelangelo designed its 136-metre dome, one of the tallest in the world. Bernini created the Baldacchino, the ornate bronze canopy above St Peter's tomb. It's the tomb that gave this church its name, believed to be the burial site of St Peter, one of the 12 Apostles of Jesus, and the first Pope of the Catholic Church.
Something like 50,000 people visit St Peter's Basilica every single day. Big numbers. But there are 1.3 billion Catholics in the world, many of whom will never get the opportunity to visit this, the most important church in the Catholic world. And that is where lots and lots and lots of photographs, with the help of AI, come in.
Every 25 years, the Catholic Church celebrates a year of forgiveness and spiritual renewal known as a Jubilee. The next one is in 2025. A huge restoration job is underway here in preparation. Part of those preparations involves digitally preserving St Peter's. The Vatican has partnered with tech giant Microsoft and a French company, Iconem, which specializes in this work. Together, they've built a virtual twin of the entire church by photographing every part of its interior and exterior.
We collected approximately 500,000 images using cameras and drones. We then processed them with photogrammetry software, and features were extracted to create the 3D environment and the 3D aspect of the monument. They opened the Basilica for us from 7:00 PM to midnight. We were working there all night long, maybe 12 evenings.
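The photogrammetry step described here, recovering 3D structure from overlapping 2D photographs, ultimately rests on triangulation: a feature seen from two known camera positions pins down a point in space. A minimal NumPy sketch of that core operation, using the classic direct linear transform; the camera matrices and the point are toy values for illustration, not data from this project:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two images.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same feature in each view.
    Stacks the linear constraints from both views and takes the
    singular vector with the smallest singular value (the DLT method).
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Two toy cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point through both cameras, then recover it.
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

Production pipelines repeat this over millions of matched features, with camera poses themselves estimated and refined by bundle adjustment, but each reconstructed point comes from essentially this computation.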
The result is a 3D model that can be explored in minute detail. The project is called the People's Basilica. The digital twin of the Basilica will be available online, allowing people worldwide to explore its art and architecture.
One interesting aspect of this project is that by exploring St Peter's Basilica virtually, the viewer can get up close and personal with details of the church that would be almost impossible to see in the real world. But I have to say the real-world experience takes some beating.
What was great is that we first went on site and we were close to the mosaics. We were everywhere in the basilica, and we could see the missing ties, the cracks that were first identified by the architects there.
This detail is vital to the basilica's upkeep and has helped identify damage that requires attention from the team renovating St Peter's. But why is AI an important part of this process? Brad Smith is President of Microsoft and had previously liaised with the Vatican on its AI and ethics work.
In some ways, it's a marriage of two different technologies. One is what we're familiar with, a camera with very high-resolution photography, but it really takes AI to knit all of that together.
Without AI-enhanced algorithms and tools, it would not have been possible to process this amount of data.
But AI uses a lot of energy. And with so much concern around the environmental impact of artificial intelligence, is this the right technology to employ in preservation of historically and culturally significant locations?
I think we have to recognize that AI is one of many things that uses electricity, and we in the tech sector need to both acknowledge and embrace that need. We need to be committed as we are to making data centers become more efficient so they use less electricity. We need to be committed as we are to investing to bring online additional sources of electricity. We need to invest in additional sources of carbon-free energy for electricity.
While the Vatican seems happy to employ AI to help preserve one of its most significant sites, how does the Church feel about the effect of artificial intelligence on the wider world?
Cardinal Mauro Gambetti is overseeing this entire project.
Artificial intelligence is like a tool given to humanity to better understand reality. It's like a language, but it also has potential to bring people closer to history, art, and spirituality. Like many tools that have helped humanity grow in understanding and allowed society to develop.
The People's Basilica is online now.
Virtual visitors will be able to discover for themselves whether that spirituality translates into a digital experience. Marc Cieslak, BBC News.
I wonder what Michelangelo would think of that.
Absolutely fascinating. I think you've met the Pope's AI guy, if I can call him that.
Yeah, Paolo Benanti, he's great. The Vatican has actually been very active in putting out ethical guidelines and guidance around AI. It's really been quite impressive.
Yeah, I think in the Pope's 2023 speech, he talked about the ethical use of AI, and this is obviously a fantastic, positive example of using a digital twin. Have you seen other examples of this technology in your work?
Yeah, not in my work personally, but I've seen many other people from other companies and so on using this technology for industrial engineering applications: motors, engines, aircraft, stuff like this.
If you could have a digital twin of anything, what would it be, Connor?
Probably a human body, so we could test medicines more efficiently.
That's probably down the track somewhere. Someone must be thinking about that. You've just mentioned it.
AI Decoded will have an episode on that coming soon, but I don't want to ruin it.
So do stay with us, every week on a Thursday, to see that. Absolutely. Connor, I have to say that compendium that you and a number of your colleagues have written is a really fascinating read, if anyone wants to read more about AI and the questions raised by artificial intelligence. Connor Leahy and Priya Lakhani, thank you so much. Really good to talk to you. And we are out of time for this week's AI Decoded. We will do it all again at the same time next week.