
Transcript of EP 1: Ready Or Not

The Last Invention
00:00:11

This is The Last Invention. I'm Gregory Warner, and our story begins with a conspiracy theory.

00:00:18

Greg, last spring, I got this tip via the encrypted messaging app Signal.

00:00:26

This is reporter Andy Mills.

00:00:28

From a former tech executive, and he was making some pretty wild claims. I wanted to talk to him on the phone, but he thought his phone was being tapped. But the next time I was out in California, I went to meet with him.

00:00:42

I'm really contending with who I am in this moment. Up until a few months ago, I was an executive in Silicon Valley. And yet here I am sitting in a living room with you guys talking about what I think is one of the most important things that needs to be discussed in the whole world, which is the nature in which power is decided in our society.

00:01:07

And he told me this story that a faction of people within Silicon Valley had a plot to take over the United States government, and that the Department of Government Efficiency, DOGE, under Elon Musk, was really phase one of this plan, which was to fire human workers in the government and replace them with artificial intelligence. And that over time, the plan was to replace all of the government and have artificial intelligence make all the important decisions in America.

00:01:46

I have seen both the nature of the threat from inside the belly of the beast, as it were, in Silicon Valley, and seen the nature of what's at stake.

00:02:00

Now, this guy, his name is Mike Brock, and he had formerly been an executive in Silicon Valley. He'd worked alongside some big-name guys like Jack Dorsey, but he'd recently started a Substack. He told me that after he published some of these accusations, he had become convinced that people were after him.

00:02:18

I have reason to believe that I've been followed by private investigators. For that and other reasons, I traveled with private security when I went to DC and New York City last week.

00:02:30

He told me that he had just come back from Washington, DC, where he had met with a number of lawmakers, including Maxine Waters, and debriefed them about this threat to American democracy.

00:02:44

We are in a democratic crisis. This is a coup. This is a slow-motion, soft coup.

00:02:51

And so this faction, who is in this faction? What is it? Like the Masons or something, or was it a secret cult?

00:02:58

Well, he named several names, people who are recognizable figures in Silicon Valley. He claimed that this, quote, unquote, conspiracy went all the way up to JD Vance, the vice president. He called the people who were behind this coup.

00:03:14

The Accelerationists.

00:03:16

The Accelerationists. It was a wild story. But some conspiracies turn out to be true. It was also an interesting story. So I started making some phone calls. I started looking into it. And some of his claims I could not confirm. Maxine Waters, for example, did not respond to my request for an interview. Other claims started to somewhat fall apart. And of course, eventually, DOGE itself somewhat fell apart. Elon Musk ended up leaving the Trump administration, and for a while, it felt like it was one of those tips that just doesn't go anywhere. But in the course of all these conversations I was having with people close to artificial intelligence, I realized that there was an aspect of his story that wasn't just true, but in some ways didn't go quite far enough. Because there is indeed a faction of people in Silicon Valley who don't just want to replace government bureaucrats, but want to replace pretty much everyone who has a job with artificial intelligence. They don't just think that the AI that they're making is going to upend American democracy. They think it is going to upend the entire world order.

00:04:37

The world, as you know it, is over. It's not about to be over.

00:04:42

It's over.

00:04:43

I believe it's going to change the world more than anything in the history of mankind, more than electricity.

00:04:49

But here's the thing, they're not doing this in secret. This group of people includes some of the biggest names in technology, Bill Gates, Sam Altman, Mark Zuckerberg, most of the leaders in the field of artificial intelligence. AI is going to be better than almost all humans at almost all things. A kid born today will never be smarter than AI. It's the first technology that has no limit.

00:05:14

Wait, so you get a tip about a slow-motion coup against the government, and then you realize, no, this is not just about the government. This is pretty much every human institution.

00:05:22

Well, yes and no. Many of these accelerationists think that this AI that they're building is going to lead to the end of what we have come to think of as jobs, the end of what we traditionally thought of as schools. Some would even say this could usher in the end of the nation state. But they do not see this as some shadowy conspiracy. They think that this may end up literally being the best thing to ever happen to humanity.

00:05:51

I've always believed that it's going to be the most important invention that humanity will ever make. Imagine that everybody will now, and in the future, have access to the very best doctor in the world, the very best educator.

00:06:06

The world will be richer, and we can work less and have more. This really will be a world of abundance. They predict that their AI systems are going to be the thing that helps us to solve the most pressing problems that humanity faces. Energy breakthroughs, medical breakthroughs.

00:06:24

Maybe we can cure all disease with the help of AI.

00:06:27

They think it's going to be this hinge moment in human history where soon we will be living to maybe 200 years old, or maybe we'll be visiting other planets, where we will look back in history and think, Oh, my God, how did people live before this technology?

00:06:42

It should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy.

00:06:49

I think a world of abundance really is a reality. I don't think it's utopian, given what I've seen that the technology is capable of.

00:07:01

So these are a lot of bold promises, and they come from the people who are selling this technology. Why do they think that the AI that they are building is going to be so transformative?

00:07:12

Well, the reason that they're making such grandiose statements and these bold predictions about the near future, it comes down to what it is they think that they're making when they say they're making AI. Okay. This is something that I recently called up my old colleague, Kevin Roose, to talk about. Kevin, how is it that you describe what it is that the AI companies are making? Am I right to say that they're essentially building a super mind, a digital super brain?

00:07:43

Yes, that is correct.

00:07:45

He's a very well-sourced tech reporter and a columnist at the New York Times.

00:07:48

Also co-host of the podcast Hard Fork.

00:07:50

And he says that the first thing to know is that this is a far more ambitious project than just building something like chatbots.

00:07:59

Essentially, many of these people believe that the human brain is just a biological computer, that there is nothing special or supernatural about human intelligence, that we are just a bunch of neurons firing and learning patterns in the data that we encounter, and that if you could just build a computer that simulated that, you could essentially create a new intelligent being.

00:08:26

I've heard some people say that we should think of it less like a piece of software or a piece of hardware and more like a new intelligent species.

00:08:34

Yes. It wouldn't be a computer program exactly. It wouldn't be a human exactly. It would be this digital supermind that could do anything a human could and more.

00:08:47

The goal, the benchmark that the AI industry is working towards right now is something that they call AGI, Artificial General Intelligence. The general is the key part because a general intelligence isn't just really good at one or two or 20 or 100 things, but like a very smart person, can learn new things, can be trained in how to do almost anything.

00:09:11

I guess this is where people get worried about jobs getting replaced because suddenly you have a worker, a lawyer or a secretary, and you can tell the AI to learn everything about that job.

00:09:22

Exactly. I mean, that is what they're making, and that's why there are a lot of concerns about what this could do to the economy. I mean, a true AGI could learn how to do any human job, factory worker, CEO, doctor. As ambitious as that sounds, it has been the stated, on-paper goal of the AI industry for a very long time. But when I was talking to Kevin Roose, he was saying that even just a decade ago, the idea that we would actually see it within our lifetimes, that was something that even in Silicon Valley was seen as a pie-in-the-sky dream.

00:09:56

People would get laughed at inside the biggest technology companies for even talking about AGI. It seemed like trying to plan for something like building a hotel chain on Mars. It was that far off in people's imagination. And now, if you say you don't think AGI is going to arrive until 2040, you are seen as a hyper-conservative, basically a Luddite, in Silicon Valley.

00:10:22

I know that you are regularly talking to people at OpenAI and Anthropic and DeepMind and all these companies. What is their timeline at this point? When do they think they might hit this benchmark of AGI?

00:10:35

I think the overwhelming majority view among the people who are closest to this technology, both on the record and off the record, is that it would be surprising to them if it took more than about three years for AI systems to become better than humans at almost all cognitive tasks. Some people say physical tasks, robotics, that's going to take longer. But the majority view of the people that I talk to is that something like AGI will arrive in the next two or three years, or certainly within the next five.

00:11:12

I mean, holy shit.

00:11:15

Holy shit.

00:11:16

That is really soon.

00:11:17

This is why there have been such insane amounts of money invested in artificial intelligence in recent years. This is why the AI race has been heating up. Right.

00:11:28

This is to accelerate the path to AI.

00:11:31

But this has also really brought more attention to this other group of people in technology, people who I personally have been following for over a decade at this point, who have dedicated themselves to try everything they can to stop these accelerationists.

00:11:49

The basic description I would give to the current scenario is, if anyone builds it, everyone dies.

00:11:55

Many of these people, like Eliezer Yudkowsky, are former accelerationists who used to be thrilled about the AI revolution and who for years now have been trying to warn the world about what's coming. I am worried about the AI that is smarter than us. I'm worried about the AI that builds the AI that is smarter than us and kills everyone. There's also the philosopher Nick Bostrom. He published a book back in 2014 called Superintelligence. Now, a superintelligence would be extremely powerful. We would then have a future that would be shaped by the preferences of this AI. Not long after, Elon Musk started going around sounding this alarm. I have exposure to the most cutting-edge AI, and I think people should be really concerned about it. He went to MIT. I mean, with artificial intelligence, we are summoning the demon. Told them that creating an AI would be summoning a demon. AI is a fundamental risk to the existence of human civilization. Musk went as far as to have a personal meeting with President Barack Obama, trying to get him to regulate the AI industry and take the existential risk of AI seriously. But he, like most of these guys at the time, just didn't really get anywhere.

00:13:07

However, in recent years, that has started to change. The man dubbed the godfather of artificial intelligence has left his position at Google, and now he wants to warn the world about the dangers of the very product that he was instrumental in creating. Over the past few years, there have been several high-profile AI researchers, in some cases, very decorated AI researchers. This morning, as companies raced to integrate artificial intelligence into our everyday lives, one man behind that technology has resigned from Google after more than a decade. Who have been quitting their high-paying jobs, going out to the press, and telling them that this thing that they helped to create poses an existential risk to all of us.

00:13:50

It really is an existential threat. Some people say this is just science fiction. Until fairly recently, I believed it was a long way off.

00:13:57

One of the biggest voices out there doing this has been this guy, Geoffrey Hinton. He's a really big deal in the industry, and it meant a lot for him to quit his job, especially because he's a Nobel Prize winner for his work in AI.

00:14:09

The risk I've been warning about the most, because most people think it's just science fiction, but I want to explain to people it's not science fiction, it's very real, is the risk that we'll develop an AI that's much smarter than us, and it will just take over.

00:14:25

It's interesting: when he's talking to journalists, trying to sound this alarm, they're often saying, Yes, we know that AI poses a risk if it leads to fake news, or what if someone like Vladimir Putin gets a hold of AI?

00:14:37

It's inevitably, if it's out there, going to fall into the hands of people who maybe don't have the same values, the same motivations.

00:14:45

He's telling them, No, no, no, no, no, this isn't just about it falling into the wrong hands. This is a threat from the technology itself.

00:14:52

What I'm talking about is the existential threat of this digital intelligence taking over from biological intelligence. For that threat, all of us are in the same boat, the Chinese, the Americans, the Russians, we're all in the same boat. We do not want digital intelligence to take over from biological intelligence.

00:15:12

Okay, so what exactly is he worried about when he says it's an existential threat?

00:15:16

Well, the simplest way to understand it is that Hinton and people like him, they think that one of the first jobs that's going to get taken after the industry hits their benchmark of AGI will be the job of AI researcher. Then the AGI will be working 24/7 on building another AI that's even more intelligent and more powerful.

00:15:43

You're saying AI would invent a better AI, and then that AI would invent an even better AI.

00:15:49

That is one way of saying it. Yes, exactly. The AGI now becomes the AI inventor, and each AI is more intelligent than the AI before it, all the way up until you get from AGI, Artificial General Intelligence, to ASI, Artificial Superintelligence.

00:16:08

The way I define it is this is a system that is single-handedly more intelligent, more competent at all tasks than all of humanity put together.

00:16:17

I've now spoken to a number of different people who are trying to stop the AI industry from taking this step. People like Connor Leahy. He's both an activist and a computer scientist.

00:16:29

So it can do anything the entirety of humanity working together could do. For example, you and me are generally intelligent humans, but we couldn't build semiconductors by ourselves. But humanity put together can build a whole semiconductor supply chain. A superintelligence could do that by itself.

00:16:47

It's like this. If AGI is as smart as Einstein or way smarter than Einstein, I guess.

00:16:54

An Einstein that doesn't sleep, that doesn't take bathroom breaks, right?

00:16:57

And lives forever and has memory for everything. Exactly. ASI, that is smarter than a civilization.

00:17:03

A civilization of Einsteins. That's how the theory goes, right? You have the ability now to do in hours or minutes things that would take a whole country, or maybe even the whole world, a century to do. And some people believe that if we were to create and release a technology like that, there'd be no coming back. Humans would no longer be the most intelligent species on Earth, and we wouldn't be able to control this thing.

00:17:33

By default, these systems will be more powerful than us, more capable of gaining resources, power, control, etc. Unless they have a very good reason for keeping humans around, I expect that by default, they will simply not do so, and the future will belong to the machines, not to us.

00:17:50

They think that we have one shot, essentially.

00:17:53

One shot. One shot meaning we can't update the app once we release it.

00:17:58

Once this cat is out of the bag, once this genie is out of the bottle, whatever metaphor.

00:18:01

Once this program is out of the lab, as it were.

00:18:03

Exactly. Unless it is 100% aligned with what humans value, unless it is somehow placed under our control, they believe it will eventually lead to our demise.

00:18:16

I guess I'm scared to ask this, but what would this look like? A global disaster? Or are we talking about it getting control of CRISPR and releasing a global pandemic?

00:18:25

Yes, there are those fears for sure. I'll dig more into all the different scenarios that they foresee in a future episode. But I think the simplest one to grasp is just this idea that a superior intelligence is rarely, if ever, controlled by an inferior intelligence. We don't need to imagine a future where these ASI systems hate us or break bad or something. The way that they'll often describe it is that these ASI systems, as they get further and further out from human-level intelligence, after they've evolved beyond us, might just not think that we're very interesting.

00:19:06

I mean, in some ways, hatred would be flattering. If they saw us as the enemy and we were in some battle between humanity and the AI, which we've seen from so many movies. But what you're describing is just indifference.

00:19:19

I mean, one of the ways that people will describe it is that if you're going to build a new house, of all the concerns you might have in the construction of that house, you're not going to be concerned about the ants that live on that land that you've purchased. They think that one day the ASIs may come to see us the way that we currently see ants.

00:19:40

It's not like we hate ants. Some people really love ants, but humanity as a whole has interests. If ants get in the way of our interests, then we'll fairly happily destroy them.

00:19:53

This is something I was talking to William MacAskill about. He is a philosopher and also the co-founder of this movement called the Effective Altruists.

00:20:01

The thought here is, if you think of the AI we're developing as like this new species, that species, as its capabilities keep increasing, so the argument goes, will just be more competitive than the human species. And so we should expect it to end up with all the power. That doesn't immediately lead to human extinction, but at least it means that our survival might be as contingent on the goodwill of those AIs as the survival of ants is on the goodwill of human beings.

00:20:36

We'll be back right after this break.

00:20:46

The Last Invention is sponsored by Ground News. Ground News is one of the most helpful tools that I use to avoid the echo chambers and media bias online, especially when it comes to shining a light on our blind spots. So whether you're politically on the left or the right or somewhere in the center, the blind spot feature from Ground News highlights the stories that tend to be disproportionately covered by one side or the other. As an example, take these two stories about President Donald Trump, one which had low coverage among left-leaning outlets.

00:21:25

It's a very important relationship. We're going to get along good with China.

00:21:28

Reported that Trump says US will accept 600,000 Chinese students as part of a trade deal.

00:21:34

I hear so many stories about we're not going to allow their students. We're going to allow, it's very important, 600,000 students.

00:21:40

And another, largely uncovered by right-leaning outlets.

00:21:43

Trump's social media company is using Crypto.com.

00:21:47

Trump family crypto empire expands with Crypto.com partnership.

00:21:52

That's our transactional Trump family.

00:21:54

Make some money when you can.

00:21:55

By seeing which stories are amplified or ignored, depending on the outlet, Ground News helps you step outside the filter bubbles that shape most people's news diets, giving you a fuller picture of what's actually happening. I really think that if you like this podcast, you're going to like their mission. Go to groundnews.com/invent to get 40% off the same unlimited access Vantage plan that we use. You can even sign up to stay up to date with the biases in your coverage proactively with the Weekly Blind Spot Report, delivered directly to your inbox. This is a great way to support them and the work that they do because Ground News is a subscriber-supported platform. We appreciate what they're up to. We appreciate their support for this podcast, so go check them out and make sure to use our link, groundnews.com/invent, so they know we sent you. This episode of The Last Invention is brought to you by FIRE, the Foundation for Individual Rights and Expression. There's a pattern that you can trace throughout history. In ancient Athens, Socrates was put to death for asking tough questions of the powerful. Centuries later, monarchs banned and burned books they considered dangerous.

00:23:07

In the last century, authoritarian governments shut down newspapers, censored broadcasts, even jailed their critics. The struggle was always the same: who gets to decide what people can know? Today, that struggle is playing out in a new arena, and the risk now is subtler. Search results that quietly vanish, recommendation engines that steer us toward safe and comfortable answers, and AI filters that can suppress ideas before we ever even see them. That's where FIRE comes in. FIRE has spent decades defending free inquiry on our campuses, in the courts, and in our culture. Now, through a $1 million grant program in collaboration with the Cosmos Institute, they are supporting projects that keep free thought alive in the era of AI. Join us today at thefire.org/thelastinvention. By supporting FIRE, you are protecting the future of inquiry in America and ensuring that tomorrow's most important questions can still be asked. Once again, visit thefire.org/thelastinvention. And thanks.

00:24:11

If the future is closer than we think, if one day soon there is at least a reasonable probability that superintelligent machines will treat us like we treat bugs, then what do the folks worried about this say that we should do?

00:24:29

Well, there are essentially two different approaches to the perceived threat. Some people who are worried about this, they simply say that we need to stop the AI industry from going any further, and we need to stop them right now.

00:24:44

We should not build ASI. Just don't do it. We're not ready for it, and it shouldn't be done. Further than that, I'm not just trying to convince people to not do it out of the goodness of their heart. I think it should be illegal. It should be globally illegal for people and private corporations to attempt even to build systems that could kill everybody.

00:25:04

What would that mean to make it illegal? How do you enforce that?

00:25:07

Some accelerationists joke, like, what are you going to do, outlaw algebra?

00:25:11

You don't need uranium in a secret center. You can just build it with code.

00:25:16

But you do need data centers, and you could put in laws and restrictions that stop these AI companies from building any more data centers, and a number of other laws. There are some people, though, who go even further and would say that nuclear-armed states like the US should be willing to threaten to attack these data centers if AI companies like OpenAI are on the verge of releasing an AGI to the world.

00:25:45

Wait, so even bombing data centers that are in Virginia or in Massachusetts?

00:25:51

They see it as that great of a threat. They believe that on the current path we're on, there is only one outcome, and that outcome is the end of humanity.

00:26:02

If we build it, then we die.

00:26:04

Exactly. This is why many people have come to call this faction the AI Doomers. The accelerationists like to call them Doomers.

00:26:13

That was a pejorative coined by them, and very successfully, I must say.

00:26:17

I disavowed the Doomer label because I don't see myself that way.

00:26:20

Some of them have embraced the name Doomer. Others of them dislike the name Doomer. They often will call themselves the Realists. But in my reporting, everyone calls themselves the Realists, so I didn't think that would work.

00:26:32

I consider it to be realistic, to be calibrated.

00:26:35

And one of the reasons that they balk at the name is that they feel like it makes them come off as a bunch of anti-technology Luddites, when in fact, many of them work in technology, many of them love technology. People like Connor Leahy, they even like AI as it is right now. He uses ChatGPT. He just tells me that from everything that he sees, where it's headed, where it's going, we have no choice but to stop them.

00:27:00

If it turns out tomorrow, there's new evidence that actually all these problems I'm worried about are less of a problem than I think they are, I'd be the most happy person in the world. This would be ideal.

00:27:11

All right. So one approach is we stop AI in its tracks. It's illegal to proceed down this road we're on. But that seems challenging to do, given how much is already invested in AI, and frankly, how much potential value there is in the progress of this technology. So what's the alternative?

00:27:30

Well, there's another group of people who are pretty much equally worried about the potentially catastrophic effects of making an AGI and it leading to an ASI. But they agree with you that we probably can't stop it. Some of them would go as far as to say, We probably shouldn't stop it, because there really are a lot of potential benefits in AGI. What they're advocating for is that our entire society, essentially our entire civilization, needs to get together and try in every way possible to get prepared for what's coming.

00:28:06

How do we find the win-win outcome here?

00:28:09

One of the advocates for this approach that I talked to is Liv Boeree. She is a professional poker player and also a game theorist.

00:28:17

Our job now, right now, whether you're someone building it or someone who is observing people build it or just a person living on this planet, because this affects you too, is to collectively figure out how we unlock this narrow path, because it is a narrow path we need to navigate.

00:28:33

We should be really focusing a lot right now on trying to understand as concretely as possible what are all the obstacles we need to face along the way and what can we be doing now to ensure that that transition goes well.

00:28:47

This faction, which includes figures like William MacAskill, what they want to see is the thinking institutions of the world, the universities, research labs, the media, join together to try and solve all of the issues that we're going to face over the next few years as AGI approaches.

00:29:06

So you mean not just leave this up to the tech companies?

00:29:09

Exactly. They want to see politicians brainstorming ways to help their constituents in the event that the bottom falls out of the job market, right?

00:29:20

Right. Or prepare communities to have no jobs, I guess.

00:29:23

Some of them go that far, like universal basic income. And they also want to see governments around the world, especially in the US, start to regulate this industry. What are the concrete steps we could take in the next year to get ready?

00:29:37

So we'd like regulations that say when a big company produces a new, very powerful thing, they run tests on it and they tell us what the tests were.

00:29:46

Geoffrey Hinton, after he quit Google, he converted to this approach, and he was talking to me about the kinds of regulations that he wants to see.

00:29:55

We'd like things like whistleblower protection. If someone in one of these big companies discovers the company is about to release something awful which hasn't been tested properly, they get whistleblower protections. Those are to deal, though, with more short term threats.

00:30:10

Okay, but what about the long term threats? What about this idea that AI poses this existential threat? What is it that we could do to prevent that?

00:30:19

Okay, so I can tell you what we should do about AI itself taking over. There's one good piece of news about this, which is that no government wants that. So governments will be able to collaborate on how to deal with that.

00:30:33

So you're saying that China doesn't want AI to take over their power and authority. The US doesn't want some technology to take over their power and authority. And so you see a world where the two of them can work together to make sure that we keep it under control.

00:30:47

Yes.

00:30:48

In fact, China doesn't want an AGI to take over the US government because they know it will pretty soon spread to China. So we could have a system where there were research institutes in different countries that were focused on how are we going to make it so that it doesn't want to take over from people. It will be able to if it wants to, so we have to make it not want to. The techniques you need for making it not want to take over are different from the techniques you need for making it more intelligent. So even though the countries won't share how to make it more intelligent, they will want to share research on how do you make it not want to take over.

00:31:25

And over time, I've come to call the people who are a part of this approach the Scouts.

00:31:30

Like the Boy Scouts.

00:31:31

Be prepared. Like the Boy Scouts. Yes, exactly. And it turned out, after I ran this name by William MacAskill: So what if I called your camp the Scouts?

00:31:41

So a little fun fact about myself is I was a Boy Scout for 15 years.

00:31:48

He actually was a Boy Scout, and so I thought, Okay, the Scouts.

00:31:51

Maybe that's why I've got this approach.

00:31:54

But the key thing about the scouts approach, if it's going to work, is they believe that we cannot wait, that we have to start getting prepared, and we have to start right now. This is something that I was talking about with Sam Harris.

00:32:08

The reasons to be excited and to want to go, go, go are all too obvious, except for the fact that we're running all of these other risks, and we haven't figured out how to mitigate them.

00:32:18

Sam is a philosopher. He's an author. He hosts the podcast Making Sense, and he's probably the most impassioned scout that I know personally.

00:32:26

There's every reason to think that we have something like a tightrope to perform successfully now, in this generation, not 100 years from now. We're edging out onto the tightrope in a style of movement that is not careful. If you knew you had to walk a tightrope and you got one chance to do it, and you've never done this before, what is the attitude of that first step and that second step? We're like racing out in the most chaotic way. Flailing our arms. Yeah. Just like, we're off balance already. We're looking over our shoulder, fighting with the last asshole we met online, and we're leaping out there.

00:33:13

Right. You've been on this for a long time. In 2016, I remember you did this big TED Talk. I watched it at the time. It had millions of views. You were essentially saying the same thing. You were trying to get people to realize that we have a tightrope to walk and we have to walk it right now.

00:33:30

Well, I wanted to help sound the alarm about the inevitability of this collision, whatever the time frame. We know we're very bad predictors as to how quickly certain breakthroughs can happen. Stuart Russell's point, which I also cite in that talk, which I think is a quite brilliant change in frame, he says, Okay, let's just admit it is probably 50 years out. Let's just change the concepts here. Imagine we received a communication from elsewhere in the galaxy, from an alien civilization that was obviously much more advanced than we are because they're talking to us now. The communication reads: People of Earth, we will arrive on your lowly planet in 50 years. Get ready. Just think of how galvanizing that moment would be. That is what we're building, that collision and that new relationship.

00:34:58

Coming up on The Last Invention.

00:35:06

Why is all the worry about the technology going badly wrong? And why are people not worried enough about it not happening?

00:35:14

The accelerationists respond to these concerns.

00:35:17

Existential risk for humanity is a portfolio.

00:35:20

We have nuclear war, we have pandemics, we have asteroids, we have climate change.

00:35:26

We have a whole stack of things that could actually, in fact, have this existential risk. You're saying that it's going to decrease our overall existential risk, even as it itself may pose, to some degree, an existential risk?

00:35:38

Yes. Researchers tell us what they saw that changed their minds.

00:35:43

I was a person selling AI as a great thing for decades. I convinced my own government to invest hundreds of millions of dollars in AI. All my self-worth was on the plan that it would be positive for society. And I was wrong. I was wrong.

00:36:06

And we go back to where the technology fueling this debate began.

00:36:10

Basically, this is the Holy Grail of the last 75 years of computer science.

00:36:16

It is the genesis, the philosopher's stone of the field of computer science.

00:36:27

The Last Invention is produced by Longview, home for the curious and open-minded. To learn more about us and our work, go to longviewinvestigations.com. Special thanks this episode to Tim Urban. Thanks for listening. We'll see you soon. This episode is sponsored by Ground News, the app that helps you spot media bias and see a broader picture of the news shaping our world. Get 40% off their Vantage Plan at groundnews.com/invent. Sponsored by FIRE, defending free thought in the age of AI. You can learn more at thefire.org/thelastinvention.

AI Transcription provided by HappyScribe
Episode description

A tip alleging a Silicon Valley conspiracy leads to a much bigger story: the race to build artificial general intelligence — within the next few years — and the factions vying to accelerate it, to stop it, or to prepare for its arrival.

FEATURING:

Mike Brock, Kevin Roose, Geoffrey Hinton, Connor Leahy, William MacAskill, Liv Boeree, Sam Harris, and Yoshua Bengio

LINKS:

Sam Harris 2016 TED Talk, Can We Build AI Without Losing Control Over It

Nick Bostrom's book Superintelligence

Sam Harris' Making Sense podcast

Center for AI Safety

PauseAI (Connor Leahy's organization)

Hard Fork podcast

Geoffrey Hinton’s resignation coverage: Why the “godfather of AI” left Google to warn about existential risk

William MacAskill's book What We Owe the Future

Eliezer Yudkowsky's book If Anyone Builds It, Everyone Dies

CREDITS:

This episode of The Last Invention was reported and produced by Andy Mills, Gregory Warner, Andrew Parsons, Megan Phelps-Roper, Matthew Boll, Seth Temple Andrews, and Ethan Mannello. It is hosted by Gregory Warner.

Music for this episode was composed by Scott Devendorf, Ben Lanz, Cobey Bienert, and Matthew Boll

The Last Invention artwork by Jacob Boll

To become a Longview subscriber you can visit us here

Thank you to our sponsors Ground News and FIRE

GROUND NEWS : Go to groundnews.com/invent to get 40% off unlimited access to global coverage of the stories shaping our world.

FIRE 

This is a paid sponsorship link.