
Transcript of EP 2: The Signal

The Last Invention
00:00:03

I'm Gregory Warner, and this is The Last Invention.

00:00:08

On November 30th, 2022, the world as we know it changed forever with the introduction of ChatGPT.

00:00:14

The robots are taking over. The internet's going crazy over new artificial intelligence called ChatGPT.

00:00:21

So much of this moment that we're in, in the AI revolution, so much of this debate that we're having about what we should do next, was triggered by the arrival of ChatGPT.

00:00:32

The next generation of artificial intelligence is here. It took Netflix more than three years to reach one million users, but it took ChatGPT just five days. The program can write really complex essays, books, news articles, and even computer code.

00:00:49

Whether you're a ChatGPT fan or not, that single AI chatbot and the models that followed, they supercharged an industry, shifted our relationship to AI, and fueled this debate about our future with intelligent machines. That moment, that meteoric impact on our collective conversation, that wasn't just predicted. That moment was forged over 70 years ago in the middle of a war.

00:01:21

All right, Greg. In a lot of ways, what we think of today as artificial intelligence, it comes out of the Second World War.

00:01:28

Again, reporter Andy Mills.

00:01:31

And actually not just the war, but one battle within it.

00:01:36

The Battle of the Atlantic continues. As our convoys pass to and fro, U-boats lurk in the vast waters, preparing to cut our lifelines at every opportunity.

00:01:44

All right, so for context, it's 1940, and the Germans have all but cut off the supply lines between the US and Great Britain in the Atlantic Ocean, thanks in part to this technological edge that their Navy has in the form of U-boats.

00:02:00

Hitler has loosed the whole weight of his U-boat force against our lifeline in the Atlantic.

00:02:07

It became clear that if they couldn't stop these U-boats, they might lose the war.

00:02:14

The Battle of the Atlantic holds first place in the thoughts of those upon whom rests the responsibility for procuring the victory.

00:02:23

One great hope that they had to reverse Germany's dominance in the Atlantic was to crack the Enigma code that the Germans used to communicate with these U-boats. But the trouble was, month after month, as thousands of soldiers died and ships sank, nobody could crack it.

00:02:42

Right. And just to clarify, code-breaking at that point in history was still a very human endeavor. Yes.

00:02:48

At this point, it is very common for pretty much every military across the world to have a team of code breakers that they work with, where they try and intercept and decode the communications of their enemies. But none of those teams in the Allied forces were able to crack this code. Back in England, a team was assembled of a somewhat unlikely group of people. You had mathematicians, academics, even some chess masters.

00:03:16

These were folks who were not in the military. The military was trying to recruit beyond their ranks.

00:03:21

Yes, they were recruited out of the classrooms, recruited out of their research labs, and essentially given an Enigma, and the government said, Look, we need help. Is there anything you could do? And eventually, after a lot of trial and error, they end up constructing this electromechanical, essentially, calculator that could sift through millions and millions of possible configurations of this code, something that would have taken a group of human code breakers weeks and weeks, in just hours. And lo and behold...

00:03:55

German submarines have been surrendering all over the place in fairly satisfactory numbers.

00:04:00

They crack Enigma. It opens up the Atlantic. This is how you get events like D-Day. The Americans could now travel the Atlantic to Europe. Many people think that this is one of the key factors that leads to the Allies' victory over the Nazis.

00:04:15

Time and time again, the very issue of the war depended on the breaking of the U-boat blockade. Now, as some of the last U-boats came in from sea, no one forgot the lives given and the battles fought against the very grave menace they implied.

00:04:29

The main guy behind that code-breaking team, his name was Alan Turing. And according to the people that I interviewed, right there in the middle of the war, looking at this big electromechanical contraption that he had helped make, he was already envisioning the day when that machine would be able to think for itself. If we were trying to tell the story of where AI is at right now and where it might be headed next, where does that begin? Alan Turing.

00:05:01

Alan Turing.

00:05:02

Alan Turing, obviously, one of the godfathers of computer science. And he was the father of AI.

00:05:08

From almost day one, Turing saw computers not just as tools that could break codes, but as machines that could think at the highest level.

00:05:16

Yes, and from the very beginning of this field of computer science, he inspired this goal to make what today we call AGI. It is the genesis, the, er, philosopher's stone of the field of computer science. Basically, this is the Holy Grail of the last 75 years of computer science. Even more dramatically than that, he believed that ultimately, the machines would think even better than humans, and that when that happened, they would be able to take control. He talks about how, Surely one day, there will be machines that can converse with each other, improve upon things, do anything a human can do, and they will surely leave humanity behind.

00:06:04

The idea that this is possible has been around for a very long time. Alan Turing in 1951 said that the default outcome is that the machines then are going to take control.

00:06:14

This is something that I talk to Max Tegmark about. He teaches machine learning at MIT and is a very influential voice in the AI debate we're having today.

00:06:23

Because he didn't think of AI as just another technology like the steam engine. He thought about it as a new species. But Turing was pretty chill about it and said, Don't worry about it. It's far away. He was right. It was far away from 1951. He said, But I'll give you a test so you know when you're close. I'll give you a canary in the coal mine. It's called the Turing test.

00:06:43

Max says that this is one of the reasons that he created what's called the Turing test. You know the Turing test, right?

00:06:49

I think the Turing test is where you're chatting with a, possibly a machine, possibly a human, something on the other side of the screen. The goal of the test is, can the machine fool you into thinking that you're actually chatting with a human?

00:07:02

Yes, that in short is the Turing test. Can you be in conversation with a machine and not know the difference between it and a human being? And Tegmark, he was saying that this wasn't even necessarily just a test of the machine, but it was a way to send a signal to people in the future, to say that once you've crossed this threshold, when machines can master language and knowledge at the level of humans, then you're close. It was like a warning shot to the future to tell them there's no going back. Because soon it will be outside of our control.

00:07:52

Did Turing see this future of machines taking control as something good or something he was warning us about?

00:08:01

Well, that question is very much up for debate right now. A lot of the people who are worried about this moment we're in with artificial intelligence now, they look at this one line where he said, once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control. They say, Obviously, that sounds very foreboding. He was warning us that this was going to be dangerous. But in most of his lectures, he's far more matter of fact. A lot of people point out that Turing himself, he wasn't a moralist. He was a contrarian. He often went as far as to say that when this happened, the machines would deserve our respect. And so a lot of the people that I spoke to said that trying to take Turing or the Turing test and to turn it into something doomer or something accelerationist is missing the point.

00:09:00

I actually think the difference between the optimists and the pessimists can be overstated. I think the difference has always more fundamentally been between people who thought this was a real possibility to take seriously and the people who didn't. I think the main thrust of the argument was to say, look, people, take this seriously. This is a thing.

00:09:24

This is something that I was talking about with Robin Hanson. He is an economist, but he also, for many years, was an AI researcher.

00:09:30

It's a rhetorical device to try to get people to take a space of possibility seriously. Certainly in academia, there's just a long tradition, quite reasonable, of saying, Until you can even tell me what your words mean, I'm not really interested in taking your conversation seriously, right? So Turing was primarily trying to just make sure we could talk about an AI and have that be somewhat precisely defined so that you wouldn't dismiss the idea by just saying that's too vague.

00:10:02

It sounds like Turing didn't just give us a test of how we'd know the AI could think, but actually gave us some practical, concrete way of saying, Okay, no, this is artificial intelligence. When it can talk back to us and convince us it's human.

00:10:17

See, it's more complicated than that. That's what I thought the Turing test was for a long time, that once it can do this, it is now officially an AI. But he actually didn't think that it was going to suddenly be super useful in this moment. It wasn't that now you are in the presence of a true thinking machine, and it's tomorrow going to start taking over. What he was doing was more nuanced. He was giving the field of computer science a goal that they could shoot for. He was saying this is something you could technically go out and build. But in this deeper sense, he was producing this story that the broader public could understand. Making this prediction that once this moment happened, once a machine could so thoroughly mimic how an intelligent person communicates, that we would treat it differently, that we would imbue it with something profound and utterly transformative.

00:11:24

It's not just a test of the machine and how far it's come, but it would change our relationship to the machine.

00:11:30

Right. In some ways, it's just as much a test about us as it is a test about the machine.

00:11:36

What happens to that dream of Turing's?

00:11:40

Well, sadly, Turing dies in 1954. It's a terrible story. He was prosecuted by his own government for being gay.

00:11:51

Because homosexuality was illegal in England at that time.

00:11:54

Yes. He was sentenced to what they called chemical castration, and allegedly, he committed suicide. But Turing's dream doesn't die with him. It gets picked up by a group of scientists in the US, many of whom actually knew Turing personally. A lot of them had corresponded with him. They raised money for a 10-week summer program at Dartmouth, where the idea was to come together, create a prototype of a true thinking machine, and to turn the entire pursuit into a proper field of study.

00:12:30

The AI discipline is founded in the summer of 1956.

00:12:34

This is a story that I talked to Karen Hao about. She is a tech reporter and the author of Empire of AI.

00:12:40

The people that had gathered together were already very accomplished scientists, giants in their own fields.

00:12:49

So this summer program included people like Claude Shannon, who was the inventor of information theory.

00:12:55

Also the namesake of the Anthropic AI model, Claude.

00:12:59

Indeed. He was. Nathaniel Rochester was there. He was the maker of the first commercial computer at IBM. There were also people there like Marvin Minsky and John McCarthy, who would found the first AI labs at MIT.

00:13:13

And the reason why this is considered the origin story of the field is because a Dartmouth professor, John McCarthy, coined the term artificial intelligence and started using it for the first time to form this new field.

00:13:26

This is where we first get the name Artificial Intelligence.

00:13:31

What did they call these thinking machines before that?

00:13:33

Well, Turing called them thinking machines. Some of them were calling it automata.

00:13:38

John McCarthy tried originally to call it automata studies.

00:13:41

Which, personally, I like.

00:13:43

It just didn't sound exciting. They were trying to attract funding from government and from nonprofits, and it just wasn't working. So he specifically went to cast about for a more evocative phrase and hit upon artificial intelligence.

00:13:59

And this name Artificial Intelligence. Karen says that this would forever shape the field, partly because it would tie it to the thorny question of what actually is human intelligence.

00:14:12

And because there is no consensus around where human intelligence comes from, there was plenty of debate and discourse at the time about what it would actually take to get machines to think.

00:14:26

Everybody there believed in what Turing was saying, that machines could think, that AI could be built. But the debate was more about, well, what exactly are we mimicking? What is the intelligence we're copying?

00:14:38

Yes. How does our intelligence work, and therefore, how would we recreate it? Right away, that question, it splits the AI researchers into these two different groups, and it gives birth to two different paths towards AI that continue to this day.

00:14:58

And the dominant two camps that emerged were called the connectionists and the symbolists. The symbolists believe that human intelligence comes from the fact that we know things. If you want to recreate intelligent computer systems, you need to encode them with databases of knowledge. So this created a branch of AI focused on building so-called expert systems.

00:15:22

All right, so the first group, the symbolists, they get their name because they believed you could build intelligence into a machine by writing rules and using symbols like numbers and words, and that you could essentially encode humanlike intelligence and create something like a human expert.

00:15:44

But the connectionists, they believe human intelligence comes from the fact that humans can learn. And so to recreate intelligent computer systems, then we need to develop machine learning systems, software that can learn from data.

00:16:00

But the second group, the connectionists, they say human intelligence, it doesn't come from shoving a bunch of expertise and logic into our brains. It comes from our ability to find our own patterns in the world and to find our own connections between those patterns. So instead of trying to build something that's like a human expert, we should try and build something closer to a human toddler or a baby.

00:16:25

When you watch babies grow up, they're constantly exploring the world. They're gathering all of this experience, and they're quickly updating their model of their environment around them. And that's what the connectionists believe was the primary driver of how we become intelligent.

00:16:42

That's cool that you could create an AI not that knows the things you taught it, but that could go out and learn new things from the patterns it finds in the data. But thinking about a baby that will grow up and come to its own conclusions, derive its own values, wouldn't that model be a lot less predictable than the symbolist human expert?

00:17:05

You're teasing at the current drama, Greg. Yes. The connectionists' model would be far less controllable, far more unwieldy, and eventually, that's going to strike some fear into many a heart.

00:17:19

In the long run, the framing that they picked, the ideas that they discussed back then, the debates that emerged that summer have continued to have lasting impact to present day.

00:17:30

Okay, so the summer program ends and they have a new name, they have a debate. What else do they got?

00:17:36

Well, they've got a lot of dreams, they've got a lot of theories, but they do not have a lot of money. Remember, like we said, computer science is a brand new field. Artificial intelligence is like a branch of computer science. It's a new experimental field inside of a new and experimental field. And so they don't have the resources that they need to really build the models and turn their math and their dreams into something functional in the world. However, all that was about to change because of the arrival of, in many ways, a new battle.

00:18:14

CBS television presents a special report on Sputnik 1, the Soviet space satellite.

00:18:20

On October fourth, 1957, the USSR becomes the first nation to ever get a satellite into orbit.

00:18:30

Really quite an advancement for not only the Russians, but for international science. It's the first time anybody has ever been able to get anything out that far in space and keep it there for any length of time.

00:18:40

It's this absolutely amazing achievement for science. But of course, it comes right in the midst of the Cold War. It gets the American people alarmed that a foreign country, especially an enemy country, can do this, and we fear this. It triggers all of these fears about the communists winning the space race.

00:19:00

Let's not fool ourselves. This may be our last chance to provide the means of saving Western civilization from annihilation.

00:19:12

In response to this, in a move that modern day accelerationists say that we should embrace in our current AI race with China, the United States decides, All right, let's accelerate.

00:19:27

These are extraordinary times, and we face an extraordinary challenge. Our strength, as well as our convictions, have imposed upon this nation the role of leader in freedom's cause.

00:19:40

They flood a bunch of money and a bunch of resources into universities, into science labs. They make this huge national effort to recruit all kinds of young, talented people to go into space, to go into technology.

00:19:54

Now it is time to take longer strides. Time for this nation to take a clearly leading role in space achievement, which in many ways may hold the key to our future on Earth. Ten, nine. Ignition sequence start. Six.

00:20:13

And it works.

00:20:15

Lift off. We have a lift off. Lift off on Apollo 11.

00:20:20

The US wins the race. We are the first to get to the moon.

00:20:24

That's one small step for man, one giant leap for mankind.

00:20:33

It becomes really one of the most inspiring events in history. This realization of an enormous dream, this example of what can happen when people put everything into this goal of reaching beyond what was previously thought possible. It turns out that this also was an absolute boom time for the field of artificial intelligence.

00:21:01

The Cold War was really a fountain of youth for research.

00:21:08

I found this interview of Marvin Minsky. He was one of the guys at the Dartmouth Summer program.

00:21:13

There were huge amounts of money, more than you needed.

00:21:18

He was saying that suddenly, these AI labs had more money than they knew what to do with. They were finally able to build the first AI models. They ended up actually building the first AI chatbot, named ELIZA.

00:21:30

Things proceeded very, very rapidly. A new generation of ideas every two or three years. It was a wonderful period where you had to change everything you thought.

00:21:43

In fact, the field of AI was moving at such a rapid pace that a lot of AI researchers began to think that by the time the astronauts got to the moon, that we were going to be living here on Earth alongside thinking robots.

00:22:02

The Thinking Machine. Produced by the CBS Television Network.

00:22:10

I found this old CBS archive from the 1960s.

00:22:13

Can machines really think? I'm David Wayne, and as all of you are, I'm concerned with the world in which we're going to live tomorrow. A world in which a new machine may be of even greater importance than the atomic bomb.

00:22:28

And in it, they're interviewing all these AI researchers at the time.

00:22:32

But I think the computers will be doing the things that men do when we say they're thinking.

00:22:37

All of them are confident that this breakthrough was close.

00:22:41

I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratories, which is not too far from the robot of science fiction fame. I'm convinced that machines can and will think in our lifetime.

00:22:54

Okay, so they sound very optimistic about the AI they were going to make, but were there any fears and doubts about what would happen once they made it?

00:23:03

Well, this is something that a lot of people, especially those who are worried about our current AI, marvel at about this time period. This is something that came up when I was talking to Nick Bostrom, the author of that book Superintelligence.

00:23:17

The early pioneers were actually quite optimistic about the timelines. They thought maybe in 10 years or so, we would be able to get machines that can do all that humans can do. In some sense, they took that seriously. I guess that's part of what drew them into trying to program computers to do AI stuff. But in another sense, they weren't serious at all because they didn't then seem to have spent any time thinking about what would happen if they were right, if they actually did succeed in getting machines to do everything that humans can do.

00:23:49

You're saying that in the research labs during this time, there weren't people who were saying, Oh, my God, what's this going to do to the economy? Oh, my God, what if this changes society forever?

00:24:00

Yeah, there was very little thinking about the ethics of this, the political implications, like safety. It was as if their imagination muscle had so exhausted itself in conceiving of this radical possibility of human-level AI, that they couldn't take the obvious next step of saying, Well, probably if we get that, we will have super intelligence not too long after.

00:24:24

I think that the best explanation for just why this time period had such a different mindset than we do today came from something that I heard from Robin Hanson. Why is it that you think there was not a big, robust discussion about AI safety and that they were moving so fast, asking so few questions? Why wasn't it more safetyist at the time?

00:24:47

Well, first of all, safetyism as a cultural trend is just something that's happened since then, mostly. The world back then wasn't very safetyist, honestly. Right.

00:24:57

No seat belt laws.

00:24:58

They didn't have seat belts, for example. Okay. Secondly, this was a technological framing. This was a, can we do this? Is this technologically feasible?

00:25:11

He was saying you have to put yourself into their mindset, which is that they are living in the aftermath of the Second World War. Many of these people were veterans of the Second World War. They are worried about another war that could break out with the Soviets. There was this idea that if your scientists could conceive of a new technology, you have to assume that your opponent's scientists have also conceived of that technology.

00:25:40

Because right after World War II, there was clearly this strong perception that the winners are the ones who more effectively pursue possible technological changes. There was this expectation of technological progress and an expectation that in order for your nation to stay competitive with the world, you needed to pursue feasible technologies.

00:26:05

The idea being that it would be a better world if we made this technology and not our enemies.

00:26:10

Good, maybe for humanity, but plausibly also more good for our people.

00:26:14

For our nation.

00:26:15

Our nation or whatever, if we are pursuing this before the rest.

00:26:20

Do we know if they actually thought the Soviets were trying to compete in AI?

00:26:24

Yes. There was a rumor that was going through the Department of Defense, circulating through the government that the Soviets and their AI researchers were right on America's heels. When you read the communication about it, it sounds so similar to how we think of China and the US with AI today. Right.

00:26:45

The US has no choice but to barrel forward with creating this AI because, God forbid, the communists get their hands on it first.

00:26:52

Different communists, but same fear as now. However, there was this one guy in the field of AI research who did express some serious concerns about what might happen. His name was Dr. I. J. Good. His friends called him Jack.

00:27:09

But literally Dr. Good?

00:27:11

His name is- His name is literally Dr. Good. He was friends with Alan Turing. They worked together on that code-breaking machine in World War II. To many of those who are concerned about AI today, they think that Good is just as influential to this debate as Turing.

00:27:31

As a consequence, I think mainly of conversations with Turing during the war. I was also fascinated, though not to the extent that Turing was obsessed with the notion of producing thinking machines. So I was quite interested in them. And in fact, in 1958- And that's because Good was the first person to publish this idea that he called ultraintelligence, which today we call superintelligence.

00:27:58

This idea that once the thinking machine became a true artificial intelligence, that it could think as well as or better than a human, that that machine would then create an even more intelligent machine, which would create an even more intelligent machine, and you would have what he called an intelligence explosion.

00:28:19

Almost like a nuclear chain reaction. That it affects everything around it.

00:28:23

This moment where everything would change for the human race. But what's interesting about it is that he opens up this paper by saying that humanity has no choice, really, but to create this machine.

00:28:38

Well, I wrote a paper in 1965 called Speculations Concerning the First Ultra-intelligent machine. I started off that letter by banging a gong by saying, The survival of humanity depends on the early construction of an ultra-intelligent machine.

00:28:56

Why did I. J. Good think that our survival as a species depended on making this machine, and it sounds like making it quickly? Meaning not just, Oh, we really should make this thing, but we need to make it for our own survival.

00:29:10

Well, he wrote this paper and he put this idea out there in 1965. In some ways, 1965 is a world away from 1956. At the time, he and many others had started to worry about the existential risks facing humanity, the most obvious being this fear of a nuclear war between the US and the Soviet Union that ended up leading to mutual destruction for everyone on Earth.

00:29:39

This is a time when public school kids are regularly doing nuclear drills, jumping under their desks just in case.

00:29:46

Exactly. This was also a time of the largest population boom in the history of humanity. There were concerns about there maybe not being enough food to feed all of these new humans that were coming into the world. This was the early days of what would become the environmentalist movement, concerns about what cities full of smog and pollution were going to do to our increasingly crowded world. People like I. J. Good were saying that we are going to need a technological solution to the problems of our age, and that this ultra-intelligent machine, this would be a safeguard against all future existential crises that face the human species.

00:30:32

What strikes me, though, is that all of these existential concerns are concerns brought about by technology. So I. J. Good felt like more technology was the answer to these technological problems?

00:30:45

Once again, one of the reasons that he is now so legendary among those who are concerned about this AI moment we're living in right now is that he was the first to really point out that even though there would be all these amazing benefits in having a super powerful, intelligent machine, that it also would pose its own existential threat. The most famous line in this paper is, The first ultra-intelligent machine is the last invention that man ever need make.

00:31:19

Because every other invention past then would be invented by AI. We wouldn't need to invent anything else. Exactly.

00:31:25

But he follows that line up by saying that to experience the benefits and the protections of this intelligence explosion, that we would need to find some way to ensure that that machine is docile. That was his word.

00:31:43

It's almost like he's offering hope, but with a very large caveat.

00:31:47

Yes. Essentially, he's saying, We will make this. Maybe we need to make this. But everything hinges on how we make this and what we do between now and when that machine arrives. Because if we succeed in making this machine before we figure out how to make it, quote, unquote, docile, his warning was that man's last invention might end up being our final mistake.

00:32:24

We'll be right back after this short break.

00:32:46

The Last Invention is sponsored by Ground News. Ground News is one of the most helpful tools that I use to avoid the echo chambers and media bias online, especially when it comes to shining a light on our blind spots. So whether you're politically on the left or the right or somewhere in the center, the blind spot feature from Ground News highlights the stories that tend to be disproportionately covered by one side or the other. As an example, take these two stories about President Donald Trump. One, which had low coverage among left-leaning outlets.

00:33:18

It's a very important relationship.

00:33:19

We're going to get along good with China.

00:33:21

Reported that Trump says US will accept 600,000 Chinese students as part of a trade deal.

00:33:28

I hear so many stories about we're not going to We're going to allow their students. We're going to allow, it's very important, 600,000 students.

00:33:33

Another largely uncovered by right-leaning outlets.

00:33:36

Trump's social media company is using Crypto.com's-

00:33:40

Trump family Crypto Empire expands with Crypto.com partnership.

00:33:45

That's our transactional Trump family. Make some money when you can.

00:33:48

By seeing which stories are amplified or ignored, depending on the outlet, Ground News helps you step outside the filter bubbles that shape most people's news diets, giving you a fuller picture of what's actually happening. I really think that if you like this podcast, you're going to like their mission. Go to groundnews.com/invent to get 40% off the same unlimited access Vantage plan that we use. You can even sign up to stay up to date with the biases in your coverage proactively with the weekly Blind Spot Report, delivered directly to your inbox. This is a great way to support them and the work that they do because Ground News is a subscriber-supported platform. We appreciate what they're up to. We appreciate their support for this podcast, so go check them out and make sure to use our link, groundnews.com/invent, so they know we sent you. This episode of The Last Invention is brought to you by FIRE, the Foundation for Individual Rights and Expression. There's a pattern that you can trace throughout history. In ancient Athens, Socrates was put to death for asking tough questions of the powerful. Centuries later, monarchs banned and burned books they considered dangerous.

00:35:00

In the last century, authoritarian governments shut down newspapers, censored broadcasts, even jailed their critics. The struggle was always the same. Who gets to decide what people can know? Today, that struggle is playing out in a new arena, and the risk now is subtler. Search results that quietly vanish, recommendation engines that steer us toward safe and comfortable answers, and AI filters that can suppress ideas before we ever even see them. That's where FIRE comes in. FIRE has spent decades defending free inquiry on our campuses, in the courts, and in our culture. Now, through a $1 million grant program in collaboration with the Cosmos Institute, they are supporting projects that keep free thought alive in the era of AI. Join us today at thefire.org/thelastinvention. By supporting FIRE, you are protecting the future of free inquiry in America and ensuring that tomorrow's most important questions can still be asked. Once again, visit thefire.org/thelastinvention. Thanks.

00:36:06

Okay, so Andy, where we left off, Alan Turing had infused the field of computer science with this dream of a thinking machine. The summer program at Dartmouth took up that dream, founded the field, gave it the name AI. The Cold War supplied way more money than anyone knew what to do with. Then AIs actually start being built at that point, right? Mm-hmm. There's all this confidence that we're going to the moon, we're also going to start living with robots. We did make it to the moon. We did not start living alongside intelligent robots.

00:36:40

Sadly, no.

00:36:41

Why? What happened to Turing's dream?

00:36:44

In short, throughout the 1960s, as incredible in some ways as the advancements were that the field of AI was making, they failed to live up to the hype that they created. Their AI models don't scale. They're not seen as very useful. They fail to hit a number of their benchmarks, and eventually, their funding starts to dry up. Eventually, the US government does realize that the USSR is not on the brink of creating a true thinking machine, and the entire field of AI enters what many people call an AI winter. But the idea of artificial intelligence, it does not enter an AI winter. In fact, if anything, it moves even further into the mainstream, but not because of any advancements being made by the world of technology, but because of the world of science fiction. Much of that stems from the 1968 movie, 2001: A Space Odyssey by Stanley Kubrick and Arthur C. Clarke.

00:38:08

Okay, 2001: A Space Odyssey, great movie. But is this then the first time AI appears on the screen?

00:38:16

Well, yes and no. On the one hand, there had been these somewhat intelligent machines that had been in movies like Metropolis, going back to 1927. Writers like Isaac Asimov in the '30s and the '40s were writing these really interesting stories about a time where human beings lived alongside intelligent robots. But what makes 2001 so singular is that its main character isn't a humanoid robot, but it's something much more like a super intelligent AI system.

00:38:48

What's the difference between a robot and a super intelligent system?

00:38:51

Previous ideas of this thinking machine were a cross between the Tin Man and Frankenstein. They spoke robotically, and they were dumb. But Hal 9000- Good afternoon, Hal.

00:39:07

How's everything going? Good afternoon, Mr. Amer. Everything is going extremely well.

00:39:12

He's not a clunky robot, but he's something more like software. And he's rational, he's smart, he's curious.

00:39:19

Good evening, Dave. How are you doing, Hal? Everything's running smoothly. And you? Oh, not too bad. Have you been doing some more work?

00:39:27

A few sketches.

00:39:28

May I see them? Sure.

00:39:31

Right. Even the way he's manipulative, it feels human in some way.

00:39:35

Do you mind if I ask you a personal question?

00:39:37

Like a nosy HR rep.

00:39:40

No, not at all. Well, forgive me for being so inquisitive, but during the past few weeks, I've wondered whether you might be having some second thoughts about the mission.

00:39:51

And one of the reasons that Hal 9000 feels so different is because Stanley Kubrick and Arthur C. Clarke, his co-writer, they actually constructed the character of Hal in consultation with the top AI researchers at the time.

00:40:07

One day, he just turned up, and he was going to make this movie. He was intrigued by artificial intelligence and invited me to come out to the studios.

00:40:17

Marvin Minsky, who was a part of the original Dartmouth summer program, he worked with Kubrick on the film.

00:40:23

Well, that was a very confusing cooperation because Stanley Kubrick would not tell me anything about the plot.

00:40:34

He says that specifically, Kubrick consulted him and his colleagues at MIT about what the AI system would look like, how it might function, what esthetics it might have. But for the question of how an AI system might pose a real danger, might break bad, they consulted none other than I. J. Good.

00:41:02

And so what did Dr. Good let them know about how AI might break bad?

00:41:07

As you remember, most of the movie takes place inside of a spaceship. There are a number of human astronauts as well as Hal 9000, and they're on this mission. You never quite get to know exactly what the mission is they're on, but you understand that it is of grave importance for the entire human race.

00:41:30

Hal, you have an enormous responsibility on this mission, in many ways, perhaps the greatest responsibility of any single mission element. Does this ever cause you any lack of confidence?

00:41:41

And early on in the film, you realize that Control Center back on Earth has given Hal strict orders to ensure that the mission is successful.

00:41:48

Let me put it this way, Mr. Amer. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.

00:42:02

And the drama in the movie is that at a certain point, Hal becomes convinced that the human astronauts on this spaceship are an impediment to Hal 9000 accomplishing its ultimate mission.

00:42:17

Oh, Hal, do you read me? Do you read me, Hal? Affirmative, Dave. I read you.

00:42:23

And so in a somewhat cold and calculated way- Open the pod bay doors, Hal.

00:42:30

I'm sorry, Dave. I'm afraid I can't do that.

00:42:33

Hal makes the decision to kill the hibernating crew. And in the iconic scene in the movie- What's the problem?

00:42:42

I think you know what the problem is just as well as I do.

00:42:46

What are you talking- Hal locks the captain out of the ship.

00:42:49

This mission is too important for me to allow you to jeopardize it. I don't know what you're talking about, Hal. Open the doors. Dave, this conversation can serve no purpose anymore. Goodbye.

00:43:04

Hal. Hal. Hal.

00:43:12

And this is, of course, the scene where he goes rogue. He doesn't follow a direct order by the captain of the ship.

00:43:18

Well, this is also one of the things that makes 2001: A Space Odyssey such a singular movie, especially in this time, because Hal doesn't go rogue the way that Frankenstein goes rogue. Hal doesn't go rogue the way that the robot in Metropolis goes rogue.

00:43:38

You mean he doesn't try to destroy his creator. He doesn't become a monster?

00:43:42

No, I mean, he does become a murderer, a mass murderer of people that he knows. That's not good. But the idea is that it is not coming from some rage or some- Evil. Yeah, or some sense of evil. He is actually doing this out of an enactment of the values he's been programmed to have. And this is an idea born not just out of a desire for a great plot, although it is a great plot, but out of I. J. Good trying to imagine the kinds of conflicts that we are going to come into one day in the future when he believed we would begin to make these ultra-intelligent machines. And so in a way, the movie becomes most people's introduction, not just to artificial intelligence, but to the threat that a future AI might pose. And of course, 2001, it's this absolutely massive hit.

00:44:52

The winner is Stanley Kubrick in 2001.

00:44:56

It wins the Academy Award. It wins like every major award. It's now seen as one of the most influential films of all time. And really, from that moment in 1968 up through today, artificial intelligence becomes this mainstay in American entertainment.

00:45:15

You're reading a magazine, you come across a full-page nude photo of a girl. Is this testing whether I'm a replicant or a lesbian, Mr. Deckard? Just answer the questions, please.

00:45:23

Movies like Blade Runner, The Terminator. Come with me if you want to live. The Matrix.

00:45:29

The future is our world. Yes, the future is our time.

00:45:35

More recently, Ex Machina. Hello. Hi. Do you have a name?

00:45:40

Ava.

00:45:42

Over time, AI's place in science fiction, it has presented this interesting problem for everyone today who is concerned about AI.

00:45:52

One of the effects, in some sense, is to make people aware of possibilities and to have images to hang words onto when topics come up, right?

00:46:02

Again, Robin Hanson told me that Sci-Fi did the same thing that Alan Turing was trying to do. It gave the public a way to talk about AI, a way to picture AI.

00:46:13

Right. So if we talk about robots and how they might be in society and what they might do, these images from science fiction are available for us to use to fill in those words with images.

00:46:27

But at the same time, it also weirdly put AI into this category that makes it easier to dismiss.

00:46:35

By having this category of science fiction, which everybody agrees shouldn't be taken seriously, a concept that shows up and seems to fit into science fiction can just be dismissed among serious people, and it has been.

00:46:48

Right. This idea that that's just sci-fi. We don't need to take it seriously.

00:46:52

Exactly. But that works on both the doomer and the accelerationist side.

00:46:56

This is something that came up when I was talking with Sam Harris. He was saying that the idea of AI almost seems too cool for people to see it as a real threat. There's just something fun and sexy about these science fiction tropes. When you watch a film like Ex Machina, it is just fun, and it's hard to... You're not really thinking about your kids dying awful deaths. It's not the same thing as like flesh-eating bacteria, where you just think, let's avoid this at all costs. I don't want to think about it. It's just like this is just all awfulness any way I look at it. What's going on in the glass box in Ex Machina, that's fun.

00:47:42

On the other side, though, the accelerationists say, yes, but most of the academic safetyism and government safetyism, it's also neglecting a key emotion, which is the enthusiasm and joy and excitement of humanity as we go accelerating forward into vast spaces of technical possibilities.

00:48:04

There's not a lot of movies out there where the plot is, We create AI and the future is awesome.

00:48:12

Exactly. Our world needs to see that excitement and enthusiasm to allow us to be less safetyist, because they say, plausibly, that our world is not realizing the potential of a lot of technologies because we're so safetyist.

00:48:31

When I talk to people who are more accelerationist, they say that the sci-fi problem is exactly the opposite of what Sam Harris is saying. The contemporary AI safety discourse, when you actually look at where they got these ideas from, it's literally fiction. People like the online commentator and social scientist Justin Murphy. The fact is that this is a highly imaginative and highly creative possibility, which is definitely worth thinking about, but it's not scientific or nearly as technical as it pretends to be.

00:49:07

It is literally grounded primarily in fiction.

00:49:09

He was saying that the scouts, the doomers, they've essentially allowed science fiction and these science fiction tropes to scare them and shape their sense of reality. So it sounds like you're saying that science fiction is actually playing an important role in where we're at with this technology, where we're at with this debate right now.

00:49:30

Absolutely. I've always taken the STEM view of, we should have careful analysis of what's possible and what we should achieve. I've always assumed that the ends we were trying to pursue were just shared and obvious. I've, more recently in the last couple of years, really come to appreciate cultural evolution and its power. I realized that you can't take inspiration and motivation for granted. Honestly, motivation is the closest thing to magic we have in our world. If people are motivated, they do far more than if they're not. And we just don't really understand how it works, what actually motivates people. But it's this power that makes everything work. And science fiction has been a reservoir of motivation.

00:50:15

Robin Hanson was saying that in many ways we wouldn't be in this moment we're in right now. You and I wouldn't be doing this podcast. There wouldn't be this big debate happening around AI if it were not for science fiction and the ways that it colored how we saw things like ChatGPT.

00:50:32

When ChatGPT showed up three years ago and people saw that they could talk to something that seemed to talk back reasonably, that had an enormous cultural impact, in part because it resonated with decades of science fiction.

00:50:47

Right.

00:50:48

This trillions of dollars of investment that is going into AI is there in substantial part because of that resonance. That's what made them excited to invest and pursue AI. When you saw ChatGPT right in front of you talking back, it's that reservoir of science fiction motivation that convinced people, wow, I should be pursuing this. And it's the reservoir of science fiction fear that will convince people, if they do, that they should be scared of this. Both of those are in the reservoir. They're both resources for both sides of this. But unfortunately, the logical and analytical arguments are just not the main force that powers action in these areas.

00:51:31

Hanson's point, at least how it hit me, was that ultimately this comes down to what story human beings believe we're living in. This debate swirling around artificial intelligence, it may be decided by what we come to believe happens next in that story.

00:51:52

Which is why ChatGPT just feels so eerie because it's a new technology, but it's not a new story. It is literally Alan Turing's story finally coming true.

00:52:03

That's exactly what I keep thinking, that the technology that has launched us into this moment is a technology that has found a mastery of human language and communication exactly as he predicted. Here we are, having that other side of the line moment as a human species.

00:52:25

From the '50s, when the term artificial intelligence was coined, until four years ago, AI was chronically overhyped. Everything took longer than promised.

00:52:36

This is something I was talking to Max Tegmark about. He and his colleagues, they believe that the signal Turing sent all those years ago, we're in that moment now.

00:52:47

And then it switched about four years ago to becoming underhyped, when things happened faster than we expected. Almost all my AI colleagues thought that something as good as even ChatGPT was decades away, and it wasn't. It already happened. And since then, AI systems have gone from high school level to college level to PhD level to professor level to beyond in some areas, a lot faster than even people thought after ChatGPT. So we're in this underhyped regime now, where something we thought we were going to have decades to figure out, the question of how to control smarter-than-human machines, we might only have two years or five years. And I think that's fundamentally why so many people are freaking out about this and why you're doing this important piece of journalism now. It's not like we didn't know that we were going to have to face this at some point, but it's been a big surprise to most of the community that now is not 2050, that it's only 2025, and we're already here getting so close to the precipice.

00:53:59

Next time on The Last Invention.

00:54:01

The battle of Man Against Machine.

00:54:03

The AI Winter Thaws.

00:54:05

Machine didn't just beat man, but trounced him. The victory seemed to raise all those old fears of superhuman machines crushing the human spirit.

00:54:15

How neuroscientists, games, and gamers end up unlocking the door to artificial intelligence.

00:54:21

Oh, the computer, this machine, can be creative.

00:54:33

The Last Invention is produced by Longview. To learn more about us and our work, go to longviewinvestigations.com. Special thanks this episode to Peter Clarke. See you soon, and thanks for listening.

00:54:49

This episode is sponsored by Ground News, the app that helps you spot media bias and see a broader picture of the news shaping our world.

00:55:04

Get 40% off their Vantage Plan at ground.news/invent. This episode is sponsored by FIRE, Defending Free Thought in the Age of AI. You can learn more at thefire.org/thelastinvention.

AI Transcription provided by HappyScribe
Episode description

In 1951, Alan Turing predicted machines might one day surpass human intelligence and 'take control.' He created a test to alert us when we were getting close. But seventy years of science fiction later, the real threat feels like just another movie plot.

THIS EPISODE FEATURES:

Connor Leahy, Max Tegmark, Robin Hanson, Karen Hao, Nick Bostrom, Sam Harris, and Justin Murphy

LINKS:

Karen Hao’s book “Empire of AI”

Nick Bostrom's book "Superintelligence"

CREDITS:

This episode of The Last Invention was reported and produced by Andy Mills, Gregory Warner, Andrew Parsons, Megan Phelps-Roper, Matthew Boll, Seth Temple Andrews, and Ethan Mannello. It is hosted by Gregory Warner

Music for this episode was composed by Scott Devendorf, Ben Lanz, Cobey Bienert, and Matthew Boll

The Last Invention artwork by Jacob Boll

To become a Longview subscriber you can visit us here

Thank you to our sponsors Ground News and FIRE

GROUND NEWS: Go to groundnews.com/invent to get 40% off unlimited access to global coverage of the stories shaping our world.

FIRE

This is a paid sponsorship link.