
Transcript of JD Vance's AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant

All-In with Chamath, Jason, Sacks & Friedberg
Published 10 months ago
00:00:00

Great job, Naval.

00:00:01

You rocked it. Maybe I should have said this on air, but that was literally the most fun podcast I've ever recorded.

00:00:07

Oh, that's on air. Cut that in.

00:00:08

Yeah, put it in the show.

00:00:09

I had my theory on why you were number one, but now I have the realization.

00:00:13

What's the actual reason? You know us for a long time.

00:00:14

What was your theory? What's the reality?

00:00:16

My theory was that my problem with going on podcasts is usually that the person I'm talking to is not that interesting. They're just asking the same questions and they're dialing it in and they're not that interested. It's not like we're having a peer-level actual conversation. That's why I wanted to do AirChat and Clubhouse and things like that, because you can actually have a conversation. I see. What you guys very uniquely have is four people of whom at least three are intelligent. No, I'm kidding. How could you say that? Sacks isn't even here. How did you...

00:00:46

Sacks isn't even here and you say that? That is so cold.

00:00:50

That's the best. Of whom at least three are intelligent and all of you get along and you can have an ongoing conversation. That's a very high hit rate. Normally in a podcast, you only get one interesting person, and now you've got three, maybe four. That, to me, was why all this is successful. Who invited this guy?

00:01:07

Who are you talking to?

00:01:08

He's number four. We don't know. He'll remain mysterious forever. Of the four, the problem is if you get people together to talk, two is a good conversation, three, possibly; four is the max. That's why at a dinner table at a restaurant, four talk. You don't do five or six, because then it splits into multiple conversations. You had four people who were capable of talking. That's what I thought was the secret, but there's another secret. The other secret is you guys are having fun. You're talking over each other. You're making fun of each other. You're actually having fun. That's why I'm saying this is the most fun podcast I've ever been on. That's why you'll be successful.

00:01:43

Welcome back anytime, Naval. Thanks, brother. Thank you.

00:01:45

Welcome back. Keep it fun. Yes, absolutely.

00:01:47

Keep it fun, guys. Thanks for having me.

00:01:49

188 and three smart guys. I can't believe that. I can't even believe you'd say that about Sacks. He's not even here to defend himself.

00:01:57

Sorry, David.

00:02:00

Let your winners ride.

00:02:02

Rain Man, David Sacks.

00:02:04

And it said, We open sourced it to the fans, and they've just gone crazy with it.

00:02:10

Love you guys.

00:02:11

The Queen of Quinoa. I'm going all in. All right, everybody. Welcome back to the number one podcast in the world. We're really excited today. Back again, your Sultan of Science, David Friedberg. What do you got going on there, Friedberg? What's in the background? Everybody wants to know. Namaste. Namaste.

00:02:31

I used to play a lot of a game called SimEarth on my Macintosh LC way back in the day.

00:02:38

That tracks.

00:02:39

Yeah.

00:02:40

That tracks. And of course, with us again, your chairman.

00:02:44

What games did you play growing up, J. Cal? Actually, I'm curious. Did you ever play video games?

00:02:47

Let's see: Andrea, Allison, Susan. I mean, there were a lot of cute girls. I was out dating girls, Friedberg. I was not on my Apple II playing Civilization.

00:03:02

Let me find one of those pictures.

00:03:04

Don't get me in trouble, man. The '80s were good to me in Brooklyn.

00:03:08

Rejection, the video game.

00:03:10

Yes. You have three lives. Rejected. It's a numbers game, Chamal. As you know, as you well know, it is a numbers game.

00:03:19

Nick, go ahead. Pull up Rico Suave here.

00:03:22

Oh, no. What is this one?

00:03:23

Instead of playing video games, here I am. No, that's me in the '80s.

00:03:26

That's fat J. Cal. That's fat J. Cal. Nick, help out your uncle. He's out slaying.

00:03:32

Yeah, here he is out slaying. Help out your uncle with the thin J.

00:03:35

Cal. That's pre-Ozempic.

00:03:36

You know what he was slaying in there?

00:03:37

Exactly. A snack.

00:03:38

You were pre-Ozempic and post-Ozempic, right?

00:03:42

Correct. And weight lifting, both of them.

00:03:44

Dirty.

00:03:46

Lay's potato chips. Go find my Leonardo DiCaprio picture, please, and replace my fat J. Cal picture with that. Thank you. Oh, God, I was fat. Man, plus 40 pounds is a lot heavier than I am. It's no joke.

00:03:58

It's no joke.

00:03:59

40 pounds is a lot. It's no joke, though. There's so many great emo photos of me.

00:04:03

I'm proud of you.

00:04:04

No joke. Thank you, my man. If you want a good photo.

00:04:08

Can you get through the intros, please, so we can start? Come on, quick.

00:04:10

How are you doing, brother? How are you doing, chairman dictator, Chamath? Good, good. You're good? You're good? You're good? All right. We are really excited today. Today, for the first time on the All-In podcast: the Iron Fist of AngelList, the Zen-like mage of the early stage. He has such a way with words. He's the Socrates of nerds. Please welcome my guy, Namaste, Naval. How are you doing? The intros are bad.

00:04:38

That is the best intro I've ever gotten. I didn't think you could do that. That was amazing. That's your superpower. There you go. Right there. Lock it in. Quit venture capital. Just do that. Absolutely.

00:04:48

That's actually, you know what? Interestingly- Number one podcast in the world, like someone said. I mean, that's what I'm manifesting. It's getting close. We've been in the top 10. So, I mean, the weekends are good for All-In.

00:05:00

This one will hit number one. This one will go viral.

00:05:02

I think it could. If you have some really great pithy insights, we might go right to the top because you have a new audience.

00:05:09

I just got to do a Sieg Heil and it'll go viral.

00:05:11

No, no, no.

00:05:13

Are you going to send us your heart?

00:05:15

My heart goes out to you.

00:05:17

My heart, it ends here at the heart. I don't send it out. I keep it right here. I put both hands on the heart and I hold it nice and steady. Hold it in. I hold it in. It's sending out to you, but just not explicitly. All right. For those of you who don't know, Naval was an entrepreneur. He kicked a bit of ass. He got his ass kicked, and then he started Venture Hacks. He started emailing folks, 20 years ago, maybe 15, saying, Here are some deals in Silicon Valley. He went around, he started writing 50K, 100K checks. He hit a bunch of home runs, and he turned Venture Hacks into AngelList. Then he has invested in a ton of great startups. Maybe give us some of the greatest hits there, Naval.

00:06:03

Yeah: Twitter, Uber, Notion, a bunch of others, Postmates, Udemy, a lot of unicorns, a bunch of upcoming ones. It's actually a lot of deals at this point. But honestly, I'm not necessarily proud of being an investor. Investor, to me, is a side job. It's a hobby. So I do startups.

00:06:21

How do you define yourself?

00:06:24

I don't. I guess these days, I would say more like building things. Every so-called career is an evolution. All of you guys are independent and you do what you're most interested in. That's the point of making money, so you can just do what you want. These days, I'm really into building and crafting products. I built one recently called AirChat. It didn't work. I'm still proud of what I built, and I got to work with an incredible team. Now I'm building a new product. This time I'm going into hardware. I'm just building something that I really want. I'm not ready to talk about it yet. And you fund it all yourself? Partially. I bring investors along. Last time, they got their money back. Previous times, they've made money. Next time, hopefully, they'll make a lot of money. It's good to bring your friends along.

00:07:09

I'll be honest. I love that you said, I love the product, but it didn't work. Not enough people say that.

00:07:14

Yeah, I know. I built a product that I loved, that I was proud of, but it didn't catch fire. It was a social product, so it had to catch fire for it to work. I found the team great homes. They all got paid. The investors that I brought in got their money back. I learned a ton, which I'm leveraging into the new thing. But the new thing is much harder. The new thing is hardware and software.

00:07:34

What did you learn building in 2024 and 2025 that you didn't know maybe before then?

00:07:40

The main thing was actually just the craft, the craft of pixel-by-pixel designing a software product and launching it. I guess the main thing I took away as a learning was that I really enjoyed building products and that I wanted to build something even harder and something even more real. I think like a lot of us, I'm inspired by Elon and all the incredible work he's done. I don't want to build things that are easy. I want to build things that are hard and interesting. I want to take on more technical risk and less market risk. This is the classic VC learning, which is you want to build something that, if you can deliver it, you know people will want it. It's just hard to build, as opposed to: you build it and you don't know if they want it. That's a learning.

00:08:27

Airchat was a lot of fun. For those of you who don't know, it was like a social media network where you could ask a question and then people could respond. It was like an audio-based Twitter. Would you say that was the best way to describe it?

00:08:40

Audio Twitter, asynchronous, with AI transcripts and all kinds of AI to make it easier for you. Translation. A really good way of trying to make podcasting-type conversations more accessible to everybody. Because honestly, one of the reasons I don't go on podcasts is I don't like being intermediated, so to speak, where you sit there and someone interviews you and then you go back and forth and you go through the same old things. I just want to talk to people. I want peer relationships like you guys have running here.

00:09:07

Naval, what happened? When you went through that phase, there was a period where it just seemed like something had gone on in your life and you just knew the answers. You were just so grounded. It's not to say that you're not grounded now, but you're less active posting and writing. But there was this period where I think all of us were like, All right, what does Naval think?

00:09:27

Oh, really? Okay, that's news to me. I would say it would be the late teens, the early '20s.

00:09:34

Jason, you can correct me if I'm getting the dates wrong, but it's in that moment where these Naval-isms and this philosophy really started to... I think people had a tremendous respect for how you were thinking about things. I'm just curious: were you going through something in that moment?

00:09:48

Oh, yeah, that's right. No, that's very insightful. I've been on Twitter since 2007 because I was an early investor, but I never really tweeted. I didn't get featured. I had no audience. I was just doing the usual techie-guy thing, talking to each other. Then I started AngelList in 2010. The original thing about matching investors to startups didn't scale. It was just an email list that exploded early on, but then just didn't scale, so we didn't have a business. I was trying to figure out the business, and at the same time, I got a letter from the Securities and Exchange Commission saying, Oh, you're acting as an unlicensed broker-dealer. I'm like, What? I'm not making any money. I'm just making intros. I'm not taking anything. It's just a public service. But even then, they were coming after me. I was in it and I'd raised a bunch of money from investors. I was in a very high-stress period of my life. Now, looking back, it's almost comical that I was stressed over it. But at the time, it all felt very real. The weight of everything was on my shoulders. Expectations, people, money, regulators.

00:10:44

I eventually went to DC and got the law changed to legalize what we do, which ironically enabled a whole bunch of other things like ICOs and incubators and demo days. But in that process, I was in a very high-stress period of my life, and I just started tweeting whatever I was going through, whatever realizations I was having. It's only in stress that you are forced to grow. Whatever internal growth I was going through, I just started tweeting it, not thinking much of it. It was a mix of... There are three things that are always running through my head. One is I love science. I'm an amateur. I love physics. Let's just leave it at that. I love reading a lot of philosophy and thinking deeply about it. And I like making money. Truth, love, and money. That's in my Twitter bio. Those are the three things that I keep coming back to. And so I just started tweeting about all of them. And I think before that, the expectation was that someone like me should just be talking about money, stay in your lane, and people had been playing it very safe.

00:11:46

And so I think the combination of the three caught people's attention, because every person thinks about everything. We don't just stay in our lane in real life. We're dealing with our relationships, we're dealing with our relationship with the universe, we're dealing with what we know to be true and with science and how we make decisions and how we figure things out. We're also dealing with the practical everyday material things of how to deal with our spouses or girlfriends or wives or husbands and how to make money and how to deal with our children. I'm just tweeting about everything. I just got interested in everything, and I'm tweeting about it. A lot of it, my best stuff, was just notes to self. It's like, Hey, don't forget this.

00:12:23

How to get rich. Remember that one? How to get rich. That was one of the first threads.

00:12:27

That was a super banger. That one went super viral. I think that is still the most viral thread ever on Twitter.

00:12:34

I like timeless things. I like philosophy. I like things that still apply in the future. I like compound interest, if you will, in ideas. Obviously, recently, X has become so addictive that we're all checking it every day. Elon's built the perfect For You feed. He's built TikTok for nerds, and we're all in it. But normally, I try to ignore the news. Obviously, last year, things got real. We all had to pay a lot of attention to the news. But I just like to tweet timeless things. I don't know. People pay attention. Sometimes they like what I write, sometimes they go nonlinear on me. But yeah, the How to Get Rich tweetstorm was a big one.

00:13:10

Is it problematic when people now meet you, because the hype versus the reality is discordant now? Because if people absorb this content, they expect to see some quasi-deity floating in the air. You know what I mean? Yes.

00:13:25

Yeah. Like many of you, I have stopped drinking, but I used to have the occasional glass of wine. There was a moment there where I went and met with an Information reporter, back when I used to meet with reporters. She said, Where are we going to meet? I said, Oh, let's meet at the wine merchant. We got a glass of wine. She's like, What, you drink? It was like a big deal. I don't know.

00:13:44

I'm so disappointed.

00:13:47

I was like, I'm an entrepreneur. Most of them are alcoholics or on psychedelics or doing whatever it takes to manage.

00:13:54

When they say, I'm in therapy, you know what that's code for.

00:14:00

So yes, it is highly distorted.

00:14:03

Plant medicine.

00:14:04

I'm almost reminded of that line in The Matrix where the agent is about to shoot one of the Matrix characters and says, It's only human. That's what I want to say to everybody: he's only human.

00:14:15

You did recently a podcast with Tim Ferriss on parenting.

00:14:21

This was out there. I love this. I bought the book from this guy. Just give a brief overview of this philosophy of parenting.

00:14:30

I didn't listen to this. I have to write this down. Tell us what is your- You're going to love this.

00:14:34

This spoke to me, but it was a little crazy.

00:14:37

I'm a big fan of David Deutsch. David Deutsch, I think, is basically the smartest living human. He's a scientist who pioneered quantum computation. He's very brilliant. He's written a couple of great books, and they're about the intersection of the greatest theories that we have today, the theories with the most reach: epistemology, the theory of knowledge; evolution; quantum physics; and computation.

00:14:58

This is The Beginning of Infinity. The Beginning of Infinity guy. That's the book that you always reference.

00:15:01

The Beginning of Infinity is the second book.

00:15:02

That you always reference. Correct.

00:15:03

Yes. The Fabric of Reality is the other book. I've spent a fair bit of time with him, done some podcasts with him, hired and worked with people around him. I'm just really impressed, because it's the framework that's made me smarter, I feel like, because we're all fighting aging. Our brains are getting slower and we're always trying to have better ideas. As you age, you should have wisdom. That's your substitute for the raw horsepower of intelligence going down. Scientific wisdom I take from David. Not take, but I learned it from David. One of the things that he pioneered is called Taking Children Seriously. It's this idea that you should take your children seriously, like adults. You should always give them the same freedom that you would give an adult. If you wouldn't speak that way with your spouse, if you wouldn't force your spouse to do something, don't force a child to do something. It's only through the latent threat of physical violence, Hey, I can control you, I can make you go to your room, I can take your dinner away, or whatever, that you intimidate children. It resonated with me because I grew up very, very free.

00:16:03

My father wasn't around when I was young. My mother didn't have the bandwidth to watch us all the time. She had other things to do. I was making my own decisions from an extremely young age. From the age of five, nobody was telling me what to do. From the age of nine, I was telling everybody what to do, so I'm used to that. I've been homeschooling my own kids, so the philosophy resonated. I found this guy, Aaron Stupple, on AirChat. He was an incredible expositor of the philosophy. He lives his life with it, 99% as extreme as one can go. So his kids can eat all the ice cream they want and all the Snickers bars they want. They can play on the iPad all they want. They don't have to go to school if they don't feel like it. They dress how they want. They don't have to do anything they don't want to do. Everything is a negotiation: negotiation, explanation, just like you would with a roommate or an adult living in your house. It's insane and extreme. But I live my own home life in that arc, in that direction. And I'm a very free person.

00:17:00

I don't have an office to go to. I try really not to maintain a calendar. If I can't remember it, I don't want to do it. I don't send my kids to school. I really try not to coerce them. Obviously, that's an extreme model, but I was still very- Sorry, hold on a second.

00:17:15

So your kids, if they were like, I want Häagen-Dazs, and it's 9:00 PM, you're like, Okay?

00:17:25

Two nights ago, I did this. I ordered the Häagen-Dazs. It wasn't Häagen-Dazs, it was a different brand.

00:17:28

I ordered it. I'm just going to go through a couple of examples.

00:17:30

We did eat ice cream at 9:00 PM, and we all ate at 9:00.

00:17:33

Yeah, so they're like, Dad, I want... And they're happy.

00:17:35

They're happy kids.

00:17:36

I want to be on my iPad. I'm playing Fortnite. Leave me alone. I'll go to sleep when I want. You're like, Okay.

00:17:41

My oldest probably plays iPad nine hours a day.

00:17:45

Okay, so then your other kid pees in their pants because they're too lazy to walk to the bathroom.

00:17:51

They don't do that because they don't like peeing their pants.

00:17:52

No, I understand, but I'm just saying there's a spectrum of all of these things, right? Yeah. And your point of view is 100% of it is allowed and you have no judgments.

00:18:01

No, that's not where I am. That's where Aaron is. My rules are a little different. My rules are they got to do one hour of math or programming plus two hours of reading every single day. The moment they've done that, they're free creatures, and everything else is a negotiation. We have to persuade them. It's a persuasion, I should say, not even a negotiation. Even the hour of math and two hours of reading, really, you get 15 to 30 minutes of math, maybe an hour if you're lucky, and you get half an hour to two hours of reading if you're lucky.

00:18:30

What do you think the long-term consequences of that are? Then also, what is the long-term consequences, let's say, on health if they're making decisions you know are just not good, like the ice cream thing at 9: 00 PM? How do you manage that in your mind?

00:18:45

I think whatever age you're at, whatever part you're at in life, you're still always struggling with your own habits. I think all of us, for example, still eat food and feel guilty or want to eat something that we shouldn't be eating, and we're still always evolving our diets, and kids are the same. My oldest is already... He passed on the ice cream last time and he said, I want to eat healthier because finally, I managed to get through to him and persuade him that he should be healthier. My younger kids will eat it, but they'll eat a limited amount. My middle kid will sometimes eat some.

00:19:13

Okay, so if they say something, you'll enable it, but then you'll be like, Hey, listen, this is not the choice I would make. But if you want it, I'll do it.

00:19:21

I'll try it, but you also have to be careful where you don't want to intimidate them and you don't want to be so overbearing that then they just view dad as controlling.

00:19:30

I find this so fascinating. What do you think happens to these kids? I'm sure you have a vision of what they'll be like when they're fully formed adults. What is that vision?

00:19:39

I try not to. They're going to be who they're going to be. This is how I grew up. I did what I wanted.

00:19:45

I would rather they have agency than turn out exactly the way I want.

00:19:52

Because agency is the hardest thing. Having control over your own life, making your own decisions. I want them to be happy. I have a It's a happy household.

00:20:01

What's Plato's goal? Eudaimonia?

00:20:04

Eudaimonia? Yeah, the happy life.

00:20:06

Or the fulfillment, this concept. Is that what you want for them?

00:20:11

I don't really want anything for them. I just want them to be free and their best selves. God damn.

00:20:20

Chamath is worrying about details. He's got 17 kids now. I don't know if you know, but Chamath has got a whole punch list of things. I love this interview because the guy made a really interesting point, which was: they're going to have to make these decisions at some point. They're going to have to learn the pros and cons, the upside, the downside to all these things, eating, iPad, and the quicker you get them to have agency to make these decisions for themselves, with knowledge, to ask questions, the more secure they will be. I found it a fascinating discussion. I like cause and effect, especially in teenagers, now that I have a teenager. It's really good for them to learn: Hey, if you don't do your homework, you have a problem, and then you've got to solve that problem. How are we going to solve that problem? I like to present it as, What's your plan? Anytime they have a problem, 8-year-old kids, 15-year-old kids, I just say, What's your plan to solve this? Then I like to hear their plan and let me know if you want to brainstorm it. But I thought it was a very interesting, super interesting discussion.

00:21:20

I would say overall, my kids are very happy. The household is very happy. Everybody gets along. Everybody loves each other. Some of them are way ahead of their peers. Nobody's behind in anything that matters. Nobody seems unhealthy in any obvious way. No one has aberrant eating habits. I haven't really found an aberrant behavior that's out of line.

00:21:43

So it's all good.

00:21:44

Self-correct.

00:21:45

It's like a-I worry a lot about this iPad situation. I see my kids on an iPad, and it's almost like, unless they're doing an interactive project, if they end up watching- Says the guy who has a video game theme in the background. That was interactive, right?

00:22:01

And who probably grew up playing video games nonstop and probably spends nine hours a day on his screen just called a phone. Yeah, it's the same thing, man.

00:22:11

Well, I mean, I feel like watch... But do they watch shows, Naval?

00:22:14

No, there's a hypocrisy to picking up your phone and then saying to your kid, No, you can't use your iPad. I grew up playing video games nonstop, and I was an avid gamer until just a few years ago. I'm not criticizing the iPad.

00:22:29

I've been sitting at a computer since I was four years old, so I totally get it. I think the question for me is: I didn't have the ability to play a 30-minute show and then the next 30-minute show and the next 30-minute show, and just sit there for two hours with a show playing the whole time. I was interacting on the computer and doing stuff and building stuff, which was a little different for me from a use-case perspective.

00:22:54

We did use to control their YouTube access, although now we don't do that. The only thing I asked them is that they put on captions when they're watching YouTube, so it helps their reading. They learn to read faster.

00:23:06

That's a good tip. Yeah, I like that one.

00:23:07

I will say that one of my kids is really into YouTube, the other two are not. They just got over it. To the extent that they use YouTube, it's mostly because they're looking up videos on their favorite games. They want to know how to be better at a game.

00:23:20

All right, let's keep moving through this docket. We have David Sacks with us here. David, give us your philosophy of parenting. Okay, next item on the docket. Let's go.

00:23:29

It's all about some real issues. Sacks is waiting. This show is not a-

00:23:33

Parenting show?

00:23:35

A parenting show.

00:23:36

I asked David, What's your parenting philosophy? He said, Oh, I set up their trust four years ago. So I said, He's good. The trust is set up. Everything's good. Parenting philosophy, check.

00:23:45

What's your parenting philosophy? Check. What's your parenting philosophy? GRAT. Check.

00:23:48

You're all set, guys. Let me know how it works out. All right. Speaking of working out, we've got a vice president who isn't cuckoo for Cocoa Puffs and who actually understands what AI is. J. D. Vance gave a great speech. I watched it myself. He talked about AI in Paris. This was on Tuesday at the AI Action Summit, whatever that is. He gave a 15-minute banger of a speech. He talked about overregulating AI and America's intention to dominate this. We happen to have with us the Tsar, the Tsar of AI. Before I go into all the details about the speech, I don't want to steal your thunder, Sacks. This speech had a lot of verbiage, a lot of ideas that I've heard before that maybe we've all talked about. Maybe tell us a little bit about how this all came together and how proud you are. Gosh, having a vice president who understands AI is just mind-blowing. He could speak credibly on a topic that's topical. This was an awesome moment for America, I think.

00:24:55

What are you implying there, J. Cal?

00:24:57

I'm implying you might have workshopped it with him. No. Or that he's smart. Both of those things.

00:25:02

The vice president wrote the speech, or at least directed all of it. So the ideas came from him. I'm not going to take any credit whatsoever for this.

00:25:10

Okay, well, it was on point.

00:25:11

Maybe you could talk about- Yes, I agree it was on point. I think it was a very well-crafted in a well-delivered speech.

00:25:16

He made four main points about the Trump administration's approach to AI. He's going to ensure, this is point one, that American AI continues to be the gold standard. Fantastic check. Two, he says that the administration understands that excessive regulation could kill AI just as it's taking off. And he did this in front of all the EU elites who love regulation, did it on their home court. And then he said, number three, AI must remain free from ideological bias, as we've talked about here on this program. Then number four, the White House, he said, will, quote, maintain a pro-worker growth path for AI so that it can be a potent tool for job creation in the US. What are your thoughts on the four major bullet points in his speech here in Paris?

00:26:03

Well, I think that the vice president, you knew he was going to deliver an important speech as soon as he got up there and said, I'm here to talk about not AI safety, but AI opportunity. And to understand what a bracing statement that was, and really almost like a shot across the bow, you have to understand the history and context of these events. For the last couple of years, the last couple of these events have been exclusively focused on AI safety. The last in-person event was in the UK at Bletchley Park, and the whole conference was devoted to AI safety. Similarly, the European AI regulation, obviously, is completely preoccupied with safety and trying to regulate away safety risks before they happen. Similarly, you had the Biden EO, which was based around safety. And then you have just the whole media coverage around AI, which is preoccupied with all the risks from AI. So to have the vice president get up there and say right off the bat that there are other things to talk about in respect to AI besides safety risks, that actually there are huge opportunities, was a breath of fresh air and, like I said, a shot across the bow.

00:27:14

And you could almost see some of the Eurocrats. They needed their fainting couches after that. Trudeau looked like his dog just died. So I think that was just a really important statement right off the bat to set the context for the speech, which is: AI is a huge opportunity for all of us, because really that point just has not been made enough. And it's true, there are risks. But when you look at the media coverage and when you look at the dialog that the regulators have had around this, they never talk about the opportunities. It's always just around the risk. So I think that was a very important corrective. And then, like you said, he went on to say that the United States has to win this AI race. We want to be the gold standard. We want to dominate.

00:27:56

That was my favorite part.

00:27:58

Yeah. And by the way, language about dominating AI and winning the global race, that is in President Trump's executive order from week one. So this is very much elaborating on the official policy of this administration. And the vice president then went on to specify how we would do that. We have to win in some of these key building-block technologies. We want to win in chips. We want to win in AI models. We want to win in applications. He said, We need to build. We need to unlock energy for these companies. And then most of all, we just need to be supportive towards them as opposed to regulating them to death. And he had a lot to say about the risk of overregulation, how often it's big companies that want regulation. He warned about regulatory capture, which our friend Bill Gurley would like. And he said that, basically, having less regulation can actually be more fair, can create a more level playing field for small companies as well as big companies. And then he said to the Europeans, We want you to be partners with us. We want to lead the world, but we want you to be our partners and benefit from this technology that we're going to take the lead in creating.

00:29:05

But you also have to be a good partner to us. And then he specifically called out the overregulation that Europeans have been engaged in. He mentioned the Digital Services Act, which has acted like a speed trap for American companies. It's American companies who've been overregulated and fined by these European regulations, because the truth of the matter is that it's American technology companies that are winning the race. So when Europe passes these onerous regulations, they fall most of all on American companies. And he's basically saying, We need you to rebalance and correct this because it's not fair, it's not smart policy, and it's not going to help us collectively win this AI race. And that brings me to the last point: I don't think he mentioned China by name, but clearly, he talked about adversarial countries who are using AI to control their populations, to engage in censorship and thought control. And he basically painted a picture where it's like, yeah, you could go work with them, or you could work with us. We have hundreds of years of shared history together. We believe in things like free speech, hopefully, and we want you to work with us.

00:30:12

But if you are going to work with us, then you have to cooperate, and we have to create a reasonable regulatory regime.

00:30:19

Naval, did you see the speech, and your thoughts just generally on JD Vance and having somebody like this representing us and wanting to win?

00:30:29

Yeah. Very surprising, very impressive. I thought he was polite, optimistic, and just very forward-looking. It's what you would expect an entrepreneur or a smart investor to say. So I was very impressed. I think the idea that America should win, great. I think that we should not regulate, I also agree with. I'm not an AI doomer. I don't think AI is going to end the world. That's a separate conversation. But there's this religion that comes along in many faces, which is that, oh, climate change is going to end the world. AI is going to end the world. Asteroids are going to end the world. COVID-19 is going to end the world. It just has a way of fixating your attention. It captures everybody's attention at once. It's a very seductive thing. I think in the case of AI, it's really been overplayed by incentive bias, motivated reasoning by the companies who are ahead, and they want to pull up the ladder behind them. I think they genuinely believe it. I think they genuinely believe that there are safety risks, but I think they're motivated to believe in those safety risks, and then they pass that along.

00:31:24

But it's a weird position because they have to say, Oh, it's so dangerous that you shouldn't just let open source go at it, and you should let just a few of us work with you on it. But it's not so dangerous that a private company can't own the whole thing. Because if it was truly the Manhattan Project, if they were building nuclear weapons, you wouldn't want one company to own that. Sam Altman has honestly said that AI will capture the light cone of all future value. In other words, all value ever created at the speed of light from here will be captured by AI. If that's true, then I think open source AI really matters, and little tech AI really matters. The problem is that the nature of training these models is highly centralized. They benefit from supercompute, or clustered compute, so it's not clear how any decentralized model can compete. To me, the real issue boils down to how do you push AI forward while not having just a very small number of players control the entire thing. We thought we had that solution with the original OpenAI, which was a nonprofit and was supposed to do it for humanity.

00:32:24

But now, because they want to incentivize the team and they want to raise money, they have to privatize at least a part of it. Although it's not clear to me why they need to privatize the whole thing. Why do you need to buy out the nonprofit portion? You could leave a nonprofit portion and you could have the private portion for the incentives. But I think that the real challenge is how do you keep AI from naturally centralizing, because all the economics and the technology underneath are centralizing in nature. If you really think you're going to create God, do you want to put God on a leash with one entity controlling God? That, to me, is the real fear. I'm not scared of AI. I'm scared of what a very small number of people who control AI do to the rest of us for our own good, because that's how it always works.

00:33:07

Well said. Probably should go with the Greek model, having many gods and heroes as well. Friedberg, you heard the JD Vance speech, I assume. What are your thoughts on overregulation? And maybe to Naval's point, one person owning this versus open source?

00:33:23

I think that there's this big division in society right now between what I would call techno-optimism and techno-pessimism. Generally, people fall into one of those two camps. Generally speaking, techno-optimists, I would say, are folks that believe that accelerating outcomes with AI, with automation, with bioengineering, manufacturing, semiconductors, quantum computing, nuclear energy, etc., will usher in this era of abundance. By creating leverage, which is what technology gives us, technology will be deflationary and it will give everyone more, so it creates abundance. The challenge is that people who already have a lot worry more about the exposure to the downside than they desire the upside. The techno-pessimists, like the EU and, frankly, large parts of the United States, are worried about the loss of X, the loss of jobs, the loss of this, the loss of that, whereas countries like China and India are more excited about the opportunity to create wealth, the opportunity to create leverage, the opportunity to create abundance for their people. GDP per capita in the EU, $60,000 a year. GDP per capita in the United States, $82,000. But GDP per capita in India is $2,500, and in China it's $12,600.

00:34:49

There's a greater incentive in those countries to manifest upside than there is for the United States and the EU, who are more worried about manifesting downside. It is a very difficult social battle that's underway. I do think over time, those governments and those countries and those social systems that embrace these technologies are going to become more capitalist, and they're going to require less government control and intervention in job creation, the economy, payments to people, and so on. The countries that are more techno-pessimistic are, unfortunately, going to find themselves asking for greater government control, government intervention in markets, governments creating jobs, governments making payments to people, governments effectively running the economy. My personal view, obviously, is that I'm a very strong advocate for technology acceleration, because in nearly every case in human history, when a new technology has emerged, we've largely found ourselves assuming that the technology works in the framework of today or of yesteryear. The automobile came along and no one envisioned that everyone in the United States would own an automobile, and that therefore you would need to create all of these new industries like mechanics and car dealerships, roads, all the people servicing and building roads, and all the other industries that emerged.

00:36:08

Or motels. It's very hard for us to sit here today and say, Okay, AI is going to destroy jobs. What's it going to create? And be right. I think we're very likely going to be wrong in whatever estimations we give. The area that I think is most underestimated is the large technical projects that seem technically infeasible today that AI can unlock. For example, habitation in the oceans. It's very difficult for us to envision creating cities underwater and creating cities in the oceans, or creating cities on the moon, or creating cities on Mars, or finding new places to live. People might argue, Oh, that sounds stupid. I don't want to go do that. But at the end of the day, human civilization will drive us to want to do that. Those things are technically very hard to pull off today. But AI can unlock a new set of industries to enable those transitions. I think we really get it wrong when we try and assume the technology is a transplant for last year or last century, and then we become techno-pessimists because we're worried about losing what we have.

00:37:03

Are you a techno-pessimist? Are you an optimist? Because you bring up the downside an awful lot here on the program, but you are working every day in a very optimistic way to breed better strawberries and potatoes for folks. You're a little bit of- No, I have no techno-pessimism whatsoever.

00:37:19

I try and point out why the other side is acting the way they are. Got it. Okay.

00:37:23

Putting it in full context.

00:37:24

What I'm trying to highlight is I think that that framework is wrong. I think that that framework of trying to transplant new technology onto the old way of things operating is the wrong way to think about it. Because of this worrying about downside, it creates this fear that creates regulation like we see in the EU. As a result, China's GDP will scale while the EU's will stagnate, if that's where they go. That's my assessment or my opinion on what will happen.

00:37:50

Chamath, you want to wrap this up for us? What are your thoughts on JD?

00:37:52

I'll give you two. Okay. The first is, I would say this is a really interesting moment, which I would call a tale of two vice presidents. Very early in the Biden administration, Kamala was dispatched on an equally important topic at that time, which was illegal immigration, and she went to Mexico and Guatemala. You actually have a really interesting A/B test here. You have both vice presidents dealing with what were, in that moment, incredibly important issues. I think that JD was focused, he was precise, he was ambitious. Even the part of the press that was very supportive of Kamala couldn't find a lot of very positive things to say about her. The feedback was she was meandering, she was ducking questions, she didn't answer the questions that she was asked very well. It's so interesting because it's a bit of a microcosm of what happened over these next four years in her campaign, quite honestly. You could have taken that window of that feedback, and unfortunately for her, it just continued to be very consistent. That was one observation I had, because I heard him give the speech, I heard her, and I had this moment where I was like, Wow, two totally different people.

00:39:12

The second is on the substance of what JD said. I said this on Tucker, and I'll just simplify all of this into a very basic framework, which is if you want a country to thrive, it needs to have economic supremacy and it needs to have military supremacy. In the absence of those two things, societies crumble. The only thing that underpins those two things is technological supremacy. We see this today. On Thursday, what happened with Microsoft? They had a $54 billion contract with the United States Army to deliver some whiz-bang thing, and they realized that they couldn't deliver it. And so what did they do? They went to Anduril. Now, why did they go to Anduril? Because Anduril has the technological supremacy to actually execute. A few weeks ago, we saw some attempts at technological supremacy from the Chinese with respect to DeepSeek. So I think that this is a very simple existential battle. Those who can harness and govern the things that are technologically superior will win, and it will drive economic vibrancy and military supremacy, which then creates safe, strong societies. That's it. From that perspective, JD nailed it. He saw the forest for the trees.

00:40:34

He said exactly what I think needed to be said and put folks on notice that you're either on the ship or you're off the ship. I think that that was really good.

00:40:43

Yeah. There was a little secondary conversation that emerged, Sacks, that I would love to engage you with, if you're willing, which is this civil war, quote, unquote, between maybe MAGA 1.0, MAGA 2.0, techies in the MAGA party like ourselves, and maybe the core MAGA folks. We can pull up the tweet here, in JD's own words, and he's been engaging people in his own words. It's very clear that he's writing these tweets, a distinct difference from other politicians in this administration, and they just tell you what they think. Here it is. I'll try and write something to address this in detail. This is JD Vance's tweet. But I think this civil war is overstated. Though, yes, there are some real divergences between the populists, I would describe that as MAGA, and the techies. But briefly, in general, I dislike substituting American labor for cheap labor. My views on immigration and offshoring flow from this. I like growth and productivity gains, and this informs my view on tech and regulation. When it comes to AI, specifically, the risks are, number one, overstated, to your point, Naval, or two, difficult to avoid. One of my many real concerns, for instance, is about consumer fraud.

00:42:01

That's a valid reason to worry about safety. But the other problem is much worse if a peer nation is six months ahead of the US on AI. Again, I'll try and say more. This is JD going right at, I think, one of the more controversial topics, Sacks, that the administration is dealing with and has dealt with when it comes to immigration and tech, because these two things dovetail with each other. If we lose millions of driver jobs, which we will in the next 10 years, just like we lost millions of cashier jobs, well, that's going to impact how our nation and many of the voters look at the border and immigration. We might not be able to let as many people immigrate here if we're losing millions of jobs to AI and self-driving cars. What are your thoughts on him engaging this directly, Sacks?

00:42:49

Well, the first point he's making there is about wage pressure, which is when you throw open our borders, or you throw open American markets to products that can be made in foreign countries by much cheaper labor that's not held to the same standards, the same minimum wage or the same union rules or the same safety standards that American labor is, and has a huge cost advantage, then you're creating wage pressure for American workers. And he's opposed to that. And I think that is an important point because I think the way that the media or neoliberals like to portray this argument is that somehow MAGA's resistance to unlimited immigration is somehow based on xenophobia or something like that. No, it's based on bread-and-butter, kitchen-table issues, which is if you have this ridiculous open border policy, it's inevitably going to create a lot of wage pressure for people at the bottom of the pyramid. So I think JD is making that argument. But, and this is point two, he's saying, I'm not against productivity growth. So technology is good because it enables all of our workers to improve their productivity, and that should result in better wages because workers can produce more.

00:43:57

The value of their labor goes up if they have more tools to be productive. So there's no contradiction there. And I think he's explaining why there isn't a contradiction. A point I would add, he doesn't make this point in that tweet, but I would add, is that one of the problems that we've had over the last 30 years is that we have had tremendous productivity growth in the US, but labor has not been able to capture it. All that benefit has basically gone to capital or to companies. And I think a big part of the reason why is because we've had this largely unrestricted immigration policy. So I think if you were to tamp down on immigration, if you were to stop the illegal immigration, then labor might be able to capture more of the benefits of productivity growth, and that would be a good thing. It'd be a more equitable distribution of the gains from productivity and from technology. That, I think, would help tamp down this growing conflict that you see between technologists and the rest of the country, or certainly the heartland of the country.

00:44:58

Naval, this is... Okay, you want to add anything else, David?

00:45:00

All right. Well, I think just the final point he makes in that tweet is that he talks about how we live in a world in which there are other countries that are competitive. And specifically, he doesn't mention China, but he says, We have a peer competitor. And it's going to be a much worse world if they end up being six months ahead of us on AI rather than six months behind. That is a really important point to keep in mind. I think that the whole Paris AI summit took place against the backdrop of this recognition, because just a few weeks ago, we had DeepSeek, and it's really clear that China is not a year behind us. They're hot on our heels, only maybe months behind us. If we hobble ourselves with unnecessary regulations, if we make it more difficult for our AI companies to compete, that doesn't mean that China is going to follow suit and copy us. They're going to take advantage of that fact, and they're going to win.

00:45:48

All right, Naval, this seems to be one of the main issues of our time. Four of the five people on this podcast right now are immigrants. We have this amazing tradition in America. This is a country built by immigrants, for immigrants. Do you think that should change now in the face of job destruction, which I know you've been tracking, self-driving, pretty acutely. We both have had an interest there, I think, over the years. What's the solution here if we're going to see a bunch of job displacement, which will happen for certain jobs? We all know that. Should we shut the border and not let the next Naval, Chamath, Sacks, and Friedberg into the country?

00:46:28

Well, let me declare my biases up front. I'm a first-generation immigrant. I moved here when I was nine years old, or rather my parents did, and then I'm a naturalized citizen. So obviously, I'm in favor of some level of immigration. That said, I'm assimilated. I consider myself an American first and foremost. I bleed red, white, and blue. I believe in the Bill of Rights and the Constitution, first and second and fourth and all the proper amendments. I get up there every July Fourth, and I deliberately defend the Second Amendment on Twitter, at which point half my followers go bananas because I'm not supposed to. I'm supposed to be a good immigrant, right? And carry the usual set of leftist policies, globalist policies. I think that legal, high-skill immigration with room and time for assimilation makes sense. You want to have a brain drain of the best and brightest coming to the freest country in the world to build technology and to help civilization move forward. As Chamath was saying, economic power and military power are downstream of technology. In fact, even culture is downstream of technology. Look at what the birth control pill did, for example, to culture, or what the automobile did to culture, or what radio and television did to culture, and then the internet.

00:47:46

Technology drives everything. If you look at wealth, wealth is a set of physical transformations that you can effect. That's a combination of capital and knowledge. The bigger input to that is knowledge. The US has become the home of knowledge creation, thanks to bringing in the best and brightest. You could even argue DeepSeek. Part of the reason why we lost that is because a bunch of those kids, they studied in the US, but then we sent them back home. I think you absolutely have to- Is that actually accurate?

00:48:11

They were- Yeah, some. A few of them.

00:48:13

Really? Oh, my God. That's Exhibit A.

00:48:16

Wow, I didn't know that. I think you absolutely have to split off skilled, assimilated immigration, which is a small set, and it has to be both. They have to both be skilled and they have to become Americans. That oath is not meaningless. It has to mean something. So skilled, assimilated immigration, you have to separate that from just open borders, whoever can wander in, just come on in. That latter part makes no sense.

00:48:38

If the Biden administration had only been letting in people with 150 IQs, we wouldn't have this debate right now. Absolutely. The reason why we're having this debate is because they just opened the border and let millions and millions of people in.

00:48:50

It was to their advantage to conflate legal and illegal immigration. Every time you'd be like, Well, we can't just open the border. They'd say, Well, what about Elon? What about this? They would just parade.

00:49:00

If they were just letting in the Elons and the Jensens and- Friedbergs. We wouldn't be having the same conversation today.

00:49:07

The correlation between open borders and wage suppression is irrefutable. We know that data, and I think that the Democrats, for whatever logic, committed an incredible error in basically undermining their core cohort. I want to go back to what you said because I think it's super important. There is a new political calculus on the field, and I agree with you. I think that the three cohorts of the future are the asset-light working and middle class. That's cohort number one. There are probably 100 to 150 million of those folks. Then there are patriotic business owners, and then there are leaders in innovation. Those are the three. I think that what MAGA gets right is they found the middle ground that intersects those three cohorts of people. Every time you see this left-versus-right dichotomy, it's totally miscast. It sounds discordant to so many of us because that's not how any of us identify. I think that that's a very important observation, because the policies that we adopt will need to reflect those three cohorts. What is the common ground amongst those three? On that point, Naval is right. There's not a lot that those three would say is wrong with a very targeted form of extremely useful legal immigration of very, very, very smart people who agree to assimilate and be a part of America.

00:50:39

I mean, I'm so glad you said it the way you said it. I remember growing up where my parents would try to pretend that they were still in Sri Lanka, and sometimes I would get so frustrated. I'm like, If you want to be in Sri Lanka, go back to Sri Lanka. I want to be Canadian, because it was easier for me to make friends. It was easier for me to have a life. I was trying my best. I wanted to be Canadian. Then when I moved to the United States 25 years ago, I wanted to be American. I feel that I'm American now, and I'm proud to be an American. I think that's what you want. You want people that embrace it. It doesn't mean that we can't dress up in a show every now and then. But the point is, what do you believe and where is your loyalty?

00:51:22

Friedberg, we used to have this concept of a melting pot, of this assimilation, and that was a good thing. Then it became cultural. We made a wrong turn here. Where do you stand on this? Recruiting the best and brightest and forcing them to assimilate, making sure that they're down with this.

00:51:40

Not forcing, Jason. You find the people that care to be here.

00:51:43

Yeah. Let me rephrase that.

00:51:45

I reject the premise of this whole conversation.

00:51:48

Wait, wait, hold on.

00:51:49

Look, I'm a first-generation American who moved here when I was five and became a citizen when I was 10. And yes, I'm fully American, and that's the only country I have any loyalty to. But the premise that I reject here is that somehow an AI conversation leads to an immigration conversation because millions of jobs are going to be lost. We don't know that.

00:52:10

That's also true.

00:52:11

I agree with that. You're making a huge assumption that's buying into the doomer narrative that AI is going to wipe out millions of jobs. That is not in evidence. I think it's going to create more jobs than any of us are understanding. Furthermore, have any jobs been lost to AI? Let's be real. We've had AI for two and a half years, and I think it's great. But so far, it's a better search engine, and it helps high school kids cheat on their essays.

00:52:32

I mean, come on.

00:52:32

You don't believe that self-driving is coming? Hold on a second, Sacks.

00:52:36

You don't believe that millions- But hold on.

00:52:39

Those driver jobs weren't even there 10 years ago. Uber came along and created all these driver jobs. That's true. Fair enough. DoorDash created all these driver jobs. What technology does... Yes, technology destroys jobs, but it replaces them with opportunities that are even better. Then either you can go capture that opportunity yourself, or an entrepreneur will come along and create something that allows you to capture those opportunities. AI is a productivity tool. It increases the productivity of a worker. It allows them to do more creative work and less repetitive work. As such, it makes them more valuable. Yes, there is some retraining involved, but not a lot. These are natural language computers. You can talk to them in plain English, and they talk back to you in plain English. But I think David is absolutely right. I think we will see job creation by AI that will be as fast or faster than job destruction. You saw this even with the internet. YouTube came along; look at all these YouTube streamers and influencers. That didn't used to be a job. New jobs, really opportunities, because job is the wrong word. Job implies someone else has to give it to me, like they're handed out in a zero-sum game.

00:53:38

Forget all that. It's opportunities. After COVID, look at how many people are making money by working from home in mysterious little ways on the internet that you can't even quite grasp.

00:53:49

Here's the way I categorize it, okay? Whenever you have a new technology, you get productivity gains, you get some job disruption, meaning that part of your job may go away, but then you get other parts that are new and hopefully more elevated, more interesting. Then there is some job loss. I just think that the third category will follow the historical trend, which is that the first two categories are always bigger, and you end up with more net productivity and more net wealth creation. We've seen no evidence to date that that's not going to be the case. Now, it's true that AI is about to get more powerful. You're going to see a whole new wave of what are called agents this year, agentic products that are able to do more for you. But there's no evidence yet that those things are going to be completely unsupervised and replace people's jobs. I think that we have to see how this technology evolves. I think one of the mistakes of, let's call it the European approach, is assuming that you can predict the future with perfect accuracy, with such good accuracy that you can create regulations today that are going to avoid all these risks in the future.

00:54:53

We just don't know enough yet to be able to do that. That's a false level of certainty.

00:54:57

I agree with you. The companies that are promulgating that view are, as Naval said, those that have an economic vested interest in at least convincing the next incremental investor that this could be true, because they want to make the claim that all the money should go to them so they can hoover up all the economic gains. That is the part of the cycle we're in. If you actually stratify these reactions, there are the small startup companies in AI that believe there's a productivity leap to be had and that there's going to be prosperity, everybody on the sidelines watching, and then a few companies that have an extremely vested interest in being a gatekeeper, because they need to raise the next $30 or $40 billion, trying to convince people that that's true. If you view it through that lens, you're right, Sacks, we have not accomplished anything yet that proves that this is going to be cataclysmically bad. If anything, right now, history would tell you it's probably going to be like the past, which is generally productive and accretive to society.

00:55:55

Yeah. Just to bring it back to JD's speech, which was where we started, I think it was a quintessentially American speech in the sense that he said we should be optimistic about the opportunities here, which I think is basically right. We want to lead. We want to take advantage of this. We don't want to hobble it. We don't even fully know what it's going to be yet. We are going to center workers. We want to be pro-worker. And I think that if there are downsides for workers, then we can mitigate those things in the future. But it's too early to say that we know what the program should be. It's more about a statement of values at this point.

00:56:33

Do you think it's too early, Friedberg, given Optimus and all these robots being created, and what we're seeing in self-driving? You've talked about the ramp-up with Waymo. To actually say we will not see millions of jobs and millions of people get displaced from those jobs? What do you think, Friedberg? I'm curious because that is the counterargument.

00:56:55

My experience in the workplace is that AI tools that are doing things that an analyst or knowledge worker was doing with many hours in the past is allowing them to do something in minutes. That doesn't mean that they spend the rest of the day doing nothing. What's great for our business and for other businesses like ours that can leverage AI tools is that those individuals can now do more. Our throughput, our productivity as an organization has gone up, and we can now create more things faster. Whatever the product is my company makes, we can now make more things more quickly. We can do more development with less cost.

00:57:34

You're seeing it on the ground, correct?

00:57:35

I'm seeing it on the ground. I don't think that this transplantation of how bad AI will be for jobs is the right framing as much as it is about an acceleration of productivity. This is why I go back to the point about GDP per capita and GDP growth. Countries, societies, areas that are interested or industries that are interested in accelerating output, in accelerating productivity, the ability to make stuff and sell stuff, are going to rapidly embrace these tools because it allows them to do more with less. I think that's what I really see on the ground. Then the second point I'll make is the one that I mentioned earlier, and I'll wrap up with a third point, which is I think we're underestimating the new industries that will emerge drastically, dramatically. There is going to be so much new shit that we are not really thinking deeply about right now that we could do a whole another two-hour brainstorming session on, on what AI unlocks in terms of large scale projects that are traditionally or typically or today held back because of the constraints on the technical feasibility of these projects. That ranges from accelerating to new semiconductor technology to quantum computing, to energy systems, to transportation, to habitation, et cetera, et cetera.

00:58:50

There's all sorts of transformations in every industry that are possible as these tools come online, and that will spawn insane new industries. The most important point is the third one, which is we don't know the overlap of job loss and job creation, if there is one, or the rate at which these new technologies impact and create new markets. But I think Naval is right. I think that what happens in capitalism and in free societies is that capital and people rush to fill the hole of new opportunities that emerge because of AI, and that those grow more quickly than the old bubbles deflate. If there's a deflationary effect in terms of job need in other industries, I think that the loss will happen slower than the rush to take advantage of creating new things on the other side. My bet is that new things will be created faster than old things will be lost.

00:59:37

I think- Actually, as a quick side note to that, the fastest way to help somebody get a job right now, if you know somebody in the market who's looking for a job, the best thing you can do is say, Hey, go download the AI tools and just start talking to them. Just start using them in any way. Then you can walk into any employer in almost any field and say, Hey, I understand AI, and they'll hire you on the spot. Exactly.

00:59:58

Naval, you and I watched this happen. We had a front-row seat to it. Back in the day, when you were doing Venture Hacks and I was doing the Open Angel Forum, we had to fight to find five or 10 companies a month. Then the cost of running these companies went down. They went down massively, from $5 million to start a company to two, then to 250, then to 100. I think what we're seeing is three things concurrently. You're going to see all these jobs go away to automation: self-driving cars, cashiers, etc. But we're also going to see static team size at places like Google. They're just not hiring because they're just having the existing bloated employee base learn the tools. But I don't know if you're seeing this: the number of startups able to get a product to market with two or three people and get to a million in revenue is booming. What are you seeing in the startup landscape?

01:00:48

Definitely what you're saying in that there's leverage. But at the same time, I think the more interesting part is that new startups are enabled that could not exist otherwise. My last startup, AirChat, could not have existed without AI because we needed the transcription and translation. Even the current thing I'm working on, it's not an AI company, but it cannot exist without AI. It is relying on AI. Even at AngelList, we're significantly adopting AI. Everywhere you turn, it's more opportunity, more opportunity, more opportunity. People like to go on Twitter, or the artist formerly known as Twitter. Basically, they like to exaggerate like, Oh, my God, we've hit AGI. Oh, my God, I just replaced all my mid-level engineers. Oh, my God, I've stopped hiring. To me, that's, like, moronic. The two valid ones are the one-man entrepreneur shows where there's one guy or one gal, and they're scaling up like crazy thanks to AI. Phil Kaplan. Shut up. Or there are people who are embracing AI and being like, I need to hire, and I need to hire anyone who can even spell AI. Anyone who's even used AI, just come on in, come on in.

01:01:49

Again, I would say the easiest way to see that AI is not taking jobs but creating opportunities is: go brush up on your AI, learn a little bit, watch a few videos, use the AI, tinker with it, and then go reapply for that job that rejected you and watch how they pull you in.

01:02:04

In 2023, an economist named Richard Baldwin said, AI won't take your job. It's someone using AI that will take your job because they know how to use it better than you. That's become a meme, and you see it floating around X, but I think there's a lot of truth in that. As long as you remain adaptive and you keep learning and you learn how to take advantage of these tools, you should do better. If you wall yourself off from the technology and don't take advantage of it, that's when you put yourself at risk.

01:02:29

Another way to think about it is these are natural language computers. Everyone who's intimidated by computers before should no longer be intimidated. You don't need to program anymore in some esoteric language or learn some obscure mathematics to be able to use these. You can just talk to them and they talk back to you. That's magic.

01:02:46

The new programming language is English. Chamath, you want to wrap us up here on this opportunity slash displacement slash chaos?

01:02:54

I was going to say this before, but I'm pretty unconvinced anymore that you should bother even learning many of the hard sciences and maths that we used to as underpinnings. I used to believe that the right thing to do was for everybody to go into engineering. I'm not necessarily as convinced as I used to be because I used to say, Well, that's great first-principles thinking, et cetera, et cetera, and you're going to get trained in a toolkit that will scale. I'm not sure that that's true. I think you can use these agents and you can use deep research, and all of a sudden, they replace a lot of those skills. So what's left over? It's creativity, it's judgment, it's history, it's psychology, it's all of these other softer things- Leadership, communication. -that allow you to manipulate these models in constructive ways. Because when you think of the prompt engineering that gets you to great answers, it's actually just thinking in totally different, orthogonal ways and nonlinearly. So that's my last thought, which is it does open up the aperture, meaning for every smart mathematical genius, there's many, many, many other people who have high EQ.

01:04:01

And all of a sudden this tool actually takes the skill away from the person with just a high IQ and says, If you have these other skills now, you can compete with me equally. And I think that that's liberating for a lot of people.

01:04:15

I'm in the camp of more opportunity. I got to watch the movie industry a whole bunch when the digital cameras came out and more people started making documentaries, more people started making independent film shorts, and then, of course, the YouTube revolution. People started making videos on YouTube or podcasts like this. If you look at what happened with the special effects industry as well, we need far fewer people to make a Star Wars movie, to make a Star Wars series, to make a Marvel series. As we've seen, now we can get The Mandalorian, Ahsoka, and all these other series with smaller numbers of people, and they look better than, obviously, the original Star Wars series or even the prequels. There's going to be so many more opportunities. We're now making more TV shows, more series, everything we wanted to see of every little character. That's the same thing that's happening in startups. I can't believe that there is an app now, Naval, called Slopes, just for skiing. There are 20 really good apps just for meditation. And there are 10 really good ones just for fasting. We're going down this long tail of opportunity, and there'll be plenty of $1 million to $10 million businesses for us if people learn to use these tools.

01:05:27

I love how that's the thing that tips you over.

01:05:31

Which one? You get an extra Marvel movie or an extra Star Wars show, so that tips you over. I think for a lot of people, it feels great that- AI may take over the world, but I'm going to get an extra Star Wars movie.

01:05:44

I'll be entertained, so I'm cool with it.

01:05:46

Are you not entertained?

01:05:48

One final point on this is, look, given the choice between the two categories of techno-optimists and techno-pessimists, I'm definitely in the optimist camp, and I think we should be. But I think there's actually a third category that I would submit, which is techno-realist, which is: technology is going to happen. Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will. China is going to do it or somebody else will do it. And it's better for us to be in control of the technology, to be the leader, rather than passively waiting for it to happen to us. And I just think that's always true. It's better for businesses to be proactive and take the lead, disrupt themselves instead of waiting for someone else to do it, and I think it's better for countries. And I think you did see this theme a little bit. I mean, these are my own views. I don't want to ascribe them to the vice president. But you did see, I think, a hint of the techno-realism idea in his speech and in his tweet, which is, look, AI is going to happen.

01:06:48

We might as well be the leader. If we don't, we could lose in a key category that has implications for national security, for our economy, for many things. So that's just not a world we want to live in. So I think a lot of this debate is academic because whether you're an optimist or a pessimist, whether the glass is half empty or half full, the question is just, is it going to happen or not? And I think the answer is yes. So then we want to control it. Let's just boil it down. There's not a tremendous amount of choice in this, I think.

01:07:19

I would agree heavily with one point, and I would just tweak another. The point I would agree with is that it's going to happen anyway, and that's what DeepSeek proved. You can turn off the flow of chips to them, and you can turn off the flow of talent. What do they do? They just get more efficient. They exported it back to us. They sent us back the best open-source model when our guys were staying closed source for safety reasons.

01:07:40

Yeah, exactly. It's going to come right back to us.

01:07:42

DeepSeek, safety of their equity.

01:07:44

DeepSeek exploded the fallacy that the US has a monopoly in this category and that somehow, therefore, we can slow down the train and that we have total control over the train. I think what DeepSeek showed us is, no, if we slow down the train, they're just going to win.

01:07:59

Yeah. The part where I'd like to tweak a little bit is the idea that we are going to win. By we, when you say America- the problem is that the best way to win is to be as open, as distributed, as innovative as possible. If this all ends up in the control of one company, it's actually going to be slower to innovate than if there's a dynamic system. That dynamic system, by its nature, will be open. It will leak to China, it will leak to India. But these things have powerful network effects. We know this about technology. Almost all technology has network effects underneath. So even if you are open, you're still going to win and you're still going to control most of it.

01:08:34

You look at the internet. That was all true for the internet, right? The internet's an open technology. It's based on tons of open source. But who runs the dominant internet companies? All the dominant companies are US companies because they were in the lead.

01:08:43

Exactly right. Because we embrace the open Internet.

01:08:46

We embraced the open Internet. That was different.

01:08:48

There will be benefits for all of humanity. I think the vice president's speech was really clear that, Look, we want you guys to be on board. We want to be good partners. However, there are definitely going to be winners economically, militarily. In order to be one of those winners, you have to be a leader.

01:09:04

Who's going to get to AGI first, Naval? Is it going to be open source or closed source? Who's going to win the day? If we're sitting here 5, 10 years from now and we're looking at the top three language models, which is it going to be- I'm going to get in a lot of trouble for this, but I don't think we know how to build AGI. That's a much longer discussion. Okay, put AGI aside. Who's going to have the best model five years from now?

01:09:23

Hold on, I 100% agree with you.

01:09:25

I just think it's a different thing. But what we're building are these incredible natural language computers, and actually, David, in a very pithy way, summarized the two big use cases. It's search and it's homework. It's paperwork. It's really paperwork. A lot of these jobs that we're talking about disappearing are actually paperwork jobs. They're paperwork shuffling. These are made-up jobs. The federal government, as they're finding out through DOGE- a third of it is people digging holes with spoons and another third are filling them back up.

01:09:51

They're filling out paperwork and then burying it in a mineshaft.

01:09:53

They're burying it in a mineshaft, in Iron Mountain. I think a lot of these made-up jobs are going to stick around.

01:09:58

Then they're going to go down the mineshaft to get the paperwork when someone retires and bring it up.

01:10:01

You know what? I'm going to get them some thumb drives. We can increase the throughput of the elevator with some thumb drives. It would be incredible.

01:10:07

What we found out is the DMV has been running the government for the last 70 years. It's been compounding. That's really what's going on. They've got DMVs in charge.

01:10:16

I mean, if the world ends in nuclear war, God forbid, the only things that'll be left will be the cockroaches and a bunch of government documents. TPS reports. TPS reports down in a mineshaft.

01:10:28

Basically, yeah. Let's take a moment, everybody, to thank our Tsar. We miss him. We wish he could be here for the whole show.

01:10:39

Thank you, Tsar.

01:10:40

Thank you to the Tsar.

01:10:41

Good to see you, guys.

01:10:42

We miss you. We miss you, little buddy. I wish we could talk about Ukraine, but we're not allowed. Get back to work. We'll talk about it another time over coffee. I'll see you in the commissary. See you guys. Thanks for the invite. Bye. Man, I'm so excited. Naval, Sacks invited me to go to the military mess. I'm going to be in the commissary with Sacks.

01:11:01

No, he didn't, J Cal. You invited yourself. Be honest.

01:11:02

I did. Yes, I did. I put it on his calendar.

01:11:05

To keep the conversation moving, let me segue a point that came up that was really important into tariffs. The point is, even though the internet was open, the US won a lot of the internet. A lot of US companies won the internet. They won that because we got there the firstest with the mostest, as they say in the military. That matters because a lot of technology businesses have scale economies and network effects underneath, even basic brand-based network effects. If you go back to the late '90s, early 2000s, very few people would have predicted that we would have ended up with Amazon basically owning all of e-commerce. You would have thought it would have been perfect competition and very spread out. That applies to how we ended up with Uber as basically one taxi service, or we end up with- Airbnb. Meta, Airbnb. It's just network effects, network effects, network effects rule the world around me. But when it comes to tariffs and when it comes to trade, we act like network effects don't exist. The classic Ricardian comparative advantage dogma says that you should produce what you're best at, I produce what I'm best at, and we trade.

01:12:07

Then even if you want to charge me more for it, if you want to impose tariffs for me to ship to you, I should still keep tariffs down because I'm better off. If you're selling me stuff cheaply, great. Or if you want to subsidize your guys, great, you're selling me stuff cheaply. The problem is that is not how most modern businesses work. Most modern businesses have network effects. As a simple thought experiment, suppose that we have two countries: I'm China, you're the US. I start out by subsidizing all of my companies and industries that have network effects. I'll subsidize TikTok, I'll ban your social media, but I'll push mine. I will subsidize my semiconductors, which do tend to have winner-take-all in certain categories, or I'll subsidize my drones. And then- BYD. Exactly, BYD, self-driving, whatever. Then when I win, I own the whole market and I can raise prices. If you try to start up a competitor, then it's too late. If I've got network effects or if I've got scale economies, I can lower my price to zero, crash you out of business. No one in their right mind will invest, and I'll raise prices right back up.

01:13:07

You have to understand that certain industries have hysteresis, or they have network effects, or they have economies of scale. These are all the interesting ones. These are all the high-margin businesses. In those, if somebody is subsidizing, or they're raising tariffs against you to protect their industries and let them develop, you do have to do something. You can't just completely back down.

01:13:27

What do you guys think, Chamath, about tariffs and network effects? It does seem like we do want to have redundancy in supply chains, so there are some exceptions here. Any thoughts on how this might play out? Because, yeah, Trump brings up tariffs every 48 hours and then it doesn't seem like any of them land. So I don't know. I'm still on my 72-hour Trump rule, which is whatever he says, wait 72 hours and then maybe see if it actually comes to pass. Where do you stand on all these tariffs and tariff talk?

01:13:58

Well, I think the tariffs will be a plug. Are they coming? Absolutely. The quantum of them? I don't know. I think that the way that you can figure out how extreme it will be, it'll be based on what the legislative plan is for the budget. There's two paths right now. Path one, which I think is a little bit more likely, is that they're going to pass a slimmed-down plan in the Senate just on border security and military spending. Then they'll kick the can down the road for probably another three or four months on the budget. Plan two is this one big, beautiful bill that's working its way through the House. There, they're proposing trillions of dollars of cuts. In that mode, you're going to need to raise revenue somehow, especially if you're giving away tax breaks. And the only way to do that is probably through tariffs, or one way to do it is through tariffs. My honest opinion, Jason, is that I think we're in a very complicated moment. I think the Senate plan is actually, on the margins, more likely and better. And the reason is because I think that Trump is better off getting the next 60 to 90 days of data.

01:15:06

I mean, we're in a real pickle here. We have persistent inflation. We have a broken Fed. They're totally asleep at the switch. The thing that Yellen and Biden did, which in hindsight now was extremely dangerous, is they issued so much short-term paper that in totality, we have $10 trillion we need to finance in the next six to nine months. It could be the case that we have rates that are like five, five and a quarter, five and a half percent. I mean, that's extremely bad at the same time as inflation, at the same time as delinquencies are ticking up. So I think tariffs are probably going to happen. But I think that Trump will have the most flexibility if he has time to see what the actual economic conditions will be, which will be more clear in three, four, five months. And so I almost think this big, beautiful bill is actually counterproductive because I'm not sure we're going to have all the data we need to get it right.

01:16:19

Friedberg, any thoughts on these tariffs? You've been involved in the global marketplace, especially when it comes to produce and wheat and corn and everything. What do you think the outcome here is going to be, or is it saber-rattling, a tool for Trump?

01:16:35

The biggest buyer of US ag exports is China. Ag exports are a major revenue source, major income source, and a major part of the economy for a large number of states. There will be, as there was in the first Trump presidency, very likely, very large transfer payments made to farmers because China is very likely going to tariff imports or stop making import purchases altogether, which is what happened during the first presidency. When they did that, the federal government, I believe, had transfer payments of north of $20 billion to farmers. This is a not negligible sum, and it's a not negligible economic effect because there's then a rippling effect throughout the ag economy. I think that's one key thing that I've heard folks talk about: the activity that's going to be needed to support the farm economy as the US's biggest ag customer disappears. In the early 20th century, we didn't have an income tax, and federal revenue was almost entirely dependent on tariffs. When tariffs were cut, there was an expectation that there would be a decline in federal government revenue. But what actually happened was volume went up. Lower tariffs actually increased trade and increased the size of the economies.

01:17:50

This is where a lot of economists take their basis in, Hey, guys, if we do these tariffs, it's actually going to shrink the economy. It's going to cause a reduction in trade. The counterbalance financing effect is one that has not been tested in economics, which is what's going to happen if simultaneously we reduce the income tax and reduce the corporate income tax and basically increase capital flows through reduced taxation while doing the tariff implementation at the same time. It's a grand economic experiment, and I think we'll learn a lot about what's going to happen here as this all moves forward. I do think ultimately, many of these countries are going to capitulate to some degree, and we're going to end up with some negotiated settlement that's going to hopefully not be too short term impactful on the economies and the people and the jobs that are dependent on trade.

01:18:35

Economy feels like it's in a very precarious place.

01:18:38

It does to asset holders. Yeah, to asset holders. Obviously, the last administration left it in a bad place, and we shut down the entire country for a year over COVID, and the bill for that has come due, and that's reflected in inflation. I think there are a couple of other points on tariffs. First is, it's not just about money. It's also about making sure we have a functional middle class with good jobs, because if you have a non-tariff world, maybe all the gains go to an upper class and an underclass, and then you can't have a functioning democracy when the average person is on one of those two extremes. I think that's one issue. Another is strategic industries. If you look at it today, probably the largest defense contractor in the world is DJI. They've got all the drones. Even in Ukraine, both sides are getting all their drone parts from DJI. Now, they're getting it through different supply chains and so on, but Ukrainian drones and Russian drones- the vast majority of them are coming through China, through DJI. And we don't have that industry. If we have a kinetic conflict right now and we don't have a good drone supply chain internally in the US, we're probably going to lose because those things are autonomous bullets.

01:19:41

That's the future of all warfare. We're buying F-35s and the Chinese are building swarms of nanodrones. At scale. At scale. We do have to re-onshore those critical supply chains. And what is a drone supply chain? There's not a thing called drone. It's motors and semiconductors. It's a lot of pieces. And optics and lasers and- It's just everything across the board. I think there are other good arguments for at least reshoring some of these industries. We need them. The United States is very lucky in that it's very autarkic. We have all the resources, we have all the supplies. We can be upstream of everybody with all the energy. To the extent we're importing any energy, that is a choice we made. That is not because fundamentally we lack the energy. We have to, right. Yeah, because with all the oil resources and the natural gas and fracking, combined with all the work we've done in nuclear fission and small reactors, we should absolutely be energy independent.

01:20:34

We should be running the table on it. We should have a massive surplus. Hey, if you're worried about a couple million DoorDash and Uber drivers losing their jobs to automation, hey, there are going to be factories to build the parts for these drones that we're going to need. There's a lot of opportunity, I guess, for people to- There is a difference between different kinds of jobs.

01:20:56

Those kinds of jobs are better jobs, building difficult things at scale physically that we need for both national security and for innovation. Those are better jobs than paperwork, writing essays for other people to read or even driving cars.

01:21:12

Listen, I want to get to two more stories here. We have a really interesting copyright story that I wanted to touch on. Thomson Reuters just won the first major US AI copyright case, and fair use played a major role in this decision. This has huge implications for AI companies here in the United States. Obviously, OpenAI and the New York Times, Getty Images versus Stability. We've talked about these, but it's been a little while because the legal system takes a little bit of time, and these are very complicated cases, as we've talked about. Thomson Reuters owns Westlaw. Now, if you don't know that, it's like LexisNexis. It's one of the legal databases out there that lawyers use to find cases, et cetera. They have a paid product with summaries and analysis of legal decisions. Back in 2020, two years before ChatGPT, Reuters sued a legal research competitor called Ross for copyright infringement. Ross had created an AI-powered legal search engine. Sounds great. But Ross had asked Westlaw if they could pay a license for its content for training. Westlaw said no. This all went back and forth, and then Ross signed a similar deal with a company called LegalEase.

01:22:22

The problem is, LegalEase's database was just copied and pasted from a bunch of Westlaw answers. Reuters, Westlaw, sued Ross in 2020, accusing the company of being vicariously liable for LegalEase's direct infringement. Super important point. Anyway, the judge originally favored Ross on fair use. This week, the judge reversed this ruling and found Ross liable, noting that after further review, fair use does not apply in this case. This is the first major win, and we debated this. Here's a clip. You heard it here first on the All-In pod. What I would say is, when you look at that fair use- I've got a lot of experience with it. The fourth factor test, I'm sure you're well aware of this, is the effect of the use on the potential market and the value of the work. If you look at the lawsuits that are starting to emerge, it is Getty's right to then make derivative products based on their images. I think we would all agree. Stable Diffusion, when they use the open web- that is no excuse to use an open web crawler to avoid getting a license from the original owner of that content. Just because you can technically do it doesn't mean you're allowed to do it.

01:23:27

In fact, the open web projects that provide these explicitly say, We do not give you the right to use this. You have to then go read the copyright laws on each of those websites. And on top of that, if somebody were to steal the copyrights of other people and put it on the open web, which is happening all day long, if you're building a derivative work like this, you still need to go get it. It's no excuse that I took some site in Russia that did a bunch of copyright violation and then I indexed them for my training model. I think this is going to result.

01:23:56

Hey, Friedberg, can you shoot me in the face and let me know when the segment's over? Okay. Okay.

01:24:00

Oh, great.

01:24:03

I feel the same way today.

01:24:05

Same way now, exactly.

01:24:05

I know, me too. Okay, good segment.

01:24:08

Let's move on. Well, since these guys don't give a shit about copyright holders, what do you think about I'm so glad you're here, Navel, to actually talk about the topics these two other guys would engage me with.

01:24:21

I'm going to go even further out on a limb and say I largely agree with you. I think it's a bit rich to crawl the open web, hoover up all the data, offer direct substitution for a lot of use cases- because now you start and end with the AI model. It's not even like you link out like Google did. Then you just close off the models for safety reasons. I think if you trained on the open web, your model should be open source.

01:24:40

Yeah, absolutely. That would be a fine thing. I have a prediction here. I think this is all going to wind up like the Napster versus Spotify case. For people who don't know, Spotify pays, I think, 65 cents on the dollar to the original underwriters of that content, the music industry. They figured out a way to make a business, and Napster is roadkill. I think that there is a non-zero chance- it might be 5 or 10%- that OpenAI is going to lose the New York Times lawsuit, and they're going to lose it hard, and there could be injunctions. I think the settlement might be that these language models, especially the closed ones, are going to have to pay some percentage of their revenue, in a negotiated settlement- half, two-thirds- to the content holders. This could make the content industry have a massive, massive uplift and a massive resurgence.

01:25:35

I think that the problem... There's an example on the other side of this, which is that there's a third-party company that provides technical support for Oracle. Oracle has tried umpteen times to sue them into oblivion using copyright infringement as part of the justification. It's been a pall over the stock for a long time. The company's name is Rimini Street. Don't ask me why it's on my radar, but I've been looking at it. They lost this huge lawsuit- Oracle won- and then it went to appellate court, and then it was all vacated. Why am I bringing this up? I think that the legal community has absolutely no idea how these models work, because you can find one case that goes one way and one case that goes the other. What I would say should become standard reading for anybody bringing any of these lawsuits: there's an incredible video that Andrej Karpathy just dropped, where he does this deep dive into LLMs and explains ChatGPT from the ground up. It's on YouTube, it's three hours. It's excellent. It's very difficult to watch that and not get to the same conclusion that you guys did.

01:26:46

I'll just leave it at that. I tend to agree with this.

01:26:49

There's also a good old video by Ilya Sutskever, who was, I believe, the founding chief scientist or CTO of OpenAI. He talks about how these large language models are basically extreme compressors. He models them entirely by their ability to compress. And they're lossy compressors. It's a lossy compression.

01:27:06

It's a lossy compression.

01:27:07

It's a lossy, extreme compression. Exactly. And Google got sued over fair use back in the day, but the way they managed to get past the argument was they were always linking back to you. They showed you a tiny bit and they sent you the traffic. They provided some value. They sent you the traffic.

01:27:20

This is lossy compression. It is absolutely... I'm now on your... I hate to say this, Jason. I agree with you. You were right.

01:27:35

That's all I wanted to hear all these years.

01:27:38

That's all I wanted to hear was one time. That's why I was shaking my head when I saw those videos because I was like, Oh, man, Jason was right.

01:27:44

Jason was right. Oh, my God. You were right.

01:27:46

I've been through this so many times. Rupert Murdoch said, We should hold the line with Google and not allow them to index our content without a license. And Google navigated it successfully, and he wasn't able to get them to stop. I think what's happened now is that the New York Times remembers that. They all remember losing their content and these snippets and the OneBox to Google, and they couldn't get that genie back in the bottle. I think the New York Times realizes this is their payday. I think the New York Times will make more money from licenses from LLMs than they will make from advertising or subscriptions eventually. This will renew the model.

01:28:32

Almost. I think the New York Times content is worthless to an LLM, but that's a different story. I think the actual valuable content is different. Well, okay, sure. If you don't have a political reason, whatever.

01:28:39

But I can tell you, as a user, I loved the Wirecutter. I think you knew Brian and everybody over at the Wirecutter. That was such an innovation.

01:28:46

Wirecutter, fair enough. Yeah, Wirecutter.

01:28:48

What a great product. I used to pay for the New York Times. I no longer pay for the New York Times. My main reason was I would go to the Wirecutter, and I would just buy whatever they told me to buy. Now I go to ChatGPT, which I pay for. ChatGPT tells me what to buy based on the Wirecutter, and since I'm already paying for that, I stopped paying for the Times.

01:29:08

I philosophically disagree with all of your nonsense on this topic. All three of you are wrong, and I'll tell you why. Number one, if information is out on the open internet, I believe it's accessible and it's viewable, and I view an LLM or a web crawler as basically being a human that's reading it and can store that information in its brain. If it's out there in the open. If it's behind a paywall or some password protection, 100%, that's different.

01:29:34

Wait, wait, wait, David. In that case, can a Google crawler just crawl the entire site and serve it on Google?

01:29:41

Why can't they do that? Here's the fair use. The fair use is you cannot copy, you cannot repeat the content. You cannot take the content and repeat it.

01:29:50

That is how the law is currently written. But now what I have is a tool that can remix it with 50 other pieces of similar content, and I can change the words slightly and maybe even translate it into a different language. So where does it stop?

01:30:01

Do you know the musical artist Girl Talk? We should have done a Girl Talk track.

01:30:05

Oh, God, here it goes. He's got weird musical taste. Here we go.

01:30:10

He basically takes small samples of popular tracks, and he got sued for it. There was another guy, White Panda, I believe, who had the same problem. Ed Sheeran got sued for this.

01:30:21

Yeah, but there are entire sites like Stack Overflow and WikiHow that have basically disappeared now because you can just swallow them all up and spit it all back out in ChatGPT with slight changes. I think that the first and fourth- But I think the fair use question is exactly how much of a slight change it is, which is how much are you changing? Yeah, it's the right question. That's the question. It actually boils down to the AGI question. Are these things actually intelligent? Are they learning, or are they compressing and regurgitating? That's the question.

01:30:45

I wonder this about humans, and that's why I bring up White Panda and Girl Talk in audio, but also visual art. There have always been artists like this, even in classical music. I don't know if you guys are classical music people, but there are demonstrations of how one composer learned from the next, and you can actually track the music as standing on the shoulders of the prior. The same is true in almost all art forms, in almost all human knowledge, in communication. I think that's right. I think that's right.

01:31:11

It's very hard to figure that out.

01:31:13

Well, that's exactly right.

01:31:14

That's the hard part. It's very hard to figure that out, which is why I come back to this: there are only two stable solutions, and it's going to happen anyway. If we don't crawl it, the Chinese will crawl it. DeepSeek proved that. Either you pay the copyright holders, which I actually think doesn't work, because someone in China will crawl it anyway and just dump the weights. They can just crawl and dump the compressed weights. Or, if you crawl, make it open. At least contribute something back to open source. You crawl open data, you contribute it back to open source. The people who don't want to be crawled are going to have to go to great lengths to protect their data. Now everybody knows to protect the data.

01:31:53

Yeah, well, the licensing thing is happening here. I have a book out from Harper Business on the shelf behind me, and I'm getting 2,500 smackaroos for the next three years for Microsoft indexing it. They're going out and they're licensing this stuff. And they're going book- You're getting $2,500? Literally, I'm getting $2,500 for three years, a bunch of Harper- To go into an LLM. To go into Microsoft, specifically. And you know what? I'm going to sign it, I decided, because I just want to set the precedent. Maybe next time it's 10,000, maybe next time it's 250. I don't care. I just want to see people have their content respected, and I'm just hoping that Sam Altman loses this lawsuit and they get an injunction against him. Well, just because he's just such a weasel in terms of making- Stop. OpenAI into a closed thing. I mean, I like Sam personally, but I think what he did was the super weasel move of all time for his own personal benefit. And this whole lying like, Oh, I have no equity. I get health care. He does it for the love.

01:32:54

And now I get 10%.

01:32:55

No, bro, he does it.

01:32:56

He does it for the love?

01:32:57

What was the statement? He does it for the... They do it for the joy, the happiness.

01:33:01

The joy, the benefit. The benefits. I think he got health care.

01:33:04

I think in OpenAI's defense, they do need to raise a lot of money and they've got to incent their employees. But that doesn't mean they need to take over the whole thing. The nonprofit portion can still stay the nonprofit portion, get the lion's share of the benefits, and be the board, and then he can have an incentive package, and the employees can have an incentive package.

01:33:21

Why don't they get a percentage of the revenue?

01:33:23

Just say 10% of the revenue goes to the team. I don't understand why it has to be bought out right now for 40 billion, and then the whole thing disappears into a closed system. That part makes no sense to me.

01:33:31

That's called a shell game and a scam.

01:33:34

Yeah, I think Sam and his team would do better to leave the nonprofit part alone, leave an actual independent nonprofit board in charge, and then have a strong incentive plan and a strong fundraising plan for the investors and the employees. I think this is workable. It's just trying to grab it all that seems way off, especially when it was built on open algorithms from Google, open data from the web, and nonprofit funding from Elon and others.

01:33:58

I mean, what a great proposal we just workshopped here. What do they make, 6 billion a year? Just take 10% of it, 600 million every year, and that goes into a bonus pool.

01:34:09

They're losing money, Jason, so they have to- Okay, eventually they- No, but even equity.

01:34:13

They could give equity to the people building it, but they could still leave it in the control of the nonprofit. I just don't understand this conversion. There was a board coup, right? The board tried to fire Sam, and Sam took over the board. Now it's his hand-picked board. It also looks like self-dealing, right? Yeah, they'll get an independent valuation, but we all know that game. You hire a valuation expert who's going to say what you want them to say, and they'll check a box. If you're going to capture the light cone of all future value or build superintelligence, we know that's worth a lot more. That's why Elon just bid 100 billion.

01:34:42

Exactly. You're saying the things that the regulators and the legal community actually have no insight into, because they'll see a fairness opinion and think, Oh, it says fairness and opinion, two words side by side, it must be fair. And they don't know how all of this stuff is gamed.

01:34:58

Yeah. Man, I got stories about 409(a)s that would...

01:35:02

Exactly. Yeah, everything is gamed.

01:35:04

409(a)s are gamed. These fairness opinions are gamed. But the reality is I don't think the legal and judicial community has any idea.

01:35:13

I mean, imagine if a founder you invested in, and this is just a totally imaginary situation, Naval, had a great term sheet at some incredible dollar amount, didn't take it, ran the valuation down to under a million, gave themselves a bunch of shares, and then took it three months later. I don't know.

01:35:30

What would that be called? What do they call that?

01:35:33

Securities fraud? Can we wrap up? Yeah, let's wrap on your story.

01:35:36

I had an interesting... Nick will show you the photo. I had an interesting dinner on Monday with Bryan Johnson, the Don't Die guy. He came over to my house.

01:35:43

How's his erection doing overnight?

01:35:45

What we talked about is that he's got three hours a night of nighttime erections.

01:35:50

Wow, look at this.

01:35:52

By the way, first of all, I'll tell you. I think that he's- Coon.

01:35:56

Wait, which one of those is giving him the erection?

01:35:59

No, so he measures his nighttime erections.

01:36:01

I think Coon is giving him the erection. But he said that when he started...

01:36:06

So by the way, he said he was 43 when he started this thing. He was basically clinically obese. Yeah. In the four years since, he has become a specimen. He now has three hours a night of nighttime erections. But that's not the interesting thing. At the end of this dinner... by the way, his skin is incredible. I wasn't sure, because of how the pictures online look, but his skin in real life is like a porcelain doll's. Both my wife and I were like, We've never seen skin like this, and it's incredibly soft.

01:36:35

Wait, wait, wait, wait, wait, He had supple skin?

01:36:46

Bro, it's the softest skin I've ever touched in my life. Anyways, that's not the point. It was a really fascinating dinner. He walked through his whole protocol. But at the end of it, I think it was Nikesh, the CEO of Palo Alto Networks, who was just like, Give me the top three things.

01:37:02

Top three?

01:37:03

And of the top three things, what I'll boil it down to is the top one thing, which is like 80% of the 80%. It's all about sleep.

01:37:14

I was about to get sleep.

01:37:15

And he walked through his nighttime routine, and it's incredible, and it's straightforward. It's really simple. It's like how you do a wind down. Anyways, I have tried to- Explain the wind down.

01:37:26

Briefly.

01:37:27

Let's just say that because Bryan goes to bed much earlier, let's use our normal time, say 10:00, 10:30. So my time, I try to go to bed by 10:30. He's like, You need to be in bed. First of all, you need to stop eating three or four hours before. I do that. I eat at 6:30, so I have about three hours. You're in bed by 9:30 or 10:00. You deal with the self-talk. Okay, here's the active mind telling you all the things you have to fix in the morning. Talk it out, put it in its place, say, I'm going to deal with this in the morning.

01:37:56

Write it down in a journal, you're saying? Something like that.

01:37:57

Whatever you do so that you put it away. You cannot be on your phone.

01:38:02

That's got to be in a different room.

01:38:03

Or you've just got to be able to shut it down and then read a book so that you're actually engaged in something. He said that he typically falls asleep within three to four minutes of getting into bed and starting his routine. What? I tried it. I've been doing it since I had dinner with him on Monday. Last night, I fell asleep within 15 minutes. The hardest part for me is putting the phone away. I can't do it. Of course.

01:38:29

What about you? Tell us your wind down.

01:38:31

Oh, yeah, I know Bryan pretty well, actually. I joke that I'm married to the female Bryan Johnson because my wife has some of his routines. But she's the natural version, no supplements, and she's intense. I think when Bryan saw my sleep score from my Eight Sleep, he was shocked. He was just like, You're going to die. He's like, You're literally going to die. What do you got, 70, 80? No, it's terrible. It's awful. Tell the truth.

01:38:58

What's your number?

01:38:59

It was like in the 30s, 40s. What? Yeah. But it's also because I don't sleep much. I only sleep a few hours a night, and I also move around a lot in the bed and so on. But it's fine. I never have trouble falling asleep. But I would say that Bryan's skincare routine is amazing. His diet is incredible. He's a genuine character. I do think a lot of what he's saying, minus the supplements, and I'm not a big believer in supplements, does work. I don't know if it's necessarily going to slow down your aging, but you'll look good, you'll feel good. Yeah, sleep is the number one thing. In terms of falling asleep, I don't think it's really about whether you look at your phone or not, believe it or not. I think it's about what you're doing on your phone. If you're doing anything that is cognitively stressful or gets your mind spinning, then yes.

01:39:42

You think scrolling TikTok and falling asleep is fine?

01:39:46

Anything that's entertaining. Or you could read a book on your Kindle or on your iPad, and I think it'd be fine for falling asleep. Or you could listen to some meditation video or some spiritual teacher or something, and that'll actually help you fall asleep. But if you're on X or if you're checking your email, then, heck, yeah, that's going to keep you up. My hack for sleep is a little different. I normally fall asleep within minutes. The way I do it is... you all have a meditation routine. You have a set time? You have a set time every night? No, I sleep whenever I feel like it. Usually around one in the morning, two in the morning.

01:40:19

God damn, I'm in bed by 10. I need to sleep.

01:40:22

I'm an owl. But if you want to fall asleep, the hack I found is this: everybody has tried a meditation routine at some point. Just sit in bed and meditate. Your mind will hate meditation so much that if you force it to choose between the fork of meditation and sleeping, it will fall asleep every time.

01:40:40

If you don't fall asleep, you'll end up meditating, which is great, too.

01:40:44

I like the meditation. I do the body scan.

01:40:47

The coda to this story: a friend of mine came to see me from the UAE. He was here on Tuesday, and I was telling him about the dinner with Bryan. He told me this story because he's friends with Khabib, the UFC fighter. He says, When Khabib comes to his house, he eats anything and everything: fried food, pizzas, whatever. But he trains consistently. My friend Adala asks him, How are you able to do that? How does it not affect your physiology? He goes, I learned this when I was a kid. I sleep three hours after I train in the morning, and I sleep 10 hours at night. I've done it since I was 12 or 13 years old.

01:41:23

That's a lot of sleep.

01:41:24

It's a lot of sleep.

01:41:26

The direct correlation for me is if I do something cognitively heavy, big heavy-duty conversations or whatever. So no heavy conversations at the end of the night, no existential conversations at the end of the night. Then if I go rucking on the ranch, I put on a 35-pound pack and I walk for- You do that at night before you go to bed? No, I do it any time during the day. I typically do it in the morning or the afternoon. But the one-to-two-mile ruck with the 35 pounds, whatever it is, it just tires my whole body out for when I do lay down.

01:41:59

Is that why you don't prepare for the pod?

01:42:03

I mean, this pod is a top 10 pod in the world, Chamath. Do you think it's an accident?

01:42:10

Friedberg, what's your sleep routine? Do you just go to bed? Just close your eyes?

01:42:13

I take a warm bath and I send J. Cal a picture of my feet.

01:42:18

I'll wait till J. Cal is done. I do take a nice warm bath. I nailed it.

01:42:23

But you do it every night, a warm bath?

01:42:26

Yeah, I do a warm bath every night.

01:42:28

With candles, too.

01:42:29

Do you do it right before you go to bed?

01:42:32

Yeah, I usually do it after I put the kids down, then I'll basically start to wind down for bed. I do watch TV sometimes, but I do have the problem and the mistake of looking at my phone probably for too long before I turn the lights off.

01:42:44

Do you have a set time when you go to bed, or no?

01:42:48

Usually 11:00 to midnight, and then up at 6:30.

01:42:54

Man, I need eight hours, otherwise I'm a mess. I'm trying to get eight.

01:42:58

I get between six and seven. Consistently, I try to go to bed in that 11:00 PM to 1:00 AM window and get up in the 7:00 to 8:00 window.

01:43:04

My problem is if I have work to do, I'll get on the computer or my laptop, and once I start that in my evening routine, I can't stop. Then all of a sudden, it's 3:00 in the morning and I'm like, Oh, no, what did I just do? And I still have to get up at 6:30. That does happen to me.

01:43:19

Last night was unusual for me, but it was funny anyway. I thought, Oh, I should go to bed early because I'm on All-In. But I ended up eating ice cream with the kids late.

01:43:30

Wait, what was the brand? You said you went for another brand.

01:43:33

I want to know the brand. I think it's Van Leeuwen or something like that.

01:43:36

Van Leeuwen, yeah, of course. New York, Brooklyn.

01:43:39

That is good.

01:43:39

The holiday cookies and cream. Oh, my God, so good. Yeah, it's so good. Van Leeuwen, good call. After I'd polished it off, I was like, I probably ate too much to go to bed, so I'd better work out. So I did a kettlebell workout.

01:43:50

You sound like Chamath.

01:43:53

What did you say?

01:43:54

I have eight kettlebells right here, right next to me. Of course.

01:43:59

Freeberg, this is called working out for you, what you're seeing here.

01:44:03

Then while I was doing my kettlebell suitcase carry, I was texting with an entrepreneur friend, so you can tell how intense my workout was. He's in Singapore, so it was the middle of the night for me and early for him. And then it was time to go to bed. I was like, Okay, now I've got to get to bed. How do I get to bed? My body is all amped up. I've got food in my stomach. Ice cream. Ice cream. My brain is all amped up. And the All-In podcast is tomorrow. It's 1:30 in the morning. I'd better get to bed. I put on one of those little spiritual videos to calm me down. Then I got in bed and I was like, There's no way I'm falling asleep. I started meditating, and five minutes later, I was asleep.

01:44:45

Actually, the Dalai Lama has these great... On his YouTube channel, he's got these great two-hour discussions. You get about 20, 30 minutes into that, you will fall asleep.

01:44:54

Well, yeah, but my learning is- You'll watch any Dharma lecture from the SSA center. Exactly. My learning is that the mind will do anything to avoid meditation.

01:45:06

Yes. By the way, did you guys see, just before we wrap, did you see all the confirmations? RFK Jr. confirmed, Brooke Rollins confirmed. By the way, if you look at Polymarket, Polymarket had it all right a couple of weeks ago.

01:45:18

I was tracking Polymarket. There was a moment where Tulsi fell to 56%. There was a moment when RFK fell to 75%, but then they bounced back and it was done. You could have bought.

01:45:27

We got to snipe that, man. You could have made money.

01:45:29

Polymarket had it. The media was like, No way he's getting confirmed. This is not going to happen. But Polymarket knows. It's so interesting, huh?

01:45:37

I love Polymarket. I saw a very insightful tweet, and I forget who wrote it, so I'm sorry, I can't give credit. But the guy basically said, Look, Trump has a narrow majority in the House and the Senate, and he can get everything he wants as long as the Republicans stay in line. So all the pressure and all the anger the MAGA movement directs against the left is pointless. It's all about keeping the right wing in line. It's all the people saying to the senators, Hey, I'm going to primary you. It's Nicole Shanahan saying, I'm going to primary you. It's Scott Presler saying, I'm moving to your district. That's the stuff that's moving the needle and causing the confirmations to go through. That's how you get Kash Patel. That's how you get Tulsi Gabbard as the DNI. That's how you get RFK at HHS.

01:46:20

You worry about any of these? Do you think any of them are too spicy for your taste, or do you just like the whole burn-it-down, put-in-the-crazy-outsiders approach and let them- Jason, that's such a bad characterization.

01:46:32

That's not a fair characterization. I mean, whatever.

01:46:33

I mean, the outsiders.

01:46:34

Honestly, it's like I never thought I'd see it, but I think between Elon and Sacks and people like that, we actually have builders and doers and financially intelligent people and economically intelligent people in charge. Despite all the craziness, Elon's not doing this for the money. He's doing it because he thinks it's the right thing to do. Of course.

01:46:50

He moved into the Roosevelt building for the next four months.

01:46:54

I think many of us had bought into the great-forces-of-history mindset, where it's just like, Okay, it's inevitable. This is what's happening. Government always gets bigger, always gets slower. Yeah, me too. We just have to try and get stuff built before they shut everything down and we turn into Europe. But then Caesar crossed the Rubicon. The great man theory of history played out, and we're living in that time. It's an inspiration to all of us, despite Sam Altman and Elon's current fighting. I know Sam was inspired by Elon at one point, and I think all of us are inspired by Elon. I mean, the guy can be a top Diablo player and do DOGE and run SpaceX and Tesla and Boring and Neuralink. It's incredibly impressive. That's why I'm doing a hardware company now. It makes me want to do something useful with my life. Elon always makes me question, Am I doing something useful enough with my life? It's why I don't want to be an investor. Peter Thiel, ironically, is an investor, but he's inspirational that way, too, because he's like, Yeah, the future doesn't just happen.

01:47:50

You have to go make it. We get to go make the future, and I'm just glad that Elon and DOGE and others are making the future that I'm living in.

01:47:57

Is this a consumer hardware? What do we got going on here?

01:47:59

I'll give you a little hint- Maybe I'll reveal it on the All-In podcast in a couple of months, but it's really difficult. I'm not sure I can pull it off. So let me try. Let me just make sure it's viable. Is it drone-related?

01:48:07

Is it self-driving-related?

01:48:09

Drones are cool, but no, it's not.

01:48:11

Maybe the All-In podcast should be an angel investor.

01:48:14

Oh, yeah. Let's do a little syndicate. Absolutely.

01:48:15

Let's do a little syndicate. No syndicate, Jason. Just our money. What are you talking about?

01:48:19

You know how I learned about syndicates? It was Naval. The first syndicate I ever did on AngelList, I think, is still the biggest. I don't know, 5%, and Naval is my partner on this, for Calm.com.

01:48:31

I think you'll love what I'm working on if I pull it off. I think you guys will love it. I'd love to show you a demo.

01:48:36

Let us know where to send the check. I love you guys. Get that black cherry chip Van Leeuwen.

01:48:39

I love you guys. What have we learned? I got to go. Okay. Big shout out to Bobby and to Tulsi. That's a huge, huge change for America.

01:48:46

I'm stoked about both of them.

01:48:49

Congratulations.

01:48:50

I love me some Bobby Kennedy.

01:48:53

Let's get Bobby Kennedy back on the pod.

01:48:55

Let's get Bobby... Hey, Bobby, come back on the pod. For the Rain Man David Sacks, your Sultan of Science David Friedberg, the chairman dictator Chamath Palihapitiya, and Namaste Naval, I am the world's greatest moderator. We'll see you next time on the All-In Podcast.

01:49:17

Namaste, bitches. Bye-bye. Thanks, guys.

01:49:20

We'll let your winners ride.

01:49:23

Rain Man David Sacks.

01:49:25

And instead, we open-sourced it to the fans, and they've just gone crazy with it. Love you, besties.

01:49:32

The Queen of Quinoa. I'm doing all in. I'm going all in.

01:49:35

I'm going all in.

01:49:40

Besties are gone.

01:49:42

That's my dog taking a notice in your driveway. Oh, man.

01:49:47

Oh, man.

01:49:49

My half a tasher will meet me at what it's like.

01:49:51

We should all just get a room and just have one big huge orgy because they're all just useless. It's like this sexual tension that they just need to release somehow.

01:49:58

What? Wet your beak. What?

01:50:01

Wet your beak. We need to get merch.

01:50:05

I'm doing all in. I'm doing all in.

AI Transcription provided by HappyScribe
Episode description

(0:00) The Besties intro Naval Ravikant! (9:07) Naval reflects on his thoughtful tweets and reputation (14:17) Unique views on parenting (23:20) Sacks joins to talk AI: JD Vance's speech in Paris, Techno-Optimists vs Doomers (1:11:06) Tariffs and the US economic experiment (1:21:15) Thomson Reuters wins first major AI copyright decision on behalf of rights holders (1:35:35) Chamath's dinner with Bryan Johnson, sleep hacks (1:45:09) Tulsi Gabbard, RFK Jr. confirmed Follow Naval: https://x.com/naval Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect Referenced in the show: https://x.com/naval/status/1002103360646823936 https://x.com/CollinRugg/status/1889349078657716680 https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence https://www.cnn.com/2021/06/09/politics/kamala-harris-foreign-trip/index.html https://www.cnbc.com/2025/02/11/anduril-to-take-over-microsofts-22-billion-us-army-headset-program.html https://x.com/JDVance/status/1889640434793910659 https://www.youtube.com/watch?v=QCNYhuISzxg https://www.wired.com/story/thomson-reuters-ai-copyright-lawsuit https://admin.bakerlaw.com/wp-content/uploads/2023/11/ECF-1-Complaint.pdf https://www.youtube.com/watch?v=7xTGNNLPyMI https://polymarket.com/event/which-trump-picks-will-be-confirmed?tid=1739471077488