
Transcript of OpenAI's $150B conversion, Meta's AR glasses, Blue-collar boom, Risk of nuclear war

All-In with Chamath, Jason, Sacks & Friedberg
Published about 1 year ago
00:00:00

All right, everybody. Let's get the show started here.

00:00:03

Jason, why are you wearing a tux? What's going on there?

00:00:05

Oh, well, it's time for a very emotional segment we do here on the All In podcast. I just got to get myself composed for this.

00:00:16

Jason, are you okay?

00:00:17

I'm going to be okay, I think.

00:00:19

It looks like you're fighting back a tear.

00:00:21

Yeah, this is always a tough one. This year, we tragically lost giants in our industry. These individuals bravely honed their craft at OpenAI before departing. Ilya Sutskever, he left us in May. Jan Leike also left in May. John Schulman tragically left us in August.

00:00:50

Wait, these are all OpenAI employees?

00:00:52

Yes. Barret Zoph left on Wednesday. Bob McGrew also left on Wednesday.

00:00:59

Too soon. And Mira Murati also left us tragically on Wednesday. We lost Mira, too?

00:01:07

Yeah. And Greg Brockman is on extended leave.

00:01:12

The enforcer? He left, too?

00:01:14

Thank you for your service. Your memories will live on as training data. And may your memories be a vesting.

00:01:25

Let your winners ride.

00:01:28

Rain Man, David Sacks. I'm going all in.

00:01:32

And instead, we open sourced it to the fans, and they've just gone crazy with it.

00:01:36

Love you guys.

00:01:36

I'm going all in. I'm going all in.

00:01:41

Sorry, guys. Oh, my goodness. All those losses. Wow, that is- I know. Three in one day. Three in one day. My goodness. I thought OpenAI was nothing without its people.

00:01:54

Well, I mean, this is a great... Whoa, we lost somebody. Whoa, what's happening?

00:01:58

Well, come back. Wait, wait, what?

00:02:01

This is like the photo in Back to the Future. It's like the photo in Back to the Future.

00:02:06

Wow, they're just all gone. Wait, no, don't worry. He's replacing everybody. Here we go. He's replacing them with the G700, a Bugatti. I guess Sam's got mountains of cash. So don't worry, he's got a backup plan, Chamath. Anyway, as an industry and as leaders in the industry, the show sends its regards to Sam and the OpenAI team on their tragic losses. And congratulations on the $150 billion valuation and your 7%. Sam now just cashed in $10 billion, apparently. So congratulations to a friend of the pod, Sam Altman.

00:02:39

That's all reportedly out of some article, right? That's not confirmed or anything.

00:02:45

Is all of that done?

00:02:45

I mean, it's reportedly, allegedly, that he's going to have 7% of the company, and we can jump right into our first story. No, no.

00:02:52

I mean, what I'm saying is, has the money been wired and the docs been signed?

00:02:55

According to reports, this round is contingent on them not being a nonprofit anymore and sorting that all out.

00:03:04

They have to remove the profit cap and do the C-corp.

00:03:06

There's some article that reported this, right?

00:03:08

None of us have first- It was Bloomberg. It's not some article. It was Bloomberg, and it got a lot of traction, and it was re-reported by a lot of places, and I don't see anyone disputing it.

00:03:18

So now we trust the mainstream media?

00:03:20

We trust the mainstream media in this case because it aligns with- I just think- Because we could do a good bit.

00:03:25

Yeah.

00:03:26

That's fine. No, I think that Bloomberg reported it based on, obviously, talks that are ongoing with investors who have committed to this round. And no one's disputing it. Has anyone said it's not true?

00:03:36

This has been speculated for months, the $150 billion valuation, raising something in the range of 6 to 7 billion. If you do the math on that, and Bloomberg is correct that Sam Altman got his 7%, I guess that would be 10 billion.

00:03:54

The reality is you can't raise $6 billion without probably meeting with a few dozen firms, and some number of junior people in those few dozen firms are having a conversation or two with reporters. You can see how it gets out.

00:04:08

All right. Before we get to our first story there about OpenAI, congratulations to Chamath. Let's pull up the photo here. He was a featured guest on the Alex Jones Show. No, sorry. I'm sorry. That would be Joe Rogan. Congratulations on coming to Austin and being on Joe Rogan. What was it like to do a three-hour podcast with Joe Rogan?

00:04:32

It's great. I mean, I loved it. He's really awesome. He's super cool. It's good to do long-form stuff like this so that I can actually talk.

00:04:40

Clearly, the limitation of this podcast is the other three of us. Finally, you have found a way to make it about yourself.

00:04:48

No, I saw a comment. Somebody commented like, Oh, wow, it's amazing to hear Chamath expand on topics without the constant interruptions by J. Cal.

00:04:56

Also known as moderation.

00:04:59

I'm sorry. Someone called me Forman, like Forman from That '70s Show. I thought that was funny. The amount of trash talking in Rogan's YouTube comments, it's next level.

00:05:13

It is a troll center.

00:05:13

I mean, it is the Wild Wild West in terms of the comment section on YouTube.

00:05:20

A bunch of comments asking, Why do you call him Alex Jones? Is that because he's- He's just a Texas podcaster who's short and stout, and they look similar.

00:05:28

It just looks like Alex Jones started lifting weights, actually. No, they're both the same height and both have podcasts.

00:05:38

I saw Joe Rogan 25 years ago doing stand-up. I have a photo with him at the club. It was like a small club in San Francisco, and we hung out with him afterwards. He was just like a nobody back in the day. He was like a stand-up guy, right? Now he's a media Uber star.

00:05:52

You have to go back pretty far for Joe Rogan to be a nobody. I mean, he had a TV show for a long time.

00:05:58

Two of them, in fact.

00:05:59

He was more- He was a stand-up comic for a while. He was a stand-up comic. Stand-up comic, yeah.

00:06:03

Fear Factor, that's right.

00:06:04

That was the big show. But didn't he also do Survivor or one of those? Then the UFC. I mean, this guy's got four distinct careers.

00:06:12

I feel like that's where he blew up, UFC, yeah.

00:06:14

Well, I think he got the UFC out of Fear Factor and being a UFC fighter and a comedian. There's a famous story where Dana White was pursuing him, and he was like, I don't know. Then Dana White's like, I'll send a plane for you. You can bring your friends. He's like, Okay, fine. I'll do it. He did it for free. Then Dana White pursued him heavily to become the voice of the UFC. Obviously, it's grown tremendously, and it's worth billions of dollars. Okay.

00:06:43

How is OpenAI worth $150 billion? Can anyone apply?

00:06:48

Well, why don't we get into the topic?

00:06:50

Should we make the bull case and the bear case?

00:06:51

All right. OpenAI, as we were just joking about in the opening segment, is trying to convert into a for-profit benefit corporation. That's a B Corp. It just means, and we'll explain B Corp later. Sam Altman is reportedly- I thought they're converting to a C Corp, no?

00:07:06

It's the same thing. B Corp doesn't really mean anything.

00:07:09

A benefit corporation is a C corporation variant that is not a nonprofit, but the board of directors, Sacks, is required not only to be a fiduciary for all shareholders, but also for the stated mission of the company. That's my understanding of a B Corp. Am I right, Friedberg?

00:07:27

External stakeholders, yeah. So like the environment or society or whatever. But on all other legal and tax factors, it's the same as a C Corp.

00:07:37

It's a way to, I guess, signal to investors, the market, employees, that you care about something more than just profit. The most famous B Corp, I think, is Toms. Is that the shoe company, Toms? That's a famous B Corp. Somebody will look it up here. Patagonia. Patagonia, yeah. That falls into that category. So for-profit with a mission. Reuters has cited anonymous sources close to the company saying that the plan is still being hashed out with lawyers and shareholders, and the timeline isn't certain. But what's being discussed is that the nonprofit will continue to exist as a minority shareholder in the new company. How much of a minority shareholder, I guess, the devil's in the details there. Do they own 1% or 49%? The much-discussed, Friedberg, 100X profit cap for investors will be removed. That means investors like Vinod, friend of the pod, and Reid Hoffman, also friend of the pod, could see a 100X turn into a 1,000X or more. According to the Bloomberg report, Sam Altman is going to get his equity, finally, 7%, which would put him at around 10.5 billion, if this is all true. And OpenAI could be valued as high as $150 billion.

00:08:51

We'll get into all the shenanigans, but let's start with your question, Friedberg. And since you asked it, I'm going to Boomerang it back to you. Make the bull case for $150 billion valuation.

00:09:06

The bull case would be that the moat in the business with respect to model performance and infrastructure gets extended with the large amount of capital that they're raising. They aggressively deploy it. They are very strategic and tactical with respect to how they deploy that infrastructure to continue to improve model performance and, as a result, continue to extend their advantage in both consumer and enterprise applications, the API tools and so on that they offer. They can maintain both the model and application performance leads that they have today. Across the board, I would say: the o1 model, their voice application, Sora, which has not been released publicly, but if it is and it looks like what it's been demoed to be, it's certainly ahead of the pack. There are a lot of aspects of OpenAI today that make them a leader. If they can deploy infrastructure to maintain that lead and not let Google, Microsoft, Amazon, and others catch up, then their ability to use that capital wisely keeps them ahead. Ultimately, as we all know, there's a multi-trillion dollar market to capture here across lots of verticals, lots of applications, lots of products. They could become a true global player here.

00:10:21

Plus the extension into computing, which I'm excited to talk about later when we get into this computing stuff. Sacks.

00:10:26

Here's a chart of OpenAI's revenue growth that has been pieced together from various sources at various times. But you'll see here they are, reportedly, as of June of 2024, on a $3.4 billion run rate for this year, after hitting $2 billion in '23 and a $1.3 billion run rate in October of '23. Then back in 2022, it's reported they only had $28 million in revenue. This is a pretty big streak here in terms of revenue growth. I would put it at roughly 50 times top-line revenue at a $150 billion valuation. You want to give us the bear case, maybe, or the bull case?

00:11:08

Well, the whisper numbers I heard were that their revenue run rate for this year was in the 4-6 billion range, which is a little higher than that. So you're right, if it's really more like 3.4, this valuation is about 50 times current revenue. But if it's more like 5 billion, then it's only 30 times. And if it's growing 100% year over year, it's only 15 times next year. So depending on what the numbers actually are, the $150 billion valuation could be warranted. I don't think 15 times forward ARR is a high valuation for a company that has this strategic opportunity. I think it all comes down to the durability of its comparative advantage here. I think there's no question that OpenAI is the leader of the pack. It has the most advanced AI models. It's got the best developer ecosystem, the best APIs. It keeps rolling out new products. And the question is just how durable that advantage is. Is there really a moat to any of this? For example, Meta just announced Llama 3.2, which can do voice. And this is roughly at the same time that OpenAI just released its voice API.
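For readers who want to check the arithmetic being tossed around here, a quick back-of-the-envelope sketch in Python. The run-rate figures are the reported and whisper numbers from the conversation, not confirmed financials, and the 100% growth assumption is the one Sacks mentions.

```python
# Back-of-the-envelope revenue multiples for a $150B valuation.
# Figures are the reported/whisper numbers discussed above, not confirmed.
valuation = 150e9

for label, run_rate in [("reported ~$3.4B run rate", 3.4e9),
                        ("whisper ~$5B run rate", 5.0e9)]:
    current_multiple = valuation / run_rate
    forward_multiple = valuation / (run_rate * 2)  # assumes ~100% YoY growth
    print(f"{label}: {current_multiple:.0f}x current, {forward_multiple:.0f}x forward ARR")

# reported ~$3.4B run rate: 44x current, 22x forward ARR
# whisper ~$5B run rate: 30x current, 15x forward ARR
```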

00:12:22

So the open source ecosystem is hot on OpenAI's heels. The large companies, Google, Microsoft, so forth, they're hot on their heels, too, although it seems like they're further behind where meta is. And the question is just, can OpenAI maintain its lead? Can it consolidate its lead? Can it develop some moats? If so, it's on track to be the next trillion dollar big tech company. But if not, it could be eroded and you could see the value of OpenAI get commoditized. And we'll look back on it as a cautionary tale.

00:12:56

Okay, Chamath, do us a favor here. If there is a bear case, what is it?

00:13:00

Okay, let's steelman the bear case.

00:13:02

Yes, that's what I'm asking, please.

00:13:05

So one would just be on the fundamental technology itself. I think the version of that story would go that the underlying frameworks that people are using to make these models great are well-described and available in open source. On top of that, there are at least two viable open source models that are as good or better at any point in time than OpenAI. What that would mean is that the value of those models, the economic value, basically goes to zero, and it's a consumer surplus for the people that use it. That's very hard, theoretically, to monetize. I think the second part of the bear case would be, specifically, that Meta becomes much more aggressive in inserting Meta AI into all of the critical apps that they control, because those apps really are the front door to billions of people on a daily basis. That would mean WhatsApp, Instagram, Messenger, the Facebook app, and Threads get refactored in a way where instead of leaving that application to go to a ChatGPT-like app, you would just stay in the app. Then the companion to that would be that Google also does the same thing with their version in front of search.

00:14:42

Those two big front doors to the internet become much more aggressive in giving you a reason to not have to go to ChatGPT because, A, their answers are just as good, and B, they're right there in a few less clicks for you. That would be the second piece. The third piece is that all of these models basically run out of viable data to differentiate themselves, and it basically becomes a race around synthetic information and synthetic data, which is a cost problem. Meaning, if you're going to invent synthetic data, you're going to have to spend money to do it. The large companies, Facebook, Microsoft, Amazon, Google, Apple, have effectively infinite money compared to any startup. Then the fourth, which is the most quizzical one, is: what does the human capital thing tell you about what's going on? It reads a little bit like a telenovela. I have not, in my time in Silicon Valley, ever seen a company that's supposedly on such a straight line to a rocket ship have so much high-level churn. But I've also never seen a company have this much liquidity. How are people deciding to leave if they think it's going to be a trillion-dollar company?

00:16:06

Why, when things are just starting to cook, would you leave if you are technically enamored with what you're building? If you had to construct the bear case, I think those would be the four things: open source, front door competition, the move to synthetic data, and all of the executive turnover would be why you would say, maybe there's a fire where there's all this smoke.

00:16:31

Okay, I think this is very well put. I have been using ChatGPT and Claude and Gemini exclusively. I stopped using Google Search, and I also stopped, Sacks, asking people on my team to do stuff before I ask ChatGPT to do it, specifically, Friedberg, the o1 version. The o1 version is distinctly different. Have you gentlemen been using o1 on a daily basis? I have. Yes, every day. Okay, so we can have a really interesting conversation here. I did something on my other podcast, This Week in Startups, that I'll show you right now. That was crazy yesterday.

00:17:09

o1 is a game changer. It's the first real chain of thought production system that I think we've seen.

00:17:17

Are you using o1 Preview or o1 Mini?

00:17:20

I am using o1 Preview. Now, let me show you what I did here. Just so the audience can level set here, if you're not watching us, go to YouTube and type in All-In, and you can watch us. We do video here. I was analyzing just some early-stage deals and cap tables, and I put in here, Hey, a startup just raised some money at this valuation. Here's what the friends and family invested, the accelerator, the seed investor, et cetera. In other words, like the history, the investment history in a company. What o1 does is distinctly different than the previous versions. The previous version, I felt, was three to six months ahead of competitors; this is a year ahead of competitors. And so here, Chamath, if you look, it said it thought for 77 seconds. And if you click the down arrow, Sacks, what you'll see is it gives you an idea of what its rationale is for interpreting and what secondary queries it's doing in order to give the answer.

00:18:17

This is called chain of thought. This is the underlying mega model that sits on top of the LLMs. The mega model, effectively the chain of thought approach, is this: the model asks itself the question, How should I answer this question? Then it comes up with an answer. Then it says, Now, based on that, what are the steps I should take to answer the question? The model keeps asking itself questions related to the structure of the question that you ask. Then it comes up with a series of steps that it can then call the LLM to do to fill in the blanks, link them all together, and come up with the answer. It's the same way that a human train of thought works. It really is the ultimate evolution of what a lot of people have said these systems need to become, which is a much more, call it, intuitive approach to answering questions rather than just predictive text based on the single statement you made. It really is changing the game, and everyone is going to chase this and follow this. It is the new paradigm for how these AI systems will work.
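A minimal sketch of the loop Friedberg is describing: plan the steps, work each one, then synthesize. The `ask_llm` wrapper is a hypothetical placeholder for whatever LLM API you use; o1 does this internally rather than through an external loop like this, so treat it as an illustration of the idea, not OpenAI's implementation.

```python
# Minimal chain-of-thought sketch: the model first plans the steps needed to
# answer, then works each step, then synthesizes a final answer.
# `ask_llm` is a hypothetical wrapper around whatever LLM API you use.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wrap your LLM call here")

def chain_of_thought(question: str) -> str:
    # 1. Ask the model how it should approach the question.
    plan = ask_llm(f"List the steps needed to answer: {question}")

    # 2. Work each step, carrying earlier results forward as context.
    notes = []
    for step in plan.splitlines():
        if step.strip():
            notes.append(ask_llm(f"Question: {question}\nPrior work: {notes}\nNow do: {step}"))

    # 3. Synthesize and sanity-check a final answer from the intermediate work.
    return ask_llm(f"Question: {question}\nIntermediate work: {notes}\n"
                   "Write the final answer, checking the work for consistency.")
```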

00:19:23

And by the way, what this did was what prompt engineers were doing, or prompt engineering websites were doing, which was trying to help you construct your question. And so if you look at this one, it says listing disparities, I'll compile a cap table with investments and valuations, building the cap table, assessing the share valuation, breaking down ownership, breaking down ownership, et cetera, evaluating the terms, and then it checks its work a bit, it weighs investment options. You can see this fired off like two dozen different queries to, as Friedberg correctly pointed out, build this chain. It got incredible answers. Explain the form. So it's thinking about what your next question would be. When I shared this with my team, it was like a super game changer. Sacks, you had some thoughts here.

00:20:13

Well, yeah, this is pretty impressive. And just to build on what Friedberg was saying about chain of thought, where this all leads is to agents, where you can actually tell the AI to do work for you. You give it an objective, it can break the objective down into tasks, and then it can work each of those tasks. OpenAI, at a recent meeting with investors, said that PhD-level reasoning was next on its roadmap, and then agents weren't far behind that. They've now released, at least in preview, the PhD-level reasoning with this o1 model. So I think we can expect an announcement pretty soon about agents.
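Sacks' description of an agent, give it an objective, let it break the objective into tasks and work each one, is the same loop with tool calls added. A hypothetical sketch; the tool registry, tool names, and the `ask_llm` helper are made up for illustration, and real agent frameworks handle parsing and error recovery far more carefully.

```python
# Hypothetical agent loop: decompose an objective into tasks, then execute
# each task with whichever tool the model chooses. Tool names are illustrative.

TOOLS = {
    "search_crm": lambda q: f"(CRM results for {q!r})",
    "send_email": lambda body: f"(email queued: {body[:40]}...)",
}

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wrap your LLM call here")

def run_agent(objective: str) -> list[str]:
    tasks = ask_llm(f"Break this objective into concrete tasks, one per line: {objective}")
    results = []
    for task in filter(str.strip, tasks.splitlines()):
        # The model picks a tool and an input for this task; parsing is very simplified.
        choice = ask_llm(f"Task: {task}\nAvailable tools: {list(TOOLS)}\n"
                         "Reply as 'tool_name: input'.")
        tool_name, _, tool_input = choice.partition(":")
        results.append(TOOLS.get(tool_name.strip(), str)(tool_input.strip()))
    return results
```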

00:20:50

Yeah.

00:20:50

And so, if you think about business value, and we think a lot about this, it's like, where's the SaaS opportunity in all this? The software-as-a-service opportunity? It's going to be in agents. I think we'll ultimately look back on these chat models as a little bit of a parlor trick compared to what agents are going to do in the workplace. If you've ever been to a call center or an operations center, they're also called service factories, it's assembly lines of people doing very complicated knowledge work. But ultimately, you can unravel exactly what the chain is there, the chain of thought that goes into their decisions. It's very complicated, and that's why you have to have humans doing it. But you could imagine that once system integrators or enterprise SaaS apps go into these places, go into these companies, they integrate the data, and then they map out the workflow, you could replace a lot of these steps in the workflow with agents. A lot of knowledge work. Yeah.

00:21:50

By the way, it's not just call centers. I had a conversation with the CEO of a company I'm on the board of the other day, and he was like, Well, we're going to hire an analyst that's going to sit between our retail sales operations and figure out what's working to drive marketing decisions. I'm like, No, you're not. I really think that that would be a mistake, because today you can use o1 and just feed it the data and describe the analysis you want to get out of that data. Within a few minutes, and I've now done this probably a dozen times in the last week with different projects internally at my company, it gives you the entire answer that an analyst would have taken days to put together for you. If you think about what an analyst's job has been historically, it's that they take data and then they manipulate it. The big evolution in software over the last decade and a half has been tools that give that analyst leverage to do that data manipulation more quickly, like Tableau and R and all sorts of different toolkits that are out there. But now you don't even need the analyst, because the analyst is the chain of thought.

00:22:55

It's the prompting from the model. It's completely going to change how knowledge work is done. Everyone that owns a function no longer needs an analyst. The analyst is the model that's sitting on the computer in front of you right now, and you tell it what you want, and not days later, but minutes later, you get your answer. It's completely revolutionary in ad hoc knowledge work as well as in this repetitive, structured knowledge work.
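What Friedberg describes, feeding the model the raw data plus a description of the analysis you want, amounts to a single prompt in practice. A minimal sketch using the OpenAI Python client; the CSV file name and the exact analysis request are placeholders, not anything from the episode.

```python
# Minimal "analyst in a prompt" sketch with the OpenAI Python client.
# The CSV path and the analysis request below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("retail_sales.csv") as f:
    raw_data = f.read()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": (
            "Here is our weekly retail sales data as CSV:\n"
            f"{raw_data}\n\n"
            "Analyze which channels and promotions are driving sales, "
            "and recommend where to shift marketing spend."
        ),
    }],
)
print(response.choices[0].message.content)
```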

00:23:21

This is such a good point, Friedberg. The ad hoc piece of it: when we're processing 20,000 applications for funding a year, we do 100-plus meetings a week. The analysts on our team are now putting in the transcripts and key questions about markets, and they are getting so smart, so fast, that when somebody comes to them with a marketplace in diamonds, their understanding of the diamond marketplace becomes so rich, so fast, that we can evaluate companies faster. Then we're also seeing, Chamath, before we call our lawyers, when we have a legal question about a document, we start putting in, let's say, the standard note template or the standard SAFE template. We put in the new one, and there's a really cool project by Google called NotebookLM, where you can put in multiple documents, and you can start asking questions. Imagine you take every single legal document, Sacks, that Yammer had when you had Chamath as an investor, I'm not sure if he was on the board, and you could start asking questions about the documents. And when we have had people make changes to these documents, it immediately finds them, explains them. And so everybody's just getting so goddamn smart, so fast, using these tools that I insisted that every person on the team, when they hit Control-Tab, it opens a ChatGPT window on o1, and we burned out our credits immediately.

00:24:43

It stopped us. It said, You have to stop using it for the rest of the month. Chamath, your thoughts on this?

00:24:48

We're seeing it in real-time at 8090. What I'll tell you is what Sacks said is totally right. There's so many companies that have very complicated processes that are a combination of well-trained and well-meaning people and bad software. What I mean by bad software is that some other third party came in, listened to what your business process was, and then wrote this clunky, deterministic code, usually on top of some system of record, charged you tens or hundreds of millions of dollars for it, and then left, and will support it only if you keep paying them millions of dollars a year. That whole thing is so nuts, because the ability for people to do work, I think, has been very much constrained. It's constrained by people trying to do the right thing using really, really terrible software. All of that will go away. The radical idea that I would put out there is I think that systems of record will no longer exist, because they don't need to. The reason is because all you have is data, and you have a pipeline of information.

00:26:00

Can you level set and just explain to people what a system of record is, just so the audience- So inside of a company, you'll have a handful of systems that people would say are the single source of truth.

00:26:08

They are the things that are used for reporting, compliance. An example would be for your general ledger. So to- NetSuite. To record your revenues, you'd use NetSuite, or you'd use Oracle GL, or you'd use Workday Financials. Then you'd have a different system of record for all of your revenue-generating activities. Who are all the people you sell to? How are sales going? What does the pipeline look like? There are companies like Salesforce or HubSpot, SugarCRM. Then there's a system of record for all the employees that work for you, all the benefits they have, what their salary is; this is the HRIS. So the point is that the software economy over the last 20 years, and this is trillions of dollars of market cap and hundreds of billions of revenue, has been built on this premise that we will create this system of record, you will build apps on top of the system of record, and the knowledge workers will come in, and that's how they will get work done. I think that Sacks is right. This totally flips that on its head. Instead, what will happen is people will provision an agent and roughly direct what they want the outcome to be, and they'll be process-independent.

00:27:25

They won't care how it does it. They just want the answer. I think two things happen. The obvious thing that happens in that world is systems of record lose the grip on the vault that they had in terms of the data that runs a company. You don't necessarily need it with the same reliance and primacy that you did 5 and 10 years ago. That'll have an impact on the software economy. The second thing, which I think is even more important than that, is that the atomic size of companies changes, because each company will get much more leverage from using software and few people versus lots of people with a few pieces of software. And so that inversion, I think, creates tremendous potential for operating leverage.

00:28:14

All right. Your thoughts, Sacks, you operate in the SaaS space with system of records and investing in these type of companies. Give us your take.

00:28:21

Well, it's interesting. We were having a version of this conversation last week on the pod, and I started getting texts from Benioff as he was listening to it, and then he called me, and I think he got a little bit triggered by the idea that systems of record, like Salesforce, are going to be obsolete in this new AI era. And he made a very compelling case to me about why that wouldn't happen.

00:28:44

Which is?

00:28:44

Well, first of all, I think AI models are predictive. At the end of the day, they're predicting the next set of tokens and so forth. And when it comes to your employee list or your customer list, you just want to have a source of truth. You don't want it to be 98% accurate. You want it to be 100% accurate. You want to know that if the federal government asks you for the tax ID numbers of your employees, you're able to give it to them. If Wall Street analysts ask you for your customer list and what the GAAP revenue is, you just want to be able to provide that. You don't want AI models figuring it out. So you're still going to need a system of record. Furthermore, he made the point that you still need databases, you still need enterprise security if you're dealing with enterprises, you still need compliance, you still need sharing models. There are all these aspects, all these things that have been built on top of the database that SaaS companies have been doing for 25 years. And then the final point that I think is compelling is that enterprise customers don't want to DIY it.

00:29:41

They don't want to have to figure out how to put this together. And you can't just hand them an LLM and say, Here you go. There's a lot of work that is needed in order to make these models productive. At a minimum, you're going to need system integrators and consultants to come in there, connect all the enterprise data to these models, and map the workflows.

00:30:05

You have to do that now. How is that different from how this clunky software is sold today? I mean, look, I don't want to take away from the quality of the company that Mark has built and what he's done for the cloud economy. So let's just put that aside. But I wish this is what we could have actually all been on stage and talked about. I told him that when he was at the summit. I said that. Because I disagree with basically every premise of those three things. Number one, systems integrators exist today to build apps on top of these things. Why do you think you have companies like Veeva? How can a $20 billion plus company get built on top of Salesforce? It's because it doesn't do what it's meant to do. That's why.

00:30:46

Otherwise, Salesforce would have subsumed that opportunity. In fairness, app stores are a great way to allow people to build on your platform and cover those niche cases.

00:30:54

The point I'm trying to make is that's no different than the economy that exists today. It's just going to transform to different groups of people, number one.

00:30:59

Well, by the way, he said he's willing to come on the pod and talk about this very issue. But just with you? No, no, no. He'll come on the pod.

00:31:06

He'll talk to all of us now? Great.

00:31:08

He'll come on the pod and discuss whether AI makes SaaS obsolete. A lot of people are asking that question.

00:31:13

Let's talk about it next year at the summit.

00:31:15

Can you talk about his philanthropy first? Okay, let's get back to focus here. Let's get focused, everybody.

00:31:21

Spicy.

00:31:22

Love you, Mark.

00:31:23

Who's coming to Dreamforce? Raise your hand.

00:31:26

I want to make another point. The second point is that when you have agents, I think that we are overestimating what a system of record is. David, what you talked about is actually just an encrypted file, or it's a bunch of rows in some database, or it's in some data lake somewhere. You don't need to spend tens or hundreds of millions of dollars to wrap your revenue in something that says it's a system of record. You don't need that, actually. You can just pipe that stuff directly from Stripe into Snowflake, and you can just transform it, do what you will with it, and then report it. I'll tell you what- You could do that today.
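Chamath's "pipe it from Stripe into Snowflake" point is concrete enough to sketch. A hedged example using the standard stripe and snowflake-connector-python clients; the credentials, database, and table names are placeholders, and a production pipeline would batch inserts and handle schema evolution rather than looping row by row.

```python
# Rough sketch of piping revenue data straight from Stripe into Snowflake,
# then reporting on it with SQL. Credentials and table names are placeholders.
import os
import stripe
import snowflake.connector

stripe.api_key = os.environ["STRIPE_API_KEY"]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="FINANCE",
    schema="RAW",
)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS CHARGES (ID STRING, AMOUNT NUMBER, CREATED TIMESTAMP)")

# Pull recent charges from Stripe and land them in Snowflake as-is.
for charge in stripe.Charge.list(limit=100).auto_paging_iter():
    cur.execute(
        "INSERT INTO CHARGES VALUES (%s, %s, TO_TIMESTAMP(%s))",
        (charge.id, charge.amount, charge.created),
    )

# "Transform it, do what you will with it, and then report it."
cur.execute("SELECT DATE_TRUNC('month', CREATED), SUM(AMOUNT) / 100 FROM CHARGES GROUP BY 1")
print(cur.fetchall())
```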

00:32:03

I'll tell you what- It's just that- That's an interesting point.

00:32:05

Through steak dinners and golf outings and all this stuff, we've sold CIOs this idea that you need to wrap it in something called a system of record. All I'm saying is, when you confront the total cost of that versus the alternative that is clearly going to happen in the next 5 or 10 years, irrespective of whether any of us build it or not, it will be deflationary. You just won't be able to justify it.

00:32:30

It's going to cost a fraction of the price.

00:32:32

There's probably also an aspect of this that we can't predict with respect to data structure. Right now, all of the tooling for AI is on the front end. We haven't yet unleashed AI on the back end, which is, if you told the AI, Here's all the data ingest I'm going to be doing from all these different points in my business, figure out what you want to do with all that data, the AI will eventually come up with its own data structure and data system. No, that's happening. That will look nothing like- No, that's already happening. Right. That will look nothing like what we have today. In the same vein that we don't understand how the translation works in an LLM, we don't understand how a lot of the function works. A lot of the data structure and data architecture we won't understand clearly, because it's going to be obfuscated by the model driving the development.

00:33:25

There are open source agentic frameworks that already do, Friedberg, what you're saying. It's not true that it's not been done. It's already been done. Yeah, sure.

00:33:32

Maybe it's being done. It hasn't been fully implemented to replace the system of record.

00:33:37

There are companies, I'll give you an example of one, like Mechanical Orchard. They'll go into the gnarliest of environments. What they will do is they will launch these agents that observe, it's what I told you guys before, the I/O stream of these apps, and then reconstruct everything in the middle automatically. I don't understand why we think that there's a world where customer quality and NPS would not go sky high for a company that has some old legacy Fortran system, and now they can just pay Mechanical Orchard a few million bucks, and they'll just replace it in a matter of months. It's going to happen.

00:34:14

The, I mean, very interesting piece for me is I'm watching startups working on this. The AI-first ones, I think, are going to come at it with a totally different cost structure. The idea of paying for seats, and some of these seats are $5,000 per person per year.

00:34:30

A year. You nailed it a year ago when you were like, Oh, you mentioned some company that had flat pricing. At first, by the way, when you said that, I thought, This is nuts. But you're right. It actually makes a ton of sense because if you have a fixed group of people who can use this tooling to basically effectively be as productive as a company that's 10 times as big as you, you can afford to flat price your software because you can just work backwards from what margin structure you want, and it's still meaningfully cheaper than any other alternative.
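The "work backwards from what margin structure you want" idea is simple arithmetic; a hypothetical sketch with made-up numbers, compared against the per-seat pricing Jason mentions above.

```python
# Hypothetical flat-pricing math: work backwards from a target gross margin
# to a flat annual price, given an estimate of the customer's usage cost.
# All numbers are made up for illustration.
est_annual_model_cost_per_customer = 30_000   # inference/infra cost you expect to absorb
target_gross_margin = 0.70

flat_annual_price = est_annual_model_cost_per_customer / (1 - target_gross_margin)
print(f"Flat price: ${flat_annual_price:,.0f}/year")        # $100,000/year

# Compare against per-seat pricing for an incumbent alternative.
incumbent_seat_price, seats = 5_000, 50
print(f"Incumbent: ${incumbent_seat_price * seats:,}/year")  # $250,000/year
```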

00:35:04

A lot of startups now are doing consumption-based pricing. So they're saying, How many sales calls are you doing? How many are we analyzing? As opposed to, How many sales executives do you have? Because when you have agents, as we're talking about, those agents are going to do a lot of the work. So we're going to see the number of people working at companies become fixed. And I think the static team size that we're seeing at a lot of large companies is only going to continue. It's going to be down and to the right. And if you think you're going to get a high-paying job at a big tech company, you don't just have to beat the agent, you're going to have to beat the maestro who has five agents working for them. I think this is going to be a completely different world. Chamath, I want to get back to OpenAI with a couple of other pieces. So let's wrap this up so we get to the next thing.

00:35:53

Yes, please.

00:35:54

Last word for you.

00:35:57

Look, I think that on the whole, I agree with Benioff here that there's more net new opportunity for AI companies, whether they be startups or existing big companies like Salesforce that are trying to do AI, than there is disruption. I think there will be some disruption. It's very hard for us to see exactly what AI is going to look like in 5 or 10 years. So I don't want to discount the possibility that there will be some disruption of existing players. But I think on the whole, there's more net new opportunity. For example, the most highly valued public software company right now in terms of ARR multiple is Palantir. And I think that's largely because the market perceives Palantir as having a big AI opportunity. What is Palantir's approach? The first thing Palantir does when they go into a customer is they integrate with all of its systems. And they're dealing with the largest enterprises. They're dealing with the government, the Pentagon, the Department of Defense. The first thing they do is go in and integrate with all of these legacy systems, and they collect all of the data in one place. They call it creating a digital twin.

00:37:02

And once all the data is in one place with the right permissions and safeguards, now analysts can start working it. And that was their historical value proposition. But in addition, AI can now start working that problem. Anything that the analyst could work, now AI is going to be able to work. And so they're in an ideal position to master these new AI workflows. So what is the point I'm making? It's just that you can't just throw an LLM at these large enterprises. You have to go in there and integrate with the existing systems. It's not about ripping out the existing systems because that's just a lot of headaches that nobody needs. It's generally an easier approach just to collect the data.

00:37:39

Except when the renewal comes, what happens when you have to... You've got a really good negotiating position. You spend a billion dollars on something and then you're going to renegotiate. You're going to spend a billion dollars again five years from now? It just doesn't seem very likely.

00:37:50

There's going to be a lot of hardcore negotiations going on, Chamath. People are going to ask for 20% off, 50% off, and people are going to have to be more competitive. That's all.

00:37:58

I suspect, with Palantir's go-to-market, when they start to really scale, they'll be able to underprice a bunch of these other alternatives. I think that when you look at the impact on pricing that all of these open-source and closed-source model companies have now introduced in terms of the price per token, what we've seen is just a massive step function lower. It is incredibly deflationary. The things that sit on top are going to get priced as a function of that cost, which means it will be an order of magnitude cheaper than the stuff that it replaces, which means that a company would almost have to purposely want to keep paying tens of millions of dollars when they don't have to. They would need to make that an explicit decision. I think that very few companies will be in a position to be that cavalier in 5 and 10 years. You're either going to rebase the revenues of a bunch of these existing deterministic companies, or you're going to create an entire economy of new ones that have a fraction of the revenues today, but a very different profitability profile.

00:39:14

I just think that that's the cycle. I think whenever you're dealing with a disruption as big as this current one, I think it's always tempting to think in terms of the existing pie getting disrupted and shrunk, as opposed to the pie getting so big with new use cases that on the whole, the ecosystem benefits.

00:39:34

No, I agree with that.

00:39:35

I suspect that's what's going to happen. Both are true.

00:39:37

I agree with that. My only point is that the pie can get bigger while the slices get much, much smaller.

00:39:44

What do you think?

00:39:45

Well, right between the two of you, I think, is the truth, because what's happening is, if you look at investing, it's very hard to get into these late-stage companies, because they don't need as much capital. Because to your point, Chamath, when they do hit profitability with 10 or 20 people, the revenue per employee is going way up. If you look at Google, Uber, Airbnb, and Facebook/Meta, they have the same number of employees or fewer than they did three years ago, but they're all growing at 20 to 30% a year, which means in about 2-3 years, each of those companies has doubled revenue per employee. So that concept of more efficiency, and then that trickles down, Sacks, to the startup investing space where you and I are. I'm a pre-seed, seed investor; you're a seed, Series A investor. If you don't get in in those three or four rounds, I think it's going to be really expensive and the companies are not going to need as much money downstream.
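Jason's "doubled revenue per employee" claim is easy to sanity-check: with flat headcount, three years of 20-30% annual growth puts revenue per employee somewhere between roughly 1.7x and 2.2x. A quick check:

```python
# Sanity check: flat headcount plus 20-30% annual revenue growth lands
# revenue per employee at roughly 1.7x-2.2x after three years.
for growth in (0.20, 0.25, 0.30):
    multiple = (1 + growth) ** 3
    print(f"{growth:.0%} growth for 3 years -> {multiple:.2f}x revenue per employee")
# 20% -> 1.73x, 25% -> 1.95x, 30% -> 2.20x
```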

00:40:41

Speaking of investing in late-stage companies, we never close the loop on the whole OpenAI thing. What did we think of the fact that they're completely changing the structure of this company? They're changing into a corporation from the nonprofit, and Sam's now getting a $10 billion stock package.

00:40:59

He's not in it for the money. He has health insurance, Sacks.

00:41:02

But we never- Did you say he was in Congress?

00:41:05

I don't need money. I've got enough money. I just needed the health insurance.

00:41:09

Pull the clip up, Nick. Pull the clip up.

00:41:11

I mean, it's the funniest clip ever.

00:41:13

When I- That's not relevant. No, it's in Congress. Watch it. This is what he said in Congress.

00:41:18

You make a lot of money, do you?

00:41:21

No.

00:41:22

I'm paid enough for health insurance.

00:41:23

I have no equity in OpenAI.

00:41:24

Really? That's interesting. You need a lawyer.

00:41:28

I need a what?

00:41:29

You need a lawyer or an agent?

00:41:30

I'm doing this because I love it.

00:41:34

Thank you. That's the greatest. Look at me, you don't believe him. Can I ask you a question there, Sacks? Are you doing this, venture capital, where you put the money in the startups because you love it or because you're looking to get another home in a coastal city and put more jet fuel in that plane? I need an answer for the people of the sovereign state of Mississippi.

00:41:57

No, Louisiana. Louisiana. That's Senator John Kennedy from Louisiana. That's John Kennedy from Louisiana. He's a very smart guy, actually, with a lot of common folk wisdom.

00:42:08

He's got that simple talk. Yeah, exactly. We're straight shooters here in Louisiana.

00:42:12

He's hysterical, actually. Yeah, he's very funny, but- He's very funny. If you listen to him, he knows how to slice and dice his opponents.

00:42:21

You might need to get yourself one of them fancy agents from Hollywood or an attorney from the Wilson Sonsini Corporation to negotiate your contract, son, because you're worth a lot more, from what I can gather from your performance today, than just some simple health care. I hope you took the Blue Cross Blue Shield.

00:42:40

I would like to make two semi-serious observations.

00:42:44

Let's go. Please, let's get back on track.

00:42:46

I think the first is that there are going to be a lot of people looking at the architecture of this conversion, because if it passes muster, everybody should do it. Think about this model. Let's just say that you're in a market and you start as a nonprofit. What that really means is you pay no income tax. So for a long time, you pay out a little bit of a percentage of whatever you earn, but you can now outspend and outcompete all of your competitors. And then once you win, you flip to a C corporation. That's a great hack on the tax code.

00:43:23

And you let the donors get the first bite of the apple if you do convert. Because remember, Vinod and Hoffman got all their shares on the conversion.

00:43:33

The other way will also work as well, because there's nothing that says you can't go in the other direction. So let's assume that you're already a for-profit company, but you're in a space with a bunch of competitors. Can't you just do this conversion in reverse, become a nonprofit? Again, you pay no income tax, so now you are economically advantaged relative to your competitors. Then when they wither and die or you can outspend them, you flip back to a for-profit again. I think the point is that there's a lot of people that are going to watch this closely. If it's legal and it's allowed, I just don't understand why everybody wouldn't do this.

00:44:16

Yeah, that was Elon's point as well.

00:44:18

The second thing, which is just more of a cultural observation, is, and you brought up Elon, my comment to you guys yesterday, and I'll just make the comment here: It's a little bit disheartening to see a situation where Elon built something absolutely incredible, defied every expectation, and then had the justice system take $55 billion away from him.

00:44:49

His payment package you're referring to at Tesla.

00:44:51

His pay package, the options in Tesla. Then on the other side, Sam is going to pull something like this off, definitely pushing the boundaries, and he's going to make 10 billion. I just think when you put those two things in contrast, that's not how the system should probably work, I think is what most people would say.

00:45:12

Friedberg, you've been a little quiet here. Any thoughts on the transaction, the nonprofit to for-profit? If you were looking at that in what you're doing, do you see a way that Ohalo could take on nonprofit status, raise a bunch of money through donations for virtuous work, then license those patents to your for-profit? Would that be advantageous to you? And do you think this could become a new model?

00:45:34

I have absolutely zero idea. I have no idea what they're doing. I don't know how they're converting a nonprofit to a for-profit. None of us have the details on this. There may be significant tax implications, payments they need to make. I don't think any of us know. I certainly don't. I don't know if there's actually a real benefit here. If there is, I'm sure everyone would do it. No one's doing it, so there's probably a reason why it's difficult. I don't know.

00:45:56

It's been done a couple of times. The Mozilla Foundation did it. We talked about that in a previous episode. Sacks, do you want to wrap us up here on the corporate structure? Any final thoughts? I mean, Elon put in 50 million. I think he gets the same as Sam. Don't you think they should just chip off 7% for Elon? Not that Elon needs the money or is asking for it, but I'm just wondering why Elon doesn't get the 7% or if they're going to redo this.

00:46:19

Did Elon actually put in 50? Did he put in 50 million dollars? He put in 50 million is the report, right?

00:46:23

In the nonprofit. Hoffman put in 10.

00:46:26

Look, I said on a previous show that the organizational chart of OpenAI was ridiculously complicated, and they should go clean it up. They should open up the books and straighten everyone out. And I also said that as part of that, they could give Sam Altman a CEO option grant, and they should also give Elon some fair compensation for being the seed investor who put in the first $50 million and a co-founder. And what you're seeing is, well, they're doing that. They're opening up the books, they're straightening out the corporate structure, they're giving Sam his option grant, but they didn't do anything for Elon. And I'm not saying this as Elon's friend. I'm just saying that it's not really fair to basically go fix the original situation, you're making it into a for-profit, you're giving everyone shares, but the guy who put in the original seed capital doesn't get anything. That's ridiculous. And what they're basically saying to Elon is, If you don't like it, just sue us. I mean, that's basically what they're doing. And I said that they should go clean this up, but they should make it right with everybody. So how do you not make it right with Elon?

00:47:34

I haven't talked to him about this, but he reacted on X saying, This is really wrong. It appeared to be a surprise to him. I doubt he knew this was coming. So the company, apparently, made no effort to make things right with him. And I think that that is a bit ridiculous. If you're going to clean this up, if you're going to change the original purpose of this organization to being a standard for-profit company where the CEO, who previously said he wasn't going to get any compensation, is now getting $10 billion of compensation, how do you do that and then not clean it up for the co-founder who put in the first $50 million? That doesn't make sense to me. And when Reid was on our pod, he said, Well, Elon's rich enough. Well, that's not a principled excuse. I mean, does Vinod ever act that way? Does Reid ever act that way? Do they ever say, Well, you don't need to do what's fair for me because I'm already rich? That's not a principled answer.

00:48:30

The argument that I heard was that Elon was given the opportunity to invest along with Reid, along with Vinod, and he declined to participate in the for-profit investing side that everyone else participated in.

00:48:42

Reid made that argument, and I think it's the best argument the company has. But let's think about that argument. Maybe Elon was busy that week. Maybe Elon already felt like he had put all the money that he had allocated for something like this into it, because he put in a $50 million check, whereas Reid put in 10. We don't know what Elon was thinking at that time. Maybe there was a crisis at Tesla, and he was just really busy. The point is, Elon shouldn't have been obligated to put in more money into this venture. The fact of the matter is, they're refactoring the whole venture. Elon had an expectation when he put in the 50 million that this would be a nonprofit and stay a nonprofit. And they're changing that. And if they change it, they have to make things right with him. It doesn't really matter whether he had a subsequent opportunity to invest. He wasn't obligated to make that investment. What he had an expectation of is that his $50 million would be used for a philanthropic purpose. And clearly, it has not been.

00:49:38

Yeah. And in fairness to Vinod, he bought that incredible beachfront property and donated it to the public trust so we can all surf and have our Halloween party there. So it's all good. Thank you, Vinod, for giving us that incredible beach. I want to talk to you guys about interfaces. That came up, Chamath, in your headwinds, your four-pack of reasons that OpenAI, when you steelmanned the bear case, could have challenges. Obviously, we're seeing that, and it is emerging that Meta is working on some AR glasses that are really impressive. Additionally, I've installed iOS 18, which has Apple Intelligence that works on the 15 and 16 phones. 18 is the iOS. Did any of you install the beta of iOS 18 yet and use Siri? It's pretty clear with this new one that you're going to be able to talk to Siri as an LLM like you do in ChatGPT voice mode, which I think means they will not make themselves dependent on ChatGPT, and they will siphon off half the searches that would have gone to ChatGPT.

00:50:38

So I see that as a serious- Siri is not very good, J Cal. And you know this because when you were driving me to the airport yesterday- We tested it and it didn't work. He tries to execute this joke where he's like, Hey, Siri, send Chamath Palihapitiya a message. And it was a very off-color message. I'm not going to say what it is.

00:50:55

It was a spicy joke.

00:50:56

And then it's like, Okay, great. Sending Linda blah, blah, blah.

00:51:00

He's like, No, stop, stop, stop. I was like, No, don't send that joke to her.

00:51:02

It hallucinates and almost sends it to some woman in his contact. It would have been really damaging. It's not very good, Jason. It's not very good.

00:51:10

But what I will say is there are features of it where, if you squint a little bit, you will see that Siri is going to be conversational. I was talking to it with music, and you can have a conversation with it and do math like you can do with the ChatGPT version. You have Microsoft doing that with their Copilot, and now Meta is doing it at the top of each of their apps. So everybody's going to try to intercept the queries and the voice interface. So ChatGPT is now up against Meta, Siri and Apple, and Microsoft for that interface. It's going to be challenging. But let's talk about these Meta glasses here. Meta showed off the AR glasses that Nick will pull up right now. These aren't goggles. Goggles look like ski goggles. That's what Apple is doing with their Vision Pro, or when you see the Meta Quest, how those work; those are VR with cameras that will create a version of the world. These are actual chunky sunglasses, like the ones I was wearing earlier when I was doing the bit. So these let you operate in the real world, and they're supposedly extremely expensive.

00:52:20

They made a thousand prototypes. They were letting a bunch of influencers and folks like Gary Vaynerchuk use them, and they're not ready for prime time. But the way they work, Friedberg, is there's a wristband that will track your fingers and your wrist movement. So you could be in a conversation like we are here on the pod, and below the desk, you could be moving your arm and hand around to be doing replies to, I don't know, incoming messages or whatever it is. What do you think of this AR vision of the world and Meta making this progress?

00:52:52

Well, I think it ties in a lot to the AI discussion, because I think we're really witnessing this big shift and this big transition in computing, probably the biggest transition since mobile. We moved from mainframes to desktop computers, where everyone had this computer on their desktop, but you used a mouse and a keyboard to control it; to mobile, where you had a keyboard and clicking and touching on the screen to do things on it; and now to what I would call this ambient computing method. I think the big difference is control and response. In directed computing, you're telling the computer what to do. You're controlling it, you're using your mouse or your keyboard to go to this website. You type in a website address. Then you click on the thing that you want to click on, and you keep doing a series of work to get the computer to go access the information that you ultimately want to achieve your objective. But with ambient computing, you can more cleanly state your objective without this directive process. You can say, Hey, I want to have dinner in New York next Thursday at a Michelin star restaurant at 5:30.

00:54:01

Book me something, and it's done. I think that there are five core things that are needed for this to work, both in control and response. It's voice control, gesture control, and eye control for the control pieces. Those replace mice and clicking and touching and keyboards. Then the response is audio and integrated visual, which is the idea of the goggles. Voice control works. Have you guys used the OpenAI voice control system lately? I mean, it is really incredible. I had my earphones in and I was doing this exercise. I was trying to learn something. I told OpenAI to start quizzing me on this thing. I just did a 30-minute walk. While I was walking, it was asking me quiz questions, and I would answer, and it would tell me if I was right or wrong. It was really this incredible dialog experience. I think the voice control is there. I don't know if you guys have used Apple Vision Pro, but gesture control is here today. You can do single finger movements with Apple Vision Pro. It triggers actions. And eye control is incredible. You look at the letters you want to have spelled out, or you look at the thing you want to activate, and it does it.

00:55:02

All of the control systems for this ambient computing are there. Then the AI enables this audio response where it speaks to you. The big breakthrough that's needed, where I don't think we're quite there yet, but maybe Zuck is highlighting that we're almost there, and an Apple Vision Pro feels like it's almost there except it's big and bulky and expensive, is integrated visual, where the ambient visual interface is always there and you can engage with it. There's this big change coming. I don't think that mobile handsets are going to be around in 10 years. I don't think we're going to have a phone in our pocket that we're pressing buttons on and touching and telling it where on the browser to go. The browser interface is going to go away. I think so much of how computing is done, how we integrate with data in the world and how the computer ultimately fetches that data and does stuff with it for us, is going to completely change to this ambient model. I'm pretty excited about this evolution, but I think that what we're seeing with Zuck, what we saw with Apple Vision Pro, and all of the OpenAI demos, they all converge on this very incredible shift in computing that will become this ambient system that exists everywhere all the time.

00:56:05

I know folks have mentioned this in the past, but I think we're really seeing it all come together now with these five key things.

00:56:12

Chamath, any thoughts on Facebook's progress with AR and how that might impact computing and interfaces when paired with language models?

00:56:25

I think David's right that there's something that's going to be totally new and unexpected. I agree with that part of what Friedberg says. I am still not sure that glasses are the perfect form factor to be ubiquitous. When you look at a phone, a phone makes complete sense for literally everybody, right? Man, woman, old, young, every race, every country of the world. It's such a ubiquitously obvious form factor. But the thing is, that initial form factor was so different than what it replaced, even if you look at flip phones versus that first-generation iPhone. I do think, Friedberg, you're right that there's this new way of interacting that is ready to explode onto the scene. I think that these guys have done a really good job with these glasses. I give them a lot of credit for sticking with it and iterating through it and getting it to this place. It looks meaningfully better than the Vision Pro, to be totally honest. But I'm still not convinced that we've explored the best of our creativity in terms of the devices that we want to use with these AI models.

00:57:49

You need some visual interface. I think the question is, where is the visual interface? But do you? Is it in the wall? No, but do you? Well, when you're asking, I want to watch Chamath on Rogan, I don't just want to hear, I want to see. When I want to visualize stuff, I want to visualize it. I want to look at the food I'm buying online. I want to look at pictures of the restaurant I'm going to go to.

00:58:08

But how much of the time when you say those things are you not near some screen that you can just project and broadcast to?

00:58:16

I think it's probably- Right.

00:58:18

I think it's probably- If the use case is, I'm walking in the park and I need to watch TV at the same time, I don't think that's a real use case.

00:58:24

I think you're wrong on this one, Chamath, because I saw this revolution in Japan maybe 20 years ago. They got obsessed with augmented reality. There were a ton of startups right as they started getting to mobile phones, and the use cases were really very compelling, and we're starting to see them now in education. When you're at dinner with a bunch of people, it depends how often picking up your phone and looking at a message disturbs the flow. Well, people will have glasses on. They'll be going for walks, they'll be driving, they'll be at a dinner party, they'll be with their kids, and you'll have something on, like focus mode, whatever the equivalent is on Apple, and a message will come in from your spouse or from your child, but you won't have to take your phone out of your pocket. I think once these things weigh a lot less, you're going to have four different ways to interact with your computer: in your pocket, your phone; your watch; your AirPods, whatever you have in your ears; and the glasses. I bet you glasses are going to take a third of the tasks you do.

00:59:21

What is the point of taking out your phone and watching the Uber come to you? But seeing that little strip that tells you the Uber is 20 minutes away, 15 minutes away, or what the gate number is.

00:59:30

I don't have that anxiety.

00:59:33

Well, I don't know if it's anxiety, but I just think it's ease of use.

00:59:35

All those times- Fifteen minutes, 10 minutes, that's the definition.

00:59:38

I think it adds up. I think taking your phone out of your pocket 50 times a day- Those are all useless notifications.

00:59:44

The whole thing is to train yourself to realize that it'll come when it comes.

00:59:48

Okay, Sacks, do you have any thoughts on this impressive demo, or the demo that people who've seen it have said is pretty darn compelling?

00:59:55

I think it does look pretty impressive. I mean, you can wear these Meta Orion glasses and look like a human. I mean, you might look like Eugene Levy, but you'll still look like a semi-normal person, whereas you can't wear the Apple Vision Pro. I mean, you can't wear that around. What, they don't look good?

01:00:14

You don't like them?

01:00:15

Nick, can you please find a picture of Eugene Levy?

01:00:20

I mean, it seems like a major advancement, certainly compared to Apple Vision Pro. I mean, you don't hear about the Apple Vision Pros anymore at all. Those things came and went. It's pretty funny. It seems to me that- Who's that Galilla guy? Meta is executing extremely well. I mean, you had the very successful cost cutting, which Wall Street liked. Zuck published that letter, which I give him credit for, regretting the censorship that Meta did, which was at the behest of the deep state. They made huge advancements in AI. I don't think they were initially on the cutting edge of that, but they've caught up, and now they're leading the open source movement. They're firing on all cylinders. Yeah, with Llama 3.2. And now it seems to me that they're ahead on augmented reality. Ever since Zuck grew out the hair- Gold chain. Don't ever cut the hair. It's like Samson. It's like Samson.

01:01:17

Based Zuck is the best Zuck. He does not give a F.

01:01:21

I want to be clear. I think these glasses are going to be successful. My only comment is that I think when you look back 25 or 30 years from now and ask what the killer AI device was, I don't think it's going to look like something we know today. That's my only point. Maybe it's going to be this thing that Sam Altman and Jony Ive are baking up that's supposed to be this AI-infused iPhone killer. Maybe it's that thing. I doubt it will be a pair of glasses or a phone or a pin.

01:01:54

If you think about... So take on the constraints: I don't need a keyboard because I'm not going to be typing stuff, and I don't need a normal browser interface. You could see a device come out that's almost smaller than the palm of your hand that gives you enough of the visuals. All it is is a screen with maybe two buttons on the side. It's all audio-driven. You put earphones in and you're basically just talking, or using gestures, or looking at it to describe where you want things to go. It can create an entirely new computing interface because AI does all of these incredible things with predictive text, with gesture control, with eye control, and with audio control. Then it can just give you what you want on a screen, and all you're getting is a simple interface. So Chamath, you may be right. It might be a big watch or a handheld thing that's much smaller than an iPhone, and all it is is a screen with nothing else.

01:02:43

That really resonates with me when you talk about voice, only because I think there's a part of social decorum that all of these goggles and glasses violate. I think we're going to have to decide as a society whether that's going to be okay. Then I think, when you go trekking in Nepal, are you going to encounter somebody wearing AR glasses? I think the odds are pretty low, but you do see people today with a phone. So what do they replace it with? I think voice as a modality... I think it's more credible that that could be used by 8 billion people.

01:03:22

I think social fabric is more affected by people staring at their phones all the time. You sit on a bus, you sit at a restaurant, you go to dinner with someone and they're staring at their phone. Spouses, friends, we all deal with it where you feel like you're not getting attention from the person that you're interfacing with in the real world because they're so connected to the phone. If we can disconnect the phone, take away this addictive feedback loop system, but still give you this computing ability in a more ambient way that allows you to remain engaged in the physical world, I think everyone will feel a lot better about it.

01:03:51

You could say, Sacks hurts your feelings when he's playing chess and not paying attention.

01:03:54

Yeah, I'll be playing chess on my AR glasses while pretending to listen to you.

01:03:59

You idiot. He's buying them.

01:04:02

He got version one. But one point I want to just hit on is the reason why these glasses have a chance of working is because of AI. Facebook initially made these- That's exactly my point.

01:04:13

It's exactly my point.

01:04:15

Facebook made these huge investments before AI was a thing. In a way, I think they've gotten lucky because what AI gives you is voice and audio. You can talk to the glasses or whatever the wearable is. It can talk to you. That's the five things. Perfect natural language. And computer vision allows it to understand the world around you. So whatever this device is, it can be a true personal digital assistant in the real world.

01:04:41

And that's the opportunity. If you guys play with Apple Vision Pro, have any of you actually used it to any extent?

01:04:46

No, I used it for a day or a night when we were playing poker, and I've never used it again since.

01:04:54

Which I get, but I do think that it has these tools in it, similar to how the original Macintosh had these incredible graphics editors like MacPaint and all these things that people didn't get addicted to at the time, but that became the tools that completely revolutionized everything in computing later, fonts and so on. This, I think, has those tools: Apple Vision Pro with gesture control and the keyboard and the eye control. Those aspects of that device highlight where this could all go, which is that these systems can be driven without keyboards, without typing, without moving your finger around, without clicking on buttons.

01:05:34

I think that's the key observation. I really agree with what you just said. It's this idea that you're liberated from the hunting and pecking and the tapping.

01:05:44

It's the controlling. You don't need to control the computer anymore. The computer now knows what you want, and then the computer can just go and do the work.

01:05:53

Now, this is the behavior change that I don't think we're fully giving enough credit to. So today, part of what Jason talked about, what I called anxiety, is because of the information architecture of these apps. That is totally broken. The reason why it's broken is that when you tell an AI agent, Get me the cheapest car right now to go to XYZ place, it'll go and look at Lyft and Uber and whatever. It'll provision the car, and then it'll just tell you when it's coming. It will break this cycle that people have of having to check these apps for what is useless filler information. When you strip a lot of that notification traffic away, I think you'll find that people start looking at each other in the face more often. I think that that's a net positive. Will Meta sell hundreds of millions of these things? I suspect, probably. But all I'm saying is, if you look backwards 30 years from now, what is the device that sells in the billions? It's probably not a form factor that we understand today.
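As a rough illustration of the "get me the cheapest car" behavior change described here, the sketch below shows an agent polling several providers, booking the cheapest, and surfacing only the one message that matters. The get_quotes and book_cheapest functions are hypothetical stand-ins, not the real Uber or Lyft APIs.

```python
# Sketch only: the agent does the comparison loop the user used to do by hand.
from dataclasses import dataclass

@dataclass
class Quote:
    provider: str
    price: float
    eta_minutes: int

def get_quotes(destination: str) -> list[Quote]:
    # In a real agent this would call each provider's API; here it's stubbed.
    return [
        Quote("provider_a", 23.50, 6),
        Quote("provider_b", 19.75, 9),
        Quote("provider_c", 27.00, 4),
    ]

def book_cheapest(destination: str) -> str:
    quotes = get_quotes(destination)
    best = min(quotes, key=lambda q: q.price)
    # The booking call would go here; the user never sees the comparison step.
    return (f"Car booked with {best.provider} for ${best.price:.2f}, "
            f"arriving in about {best.eta_minutes} minutes.")

if __name__ == "__main__":
    # The only notification the user gets back, instead of an app-checking loop.
    print(book_cheapest("XYZ place"))
```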

01:06:53

I just want to point out the form factor you're seeing now is going to get greatly reduced. These were some of the early Apple prototypes. I don't know if you guys remember these, but Frog Design made these crazy tablets in the '80s that were the eventual inspiration for the iPad 25 years later, I guess. Exactly. And so that's the journey we're on here right now. That's right. This clunky... And these are not functional prototypes, obviously.

01:07:21

Look at the Apple Newton, dude. The Apple Newton is like the perfect- Exactly. People forget about that. And then it turns out, hey, you throw away the stylus and you've got an iPhone, right? Everything gets a million X better.

01:07:29

Another subtle thing that's happening, which I don't think we should sleep on, is that the AirPods are probably going to become much more socially acceptable to wear on a 24/7 basis because of this feature that allows them to become a useful hearing aid. I think as they start being worn in more and more social environments and as the form factor shrinks, that's when I really do think we're going to find some very novel use case, something very unobtrusive. It blends into your own physical makeup as a person without really sticking out. I think that's when you'll have a really killer feature. But I think that the AirPods as hearing aids will also add a lot. Meta is doing a lot, Apple is doing a lot, but I don't think we've yet seen the super killer hardware device.

01:08:18

There was an interesting waypoint. Microsoft had the first tablet. Here's the Microsoft tablet, for those of you watching. That came, I don't know, this was the late '90s or early 2000s, Friedberg, if you remember it. These incredibly- Oh, yeah. Bulky tablets that Bill Gates was bringing to all the events.

01:08:38

'99, 2000, that era.

01:08:39

You get a lot of false starts. They're spending, I think, close to $20 billion a year on this AR/VR stuff.

01:08:46

Anyway, we're definitely on this path to ambient computing. I don't think this whole, Hey, you got to control a computer thing is anything my kids are going to be doing in 20 years.

01:08:52

This is the convergence of three or four really interesting technological waves. All right, just dovetailing with tech jobs and static team size, there is a report of a blue-collar boom. The toolbelt generation is what Gen Z is being called in a report in the Wall Street Journal, which says, hey, tech jobs have dried up. We're all seeing that. According to Indeed, developer jobs are down 30% since February of 2020, pre-COVID, of course. If you look at Layoffs.fyi, you'll see all the tech jobs that have been eliminated since 2022, over half a million of them. There's a bunch of things at play here, and the Wall Street Journal notes that entry-level tech workers are getting hit the hardest, especially all these recent college graduates. And if you look at historical college enrollment, let's pull up that chart, Nick, you can see here undergraduate, graduate, and the total with the red line: we peaked at 21 million people in either graduate school or undergraduate in 2010, and that's come down to 8.6 million. At the same time, obviously, in the last 12 years, the population has grown, so if this were on a percentage basis, it would be even more dramatic.

01:10:09

So what's behind this? A poll of a thousand teens this summer found that about half believe a high school degree, trade program, or two-year degree best meets their career needs, and 56% said real-world, on-the-job experience is more valuable than obtaining a college degree. Something you've talked about with your own personal experience, Chamath, at Waterloo, doing apprenticeships, essentially. Your thoughts on the toolbelt generation?

01:10:33

Such a positive trend. I mean, there are so many reasons why this is good. I'll just list a handful that come to the top of my mind. The first, and probably the most important, is that it breaks this stranglehold that the university education system has on America's kids. We have tricked millions and millions of people into getting trillions of dollars in debt on this idea that you're learning something in university that's somehow going to give you economic stability and, ideally, freedom. It has turned out for so many people to not be true. It's just so absurd and unfair that that has happened. If you can go and get a trade degree and live an economically productive life where you can get married and have kids and take care of your family and do all the things you want to do, that's going to put an enormous amount of pressure on higher ed. Why does it charge so much? What does it give in return? That's one thought. The second thought, which is much more narrow: Peter Thiel has that famous saying that if you have to put the word science behind it, it's not really a thing.

01:11:48

What we are going to find out is that that was true for a whole bunch of things where people went to school, like political science and- Social science. Social science. But I always thought that computer science would be immune. I think he's going to be right about that, too, because you can spend $200,000 or $300,000 and get into debt to get a computer science degree, but you're probably better off learning JavaScript and learning these tools in some boot camp for far, far less and graduating in a position to make money right away. Those are just two ideas. I think that it allows us to be a better-functioning society. So I am really supportive of this trend.

01:12:28

Sacks, your thoughts on this toolbelt generation we're reading about, and the combination with the static team size we're seeing in technology, companies keeping the number of employees the same or trending down while they grow 30% year over year.

01:12:45

Oh, my God, I'm so sick of this topic of job loss or job disruption. I got in so much trouble last week. You asked a question about whether the upper middle class is going to suffer because they're all going to be put out of work by AI. I just brush it off, not because I'm advocating for that, but because I don't think it's going to happen. This whole thing about job loss is so overdone. There's going to be a lot of job disruption. But in the case of coders, just as an example, I think we can say that coders, depending on who you talk to, are 10, 20, 30% more productive as a result of these coding assistant tools. But we still need coders. You can't automate 100% of it. And the world needs so many of them. The need for software is unlimited. We can't hire enough of them at Glue. By the way, shout out: if you're a coder who is afraid of not being able to get a job, apply for one at Glue. Believe me, we're hiring. I just think that this is so overdone. There's going to be a lot of disruption in the knowledge worker space.

01:13:44

We talked about the workflow at call centers and service factories. There's going to be a lot of change. But at the end of the day, I think there's going to be plenty of work for humans to do, and some of the work will be more in the blue-collar space. I agree with Chamath that this is a good thing. I think there's been perhaps an over-emphasis on the idea that the only way to get ahead in life is to get a fancy degree from one of these universities. And we've seen that at many universities, the education is not that great and they're overpriced. You end up graduating with a mountain of debt, and you get a degree that is maybe even far worse than computer science, one that's just completely worthless. So if people learn more vocational skills, if they skip college because they have a proclivity to do something that doesn't need that degree, I think that's a good thing, and that's healthy for the economy.

01:14:36

Friedberg, is this just the pendulum having swung too far, and education got too expensive? Spending $200K to make $50,000 a year is distinctly different than our childhood, or I'm sorry, our adolescence, when we were able to go to college for $10K a year, $20K a year, and graduate with some low tens of thousands in debt, if you did take debt, and then your entry-level job was $50K, $60K, $70K coming out of college. What are your thoughts here? Is this a value issue with college?

01:15:01

Well, yeah, I think the market's definitely correcting itself. I think for years, as Chamath said, there was this belief that if you went to college, regardless of the college, there was this outcome where you would make enough money to justify the debt you were taking on. I think folks have woken up to the fact that that's not reality. Again, if there were a free market... but remember, most people go to college with student loans, and all student loans are funded by the federal government. The cost of education has ballooned, and the underwriting criteria necessary for a free market to work have been completely destroyed because of the federal spending in the student loan program. There's no discrimination between one school or another. You could go to Trump University or you could go to Harvard. It doesn't matter. You still get a student loan, even if at the end of the process you don't have a degree that's valuable. I think folks are now waking up to this fact and the market is correcting itself, which is good. I'll also say that I think there's this premium, with the general mass production and industrialization of everything, on the human touch. What I mean is, think about it: you could go to the store and buy a bunch of cheap food off of the store shelves.

01:16:23

You could buy a bunch of Hershey's chocolate bars, or you can go to a Swiss chocolatier in downtown San Francisco and pay $20 for a box of handmade chocolates. You'll pay that premium for that better product. Same with clothes. There's this big trend in handmade clothes and high-end luxury goods.

01:16:39

Bespoke. Artisanal is the word.

01:16:41

Artisanal, handmade. Similarly, I think that there's a premium on human service, on the partnership with a human. It's not just about blue-collar jobs. It's about having a waiter talk to you and serve you if you go to a restaurant, instead of having a machine spit out the food to you. There's an experience associated with that that you'll pay a premium for. There are hundreds and hundreds of microbreweries in the United States that in aggregate outsell Budweiser and Miller and even Modelo today. That's because they're handcrafted by local people, and there's artistry and craft. While technology and AI are going to completely reduce the cost of a lot of things and increase the production and productivity of those things, one of the complementary consequences of that is that there will be an emerging premium for human service. I think that there will be an absolute burgeoning and blossoming in the salaries and the availability and demand for human service in a lot of walks of life. Certainly, there's all the work at the home, the electricians and the plumbers and so on, but also fitness classes, food, personal service around tutoring and learning and developing oneself.

01:17:56

There's going to be an incredible blossoming, I think, in human service jobs, and they don't need to have a degree in poli sci to be performed. I think that there will be a lot of people that will be very happy in that world.

01:18:05

How do you see the differentiation the person makes, Friedberg, in doing that job versus the agent or the AI or whatever?

01:18:12

Well, these are in-person human jobs. If I want to do a fitness class, do I want to stare at the Tonal?

01:18:17

This is what I'm asking you. I think that there's an aspect of...

01:18:24

Look, it's like your Loro Piana. You talk about the story of Loro Piana. Where is the vicuña coming from? How's it made? Who's involved in it? Like, yes, look, you're- Oh, my God.

01:18:34

Look at those. Here it goes.

01:18:36

You're saying more.

01:18:37

Don't stop, Friedberg.

01:18:38

I could give you truffle flavoring out of a can, but you love the white truffles. You want to go to Italy, you want the storytelling. There's an aspect of it, right? I think that there's an aspect of humanity that we pay a premium for, and we do and will. Look, Etsy crushes. I don't know how much stuff you guys buy on Etsy. I love buying from Etsy. I love finding handmade stuff on Etsy. I buy my underwear there. No, you don't.

01:18:59

Do you really? Yes, I do.

01:19:00

Handcraft.

01:19:02

Yeah, handmade. I think that there's an aspect of this in a lot of walks of life. I have so many jokes right now.

01:19:08

They're just queuing up in their brain.

01:19:09

I don't know where to go. I've never used that seat, but I'm going to try it now after this. Have you guys taken music lessons lately? My kids do piano lessons. Last year, I started ducking in to do a 45-minute piano lesson with the piano teacher. There's just a great aspect to paying for these services, to getting- It's fascinating you bring that up. Oh, here we go.

01:19:29

You can play the harmonica?

01:19:32

Really? Why do you have that? I want to play some Zach Bryan songs, and he's got a couple of songs I like with the harmonica in them. So I just got a harmonica. My daughter and I have been playing harmonica.

01:19:41

Are you teaching yourself?

01:19:42

Let's hear it.

01:19:43

Let's hear it. I'll play it next week. I'm deep in the laboratory. It's not a bit. It could be a bit. It could be a bit. I'll write you a song next week.

01:19:52

He's a little shy.

01:19:54

No, I'll write a Trump song for you. He's organizing his harmonica a bit. I'll do the trials and tribulations of Donald Trump, and I'll do a little Bob Dylan send-up song for you. Trial, thank you.

01:20:06

Did you see that interview with Bob Dylan? I don't know when it was recently about how- And then writing the lyrics.

01:20:13

That clip?

01:20:14

Which one was it? Oh, the Ed Bradley clip is amazing.

01:20:15

With Ed Bradley clip?

01:20:17

About magic?

01:20:18

Yeah, well, you know some of those songs, I don't know how I wrote them. They just came out in- But the best part is what he said afterwards. You can't do that anymore?

01:20:26

He's like, No, but I did it once.

01:20:28

No, but I did it once. What an incredible... That means something.

01:20:34

Eclipses both the sun and moon. That's really grounding. It's really grounding.

01:20:37

To understand you know too soon, there is no sense in trying. Yeah, that's an incredible clip. All right, guys, you want to wrap, or do you want to keep talking about more stuff? We're at 90 minutes.

01:20:46

Let me just tell you something. I think there's going to be a big war. I think by the time the show airs, Israel's incursion into Lebanon is going to get bigger. It's going to escalate. And by next week, we could be in a full-blown multinational war in the Middle East. If I were a betting man, I would bet that the odds are more than 30 or 40% that this happens before the election, that this conflict in the Middle East escalates.

01:21:12

Thank you for bringing this up. I am not asking anybody to go listen to my interview with Rogan, but I will say this: part of why I was so excited to go and talk to him in a long-form format was that this issue of war is, I think, the existential crisis of this election and of this moment. I really do agree with you, Friedberg. There is a non-trivially high probability, the highest it's ever been, that we are just bumbling and sleepwalking into a really bad situation we can't walk back from. I really hope you're wrong.

01:21:56

Here's the situation.

01:21:58

I really hope you're wrong.

01:21:59

If and when Israel incurs further into Lebanon going after Hezbollah, and Iran ends up getting involved in a more active way, does Russia start to provide supplies to Iran, like we are supplying to Ukraine today? Does this bring everyone to a line? Just to give you a sense of the scale of what Israel could then respond with: Iran has 600,000 active-duty military, another 350,000 in reserve. They have dozens of ships, they have 19 submarines, they have a 600-kilometer-range missile system. Israel has 170,000 active-duty and half a million reserve personnel, 15 warships, five submarines, and potentially up to 400 nuclear weapons, including a very wide range of tactical sub-one-kiloton nuclear weapons, a small payload. You could see that if Israel starts to feel incurred upon further, they could respond in a more aggressive way with what is, by far and away, the most significantly stocked arsenal and military force in the Middle East. Again, we've talked about what these other countries are going to do. What is Jordan going to do in this situation? How are the Saudis going to respond? What is Russia going to do?

01:23:20

Well, the Russia-Ukraine thing, meanwhile, still goes on. We saw in our group chat that one of our friends posted that Russia basically said, Any more attacks on our land, and we reserve all rights, including a nuclear response. That is insane.

01:23:35

Well, just to give you a sense- It's insane.

01:23:38

How are we here?

01:23:40

The nuclear bombs that were set off during World War II... I just want to show you how crazy this is. Do you see that image on the left? All the way over on the left, that's a bunker buster. You guys remember those from Afghanistan and the damage that those bunker buster bombs caused? Hiroshima was a 15-kiloton nuclear bomb, and you can see the size of it there on the left. That's a zoom-in of the image on the right. The image on the right starts to show the biggest ever tested, which was Tsar Bomba by the Soviets. This was a 50-megaton bomb. It caused shockwaves that could be felt as seismic shockwaves going around the Earth three times from this one detonation. Today, there are a lot of 0.1 to 1 kiloton nuclear bombs that are considered tactical nuclear weapons; they fall between the bunker buster and the Hiroshima bomb. That's really where a lot of folks get concerned: if Israel or Russia or others get cornered in a way where there's no other tactical response, that is what then gets pulled out.

01:24:55

Now, if someone detonates a 0.1 or 1 kiloton nuclear bomb, which is going to look like a mega bunker buster, what is the other side, and what is the world, going to respond with? That's how on the brink we are. There are 12,000 nuclear weapons with an average payload of 100 kilotons around the world. The US has a large stockpile. Russia has the largest. Many of these are on hair-trigger alert systems. China has the third largest, and then Israel and India and so on. It is a very concerning situation, because if anyone that has a nuclear weapon does get pushed to the brink and they pull out a tactical nuke, does that mean that game is on? And that's why I'm so nervous about where this all leads. If we can't decelerate, it's very scary, because you can very quickly see this thing...
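To put the yields mentioned in this exchange on one scale, here is a quick back-of-envelope comparison using only the figures cited above (Hiroshima at 15 kilotons, Tsar Bomba at 50 megatons, tactical weapons at 0.1 to 1 kiloton, and an average stockpile warhead of roughly 100 kilotons):

```latex
\begin{align*}
\text{Hiroshima} &\approx 15\ \text{kt} \\
\text{Tsar Bomba} &\approx 50\ \text{Mt} = 50{,}000\ \text{kt}
  \quad\Rightarrow\quad \frac{50{,}000\ \text{kt}}{15\ \text{kt}} \approx 3{,}300 \times \text{Hiroshima} \\
\text{Tactical weapons} &\approx 0.1\ \text{to}\ 1\ \text{kt}
  \quad\Rightarrow\quad \tfrac{1}{150}\ \text{to}\ \tfrac{1}{15}\ \text{of Hiroshima} \\
\text{Average stockpile warhead} &\approx 100\ \text{kt}
  \quad\Rightarrow\quad \approx 7 \times \text{Hiroshima}
\end{align*}
```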

01:25:42

I am the most objectively scared I've ever been. And I think that people grossly underestimate how quickly this could just spin up out of control. Right now, not enough of us are spending the time to really understand why that's possible, and then also trying to figure out what the off-ramp is. And I think it's just incredibly important that people take the time to figure out that this is a non-zero probability. And this is probably, for many of us, the first time in our lifetime where you could- How would you say that?

01:26:15

Well, I think Friedberg is right. We're at the beginning stages of what I think will soon be referred to as the third Lebanon War. The first one was in 1982: Israel went into Lebanon and occupied it until 2000. Then it went back in 2006 and left after about a month, and now we're in the third war. It's hard to say exactly how much this will escalate. The IDF is exhausted after the war in Gaza. There is significant opposition within Israel and within the armed forces to a big ground invasion of Lebanon. So far, most of the fighting has been Israel using its air superiority and overwhelming firepower against Southern Lebanon. And I think that if Israel makes a ground invasion, they're giving Hezbollah the war that Hezbollah wants. I mean, Hezbollah would love for this to turn into a guerrilla war in Southern Lebanon. So I think there's still some question about whether Netanyahu will do that or not. At the same time, it's also possible that Hezbollah will attack Northern Israel. Nasrallah has threatened to invade the Galilee in response to what Israel is doing. So there are multiple ways this could escalate. And if Hezbollah and Israel are in a full-scale war with ground forces, it could be very easy for Iran to get pulled into it on Hezbollah's side.

01:27:50

And if that happens, I think it's just inevitable that the United States will be pulled into this war. So, yeah, look, I think we are drifting, and we have been drifting, into a regional war in the Middle East that, ideally, would not pull in the US. I think the US should try to avoid being pulled in, but very likely will be if it escalates. And then, meanwhile, in terms of the war in Ukraine, I've been warning for two and a half years about how dangerous this situation is. That's why we should have availed ourselves of every diplomatic opportunity to make peace. And we now know, because there's been such universal reporting, that in Istanbul, in the first month of the Ukraine war, there was an opportunity to make a deal with Russia where Ukraine would get all this territory back. It's just that Ukraine would have to agree not to be part of NATO, would have to agree to be neutral and not part of the Western military bloc that was so threatening to Russia. The Biden administration refused to make that deal. They sent in Boris Johnson to scuttle it.

01:28:51

They threw cold water on it. They blocked it. They told Zelensky, We'll give you all the weapons you need to fight Russia. Zelensky believed that. It has not worked out; Ukraine is getting destroyed. It's very hard to get honest reporting on this from the mainstream media, but the sources I've read suggest that the Ukrainians are losing about 30,000 troops per month, and that's just KIA. I don't even think that includes wounded. On a bad day, they're suffering 1,200 casualties. It's more than even during that failed counteroffensive last summer that Ukraine had. During that time, they were losing about 20,000 troops a month. So the level of carnage is escalating. Russia has more of everything: more weapons, more firepower, air superiority, and they are destroying Ukraine. And it's very clear, I think, that Ukraine, whether it's in the next month, the next two months, or the next six months, is eventually going to collapse. They're getting close to being combat incapable. In a way, that poses the biggest danger, because the closer Ukraine gets to collapse, the more the West is going to be tempted to intervene directly in order to save them.

01:30:04

And that is what Zelensky was here in the US doing over the past week: arguing for direct involvement by America in the Ukraine war to save him. How did he propose this? He said, We want to be directly admitted to NATO immediately. That was his request. And he called this the victory plan. So in other words, his plan for victory is to get America involved in the war and fighting it for him. But that is the only chance Ukraine has. And it is possible that the Biden-Harris administration will agree to do that, or at least agree to some significant escalation. So far, I think Biden, to his credit, has resisted another Zelensky demand, which is the ability to use America's long-range missiles, and British long-range missiles, the Storm Shadows, against Russian cities. That is what Zelensky is asking for. Zelensky wants a major escalation of the war, because that is the only thing that's going to save him, save his side, and maybe even his neck personally. We're one mistake away from the very dangerous situation that Chamath and Friedberg have described. If a President Biden, who is basically senile, or a President Harris agrees to one of these Zelensky requests, we could very easily find ourselves in a direct war with the Russians.

01:31:21

The waltz into World War III is what it should be called.

01:31:25

The reason why this could happen is because we don't have a fair media that has fairly reported anything about this war. I mean, Trump is on the campaign trail making, I think, very valid points about this war, that the Ukrainian cause is doomed and that we should be seeking a peace deal and a settlement before this can spiral into World War III. That is fundamentally correct. But the media portrays that as being pro-Russian and pro-Putin. If you say that you want peace, you are basically on the take from Putin and Russia. That is what the media has told the American public for three years.

01:31:57

The definition of liberalism has always been being completely against war of any kind and being completely in favor of free speech of all kinds. That's what being a liberal means. We've lost the script, and I think that people need to understand that this is the one issue where if we get it wrong, literally nothing else matters. We are sleepwalking and tiptoeing into a potential massive world war.

01:32:30

Jeffrey Sachs said it perfectly. You don't get a second chance in the nuclear age. All it takes is one big mistake.

01:32:35

You do not get a second chance. For me, I have become a single-issue voter. This is the only issue to me that matters. We can sort everything else out. We can figure it all out. We can find common ground and reason. Should taxes go up? Should taxes go down? Let's figure it out. Should regulations go up? Should regulations go down? We can figure it out. But we are facing a potential nuclear threat on three fronts. How have we allowed this to happen? Russia, Iran, China. You cannot underestimate that when you add these kinds of risks on top of each other, something can happen here. I don't think people really know. They're too far away from it. They're too many generations removed from it. Or it's something you heard maybe your grandparents talk about, and you just thought, Okay, whatever. I lived it. It's not good.

01:33:37

Chamath, you're right. During the Cuban Missile Crisis, all of America was huddled around their TV sets, worried about what would happen. There is no similar concern in this day and age about the escalatory wars that are happening. There's a little bit of concern, I think, about what's happening in the Middle East. There's virtually no concern about what's happening in Ukraine because people think it can't affect them. But it can. And one of the reasons it could affect them is because we do not have a fair debate about that issue in the US media. The media has simply caricatured any opposition to the war as being pro-Putin.

01:34:11

I would say that when every pundit and every person in a position to do something about it says, You have nothing to worry about, you probably have something to worry about. When everybody is trying to tell you, everybody, that this is not a risk, it's probably a bigger risk than we think.

01:34:39

Yeah, they're protesting too much. How can you say it's not a risk?

01:34:42

Methinks thou dost protest too much. All right.

01:34:45

All right.

01:34:46

Love you, boys. Bye-bye.

01:34:49

We'll let your winners ride.

01:34:52

Rain Man, David Sack.

01:34:56

And instead, we open-source it to the fans, and they've just gone crazy with it.

01:35:01

Love you, S-K.

01:35:02

I'm going all in.

01:35:06

I want your winners ride.

01:35:08

I want your winners ride. Besties are gone. That is my dog taking a notice.

01:35:14

I'm going to drive away.

01:35:15

Oh, man.

01:35:19

My haberdasher will meet me at place.

01:35:21

We should all just get a room and just have one big huge orgy because they're all just useless. It's like this sexual tension that they just need to release somehow.

01:35:28

What? You're bee.

01:35:30

What a bee. We need to get merches.

01:35:36

I'm doing all in.

01:35:43

I'm doing all in.

AI Transcription provided by HappyScribe
Episode description

(0:00) Bestie intros: In Memoriam (6:43) OpenAI's $150B valuation: bull and bear cases (24:46) Will AI hurt or help SaaS incumbents? (40:41) Implications from OpenAI's for-profit conversion (49:57) Meta's impressive new AR glasses: is this the killer product for the age of AI? (1:09:05) Blue collar boom: trades are becoming more popular with young people as entry-level tech jobs dry up (1:20:55) Risk of nuclear war increasing Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect Referenced in the show: https://www.reuters.com/technology/artificial-intelligence/openais-stunning-150-billionvaluation-hinges-upending-corporate-structure-2024-09-14/ https://www.bloomberg.com/news/articles/2024-09-25/openai-cto-mira-murati-says-she-will-leave-the-company https://x.com/chiefaioffice https://openai.com/our-structure https://x.com/unusual_whales/status/1658664383717978112 https://x.com/elonmusk/status/1839121268521492975 https://www.politico.com/news/2024/08/26/zuckerberg-meta-white-house-pressure-00176399 https://appleinsider.com/articles/12/12/28/early-apple-prototypes-by-frog-designs-hartmut-esslinger-featured-in-upcoming-book https://www.cnbc.com/2024/09/16/the-toolbelt-generation-why-teens-are-losing-faith-in-college.html https://www.wsj.com/tech/tech-jobs-artificial-intelligence-cce22393 https://layoffs.fyi https://educationdata.org/college-enrollment-statistics https://www.bloomberg.com/news/articles/2024-09-20/zelenskiy-to-push-us-for-nato-invite-weapons-guarantees