Jason, do you want to tell us about your new favorite podcast?
Oh, it's so good. My feed is now— because, you know, since cancel culture ended, Sacks, everybody uses the R word and the F word right now. My entire feed on Instagram is either gay or Down syndrome or bulldogs. It's one of those three. And then I stumbled upon the Miss Thing pod, Miss Thing. And they do a bit called Gay Name, Straight Name. Here's gay name or straight name for David. This is good news and bad news, Friedberg.
Here we go.
Gay name or straight name? David. David to me is straight.
Okay. But he has my perfect body.
It can be confusing because I'm kind of like, are you gay?
And it's like, no, I just want to be you, David.
Totally. Well, it's so like the Michelangelo's David, the male ideal. It's like incredible body, kind of— Sorry.
Yeah.
Oh, it's a little rough.
What are you watching there, J. Cal?
They basically nailed these two, but okay, keep going.
I don't think Chamath is on their shortlist, but I know Jason will come up at some point.
Gay name or straight name?
Maybe this is it.
Chamath.
On the count of 3.
Yeah.
3, 2, 1.
Gay.
I'm thinking like Italian sweater, like really kind of like a loud, vibrant sweater. He like wears it to like poker night with his boys.
And like not— I'm not talking like straight poker.
I'm talking like gay poker nights.
Like at the bar.
Yeah.
Always talking about wine.
Talks about wine.
Always sort of like swishing.
Yeah, exactly.
Yep.
Also it's so like the guy at the gym taking off his shirt, taking selfies.
Yeah.
And everyone else is kind of like, excuse me, Chamath, I'd like to use the mirror.
Yeah.
I'd like to see myself.
See you at the next gay poker night.
Totally. You bring the wine, we'll bring the sweater.
Yeah, there it is. Wow, they did do a Chamath.
That is fantastic.
That is fantastic. A shout out to my guys at the Miss Thing podcast.
Wow, that was awesome.
I think I'm gay. I never knew.
Let your winners ride.
Rain Man David Sacks.
And it said, we open sourced it to the fans and they've just gone crazy with it.
A lot of— What did you do, like Cameo? Did you pay them to do that?
They did it for me as a favor, so I didn't.
That's awesome. Well, thanks to those guys.
Thanks to the Miss Thing Pod.
I've seen those guys before in clips. I find them very funny.
It's so great. Shout out to my guys.
That was awesome.
All right, everybody, seriously, welcome back to the number one podcast in the world. It's the All-In Podcast with me again, David Friedberg, Chamath Palihapitiya, David Sacks. And of course, I'm Jason Calacanis. You can call me J Cal if you're here for the first time. Topic 1, OpenAI. They missed their targets for ChatGPT, Friedberg, both on users and revenue. Let's talk about it. The Wall Street Journal says in a breaking investigative report on Tuesday that OpenAI expected to hit 1 billion WAUs, weekly active users, before the end of 2025. They missed that, and they still haven't hit the milestone 4 months into 2026. Also, Chamath, they missed their 2025 revenue target for ChatGPT. The exact number wasn't specified, but as we've talked about here, they're at a $20-30 billion run rate. There's a little bit of accounting nuance that is yet to be worked out in the industry. Two reasons why this matters, Sacks. OpenAI has $600 billion in spending commitments for compute. Just to put that in perspective, that's about what they're trading for on secondary markets. In other words, the entire value of the OpenAI enterprise equals their spend commitments in the coming years.
CFO Sarah Friar, who is coming to Liquidity, is reportedly worried, hey, that revenue isn't growing fast enough to keep up with expenses, and OpenAI wants to IPO later this year. This has put Friar and Altman in conflict, or maybe there's some natural tension there. Friar doesn't think OpenAI is ready for public reporting standards, according to the Wall Street Journal. Altman obviously wants to move faster, so they released a joint statement: this is ridiculous, yada, yada, yada. Let's go to you, Sacks. What do you think's going on here? Are these major headwinds, or is this just managing expectations as the leader of the pack in the most important race of our lifetimes, the race toward superintelligence?
Well, I actually have a little bit of a contrarian take on this. I know that OpenAI had a really bad week. Like you said, they had that Wall Street Journal article which said that they missed their numbers: they missed their 1 billion user growth target, they missed their revenue numbers. That's called into question whether they can afford the data center commitments that they've made. And then in addition to that, they've also had the lawsuit with Elon happening this week. So in the press, it ended up being, I think, a pretty bad week for them. But I have a contrarian take, which is that over the past week or two, if you look at what's happening at the product level, it's been a pretty good couple of weeks for them. They released ChatGPT 5.5, and the reviews from people I talk to in Silicon Valley have been really strong. You talk to developers, coders, they're very happy with it. At the same time, Opus 4.7, which is the latest Anthropic release, appears to be a bust. People are complaining about it. In a lot of cases, they're rolling back to 4.6. They're saying that Opus 4.7 is rationing compute, it's reducing thinking time, it's not as good.
There were some bugs in Claude. So if you just compare GPT-5.5 to Opus 4.7, it does appear that OpenAI has had a better couple of weeks at a product level. And I think there's reason to believe that the product improvements will continue. GPT-5.5 is based on a new base model called Spud, which is the first base model upgrade they've done in, I don't know, over a year. And having a new base model will pave the way for future improvements as well. So I think OpenAI is feeling pretty optimistic about their product right now. And I think you're starting to see on X, some of the developer mojo is shifting. I'm seeing a lot of people saying that they are shifting their coding usage from Opus to GPT-5.5. So I think that Sam may end up being right, but for the wrong reason. And what I mean by that is that when he made these big compute commitments, it was based on those estimates of hitting the billion users on the consumer side and hitting those revenue targets. The consumer business ended up being weak, so they missed those targets. But in the meantime, coding has become the all-important sector of AI.
And because they made all these compute commitments and they built out these data centers, they have more compute than Anthropic right now. Anthropic is token constrained. It's reducing their ability to serve Mythos, for example. It's causing them to engage in compute gating with Opus 4.7. And I understand why Dario made that decision. I mean, it was a prudent business decision. I'm not criticizing him for it. But again, I think Sam may end up being right here for the wrong reason, which is he missed on consumer, but enterprise is going gangbusters and is giving him the ability now, I think, to catch up on—
There's your Polymarket, which is the all-important market right now, of course. And we talked about Groq and Cursor teaming up last week, Elon and the team over there. Polymarket is showing now a 32% chance that OpenAI goes public by the end of 2026. This is down from 60% in December. And Chamath, you gave a bit of a warning, hey, there's only so many dollars to go around. The SpaceX IPO is obviously getting out first. And now if OpenAI doesn't go out this year and Anthropic does, this sets up an interesting dynamic. What are your thoughts here, generally speaking, about the massive commitment that OpenAI has made? Are they gonna run off the cliff, or will it wind up being brilliant, even if it wasn't for the exact strategic reasons?
I think they're going to be fine. I think this is a multi-trillion-dollar company. I think Anthropic is a multi-trillion-dollar company. I think the thing that's happening right now is a complete misunderstanding of what's actually happening inside of the world of AI. And there is one very specific choke point that is constraining everything, which is access to the power that's necessary to drive these tokens. To the extent that OpenAI missed, I think that's an insight into not having enough compute capacity today. And that problem is only getting worse. You've already seen that with Anthropic as well, where they just found a way to economically induce Amazon to give them enough capacity so that you don't have to route through Bedrock to get to the Anthropic models. You're also seeing them do differentiated deals now with economic participation on top of what they already had from folks like Google to give them more capacity. What is my point? Everything in this market is power constrained. The reason that these folks may miss a number or a forecast has nothing to do with demand. It is entirely, 100%, due to the supply of the power necessary to generate the output tokens.
There is a really interesting thing that was just announced today that will make this problem even worse, which is what you're starting to see now is backlogs build up of not just the access to the power, but then the componentry that's actually necessary. Not just reciprocating engines and not just nat gas turbines, but now you're talking about transformers and all the actual electrical grid infrastructure. Why is this important? If you look at the actual amount of gigawatts that are under construction, we have a huge mismatch now. People have announced all these projects, Jason, but less than half of it is actually being built. Less than half. Most of it is stuck in red tape. Most of that is because there are these supply chain delays, so there's no credible strategy to turn any of this stuff on. Who will this hurt? It will hurt Anthropic and OpenAI the most. Who will this benefit? It will benefit the hyperscalers, specifically Oracle, Amazon, Meta, Microsoft, and Google. And now what you're going to see is a negotiation and a trade back and forth: how much equity do I have to give up? How much control do I have to give up to get access to the compute, versus how badly will I miss my growth forecasts if I don't?
And now what that means is, and we spoke about this last week, that's a huge lane for Groq to just run through and SpaceX to run through, because they have a ton of excess capacity. And so I think the Cursor deal was the appetizer, but if I were Elon now, I'd be running all over this market, because if the models catch up in quality, I think he could also do something really crazy with Anthropic or OpenAI right now. Maybe not OpenAI because of the—
We'll get into the lawsuit in a minute.
The baggage.
Yeah.
But man, he and Dario should do a deal tomorrow.
So you're framing, hey, the limited resource here is compute, the demand is off the charts.
No, the limiting resource is power.
Power, which then powers compute, which then provides tokens, which then services the massive—
Exactly.
—developer and Cowork and all these other projects that consumers and enterprises can't get enough of. Got it.
And Jason, the other factor that complicates that for Anthropic and OpenAI is all the stuff that's sort of sitting around thumb-twiddling: 40% of that is going to get canceled. They've done such a poor job of creating a good, positive halo around AI that 40% of all the announced projects will get canceled, because 40% of all projects in the last 4 years have been canceled.
Yeah, and there are some bad feelings about data centers, AI jobs, etc., and that's causing some headwinds. People are literally doing violent things in society and blaming data centers and AI for it. I don't want to give it too much airtime. Friedberg, what's your take on the chessboard we're looking at here, either through compute, energy, or through going public on a business level? You know, the strategic nature of capital, compute, and energy now playing a role in this massive amount of demand.
Still a ball-in-the-air kind of game. BCG had this theory, I think I talked about this once before, called the Rule of Three, where they've shown time and again that any stable, mature, competitive market evolves to a 4 to 2 to 1 ratio of market share for basically 90% of the market. So there's a market leader that has 2 times the market share of the second place, which has 2 times the market share of the third place. This is the case in pretty much every mature, competitive market. So you can kind of think about AI probably evolving into a consumer market and an enterprise market. OpenAI, even if they're not at a billion, they're still at 900 million weekly users, which is well ahead of whatever Claude is at. I think Claude is probably sub 100 million. Sacks, you may know. And then Gemini is probably closer to them at 700 million to 1 billion, somewhere in that range, probably pretty neck and neck with OpenAI. So, you know, the consumer market looks like it's trending towards a ChatGPT slash Google fight for first place and second place, and then probably Anthropic in third place, and maybe Elon emerges and takes off.
Enabled by his compute capacity. And then the enterprise market is a little bit of a different story, and that's its own market, which is kind of Anthropic or probably Google in the lead. Actually, if you look at all the Vertex use, Google claims that 75% of GCP customers are active users of Vertex. So there's probably a pretty sizable market share that Google's captured on the enterprise side as well. This is also probably why Google stock has absolutely ripped over the last couple of months: they're literally in first place or fighting for first place in enterprise and consumer. But I still think that there's a lot of opportunity, to Chamath's point about the compute and energy capacity constraints, in improving how we actually scale and deploy models in both the enterprise and the consumer setting. And it is such early days. And I just want to highlight this paper that came out from MIT, from these two scientists, and these guys published a paper on pruning techniques in neural networks. This paper showed that you could actually reduce the size of these networks by 90% and get the same accuracy out by pruning very large models down to smaller models, and then you can make a selection on which model to run for inference.
And by doing this, you can actually reduce inference costs by 10x. You can get 10x the output per energy unit that goes into the data center with no loss of accuracy. And so it's a really interesting, call it algorithmic, technique that can be applied to the existing large models to actually make them much lower energy use. So if you think about it, you're firing up a very large model to answer a very simple question. You can actually prune away that model. Now, this is probably going to be the case in AI applications as it is in traditional Google Search. There's a long tail of searches, but there are a few searches that account for a large percentage of search volume. It's like, what's the weather? What are the movie times? You know, what's the stock price? There's a certain set of things that make up the bulk of consumer usage, and there's probably a certain set of things that make up the bulk of coding output as well. And so if you can get that 80% of searches or chat interfaces or coding requests reduced down through pruning techniques to smaller models, and then you have a whole set of smaller models that can be called dynamically and you reduce inference cost by 90%, you can make much more use, call it 10 times the use, of data center and energy capacity than we can today.
So I would argue that we're still in the very early days of getting efficiency in terms of output and tokens, and we're just in the very kind of early stage of that, which also unlocks the opportunity for guys like Elon to reinvent how this is done and potentially compete pretty aggressively.
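The pruning idea discussed above can be sketched in a few lines. This is a minimal magnitude-pruning illustration only, under stated assumptions: the MIT paper's actual method isn't specified in the conversation, so this simply shows the general technique of zeroing out the smallest-magnitude weights (here at the 90% sparsity figure mentioned) while keeping the array's shape intact.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the cutoff threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: a random 64x64 weight matrix pruned to 90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"fraction of weights kept: {np.count_nonzero(pruned) / w.size:.2f}")  # roughly 0.10
```

In practice the pruned network would then be fine-tuned to recover accuracy; the point here is just that most of the parameter mass can be dropped while the structure that remains carries the signal.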
There are two ways to win. You could throw compute at it, or you can do SLMs, small language models, and VSLMs, verticalized small language models. So if you had a verticalized small language model for the weather, let's say— that doesn't exist, but you can use it as an example. They will have one for travel, as an example. When you hit Google for flight information, it's obviously going to route you to something lighter and faster that uses Google Flights, and Google Flights has been incorporated into Gemini. Gemini now is right behind at 700, 750 million users. And it's exactly what we discussed, I don't know, 18 months ago on this podcast, Friedberg: what if they put it at the top, and what would that do to their search revenue? Search revenue is surging, and they're also surging. So they figured out a way to balance those two competing forces, having search results that are AI-enabled and still getting people to click on links. They've done it brilliantly, apparently. And the stock is rewarding them.
I'll just add one statement to what you said, J Cal, which is, you're using what I would call a human heuristic on smaller models. And I think what we're evolving to is that humans don't intuitively know what this model is. It's not just a verticalized model, but there are going to be models that will be discovered through automated pruning techniques that will then be working in concert. So lots of small models that link together, and we don't define each model by some human heuristic, like, this is a search travel model.
Yeah, this is a maps model.
We don't, we don't know why these models work the way they do when they get broken down, but I do think that that's really where the evolution is happening. So effectively, a model becomes a macro model. It's got lots of smaller models underneath it that can be dynamically called, and that allows you to have 10x the inference for the same unit of energy.
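The "macro model" idea described above, a front door that dynamically dispatches to cheap small models for the high-frequency head and to a large model for the long tail, can be sketched roughly like this. All names here (`FREQUENT_INTENTS`, the model labels, the cost figures) are illustrative assumptions, not anything from an actual product.

```python
# Hypothetical dispatcher: high-frequency, simple intents go to a small
# (pruned/distilled) model; everything else falls through to the large model.
FREQUENT_INTENTS = {"weather", "stock_price", "movie_times", "flights"}

def route(query_intent: str) -> str:
    """Pick which model serves a request, based on its classified intent."""
    if query_intent in FREQUENT_INTENTS:
        return "small-model"  # assumed ~10x cheaper per token to serve
    return "large-model"      # full frontier model for the long tail

print(route("weather"))       # small-model
print(route("legal_advice"))  # large-model
```

A real system would classify the intent with a model rather than a lookup table, but the economics are the same: if the head of the distribution is served 10x more cheaply, the same data center and energy capacity goes much further.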
Let me just build on your point about Google, J Cal. I would say that if there is a single reason why OpenAI did not hit its user targets and its revenue targets, certainly around consumer, you'd have to say it's because Google managed to take meaningful share. They were basically nowhere a year or so ago. Sergey came out of retirement, helped focus the company, and like you said, they did a brilliant job improving Gemini and putting it at the top of search, incorporating it. Now, that being said, again, I don't think the news is all bad for OpenAI, because I do think that the 5.5 release was great. We're hearing really good things about Codex. I do think that Codex is taking share in coding tokens right now, and I just think we're in a really interesting place where these companies are constantly one-upping each other. I mean, 2 weeks ago it looked like Anthropic was going to be completely dominant, right? I mean, Anthropic was growing at 10x, OpenAI was growing at 3x, and it looked like—
And then the servers started going down. Did you see that this week, Sacks? People in my office were complaining, we can't get on Claude.
Listen, competition brings out the best in everyone. Anthropic forced OpenAI to compete. Google's forced OpenAI to compete in consumer. I just hope the market stays competitive for as long as possible. I do think that's what's best for consumers, our economy, and for our country overall. Let me just say, one other area where I think OpenAI had a good week is in this red-hot area of cyber. Obviously, Anthropic made a huge splash with Mythos. It hasn't been commercially released, they're compute constrained, but as a proof of concept or training model, it hit a new level of capabilities with cyber. But now OpenAI has released a new model called GPT-5.5 Cyber, which has just been through a bunch of tests, and they've shown, this was testing done by the AI Security Institute, that GPT-5.5 Cyber is the second model to complete one of their multi-step cyberattack simulations end to end. So it has the same level of capability as Mythos, and it does appear to be commercially ready. You know, they've got the compute to serve it. So I do think that that's a big accomplishment. I mean, look, we knew that other cyber models were coming. It wasn't just going to be Mythos.
In fact, within 6 months or so, all the frontier models are going to have Mythos-level cyber capability. But it's impressive that OpenAI got this GPT-5.5 cyber out so quickly. And I think 5.5 might be the first cyber model that cyber defenders actually get to use because again, I don't think they're as compute constrained as Anthropic is.
And this is an incredible opportunity, you know, for the Crowdstrikes and Palo Alto Networks of the world, both of which have been on the program, they come out and they start attacking this space, man, you could really see everything get tightened up. And this could be an incredible revenue stream for everybody who's got, whether it's Cursor, Claude, or OpenAI, or Gemini, this is an amazing opportunity to tighten up as much as it is to get attacked.
Can I make a point about that? Because there is so much fear right now, almost a level of panic, about Mythos. People are treating it like a doomsday weapon or something like that. It's not. It's simply that the frontier models have reached the point where they're capable of automating cyber activities, just like they're capable of automating coding. But that means that a model could power up a cyber attacker or cyber defender the same way they can power up a coder and allow them to discover a lot more vulnerabilities. So there is obviously a risk there. But I think it's important to understand that Mythos or GPT-5.5 doesn't create the vulnerabilities, it just discovers them. The bugs were already in the code. They were sitting there waiting for some hacker to discover. If we can now use AI to find these bugs in advance, these vulnerabilities, and patch them, then we actually harden our infrastructure and we harden our security. I also believe that this leap from, let's call it, pre-AI cyber to post-AI cyber is going to be, I think, a big one-time upgrade cycle, because again, you're going to find all these dormant bugs and vulnerabilities.
But I think that once we get past that upgrade cycle, you're going to reach a new equilibrium between AI-powered cyber offense and AI-powered cyber defense. It's going to become a lot more normal. It's not going to feel like this huge disruption. Which is to say, people are treating this like some existential threat. I don't think it is, as long as everyone does what they're supposed to do, which is use the new capabilities to harden their code bases and infrastructure and security before the hackers get a hold of these capabilities.
Yeah, and Chamath, if you were to look at this, to build on Sacks's point, there are about 5 million or so security experts in the world. And we talked about token costs: at 40 hours of tokens a week, just pounding away, you could create another 5 million of them for $100 per chief security officer, per security expert. So it's the volume of security expert agents, Sacks, to your point. Yeah, you could have 50 million of them, 100 million of them. They're not finding something new. They never sleep. They're relentless in their pursuit of these problems. It's a really great point.
To kind of just refine that. So, yeah, there's probably 5 million people in the cyber industry, but there's probably only a few thousand really elite hackers.
Sure.
Those hackers didn't have the time to go after the entire surface area of every possible attack vector out there. And so if you train a model to do what they do, obviously, like you said, it can operate with a scale and speed that a human hacker can't. So obviously, you know, what you need to do is get these tools in the hands of the white hats, let them do the cyberattacks themselves to then find the vulnerabilities and patch them before the black hats get a hold of these capabilities. But just one last point on this and I'll stop: it's really important to understand that the Chinese models are going to have these capabilities within approximately 6 months.
Oh, they have them now in DeepSeek 4 for sure. They've got some level.
Well, no, DeepSeek 4, I mean, DeepSeek 4 is impressive in a lot of ways, but its capability is not at the frontier. It's maybe 80, 85% of, let's call it, the American frontier.
Chamath, you wanted to get in on this. Let's get Chamath in.
Two things. The reason that this is even possible is because humans are error-prone, and when humans code, they create holes. And so humans exploiting humans is where we've been for a long time. Now we have computers exploiting humans because the computers go and seek out all these bugs that humans wrote. In the next phase, it'll be machines versus machines. And so I think the nature of cyber is going to completely change probably in the next 5 or 6 years. There'll be so much reason to rewrite all of the software that runs the world. In one part, because you're going to be asked to show more operating leverage and revenue growth, but in another part, because everything else that was handmade in the past is just fundamentally insecure. Either way, all roads will lead to all the operational software that runs the world will get rewritten. More and more of it will be written by machines. More and more of it will be impregnable as a result. But then the cyber threat actually will only increase because then you're going to try to figure out how to use a machine to inject something into another machine so that some agentic loop injects some malware or injects a bad token.
And I think that's a very complicated thing. What I will tell you is, I'm not even sure if I'm allowed to say this, but a very good, probably the best cybersecurity company in the world run by one of the very best CEOs in the world, who may or may not be speaking at Liquidity, would tell you that they have penetrated and can essentially manipulate every model. Let me just, let me just say it roughly that way.
Okay, perfect. Yeah. And at the Breakthrough Prize, which 3 of the 4 of us were at, I talked to George Kurtz. The other person you were kind of describing was sitting beside Nikesh.
Yeah, I'm talking about Nikesh.
Well, Nikesh and George are the 2 guys leading this: Palo Alto Networks, CrowdStrike. They understand. What George told me was, there is just a line out the door of people who want this product or service. If you look at it, Friedberg, like the murder rate: we're sitting here with the lowest murder rate in the history of humanity. It has gone down massively in our lifetimes, and even more massively over the arc of history. I think that's what's going to happen with cyber. There are only so many attack vectors, and the remaining attack vectors are just going to be human factors, right, Friedberg? That's always been the case. And as we make the software more resilient, then the weak link is the secretary who puts her Post-it note with the password there, or the accountant who uses their dog's name plus 123 for their password. That's the historical one. Okay. Anything you want to add, Friedberg, as we wrap there?
Why is your bed so messy, by the way? Why can't you just ask the room service to come in and clean your bed?
Listen, I can tell you what happened. Listen, I'm here in Atlanta.
And also, why don't you have a suite like where there's two rooms? Like, is it just one room? This hotel only has one room.
Yes, it is.
You know, either you're cheap or poor. Which one is it?
I'm cheap. Here's what I'll tell you.
Here's the situation.
I'm in Atlanta for the Knicks game tonight. Here's what I do. I just want to explain to you value for value. Some people spend their money on private jets and they spend $30,000 flying to Atlanta. I spent $30,000 on courtside seats.
I don't want the suite.
I want to put it into the seats.
Why don't you do both? You can do both.
I guess I could do both too.
I don't—
I'm, I'm in the process of becoming—
I don't understand—
of embracing my richness.
Okay, if you've already convinced yourself that you should spend $30,000 for courtside tickets, which I think is outrageous, but okay, you've already convinced yourself—
like $10K each, but yeah, yeah—
a hotel room that has 2 rooms, okay, I'm taking— probably costs 15% more than what you're paying.
It's 2x, but yes, you're right. I'll get the hotel room too.
Okay, or 20% more, but you just—
What's a room like? Like $200 a night? So you pay $400 a night, you get another room.
I mean, it's Atlanta. I'm in the best hotel. The most expensive hotel is $500 a night in Atlanta. It's no big deal. But everything's sold out because all the Knicks people are coming here.
So you're telling me double that would've been $1,000 and you couldn't spend $1,000?
Everything is sold out because the Knicks are here.
So we have to look at your dirty bed.
It's gross.
The bed's not that dirty. Come on. Just deal with it. Okay. Take it out in post.
I have a private jet story about flying to Atlanta.
Oh, tell us something.
There we go.
You reminded me.
Okay.
So yeah, there was some event there. So I flew my team there. You know, it was a few people on my plane and it's kind of a long flight. It's like 4 hours or something.
Well, from the Bay.
Yeah.
Yeah. So I went in the back to sleep. Well, first, you know, we started the flight, and I had a few bottles of Pappy Van Winkle on the plane. And so we started off with a drink, and then I went in the back and fell asleep, and I woke up basically when we landed. So I come out, and all 3 bottles of the Pappy Van Winkle are basically cashed.
Oops.
Those are, like, $2,000 a bottle.
No, no, they were more. These were antique bottles. One of them was, like—
Yeah, I have one of those from your plane. I have one of those from the old Falcon.
Yeah. Yeah. They were these vintage bottles of Pappy.
They were $4,000. I remember. Yeah.
Anyway, you can't even find this shit anymore. These guys, they asked me, like, when we land, it's like, hey, Sacks, how much did it cost for you to fly us to this event? And I said, well, about $8,000 in jet fuel and about $12,000 of Pappy Van Winkle.
You gotta, you gotta fuel the, the vibes as well as the plane. It's, uh, is Atlanta nice?
I've never really—
Do those people still work for you, or are they, are they, uh— Yeah, they stayed in Atlanta. Um, is Atlanta nice? Listen, last year I went to the Detroit games, and that city was on the rebound. Atlanta has an incredible opportunity to rebound.
I'll say it that way.
There's a great opportunity for them to upgrade the city. I went to a Waffle House at midnight last night. There were no shootings. Okay, let's keep moving.
By the way, do you get loyalty points at the Best Western Atlanta or no?
I get double points because I use my Best Western, uh, Visa card. Yeah, it's everywhere you want it to be. All right, use the promo code J-Cal and get 1,000 extra points. In other OpenAI news, Musk versus Altman, the trial of the century, maybe the decade, has started. Elon is of course accusing OpenAI of breach of charitable trust and unjust enrichment. He's accusing OpenAI of essentially flipping a nonprofit into a for-profit. He's seeking $150 billion in damages, that they revert back to a nonprofit, and that Altman and Brockman be removed. And there were some fireworks between Elon and the OpenAI lawyers. Elon kind of leveled up the discussion. He said, quote, if we make it okay to loot a charity, the entire foundation of charitable giving in America will be destroyed. That's my concern. Obviously, there's a ton of interesting nuances here, specifically Greg Brockman keeping a diary where he was journal-maxing his plans like a Bond villain. And the excerpts from his diary include: Conclusion: We truly want the B Corp. The true answer is that we want Elon out. If 3 months later we're doing B Corp, then it was a lie. Can't see us turning this into a for-profit without a nasty fight.
I'm just thinking about The Office, and we're in The Office, and this story will correctly be that we weren't honest with him. In the end, it's still about wanting a for-profit, just without him. Yada, yada, yada. Freeberg, your thoughts on this case? Is Elon gonna win?
I just don't know why Greg Brockman's got a freaking diary where he's like literally documenting. I mean, I love the guy, but what the fuck is he thinking? Like, you're just sitting here at home and like, let me write about the crime I'm committing, or let me write it like, and let me record it. And by the way, let me never delete it. I don't understand this.
It's not just journal maxing, it's discovery maxing. It's smoking-gun maxing.
I don't get it. I don't get it, man.
I mean, do you guys remember from The Wire in that scene where the guy's like, is you taking notes on a criminal fucking conspiracy? This has got everybody in the room. Can we play that clip?
It's like, yeah, find that clip.
What are you doing, Greg?
Nigga, is you taking notes on a criminal fucking conspiracy? What the fuck is you thinking, man?
If you're going to commit a crime, you do not write down the date and time of the crime in your journal.
Oh, look, we don't know it's a crime. Let's not—
okay, sure, don't care.
Crime, but well, yes, you keep in shenanigans.
Chamath, do you keep a diary?
What do you think?
J-Cal, do you keep a diary?
Do I, do I ruminate? No, I'll tell you right now, rumination is the path to unhappiness. Nobody gives a shit about your feelings. Writing your feelings down? 100% it's only going to make you miserable. Same with talking to your spouse about your feelings.
Just go to a beautiful dinner, sit courtside at the Knicks, and do what I've been doing for 30 fucking years.
Retard maxing.
Retard maxing. And the register goes up. All you have to do is work, start new projects. 9 out of 10 fail. Place 10 bets, 1 wins, and you're golden. Go sit courtside at the Knicks game.
Keep going. Life's too short.
And just keep moving forward. Don't write anything down, period, full stop.
It's good advice. Yeah, I just— the biggest surprise to me was this guy's got a diary. I just— I don't know anyone that has a diary. I've never heard of this. So anyway, that was shocking. Besides that, I have no view on what's going to happen with the case or what the judge will do.
I have no comment on the case either. I think it's weird that Polymarket hasn't budged even as all of this discovery has been published. It's effectively at 42 or 43% that Elon wins. So one of the friends in our group chat said what may just happen is that Elon technically wins and he's just credited back the $40 million. And so maybe that's what this market is front-running.
Mm-hmm.
But on a totally separate note: Jason, I know you say it as a joke, but this idea of just keep moving forward, don't ruminate, I think is very good general life advice for everybody to follow.
The modern-day therapy industrial complex and the medication industrial complex, I believe, pivot around rumination.
Well, it does pivot around rumination.
Yes.
That is the gateway drug to all these things.
Yep. Talk about your problems. Hey, you know, when these people go to therapy, you ever hear these people? Howard Stern's like, I've been in therapy with the same person 2 or 3 days a week for 40 years. I'm like, okay, what's the incentive for the therapist to stop charging you $1,200 an hour? There is none. Then they lose a revenue stream, they lose a customer. It's all a giant fucking fraud. Sacks, in terms of this case—
I wouldn't go that far. I do think that there's a lot of value in kind of untying some of these Gordian knots that people have because of how they grew up. But there's a difference between that and being specific and just randomly ruminating, because I don't think there's a lot of productive value.
You've got an acute issue, like a trauma in your life? Yeah, sure. Unpack it, figure it out. I'm just talking about this never-ending self-improvement, you know, ruminating thing. Uh, but getting back on topic here, Sacks, what's the— and we're talking about a jury, I believe, in Oakland.
No, but it's a bench trial. This is important. It's a bench trial where the jury's role is advisory, but ultimately that judge, she will make the final call and she'll decide the damages.
And so is this a case, Sacks, of like: we've got a Bay Area judge, and we've got Elon, who's considered, you know, a bit right-wing, and people don't all agree in that area in terms of his politics. And then you have this Sam Altman New Yorker story and people finding out that so many different people feel they got screwed by him. You put these two things together, it's impossible to handicap where this turns out, Sacks. Your thoughts?
Well, yeah, I don't think this is about politics. I mean, I guess you could argue that what Elon is seeking, which is to protect the charity, is, if anything, a left-coded sort of principle.
Sure.
Although I don't really think it's left versus right. Look, I don't want to take sides on this trial. I'm just watching like everyone else. The last time I weighed in on some Elon litigation, I got deposed for 6 hours. Remember that? Because they just assume that somehow I know something, right? I've never talked to Elon about the case. I don't know anything about it. Yeah, I'm going to see what happens like everyone else. Now, one thing I will say, having just read some of the coverage, is that apparently the company at some point did offer Elon shares in the company, but he thought that there was something kind of icky about it. Do you remember this? Yes, because at one point I said on our show when this dispute started happening, but before it became a court case, I said, look, if OpenAI at a certain point decided they had the wrong structure, they should have just gone and done a make-right with Elon, and he should have been a shareholder on the cap table. What I didn't know is that apparently they did try to do something like that, but Elon turned it down because he did want the entity to remain a charitable entity.
Yes.
In other words, see what I mean?
He had a principled view of it, according to the reports, and was like, "No, we're trying to save humanity, and then you're giving the keys to the kingdom to Microsoft." That's all come out. And I also have not talked to Elon about any of this, but my guess is, like most of these things, there'll be some sort of settlement or something here, but maybe he takes it to the mat. Who knows? Judge Rogers, who's overseeing this, is a 61-year-old Obama appointee. Politics has played a role, Sacks: they have had to tell the jury, however you feel about these individuals politically, whatever, please put that aside. But of note is that she oversaw the Epic Games versus Apple trial over App Store exclusivity and ruled in favor of Apple with some caveats, um, that they don't have a monopoly, etc., etc. So this is going to be a really interesting one. I think the worst-case scenario for OpenAI is they have to unravel this somehow, and that would delay the IPO and cause chaos for shareholders. And I guess the best case is some sort of settlement. And if Elon put the first $40 or $50 million in, he's due 10, 20, 30% of the company after dilution.
All right, let's keep moving through the docket. Lots more to discuss. And, uh, good luck to everybody in their lawsuit, and those of you betting on the market. All In Summit is selling out fast, our 5th edition, Los Angeles, September 13th to 15th. Go to allin.com/events. And, uh, speakers are going to be top tier. Apparently Freeberg is having this as his major creative outlet. I heard some backchannel, Chamath, today that he's going to be doing a Broadway musical, uh, illusionists.
I got a tap dancing situation.
He's literally going full-on entertainer. This is going to be vaudeville, Sacks, wrapped up. He's just going to take it to a whole new level.
Musical numbers like Nathan Lane.
I think if you're coding that it's going to be his big gay summit, yes, it could be a big gay summit.
Might be our last year in LA, guys.
Why?
Might be. Is—
might be.
Oh, everybody wants to go to Vegas apparently. Roll those bones, baby! Can you imagine leaving the summit for lunch and going and playing craps, Chamath? We get a fresh shooter in there.
Yes, I can, Jason.
Yes, I can.
Yes, I can imagine.
I got some bricks right here. Let's go.
Yum, yum. That way Sacks can come. Yeah.
Sacks is like, I'm never setting foot in California, but I will go to the Isaac Black Chapter.
You know, we're doing a couple live events. Are you coming to them?
Liquidity or something different?
Liquidity. And then there's the All In Summit happens in September.
Yeah, I'm going to do those too.
All right. Big tech smashed their earnings on Thursday. Google, Microsoft, Amazon, and Meta all reported. I don't know why they do this on the same night, folks, but they do. And performance was spectacular. It was great. However, the CapEx announcements were really the story here. Let me just queue this up and show the chart: $725 billion in CapEx guidance for 2026, from just 4 companies: Amazon, Microsoft, Google, and Meta. Amazon leading the pack with $200 billion, $190 billion each for Microsoft and Google, $145 billion for Meta. You add Groq, you add OpenAI and some other players to these plans, and we haven't heard from the new Apple CEO yet, but he's going to be taking over and he's going to have some plans here, I'm sure. We are going to see a trillion dollars in buildout over the next year. I don't know if this is even possible. But this is all being driven by AI and cloud computing. Google Cloud, which includes the Google Suite, grew 63% year on year. Let that number sink in: 63% on $20 billion in revenue. That's in a quarter. Microsoft Cloud includes Azure, Windows Server, SQL Server.
They bundled some things together there to get the number to go up. That grew 30% on $34.7 billion in revenue. Amazon Web Services, the original cloud, grew 28% on $37.6 billion in revenue. That's a bit of a pure play; it just counts Amazon's web services. Obviously these are all moving to neoclouds; these are all serving AI jobs and tokens. Now they have a massive customer base, and the customers, from the smallest startups all the way to the biggest frontier models, cannot get enough compute, and it is going to the bottom line. But this is shrinking cash flow massively, Chamath. These were free cash flow machines, the largest money-printing machines in the history of humanity, but they are giving up free cash flow, stock buybacks, and dividends, the focus on those three, to invest in infrastructure. Amazon's free cash flow is down 97%; Google, Microsoft, and Meta down 12%, 12%, and 8% respectively. Your thoughts on the end of the free cash flow deluge and the massive, massive investment we're seeing in CapEx, Chamath?
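A quick back-of-the-envelope check of the figures quoted above (the per-company numbers are as stated on the show; this sketch just sums them):

```python
# 2026 CapEx guidance as quoted in the discussion, in billions of dollars.
capex_2026 = {
    "Amazon": 200,
    "Microsoft": 190,
    "Google": 190,
    "Meta": 145,
}

total = sum(capex_2026.values())
print(f"Total hyperscaler CapEx guidance: ${total}B")  # $725B, matching the figure cited
```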
I think we're seeing a very important structural shift in the capital markets. I think for the last 20 or 30 years, well, 20 years, it's been that the Mag Seven just kind of ran away with it. These big companies got bigger and bigger and absorbed all of these investment dollars. And the biggest reason was that they had these very asset-light business models, right? You just build some more software and it just has all this leverage, and it all just kind of worked, except maybe for Amazon, because they needed physical infrastructure for warehouses and delivery and whatnot. But by and large, it was a very asset-light investment cycle. Now all of a sudden the pendulum is swinging violently in the other direction. There's something that I think people misunderstand, which is, as it moves back to these asset-heavy infrastructure investments, the hyperscalers are signing checks that, I mean, I suspect their body can cash, but there's a world in which they can't. I'll give you an example. When Microsoft convinced the owners of Three Mile Island to turn their nuclear site back on. Yeah. Do you know what their forward purchase agreement was? It was for more than 2x the prevailing spot rate for energy, more than 2x.
The problem is that's not for an enormous percentage of their overall energy needs. So if you play that out and you think these 5 or 6 companies all of a sudden are not just spending, Jason, $700 billion a year of CapEx, which they are, but then from an operating cash flow, they're gonna be spending 2x the prevailing spot rate because they just want guaranteed demand into the future. Where's all this cash gonna go? It's not gonna go to the shareholder and it's not gonna stay on the balance sheet. These companies will now get levered. They're gonna get highly sophisticated around the financial engineering. They'll have more debt. They'll have all kinds of different vehicles and term loans and revolvers and all of this stuff. And so they're going to look like this big bulky industrial business in 5 years. And I'm not sure that there's a good valuation case to be made at that point. And so I think it may be simpler, and this is what I tweeted, to just follow the dollars, like a trillion dollars a year going out of the hyperscalers. Where is it going? Just follow those dollars and buy those companies because those companies are already underpriced.
This is, uh, obviously reminiscent of something we all experienced. Uh, Nick, can you pull up the Cisco chart I just sent you and put it at max? Uh, we had a massive build-out of the infrastructure of the internet in the late 1990s and into 2000, and what that caused was a lot of aggressive companies doing massive amounts of spending, and a lot of retail investors embracing these stocks, like we're seeing with people trying to get into these private companies. And Sacks, look at the 2000 peak of Cisco. This is the most extraordinary chart ever. It took them 25 years to get back to that peak, and, uh, they had two lost decades. And we had a massive amount of fiber that wound up getting bought. We talked about that a couple years ago on the program. But there's something for you to build off of here when you look at this massive infrastructure. You think it's going to be Cisco Systems all over again, WorldCom, et cetera?
No, I really don't. The issue we had in 2000 was dark fiber. You had all this infrastructure being built out and it wasn't being used. There's no dark GPUs today, as Brad Gerstner likes to say. So what's driving the CapEx now is the voracious demand for compute, for tokens, and the demand is now pulling forward this additional investment in infrastructure. So I think what's happened here is that the bull thesis for AI just got validated in a single afternoon. I mean, again, you got Microsoft Azure, Google Cloud, Amazon AWS, Meta, they're all basically exceeding expectations, exceeding guidance in terms of where their cloud revenue would be and therefore how much they're going to reinvest in CapEx. This year, I think we were supposed to have $660 billion of hyperscaler CapEx, up from $350 billion last year. I think the new estimate is that it's going to be over $700 billion. So this is, again, more than 2% of GDP. This is a huge tailwind to GDP. There was another article saying that, I think in the last quarter, AI was 75% of GDP growth. And by the way, this is just the CapEx part. This is the physical infrastructure.
This is not the economic impact of the tokens that are generated inside the token factory. This is the building of the factories. How do those tokens get used? Like we're seeing, they're being used not just to do research or to answer questions, but to create code. And so we're seeing this explosion of productivity in software development, and we're seeing an explosion of bespoke software being created. And that's going to accelerate every part of the economy. Every business that now wants code will be able to get code for the first time. Before, they couldn't even hire the engineers they needed to generate it. Now they will be able to. So that is a huge unlock of productivity across the economy. Then you're getting into these new use cases, like the coworker use cases and agents, right? The workflow automations that are happening. It's still early. I don't believe that this is going to replace humans. This past week we had that crazy case of an agent deleting a production database in 9 seconds.
So great.
Because of a bug. Look, what that said to me is that it's not that agents aren't valuable, they are valuable, but they have to be supervised. You know, this idea that you're just gonna be able to like automate all the jobs away, it is a massive amount of hand waving over the real technical problems and issues. The agents have to be supervised. Someone has to be accountable. It's not gonna be the CEO. The CEO doesn't wanna be accountable for thousands of agents.
You need people. Yeah. Despite what Jack Hadblock said.
Yeah.
"You're gonna have 6,000 direct reports" is a great, like, goal, but it's not realistic. Yeah.
You need IT people who are savvy, who can supervise this and make sure it's working.
Yeah.
They have to be accountable to the CEO. Someone has to drive the productivity. It's like Balaji always said: AI is not end-to-end, it's middle-to-middle. You have to have someone do the prompting, and you have to have someone do the validating, and I would add the supervision and accountability. So anyway, the larger point, though, is I'm speaking to the fact that I don't think there's going to be this huge job loss associated with this productivity boom that we're going to get. And in fact, I think what's actually happening now is that AI is becoming synonymous with the American economy. I mean, the fact that it's generating 75% of GDP growth, you have this CapEx explosion, this energy explosion that feeds it. And again, this is just the beginning of the applications being unleashed by these new token factories. I think it's all a very, very positive thing. And all these doomers who are trying to throw a wet blanket on it are constantly scaring the daylights out of people. I mean, what do they want the American economy to do, just stop? They just don't want any progress. I mean, like, again, you know, when you talk about stopping AI or halting AI progress, what you're really doing is stopping the American economy.
Now you're basically saying you don't want economic growth. AI is now synonymous with the growth of the American economy. And if there's no economic growth, there's not gonna be money to pay for all the social programs. There's not gonna be money to pay down the national debt. There's not gonna be money to basically build up our national defense. All these things we wanna spend money on, we have to have a vibrant economy, and that is now synonymous with AI. So I know that AI may not be popular. I see those polls, but having a strong economy is popular. And I believe that those things are now synonymous.
It's almost like there was some architect or czar who set up the chessboard in the first year of this to make sure that it was ultra competitive.
Uh, well, President Trump set the table on this.
Absolutely. With some good advice, I think, maybe. Freeberg, your thoughts?
Always good to have good advisors.
Always good to have good advisors. Absolutely. Absolutely.
No, but look, I've said it before, the president just wants America to win.
Literally, there are people who, if we were looking at this, you know, I don't know, 100 years ago, it'd be like people were like, yeah, you know what, we shouldn't build the highway system, or we've half built the highway system, let's stop, let's stop building the highways.
No, the highway system was funded by the federal government. There was no competition. It was the most expensive on a, on an inflation-adjusted basis. I think it was the most expensive project in U.S. history.
Yeah, and the railroads before that. Like, you can't stop these things. They have to keep going. Interesting point. You know, there is so much demand for the resource of tokens of intelligence, Friedberg, and it's quite different than the fiber situation, as, uh, Sacks correctly points out, where we built all this but we didn't actually have an application. Here the application is pretty, um, pretty well known, and you've got a large number of people in businesses who are trying to vibe code their way to success, trying to push this stuff. And we had an interesting story, referenced earlier in the show, where Claude ate somebody's homework. This is the nightmare of all nightmares. Somebody was vibe coding; it was the founder of PocketOS, apparently. They make software for rental car companies. He was using Opus 4.6 through Cursor's AI platform, their coding platform, on like the most expensive tier. Uh, and he said he configured it with enough safety rules, but the agent was working on a routine task, saw some sort of credentialing mismatch, and decided to fix the mismatch by deleting a Railway volume without user confirmation. And it pushed the code from a repo to a live app and deleted everything, including the backups.
Literally a scene from HBO's Silicon Valley, the Son of Anton clip. Hilarious. "You gave your AI permission to overwrite code in the internal file system?"
Were you gonna tell me about this?
No, I thought that was the company policy these days. Okay, well, your AI just failed epically. That's unclear. It's possible that Son of Anton decided that the most efficient way to get rid of all the bugs was to get rid of all the software, which is technically and statistically correct. But artificial neural nets are sort of a black box, so we'll never know for sure. How did they get that so right, Sacks, 5 or 6 years ago? Artificial neural networks are a black box, so I guess we'll never know. But technically it was correct. Friedberg, when you blow up the Ohalo system with your vibe coding, which you were absolutely showing off in front of Jensen a couple of weeks ago about how much code you're pushing, who are you going to blame? Are you going to take responsibility yourself? Are you going to blame Claude or Cursor? Who are you going to blame when you blow up the entire stack over at Ohalo?
All right, yeah, I blame Dario.
You blame Dario? Okay, that's what I thought. That's the correct answer. Correct answer. Blame Dario. He's the one who says it's a doomsday machine. Uh, come on the pod anytime, Dario. 17th invite.
I would have invited the guy like 17 times. He is totally going to be—
he wants nothing to do with this podcast.
Actually, let me speak to that. So I think that there's maybe a misperception that this error occurred because of quote-unquote AI scheming, like in that clip: that the AI decided the best way to get rid of bugs is to basically eliminate the code base. This is kind of like the AI's-going-to-turn-the-world-into-paperclips type thing, where somehow it schemes against you. That's not really what happened here. This is a case of just old-fashioned bugs occurring at an edge case. You know, you've got the fact that this API was not designed for permissioned usage. You've got the fact that a credential was left kind of lying around where it probably should not have been. There was kind of a perfect storm that caused the agent to do something without quite understanding what it was doing. I think that if there's a systemic problem here, rather than just kind of a random edge case, it's that AI still doesn't know what it doesn't know. A human would stop before deleting a production database and just say, "Oh, I'm about to do something really serious, really destructive.
Am I sure I want to do this?" And a human would've stopped and said, "Oh, wait a second. I need to be more confident in what I'm doing before I take that action." And AI still has this issue where, again, it can be kind of overconfident. This is where hallucinations come from: it doesn't know when it should have low confidence in its output. But this is why it has to be supervised. The longer the time horizon for a task, the more likely it is to go off the rails.
It'll drift.
And it drifts. Exactly. And this is why I think people are starting to realize that this idea of eliminating all software developers was the peak of inflated expectations.
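The supervision point made here, that an agent should not be able to take a destructive action without a human checkpoint, can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual API; the tool names, the `DESTRUCTIVE` allowlist, and the `confirm` callback are all made up for the example:

```python
# Minimal sketch of human-in-the-loop gating for agent tool calls.
# Tools listed in DESTRUCTIVE require explicit human approval before
# they run; everything else executes directly. All names are illustrative.

DESTRUCTIVE = {"delete_volume", "drop_table", "force_push"}

def run_tool(name, action, confirm):
    """Execute an agent-requested tool, pausing for approval when risky."""
    if name in DESTRUCTIVE and not confirm(name):
        return f"blocked: {name} requires human approval"
    return action()

# The agent asks to delete a storage volume; the human declines.
result = run_tool("delete_volume", lambda: "volume deleted",
                  confirm=lambda name: False)
print(result)  # blocked: delete_volume requires human approval
```

The design choice is simply that the default for anything irreversible is "ask first," which is the opposite of what happened in the Railway-volume incident described above.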
Yes.
Right? There was actually a really good tweet on this by Aaron Levie, who's got the right take on this. Aaron retweeted Matthew Yglesias, who sort of sardonically tweeted that, "5 months in, I think I've decided I don't want to vibe code. I want professionally managed software companies to use AI coding assistants to make more, better, cheaper software products that they sell to me for money." Just lower your prices, don't make me vibe code, is the translation. Yeah. I mean, a rare win for Matt Yglesias there. Anyway, Aaron Levie then says, "Agentic coding is a huge boon for software developers that want to get more done, and it's fantastic for anyone curious to learn how to start coding. What it's less great for is casually building complex software that you have to maintain on an ongoing basis and take all the risks for: upgrades, maintenance, keeping up to date with the latest security issues, the bugs, cyber." Those are taxes on most knowledge workers who aren't familiar with the system.
It's not a tax, it's a huge risk.
Yes, it's a risk that has to be managed.
It's a risk.
People will get fired because there will be some public companies where some goofball tries to vibe code their way out of something and they're going to torch the enterprise value. It's going to be glorious to watch because we're all going to laugh and realize that was stupid and should never have happened in the first place.
Yeah. There is a chance that this improves to the point that it passes the trough of disillusionment and becomes super productive, and you'll be able to get an agent to do reasonable things without deleting your dataset. But we have a way to go. This is the tech adoption chart. Basically, you've got a technology trigger, you have this peak of inflated expectations, you go into the trough of disillusionment, and then the slope of enlightenment, and eventually it becomes de rigueur, and it's an opportunity. Hey, uh, Friedberg, you have become retatrutide curious. You have.
And also, tell me, tell me about retatrutide, Friedberg, because I want it. I want to get on it. I want to use it. And I need you to tell Nat that it's okay for me to take it.
I have a friend who has some advice as well.
Friedberg, the coverage is coming out of this Phase 3 clinical trial data release that Lilly put out last month. So everyone's going crazy over the data, which continues to show pretty amazing results. Unlike tirzepatide, which is kind of Lilly's main product today and is a dual agonist, binding two different receptors, the GLP-1 receptor and the GIP receptor, this other one, retatrutide, also binds to glucagon, which is a third receptor. And that glucagon receptor binding causes the cells to increase their metabolism, which actually accelerates fat energy consumption over what would typically be muscle energy consumption. It's more likely to burn up fat early on, which causes quicker fat loss but also reduces muscle loss. And some of the other data that's now coming out shows non-HDL cholesterol down 27%, triglycerides down 41%.
Liver fat down 80%.
The 80% reduction of liver fat. A1C drops from 7.9% to 6% in 40 weeks, which is amazing, by the way. If you're diabetic and your A1C drops that much in a matter of months, it's literally a life-saving product. The average user in this Phase 3 trial saw their weight decline from 214 pounds; they lost 37 pounds, compared to 6 pounds on placebo, in 40 weeks. And, you know, modest side effects: 20% of people felt more nauseous than the people that were on the placebo. There are a lot of other separate studies being done now that are showing significant reductions in inflammatory signaling molecules. So systemic signaling of, like, hey, cells are in distress, triggers this kind of inflammatory process that can do a lot of other damage to your body and can accelerate aging. And so one of the other conversations is that retatrutide might actually be kind of a de-aging drug as well.
Oh, Hercules, Hercules, Hercules, Hercules.
You know, and a lot of the studies, by the way, are done on the very high dose, 12 milligram dose, but you could probably get this thing dosed down to 2 milligrams and still see a lot of the anti-inflammatory maintenance and other benefits. I'm no doctor, but people are going nuts over this being more widely useful than just for clinical obesity or type 2 diabetes.
When's the projected date?
2027, mid-'27.
That's what they're saying. Could happen sooner. I mean, the data's in, the, you know, the FDA will take their time to evaluate it, but I think given the way this is all looking, could happen sooner, could happen sometime later this year.
SWIM, Chamath, SWIM said it's incredible and that it's living up to the hype in their experience.
Who?
SWIM.
What is that?
What is that?
Someone who isn't me. SWIM. Oh, this is a Reddit term: someone who isn't me. SWIM has a guy, and has cycled, uh, retatrutide, and does push-ups, and says muscle gain has been spectacular, no muscle loss, and a lowering of fat.
If you go on X and you just search up Retta, mm-hmm, it's like incredible. You see these like 65-year-old guys that go from a dad bod to looking like an incredibly ripped athlete in weeks. And I, I mean, I'm shocked. And then for me, I don't need that help per se, but my liver health is important to me, my cardiac health, cuz I'm South Asian. And it just looks like a wonder drug. I can't wait.
When you starve your body, when you turn off the appetite, which is the GLP-1 agonist function, normally your body goes into this kind of starvation mode, and you have this process by which your body tries to generate energy from your existing tissue. And because muscle is much denser than fat, you can have a favoring of muscle tissue being broken down over fat tissue. But what this new agonist, this glucagon agonist that they put into, uh, retatrutide, does is favor fat burning over muscle burning. And so that actually can drive short-term use at low dose for people to cut weight and maintain muscle and get ripped. And so that's why a lot of people in the kind of fitness community are talking about, hey, I want to get access to this and get on it for a while. So you'll see a lot more hype, probably, in that community, as well as around all the health effects.
It just feels like we're about to have an absolute avalanche of peptides to choose from.
In November of 2025, Lilly cut a deal with the Trump administration— I saw this— to drop the price on tirzepatide pretty significantly. I think it's like $50 on Medicare.
$50 from Medicare.
Yeah, yeah. Which is a pretty cheap price point. But it starts to make sense as you think about the portfolio of Lilly products. You get tirzepatide for $50, but if you want to upgrade, get the retatrutide, that's the high premium product. And that's where they're going to start to make all the money.
That'll be the Mercedes to the Honda.
Because I'm sure if I'm Lilly and I'm sitting there and I'm looking at this data coming out, I'm like, my God, people will pay for this. And that starts to become sort of like the upgrade to the BMW, or the Model S Plaid, if you will.
Yeah, tirzepatide is like the one-bedroom messy-bed hotel room, and the other one's the suite. Yeah, retatrutide is like the two-bedroom suite.
But you can also— by the way, you guys know I'm a spokesman for Ro. They also have the Wegovy pill. Uh, ro.co/twist, uh, to get your—
wait, are you a paid sponsor?
What are you talking about?
Are you, are you a paid—
What are you talking about? We have Charles Barkley and Serena Williams, our own spokespeople for Ro, and then you come over on All In and you start promoting it? No, no, no, no, that's not happening.
Trust me, we'll get one of those as well. We'll get a ro.co sponsorship here.
What was the Ro pill that you had me get? What was it called?
Oh, Sparks. Sparks.
Did you take it?
I have taken it, and now amore.
Please.
No, no, no.
Maybe just a half of a lasagna.
I want to hear the story. I want to hear the story. Go.
It's so out of control. It is out of control.
I told you. I told you.
So then what happens is Nat and I are like, you can't just randomly use it. It's scheduled. We discuss it. We put it on the calendar.
We need a plan.
You need a plan. You need a plan. You can't go in with no plan. Otherwise it's too much. You just can't randomly take it.
What do you mean?
It's going to be a sesh.
It's a whole thing, man. It's like, I don't have the energy for that anymore.
It's an extended session. You have to be well rested.
This—
don't do this at 1 AM. This is like a 10 PM thing.
This is like a— no, this is more like a— this is more like on vacation, you know, like 10 AM to noon, you know. You got to really—
you got to plan it out, schedule it, schedule it, because otherwise kids around, otherwise it's got to be empty.
Otherwise it's unfair to her, and it's just a lot. It's a lot.
It's a big commitment, literally.
You look embarrassed. Chamath, do you feel embarrassed talking about it?
It's just a lot, man. It's like, it's a lot to handle. It's a lot.
It's— if you want to get the extra 20% in your performance, it's a lot, bro. It's a lot. It's basically going to overtime.
What happened was, I was like, oh, what is this thing? Jason's like, dude, you must get it, you must get it. So we got it, we tried it, and we were like, what the fuck was that? And so then I've been trying to bleed the pills out. So I gave some to Stanley Tang. I'm like, Stanley, you try it.
Literally, he's dealing them like cards when we're having poker dinner.
I'm like, does anybody want to try these things?
But these are like the—
what is it? Ro Sparks? Is that Ro Sparks?
Shout out to my friends at Ro Sparks. All right, let's keep moving here. Friedberg, guys. Friedberg had his own personal Super Bowl. You see me getting ready for Knicks playoff season, I get my courtside. Friedberg had the equivalent, Sacks. He went to the Supreme Court in order to hear them talk about chemicals. This was a big deal for him. The Supreme Court coming together. Was it the Monsanto trial that happened in the Supreme Court? And he went, he got courtside, he went to the Supreme Court and listened in the building.
Have you guys ever seen a live Supreme Court hearing?
No, I'd love to though. I'd love to.
Have you been?
No, I haven't actually.
I mean, honestly, I think it was one of the most amazing experiences I've ever had. There was a massive protest out front. We went through the marshal's office to get in. And that building, you walk in, it's like sacred. It's all marble. You're not allowed to talk; you have to be super quiet when you're in the building. They keep going, shh, shh, shh, like you're in some quiet library. People treat it with this level of sanctity and respect. And they're like, there is no politics here. There is no bullshit. There is no freedom of speech. This is the court. When you come into this court, the justices tell you how you will speak, how you will behave, what you will do, and you will not speak unless spoken to. You put all your stuff in a locker, you go up the stairs, you go into the courtroom. And it's just so amazing being in there. They have this amazing marble frieze above the justices with some of the great people of human history, Moses and these kind of amazing historical figures. And then below them are the 9 justices.
And the court case, if you guys haven't watched the case, you can listen to them, I think, online.
Yeah, hold on, wait, wait, wait, I have questions. So does Roberts sit in the middle because he's the chief?
Yes.
And then do all of the right justices sit on the right?
No, they're mixed. I don't know the exact seating, but they're mixed, probably based on appointment to the court. And then it kind of goes out from the middle, with Roberts in the middle. Roberts occasionally will name the justices and say, hey, do you have a question, do you have a question, if no one's talking, but otherwise the justices will jump in with their questions when they want. Now, honest to God, watching this is like watching LeBron James play basketball. These lawyers are so mind-blowingly impressive on both sides that you just sit there in awe. I felt like my energy was completely sapped at the end of this process, because you are so engaged and so caught up in the way that these guys are thinking and talking.
Did you take a Ro Sparks? Did you take a Ro Sparks when you were there?
No. And if you're familiar enough with the case or the case history or the law that's being debated— because again, when you get to the Supreme Court, you never debate the case. What you're debating is the legal interpretation of the decisions that were made on the case. And so, is this constitutional? How do you interpret this particular act, this federal law? What's the right way to think about it? So you don't actually talk about the case; you talk about the interpretation of American law, of our laws, of the Constitution.
You're saying the facts have already been determined? That's right, right, at a lower court. There's questions of fact and questions of law. The facts have already been determined by the lower court. It's just Supreme Court is ruling on questions of law.
That's right. And so they have a full briefing with the full history of the case. And remember, they only hear 2 cases a day, 1 hour each hearing. So you go in, and they only do it Monday, Tuesday, Wednesday during the last 2 weeks, and they only hear cases from October to April. There's only a handful of cases that are selected.
Wow. So you're really on a shot clock then to make your case.
You're on a shot clock, and you only have— and it's 30 minutes a side, and then the justices will ask questions.
So this was Monsanto and Roundup, right? So what was the law that was being debated?
For years, the regulatory body, the EPA, sets the label for pesticides. Does this cause cancer or not? What are the warnings? This can be damaging for birth defects, pregnancy, all the things that we're all used to seeing on labels when you buy a chemical product. And the EPA, in their regulatory authority, determined that Roundup does not cause cancer. When you sell a pesticide, you first have to register it with the EPA, get it approved, and then the EPA gives you a label. And the label is written by the EPA. It says exactly what you're supposed to say. And in this case, it said all this stuff, but it doesn't say cancer, because they determined it does not cause cancer. And I'm not going to debate whether or not it causes cancer, but that's the case that was made: that the EPA is the regulatory body under a federal act called FIFRA, the Federal Insecticide, Fungicide, and Rodenticide Act. And that's where the EPA is given their regulatory authority to put the label on these products. And all of the cases that have been lost have been state failure-to-warn cases. To date, Bayer, which now owns Monsanto, has paid out $10 billion in these lawsuits, and they have reserved $10 billion on their balance sheet.
They have 90,000 cases still outstanding in the courts. 90,000.
Wow.
And so this one case got kind of appealed up to the Supreme Court last year. The White House Solicitor General— and if the Solicitor General steps up and asks the Supreme Court to take a case, it's more likely the case gets taken— so the White House said, please take this case. We need to have federal preemption, meaning the federal government has the right to set the label, because all of the cases that have been lost and that are being adjudicated are in state courts where the state has a law, like in California, called a failure-to-warn law, which means if a manufacturer knows that a product carries a risk, you have to warn the consumer. And so the lawyers have been arguing that Monsanto, or Bayer, knew that this product caused cancer and didn't warn the consumer. And they've been winning cases, they've been losing cases, but they've won enough cases that this has now become a multi-deca-billion-dollar problem. And so the argument is that the EPA says it doesn't cause cancer and they have federal preemption, so the EPA has the right to determine. So that's the one argument. But then when the other attorney came up, this guy was literally like watching LeBron James.
And so going in, we're like, oh, 6-3, Bayer's gonna win. And then the other guy comes up, and he was like, well, hey, you guys overturned the Chevron Doctrine last year. You guys remember that case? Yeah. Where basically, when the Chevron Doctrine got overturned, it said that no longer does the federal agency get to decide; it has to be a direct reading of the law.
Dun dun dun.
Dun dun dun. So now, so he's saying like the states should have a right to read the law themselves. They shouldn't have to just defer to the EPA. And that's what this will come down to. So at the end of it, we were like, oh my God, this could be a 50-50 coin flip, 5-4 either way. And going into it, we were kind of like trying to say, hey, maybe this could be 6-3. So honestly, the whole experience was incredible. The case is interesting.
These are very complicated matters. How are these people able to make a fulsome argument when one side gets 30 minutes, the other side gets 30 minutes, there's a little Q&A, and then you're done in an hour?
There's this whole art and science, and Sacks, you're probably familiar with this, on how you distill down a Supreme Court case in the briefing doc. Like, what is it you're petitioning the court on? And you try and distill it down to the exact legal interpretation you want the judges to rule on, not all the other bullshit.
And this is oral arguments? Yes, oral arguments, just a discussion.
And then the judges jump in, and all they're doing is asking the lawyer questions, one lawyer at a time, one side and then the other. And by the way, the Solicitor General came up in the middle and kind of made a few comments from the White House, and they asked her some questions, and she sat down. And then the two sides kind of went back and forth, and it's like 30 minutes of Q&A each on that one specific legal question. And Ketanji Brown Jackson said, but what if, after the EPA issued the label, they found out information that it does cause cancer? Shouldn't they update the label? And he's saying, well, no, they're not allowed to; they can only use the label the EPA issues. And he says also, it's a criminal offense if they find out that it does cause cancer and they don't report it to the EPA. And then she's saying, well, what if the EPA doesn't act? Shouldn't the states have a right to protect their people? So those are the legal arguments, the discussions that are going on in all of this. And there's interesting implications, which is, fundamentally, if the states get to interpret federal law and ignore federal regulatory bodies, it opens up a whole new can of worms, in that all the states can start to ignore federal regulatory bodies like the EPA or the FDA or the USDA, and on and on and on.
So the whole case has a whole bunch of really interesting implications wound up in it. When you hear these guys, they're just talking, Chamath, about that exact interpretation of the law. And that's what this comes down to. It's not the actual case that matters.
And after the oral arguments, Sacks, they have like a private conference where they'll write their papers and give their final judgment. Yeah, Sacks?
Yeah, I think what happens is that— so I guess there's some discussion that happens behind closed doors and they figure out where the majority is, and then the chief gets to assign who writes the opinion for the majority.
In that meeting, nobody is allowed in, and in fact you have a double-door system where like if anything needs to come in and out, you have to like kind of like knock on the door, you're led into this antechamber.
Oh, is it like an airlock?
It's effectively— I don't know if you were there, Jason, but we had Ted Cruz come to play in the poker game. Ted Cruz clerked for William Rehnquist, and if you want to have an incredible dinner, ask him about the Supreme Court and Bill Rehnquist. He's a real student of the Supreme Court, and it just makes the Supreme Court, Freeberg, to your point, sound like the most incredible body that's ever been created anywhere.
By the way, more than the White House, more than the Capitol Building, more than any of these other big agencies, this place has— it's almost like being in England. It has these kind of ways that people operate. The security is so different. They kind of stand there in the court and they all exchange places every 20 minutes. It's very coordinated. They're dressed very differently than any other courtroom. How many people were in there listening? Maybe like 150, I would say. How do you get a ticket?
So they're on—
so I think— I actually think everyone is a guest of a clerk or someone that works at the court. I don't think that it's like very publicly available to get in there.
You can't line up? There's no lineup?
There's— I don't know if there's a lineup. Um, this was a connection through the case. We got in through the Chief Justice. Um, he gave us the pass, but I think it was like very, um—
I think at the Elon versus OpenAI case, you can line up, and then the judge gave like 30 tickets to the press.
That's not the Supreme Court.
No, no, yeah, but I think there's a lineup for the Supreme Court as well. There's some public access that they're—
It did not look like anyone from the public was in this court. Everyone is dressed respectfully. I mean, this court has an incredible amount of, like, you know— it's kind of a cool experience.
It's a great time. I would just say, uh, enjoy it while you can. I mean, I think the Supreme Court is one of the last highly functional institutions in the United States, and 100%, you know, at some point we're going to have like 13 or 21 or some crazy number of justices up there.
They should put in a Venus Act and get jerseys and everybody show up with jerseys.
And so enjoy it while it's still in the current form it's in.
Can you imagine showing up with jerseys with the justices' names on them and like having sections and like somebody selling Cracker Jacks?
The Idiocracy version of the Supreme Court. Exactly.
The popularity of the court really depends on whether it's issuing decisions that people agree with. That's what it comes down to. If, like, if you ask people whether they like the Supreme Court or not, it really just depends on whether they agree with the decisions that are coming out.
Sure, recency.
As opposed to the process of the decisions and how well argued it is and all these things that you're pointing to. And actually, the, the court— I mean, I just checked the numbers— the court is relatively popular right now. I think that it got as low as 35% in the 2024 Gallup survey, but I think it's back up to 44 to 50% favorability, which for something that's involved in politics is relatively high, right? You look at Congress or any particular politician, they're going to be lower than that typically.
I just felt so assured of like the institution when I visited and saw these guys interact and behave and how they behave, the process. It was like, man, this is what an amazing country. Yeah.
Well, the reason I say what I say is there was an interview with James Carville recently. Did you guys see this? I saw that. He said, look, when we get power, we're packing the court. So we're not even gonna, we're not gonna worry about it.
Yeah. We're gonna make it to 13, right?
He said, yeah, they're gonna go from 9 to 13. And then they're going to create some new states and all the rest of it. So that'll be that.
Uh, enjoy it while it lasts.
Enjoy it.
Uh, by the way, end on a high note.
It's the end of the empire.
That'll be that.
By the way, there is, uh, a Supreme Court— I was correct, there is an online ticketing lottery. So we can all sign up and you can get a 4-pack of tickets. I think they should make this— we should talk to Howard Lutnick. Maybe he can make this an auction, get a revenue stream for the US. We could sell like 10 of the tickets as courtside seats for 20 grand.
Jason, you're exactly what they're trying to protect against happening. Exactly. Like, how can we—
how can we monetize the Supreme Court? All right, everybody, that's it. That's the World's Greatest Podcast for you. For Chamath Palihapitiya, David Friedberg, and David Sachs, I am the world's greatest moderator. We'll see you next time.
Justice.
I'm like the Chief Justice of the All In Podcast.
I'm thinking I noticed in your driveway.
Oh man.
We should all just get a room and just have one big huge orgy because they're all just used to this—
it's like this sexual tension that they just need to release somehow.
What?
You're a bee?
Bee. What? You're a bee?
Bee.
What?
Bee.
Bee.
We need to get merchies, Arthur. I'm going all in!
(0:00) Bestie intros
(3:05) OpenAI misses targets, Codex gains on Claude
(20:02) AI cybersecurity: a market that's about to explode
(31:03) Elon vs Sam Altman lawsuit
(41:00) Big tech smashes earnings, Capex explosion
(52:44) Vibecoding nightmare: AI deleted someone's codebase
(58:33) Retatrutide craze: peptides go mainstream
(1:06:34) Friedberg's Supreme Court experience

Apply for Summit 2026: https://allin.com/events

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Referenced in the show:
https://www.instagram.com/missthingthepod
https://www.wsj.com/tech/ai/openai-misses-key-revenue-user-targets-in-high-stakes-sprint-toward-ipo-94a95273
https://polymarket.com/event/ipos-before-2027
https://x.com/aakashgupta/status/2049723185617412550
https://arxiv.org/pdf/1803.03635
https://x.com/AISecurityInst/status/2049868227740565890
https://x.com/ns123abc/status/2049527702076449244
https://www.reuters.com/legal/litigation/openai-trial-pitting-elon-musk-against-sam-altman-kicks-off-2026-04-28
https://www.google.com/finance/quote/CSCO:NASDAQ
https://x.com/chamath/status/2049864100143104420
https://x.com/zerohedge/status/2049895327566561683
https://x.com/lifeof_jer/status/2048103471019434248
https://x.com/levie/status/2049163935182733396
https://www.foxnews.com/media/carville-tells-dems-quietly-prepare-power-grab-dc-puerto-rico-statehood-supreme-court-packing