How many PRs you think are going to get pushed to the core structural internet in 100 days? What's the over-under number? Because I'll give you a number.
You're going to say zero. My answer to that is— No, no, no.
I'll say like 10,000, but it's going to be a meaningless thing.
But if it prevents your browser history from being released to everybody in the world, Chamath, that may be something that you're willing to let 100 days pass on.
I think you got Chamath's attention when you said browser history.
What about the dick pics?
Chamath, he's going to release them himself. Rain Man, David Sacks.
And instead, we open source it to the fans and they've just gone crazy with it.
All right everybody, welcome back to the number one podcast in the world. David Friedberg is out this week, but in his place, the one, the only, our fifth bestie, Brad Gerstner.
I mean, why don't you ever give me— put a little Namaste in your payday anymore? You used to be— you know what I mean? You used to be the greatest moderator, but now it's just—
it's kind of weird. You know what? These guys beat me up. They beat me up, and they just beat the joy out of me doing this program.
It's because you're a Ro Khanna apologist now.
No, I— we'll get into it. Okay, save it for the show.
Apologist?
Just because I said like, hey, they've stopped retard maxing and they've started doing like some logical things. Uh, yeah, okay.
Well, it's great to be here. Great to be here.
Good to have you here. And of course, we have David Sacks back. Everybody wants to hear from David Sacks. We missed you last week, bestie.
We didn't beat the joy out of you, we just tried to beat some of the hot air. Any fluff that you can put on the show that just involves you talking and saying nothing is—
that's the stuff we gotta— Yeah, okay. Yeah, cut it right out. Um, and we'll cut it out and we'll just put a promo in for thesyndicate.com. Thank you. Also with us, Chamath Palihapitiya is here. How's your retard maxing going since last week? Did you have a retard-maxing full weekend? Did you have a good full weekend of just smoking cigars on the back deck and not ruminating about all the chaos you've caused in the last 20 years?
I think I've done generally more good than not.
Oh, you have, but there's been some chaotic moments. Don't think about it, Chamath.
You can't, bro. You can't have ups without downs, man. It's like, what are you there to do, just like placate everybody and be a loser? Are you there to be a winner?
Yes, you're in the arena. But have you stopped going to therapy after realizing—
Jake, what's up with this, uh, sudden interest in retard maxing? Are you like the clavicular for retard maxing?
No, the world finally caught up with me. That's it. I mean, I've been retard maxing this whole time. They just didn't have a name for it, guys.
Oh, okay.
Eli's videos are really good. I watched two more this week.
Take us through what's so appealing about not ruminating, smoking a cigar, and just living your life.
Because what he says actually works at every level of society and every sort of thing that you may want to achieve. Even if you're trying to climb the rungs, you very quickly learn that the more you want something, the less you're going to get it. And I think that's his real message: let go, live life, and just try stuff or don't try stuff. And I think that detachment is really healthy for people. I like it. I like it a lot.
Who's the guy who says this? I actually didn't know.
Elijah Long, but Eli, I think is how he goes by. But he's fantastic.
He's got a YouTube channel.
Marc Andreessen found him and he's like, "This guy is the new guy." Modern-day philosopher.
He gives you a roadmap for how to live your life, right? New age sage.
What's the name of the guy, the character's name from Dune?
I was into girls.
Oh, the Lisan al-Gaib. I didn't read these books.
I was dating girls.
He's the Lisan al-Gaib of the modern internet.
This is why we need Friedberg here, to explain these deep holes. All right, listen, we got a lot to get to. The basic point is build something and don't ruminate, okay? Ruminating is just not worth it. Just everybody go forward.
No, just do stuff. Stop blathering in your own head. Just do stuff.
Absolutely. All right, listen, speaking of doing stuff, Anthropic is withholding its newest model, Mythos— I'm using the Greek pronunciation— saying it is far too dangerous for any of us to have access to, according to the company. The model autonomously found thousands of vulnerabilities, including bugs in every major operating system and web browser. This little study they did included 20-year-old exploits that had been missed by security audits for decades. Some examples: they found a 27-year-old vulnerability in OpenBSD, used in firewalls and critical infrastructure. They found a 16-year-old bug in FFmpeg that was missed by automated tools after 5 million scans. The Linux kernel, all kinds of bugs they found. They released a hype video explaining why they were not going to share this model. Here's Dario. Come on the program anytime, brother.
But as a side effect of being good at code, it's also good at cyber.
The model that we're experimenting with is by and large as good as a professional human at identifying bugs. It's good for us because we can find more vulnerabilities sooner and we can fix them.
It has the ability to chain together vulnerabilities. So what this means is you find two vulnerabilities, either of which doesn't really get you very much independently, but this model is able to create exploits out of 3, 4, sometimes 5 vulnerabilities that in sequence give you some kind of very sophisticated end outcome.
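The chaining idea in that clip can be made concrete with a toy sketch. Everything here is invented for illustration: the vulnerability names, the escalation rules, and the `find_chain` helper are hypothetical, not taken from Anthropic's materials. The point is that individually minor bugs compose into a serious exploit only when sequenced correctly.

```python
# Toy model of exploit chaining: each vulnerability grants a small capability,
# and an exploit is viable only if some ordering of them escalates an
# attacker's starting access to the target capability.
# All vulnerability names and rules below are hypothetical.

CHAIN_RULES = {
    # (current_access, vulnerability) -> access gained
    ("remote", "info-leak"): "address-layout",
    ("address-layout", "heap-overflow"): "code-exec-sandboxed",
    ("code-exec-sandboxed", "sandbox-escape"): "code-exec-user",
    ("code-exec-user", "priv-esc"): "root",
}

def find_chain(start, goal, vulns):
    """Depth-first search for an ordering of vulns escalating start -> goal."""
    if start == goal:
        return []
    for v in vulns:
        nxt = CHAIN_RULES.get((start, v))
        if nxt is not None:
            rest = find_chain(nxt, goal, vulns - {v})
            if rest is not None:
                return [v] + rest
    return None

# Individually, none of these four bugs yields root from a remote position,
# but chained in the right order they do.
bugs = {"info-leak", "heap-overflow", "sandbox-escape", "priv-esc"}
print(find_chain("remote", "root", bugs))
print(find_chain("remote", "root", {"heap-overflow"}))
```

The point of the toy model: with only one of the four bugs, the search dead-ends and returns nothing; with all four available in the right order, a remote attacker reaches root. That is the "3, 4, sometimes 5 vulnerabilities in sequence" dynamic described above.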
All right, Brad, uh, by the way, that set they're using there, that's the same room those guys play Dungeons and Dragons in every Sunday. Brad, you're— sorry, Brad, you are an investor in this company. Is this virtue signaling or is it reality? Is this a good move by them to not release this model and be thoughtful, give it to a handful of people, and just find all the bugs it can before releasing it to the public? And we've got a lot more issues to discuss about this.
I actually think they deserve a ton of credit here, and let me walk you through why, right?
They—
the company could have just released Mythos and broken a lot of core things on the internet. Oftentimes in Silicon Valley, we say move fast and break things. In this case, that would mean just releasing the model to move further ahead of your competition. But here, the company realized it would wreak havoc. They ran their own vulnerability testing. They saw that it would allow offensive hacking and let people expose browsers and browser history, expose credit cards, you know, on the internet. So, you know, what I like about this is they didn't need government to hold their hand on this. We have plenty of government regulation already. They know what's in the best long-term interest of the company and the industry, you know, so they set up Project Glasswing. It's an AI-driven, you know, kind of cyber coalition: Apple, Microsoft, Google, Amazon, JPMorgan, 40 of the most important companies. And their goal is very simple. Let's spend 100 days using advanced AI to find, fix, and harden these software vulnerabilities before hackers exploit them. Now, what I think this represents, Jason, is a threshold that we're crossing. Mythos and SpUD, which is going to be out from OpenAI any day now, which is the first Blackwell-trained model at OpenAI, represent the beginning of what I would call AGI models.
These are models with massive step-function improvements in intelligence, and they're just too smart to be released immediately. You know, and by the way, there was nothing that said that every time you finish a model, you've got to immediately release it GA. So they set up this idea of sandboxing, building defensive alliances, you know, in order to move away from that regime. I think it shows— and Sacks and I have talked about this a lot, so I'm interested to hear what he thinks— it shows you can trust the industry and market forces in coordination with the government. They were talking to the government about this. But they're not relying on some top-down regulation in order to do this. They laid out a blueprint that seems to me very pragmatic: now that we're at this threshold, we're going to sandbox these things. I think that OpenAI will end up doing the same thing. I think Google will end up doing the same thing. It's an aggressive way to keep the pressure on and win the race in AI while making the trade-offs to protect safety. So you know, I think you're always going to have to make these trade-offs.
I think in this case it was a great move by Dario and team, and I think they deserve a lot of credit.
Sacks, when you look at this— we had Emil Michael on the program a couple weeks ago, it might have been 4 or 5 weeks ago, and we had a very thoughtful discussion about, hey, if the government is going to have these tools, and Anthropic wants to withhold them, what is the proper relationship there? You have to think that the government— and I know you don't speak for all parts of the government— if you were just going to run through the game theory, they must have gone to the government and said, listen, this thing is so powerful, it can put together 2 or 3 hacks, create a novel attack vector, and this is incredibly dangerous. What if China has it? And if this thing is as powerful as Dario says it is, then this is an offensive weapon as well, for us to take out, let's just pick a pressing issue: North Korea's ballistic missile program. The way it's being described, this is equivalent, perhaps, to the Manhattan Project. So what are the chances— two-part question for you, Sacks— that China already has this and is using it?
And do you think Dario is doing the right thing by regulating themselves?
I think Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people. And we've seen a pattern in their previous releases: at the same time they roll out a new model or new model card, something like that, they also roll out some study showing the worst possible implication of where the technology could lead. We saw this about a year ago, when they rolled out this blackmail study where supposedly the new model could blackmail users. There's been a whole bunch of these things. Actually, I went back to Grok and I just asked, hey, give me examples where Anthropic has basically used scare tactics. And it's a pattern. Okay? It's a pattern.
Okay.
These guys, I'm not saying it's not sincere, but they have a proven pattern of using fear as a way to market their new products. And if you think back to, again, my favorite example, this blackmail study, where they prompted the model over 200 times to get the result they wanted, that result was clearly reverse engineered, and it got them the headlines they wanted. And I would say the proof that it's reverse engineered is that, now that we're a year later, there's a bunch of open source models out there with the same level of capability that Anthropic model had. And have you seen any examples of blackmail in the wild? I don't think so. So in other words, if that study were true in the sense of being a likely outcome of that model, you would see examples of that behavior in the wild. And we haven't seen any of that in the past year. Now let's talk about this specific example with cyber hacking. I actually think this one is more on the legitimate side. I mean, look, the reason why I bring this up is anytime Anthropic is scaring people, you have to ask, is this a tactic?
Is this part of their Chicken Little routine or is it real? You know, are they crying wolf or not? I actually would give them credit in this case and say this is more on the real side. It just makes sense, right? As the coding models become more and more capable, they're more capable of finding bugs. That means they're more capable of finding vulnerabilities. And like one of their engineers said, that means they're more capable of stringing together multiple vulnerabilities and creating an exploit. And so I do think that over, say, the next 6 months, we're gonna have this, call it, one-time period of catching up, where AI-driven cyber is gonna be able to detect a whole range of bugs that maybe have been dormant over the past 20 years across a wide range of systems. And so I do think that there is real risk here.
Mm-hmm.
And I do think therefore that having this pre-release period makes a lot of sense, where they're giving the capability to all these software companies that have existing code bases to use the tool to detect the vulnerabilities themselves, so they can patch them before these capabilities are widely available. And by the way, it won't just be Anthropic that makes these capabilities available. We know that the Chinese open source models, like Kimi K2, are about 6 months behind. So we have a window here of maybe 6 months where we're still in this pre-release period, where I think companies that have large code bases can get advanced access to this model. And I guess OpenAI is gonna release a similar thing in the next few weeks. I do think that every company or IT department or CISO that is managing code bases should take this seriously and use the next few months to detect any, again, dormant bugs or vulnerabilities and roll out patches. If everybody does their job and reacts the right way, then I do not think it will be the doomsday scenario that Anthropic is sort of portraying. But it's one of these things where the fear might end up being a good thing in order to wake people up.
Yeah.
To—
it's in order to drive the correct behavior. So, sure. I ultimately think this is gonna work out fine, but you do need everyone to kind of pay attention, use the capabilities, fix the bugs. Then we're gonna get into a big arms race between AI being used for cyber offense and AI being used for cyber defense, but it'll be a more normal sort of period.
Chamath, we have Dario and a number of the participants here taking this super seriously. They're making a big statement. Sacks had a very nuanced take there, I think. What's your take? How do these companies have it both ways: hey, this shouldn't be regulated, this should be regulated? If this is in fact cataclysmic, oh my God, they're going to hack everything, what if the Chinese have this right now? That would speak to more government coordination, regulation, or some kind of relationship between the CIA, the FBI for domestic stuff, and these companies, because there is a non-zero chance that the Chinese have an equal capability here. We're assuming they're behind, but who knows what they're doing behind closed doors. So what's your take on this? Is it the boy who cried wolf, or is this the real deal now?
I think it's mostly theater.
Okay.
In February of 2019, when Dario was still at OpenAI, they did the same thing with GPT-2. That was a 1.5 billion parameter model, which sounds like a total fart in the wind in 2026. But at that time, this 1.5 billion parameter model was supposed to be the end of days. It was supposed to unleash this torrent of spam and misinformation, and that was the big bugaboo at the time. And so what happened? They went through this methodical rollout over 6 or 9 months. They started releasing the smaller parameter models, and then they scaled up to the big 1.5 billion parameter model. And at the end of it, it was a huge nothing burger. If you actually think that Mythos is capable of doing what they say it can do, two things are true. One, a very sophisticated hacker can probably do those things right now with Opus. And two, if these exploits are this easy to find, whether you use Opus or whether you use Mythos, the reality is you'd have to shut down the internet for about 5 years to patch them all. So when you see a large multi-trillion dollar G-SIB bank in this coalition, it's a bit of theater.
Why? What do you think they can actually accomplish in 2 months? Do you actually think that if there are these vulnerabilities, it's all going to get fixed? Let's give them 6 months. Let's give them 9 months. But the reality is that capitalism moves forward, the funding needs move forward, and the need for these guys to build adoption moves forward, and that's going to supersede what this is. So I do think that Sacks is right, that they have figured out a very clever go-to-market muscle here, a go-to-market motion that activates hyper-attention and hyper-usage. And so I give them tremendous credit, and I'll maintain what I've maintained before: Anthropic is shooting the lights out right now. This is like Steph Curry going bananas from everywhere on the court. These guys are hucking threes.
Klay Thompson.
It's all net. Okay, so huge kudos to Anthropic, but we've seen it before. We saw it when these folks were the principal architects at OpenAI, and we're now seeing the same playbook here. I think we'll look back and say these two things. One is, if we're really going to patch all these security holes, we need to shut down the internet for some number of years, honestly, literally years. And the second is, an advanced hacker can probably do this today with Opus if they really wanted to.
Okay. Hey, Brad, I'll get you in here for the last word. I'm going to go with, yeah, maybe they did cry wolf before, but based on what I see with these models advancing and using them, and I'm using a lot of the open source ones right now from China, I think this is a code red kind of moment. This is DEFCON. We should be taking this deadly seriously. And I think these companies have got to coordinate with the CIA. And this is equally a defensive and an offensive opportunity. Do you think this—
you're asking for the nationalization of AI now?
No, I'm actually— I don't think it should be nationalized, um, although I did see people sort of insinuating that. I think these companies need to build a group, Brad, that works and coordinates with the CIA. I assume that they're already doing this. I'm assuming, you know, Emil Michael and Trump and everybody have these people in a room, that they've given the DEFCON warning and said, hey, how can our government use this to stop bad actors? And that this is already being coordinated with the CIA and the FBI. I am 100% certain of that, that Dario went to them and said, look what we found, this is the real deal. I'll give you the last word on this, Brad, since you're an investor in both companies; you know them quite well.
The Frontier Model Forum, which was put together in '23, is cooperating on anti-distillation and adversarial stuff as we speak. Right? They don't want to make it easy, you know— so Google and OpenAI and Anthropic, they're coordinating on this stuff. You know, there are times where I've pushed back on Anthropic because I thought it was, you know, perhaps regulatory capture or something else. This is very different in my mind. Right? Dario could have easily come out and said, oh my God, we passed a threshold, we need to have a government moratorium. Remember, even our friend Elon called for a 6-month moratorium in 2023 because of civilization risk. This guy didn't do that. Instead, he said, okay, what should we do? I'm going to get 40 of the leading companies together. We're going to spend 100 days sandboxing, hardening the systems, and then we're going to keep pushing forward.
What do you honestly think is going to get accomplished in 100 days? How many PRs do you think are going to get pushed to the core structural internet in 100 days? What's the over-under number? Because I'll give you a number.
You're going to say zero.
My answer to that is I'll say like 10,000, but it's going to be a meaningless thing.
But if it prevents your browser history from being released to everybody in the world, Chamath, that may be something that you're willing to, you know, let 100 days pass on.
I think you got Chamath's attention when you said browser history.
What about the dick pics?
Chamath is—
he's going to release them himself right now. Chamath's like, hey, Chinese hackers, here are my dick pics, please put them out.
Oh my God.
We have to be out there complimenting when they're doing the right things and relying on the market rather than running to the nanny state and saying, do more of this. So this to me was just an example of a good balance. I'm sure we're going to have plenty of debates about this in the future. But, you know, this is one I would like to see more of.
This is why, to use your word, Jake, I tried to have a more nuanced take: we have no choice but to take this seriously, whether it's total theater or fearmongering, and they do have a pattern around this. We can't take the risk, right? And it does logically make sense that as these models become more and more capable at coding, they're gonna get better at cyber. And there's gonna be that one-time period where you're moving from pre-AI to post-AI and you need a patch for that. So my guess is we're gonna see a lot of patches over the next few months. I think that will resolve the problem. I think this is a case where I'm gonna give them the benefit of the doubt. You know, I've criticized them in the past. I think that blackmail study was embarrassing to the level of being a hoax. But in this case, I'm going to give them credit and say that I think it's legit.
So it's not the Anthropic hoax. This could be legit. I, you know, looking at—
we have no choice but to treat it that way.
Of course. Yeah, I mean, two things could be true at the same time, Sacks. They could have used this tactic before, and it could be performative, like the video with the dramatic music in the background. It does have a little bit of drama to it, and the way they presented it is very dramatic. But it does make logical sense that the one company that made the bet on code bigger than anybody else would be the one to discover this quickest. And 100 days is a pretty big advantage versus the hackers. But let me make one more point there, Chamath. The most important thing that people haven't talked about here is that the amount of code being pushed right now because of these tools is 10x, 100x in most organizations. So we need to have this type of security embedded in these new coding tools to do it in real time. That's the opportunity. There should be real-time correcting of this.
If this is real, they picked the wrong companies, meaning there are energy companies, folks that control nuclear reactors. There are airplane companies that are flying hundreds of thousands of people in what are essentially manufactured missiles full of gas going 500 miles an hour. None of those companies were included in this. And so I think if you really thought that this was end of days, at a minimum, we can agree maybe we should have expanded the circle a touch.
Well, maybe those are customers of the ones they're including here. Anyway, uh, this is a really important story. We'll obviously track it in the coming weeks to see what turns out to be reality. And, uh, Dario, do come on the program at some point. Hey, uh, Brad, will you get Dario to come on the program? I've invited him like 3 times. I got his phone number. He's ghosted me. I don't know why.
Wait, he's ignored you?
I literally got an introduction from the number— like, one of the number 1 venture capitalists in the world. He's on the cap table very early. He just won't respond. I don't know why.
I would tell you, Dario's podcast with Dwarkesh, who I think is an excellent podcaster, is worth it. I've listened to that 3 or 4 times, taken notes every time. It is a really exceptional piece of work by them.
All right, let's keep moving. We got a lot on the docket.
You may once again be tarred with your affiliation with us.
Poor you.
I mean, I don't care. Literally, I've got friends on both sides of the aisle. I have friends.
Of course you do.
Even J-Cal.
Even J-Cal has friends everywhere.
Let me ask Brad a question here just while we're on the topic of Anthropic. There was a really interesting story or tweet, I guess you could say, by the founder of OpenClaw that—
Peter.
Peter, yeah.
What's his name?
Peter Steinberger. Steinberger.
Steinberger. Yeah.
A renowned coder, he created OpenClaw, which is kind of the thing that launched this whole agent era, I guess you could say. In any event, he said that Anthropic was cutting off his access to—
was it to—
was to Claude? Is that the next topic?
This is on the docket. It's a little bit nuanced. Everybody using OpenClaw would take their $200-a-month subscription to Anthropic, which is priced on average usage across subscribers. OpenClaw is very verbose, and those people are 100x the usage of the average subscriber. So Anthropic said, you can't use your $200 plan, you have to use the API. You move from the $200 plan to the API and you add a zero to your token costs, or more. And so they essentially ankled OpenClaw, and then 10 days later or less, they announced their new agent technology, which is, according to them, a safer, better version of OpenClaw. So hey, all's fair in love and war, and they have basically shot a huge cannon across the bow of OpenClaw.
Wait, can you just explain that exactly? So I think you're right that they systematically copied OpenClaw feature by feature, incorporated that into Claude, and then the coup de grâce was basically cutting off OpenClaw.
The oxygen, yes.
Can you just explain exactly what they did?
Okay, very simply. When you buy a subscription to these services, they have blended your usage across many users. So 9 out of 10 users use fewer tokens than they're paying for, and the top 10% use much more. When OpenClaw became a phenomenon, the number one open source project in history on GitHub, with all of this usage, people went crazy. And you heard me talking about how crazy I went for it. Those people with the $200 subscriptions were using $2,000, $20,000 worth of tokens. So Anthropic said, you can no longer plug your $200 professional or enterprise subscription into your OpenClaw. You now have to go to the API and pay per usage. So no more unlimited access.
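Brad's blended-pricing point is just arithmetic, and a hedged sketch makes it concrete. The $200 flat price comes from the discussion, but the per-token API rate and the average monthly usage below are made-up assumptions, since the real figures aren't given:

```python
# Hypothetical numbers illustrating why a flat subscription breaks down when
# a power user consumes 100x the average: the flat plan is priced against
# blended usage, so heavy OpenClaw-style users cost far more in metered
# API terms than they pay.

FLAT_PRICE = 200.0     # $/month subscription (from the discussion)
API_RATE = 10.0        # assumed $ per million tokens, blended input/output
AVG_TOKENS_M = 15.0    # assumed average monthly usage, in millions of tokens

def api_cost(tokens_m):
    """Metered cost in dollars for a month's usage of tokens_m million tokens."""
    return tokens_m * API_RATE

avg_user = api_cost(AVG_TOKENS_M)           # what a typical subscriber "costs"
power_user = api_cost(AVG_TOKENS_M * 100)   # a 100x OpenClaw-style user

print(f"typical user: ${avg_user:,.0f} of tokens on a ${FLAT_PRICE:,.0f} plan")
print(f"100x user:    ${power_user:,.0f} of tokens on the same plan")
print(f"implied subsidy to the power user: ${power_user - FLAT_PRICE:,.0f}/month")
```

Under these assumed numbers, a typical subscriber consumes less in metered terms than the flat price, while the 100x power user consumes tens of thousands of dollars' worth, which is the "selling dollars for 10 cents" dynamic described a moment later.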
But if you use Anthropic's own agent harness, are you part of the bundled flat rate?
You can assume that that's what they'll do, which if you were thinking on an antitrust level, might be token dumping or price dumping. I'm not saying I'm ratting them out like that.
No, it's like bundling, isn't it?
Well, price dumping or bundling. When you price something under the market price, in antitrust terms that would be price dumping, right? And if you were to bundle it, it would be the bundling issue.
Critically important: you can use OpenClaw via the Claude API, and every company has a right to set the price for its products. It's just that, under their prior regime, they were selling dollars for 10 cents via OpenClaw, because these were such power users. And now they're just saying, we have to price this rationally, but we're happy to have you guys use the API. So, okay, okay.
But Brad, when you use the OpenClaw competitor that Anthropic now offers, are they subsidizing that? Are you paying?
We don't know yet because it's in closed beta.
So in other words, what I'm saying is, if they charge API usage rates for their own first-party agent harness or system, then that would be apples to apples. But if they end up charging the bundled flat rate for their stuff and the metered rate for third-party stuff, you could make a bundling argument.
Sure, sure.
And you could say it's anti-competitive, assuming that Anthropic has dominant market share in coding, which I think most people would say they do at this point.
And assuming that it's the same product. I mean, the reason most enterprises will probably use the Anthropic version of this agentic product is because it meets all of your security parameters, right? So Altimeter runs, you know, a lot of stuff on Anthropic. They're already integrated with our data warehouse, our data lake, things of that nature. So just letting OpenClaw loose on the Altimeter dataset would not be wise. And so it's a fundamentally different product.
No, I get that. And I think that Anthropic has a huge advantage in, let's say, cloning OpenClaw and just building it into Claude. I'm not denying that. To me, that would be the reason why they don't need to do price discrimination: there's already a very good reason to use the, let's call it, bundled offering on a feature basis. But the question I'm specifically asking is whether they're giving themselves a price advantage, because—
I think you're giving us the most generous interpretation, Brad. You're taking a more cynical one, Sacks, and I'm with you. I'm 100% on the cynical side. OpenClaw is so powerful, it's got so much momentum, that not only is Anthropic trying to ankle it, I believe that when Sam Altman acqui-hired Peter (he didn't buy OpenClaw itself), it was to subvert the open source project and to get Peter's next set of genius ideas inside of OpenAI as opposed to letting them go there. People are going to say I'm a conspiracy theorist, but this is the number one focus. And let me just give you a list of who is trying to kill OpenClaw slash compete with them. Obviously, you have Anthropic, but also Perplexity Computer launched. It's awesome. I've been using it. Anthropic has these Claude-managed agents; they dropped that on Wednesday, April 8th, yesterday. Today's Thursday when we tape; you guys listen on Fridays. And then you have the Hermes agent that was released on February 25th. That's also open source and very good, so that's in the open source camp. Alibaba is coming out with one that's going to be based on their Qwen model.
Then you have Elon, who said he's got something called Grok Computer coming out of MacroHard, which is a play on words for Microsoft. In addition to that, Amazon and Apple are preparing new releases of their retard-maxing assistants, Alexa and Siri, which will be less retarded in this new version. And then nothing out of Satya and Microsoft yet. So the number one goal, I believe, in the large language model frontier model space is to kill this open source product.
No, I mean, come on, like, why? They're building multi-functioning agents that can move from answering questions to actually doing something for you. Uh, like, you got to do that because that's what consumers and enterprises want. It doesn't mean that it's about killing OpenClaw. It's just this is an obvious thing.
They have the right to do it. But this is a giant movement to stop it, because this is the equivalent of having an open-source, Android-like player in the market. And that could be incredibly disruptive. I believe open source is going to win the day on the large language models and take 90% of the token usage. And I think the entire frontier model space could be undercut by open source. And I think they realize that SLMs, the smaller language models that are verticalized now, that will run on desktops and laptops and are even starting to run on phones, are their biggest competitive threat. And I hope it happens. All due respect to your investments, Brad, you've placed your bets, but I think it's imperative that at the agent level, which is essentially your entire life, you don't give that to Anthropic, you don't give that to OpenAI. That's your entire business, your entire life. It is foolish for you, Brad, to give your entire business and all the knowledge you have to Anthropic through that, unless you're just doing it to boost your investment in those companies.
But I would be very concerned, if I were you, about putting all of the knowledge you've earned over a lifetime into any of these large language models.
All right, J Cal, let me ask you, can I ask a question? Thank you for that impassioned monologue. Actually, I want to ask—
Thank you for coming to my TED Talk.
Yes, thank you for that TED Talk. I have a yes/no question for each of you. Do you believe that Anthropic has dominant market share in coding right now? Yes, no.
No.
In coding?
Yes.
Just coding. They have the lead, but not dominant.
I think it's a trillion-dollar market and these guys have less than 10% of it today. So it's hard to make a case that—
What percent of coding tokens do you think Anthropic is providing the market right now?
Greater than 50%. Yeah, that's true.
Okay, that's called dominant market share.
Uh, I don't know about that.
More than 50% of the market. You got to look at what the—
You've got to look at what the TAM is, right? There are a lot of people who are in the business of helping generate software.
The tiebreaker before we move on to the next.
I'm not saying it's a permanent condition, but if you're telling me that today Anthropic is delivering over half of the coding tokens, that's clearly a dominant position in the market for coding. It's an early market. It could change.
But if I were representing them, David, I would say: 9 months ago, everybody counted us out of the game. We were being destroyed by OpenAI. Now, 3 months later, people are saying we have a dominant market position. This is the fastest-changing, most competitive market in the world. I think you would be very hard pressed to walk into, you know, some district court and make the case that these guys have somehow already formed a monopoly against Amazon, Google, Microsoft, OpenAI, et cetera.
Well, I'm not saying it's already a permanent monopoly, but I am just asking about market share. And I do think you guys all agree they have market share.
Okay, let's get Chamath in here.
Chamath, go ahead.
They probably have 50 to 60% market share, because I think Codex is actually quite broadly used as well. But that belies the more important point, which is that AI-enabled coding, I think, is still 5% of the broad market. So it's kind of a nothing burger. Yes, they're leading, but they're leading in something that isn't that big yet. Now you would say, how could it not be big? And what I would say is, because most of the stuff that's being written is still clean-sheet, de novo code. And I think the ugly truth is, I don't care what model you have, the long-horizon ability of any of these models to actually build enterprise-grade software is still shit. S-H-I-T. Shit. And that's the actual lived experience, not for me, but when I call on our customers, half-a-trillion-dollar banks, $100 billion insurance companies, none of these guys are saying, wow, it just works out of the box. It doesn't work. So most of it is still hand-tuned. So until I can honestly tell you that we can point a model at this with the right guardrails, which I can't today, what I would say is it's a small market that will become large as these models become better.
But we are in a world where we have 50 years of accumulated tech debt as a planet. And I suspect when you enumerate the number of lines that represents, it's hundreds of trillions of lines of marginal, mediocre-to-bad code. On top of that, we have all these legacy languages. I'll tell you, one of our customers has to go and get 60-year-old pensioners to come into the office to interpret COBOL. No, I'm not joking.
This is COBOL, Fortran.
This is a $100 billion a year revenue company, and that's how they solve these problems. It's not that Opus just solves it. I would just keep in mind that most of the tech debt that exists in the world, 99% of it, is still poorly addressed by these models. We are untying this Gordian knot. It's going to take decades to do it right. So all the breathlessness about all this other stuff, I really think it's not where the money is. It's not the big-time stuff. And you can tell me, oh yeah, it's going to be the future. And I would say, tell that to this business with $100 billion a year of revenue and 50 million billing relationships, that all of a sudden you're going to OpenClaw your way to a solution. It's bullshit. Not to say that you can't have a great chief of staff, and not to say you can't do some useful stuff and trickery and have a good knowledge base. I'd like that too. But the core things that your lived experience sits on today is a mess of tech debt that will get very slowly replaced. And that's just the reality of life.
And there are competitors that are extremely disruptive. I'll tell you about one. We talked about Bittensor and TAO on this program a couple weeks ago, when we had the Jensen interview. You brought it up actually, Chamath. There's a project on Subnet 62 called Ridges AI. And what they're doing is a competitor that is not only open source, but that anybody can contribute to. They spent about $1 million in TAO rewards, and in under 45 days they hit 80% of Claude 4's performance. The way that works is they give rewards to people who, and they can do this anonymously, make that coding product, which is like Codex or Claude Code, better. That flywheel is racing right now with participation, in the same way Bitcoin is. So you're going to see a lot of these open source and crypto combinations. And anybody who's not investigated this, I highly recommend you investigate it.
I do think you're right about one specific thing. I would put zero, literally probability zero, on any important company worth anything more than a dollar outsourcing their production code to an open source project. That'll never happen. However, what will happen is this: look at the cost of training this 10 trillion parameter model on Blackwell, and then look into the future, let's just say 6 or 9 months out, when a $15 or $20 trillion parameter model is going to get trained on Vera Rubin. I think, Jason, that's where you are right. And just to be clear, I have no investments in this at all.
I do, to be super clear.
I'm just observing, because another project besides Bittensor that someone brought up to me is Venice. The concept of open source training and orchestration is a hugely disruptive idea. It's the completely orthogonal attack vector to this idea that you have to raise tens and tens of billions of dollars to train your models. Because if the capital markets run out of $10 and $20 billion checks to give people, the only solution is to be totally distributed. So I tend to agree with you, Jason, that at some point there is going to be a very successful open source project for pre-training. But absolutely never will a real company that has any skin in the game say, "Here guys, re-engineer my code base as an open source project." Never going to happen.
Yeah, I think the coding tools will. And if you look at the history of open source, Brad, you actually, I think, had a lot of bets in this space. Linux, Kubernetes, Apache, Postgres, Terraform. These open source projects are deep inside of enterprises, deep. If we were sitting here 15, 20 years ago, the same argument was made: nobody will ever adopt these inside the enterprise. You gotta go with Oracle, whatever. And fair enough, many people do. But this $29 Ridges subscription, versus $200, is starting to take hold inside of startups. And that's where I always look at the tip of the spear. Startups love to use open source products. I think this could be the next big thing. But listen, I invest in things that have a 90% chance of going to zero, so do your own research. No crying in the casino.
Can I just make a final few points?
Please.
Yes. So just quickly. So number one is, with respect to this market for code or code tokens, whatever you want to call it, it might be 5% today, meaning 5% of the code's AI-generated versus human-generated. I think it's going to 95%. I mean, I bet any amount of money on that. The only question is when, probably over the next few years. So that's point number one. Point number two is it's possible that if you're the early leader in coding as an AI model company, let's say you have 50 to 60% market share, you have the most developers using it, therefore you have the most access to code bases, you might get the most training tokens. There is a potential flywheel there. Where you can see the early market leader consolidating its lead because it's generating the most code tokens and it's getting access to the most existing code. Now, I'm not saying for sure that's going to happen. It's possible that the other guys catch up, but I think there is a possibility of a flywheel there and strong, I guess you'd call it data scale effects, things like that. So I do believe that the market for coding tokens could be monopolized.
Third, Anthropic's revenue run rate, based on what I can tell and what's been publicly released, is the fastest-growing revenue run rate at scale that I think we've ever seen. Uh, we can—
perfect segue. It's the next story.
Okay, maybe pull up the tweets, but this thing is ramping at a rate we've never seen before. Yeah, we can get into that in a second, but just one last final point: I think it's pretty clear that where we go from here is agents. And coding gives you a huge step up on agents, because one of the main things agents need to do is write code to complete tasks. And so if it is the case that coding is this huge market that's going to be dominated by one or two companies, and that leads to another huge market, which is agents, my point is just that I think all these companies need to behave in a very clean way.
Yeah, for sure.
And not engage in tactics that later the government might say, you know what, that was anti-competitive. Everyone should just, I think, play fair. Do not engage in discrimination against other people's products. Engage in fair pricing. I'm not accusing anyone of breaking any of the rules, but what I'm saying is that eventually the government's going to look at this market with the benefit of 20/20 hindsight. And I think everyone should just basically, you know, keep it—
Keep your nose clean.
Keep it tight. Keep it tight and right.
Keep it tight. Tight is right. I think it's an excellent point. Let's talk about the revenue ramp of Anthropic. This is just unprecedented. Anthropic's revenue run rate has topped $30 billion, with a B. Early 2023, they turned on revenue; they started charging for API access. End of 2024, they're at a billion-dollar run rate. February '25, they launched Claude Code. That was the starter's pistol. Mid-2025, $4 billion run rate. End of 2025, $9 billion run rate. Just a couple of months later, in April, $30 billion run rate. Yes, that's right, more than triple. And the way they did this is that enterprise customers are a major part of the spend. Dario announced a couple of months ago that there are over 1,000 enterprises paying over $1 million annually. This is truly mind-boggling when you think about it, because those are the most coveted customers in the world. These are the big fish. When people are building enterprise software, Slack dreamed of getting these million-dollar customers, Salesforce dreams of getting these million-dollar customers. Brad, you're an investor. Sam famously on BG2 asked you to sell your OpenAI stock back to him. You didn't, you demurred, but you're an investor in both.
How shocking is it to you to place both of those bets and then see one of them come from so far behind? ChatGPT has 900 million users. I don't know if they've officially passed a billion yet, but they are the verb, right? They're the Uber, they're the Xerox, they're the Polaroid of AI. But they didn't go after the enterprise. Dario made that bet. And Dario, who was an early research leader at OpenAI, left, and according to the New Yorker story that came out from Ronan Farrow this week, he basically left because of his disgust at working with Sam Altman. Your thoughts, Brad?
Well, you know, before we go down the OpenAI rabbit hole, let's just really contextualize what's going on here. You know, I have this additional chart. You showed one. They added $4 billion of annualized run rate in January, $7 billion in February, and $10 or $11 billion in March. Just to put that in perspective, that's Databricks plus Palantir combined that they added in a single month, right? So we started the year with everybody wringing their hands, including, you know, Gurley and others, saying we're in a big bubble, asking whether the AI revenues would show up to justify all of this investment. And bam, you have the largest revenue explosion in the history of technology. The company's plan was to end the year at about a $30 billion exit run rate. They got there by the end of March. Right. And I suspect that it's continuing in April. So you have to ask what's going on and what's the big so what. The first thing for me is that model and product capability just hit this threshold we talked about earlier, near AGI, whatever the hell you want to call it.
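For listeners keeping score at home, the run-rate arithmetic being thrown around here is simple to sketch. This is a back-of-the-envelope illustration using the rough figures quoted on the show, not Anthropic's reported financials; `run_rate` and the monthly numbers are our own framing of what's said above:

```python
def run_rate(monthly_revenue_b: float) -> float:
    """Annualized run rate in $B: the latest month's revenue times 12."""
    return monthly_revenue_b * 12

# Brad's incremental run-rate adds as quoted: +$4B (Jan), +$7B (Feb), +$11B (Mar),
# on top of the ~$9B end-of-2025 run rate mentioned in the segment.
adds_b = [4, 7, 11]
start_b = 9
exit_b = start_b + sum(adds_b)
print(exit_b)             # 31 -- roughly the "$30 billion with a B" figure

# A $30B run rate implies about $2.5B of revenue in the latest month:
print(round(30 / 12, 2))  # 2.5
```

The point of annualized run rate is that it extrapolates a single month, which is why it can "triple" in a quarter even though the trailing-twelve-month revenue is far smaller.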
And everybody, like Altimeter, said, damn, this is so good, I have to have it. This is no longer about my IT budget. This is about labor augmentation and labor replacement. And by the way, Cowork is growing even faster than Claude Code did at the same stage of development. So what it showed is that we have a near-infinite TAM. It turns out the TAM for intelligence is radically different from anything we've seen before. And I think the best example of this is millions of self-interested parties: consumers, enterprises, the 1,000 enterprises now paying over $1 million, right? It's not that there was some great go-to-market at Anthropic where all of a sudden, you know, they snuck up and blew everybody away. No, it was companies demanding the product. They're getting throttled on the product. Why? Because it's so good, it makes them better at their business. We are all self-interested actors, and when millions of those people are all making the same decision, that's a huge tell. And the tell here is that the TAM is as big as Dario and Sam and others have been saying. We knew intelligence was going to scale on the exponential.
The question was whether revenue would scale on the exponential, and that's what we're seeing. And remember, they're doing this with only 1.5 to 2 gigawatts of compute, right? These guys are massively compute constrained. They're each gonna be adding 3 gigawatts of compute this year, and that will unlock more; they would be growing even faster but for that. And then, Jason, to your point about the open source models that we all want to be a part of this solution: I've talked to a lot of big companies, and 65 to 70% of their token consumption is open source models, right? These are cheap Chinese and other tokens. So these revenue ramps are happening while the world is already using open source. This is not frontier only; this is frontier plus open source. We're gonna see massive token optimization over the course of the year. But what happens with this Jevons paradox is that the unit cost of intelligence is plummeting. Not the cost of tokens, the unit cost of intelligence, because the capabilities of these models are so much better. I look at what it does for Altimeter day in and day out. I talked to a major company yesterday.
They're on a run rate to do $100 million of token consumption this year on about $5 billion in OpEx. They think that we're now nearing peak employment in their company, but that their token, their intelligence consumption, okay, let's not call it token consumption, right? Because tokens may go up a lot, but their intelligence consumption is going to go up, you know, a lot. I would leave you with this. We're early, to Chamath's point. We have low penetration of the Global 2000. We have low penetration of the use cases. We have low penetration within the use cases they're already using, and the models are only getting better. So I think when you look out toward the end of the year, I would not be shocked if you see Anthropic exiting this year at $80 to $100 billion in revenue. And by the way, doing it at the same time that OpenAI, who is also on the wave, will be releasing an incredible model imminently. They're going to be on that wave, and you're going to see an inflection in their revenues as well.
Okay. Chamath, question 1 has been answered. The question of, hey, does this stuff actually have utility? That went from a question mark to an exclamation point. Of course it's got utility. People are getting value from it, and it might be variable; some people get more value than others. Number 2, the revenue ramp was a big question. Now that's turned into an exclamation point. The final piece of the puzzle that you've brought up many times is, can this be profitable? These companies are burning through a large amount of cash. So what is your take on when these companies can get out of the J-curve? We talked about this, I think, 3 episodes ago. I estimated we're going to be looking at $400 or $500 billion in investment into these data centers at a minimum, and then they have to climb out of that to get to profitability. So what are your thoughts on these becoming profitable companies?
Do you remember that investor who published this list, Jason, where he laid out all of the terms you talk about when you can't talk about profit? It's a list where it's like: if you can't talk about free cash flow, you talk about EBITDA. When you can't talk about EBITDA, you talk about margin, or community-adjusted EBITDA. When you can't talk about that, you talk about revenue. And then when you can't talk about revenue, you talk about gross revenue.
Bookings.
So you can kind of figure out, I think, where we are in any part of any cycle by just indexing into what does everybody talk about. I think where we are is we are between gross revenue and net revenue. That's where the discussion is.
Okay.
There was another article, I think today, maybe in The Information, that tried to categorize and distinguish that Anthropic presents gross and OpenAI presents net. They're different, and we don't know what the various take rates are. So they're saying there's a difference. Whether or not it's true, there's been no clarity provided by these companies. So at a minimum, you have this confusion where there's the breathless talk, and then there are people who don't even know the difference between actual recognized revenue and run-rate revenue.
Totally.
And how to multiply. I mean, so we're definitely there. Okay. We can quibble about the details, but we are not at the place where people are like, oh, here's your steady-state, you know, free cash flow margin and here's what your EBITDA is. We're years from that.
They're gonna have token-maxing EBITDA, like community-adjusted EBITDA at WeWork.
The thing that we need to understand is how gross margin negative is this revenue growth.
Mm-hmm.
We don't know that. And at least we don't as outsiders.
Brad might know.
Brad may know.
I would tell you, think about this. What are their big cost inputs? The number one cost input is the cost of compute, right? I just told you they only have a gigawatt and a half of compute, and they had that gigawatt and a half whether they have a billion in revenue or $80 billion in revenue. So you might actually expect to see these companies' gross margins exploding higher, like the fastest increase in gross margins I've probably seen out of any technology company.
So this is not gross margin negative, you're saying?
No, definitely not gross margin negative. And what I would tell you is—
So then they must be hugely profitable then.
Well, you may see what I call accidental profitability. They may not be able to spend this revenue fast enough, Chamath, on compute. And remember, it's only 2,500 people. Google crossed this revenue threshold when they had 120,000 people. These guys have 2,500 people. So the only thing you can really spend money on, right, is compute. And they can't stand up the compute fast enough.
But none of this foots to me then, to be honest, because if you were on a threshold of 90% plus gross margin—
I'm not saying it's there. I'm not saying it's 90% plus. I'm just saying it's gone from meaningfully negative 18 months ago to, you know, very, very positive. I've seen 50, 60% rumored out there, is what you're saying.
The trend is there.
Let me just say this. I think if you're an incumbent, you want the cost of compute to go down. I think if you're not an incumbent, and specifically who do I mean? Meta, Google, and SpaceX. Well, sorry, Meta and Google have a fortress balance sheet, and I think by the end of June, SpaceX will also have a fortress balance sheet. What they will want to do is make this a compute problem, because they will control the conditions on the field. You already see this today. Meta's models today, the general review is that the model quality is okay, but the one thing people say is that they're incredibly performant. The quality is okay, but the performance is great, which speaks to Meta's huge advantage: they have a massive compute infrastructure. So if you're not OpenAI or Anthropic, you'll want to make this a capital problem, because then you can win it. If you're Anthropic or OpenAI, you want this thing to be as efficient as possible. I think where we are is very much the early innings, and we're bumbling around talking about gross margins and revenues.
We are not at profitability. And what was true for Facebook and what was true for Google was, irrespective of when they got to a billion, who the fuck cares? They were profitable by year 3 and they never looked back. I was there. I remember it. It was glorious.
The cost of building AI, totally stipulated, is radically higher than the cost of building retrieval at Google, right? It's just a fundamentally more expensive problem. But I will tell you that there's a lot of FUD out there about negative gross margins. I mean, Jason, you started this segment by saying they're burning through large amounts of cash. I think people are going to be shocked at how low the burn levels are at these companies.
Yes. Anthropic or OpenAI?
Yes. And I would say at OpenAI as well. Like, if they do $50 billion this year, again, just look at the number of people they have. Revenue per person, burn is pretty low. And the inference cost is plummeting, down by 90% year over year. And so finally, I want to respond to this point about gross versus net that Chamath was referencing. Okay, so there's a certain percentage, a smallish percentage, of Anthropic's revenue that they distribute through the hyperscalers. And like a lot of arrangements, whether it's Snowflake or Databricks or others, you pay a commission on that. I will just tell you that you're talking single-digit percentages of total revenue at these companies. So the gross-versus-net thing isn't what's being reported. The apples-to-apples comparison is pretty easy. And if you want to be conservative on it, take down Anthropic's revenue by, you know, 5 to 10%, which, you know, again, I don't— I think it's better to gross up OpenAI's revenue. But any way you do it, I just think it's a distraction from what's really going on here.
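The gross-versus-net adjustment Brad describes, haircutting reported revenue by an assumed hyperscaler commission, is simple to sketch. The 5 to 10% band is his estimate, not a disclosed take rate, and `net_revenue` and the $30B gross figure are illustrative assumptions:

```python
def net_revenue(gross_b: float, take_rate: float) -> float:
    """Net revenue in $B after haircutting by an assumed distribution commission."""
    return gross_b * (1 - take_rate)

# Haircut a hypothetical $30B gross run rate by the quoted 5-10% band:
for rate in (0.05, 0.10):
    print(round(net_revenue(30, rate), 1))  # 28.5, then 27.0
```

So even at the top of the band, the gross-versus-net gap moves the headline number by only a few billion on a $30B run rate, which is the sense in which he calls it a distraction.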
Happy to—
Sachs, you have any thoughts on this, uh, massive revenue ramp?
Yeah, I mean, I want to go back to a point that Brad made because I think it was just really important, and I want to just underline it. Consider where we were at the beginning of the year. And what everybody was saying is that AI was a big bubble. And the evidence they would point to was the fact that hundreds of billions of dollars was going into CapEx that needed to be spent on these data centers. And there was no evidence of significant revenue to justify that spend. Where was the ROI? By the way, as an aside, the same doomers who were saying that AI was in a bubble were also the ones who were saying that AI was so powerful, it's gonna put us all outta work. And it's going to take over from humanity. I mean, in other words, they couldn't decide if AI was too powerful or not powerful enough. But putting aside that contradiction, they clearly were making this case that AI was this big bubble and that there'd be no payoff or justification for this massive CapEx that's being spent. And I think we're starting to see here, there is justification for it.
We're seeing it in just this one vertical of AI, which is coding: again, the fastest revenue growth in history. It's utterly unprecedented. And this is just one category or vertical of AI. We know that agents are coming next, and the enterprise adoption of that is gonna be absolutely massive. So I guess what I'm saying is that this is early proof of the thing that makes Silicon Valley special, which is that we're willing to bet on things that, intuitively, on a gut level, we know are the next big thing. We're not that spreadsheet driven, actually. Silicon Valley believes that if you build it, they will come, and is willing to finance that buildout. And that's basically what's been happening. Again, just the top 4 hyperscalers: $350 billion of expected CapEx this year, on its way to, I think Jensen said, $1 trillion by 2030. So Silicon Valley, whether it's big companies or founders, is always willing to bet on the next big thing. They're not like Wall Street. They don't need spreadsheets to tell 'em where to go. They know where the technology is going and they make their bets based on that.
And I think there is going to be a big payoff for this. And I think it's the thing that's going to keep our economy, and the United States in general, extremely dynamic and in the lead: that we are willing to make those kinds of bets. And I think it's going to pay off big time.
Yeah, clearly. Hey, um, Brad, you didn't answer my question about the vibes over at OpenAI versus Claude. OpenAI is, um, I wouldn't say reeling, but there's a lot of hand-wringing going on, a lot of employees leaving, a lot of people wondering, is our consumer-first strategy the winning strategy? They shut down Sora, you know, they're unwinding the Disney deal and really trying to get the company focused. And, I mean, listen, the New Yorker story was a bit of a rehash. I don't think we have to go into the blow-by-blow, because we covered it here 3 years ago. But the truth is, a lot of the great co-founders of OpenAI and a lot of the great contributors are now at Anthropic and other large language model companies. And in the secondary market, OpenAI is trading lower than the last valuation, and Anthropic is trading significantly above the $380 billion. So maybe talk a little bit about this competition, this Microsoft versus Apple, this Google versus Facebook.
Well, let's start with immense credit where credit is due. Anthropic was literally counted out of the game last year. Yep. Right. And here they come over the last 12 months, and they've kicked OpenAI's ass over the last 90 days. Right. And what did Anthropic do? Anthropic made choices. No multimodal, no video, no hardware, no chips, no building data centers. They said, we're just going to focus on coding and Cowork. We think that is the path to AGI and ASI. They executed their butts off. They took the lead: 2,500 people, tight, pulling on the oar in the same direction. But I think you would be seriously foolish to count out OpenAI. Right. And I think we're—
why?
We're at peak OpenAI FUD. And I'll tell you, it starts with great researchers and great models. And I think when you see the Spud model they're about to release, I think it's going to be an excellent model, and it shows that they're firmly on the wave. If you look at what's going on with Codex, incredible ramp on Codex, the fastest-ramping model with 5.4. I think 5.5, or Spud, whatever we're going to call it, is going to be an even faster ramp.
Have you seen Spud? Have you used it? Have you gotten a preview?
People are using Spud, right? So it is being previewed.
And so, so you're talking to people who've used it, and what are they telling you?
They're telling us that it's an incredible model, on par with Mythos, right? And that it's a very usable model in terms of how it's packaged. I will say, back to David's point, this is the most important point I think anybody can take away here: this is not zero-sum. The TAM of intelligence is dramatically larger than any TAM we've ever seen in our investing careers over the last two decades, right? And if you're on the wave, which OpenAI is, you are going to be selling into the world's biggest TAM. They are going to build a very big company. I'm a buyer of the shares today, notwithstanding all of the vibes that you describe. I think these companies are firmly on the wave. They are jarred. They are sitting there saying, what did we do wrong and how do we get our mojo back? They want to compete. It is embarrassing to people on the research team and the product team over there. So I'm not saying there's not a real awakening occurring there, but I think that's the case. And by the way, to Chamath's point, do not count out Meta, right?
I think Meta is absolutely in this game. Google is absolutely in this game. Elon is absolutely in this game. And if you're on team—
got some stuff dropping shortly that's going to be very impressive.
If you're on Team America, the fact is that we have 5 frontier models competing against each other, and David made sure they weren't throttled by excessive government regulation. We have Mythos come out; it's a self-imposed safe harbor, you know, to harden our systems. It wasn't a call for moratoriums or getting the government involved. We have the type of competition that's causing us to accelerate our lead against the rest of the world. We can't take our eye off the prize. We've got to stop adversarial distillation, and we need to make sure we're distributing our products around the world. But I view this as really good for Team America.
Well said. And here is your Polymarket on IPOs before 2027. Obviously SpaceX at 95%, Cerebras at 94%. And hey, number 5 on this list: 51% chance that Anthropic goes out before the end of the year, 44% chance that OpenAI goes out before then. All right, and here is the closing-market-cap market for Anthropic on Polymarket. Only $158,000 in volume, so Chamath, when you put in $400K, you're going to really tilt this market. 78% chance that it's above $600 billion, 19% chance that it doesn't go out. So it's looking like this would be a decent investment for you. Brad, what valuation did you get into Anthropic at?
We first invested in, I believe it was the $130 or $150 billion round.
So this will be a 7x, 5x for Altimeter. Congratulations.
I mean, listen, again, there are lots of people who were there before us, who are on the board, and who are gonna do better than us.
What'd you put in? 50? 100? What'd you put?
Now we've got billions in both companies.
Billions in both companies.
Oh my Lord. I think there's this existential thing going on in venture today. David could talk about it as well. I mean, people can't— they're extraordinarily nervous. You look at the IGV software index, down 30% year to date, down 5% today. All software stocks plummeting, right? Venture capitalists are terrified to invest money in anything other than these frontier models and things like SpaceX or military modernization. Finding something that's out of harm's way of AI, right, where you can count on the terminal value, to Chamath's insights over the last few weeks, is very difficult to do. That's why you see this crowding. So we've taken a barbell approach, right? We've got a lot in what we think are the most important companies that are on the frontier, and then we're betting on really small teams that we think have very defensible businesses in a world of, uh, you know, AGI. But it's shocking.
What happens to all these enterprise software companies? Do they become PE takeouts? Do they get consolidated? Or do they just have to adopt these AI technologies and solve this problem of, hey, the frontier model is just going to do whatever these niche software companies do?
I think the market's probably being a little too pessimistic with respect to at least some of these software companies. Obviously there are going to be big differences in the quality of these companies' moats. And look, software is going to be a lot cheaper and easier to generate, but I'm not sure that was the competitive advantage of a lot of these companies. So there's probably a little bit of the baby being thrown out with the bathwater right now, and there probably are some value buys in enterprise software. I think the interesting question here, and we've been talking about this for a couple of years on the pod, is where you see the AI value capture landing in terms of the layer of the stack. Remember where we started: it was really just the chip layer of the stack where all the value capture was. Nvidia was the first company to be worth multiple trillions of dollars because of AI. And for a while it looked like that's where all the value capture was gonna be, because OpenAI, for example, was losing so much money, and Anthropic wasn't on the radar as much.
Now we're seeing, wait a second, it's not just the chip companies; the hyperscalers are now benefiting too. And at the model layer, it looks like Anthropic and OpenAI are all gonna be huge beneficiaries. I think the next question is at the application layer of the stack: does all that value capture just get eaten by the model companies, or are there applications that get turbocharged? I guess you could say Palantir is already one of them, right? It's an application company that's been turbocharged by these model capabilities. Who else will be a big beneficiary? Again, is it all gonna be at the model layer, or will you see an explosion of value at the application layer? I'm hoping, obviously, that you'll see beneficiaries at all layers of the stack. But to me, that's a really interesting question right now.
Yeah, what happens to Salesforce, HubSpot, Oracle, right down the line? David, Chamath, your thoughts here on the layers and where the value is captured?
It's too early to tell.
Too early to tell, right? And energy we kind of put into the data center bucket as well, but that's obviously been a clear winner. A little housekeeping here. Liquidity— put a little Tiffany in here, producer Nick— is sold out. There's a waitlist of hundreds of people, but it is what it is, folks. If you snooze, you lose. Top-tier speakers are coming; it's going to be great. We'll get an update from Chamath, but I think, Brad, you're going to be joining us again, yes, for Liquidity?
I have an update.
That's probably not your headliner though. I'm probably not your headliner.
No, but you always score so high. Every event you've spoken at, you've been either number 1 or 2. I don't think you've ever dropped to 3. Go ahead, Chamath, make your announcement here.
Da da da da da. Nat sent me an article from Wikipedia about penile length when you guys were talking about dick pics.
Okay, breaking news.
Showing me that I'm in the Large category, top 5%. She highlighted it.
Top 5%. Okay, and that's with— is that with Nano Banana or without? Is that—
Sorry, she just texted: "Dummy, it's Claude." My apologies, Claude.
Oh, all right.
This is why Chamath isn't afraid of the cyber, is because nothing's going to come out that's more embarrassing than what he says himself on the—
It's like Bezos. When Bezos got hacked, he was like, guys, I got hacked.
So I saw the agenda for this thing. It's incredible. Congrats to you guys. I mean, like, the, uh, like, just the fun of being in Napa, all the poker, all the, the dining experience. This is 5-star all the way. It looks really cool.
It's Aman level because Chamath was, dare I say, belligerent in his demands. He said, "This has to be 6-star or I will not show up, J-Cal." I said, "Okay, boss," and got to work. And no mids. This is all elite. And for the hundreds of people who are on the waitlist, I am sorry, but we have a capacity issue. We'll try to get you in next year. But Chamath, give us some updates here. You have any updates you want to share? Because you are running programming for Liquidity 2026 up in Napa.
Look, it's going really well. Really excited to hear all of these great folks speak. I think the next two we'll announce today: Brad Gerstner and Thomas Lafont of Coatue.
Oh, from Coatue? That's a great get.
We also have, I think, 3 people confirmed for their best ideas pitch. Really interesting folks. They each run between $1 billion and $6 or $7 billion.
Awesome.
Superstar compounders.
This is the new zone. This is the new zone, Chamath.
It's great. So right now we have Bill Ackman, Andrej Karpathy, Dan Loeb, Thomas Lafont, Brad Gerstner, Sarah Friar, and more to come. We will announce more soon.
There might be one or two surprises. J-Cal, maybe.
And a couple of surprises.
Yeah, we don't announce all the speakers. J-Cal's got a couple of surprises coming. And if you didn't get into Liquidity, apologies, you're on the waitlist. We are going to be hosting the 5th annual All-In Summit in Los Angeles, September 13th to 15th.
Sacks, are you going to come to that?
Allin.com/events.
Sacks, you should come to that.
I've been advised that I can attend for business. I can be in the state for business reasons.
Okay. There you go.
Oh, then we'll see you at Liquidity and the summit.
Correct.
This is great. That's big news. Now we've just got a bunch of Sacks stans racing to get tickets, and now we're going to get Sacks. This is what happens every year behind the scenes. Sacks at the last minute says, "Oh, I have 4 speakers and I have 72 people who need tickets," and then the whole team has to do a fire drill 48 hours before the event. Okay, here we go, guys. We're going to go to the third rail here. We've got to catch up on the Iran war. Here's the latest. We're two days into a two-week ceasefire as of the taping of this episode. VP J.D. Vance, friend of the pod, and some special envoys, Witkoff and friend of the pod Jared Kushner, are headed to Islamabad, the capital of Pakistan, for talks this very weekend. So while you're listening to this, they are going to be working on the peace deal. Easter Sunday, Trump posted a truth stating, "Open the fucking strait, you crazy bastards, or you're gonna be living in hell, just watch. Praise be to Allah." On Tuesday morning, Trump posted another threat on social media: "A whole civilization will die tonight, never to be brought back again.
I don't want that to happen, but it probably will." These posts were obviously discussed a lot over the last week. He gave them an 8:00 PM deadline. At 6:30 PM, President Trump announced on Truth Social that he had agreed to a 2-week ceasefire if Iran opens the strait. He also said, hey, listen, we got the strait, maybe there'll be a toll booth, but we'll take the majority of the toll and we'll split it with Iran. Here's the quote: "We received a 10-point proposal from Iran and we believe it is a workable basis on which to negotiate." And apparently Netanyahu took the ceasefire to mean level Lebanon, dropping 160 bombs in 10 minutes yesterday. Sacks, you were out last week. Everybody wants to know your position on the war. I'll hand it off to you. What are your thoughts on the two-week ceasefire and everything that's occurred up until this point?
Well, look, I have to preface what I'm about to say: I'm not part of the foreign policy team at the White House. And the last time I commented on the war on this show, it somehow made international headlines that "Trump advisor says XYZ." And I'm not a Trump advisor on this issue. That would be a fair headline to write if it were a technology issue, but this is not. So whatever I say is just my personal opinion, but the media is going to somehow attribute it to the White House to try and create an issue out of it. So I feel limited in what I can say, except to say that I think it's terrific that we have the ceasefire. I think it's great that there's going to be this meeting in Islamabad to hammer it out. And what the president's accomplished so far with the ceasefire is a great thing, because these wars take on a life of their own, meaning they tend to go up the escalation ladder. There are a lot of podcasts discussing the so-called escalation trap, and supposedly there are stages to this based on historical patterns.
And so I think it's actually very hard to pull out of these things. And I give the president tremendous credit for negotiating the ceasefire that we've achieved so far and then sending the team to hopefully work this out.
Brad, actually, my first trip to the Middle East was with you, maybe 4 years ago. Thank you for taking me. What is your take on where we're at here? I think we just wrapped up week 6 of this and we're going into week 7.
First, on March 4th, I tweeted, "The Trump doctrine in Iran: massively destroy all military capabilities, kill the people building lethal weapons to use against us, and get out. Reserve the right to do it again if needed. Zero efforts to build Madisonian democracy. Iran's going to have to build what comes next." And I think the market has spoken, right? If you look back at last year on tariffs, Jason, the top-to-bottom drawdown was about 15%, and on the NASDAQ, intraday, it was down 22%. Okay, the drawdown in this period over Iran was only about 5 to 7% on the S&P and NASDAQ, right? So the market has said, listen, take Trump at his word. He said he's not going to get into an entangled war here. He terrifies the hell out of people with his posts about destroying civilization and all this other stuff. But even though people don't like to hear it, they've resolved for themselves that when he says he's going to get out, he will in fact get out. Of course, there was a lot of hand-wringing. But if you look at the markets today, we've basically bounced all the way back to where we were pre-Iran on both the S&P and the Nasdaq.
If in fact we land the plane, if JD lands the plane... and by the way, on Lebanon, yes, they were bombing yesterday, but Netanyahu has now said that you're going to have direct government talks between Israel and Lebanon. So if we land the plane on these two things, I think it's off to the races in the market. And while everybody's focused on Iran, stay tuned: I think we're getting close to a deal on Ukraine and Russia. Venezuela seems to be going very well. I think there's also going to be news on Cuba. There's risk to the downside, certainly, I will stipulate, but you also have to pay attention to the risk to the upside. If you land the plane on those things heading into America 250 on July 4th, the market could really take off.
All right. Well, let's maybe up-level this a little bit and talk about why we're in this war to begin with, which is the big discussion on both sides of the aisle. On Tuesday, The New York Times dropped an inside-the-room piece on how President Trump made the decision, according to this report, if it's true. I know some people don't subscribe to The New York Times anymore or think it's fake news, but it describes how Trump decided to basically follow Netanyahu into this war. On February 11th, Netanyahu met with Trump at the White House, where he gave him a 4-part pitch on attacking Iran. JD Vance, according to the story, if it's true, disclaimer, disclaimer, warned Trump that the war could cause regional chaos and break apart the Trump 2.0 coalition we talked about here, the big tent. And that's turned out actually to be true. There's been a bunch of hand-wringing from Megyn Kelly, Tucker Carlson, right on down the line. Rubio was anti-regime change, but he was largely ambivalent, according to this story, about the bombing campaign. Susie Wiles, chief of staff, said she had concerns about gas prices before the midterms. Pretty good advice there.
And General Dan Caine, chairman of the Joint Chiefs of Staff, said this of Netanyahu's pitch, quote: "Sir, this is in my experience standard operating procedure for the Israelis. They oversell, and their plans are not always well developed. They know they need us, and that's why they're hard selling." Put this together with Rubio's walked-back comments at the start of the war. This is a quote from Rubio: "We knew there was going to be an Israeli action. We knew that would precipitate an attack against American forces, and that's why we did it." I had Josh Shapiro on the All-In interview show, and he talked a lot about this. There is a big underpinning here, Chamath, that United States foreign policy is being driven by Netanyahu. Every Jewish American I've talked to feels Netanyahu is not doing Jewish Americans and the Jewish diaspora any favors with his approach to these wars. What are your thoughts on why we got into this and how we get out of it?
I mean, the person who decides is the president of the United States. So a foreign leader isn't getting to call the shots in the United States. Very practically speaking, the markets are effectively pricing in that this was a small blip, whatever people think. That's just what the best prediction market we have is telling us, and I think it's important to acknowledge that we're probably in the endgame here. The second thing to acknowledge is, if I were Israel, I would really be concerned that unless I help find an off-ramp quickly, the risk goes up that Israel loses America as a predictably steadfast ally. And I think that's problematic for Israel far more than it's problematic for the United States. So all of that tells me that we will find an off-ramp: A, because I think economically it makes sense, and B, geopolitically, I think Israel will want to make sure this doesn't burn a longstanding relationship.
Yeah, that seems to me to be the major issue here: Americans basically do not want to be in this war. Americans do not want our foreign policy influenced to the extent they believe it is. I'm not putting my own belief in here, just what Americans believe: that we are being dragged into this by Israel, and that Israel, or Netanyahu specifically, has far too much influence. And on the antisemitism that's occurring here, Josh Shapiro gave me a lot of pushback, but all the Jewish Americans I talked to say Netanyahu has gone too far with his actions in Gaza, Lebanon, and Iran, and it's causing the antisemitism we're experiencing today. So you can make your own decisions about that. Any final thoughts here, Brad, on American foreign policy being influenced too much by Israel?
It's the discussion of the moment. No, I mean, listen, kind of like Sacks said earlier, I think that we will ultimately be judged by the outcomes, right? Everybody is an armchair pundit today on the approach we're taking in these two different places. I think we could be on the verge of a massive transformation of the Gulf states. You went there with me, Jason: the Saudis, Qataris, Kuwaitis, Emiratis. I've talked to a lot of them this week. I think they're very hopeful and optimistic. I think you could bring Iran into the fold. But listen, I'm an optimist on all of this stuff. I just want to remind people: doing nothing in Iran had tremendous risks. Doing nothing in Venezuela had tremendous risks. So it's not as though this was something that wasn't well calculated. We have to let the cards be played and then let history be the judge. There's risk in both directions, but I'm going to remain optimistic.
All right, Sacks, you said in the Gaza situation we should have a wide berth for criticism of Israel and Netanyahu. What are your thoughts on this belief here in the United States, in this discussion, that Israel has far too much influence over United States foreign policy?
Well, I noticed in my feed today that Naftali Bennett, a major Israeli politician and former prime minister, tweeted polling that showed Israel is becoming very unpopular in the US, and he was expressing concern about that and the need to address or fix it. So you're starting to see Israeli politicians raising that as an issue, and I think that's probably a good thing. Yeah, there it is. And it's really cool actually how X now just automatically translates things from foreign languages, in this case Hebrew, and puts it in your feed. So here's Naftali Bennett, former prime minister, saying this is a serious situation and there's a lot of work ahead of us to fix everything. Now, obviously this is not Netanyahu. This is one of his political opponents. But yeah, this is something for Israel to consider and think about, and I think they would improve their popularity if they got behind the ceasefire. I have no indication that they won't, but that would certainly be a good place to start.
I have to say, just as an aside, this auto-translate feature has done more for understanding across borders than anything I've ever seen, and it is the most impressive tech feature I've seen released in years, putting AI and large language models aside. For people who don't know what's happening: because Grok is really good at auto-translation, they've taken pockets of the best of what's happening in Japan, in Israel, in France, and they're surfacing it auto-translated. Then when you reply as an American to somebody in Japan, they see it auto-translated as well, which has led to people who don't speak the same language engaging on X in a very nuanced, fun, interesting way. And as a truth mechanism, that is just absolutely extraordinary. I think this is going to have such a profound effect. Maybe Elon and the X team should get a Nobel Peace Prize for this. I hate to be hyperbolic, but have you been using this feature, Chamath? Has it been coming up in your feed? Which language is showing up in your feed right now?
English.
Okay, so you're not part of the translation thing. Brad, has this hit your feed yet? And which regions are you seeing?
Definitely see it in on the Middle East stuff. And, you know, I've seen on Chinese, I've seen it on the Russian, Japanese. Super helpful.
Let me tell you, based Japanese is a whole nother level of based.
Whoa, man, based Japanese. It makes Fuentes and Alex Jones seem tame. They're like, look at this group of people, insert whatever group of immigrants you like, and they're like, this is unacceptable behavior, this is not Japanese culture, these people need to get the hell out of Japan.
It is wild, folks.
And if you don't have an X account, you are missing out. Go to x.com and sign up for this reason alone. Think about the velocity: journalists are not even taking the time to translate and cover what's going on in those areas, and this is happening automatically in real time. So you start thinking about what could happen in Ukraine. If you had people from Russia and Ukraine doing this and having conversations with each other, it would be wild.
You're like such a good hype man. The problem is you hype buttered bread the same way you hype a nuclear reactor. And so it's hard to really tell, you know, what you're really hyping because your level of excitement, the intonation is exactly the same.
Yo, man, there's nothing better than a slice of great toast. I mean, in a way, it is like sliced bread. It's very simple, but it is so powerful in the experience.
This has been—
It is true. X is better today than it's ever been. And remember, they have 70% fewer employees than they had the day Elon walked into the building. So if there were ever a debate about this, I remember everybody saying, oh, it's going to tip over, oh, it's going to be a crappy experience. The fact of the matter is, we are a few years later, with 70% fewer employees, and every other company in Silicon Valley is looking at that. I think for a lot of these tech companies, we've hit peak employment. We're going to create a tremendous number of new jobs, but for the existing jobs, these companies are all realizing they can do more with less.
Nikita Beer just tweeted that they're about to go ham on these bot accounts that auto-reply. Yes. Those literally ruined my feed.
That's why I went to subscriber mode in my replies, and it's worked out great. Yeah, no, shout out to him and to Chris Sacca, who was in tears at what happened to Twitter.
It's going to be okay, Chris. Sorry, you only—
No more tears.
You only let subscribers respond to your tweets?
I do 50/50. Sometimes I'll just let it rip and get chaos, and then other times I have 2,000 paid subscribers. I give all the money to charity, like $30,000 a year, and it's just wonderful to get to know the same 2,000 people out of my million followers. It's kind of like having this little subset. So sometimes I'm like, I don't have time to deal with 100 or 200 or 300 replies.
You have a million followers.
I'll deal with 30.
That's incredible. I mean, it's just—
I mean, you have 2 million. I think Sacks must have a million, right? You have a million, right, Sacks?
Only a million.
Brad, how many you have now? You're getting popular.
You built a brand.
I got a couple hundy.
Got a couple hundy.
What's your— oh, your alt cap, A-L-T-C-A-P?
I'm at 1.4 million. What do you got, J.C.? Have I surpassed you?
I think you have. I'm like 1.1.
How much would it cost me to get my real name, Jason?
Uh, I know a guy, could find out.
You're at 1.1. Yeah, I made it to 1.4. I don't know how that happened exactly.
And just having the number one podcast in the world, another amazing episode of the number one. And Chamath has 2 million, but that's only because he has just incredible moments of engaging with his haters. Oh my God, the replies that Chamath sometimes drops are so great. I love it when Chamath goes off.
I light them up. I light them up.
He lights them up. And then you had somebody who was like, "Oh my God, I was in the casino and you told me to bet black, so I bet black and I lost my money, and so you're responsible."
And then you paid for the kids' college?
He has two young girls, and so I funded their college accounts. I thought that was hilarious. Obviously I'm very happy for him and his two daughters, but I'm even more happy at how much it'll anger all these other goofball dorks living in their mom's basement.
Yes.
Who literally have no— take— they take no responsibility for their lives.
And, uh, they should enjoy those Hot Pockets. By the way, for those folks in their mom's basement, the Hot Pockets and the fish sticks are ready. Yeah. And you get one more hour of Xbox from mom. All right, listen, we missed you, Friedberg, but this is the best episode in 2 years. Uh, and we will see you all at the Liquidity Summit, except for the 400 people on the waitlist who aren't going to get in.
We got an email from the guys at Athena because we were just— oh my God, they're gonna hire like 500 new Athena assistants.
Yes, they had 1,000 people after last week when we mentioned how much we love Athena. Go to Athena.com.
But that's amazing. Those are like 500 hardworking men and women who are working in the Philippines. Sacks, I'm going to get you a couple of Athena assistants as a birthday present. That's what I'm going to get you.
You're going to love this, Sacks. Athena assistants are the best. Congratulations to my friends over there. All right, everybody, we'll see you next time.
Love you, boys.
On Brad's favorite podcast, The All-In Podcast. Love you. Bye-bye.
We let your winners ride.
Rain Man, David Sacks.
And instead, we open-sourced it to the fans, and they've just gone crazy with it. Love you, Westside.
Queen of quinoa. Dog taking a piss in your driveway.
Oh man, my appetizer will meet me at—
We should all just get a room and just have one big huge orgy, cuz they're all just useless. It's like this, like, sexual tension that they just need to release somehow.
Wet your beak, wet your beak, wet your beak.
We need to get merch. I'm going all in.
(0:00) Bestie intros: Brad Gerstner joins the show!
(4:22) Anthropic blocks Mythos release for security concerns: serious or marketing stunt?
(24:07) Are OpenAI and Anthropic trying to kill OpenClaw? Does Anthropic already have market dominance in AI coding?
(42:20) Anthropic $30B run rate, fastest revenue ramp ever, the TAM for intelligence
(58:01) Major vibe shift: Anthropic ripping, OpenAI reeling
(1:10:12) Iran War: Ceasefire, Israel's influence, market impact

Apply for Summit 2026: https://allin.com/events

Follow Brad: https://x.com/altcap

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Referenced in the show:
https://www.youtube.com/watch?v=INGOC6-LLv0
https://openai.com/index/better-language-models
https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
https://x.com/steipete/status/2040811558427648357
https://x.com/juliusai/status/2041292301234999668
https://polymarket.com/event/ipos-before-2027
https://www.google.com/finance/quote/IGV:BATS
https://polymarket.com/event/anthropic-ipo-closing-market-cap-119
https://truthsocial.com/@realDonaldTrump/posts/116351998782539414
https://truthsocial.com/@realDonaldTrump/posts/116363336033995961
https://truthsocial.com/@realDonaldTrump/posts/116365796713313030
https://www.nytimes.com/2026/04/07/us/politics/trump-iran-war.html
https://www.state.gov/releases/office-of-the-spokesperson/2026/03/secretary-of-state-marco-rubio-remarks-to-press-6