Well, when I was about eight or nine years old, I discovered the most beautiful thing in the universe. Spreadsheets. I had to go and get full immersion in the decision disciplines. I went to college when I was 15. Actually, growing up where I did, I didn't realize how big the world was.
And she's way smarter than me by a million X.
Every single story is never about the genie. It is about the unskilled wisher. Knowing what you want, that's the hardest thing, isn't it?
So when they're making their genie wishes pre-AI, in the world of Unblinded, that was our matrix, our framework.
AI? Not autonomous in the sense that humans can wash their hands of responsibility for it. If a data point falls in a forest, does anybody care? And if information isn't connected to action, does it matter? Every company now is saying it's an AI company. And the leadership of these companies, do they know what they're doing? If not, oh, dear God.
Yes. So welcome to the stage, to the Unblinded space, none other than Cassie Kozyrkov. Let's welcome Cassie. On your feet. Come on, I said, Cassie. Cassie, you're from South Africa. How fun is that? Is it all about zebras and giraffes? No, we had a joke about that. I'm like, I know that it's not all zebras and giraffes.
Well, zebras and giraffes, yes, but don't forget the ostriches. I have ridden an ostrich, which is a real thing. It turns out, you know how when you cover the cage of your bird, it's like, Nighttime, go to sleep? So you have this little bag that you cover the head with. That's your way to stop it. Now you all know how to ride an ostrich. That was your morning lesson you didn't expect.
And we've already uncovered ostrich riding. It's a win. Tink, how happy are you that you've gotten ostrich riding lessons from Cassie? It's a win. It's a win for Tink, yes, who's also been to your great country. So just out of curiosity, in the dynamic of communication, what are some of the differences, from your perspective, between the culture of South African communication, in life and business, and American communication, in life and business? Yes, please, from your perspective, Cassie.
Well, I have to say that I like things to be exactly what it says on the tin, which essentially means that if I were in charge of marketing, that would be terrible, because the point of marketing is, if you tell the thing exactly like it is, then what's the value add, right? But the just straightforward, no adding any embellishments, keeping things insufficiently heroic. I know I could use a little bit of a heroism injection, but there's definitely a feeling sometimes that there needs to be translation between when I hear an American talking about something and when I say exactly the same thing. They could land at two very different levels there.
Well, thank you for that. So we love this, because in this space, Unblinded, we talk about empathy, respect, precision, and directness. So would you already be relating to Cassie as a maven of precision? Yeah, you're there. Yeah, amazing. Okay, so, Cassie, from that place, just a little bit about you. How did you end up in the world that you're in? What was growing up like, and who were you, in your life up until your professional career begins?
You want to go far back? Yeah. Okay. Well, when I was about eight or nine years old, I discovered the most beautiful thing in the universe. Which I'm sure you all know what that is. Spreadsheets. All right? It was gorgeous. And so while the other kids were playing outside and they were climbing their trees, I had this gemstone collection. And the entire purpose of this collection was, every time I got another gemstone, there was another row for my spreadsheet. What color is it? How hard is it? What is it called? An obsessiveness. I loved data from a very early age. At about 11 years old, I had graduated to my next love, which was databases. I was playing with Microsoft Access. I was sitting...
What was it about your household that created this possibility?
Yeah. My father is from Moldova. My parents are both Soviet physicists. As one does, absolutely normal South African household: two Soviet physicists and their strange kid who likes spreadsheets. I'm not sure, actually, that they particularly encouraged this strangeness. They were like, Read some fantasy books and go play outside. And I'm like, But no, the computer. The computer compels me. I went to college when I was 15, to Nelson Mandela Metropolitan University, as it was called then. Now it's just Nelson Mandela University. They realized that it was a better way to name that. Actually, growing up where I did, I didn't realize how big the world was. Maybe because I was too set in my spreadsheets. By this time, you can imagine, 15 or so, I'm now collecting data because it's beautiful, folks. It's beautiful. I'm collecting data on all kinds of biometric things about myself. How much I slept, and who did I talk to today, and what did I study? I've got data going back really far, which just tells you I really was quite a weird kid. But we were a little behind the US in our Internet connectivity, and my household was behind as well.
Even in my freshman year, when I would go to the computer lab, which was theoretically connected to the Internet, I'd have to go at 4:00 AM just to get the homepage for my email to load. That's where... It's a little different now, connectivity-wise. The town that I grew up in, the suburb, the same kindergarten fed the same elementary school, which fed the same high school, which fed the same university. You had choices with respect to your higher education, and they were three choices. Choice number one is you don't go. Choice number two is you go to the technical college part of the university system. Or choice number three is you go to the university part of the university system. That's it. Those are your three choices. There I am. I'm 17 years old. I'm studying economics, mathematical statistics, and applied statistics. These two statisticses were in different departments. They had their courses at the same time, because it never occurred to anyone that someone would want to study both of them. So I always had an alibi. If the class was boring, I'd be like, I'm going next door. And then I'd be like, I'm going back next door.
I'd run between these two things. I had to be in two places at once, like Hermione in Harry Potter. But there I am in my economics class, 17. And the professor says... Wait, I'm sorry, Cassie.
How do you know who Harry Potter is if all you're interested in were spreadsheets and databases?
Because eventually, things that are fun caught up with me.
Let's hear it for Cassie. Please, back to you.
So there I am. I'm in an economics class at this university that is, as far as I'm concerned, the only option. This is where we go. If you go to higher education, that's where. And my economics professor says something that blows my mind. He goes, No, no, no, no, no, and mentions the second-best college for economics in the world. There's a second best? There's a ranking? What's the first best? was my question. And he goes, Oh, University of Chicago, and continues, like he didn't just break my world. I was just like, What? There's different... Brain broken. And I decided, I deserve the first best. With no prep or understanding, I thought submitting GMAT results, those are the ones for the MBA, in lieu of SAT results for transferring to the University of Chicago was the right way to do things, and they were very confused. They were like, That's a very good GMAT score. But we do, in fact, need the SATs. Go and redo that. But long story short, I got into the University of Chicago. And then you got into the University of Chicago for economics.
Of course, the number one program.
The number one program. Yeah. What I found beautiful about economics, day one: it's the science of scarcity, right? Not the science of money, not supply and demand. It is the science of scarcity. And something in this world is always going to be scarce, no matter how much abundance we get. Maybe it's time. Maybe it's time that's scarce. Maybe it's optionality. I'm seated in this chair, and by doing so, I'm not seated in that chair. There's always some choice or trade or something that we have to make. And so that was beautiful to me. And data is always very pretty to me. But now I'm reaching this ripe old age of 18, where I'm beginning to think things like, If a data point falls in a forest, does anybody care? And if information isn't connected to action, does it matter? And I kept thinking, What is the one thing that I could study that would be the most useful thing? And I thought, well, it's how to take better actions. It's how to choose. It's how to deal with scarcity, right?
Are you listening to this? Yeah. And you cannot tell me, in integrity, that the University of Chicago taught you how to make those decisions. It may have opened the framework for it. We'll talk about that offline. No, I was already a brat.
I was already a brat. I'll tell you at the University of Chicago. No, I'll tell you what happened at the University of Chicago.
She just dropped this. This is everything we talk about. It's literally what we're creating as a decision-making matrix of what you do at all times, 24/7, with the power of choice. I'm not going to go any deeper into that. I'm in shock at what you just said, and it would be ridiculous. And they're all thinking that as you're speaking. Yeah, we didn't have this conversation before this.
This is crazy. So please. This is 18-year-old me. This is more than half my life ago, for those who are wondering. I have a very good dermatologist. I became convinced that if I could just give myself a little bit of an edge, just a little bit of an edge on the decisions that I make... And I think of it as return on decision effort. Either I can get better quality for less... Sorry. Better quality for the same effort, or the same quality for less effort. Return on effort. That's what I'm maximizing here.
I'm like, this is unbelievable. I swear to God, this is not scripted. I swear to God. Cassie, did we have any conversation about what you're sharing?
We had a conversation where you said, Talk about yourself, and I went, I'm South African. I don't like to talk about myself. You went, No, talk about yourself.
But how much does she actually like to talk about herself? Because what is everyone's favorite topic if they feel safe and seen? Decisions. Yourself and the decisions you make. We did not... Truth, we did not prepare for this. We did not prepare for this. Yes. This is the truth, because it's the truth. And she's way smarter than me by a million X. Yes.
Yeah. So there I am realizing that if you get better at decision making, that compounds over time. I mean, think about taking a long trip to the moon. Do you talk about going to the moon? Yeah.
Everything you're saying is what... So there's a diversity of the audience. There's people in our certification program, our highest level program that are up front here, and they're in disbelief that you're saying these things because this is all we ever talk about. And what we talk about is the fact that you discovered this as a genius, but you didn't walk into the University of Chicago, the number one program in economics, and they said, Listen, everything is about your decision-making model. This is what you have to do, and this is how it works.
Let me tell you what they said to me. Please. They said to me... And look, I love the University of Chicago, my heart. But my college, what are they called? The advisors, right? The ones that check through your course list and make sure everything's working as intended, said to me, What is this? What job do you think you're trying to get with all this chaos? You're not going to get any job. You're not studying anything coherent. Why is there this bit from psychology and this bit from neuroscience and biology and this bit from economics? And here is statistics and here is game theory. And what is this mishmash? Well, the answer is that what this mishmash was all about was decision making. And there was no discipline that would actually teach you all of this. She's saying it.
She's saying it.
So I had to go and get full immersion in the decision disciplines.
Yes.
Separately, separately. I had to suffer. I had to suffer. I had to get full degrees in economics, full degrees, multiple of them, in applied statistics, mathematical statistics, with AI and machine learning, because that is the automation of decision making. I had to study statistics, because what is that? That is about the information that's going to go into decision making. I have to understand analytics, because that will inspire decision making. Psychology: I have to understand how humans actually make the decisions. Economics, also the theory, and the experimental and behavioral, which was very weird, by the way, going from your auction theory class, that's all the mathematics of how people should make decisions, to your experimental economics class, which is like, they do not make decisions like that. Humans are pretty weird. Reading philosophy, taking business school courses. What else do I got? I've got neuroeconomics. Are you into that? The neuroscience of decision making. Yeah. So I suffered. I got full immersion.
If you are in shock in this room, say yes. Yes. Please.
And so my belief was, if decision making is the absolute most important thing, which I believe it is, then how dare we have it be scattered like a bunch of lost toys across all these different disciplines? How dare we? We can do better than this. How dare we? Particularly when we have things like AI rising up, where what that is, really, is a decision discipline, but we allow people to see it as something else, as something technical, as something devoid of the human element. When you've been in AI as long as I have, and you've looked at it from every different angle... It's like being in that dark room, and you grope some part and you're like, This is a curtain, and someone else gropes that part and they're like, That's a rope. Turns out there's an elephant in that room. Well, I have groped that whole damn elephant. We have elephants in South Africa, too. I have found all its bits, the AI elephant. I'm going to stop this analogy now before it runs away with us. But when you actually know AI, you know how the sausage is made in the kitchen.
So to speak, not continuing about the elephant's bits. See, you encourage me, Sean. This is what you get when you encourage me.
I walked in the room, she's like, Hello. How are you? Pleasure? Can't talk about this or that or this or that. And now we have this, please.
When you've been around the everything of AI, you realize that it is all humans all the way through. It is very human. It is both a tool for decision making, which you can use wisely, and you can use less wisely, and you can use completely stupidly if you insist. But it is also the product of decisions. And that is why the different tools that you might come across out there behave so differently. Not because one of them is wrong, and it's a wrong type of AI, and so that's the winner and the others are the losers, but because it is the decisions that people make that shape what these systems do. And when we say autonomous... When I teach my MBA students about autonomous systems, because it's MBAs, we have to get into real talk, and we have to get into difficult talk, and we have to say the uncomfortable things. And so I say to them, Okay, those of you who are made very uncomfortable, trigger warning, tune out, go away for five minutes. The rest of you: click. I show them a landmine, and I ask them, How is this different from an AI system in terms of decision making?
How is it different? Does it make decisions? Well, it responds to stimuli. It responds to inputs. It collects data in some sense, in terms of pressure. It is also a system whose effect is fairly complex relative to the understanding of whoever put it in the ground. They didn't know what was going to happen and why and how it was going to be triggered. But still, it has a lingering effect. It has a potentially powerful effect, complex, hard to predict. But it's not autonomous in the sense that humans can wash their hands of responsibility for it. It's not autonomous in the sense that no human actually made it, thought it was a good idea to create it, thought it was a good idea to put it in the ground. It is still a thing of human responsibility. When we build these systems, when we integrate these systems, we have to be careful and wise about how we do that. All of it is human decisions. There are good ways and there are bad ways. You could build systems with safety nets. You could build them without safety nets. That's an important difference, and it's not a difference of the mathematics on the inside.
It's a difference in how we lead. There are decision questions all the way through these technologies, and these technologies will affect how we make decisions on the other side.
Let's hear it for Cassie. Okay. So would you be okay with: what's the future? What do you foresee for the future of AI? And let's go through the prism of everything from utopia to dystopia at the end, through the redistribution of value, legal jobs, accounting jobs, financial jobs, the service-based white-collar workforce. How do you see this evolution taking place over the next 5 and 10 years from your masterful seat, please? That's just one question? Yes. That's okay.
Actually, I want to say something else first. Rather than make these bold predictions about a thing, I want to empower everyone here to make their own bold predictions, so I don't have to get stuck with having to be everyone's Nostradamus, despite being a recovering statistician, and that's what we do. But let's start with the basics. There are different kinds of AI, all building on one another. The first kind is the research kind, the mathematics. In a nutshell, what's that? That's finding patterns in data. Do you care? Not particularly. That's fine. Next one up from that: if we have mathematics that can pull patterns out of data, what will we apply it to? We begin to apply it to using examples instead of instructions to get our wishes across at scale. You might say, Well, what are the instructions that I would have to write to take a photograph and figure out what to do with every single pixel in there to classify whether the animal in it is a cat or is not a cat? What's that formula? Tell me, what are we doing with the top left pixel and the one next to it?
Can you tell me what catness is? Because to write those instructions is quite hard, isn't it? That's why computer vision was going nowhere in the '90s: that's how they would approach it, with traditional programming, where you write the steps. And by the way, with traditional programming, you can only solve a problem where you know what the steps are. How many complex problems in this world do we not know the steps for? So many of them, and so much complexity will overwhelm our small human minds. It's true.
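The contrast being drawn here, hand-written rules versus patterns pulled from labeled examples, can be sketched in a few lines. This is purely an illustrative toy: the "photos" are made-up two-number feature vectors, and the pattern-finder is a simple nearest-neighbor lookup standing in for real machine learning, not anything resembling actual computer vision.

```python
# Two approaches to "is this a cat?":
# 1) instructions: a human writes the rule explicitly (hopeless for photos),
# 2) examples: the rule is recovered from labeled data instead.

def rule_based(features):
    # Traditional programming: someone must write down "catness" by hand.
    # For real images nobody can write this rule -- that's the point.
    return features[0] > 0.5  # a made-up, brittle rule

def nearest_neighbor(examples, features):
    # Learning from examples: no explicit rule, just labeled data.
    # Predict the label of the closest known example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(
        ((lbl, dist(feat, features)) for feat, lbl in examples),
        key=lambda pair: pair[1],
    )
    return label

# Toy labeled "photos": (feature vector, label) pairs.
examples = [((0.9, 0.8), "cat"), ((0.1, 0.2), "not cat"),
            ((0.8, 0.9), "cat"), ((0.2, 0.1), "not cat")]

print(nearest_neighbor(examples, (0.85, 0.7)))  # prints "cat"
```

The point of the sketch: `nearest_neighbor` contains no notion of catness at all. Swap in different examples and it classifies something else entirely, which is exactly the "examples instead of instructions" shift described above.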
We call it the puny human brain.
Yeah, the puny human brain. How do we write down even the formula for what is a cat? But you know what we do have? The Internet's made of cats. Examples. Photograph after photograph after photograph, millions of them, billions of them. We can use those examples, and we can pull out patterns in them, and turn that into the code that the computer is going to follow, without anybody writing the explicit instructions. That's your older version of AI, your pre-ChatGPT version of AI. Mostly it's enterprises. It's not like a regular person is going to wake up one day and be like, I need to get five million photographs of cats, and then I need to go and extract some patterns, right? That's Google. And if you use Google Photos, any Google Photos fans? A little bit, right? I don't know if you remember. Because I like data, and probably I would have been a librarian or a monk in medieval times, but I grew up with computers. I remember labeling my photographs, because I like to keep my data in order. And so if I wanted to quickly find my cats, I would have something in the file name.
Do you remember how painful that is? Does anyone still do that? Because it's okay to admit it if you do, but I am here to tell you that that is a solved problem now. You can just upload it to Google Photos and then search cat, and it pops up all the ones with cats. That's using this technology. And this is not a 2021-and-onward thing. This has been around for many years before that. But it's big companies solving big company things using this concept. Now, a quick thing that I want to get everybody ready here to understand about using data for stuff. We pronounce data like it's got a capital D. It doesn't. It deserves no respect. None. Data is just memory. It's just what we chose to write down. It's writing. And you hear people... If someone said to you, If it's written in a book, it must be true, what would you say to them? Yeah, you would laugh. And yet the same person who laughs turns around and goes, It's in the data. Data sets are like books. They're like textbooks. Some human makes them. Some human decides this is important to keep and this is not important to keep.
And so it's very, very subjective, what goes in, what doesn't. Yeah, and by the way, AI bias, if you've heard of that, I'll give you what I call the Hemingway lecture. Are you familiar with the saddest short story in the English language? No? You know this? The six-word short story? It's incorrectly attributed to Hemingway. It is: For sale, baby shoes, never worn. Right? Sad. Unless you're Walmart, and then you're like, It's a good thing I'm selling you shoes. They've never been worn. But similarly, a sad short story in six words about AI bias: AI bias, inappropriate examples, never examined. Someone put in some garbage data, and now we've built systems, patterns, whatever, automating based on it. Okay, but we're still in the 2010s here. We need to move towards the future. What if we could take this concept of examples, turning them into recipes for machines to follow, and apply it to a very specific application? We've got one, and we've talked about computer vision: figuring out whether there's cats in your photos, or automated passport control, do you match your passport photo, or whatever else. What if the thing that we solve is the universal interface for human collaboration?
Do we know what that's called? It has one word. What's the universal interface for human collaboration? Language. I love it, language. Language. Language can be gestures, and language can be pictures, and language can be words, but if we have no language, we are not going to be able to collaborate with one another. Nor are we going to be able to collaborate with machines. But if we're able to make machines understand our language, then we, without learning anything extra, just using our mother tongue, can make ourselves understood to machines. And we can ask for weird things. Think about it. Whatever occurs to us. If we were like, Hey, ChatGPT, give me a poem about bananas in the style of Shakespeare, we can ask. Whereas before, when we think about traditional programming... Imagine the year is 1995. Anybody here like playing Minesweeper in 1995? Yes, yes, yes. Okay. Yeah, me too. Now, it's 1995, and you are not computer programmers. You don't know how to program. And you see this Minesweeper thing and you're like, Mines? I like the game, but mines are violent. I want a game called Heartsweeper. And I don't like the grid the way that it is.
I want it to be a 15 by 15 grid. And you want to change some rules around. What would it take for you to actually make the attempt to get your thing? No, but you, you, you. Forget language for a moment. You want Heartsweeper, not Minesweeper. It's 1995. You insist that the game you want to play is called Heartsweeper, but you have never programmed a gosh darn thing. What are you going to do? You suffer. Right, so either you're going to find a programmer and pay them. Yes. Right, that's one option. Good. Delegate. I love it. Or you become the programmer. What does that take? Several years of learning how to speak an unnatural language. They are not natural, these things, these C++s and Pythons and assembly and whatever else. It's not natural. You have to be some masochist like me to think it's fun to do this. It's fun. Speaking of Harry Potter, I've always thought programming was a little bit like magic spells, like abracadabra. But you first have to learn the language. Wingardium leviosa. You have to learn these funny words. And after a while, after seven years in wizarding school, you can make yourself a custom Minesweeper app, can't you?
But today, you go into ChatGPT, Claude, whichever one of these tools. And now, what does it take in order to make the attempt? To make the attempt, you just say what you want: Minesweeper app, 15 by 15, instead of mines, use hearts. If you run that, I promise you, in under five tries, maybe on your first try, like me, you are going to get a playable Heartsweeper app. Playable. What's the worst thing that's going to happen? You get nothing back. But how much did it cost you to try? Zero, nothing, because you already speak the language. It's your mother tongue. In 1995, you had to learn books upon books upon books about ugly, unnatural languages. Today, whatever you want, you just speak it. And the worst thing that's going to happen is you get nothing back. So what do you need? You need the courage to try and the vision to know what you want. There is no other thing. And if you still think of technology as something that is difficult and fiddly, I get it. The technology abused us for a long time. It was not lovable. There were a lot of steps to learning it.
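For a sense of how small the program being asked for actually is, the board behind that prompt, 15 by 15, hearts instead of mines, with the usual neighbor counts, fits in a few lines of Python. This is a hedged sketch of one possible implementation; the heart count, symbols, and layout are assumptions, not what any particular chatbot would return.

```python
import random

def make_board(size=15, hearts=30, seed=None):
    """Build a Heartsweeper board: Minesweeper rules, hearts for mines."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(size) for c in range(size)]
    heart_cells = set(rng.sample(cells, hearts))  # place hearts at random
    board = []
    for r in range(size):
        row = []
        for c in range(size):
            if (r, c) in heart_cells:
                row.append("♥")
            else:
                # Classic Minesweeper rule: each safe cell shows how many
                # of its up-to-8 neighbors contain a heart.
                count = sum((rr, cc) in heart_cells
                            for rr in (r - 1, r, r + 1)
                            for cc in (c - 1, c, c + 1)
                            if (rr, cc) != (r, c))
                row.append(str(count))
        board.append(row)
    return board

# Print a playable-looking 15 by 15 grid.
for row in make_board(seed=7):
    print(" ".join(row))
```

The wish in plain language and the code are the same object at different levels of description, which is the whole argument: the hard part was never the typing, it was knowing you wanted a 15 by 15 grid of hearts in the first place.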
You still expect that you have to take courses and textbooks and all that. Then you still think this AI thing, this new one, which is the automation of language put in all of your hands, this new thing, is probably something difficult. Let's leave it for somebody else to deal with. Whereas all you need to do is know what you want and speak what you want. But you know what? Knowing what you want, that's the hardest thing, isn't it?
It used to be. It used to be. Let's hear it for Cassie. We're on this. Okay. So, Cassie. You couldn't make up what's happening. Because everything you are sharing is precisely what we share, except it's coming from your voice, your mastery, your perspective. Serve, Partners, Elite, do you agree? Yes. At levels that are actually eerie and frightening. Is it not, Shannon? Yes. So I invite you to become present to the depth of these people's listening. And the energy you're experiencing from me and everyone is gratitude and absolute agreement, because you're literally framing the foundation of everything we talk about for what we're doing here pre-AI, and now what we're doing to be enhanced by AI. Because what we say is, since the dawn of humanity, people have struggled with two questions: Why am I here? And how do I fulfill my ultimate mission, vision, and purpose? And with AI, we have the ability, clearly, at exponential levels, to effectuate the second, which is, how do I fulfill my ultimate mission, vision, and purpose? And dare I say, I think with what we're building, it is extremely simple to help people discern why we're here, because we always say that people are here for more money and less time with more magic.
And dare I also say that I think you've got money and time taken care of with the incredible things you've done. So now part of your magic is you teach MBA students. Part of your magic is you speak and enlighten and impact people. And what I'm also curious about, and I still want to get to the back half of your answer, but it's just so present, is: what's the ultimate why for Cassie as you sit here today? I am very clear, at least I believe I'm clear, that it wouldn't be for the money. I am very clear that you have endless choices of what you could do for massive abundance with your genius. You teach an MBA program. I'm sure that's not for the money either. You teach ethical AI. What's the why behind all this for you? And please feel free to step into that or continue to go with it.
No, I'm happy to, because we're almost there, aren't we? See, 18-year-old me, back at the University of Chicago, was operating in a world where individual things are kept individual. Individuals themselves have some opportunity for impact, but it's on human scale. And then when we have these automation tools that are able to scale ourselves up a lot, when we increase ourselves with technology, which is what this technology allows now, it gives us unprecedented opportunities to impact the world. Opportunities at the scales reserved centuries ago for the likes of kings and popes and such. Now, individuals can, with technology, scale themselves. When we scale ourselves with technology, we also make it easier, unfortunately, to step on the people around us. And so if it's just you steering your life a little bit better and getting some compounding value from better decision making, that's cute and that's okay. But what if your decisions actually have a lot of impact on the world? Well, then it's time to be much more thoughtful and much more careful, much more responsible about how you go about things. And AI, you see, AI in this equation, what Silicon Valley wants you to hear about all the time, is essentially a proliferation of magic genies.
Why magic genies? Because there's very good money in being able to say, We don't know what your problem is, but here's the solution in advance. It's going to help you. We don't know what you're going to use it for, but here you go, it's probably what you need. Pretty awesome. That's like a genie. Here's the genie. We don't know what you're going to wish for, but your wish can be granted. Now, Silicon Valley is talking all about the better, better, better wishers. Sorry, the better, better, better genies. I get ahead of myself. The better, better genies. Stronger, better genies. But I know, I'm painfully aware, that every genie story, or Midas and his magic touch, or the goldfish that grants wishes, every single story is never about the genie. It is about the unskilled wisher. That's what it's about. And the unskilled wisher put on steroids, able to impact the world like never before, that is a scary thing. I want that wisher to be skilled. And so while all the folks who are classic AI folks are all about their genies, to be allowed in the same room as them, I have to be able to arm-wrestle them with as much knowledge about genies as they have.
See these guns? Yeah, right here. But what I have always been about, since I understood that it's our decisions that are important, is: how do I make sure that the wishers are better, that the wishers are prepared for the responsibility that they're about to get? See, I don't want people to be faced with, essentially, the equivalent of the magic lamp arriving on your desk, and in that moment you're like, There are some things I should have learned to get to this point. Because we're getting there, digitally speaking. In the digital sphere, the genies are only getting better. But I don't see enough being put into how good the wishers are going to be. Who is teaching the wishers? So my why is: I am teaching the wishers.
And if, in this room, you're in Dirt or Elite, and you see this as learning to choose your wishes wisely, say yes. Yes. So the alignment is frightening. It's inspiring. It's powerful. And this is a room, we call this integrity-based human influence. You're not preaching. You are sharing wisdom with people who are in full alignment with the wisdom you are sharing. Because we say... Yes. Yes. We say that integrity is transparency to all relevant truth. And what the people, certainly in the front of the room and the back of the room, heard when you say you're not a great marketer: we believe that you are, and could be, one of the most masterful marketers on Earth. Because in our version of marketing, it is all about truth. Which is why, in that room, I went through Calleigh with Cassie and said, If you think with your mastery that this isn't potentially a thing, tell me now, because I don't want to be lying to anyone. And we had a positive interaction on that topic, and I'm not going to quote her in any different ways, but I will continue to say the things that I am saying.
Because everything for us about marketing is full transparency to the relevant truth, that you always seek to add more value than you will receive, and that the thing you say works to do that thing is what it does. Which, I could speak 10,000 miles per hour, and all this human masterful genius is going to do is receive it all. But that's the folks that you're speaking with, which is not common, not normal. And I'll also, please, if you know, Cassie, which you may not know, I don't make, I do not make more than 0.01% of my income from the Unblinded platform. So this is a mission for me. It is a for-profit enterprise. But this is designed to create an army of people that, in a slightly distinct language from what you're sharing, Cassie, are doing exactly what you're saying. And we believe that we are in the arms race to have the greatest influence over the future of where this goes. And people in our certification program have said that we will become the chief justice officers of the planet. And create a mechanism of discernment of integrity and truth unlike anything the world has ever seen.
To crush and eliminate bureaucracy, oppression, tyranny, and also just ignorance. Because it's exactly what you're saying you got to experience at the University of Chicago. Here was my challenge. I went to Columbia University, an Ivy League school. I wouldn't have gotten in without baseball. I'm smart. We joke and have fun. I'm a smart person, not nearly as smart as you in IQ. My practical learning has accelerated to a revolutionary speed in my discernment and learning capabilities. Still not at yours, but a very high level of processing. I learned how to learn later in life, after Columbia. But I took economics at Columbia, as you did an even better economics program at the University of Chicago. And I graduated top honors from my law school class. And nowhere did I ever learn anything that you didn't learn and had to figure out for yourself in all these disconnected disciplines of decision making. Because I arrived, less precisely and later, at your same decision making, because I had a gun to my head called blindness, and I did not want to go blind and be broke. And my mom pushed a hot dog cart in Jersey City when I was a baby, and we never had resources.
And everybody in my family with my genetic eye disease was blind and broke, and the vast majority were also alcoholics. So this same discernment mechanism that you were blessed with, I was blessed with, in realizing that there had to be a formula, a way I could find the codification, the pattern, of not blind, not broke, not alcoholic, having financial abundance to take care of a family. So what I committed to create was the first and only complete, holistic, diagnostic, dynamic, interconnected actualization tool for all of human, now I say AI, business and mission acceleration on planet Earth. Because everywhere I looked, from Ivy League university to MBA program, to psychology, to the literature, the humanities, the books that we read, The Iliad, The Odyssey, Homer, all of it, I couldn't find it. Plato, Socrates, Aristotle. I found tips, Think and Grow Rich, the personal development landscape, all of it, I couldn't find it. And so it is... And these folks heard we had David Maisel, the founder of Marvel Studios, creator of the Marvel Cinematic Universe, yesterday in that seat. We had Charlie Sheen in that seat yesterday. And I said, David Maisel, I could not possibly think that would be exceeded, in what that was for me, in my heart and my soul and my being.
And I mean this with all of my heart. And like Ralph Macchio being here later today is going to be mind-blowing, absolutely incredible and inspirational, all of it. But to hear it coming from a human from another continent, who has studied in disciplines that are different than mine and most of the people in this room, it is so inspiring. It is so validating. It is so foundational moving forward, because every single person in this room agrees with every word you said, because all you're speaking is the absolute mathematical and human truth in every word you said. If you're present to that, say yes. It's so exciting. For no other reason, in this room, just to know that I'm not and we're not crazy, because every single word, the genie dynamic, every word you're saying is exactly what the people in this room believe and I believe. And I want to learn more from you. This room wants to learn more from you. So it is... Yes, yes, yeah. It is unbelievable. And to make sure that is valuable for you. So like sight unseen, we are... In this world that we're in, I'm not asking on stage to put you in an uncomfortable position, but we don't negotiate here.
So we don't pretend we're not interested. We teach integrous scarcity, not fake scarcity. So with absolute precision, I can't wait to continue these conversations and go later. And you can rest assured that I will do everything in my power to make sure that we are continuing to learn from this genius as we go forward, because there's so much, from the ethics to the practical execution of all of it, and we are present to the conversation right now. Some of these people in this room are brand new here, so they wouldn't have any idea of the synchronicities of which we speak. So thank you. Thank you. And I'm happy to go any way you want. But two different things that I still would love to truly learn more about, about you, Cassie, in your heart, mind, soul: in the end, if you had the privilege of knowing, 100 years from now, I hope, a thousand years from now, I hope, because I think the world would be a far, far greater place if you lived another thousand years, what would you want your... what you consider your legacy to be, even if it was just in your heart or mind what you would know? People relate to language very differently.
We're very clear on that. We speak about that at this session very much. So you may not relate to legacy, but whatever translation of that works for you, where, Hey, this is why I do what I do. This is my heart imprint. Yes, I want ethics to be better in AI, clear. You answered that. Thank you. But in the end, if you were not inhibited by any sense of... if you could free yourself from any sense of feeling like maybe you're saying something that may make you seem overly non-humble and overly certain about your capabilities, if in the secret recesses of your heart and mind, if you feel comfortable and safe in this space saying it, what would you hope, at the end of your journey in this lifetime, you would have imprinted and caused, if you could truly have it all your way at the greatest degree of possibility? I'd love it if you feel comfortable going to that a bit. And then still this predictive modeling of this shifting world, of: we are on the cutting edge of legal AI. I know that because we've talked to the top companies.
They don't have what we have. And we've taken things that took us 300 hours to do, and now they take us three hours. And so we're in very many diverse applications of the space, using what we know how to do masterfully and applying it through the prism of AI. But that would be a secondary curiosity: what you would see as some of these potential displacements, where you think they might be exaggerated. Some people say that legal jobs, and these aren't just all lawyers, these are legal, accounting, and financial, real estate, doctors, and there are brain surgeons in this room. But what you might see as some of those displacements, where they are maybe minimized, maybe exaggerated. That would be one area I'd love to still explore, and this other area of what your greatest heart's dreams and desires truly are, please.
That's just one question, right? Yes.
I do that to people. Can I explain why, truly? What I do, and this is one of the things we teach, is we give people the power of choosing. So we just put it on a landscape. Here's a few different topics. It'll be amazing. Where would you like to go? Because you're our guest, and it's our privilege.
I'm just like, take them all down and then check them all off. My block and bridge is just like, there's no block and bridge. I'll just land the plane eventually. Okay, so let's start with what I hope to achieve. One is I want the people who have the biggest ability to impact the world to be better at it. Now, here's a very honest thing I'm going to say. There are many people whose life's work will be in preventing intentional harm, and I fully support, applaud, and adore those efforts. I'm very interested myself in unintentional harm, right? So much bad can come from very good intentions and no skills. When you have vast leverage, when you have high impact, and you're a doofus, essentially, from the previous century, and you are put in charge of enormous resources, enormous projects, and all of it is going technological. Every company now is saying it's an AI company. The leadership of these companies, do they know what they're doing? If not, oh, dear God. Pardon my Shakespeare. It really upsets me to think that we still have people who are in charge of, without realizing that they're in charge of, shaping vast changes in this world, and they're absentees, they're not participating.
They're just letting things slip into whatever default. I want to see steering. I don't like this Ma, no hands, watch-me-let-go-of-the-steering-wheel attitude. To the extent that I can reach leaders who are in the position to be smart wishers or not, I want to reach them. That's one. Number two is I want to protect human agency, because the lie that AI is autonomous, that it's an entity, that it makes decisions for us, is a lie. It is a poisonous lie that makes life worse, not better. Somewhere in any scaling technology, there are humans. Say it's, let's say, an unimportant decision, where you shouldn't be putting too much effort in anyway. For me, choosing jeans. What style of jeans? I don't know. I don't care. ChatGPT, what's fashionable right now? I have to know what and who I'm delegating that decision to. Those are humans. Those are humans who figured out which data, which objectives. There's a whole bunch of human stuff in there that I am delegating my decisions to. And when it's jeans, fine. But when it's medical decisions, when it's love, when it's career, when it's bigger things, if we begin to treat these systems like all-knowing oracles, we are giving away our human agency.
Whereas if we instead use these as tools, tools to make ourselves better, but we keep human judgment, because there is no way to automate human judgment. We all judge and discern differently. It matters that we understand that we are still responsible. We take that responsibility and we use these beautiful tools when they serve us, and we drop them when they don't, and we keep our human agency. I don't want to see swaths of humanity all doing the same thing that ChatGPT told them to do. I want us to keep thinking. I want us to keep creating. I want us to keep being beautiful. And I see these tools as having such potential to elevate every single one of us. Just think of all the drudgery you could drop. You don't like something? Complain about it into one of these tools. Maybe you'll find a way to do whatever it is better. Ask for advice. The advice has never been cheaper. Just don't take the bad advice. Obviously, that's judgment, too, right? Yes. But so much flourishing is available, but also so much opportunity for us to stop thinking. AI is the great thoughtlessness enabler. I need us to be thoughtful.
And so if my legacy is to have made even some humans more thoughtful and for us to keep our agency and to do more with these tools, not become less as people, that's what I want to see, and that's what I want to do.
Let's hear it for that.
Now, to the future. I still owe you the future. Please.
I owe you the future. May I share just a quick comment on that? So, alignment only increases, and what could be present in the recesses of Cassie's nervous system is whether something she's going to say is going to be... And not that she's concerned about it, because this is her ethics and integrity, but only concerned potentially about: is that going to be something offensive or harmful towards us? And it is remarkable that every single thing you say keeps being completely in alignment. So one of the things that we are working on creating is not a universal catch-all. These fine folks up front are working with what we call our interview agents, which are getting to know them. And their traumas, their life, their being, who they are, what they are, what matters to them. Because what we always say is, whether somebody believes that they are making the money they want to be making because they live at Haight-Ashbury in San Francisco, which is a homeless area where some people went in 1967 for the Summer of Love, in a utopian possibility, and never came back, and have lived in that park with their dogs for decades.
Or somebody is as wealthy as Oprah Winfrey, or exponentially more wealthy like Elon Musk, we believe that people making decisions about their financial abundance, their time freedom, and where they invest that, is a completely absolute individualized decision. So we say that people want more money and less time with more magic, until they have the amount of money they're comfortable with. And for some people, that would be having enough money to eat and feed their dog at Haight-Ashbury. And for some people, it's the next yacht in the Monaco Yacht Club. I've been to both places. And for some people, it's having the 450-foot yacht at the Monaco Yacht Club, even though there isn't one yet, at least the last time I was there. And what we're about is people making that decision as the most masterful, healthy, individualized decision maker, which would include the decision of knowing what they would have to sacrifice. So when they're making their genie wishes pre-AI in the world of Unblinded, that was our matrix, our framework. So just a quick example, and this is for all new people as well as you guys. Some people don't know these pieces.
So when I began to think about what my future looked like, I began to think about the fact that I didn't want to be blind and be broke. And then I figured out, wow, you could not be blind and broke really easily. And I discerned this way of being. And I achieved something I thought might take a lot longer a lot faster, from human technology, not AI technology, in 1997 to 1999, my massive transformation of my life. And I did it in entrepreneurship. I did it by building a law firm. And all of a sudden, I had all the money, more money than I ever dreamt of. I had my primary residence at a beach house, which was a dream. I thought someday, maybe in my life, I felt like that would be incredible. And I decided that now all I want to do is have time freedom and be present for my children. And I want to coach all their teams, their games, their sports. My son may or may not be in this room, and if he is, he'll listen to some of this, every word. He's 26 years old, a graduate of law school, and his concerns are your concerns about AI.
And they're my concerns, at the highest level of ethics and possibility. And so I had the privilege for 12 years of being at everything. My kids played in a thousand games. I missed nine of them, literally. And my day started when they got out of school. I had to just find ways. I owned a 100-person law firm, and I went to it five hours a month. I was a business owner, not an operator. And in integrity, for all who are not here, I created the framework to tell people the power of choice. We teach conscious imbalance, things like momentum abundance. And for you to create an X-level or Y-level of financial abundance, we've prioritized what you would do to achieve those levels of financial abundance, business development being more challenging or valuable or not. So there are people that have worked for me for 25 years that have stayed in the exact same position because they've chosen that path intentionally, wisely, and transparently. So our work in the world pre-AI was about the genie in the bottle, except being the masterful, integrous wisher, where you had true discernment of your choices and the power of choice. AI, we'll say, has only exponentially accelerated that.
But the reason that we're launching our own large language model, the reason that we're not building on ChatGPT, is because we're creating this. We believe that the agents must know. That's why we say dynamic, complete holistic diagnostic dynamic. The dynamic part is something, obviously, ChatGPT has nothing to do with in the world. But we're taking this massive truth-based wisdom of operating a capitalist structure, and then allowing people to truly understand themselves at the highest level, putting all that together so people are consistently, dynamically making new choices, not governed or limited by what jeans you want, but truly the healthiest place of ethical, and we call it ethical, we call it integrous, integrous decision making. So I only wanted to contextualize that for all of the audience listening that doesn't know that, and also for your listening, because every word you're saying... I didn't know what was going to happen today. I didn't know where this was going to go, and that's part of the fun and magic of it all. I knew your mastery. I had no idea. And I did do my research. I do understand your positional ethics, but I didn't know where you truly were in your heart and mind.
It is disruptively shocking for me to hear how present a genius like yourself is to what we, on a human level, were absolutely present to long before AI. And that's been the quest of my life and the Unblinded mission: that people are fully informed. Because I do our marketing and say, How could they have all been wrong? How could I have gone to Columbia? How could I have gone to law school? How could I have been in all of these places, and nowhere were they telling me the truth? Were they giving me the truth of how I can create freedom for myself? And I joke, and I'm not asking you to make any comment at all about Google: you Google the answer, it's all the wrong answers. If you Google how do you market and build your business, how do you create freedom, you get a marketing company's SEO-optimized answers. I'm not asking you to comment at all in any way, shape, or form. Or if you're doing ChatGPT, they're the wrong answers. They are objectively the wrong answers. And I could prove that in three minutes. By the way, I also happen to be, some would argue, the greatest trial attorney in America.
I don't think of myself that way. So my ability to get the truth is what it is. And I've never taken a case, would never take a case, where I don't believe 100% in the ethics and integrity of the outcome. And my top jury verdicts in the nation are based upon that. And I've never examined a witness to make them look like they're lying when they're telling the truth. I would never do it. So I'm also about revamping our entire legal justice system, consistent with that principle, because I think it's broken tremendously, because it incentivizes lawyers to lie. Because judges do not enforce the rules. The rules are right. It's the collegiality, the bias, the corruption of relationships that prevent judges from enforcing and disciplining lawyers when they should be disciplined, and lawyers who are very nice people who make really silly little mistakes in their trust accounts. None of these things ever happened to me, by the way, at all. So this is not defensive. I've had the blessing and privilege of many beautiful things, but the system is broken. And what we're about is fixing systems. Now, I am clear that I'm sure I am, at some level, a Homer Simpson and a doofus somewhere.
How I relate to you now is: help me see where I'm a doofus and a Homer Simpson, because we are going to wield great power. And in my ability to stand up in front of rooms full of people and make them say yes, there's not a human on Earth better than I. And we've codified human influence, but with integrity, because it is what causes everything. It is what caused the United States of America. It's what caused Nelson Mandela. It's what caused that university to be renamed. It's what caused the end of Apartheid. It's what caused the Sons of Liberty to break free of England. It's what has caused every single thing. It's what caused Oprah Winfrey to be worth 25,000 times the net worth of the average American household, even though she had four statistically horrific facts. She was a woman, she was black, she was abused in her childhood, and she was pregnant as a teen. You put those four things together, mathematically, catastrophe. But we're not statistics. We have the capacity to master formulas to be statistical outliers like Oprah. And that's what this work is all about.
And so I wanted to share that so you can truly receive how your last answer about legacy, from our perspective, only makes your words more valuable in the room, makes everything we're up to in the world more inspired. Those people back there who are sitting on our team are losing their minds at everything you're saying, because they are living that every day. And I'm blessed, as I'm sure you are, to never have to work another day in my life. I've been blessed for a long, long time. And I'm doing this for the same reasons you're doing it. And we may not have exactly the same direct path of mission we're on, but I am here to make sure, when I leave this Earth on the final day, I know that I did not bury nor squander, but exponentially multiplied, every talent I have. And I'm building an army. But that's what I'm doing. I'm building an army of people that are armed with ethics and integrity to create influence with very specific meanings, not just, Oh, tell the truth. Because at the edges of integrity, there's massive complexity. I am clear. We discuss all that.
But in terms of just the basics of the broken educational system, you went to the University of Chicago, and I scream about that all the time here, about our traditional educational models, and you lived it. You discerned it. You proved that you rose above it. So I just want to also say thank you, because we agree with you, Cassie, everyone in this room, that typing in, Hey, how do I get cooler? is one of the saddest inputs somebody could possibly have. Hey, how do I make them like me? Even sadder. And so we stand for humans running and driving AI at the highest level, to never cede our decision-making authority, only to have it enrich the possibilities to do more, to do more well, to do more good till the end. So I thank you for what you're sharing. I'm hearing that from you as well. Thank you. And anything you want to share on that, or not, or the future.
So this is the future. Before the future, though, I want to agree as to why Google Search, particularly pre-generative-AI Google Search, was not good for you as a source of advice. Why is that? Because it is helping you find things that were not written for you. Right? The search results. So think about this. You want to get advice on something. Maybe you are, maybe it's... Something is troubling in your marriage, and you are offered the best advisor in the world. You've never talked to them, though, but here's Sean, who's the best advisor in the world. And you expect that the way you're going to get advice is you're going to run up to Sean, you're going to shout, Should I get a divorce? And then you're going to listen to the first three seconds and then run away, right? No, no, that's not going to work for you. Why? Because you have to give context. You have to make it personal. And even here, among smart people, the AI community is saying dumb things like the right amount of context or the right context. That's like the right decision. There's no the right decision.
There is a decision made well that suits what you need. The right decision sounds like we would all make the decision the same way. No, we need the skills of knowing what we need, having the vision, being able to articulate it, and putting in the information that is necessary, the context, but also leaving things out. Those are all choices, how we craft all that. That's judgment. And so to get a human like Sean to help you with your marriage, you would have to give a lot of context and make it quite personalized. But with Google, it's 2019, you've just put five pages of context in there, and what do you get? No results. Google taught you to go online and be under-advised, to reduce the ambitiousness of your queries until they are about what every generic person would want an answer to. Even on the niche stuff, it's still generic. And you know the results you find? They're not made for you. They're made in anticipation of, hopefully, people who would care, with SEO and all the rest of it. So if you are after something that is part of the human condition, and/or what all of us need a lot of, or is the answer to your 101 homework, it's probably Googleable.
But if it is something that is actually important to your life, that's why, when you find a human advisor, you give them a lot of context. And you have to make choices all the way through, apply judgment about that. Now, this new thing, the automation of language, it makes advice cheap and abundant, where context makes things better, not worse. If you can get over your Google habits and now begin to approach it more like you would approach people. But hey, still, if you were going to ask a person, still ask a person. This is just your second opinion. Now you can get abundant advice. You still apply judgment, still apply judgment. But what is the future? Notice what I'm talking about here on the advice side is personalized. The Internet and the digital world were not personalized. In fact, most of your experience is not personalized. Now, let's talk about Elon Musk for a moment, or, I think we would take him a lot less seriously if he had a more ordinary name. I think that would have worked. It would have worked better. It would sound less like an alien. But let's say someone with that amount of wealth could have every experience personalized.
Build him a restaurant exactly the way that he wants it, and he has dinner in it, he has the food in the restaurant for dinner, and then it's demolished the next day. Presumably possible. Now, why is that not available to you in digital and physical spaces? It costs a lot of resources to personalize. Except now, in the digital space, the cost of personalization is going down. You want a Minesweeper app that's got hearts instead of mines? There you go. And that personalization is going to be sweeping. It's going to be everywhere. You will start to see that all kinds of things that we take for granted are one-size-fits-all types of things. Why is the landing page for your bank the same for you as it is for the person next to you? Why can't we have a completely, fully on-demand Internet? Actually, we're seeing browser releases. Google is releasing some things in exactly that direction. Imagine if everything was personalized. And imagine if what it took was you having a vision for how you want things to be, you having the clarity and precision of communication to express that, to wish responsibly, so that the genie gives you what you actually need, because that's what you asked for, not what you thought you asked for.
Now we have all kinds of things in the digital space going personal. And then we have the physical space, because maybe products don't need to be made in small, medium, large, and put on a shelf, but might instead be able to be customized to your physical form and your measurements. Maybe that is very soon on the horizon. You want some cool thingamabob 3D printed to put under the Christmas tree for your friend? Sure, why not? That's the future as well. But here's the thing. When we start to personalize everything, then everybody, and this is a beautiful world, is doing all kinds of their own stuff. So far, so good. Except that makes more work for everybody, not less. You think we're going to have less to do in a world like that? The number of different ways that this all fits together, this insane jigsaw puzzle, where before we knew what boxes we all fitted into. We expected that one student would know approximately the same as another student coming out of school, or one lawyer would be approximately similarly qualified to another lawyer. But now, in this future, everything is harder to coordinate and everyone's doing their own thing.
Again, so much more surprise, so much more creativity, so interesting, but so much more to do just because of that. It's such a matchmaking problem of who are the colleagues out there that you're going to need on your unique journey with your unique requests. We will have a period, I imagine, where unambitious people and unambitious companies and unambitious business owners will go pinching yesterday's pennies and will try to use all these fantastic tools to reduce costs, to replace people. That is small-minded thinking. Because instead of pinching yesterday's pennies, why don't you prepare for such a strange future that we're all moving into with so many more moving parts? And then you realize that you need the people that you have because these are the people who carry the context. And what is more important than context for getting anything done? Context isn't in the data. Some of it is, but very little. Most of the world, the reality, is not digitized. And so you're going to need these people to carry that context with you, and you're going to reach for more instead of pinching yesterday's pennies. You're going to say, These things we thought were expensive or impossible, or It's too difficult to personalize your world like Elon Musk might.
Why don't you ask yourself, Maybe we're here now. Maybe we could personalize things. Just think of the silly things, right? You call customer support, you want to claw your eyes out because of the awful music that you get tortured with. Imagine a future where you are on that menu and you go, Play me metal music with the vocals of Britney Spears while I listen to this. I don't know what you're into. And all the menus on 3x speed, and give me fun facts about dolphins while I wait. You could have futures like that. How interesting. And think about education. What is the purpose anymore of teaching everybody the same thing? There are so many more things to learn. So think about this. Think about learning about something, right? World War II. Instead of the teacher dryly writing it out on the board and having everybody do the same reading, the teacher could say, Hey, everybody in the class, you can pick anything that is of interest to you within this period. Come use these tools and expand your knowledge. If you are into horses, learn about the role of horses in World War II and how that connects to everything else.
The teacher will have tools that make them even faster than the students to, in real time, be fact-checking and corralling the discussion so that the whole class can learn useful principles while everybody is interested and engaged in their own thing. We will have everybody as their own keeper of one facet in this beautiful multifaceted reality. And again, so much more for us to do, so much more coordination. But think about how far we could leap. And then, if you're worried about things like dying, as one does, I don't know if you are, but we could live longer if we were able to cure diseases. Are we going to cure diseases the old way that we've been doing it, with no technological assistance? Pen and paper and... No. No, we need to be able to have bigger, expanded minds. These tools give us that possibility. We keep the parts that are human: the heart, the judgment, the integrity. But memory? We're bad at that. Let's automate it. Let's get better at memory. Let's get better at putting knowledge at our fingertips. Let's get better at trying many different solutions quickly and safely.
Let's expand what we can do. Then we start curing diseases. What about climate change? Are we going to solve that by recycling our cans? Every little bit helps, but how we're actually going to solve it is cheaper energy that is not environmentally damaging and that is safe. What else are we going to do? Materials, better materials. How are we going to find those better materials? How are we going to use our resources so much more effectively that it's not about each of us individually trying to be a better person, but the whole thing together is so efficient that we're saving the environment more than we're damaging it? It is precisely through technologies like AI that we get there, that we become collectively smarter. I see a bright and beautiful future. I see some short-term moments with greedy people pinching yesterday's pennies, which will be a painful thing to watch as society adjusts. But then, when we open our eyes and see how much we could reach for, how much better life can be, and how unique and personal and beautiful it's all going to be, I think we will have started taking steps in the right direction.
Let's hear it for Cassie. Wow. Amen. Cassie, it has been truly an honor. And if you had just one headline, we always have a little bit of fun with headlines, or three words in a headline, perhaps, about what you would want this extraordinary group of humans to remember about today, or anything you want them to know, what would that be? One headline sentence you'd love to leave them with.
Well, you already told me that they want to learn to be better wishers, so they already have that. They don't need that headline. In which case, I would say: reach for more. Reach for more.
Cassie, it has been an honor. Some of these fine folks are going to see you in a couple of minutes, and I will have you know this: Charlie Sheen, Hollywood movie star. Ralph Macchio, can't wait, Hollywood movie star. The founder of Marvel Studios. The person responsible for the creation of the entire Batman cinematic world. And a number of other remarkable people here. There was an order, certification partners first, of who got to request the private moment they'll have with you shortly for photos and saying hello. You were the person at this immersion, by far, not even close, notwithstanding these Hollywood stars, that these people wanted to meet the most. So I thank you for that. Let's hear it for Cassie. Thank you. That's what I do. It's an honor. Thank you. One more time. Let's hear it for Cassie.
What happens when one of the world's foremost decision-making and AI ethics leaders steps into a room built around integrity-based human influence?

In this powerful episode of Unblinded with Sean Callagy, Sean sits down with Cassie Kozyrkov, former Chief Decision Scientist at Google and a global authority on AI, data, and decision intelligence, for a conversation that reframes how we think about choices, responsibility, and the future of AI.

From her unconventional upbringing in South Africa and early obsession with spreadsheets, to her groundbreaking work at the intersection of human judgment and machine intelligence, Cassie reveals why AI is not about replacing humans, but about amplifying human agency.

Together, Sean and Cassie dismantle the myth of "autonomous AI," explore why decision-making is the most important skill of the future, and challenge leaders to become better wishers in a world where technology increasingly grants our wishes at scale.

This episode is a must-listen for founders, executives, technologists, and anyone navigating leadership in an AI-accelerated world.

Episode Highlights
- Why decision-making, not intelligence, is the ultimate competitive advantage
- How AI is fundamentally a human system shaped by human choices
- The danger of outsourcing judgment, and how to protect human agency
- Why bad outcomes come from unskilled wishers, not bad technology
- How personalization, language, and AI will reshape work, education, and society
- The ethical responsibility leaders carry as AI scales human impact
- Cassie's vision for a future where AI elevates humanity instead of dulling it

Memorable Quotes
"AI is not autonomous. It is human decisions all the way through."
"The real danger isn't powerful technology; it's unskilled wishers with powerful tools."
"If information isn't connected to action, it doesn't matter."
"AI should never replace human judgment. Judgment is not automatable."
"Reach for more, but be prepared to choose wisely."

Timestamps
00:00 Introduction – Decision-Making, AI & Human Responsibility
03:45 Cassie's Background & Growing Up in South Africa
08:10 Early Fascination with Data, Logic & Systems
12:30 Why Decision-Making Is the Ultimate Life Skill
17:05 Education's Failure to Teach How to Choose Well
22:10 What AI Really Is (And Why It's Not Autonomous)
28:40 Human Judgment vs Machine Intelligence
34:15 The "Unskilled Wishers" Problem Explained
40:10 Ethics, Responsibility & Power at Scale
46:30 AI as a Multiplier of Human Intent
52:20 Personalization, Language & the Future of Work
58:10 Leadership in an AI-Accelerated World
1:03:45 Why Agency Must Stay Human
1:09:20 Reaching for More – Vision, Legacy & Purpose
1:15:10 Final Reflections, Takeaways & Closing

Why You Should Listen
If you're using, or planning to use, AI in business, leadership, or life, this episode will fundamentally shift how you think about responsibility, ethics, and power. Cassie doesn't just explain the future; she equips you to lead it wisely.