Transcript of The Surprising Future of AI with Fathom’s Founder - Richard White
Proven Podcast. Welcome to the Proven Podcast, where it doesn't matter what you think, only what you can prove. Richard proved it. At a time when everyone's rushing around trying to be successful in AI, he did it five years ago. He's the CEO and founder of Fathom. He's also a really great guy, until he starts telling you the unforgiving truth of what's actually going to happen with AI in the next 24 months. It's terrifying. Anyway, I hope you enjoy it. The show starts now. Hey, everybody, welcome back. I am excited to have you on the show, Richard. Thank you so much for joining us.
Hey, thanks for having me.
For the four or five people who don't know who you are, can you explain what you've done and what your success has been?
Sure. I'm the founder and CEO over here at Fathom AI. We are the number one AI note taker on G2 and HubSpot. No one likes taking notes on their meetings. We have basically an AI that will join your meeting, record it, transcribe it, summarize it, write the notes, write the action items, fill in your CRM, slack it to you, email it to you, you name it, so that you can just focus on your conversations and not doing a bunch of data entry work.
I think most people are familiar with your product. I think the stuff we're going to talk about now is stuff that people aren't familiar with about the reality of AI. A lot of people think AI means artificial intelligence. It also means always incorrect. There's also a side of this that you believe about what it means for you as well, and some of the harsh realities of what AI does. Can you share what some of those harsh realities are?
Yeah, I think one of the things... I've been doing software for 20 years, and AI has completely upended how we think about building software. It's made it much more of an R&D process now, whereas before it was more of a manufacturing process. It's also made the failure rates much higher. It takes a long time sometimes to ship an AI feature, because it'll fail three times before you get something to work. That exists both when we're building features for our product and when we're trying to buy AI products to move our business forward. We actually have a goal at Fathom of getting to 100 million in revenue while staying below 150 employees, so we have this big emphasis on efficiency and automation. It's interesting, because I just gave a talk where I expected to talk about how we've transformed everything with AI, and we actually have a 60% failure rate on AI initiatives. I think there are a lot of really interesting gotchas when you're trying to build or deploy AI solutions.
What you're trying to tell me is that AI isn't the Holy Grail. All of a sudden, I'm not going to start floating and curing cancer because I was bored on the toilet one day. That's not how things actually work. Damn you, man. You've ruined it all for us forever.
I'm so sorry.
As you go into these, you're talking about failure. What do you mean by failures? I mean, 60%? I wouldn't want to get on a plane that had a 60% failure rate. I wouldn't get married, because that's about a 62% failure ratio. But okay. What do you mean there's a 60% failure ratio in AI?
Actually, there's this MIT study that just came out and said the average company right now actually has a 95% failure rate on AI initiatives. What I mean for us is basically: did it produce the outcome we wanted? I think that's actually the hardest part. In AI land, it's easy to get it to produce something. It's easy to get the AI to spit out something. The hard part is getting it to spit out the right thing. And what is the right thing? For example, in our business, you could build an AI that gives you an accurate summary of a meeting that's six pages long, but accurate may not be enough. That's too verbose. It was a 10-minute meeting; I don't want six pages. There's this whole new nuance of quality that I think is hard for us to judge. We're not used to judging it. We're used to software being binary: it works or it doesn't. I click the button, the thing moves on the screen. Now we're in this world where I click the button and it spits out some words, and I'm like, Are those the right words or not? It makes you make a judgment call, and is that the right judgment call or not?
I think one of the things that's really changing everything is that we have to rethink how we evaluate tools, because we have to actually get in there. It's almost like evaluating a hire. It's more like a hire because you're not just buying features now. It's upended how we think about purchasing products.
I can't even get ChatGPT to stop putting dashes in the damn responses it gives me; I can't tell you how much cursing I've done at that thing. You're talking about something significantly harder. How do we get it to produce content that we actually want, or go from that 10-page dissertation that's so verbose to what we want? How do we do that at the home level, for your everyday consumer? And then, you're clearly the CEO of a very successful company, because every single meeting I'm in, your damn software is there before anyone else joins. Thanks for that. I'm a little angry at you about that one. How do we do that at both the personal level and the professional level?
Yeah, and actually that same study said that the success rate for things like ChatGPT is actually 40%, which is still not great, but way higher than 5%, right? I think AI is actually easier for individuals to use, because individuals are basically taking ownership of that output, right? It's, Oh, it wrote this email for me. Yeah, I hate that it always puts the em dashes in there, too, but I can at least remove them. Where it becomes problematic is when we're using these things at scale and no one's been properly equipped to QA the thing. We have a whole team at Fathom that, all day long, plays what I call an AI version of Jenga, where we are experimenting with models and use cases. Is this model good at this use case? Can this model find action items from a transcript? I call it Jenga because if you push on a block and it gives you any resistance, you give up and find another block that moves smoothly, right? Because there's a weird problem you've got now, where you've got so many models with differing performance parameters, cost parameters, and so many different things you want to do.
It's a really big problem. You need a full-time team, whether you're building stuff or evaluating it, to evaluate multiple vendors in parallel and go, Okay, we're going to try three vendors. We're going to put each of them on a 90-day pilot, which, by the way, we make every vendor give us a 90-day pilot for AI. We're going to have a whole team that QAs it. When we don't do that, it almost never works.
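The "AI Jenga" loop Richard describes could be sketched roughly like this. Everything here is a hypothetical stand-in, not Fathom's actual pipeline: the model names, the word-count threshold, and the quality judge are illustrative, and real model calls would hit an LLM API rather than local functions. The key point it shows is that the gate judges the output's quality (accurate but six pages long still fails), not merely whether something was produced.

```python
def judge_summary(text: str, max_words: int = 120) -> bool:
    """Quality gate: a summary must exist AND be concise.
    Accuracy alone isn't enough; a six-page summary of a
    10-minute meeting should fail this check."""
    words = text.split()
    return 0 < len(words) <= max_words

# Stand-ins for real model calls; in practice these would call an LLM API.
CANDIDATE_MODELS = {
    "model-a": lambda transcript: "Discussed Q3 roadmap. Action: ship beta Friday.",
    "model-b": lambda transcript: " ".join(["verbose filler"] * 300),  # too long
}

def pick_model(transcript):
    """Return the first model whose output passes the quality gate."""
    for name, call in CANDIDATE_MODELS.items():
        if judge_summary(call(transcript)):
            return name  # this block moved smoothly: keep it
        # any resistance: give up on this block, try the next one
    return None

print(pick_model("...10-minute meeting transcript..."))  # model-a passes
```

In a real evaluation the judge would be the expensive part (human review, or an LLM grading rubric), which is why he describes needing a whole team for it.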
When new GPTs or new models come out, there are so many times where I've personally spent so much time training my old model, trying to teach it: Hey, do this, do that. I have very specific calls for it. When a new one comes out, do you guys over at Fathom have the same fuckery that we have on our side, where we're like, Oh, God, everything's about to blow up again? Is that something you guys are facing as well?
Yeah, on two dimensions. One, we get excited, because usually the new models unlock something for us. For example, GPT-5, for all the lackluster reaction it got from the market, did actually solve a significant problem for us. Hallucination rates are way down, and that opens up a whole new class of problems that we were trying to solve before but couldn't. But it causes other problems, too, in that none of these models are forward compatible. You get something working on GPT-4; it's not necessarily the same with GPT-5. Even more problematically, and I think this is something everyone in the industry is starting to realize, the EOL cycles on these LLMs are now measured in months. Anthropic puts out Sonnet 3.5. Six months later, they put out Sonnet 3.7. Sonnet 3.7 is more powerful, but now there's a limited amount of GPU compute in the world, and they're shifting all of their compute to this new model. Now you end up on what we call the LLM treadmill, where if you don't upgrade your models, all of a sudden you find out you're getting all these errors because there's no compute to service them.
Now you're spending as much time upgrading your models as you are basically building new stuff from scratch. The maintenance load on these tools and processes is way higher than anything you've ever seen in software land.
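The "LLM treadmill" he describes, where a provider EOLs a model and errors start appearing until you hop to the successor, can be sketched as a pinned model with an ordered fallback path. This is purely illustrative: the model IDs, the `RETIRED` set, and the `ModelRetired` error are hypothetical; a real provider signals retirement through its own API error codes, not a local exception.

```python
class ModelRetired(Exception):
    """Raised (in this sketch) when a provider has EOL'd a model."""

RETIRED = {"acme-llm-3.5"}  # pretend the provider pulled compute from 3.5

def call_model(model_id: str, prompt: str) -> str:
    """Stand-in for a provider API call."""
    if model_id in RETIRED:
        raise ModelRetired(model_id)
    return f"[{model_id}] response to: {prompt}"

# The currently pinned model first, with newer pins as fallbacks,
# so a retirement degrades to the successor instead of an outage.
MODEL_UPGRADE_PATH = ["acme-llm-3.5", "acme-llm-3.7"]

def robust_call(prompt: str) -> str:
    for model_id in MODEL_UPGRADE_PATH:
        try:
            return call_model(model_id, prompt)
        except ModelRetired:
            continue  # treadmill: this pin is dead, try the next model
    raise RuntimeError("all pinned models retired; upgrade required")

print(robust_call("summarize this meeting"))  # served by acme-llm-3.7
```

The catch, which is his point, is that falling forward isn't free: prompts tuned for 3.5 aren't forward compatible, so each hop still costs re-evaluation work.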
It's one of those things, and I'm going to date myself here, but the original Warcraft 2, because I'm that old. I can see by the smile that you played it. Before I would go and attack the orcs or attack the knights or whatever it was, I would save my game. If this doesn't go well and they all die, I can just go back and try again. I wish that existed inside ChatGPT or any GPT, where we're like, Okay, I'm going to try this: quit giving me dashes, or I want you to word it this way. And then for some reason AI becomes always incorrect and just goes off on a tangent, and I'm like, Excuse me, sir, can you just go back 30 seconds? That would be nice. And then it sounds like what you're saying is, Hey, I had this saved game, can I pick it up and drop it in over here as well? It seems like both of those things are absent in the market, even at the highest levels, which is where you are.
Yeah, that's right. A lot of the advice I give the company is: if you can, try to solve a problem by building something in-house, but know that that in-house solution has six to nine months of shelf life, and know that you're going to throw it away and probably buy from a vendor at some point. But by building it in-house, you have a better sense of, cool, we at least know we got it to do the one small critical thing that we need it to do. A lot of vendors throw a lot of things at you: We have 10 different features, and 2 out of the 10 work. But yeah, it's this whole new paradigm. It's very much an R&D lab. It's very much not an assembly line. It's not as predictable as what we had before this in SaaS.
I wish this was something new in tech, because I'm old enough to remember the dot-com boom, when everything was going onto the internet. This is going to be amazing. And then Pets.com is going to be amazing. And this is going to be amazing. And it also just blew up all the time. Not just on a personal level, but a professional level. Companies you thought were going to be there forever would be gone two, three weeks later. Are you seeing that, with established companies sitting there going, Oh, shit. The light at the end of the tunnel is not a light, it's a train. We've got to adjust, because what works today just won't even exist. How short is that window?
I think the exciting thing as an entrepreneur right now is that a lot of the big companies are really struggling to release good AI features, because it breaks their paradigm of how to do software. They're used to this assembly line: how do we build software? We say we want to build this feature, we spec it out, we build it for three months, and then we click the button and it moves 10 pixels to the left, and we're done. AI requires a whole new way of doing QA that most of these companies aren't doing, which is why I think if you look at most of the big new AI features from a lot of these companies, they're really mediocre. Because they just don't know. They don't have the muscle in the company to ask, what is quality? They don't know how to judge subjective quality. They're still looking at it through their objective lens: did it do the thing? Did it spit out words? Yes? Great, ship it. I actually think this is a challenge if you're buying software, because a lot of the time the bigger incumbents actually have inferior products to the new startups.
New startups have their own problems of instability and whatnot. But if you're an entrepreneur, I actually think it's a fantastic time because it's like the incumbents are completely out of their depth in how to build software in this new era. I think it's exciting, actually, as much as it is also terrifying.
Yeah, I think the best example I've heard of this is: imagine you're on a train that's going as fast as possible, and you're in one car of the train, holding on as much as you can, but all of a sudden that car is going to unhook and be gone. So you'd better jump, or good luck, I wish you nothing but the best, because that's just the reality we're going to be in. So as someone who's at the tip of the spear, who has become very successful with what you're doing and has created a company that, as much as I do hate your thing showing up to all the meetings, is something everyone uses: where do you see AI going? Because some people are like, Oh, my God, it's the greatest thing since fire. And other people are like, Oh, my God, it is fire. It's going to burn down my house. People seem to be polar opposites: either you're completely madly in love with AI, or, Oh, my God, it's the devil incarnate. Where do you see it going? Since you're in there, you're with the CEOs, you know what's going on better than the regular person would: how does this look in five years?
Yeah. I mean, one thing I will say is, this is, to me, the greatest technological shift of my lifetime. It's really bigger than mobile, bigger than social. I don't know that it's bigger than the Internet itself, but there is real there there. For all the failure rates and stuff like that, the denominator is huge, and this is the closest thing I've seen to magic. One of the challenges is, I have board meetings where we're talking about, what's our five-year plan? What's our 10-year plan? I don't know. If you get to AGI in five years, does anything really matter? Can you really plan around AGI-type things? Smarter people than I have tried. I think the real open question on the market right now, and I ask my core friend group, the same folks I leaned on five years ago, before Gen AI, who made me feel confident building a business betting on Gen AI getting really good... We started this company in 2020. In 2021, we launched, we put AI in the name of the product, and all my investors were like, What are you doing?
Everyone hates AI. It's easy to forget; it was only four years ago. AI was being marketed in 2015, 2016, 2017, and it was terrible. It was not AI. It was basically fraudulent stuff. But now we're at this point where everyone's like, Oh, my God, AGI is going to happen in two years. There are some people who still believe we're going to keep accelerating. I think the group of people I'm surrounded with is about 50/50 between we're going to reach a plateau of what you can do with the current tech, and we're going to find the next step up. It's clear that we're getting diminishing returns from the current generation of transformer-based AI, like GPT-5. I think everyone sees that all the latest models are now more optimized for efficiency. They're not wildly smarter than the previous model, but they're cheaper to run, which is important. Companies are optimizing for their margins and all that stuff. I'm taking the approach that we have to assume things are going to slow down, because if we assume they're going to continue to accelerate, it's almost impossible to plan for anyway. Again, I think GPT-5 was a good data point: okay, it seems like we're plodding towards the plateau, and we're waiting for whatever the next thing is after transformer models alone.
But it is the most volatile market I can ever imagine. This company has been, objectively, a rocket ship by the standards of the last 10 years, and we're now just doing pretty good by modern standards, where you see companies go from zero to 100 million, a billion in revenue, in two years, and then go back down to zero two years later. Look at Jasper and stuff like that. It's an insanely volatile market, full of tons of opportunity, but how long-lived those opportunities are, I think, remains to be seen.
I think, to your point of what this means for the human race, I will give a little bit of pushback. I don't think it's just bigger than the Internet. I don't think it's just bigger than the industrial revolution. The only thing I'd compare it to is fire. As far as the human race is concerned, this is fire in terms of what it can do. Now, fire was good and bad. It could burn down your entire village, yes, but it also makes good food. As far as I'm concerned, from what I've seen of it, AI is as big as fire. Now, what that means going forward, good luck, I wish you nothing but the best, because it's going to be pretty interesting. You mentioned there are companies that go from zero to a billion-dollar valuation, and then two weeks later, gone. Do you think we're going to see, in our lifetime, the first $100 million company run by just a single employee? Do you think that's going to happen?
Yeah, Sam Altman talks about the first billion-dollar company with a single person. I think that's highly possible. Then you can extrapolate all the concerns you have about society and wealth inequality from that pretty easily. But yeah, no, I think that's perfectly reasonable to expect.
Yeah, I think this is something that people don't understand. This is no longer a luxury. We don't get to sit back and say, Hey, I wonder if this is going to happen. I wonder if this is going to affect me. This is going to create wealth-distribution issues on the level of basically India's. When you look at how wealth is distributed, especially here in the United States, you're going to see that. So for those of you playing at home who might not understand everything Richard is talking about and what he's doing: you do not have the luxury of sitting on the sidelines. Either you're going to be panhandling, or you're going to embrace AI, because this is what it is. This is electricity. So if someone's walking into that and they're like, Oh, my God, this is terrifying. You're telling me I need to embrace it, but then you're telling me the company is going to disappear in five months. When you're an entrepreneur, you're like, Oh, God, I have to go into this. I know I have to go into this, but I could get punched in the face, or I most likely will.
How do you advise entrepreneurs? How do you advise business owners and say, Hey, these are some proven tactics that work. Let's do these. Just do these for now. Make sure that if you do get knocked on your butt, you can get back up somewhat gently and go from there. What are the things you advise?
Honestly, I think there's never been a better time to start something that's really narrowly focused. You hear a lot about the big platforms that are, again, like Jasper, going from zero to 100 million and right back down. But the real beauty of this stuff is you can really tailor it to specific use cases, specific problems. You can build faster and cheaper and better than you ever have before. You don't have to have a CS degree like I have, and a team of six engineers, anymore to build something useful. You can just be a pretty good hobbyist prompt engineer, plus some magic patterns and some prototyping tools, and you can build something of value. I remember 10, 15 years ago, everyone was doing the lean-startup stuff. They were selling stuff before they'd really even built it, and that got taken to an extreme. But now you literally can narrow down and find a very specific niche, and you can build a really good, and I know this is a pejorative in a lot of markets, lifestyle business out of, great, I've got the best new software that solves this one burning problem for car washes.
I actually think that's where a lot of the gold is: at the application layer. A lot of the investment and noise and all that stuff is at the foundational layer. It's all about who's building the big infrastructure stuff. But that's a billionaire's game. You need a lot of money up front to do that. I think there's a lot of money to be made at the application layer, sitting on top of these tools, if you can get good at using them. That's where I think the single-person company doing 100 million in revenue will be. I don't think it's going to be a foundational model. I don't think it's going to be something like Fathom. I think it's going to be something that sits above something like Fathom, or above these foundational models, that just finds a really good niche that happens to catch fire.
I think that's for the entrepreneurs. For the employees, there needs to be a conversation about what's happening, because you're seeing it in their orgs: entire divisions are getting eradicated, people with master's degrees are trying to get jobs at McDonald's right now, and they're terrified. I think they rightfully should be. This is, welcome to this new world. When I was growing up, being an entrepreneur was not sexy. People did not like that idea. Being into comic books was not sexy. Being a dork, not sexy. And then all of a sudden, it's our time. Our time has come. Same thing with entrepreneurs. The employees that I know are terrified, and they go back to their old model, which is, I'm going to go get another degree. I'm like, That's not going to help you. That's over. Those times are gone. So what do you say to those mid-level managers, senior directors, VPs? What would you say to those people who have busted their butts to fit into this model, this process, this American dream?
As George Carlin put it really well: It's called the American dream because you have to be asleep to believe it. If you no longer believe this model, and the thing you were built for does not exist anymore, how do you adapt?
Yeah, that is the question. That will be the question of the next 5, 10 years. I remember I was a big proponent of UBI. I was telling everyone who would listen about UBI 10 years ago, and I was worried about truck drivers back then. Truck driver is the number one profession in 30 or 40 states, and it's going to go away soon. It's funny, it's really hard to predict these things. Everyone was sure that would be the first industry to go, and years later, here we are in 2025, and actually, no, it's artists, it's copywriters, and it's pretty soon going to be lawyers, middle-level management. It's all knowledge work. Therapists. Yeah, exactly. What would I say? Honestly, your fear is well-founded, first of all. And unfortunately, I'd love to sit here and tell you that you've got nothing to worry about, but I think you do. Look at what's happening: college enrollment is down, trade-school enrollment is up. I think the people solving this from first principles, the folks coming out of high school, are looking at that and saying, Gosh, maybe it'd be a better time to be in the trades.
Now, am I going to tell some VP, Hey, you should go back to community college and become a plumber? I think that's a tough sell, too. I think there's a middle ground where, if you really become a student of this stuff, there are still a lot of opportunities in the next couple of years, again at the application layer, where you could be the person who helps companies get from the 5% success rate we're seeing to a 25% success rate. There will be a lot of opportunities there. I think it depends a lot on where you are in your career. I've been building software for 20 years, and I've always thought I could always fall back on knowing how to organize people to build great software. I'm not sure that will even be a skill set in five years. I'm very much planning as if I don't have an exit or retirement plan over the next 5, 10 years. We need to be thinking about what value we can provide beyond that. But I do think, very tangibly, trades will be coming back in a big way.
I think there's a lot of opportunity for people to become experts. You can be an expert in replacing your own job with AI. That gives you a job for the next couple of years.
We've talked about entrepreneurs, we've talked about employees, we've talked about where we think this is going and how this is the new fire. What are some of the conversations that none of us are having? Let me rephrase that: none of us other than you, in these boardrooms with people who are very much at the tip of the spear. What are the things you guys haven't made public yet? If you can: Hey, this is what we're talking about, and these are the things that keep us up at night. Because we know what keeps the entrepreneurs up at night. We know what keeps the employees up at night. What keeps you founders up at night?
I think the boardroom conversations are more about the pace of AI change. It used to be that you'd build a software company and usually have at least 10 years before someone really disrupted you. Now it's like five years, and pretty soon it'll be two years, because there's so much technological change that just undoes things. Valuations for SaaS businesses, if you look at them today versus five years ago... Oh, my God. Yeah. Right. In the boardroom, I think there's a lot of conversation about that, and again about AGI and what that would mean. Could it just render a lot of businesses irrelevant? I think the conversation we should be having is the one we're tiptoeing around, which is: how do we as a society handle this? There's a really good short book called Manna, M-A-N-N-A, by this guy Marshall Brain. Do you remember HowStuffWorks.com? Awesome website. The guy's actually from my hometown, back in North Carolina. He wrote this 25-page book, and it's a tale of two cities. One city, actually set in the US, is a dystopian AI future, where the robots are in the ears of the humans telling them exactly what to do: walk 10 steps this way, turn over the burger, that kind of thing.
And another city where, no, a lot of the gains from AI are shared more broadly across society. It's a little hyperbolic, right? But I think it's a really interesting thought experiment about what's coming, and I don't know that we'll get as dystopian as the one example or as utopian as the other. But I think everyone's busy fighting, trying to put the genie back in the bottle, and the genie is not going back in the bottle. We need to talk about where we want to put guardrails and push the genie in one way or another. I think the other thing people are talking about, candidly, is AI regulation. A lot of folks in tech land voted for Trump, and one of the reasons they voted for Trump is because he wouldn't regulate AI. A lot of folks see that there's basically an arms race between us and China around AI, and there's this belief, right or wrong, that if China gets to AGI first, and you believe in Western-style democracy, bad things happen. There are so many different levels to this upheaval, but those are the three I would think about.
So where do you think things are going? Because people do have this dystopian fear that all of a sudden it's going to be Terminator. You're going to have the day it cuts over, and then the robots are going to take us over and turn us into cottage cheese. What do you think is more realistic?
I think all the paths are still open.
I think- It's not the answer I wanted to hear, but okay. I just peed on myself a little bit there.
I think we would be foolish to dismiss it. There are a lot of folks in the Valley who are concerned about AI safety. A lot of the open revolt they had at OpenAI a year ago was about this fear that this thing was founded on the premise of AI safety, and it seems to have gotten off that mission. A lot of people way smarter than me seem to be very concerned about that. I don't want to be an alarmist, but I think we should all be alive to the danger. This feels like a critical moment in human civilization, and everyone needs to educate themselves a little bit and do what they can to make sure we're nudging ourselves in the right direction.
For all of you who have just caught the podcast: we've decided that we're all going to die, we're all going to be out of jobs, and it's completely over. Okay, so let's try to give people a little bit more hope about what's being done. There's a lot of conversation about what AI can do, not just the basic stuff with business, but what's been done medically. Like, Hey, we've made X, Y, Z discoveries, we've pushed the envelope, and a problem that humans couldn't solve for 100 years gets solved in 27 seconds. There are some amazing things with AI. Can you share some of your favorite ones you've seen, where you went, Oh, my God, I can't believe it just did that, or it figured that out?
I think you just touched on the big one: a lot of the stuff you're seeing happening in health care, where things that used to be really expensive, analyzing scans, early detection, get cheap. My father was in emergency medicine for 30 years, and he'd be the first one to tell you we are really reactionary in health care, for a number of reasons. But first and foremost, it's very expensive to be proactive in health care, because someone's got to analyze those scans. They've got to look at these blood markers. They've got to do all these things, both in preventative medicine and in research. AI is going to drive down the cost of all that dramatically. Oh, yeah. To the point where you don't have to be rich to get life-extending care well ahead of some acute medical crisis. I think that's going to be the thing we look back at and say, Wow, we hopefully cured, or greatly reduced the harm from, a lot of diseases in a very short period of time. But it's going to be the Wild West in the meantime, because our medical regulations haven't really caught up with that.
We don't know how to handle it yet. But I think that's one area you can point to and say a lot of good is going to be done there. For all the disruption we're going to see from self-driving cars, that's also going to be a place we point to. Cars are the number one cause of death, the number one use of urban land. Think about housing affordability. Think about what happens when you don't have to dedicate 40% of your city to parking. Think about what happens when people aren't getting in car accidents left, right, and center. On the other side of this crucible, there are a lot of things to look forward to. In the same way, look at the industrial revolution and things like it: there were a lot of painful things in that transition. A lot of terrible things happened. But humanity was better for that transition in the end.
Even with the IT boom, when technology kicked in, people were like, Oh, my God, this is going to wipe out jobs. Yeah, it did. When tech rolled out, when we had the dot-com boom and everything took off with the internet, it wiped out waves of jobs. But the job that you have right now did not exist before that. Neither did the jobs I did, the careers I had. So yes, it will wipe out a ton of shit. It will also create a ton. And to your medical point, there's now a difference in how we measure things against our DNA. Some things don't change, because even if you die of cancer, your DNA is your DNA. But the other stuff we can analyze and say, Hey, you know what? We say that everyone should take these medicines. However, based on your stuff, your individualized goodies, you should be taking this. I was sitting with the CEO of one of the companies that does that. We broke it down. He's like, Yeah, let's run your blood work. And within a day, he's like, Okay, this is what you need to stop eating right now.
I was like, I'm sorry, what? He's like, Yeah. I'm like, Yeah, but that's supposed to be healthy. He's like, Yeah, for everyone but you. Don't eat that. He regrettably did not say that I could have ice cream every day, so I'm still mad at him. I was like, Why can't I have ice cream every day? What the hell? So there is that. At every single level that we're at, be it an employee, an entrepreneur, a founder, there is this optimism, and there's also a little bit of fear. As we get through that, it comes down to having the tools and the techniques right now. So what are the tools that you're using, other than, obviously, everyone needs to use your software? I get it. Please stop using it on my meetings, you bastard. Everyone needs to use their software. What are some of the tools that you use every day, and how do you use them differently than everyone else?
I think everyone thinks that in Silicon Valley, we have a whole different set of tools than everyone else. We actually don't. I think there's... What?
I'm done.
We're all using ChatGPT. We're using things like Magic Patterns, which is another one I love; it's basically an AI for generating prototypes. If you want to mock up an interface or something, so we build a lot of prototypes with it. At the high end, I think the secret is that to actually build good products with AI, you end up using multiple models. Any feature in Fathom, whether it's generating meeting summaries or finding action items or answering questions based on transcripts, there's a pipeline, and we're using four different models from different providers in that pipeline. We use some from Gemini, some from Anthropic, and some self-hosted ones. So at the high end, when you're actually building really sophisticated stuff and trying to take the highest-quality AI to market, it's a whole different game. But for an individual, frankly, there's so much word-of-mouth adoption of these tools (it's why they all go from zero to 100 million so fast, because they're so good) that there aren't a lot of secret tools people are using. It's a lot of ChatGPT, Claude, et cetera.
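[Editor's aside: for readers curious what a multi-model pipeline like the one Richard describes could look like in code, here is a minimal sketch. It is purely illustrative: the stage names, provider labels, and stub functions are assumptions invented for this example, not Fathom's actual architecture or any real provider's API.]

```python
# Hypothetical sketch of routing different meeting-processing stages to
# different models/providers. Stage names and providers are illustrative
# stand-ins, not Fathom's real implementation.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Stage:
    name: str                    # what this step does
    provider: str                # which model/provider handles it (assumed)
    run: Callable[[str], str]    # stand-in for a real model call


def build_pipeline() -> list[Stage]:
    # Stub implementations; a real system would call each provider's API here.
    return [
        Stage("transcribe", "self-hosted", lambda x: f"transcript({x})"),
        Stage("summarize", "gemini", lambda x: f"summary({x})"),
        Stage("action_items", "anthropic", lambda x: f"actions({x})"),
        Stage("qa", "self-hosted", lambda x: f"qa({x})"),
    ]


def process_meeting(audio: str) -> str:
    out = audio
    for stage in build_pipeline():
        out = stage.run(out)  # each stage feeds its output to the next
    return out


print(process_meeting("meeting.wav"))
```

The point of the sketch is the shape, not the stubs: each stage can be served by whichever provider does that task best, and swapping a model only touches one stage.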
I think the other thing that's really important is that when you pick a new tool, you have to understand that how you used to operate will also have to change. The simplest example I can give: we used to make very specific PowerPoint slides that looked a very specific way, at a very specific level of polish. That took a ton of time. We use this tool called Gamma, and again, I don't do sponsorships or affiliates, I refuse to, so this isn't that. My team got a hold of it, and I was like, Okay, this looks completely different. They're like, Yeah, but we created 300 slides in a week versus a month and a half. I was like, Okay, I guess our slides look different now. Having that adaptability was really important. What are some of the ones that you've used where you're like, Hey, okay, yes, I used to do it like this, and it doesn't work anymore?
That was actually the example I was going to give: I want my slides to look like this. Gamma is great at getting slides out. Are they going to be exactly what I had before? No. That's the thing. It's like asking whether ChatGPT gives me exactly what I got in Google search results. No, it's actually better, but you have to be flexible and rethink: what do I actually need out of this tool? Gamma would be my example too. Honestly, I now waste more time in there generating fun AI images.
I do, too. I'm glad you brought it out because I didn't want to be the first shameful one to say that. I spend way too long in there just messing with the images because it's fun.
I'm like, weee. Yeah, I'm a put-an-image-and-two-words-on-the-slide guy. Our branding, actually: we just rebranded, and we put astronauts in it. The reason we put astronauts in it is because I had so much fun with them in every deck we've made lately. I've got astronauts fencing on the moon, astronauts fighting monsters, astronauts doing math with their helmets on. I love it, right? Yeah. Fun is not to be discounted in the workplace. It's worth doing.
It's still got to be fun out there; you've got to keep that rocking and rolling. I'm glad that you stepped up and said that you, too, are a dork like me. I appreciate that you step into that world with me. So when people are sitting there looking at this, one of the things they're concerned with is: if I go to Google and I type in, What's the best food in my city? I'm going to get thousands of answers. With ChatGPT, I'm going to get one. People are a little afraid of that. Like, okay, we're now getting one answer, so I don't have the option to think on my own. I'm now being told, and the data has been synthesized down to this one thing. Is that something you're concerned with as well? Because if I go to the library and there's one book on history, I know I'm missing a lot.
Yeah, there's a big concern about that. We've already had this bifurcation, I feel, of what reality or truth is in America, to a certain degree. What do you mean? We're not going to get into that. But it is interesting. For as much as these tools get things right, there are certain cases where it's really, really bad. My girlfriend the other day was looking up some place that would, I think, sew something for her. And it gave her three answers, and all of them were completely made up. That one's at least easy to spot, because you can easily verify, Oh, that's not a real place. But it is a little scary, because we are outsourcing judgment. The reason we like it is because we're outsourcing judgment. Who wants to go through a thousand restaurant recommendations? I just want three. Let me pick from three. But yeah, we're outsourcing judgment to this AI, and that's why, again, I'm grateful that there are at least reasonable competitors, and it does seem that there isn't as much moat in building foundational models as we thought there was.
From a consumer brand perspective, ChatGPT has 98% of the market. But I would encourage people to get a second source, whether it's Gemini, whether it's Claude, whether it's Grok, you name it. When you're skeptical, get a second source. I think all the smart people are generally diversifying. They don't rely on one to answer the question, for that very reason.
I also think if you are trapped in one ecosystem, it's by your own choice, because no one traps you at this point. And to your point about your girlfriend asking for a place to sew something, I'm like, Yeah, okay, Schmucko, now go check Yelp and compare your options. So having that cross-reference is important. It's one of the things that I've coded into mine, which is one of the things I love about GPT so much. I'm like, Okay, if you give me an answer like this, always do this after. And outside of the dashes, it seems to comply. But I think everyone I know will just celebrate so much when the damn dashes are no longer included. Stop it. No one writes like that. That doesn't sound like a human. What is wrong with you? So, by the way, on a side note, if anyone listening to this knows how to get rid of the dashes permanently, please send me a message. I will pay you for it. It drives me out of my mind. So with that done...
I actually don't even know how to get rid of the em dashes.
I think I read something where they said they're aware of this. They're like, We're not sure how this got in there. It feels like it's the AI's fingerprint. I don't know. It really is.
It's funny, because I will sit there and tell it over and over and over, and it keeps bringing them back. So for those of you who are sitting there: we've got tools, we've got adaptability. As we go through those, let's talk about what's next for you. Not in five years, but the immediate next 90 days. Again, you're the tip of the spear with what you're doing over at Fathom. What do the next 90 days look like in conversation with your staff? Because you have to lead differently now that we're in an AI age. How do you lead differently? How do you show up differently in that environment? How do you build 90-day plans? Because anything beyond that, you're like, yeah, come on, we don't know.
We've been fortunate in that, in some ways, this plays to our strengths. Even from the beginning of this company, I've always said, We only build 90-day plans. I think in a lot of companies, planning is this art of self-deception and false perception. In technology, even before AI, you really couldn't know exactly where you'd be in a year. I think it's important to have hypotheses about the future: we believe the future will look like this and not that. But then we react. We're more reactive on a local level. I mentioned our goal earlier of getting to 100 million in revenue with less than 150 employees. That's way easier to achieve when you start from 10 employees than when you start from 500. We're also a fully remote business, so we're pushing the envelope on two dimensions. How do you use AI to basically streamline communication in a 100-person org that doesn't ever see each other in person more than once a year? But I'll tell you, right now it's still, I think, a really exciting time. For our business, the thing we've been really excited about is not just writing notes for meetings.
That's never been our goal. Our goal is: what happens when we get all of your meetings, all of your team's meetings, all of your company's meetings into one data repository? Because it's a really big data set. It's really hard to move. It's historically never been captured, certainly not structured. But if you get all of that into one place, we're finding that modern LLMs can actually do really interesting things with it. We did an example with a prototype the other day where we said, Hey, Fathom, tell us the history of transcription engines at Fathom. And it went back through every all-hands, every engineering meeting for four years, and it wrote a six-page article about everything we've ever done. Think about that for knowledge management, right? Yeah.
Also seeing where your loopholes are and where your vulnerabilities are. Say, Hey, you've listened to four years of my conversations; I don't remember what I had for dinner last night, let alone anything else. Being able to sit there and analyze: Okay, where are the holes in our things? What have we missed that was mission-critical? Because again, I love picking on Fathom, because it shows up and annoys me all the time. It wants permission. I'm like, Bugger off. But the ability to do that and then query everything down the road, that data set is invaluable.
Right. Everyone hates meetings, but we love having great conversations. I think we're moving towards a world where you can have meetings and just speak things into existence. We can talk about it, and when we get done with the meeting, it's done. The SOW is written, the email is drafted, the Gamma PowerPoint is already queued up. We get to a world where we get this really interesting dissemination of knowledge across the org in a fun way. One of the things we're experimenting with: everyone hates sitting in all these meetings where, I didn't need to hear most of this. How do we start building everyone a customized podcast that listens to every meeting adjacent to your function and gives you the highlights of what happened across the board today? There are just so many fun things you can do now that you literally couldn't do even six months ago with the LLMs we had then. I still wake up every day feeling pretty optimistic. I look outside my window and feel less optimistic, but I feel like we'll get there. Humans always solve things at the absolute last possible minute, but we usually do.
Churchill said it really well. He said Americans always do the right thing after they've tried everything else. Exactly. That's where we are on this. I'm like, Oh, God, here we go. Here we go. All right, just survive and hold your breath long enough. Moving on from that one. How are you dealing with, and this is getting away from the AI, you've created a very successful brand, a very successful company, and it's all remote. A lot of founders, a lot of owners of companies have problems with that. How do I keep my team motivated? How do I keep them honest? How do I keep them unified? How do I build a cohesive culture? How have you survived and thrived in that environment?
I think one of the reasons why I have this goal of 100 million with less than 150 employees is that I've had a lot of very successful friends who go IPO and get to really big companies, and all of them, when I tell them we're like 80, 90 people, say, Oh, I miss that. That was so much fun. I always ask them, When did it stop being fun? The answers vary: 100, 150, 200, but it's all in that range. I hypothesize from talking to them that there's some point at which you switch from a high-trust environment to a low-trust environment. I picked 150 for our goal because that's the Dunbar number, which is this theoretical limit on how many real friends you can have. I think when you get above that number, it's impossible for everyone to be friends at work, and you're almost inherently going to be a low-trust environment. It's interesting: I see all the same stuff where it's like, I let my employees work from home and they're not really working that hard. Oh, that's because you have a low-trust environment. I don't exactly know what creates high trust versus low trust.
I think it's a cultural thing. I think it's a lot about how we lead, how we communicate, and how we motivate folks. But I do know you should be aware of which environment you have. If you have a low-trust environment with your employees: one, maybe you should get curious about how that happened. Two, yeah, maybe you need to get people back in the office, because if you can't trust that they're going to put in the work, other structures might be needed. But I think we've been very fortunate in that we have an amazing team that loves the work they do. They're each given enough autonomy and trust. I think high-trust environments happen because when we hire people, I tell our team, I tell our execs: you should trust them by default. If you didn't want to trust them by default, you shouldn't have hired them. By default, you should give them room to run. It's like Gamma: you shouldn't be prescriptive that the deck needs to look exactly like this. Is it 80% of what you thought it was, but 100% of what it needed to be?
Yes.
I think that's an important factor. Eighty percent of what you thought it was, 100% of what you needed it to be. On hiring people, some of the best advice I ever heard was, Would you trust this person to feed your children? In other words, if you got in an accident and you couldn't provide for your family, would you trust that these people could do it for you? If you can't say yes to that, then you have failed in the hiring process. I guess my next question is: as you've built this high-trust environment, which takes time and takes the right personalities, help me with your view on getting rid of someone who does not fit into that environment.
Our goal is 90 days. You usually know by 45 to 60 days, and then, just out of an abundance of caution, I think you can go as long as 90 days. You really can't go any longer than that. But that's our goal. I think it's generally pretty quick. The nice thing is, once you have a high-trust organism, the organism will reject any organs that don't fit in with it, as long as you've got a good way to have listening posts. That's what gets harder, I think, as it gets bigger. It's like, how do people trust they can tell me, Hey, this new executive you brought in is not our DNA? But the organism knows, if you can find a way to observe it.
It's interesting that you say 45 days. I'm much faster on that. We're very quick. My grandmother said it really well: When you're dating someone, you will know within three weeks. And if you don't know, you know. She was just bulletproof with that. And I miss her greatly; she's no longer with us. But when it comes to hiring someone, normally within the first 48 hours, and we don't pull the trigger that quickly, but within the first 48 hours, you've got enough of an icky feeling, enough of a, Okay, I don't know if I want a second date; this might need to be thought over again. So I love that you have a big heart and high empathy. Hats off to you and your people.
Well, what I'd say is, actually, that number used to be lower. But every time we looked at it, we said, Any time we find out it's not a fit in the first week, that is a real indictment of our hiring process. A thousand percent. I think now we're generally getting to, Okay, we think our hiring process is pretty good, which means no one should be failing inside of three or four weeks. We shouldn't be able to tell that fast. It shouldn't be anything that crazy. But you can't test for everything in the hiring process. That's where I think, Okay, even with the best hiring process, those issues will show up a month in. That's when it's, Oh, they were on their best behavior in the hiring process, and we got unlucky with the references and stuff like that.
We normally give people tests. We're like, Hey, I need you to do this; I need you to do that. We go through that process: Hey, do these things. We still have people actually do a test of what they'll need to do. That helps us out with what we're doing. As we go through this, and as things are changing as an organization, and for you, as you've had a level of success you never thought you'd have, doing something you never thought you'd do: what's next? What's the next big thing where you're like, Hey, I really want to accomplish this?
I think one of my superpowers as an entrepreneur is that I have these built-in blinders, where I get so passionate about what I'm working on. I actually think one of my superpowers is just getting passionate about things. I always like to hire passionate people, because passionate people can get passionate about anything. They can get passionate about plumbing, going back to what you said about transitioning careers. I think if you told me, Hey, Rich, go be a plumber, I would get so excited about fittings and stuff like that. Right now, it's the most volatile time to build. It's also the most fun time to build. I do, on a personal level, get really passionate about what I see happening in public discourse and what I'll hesitate to call politics. I met with another entrepreneur yesterday who told me he's running for city council, and I think he expected me to be disappointed, or maybe confused, by that. No, I was like, That's amazing. Not enough people of high character and good judgment go into politics, because they judge it to be EV negative, and it is EV negative.
But that's not why you do it. You do it when you've gotten so much from society that you feel like you should give back. I think there's a lot of stuff I would love to do in that sphere in the future, because I think our country could use some help. I think it could use some high-judgment people who are not out for themselves.
A thousand percent. It's interesting, because it's a similar conversation I had over the weekend. We were talking about, Hey, we've all been very blessed. We've all been very successful. Maybe it's time to give back and offset and course-correct some of the things that have been going on, not just in this administration, but in many, many, many administrations. We're going back double digits. It's like, Oh, my gosh, we have to pivot this, and it's time to have these people take over and do something different. Other than running for president in the next 27 minutes, if someone wants to track you down, learn more about you, and connect, because I'm just super grateful that you shared this stuff, what's the best way? How do they get a hold of you? How do they get a hold of Fathom?
Yeah, check out Fathom at fathom.ai. It's free to use. Please give it a shot. Then you can find me on the only social media that I use, which is LinkedIn.
I got you. I really appreciate you coming on. Thank you so very much.
Charles, this is awesome. Thanks for having me. Absolutely.
All right, guys, that wraps up our episode with Richard. I want to thank him for coming out and sharing some insights on where things are going and the unforgiving truth of what's next with AI. It has two very specific paths, and it's in our ability to dictate where that goes. All right, guys, I'll see you in the next one.
In this candid and fast-moving episode, Charles sits down with Richard White, founder and CEO of Fathom AI, the top-rated AI note-taking platform on G2 and HubSpot, to unpack the truth behind the AI gold rush. Richard shares why only 5% of internal AI initiatives actually succeed, and what separates innovation from illusion in today's hype-driven market. Together, they dig deep into the harsh realities of building and buying AI software, from skyrocketing failure rates and short model lifecycles to the "LLM treadmill" that forces companies to constantly rebuild just to keep up. Richard breaks down why most big corporations are struggling to adapt, how startups can outmaneuver them with speed and focus, and why the future of work will favor the few who learn to "think with AI." The conversation stretches beyond business, exploring the coming social upheaval, the rise of one-person billion-dollar companies, and the ethical crossroads of automation, employment, and human creativity. Both Charles and Richard keep it honest, funny, and forward-looking as they challenge listeners to rethink what it means to lead, learn, and stay relevant in the AI age. This isn't just another talk about artificial intelligence; it's a survival guide for entrepreneurs, employees, and visionaries navigating the most disruptive technological shift since fire itself.

KEY TAKEAWAYS:
- Why the future belongs to those who learn to think with AI, not compete against it
- How automation is paving the way for one-person billion-dollar companies
- The ethical and human implications of an AI-driven economy, and how to stay grounded amid disruption
- The mindset shifts needed to stay relevant, creative, and adaptable in the AI age

Head over to provenpodcast.com to download your exclusive companion guide, designed to guide you step-by-step in implementing the strategies revealed in this episode.
KEY POINTS:
01:18 – The 5% reality check: Richard opens up about why 95% of internal AI initiatives fail, while Charles unpacks what separates companies that truly innovate from those just chasing the hype.
04:42 – From bootstrapping to breakthrough: Richard shares the journey of building Fathom AI into a top-rated platform, while Charles highlights the timeless lessons in execution, focus, and product-market fit.
08:15 – The LLM treadmill explained: Richard reveals how rapid model updates force companies to constantly rebuild, while Charles reflects on why adaptability is now the ultimate competitive edge.
13:28 – The illusion of enterprise AI: Richard breaks down why corporate AI projects struggle to deliver results, while Charles explores how small, agile teams can move faster and smarter.
20:04 – Thinking with AI, not competing against it: Richard discusses how humans and AI can complement each other, while Charles reframes the idea of "AI replacement" into one of "AI augmentation."
26:51 – The one-person billion-dollar company: Richard predicts a future where automation and leverage allow individuals to achieve massive scale, while Charles examines what this means for the workforce and leadership.
34:22 – Ethics, disruption, and the human future: Richard warns about the social impact of AI's rapid acceleration, while Charles challenges listeners to shape technology with purpose, empathy, and accountability.
41:10 – Staying relevant in the age of acceleration: Richard closes by sharing how curiosity and lifelong learning keep innovators ahead, while Charles reminds us that the real advantage isn't AI itself; it's how you use it.