Beyond The Prompt - How to use AI in your company

From Roadmaps to R&D: How AI Is Changing Product Development - with Richard White, Founder of Fathom AI

Episode Summary

Richard White, founder and CEO of Fathom AI, joins Beyond the Prompt to discuss how generative AI is changing the way product teams plan and build. As model capabilities improve quickly, estimating effort and impact becomes harder, challenging traditional roadmap driven development. Richard explains why his team operates more like an R&D function, how they experiment with different models and use cases, and what it takes to build in a world where the underlying technology keeps evolving.

Episode Notes

Fathom was built on the assumption that transcription would become commoditized and generative models would steadily improve. Rather than training proprietary models, Richard focused on building the infrastructure around them and waiting for model capabilities to reach the right threshold.

In this conversation, he explains why AI has made effort and impact harder to predict, and why that shifts product development from roadmap execution toward experimentation. He describes separating an exploratory AI team from core engineering, structuring that team to prototype and write specs, and expecting a meaningful portion of experiments not to work.
Richard introduces his Jenga model for AI development, testing different models and use cases to find where resistance is lowest. He also discusses the operational realities of rapid model updates, hallucination rates, and what he calls the LLM treadmill.

The discussion explores qualitative QA, organizational design, buy versus build decisions, and why leadership taste plays an increasingly important role as AI lowers the barrier to generating outputs.

Fathom: fathom.ai
Fathom LinkedIn: linkedin.com/company/fathom-video/
Richard's LinkedIn: linkedin.com/in/rrwhite/

00:00 Intro: Why AI Breaks Roadmaps
00:19 Meet Richard White (Fathom AI)
02:16 From Roadmaps to R&D
04:49 Designing AI Teams for Speed
07:11 The Jenga Model
09:56 Failing 50% & AI Team Psychology
13:40 LLMs as Interns & Anti-Planning
21:01 QA, Data Pain & Developing Taste
24:59 Executive Taste & Culture Rules
27:20 Reacting to AI Waves
28:50 Fathom’s 4-Step Product Plan
30:47 What New Models Unlock
32:13 From Scribe to Second Brain
40:32 Build vs Buy in AI
45:32 The Debrief

📜 Read the transcript for this episode: from-roadmaps-to-rd-how-ai-is-changing-product-development-with-richard-white-founder-of-fathom-ai/transcript

Episode Transcription

[00:00:00] Richard White: It's really hard to estimate effort and impact. I could spec out a project today using an LLM and maybe it'll take me six months to build it. And if I wait six months, maybe it'll take me six hours to build it. Now, not only does it take six hours instead of six months, but the output is way better. So the impact is way higher.

Everything you knew about product management kind of gets thrown out the window.

Hi, I'm Richard White, founder of Fathom AI, uh, one of the top AI note takers. Really excited to have this conversation, getting to talk about how we think about how AI has changed product development, our Jenga model of software development that we use to build AI features, and how to get off the LLM treadmill.

[00:00:34] Jeremy Utley: For folks who don't know the name Richard White, and maybe you don't even know Fathom, would you just give a quick intro? Why should you be interested in this conversation today?

[00:00:45] Richard White: I'm a software engineer by trade, but kind of turned, like, product designer, you know, jack of all trades, product person. You know, we started Fathom about five years ago.

Fathom is, like, one of the top kind of AI note takers: joins your meeting, takes notes, figures out your action items, et cetera, et cetera. We started this company five years ago before Gen AI really ever kicked off, and we started with two hypotheses. One was we thought transcription cost was gonna go to zero, and we thought Gen AI was gonna get really good.

Um, we kind of believed that we shouldn't build models, but we kind of should apply models. And I think I was fortunate enough to have this insight because I'd been kind of surrounded by, like, Y Combinator folks for almost 20 years, and so knew which people were investing in things like Anthropic and OpenAI. And I think, kind of all in that five-year journey, those hypotheses clearly paid off, right?

Transcription is basically free now, and Gen AI obviously works, unlike the AI that we had five years ago. I think two things that we've kind of discovered along the way are, like, one, how Gen AI has kind of completely upended how I think about software development. We've done software development for 20, 25 years, and it's almost completely inverted now.

It's much more of an R&D process than it's a roadmap process. And I think also we think a lot about kind of what I call the Jenga model of product development with AI, as well as kind of, like, when do we, just internally, buy or build, uh, basically AI automations within our org.

Sure. Right? And, like, how do we scope them on either side, and when do we use AI and when do we not, sort of thing.

[00:02:16] Jeremy Utley: So, uh, maybe we could start there with the question of, uh, R&D versus, we, I can't remember exactly what you said, but I would love to start there.

[00:02:25] Richard White: Yeah, I mean, so I think, look, I think if you've, if you've done product management, if you've done software development for the last 15 years, right?

You, you kind of put stuff on the roadmap and there's not a lot of risk and things you've put on the roadmap, right? Technical risk, if you're doing SaaS, you're doing anything for the last 10, 15 years. Technical risk wasn't a big component. The big challenge was kind of estimating effort and impact, right?

We'd spent a lot of time trying to figure out those two things, and if you could do those two things accurately, you could pretty much have an optimized roadmap. I mean, this is what Zynga was pretty famous for 10, 15 years ago. They had amazing kind of quantitative, uh, prowess. And so they got really good at, like, knowing if we add this thing to Farmville, we will see this lift.

Right? And that was the whole game. But now with, like, AI, it's kind of shifted, in that it's really hard to estimate effort and impact.

[00:03:12] Jeremy Utley: Yeah.

[00:03:13] Richard White: For example, you know, effort shifts very quickly, right? I could spec out a project today using an LLM, and maybe it'll take me six months to build it, and if I wait six months, maybe it'll take me six hours to build it, right? As a new model comes out.

[00:03:24] Jeremy Utley: Sure.

[00:03:24] Richard White: Right? And then in the other dimension, impact also shifts, right? Like, that version I was gonna build six months ago, that would take me six months to build, because that's kind of brute forcing an LLM that really wasn't quite ready for that use case. Maybe now, not only does it take six hours instead of six months, but the output is way better.

So the impact is way higher. And so it's kind of funny, now it's like everything you knew about kind of product management kind of gets thrown out the window. Um, and you kinda have to think of a new model.

[00:03:52] Henrik Werdelin: Do you feel like you already have changed the way that you build? I'm asking because I have businesses that are big, you know, like one I started that's publicly listed, blah, blah.

And then I've been involved in new companies, and there's obviously a completely different mindset in the companies that have been started in the last 24 months, because they almost have been able to go completely to this kind of new AI stack, where the organizational design is a little bit different, because a lot of people can do a lot of different things. And software, for example, is often pitched now as actually functional code that is basically being submitted, as in like, do you want to merge this to the master copy already?

Right? Rather than just kinda like a concept and a wire frame and design, blah, blah, blah. When you have a company that's five years old, but that's born basically on ai, do you feel you kind of have to morph your organization in a way of thinking about it just because you're five years old? Or do you think that you kind of are already in the new way of operating?

[00:04:49] Richard White: I mean, I think there's kind of two sides of where I think AI has an impact. One is taking away kinda a lot of the current work of software development, which I think is what you're alluding to. It's like, you know, I wouldn't wanna be a front-end engineer in 2026, right? Like, a lot of that stuff just gets generated automatically.

The other side of that is, like, if you're actually building AI features yourself in your product, the way you think about vetting and building them also is very different. I would say we're still kind of traditional in the first sense, because Fathom is actually not a traditional kind of SaaS product.

A traditional SaaS product is candidly just a lot of forms and workflows, right? Uh, a lot of our technical engineering challenges are actually ones like, it's a distributed system, it's a real-time system, it has to be highly reliable. And, you know, we try to get you your transcripts and your meeting recordings and your summaries within 30 seconds of the meeting ending.

And we're trying to do it for hundreds of thousands of people a day. That's like a, you know, a big technical challenge that's way beyond what you can have spin out of Claude Code today. But on the other side, when it comes to, like, building AI features, yes, I think we've completely shifted how we build AI features.

I think one of the things I see people get wrong about this, and I talk to people all the time, it's like, do you have an engineering team? And they're like, great, I've got 10 engineers. Three of them are now gonna start building AI features. And that means they're gonna start writing the prompts, figuring out which models, figuring out how to host the models, et cetera, et cetera.

And the one big shift that we made that I think has paid a lot of dividends is we kinda separated out the what model, what prompt, what pipeline from the how do we host it, how do we serve it, how do we scale it. We have a separate team that's kind of what we call just the AI team, the AI engineering team.

And they spend all their time just kind of, honestly, kinda just prototyping. And they use tools like Magic Patterns to kind of make really high-fidelity functional prototypes. But it's all in service of writing a spec. They throw away that prototype. That output is a spec that then goes to engineering: hey, here's the five models we're gonna use to build a feature that finds action items.

Right.
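Richard's handoff pattern above — prototype, throw the prototype away, give core engineering only the surviving choices — can be sketched as a small data structure. Every field and model name here is an illustrative assumption, not Fathom's actual spec format:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "spec" artifact an exploratory AI team might
# hand to core engineering once a prototype has proven out. The prototype
# itself is discarded; only the validated decisions are passed along.

@dataclass
class AIFeatureSpec:
    feature: str                              # what the feature does
    model: str                                # model that won the bake-off
    prompt_template: str                      # prompt validated in prototyping
    fallback_models: list = field(default_factory=list)
    eval_notes: str = ""                      # qualitative QA observations

spec = AIFeatureSpec(
    feature="action-item extraction",
    model="model-a",
    prompt_template="List the action items in this transcript:\n{transcript}",
    fallback_models=["model-b"],
    eval_notes="model-a missed implicit owners in ~5% of sampled meetings",
)
```

The point of the structure is that hosting, serving, and scaling concerns are deliberately absent: those belong to the engineering team that receives the spec.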

[00:06:46] Jeremy Utley: That makes a lot of sense.

[00:06:47] Richard White: And what's challenging about it is that, like, the search space continues to grow, right? In that, like, 24 months ago, it was like, well, we've got Claude 2 and we've got GPT-4. We kind of, you know, play with, well, Claude 2 is better at some of these things, GPT-4 better at these.

Now we've got Gemini, OpenAI, Anthropic, as well as, like, we've got DeepSeek, we've got Qwen, we've got all these sorts of things, right? And so, um, I kind of teach this, you know, we call it the Jenga model. So it's like you play Jenga, right? Like, push on a block, you know how to play. It's like, if you play the game where you get to touch the block.

Right. Some people don't play the rules. Once you touch it, you gotta, you're committed to it. Let's assume you're playing where it's like, I can touch the block and I can untouch it.

[00:07:26] Jeremy Utley: Wait, are those the rules? Just, just as an aside, are the rules of Jenga that once you touch, you have to try that? I didn't

[00:07:30] Henrik Werdelin: even know.

[00:07:31] Richard White: Technically, yes.

[00:07:32] Jeremy Utley: Okay.

[00:07:32] Richard White: I believe it's, like, the rules.

[00:07:33] Jeremy Utley: And so do you enforce those rules with AI, or are you saying you're looser? No, you touch, you go, whoa, whoa, whoa, whoa, that's gonna move the tower.

[00:07:39] Richard White: Yeah. We're much more of the house rules that I think everyone plays. Yes. Where it's like you can kind of drill some test wells, you can touch it.

Right.

[00:07:45] Jeremy Utley: Okay.

[00:07:45] Richard White: And I think about blocks in the Jenga tower being kind of models and use cases. Right. And so if you push on a block and, you know, you get resistance, our thing is: find another block.

[00:07:56] Jeremy Utley: Yeah. Right?

[00:07:57] Richard White: That means that model is not the right tool for the job.

[00:07:59] Jeremy Utley: Well, so we're looking for where it's easy. But I like that, where you said model and use case, right?

Because sometimes the challenge is the use case is the wrong use case for the model. And other times the model is the wrong model for the use case. What you're saying is even having that kind of fluidity as a development team, hey, if you're trying on this use case, try these four models and see and kinda do a horse race.

Or if you're working in this model and you're not making progress in this use case, you've got a whole tower. Uh, how do you think about, just to use your metaphor, where does the tower of use cases live? If a developer is trying to think of what are other blocks I could push on, is that in an internal kind of leaderboard of ideas in the organization? Where do they go to find blocks?
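The model-and-use-case horse race described here can be sketched as a tiny harness: score every (model, use case) block during prototyping, convert scores to "resistance," and rank the blocks so the team pursues the ones that move easily. The model names and scores below are made-up placeholders; in practice each score would come from grading a real model's outputs:

```python
# Minimal sketch of the "Jenga" horse race: every (model, use case) pair
# is a block in the tower; push on each one, measure resistance, and
# pursue the blocks that move with the least resistance.

def resistance(model: str, use_case: str, scores: dict) -> float:
    # Lower = the block moves easily; 1.0 = total resistance (no score).
    return 1.0 - scores.get((model, use_case), 0.0)

def rank_blocks(models, use_cases, scores):
    blocks = [(resistance(m, u, scores), m, u)
              for m in models for u in use_cases]
    return sorted(blocks)  # easiest (lowest-resistance) blocks first

# Illustrative prototype scores: fraction of sampled outputs judged good.
scores = {
    ("model-a", "action items"): 0.9,
    ("model-b", "action items"): 0.4,
    ("model-a", "meeting summary"): 0.6,
    ("model-b", "meeting summary"): 0.8,
}
ranked = rank_blocks(["model-a", "model-b"],
                     ["action items", "meeting summary"], scores)
# ranked[0] is the (resistance, model, use_case) triple to pursue first
```

The design choice mirrors the conversation: the harness is symmetric, so a bad result can mean either the wrong model for the use case or the wrong use case for the model — either way, the answer is to try another block rather than keep pushing.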

[00:08:41] Richard White: And so this is also where it really differs from traditional software development, right? I meet with our AI team once a month and we kind of just brainstorm. Like, you know, they've got context on the product, they kind of know what kind of things we're talking about.

You know, they've got my wish list. I'm like, I wish we had this feature, I wish we could do this or do that. I'd always tell them, like, 60% of what we explore should probably be coming from the wishlist of things that myself and the product team would brainstorm. But I give you carte blanche: if you just think of something, oh, this new model came out and I think you could do this thing really well, go for it.

Right, and that's why it feels truly like an R&D team.

[00:09:13] Jeremy Utley: Yes.

[00:09:14] Richard White: We don't say exactly, here's what we want you to build. We say, here's the universe of things we're trying to build. We're gonna give you as kind of artisans, right? Like the charter to go explore in that universe.

[00:09:25] Jeremy Utley: It's so interesting. It's brilliant.

[00:09:27] Richard White: And the key thing is, we're gonna give you also the opportunity to fail, and that's what the other engineering teams don't get. Like, the AI team should be failing like 50% of the time.

[00:09:35] Henrik Werdelin: We have this feature in one of the companies I'm involved in where something takes a bit of time, like 30 seconds, 60 seconds. And so somebody built a little snake game that the end user could play while they were waiting, and it's, like, completely random (that's great), but hugely popular now, right? And so now there's like a score.

[00:09:53] Jeremy Utley: Now they actually wanted to take longer. That's great.

[00:09:55] Henrik Werdelin: Exactly right.

[00:09:56] Jeremy Utley: Okay. Wait, you said an AI team should be failing 50% of the time.

Say more about that. Because there are two ways that I think one could take that statement from the CEO of the organization. One is: I tolerate 50 percent, I'm not concerned if you fail, no big deal. The other way to take that is: team, you're succeeding too much, you aren't trying bold enough things, I want to see you fail more.

Can you talk about those two kind of aspects of it?

[00:10:22] Richard White: I mean, I think it's both, right? I think there is a, we have a reasonable appetite for risk here. Let's go throw things at the wall. I think it's a, we want you to go try things you're not sure are gonna work. And there's a number of times we put things on there.

We're like, this is kind of a Hail Mary. Occasionally those things actually work, right? It's kinda surprising in LLM land that sometimes, you know, that's cool, you throw a Hail Mary and you catch it, right? Yeah. So, so yeah, I think that's an important part of it. And I think it's also what kinda keeps it fun.

So it's a little bit pressure on taking risks, and it's also back pressure on, you know, sometimes people go down a rabbit hole and get stuck, right? It's like, cool, don't bang your head against the wall for weeks on this. Again, push on the block, see if you get resistance.

[00:11:01] Jeremy Utley: How is it managing psychology here?

I don't, I don't know if you guys have watched The Thinking Game yet. It just came out, about Demis and, you know, the whole deep learning and neural networks, et cetera. One thing that they had was kind of a cutaway with Paul Nurse, who is a Nobel Prize winning, uh, biologist. I think he's the CEO of the Francis Crick Institute now at Harvard.

But one of the things he said, which I love, which is so applicable here, he said, you know, I've been running a research lab for the last 50 years. He said, 90% of the time, I'm an amateur psychiatrist helping people work through how they feel about failing so much. And I wonder if you, Richard, do you feel that way?

Like how is it for engineers who maybe have been trained in a world of deterministic models and code verification, and probably they went to elite universities, they've always succeeded. Are you having to play the role of psychiatrist amidst failure or how is that going for you?

[00:11:55] Richard White: So I think two things on this.

One is. The profile of the engineer on our AI engineering team is very different than the profile of the engineer on our core engineering team.

[00:12:03] Henrik Werdelin: Say more.

[00:12:04] Richard White: They're often much earlier in their career. I think we've found a lot of success with folks that have, you know, usually some advanced degrees, or at least they have an appetite for, or generally they're comfortable, reading white papers.

We've seen, like, people that read white papers regularly, or have had that kind of muscle, like they know how to evaluate things. They make better decisions when it comes to, oh, this new model came out, I understand what this new model can do. And so they have to guess and test a little less. But I don't think, from an engineering perspective, any of them have the kind of engineering background, or at least the experience level, that would ever get them on our core engineering team.

And my retort is, like, you know, we've had some people like, oh, I wanna get on the core engineering team. It's like, why? That core engineering team is not gonna be relevant in 10 years. What you're doing is gonna be relevant in 10 years. Right.

[00:12:49] Jeremy Utley: Well let's revisit that 'cause that's interesting, but continue.

[00:12:53] Richard White: But yeah, so it looks very different, right? So the folks are much earlier in their career. And so I think naturally that probably lends to a little more optimism. Um, that's one thing I'll say. The other thing I'll say is we're a fully remote company. I love fully remote, I'm, like, a huge fan of it, being with the kids; I like to work from anywhere and I hate going to an office.

I like to work from anywhere and I hate going to an office. Having said that, the one team that raised their hand and said, we wanna be in an office together, was our AI engineering team. And I, you know. So it's their RD lap. So sometimes we try to pair them more than our typical engineers because I think it's easier to fight.

You know, it's, to your point about psychology, it's much easier to be two of you against a problem, right? Mm-hmm. Uh, than just one of you. And so I think that in Personness is helpful where they can be like, oh, I'm banging my head against this. Gimme a look at this. Oh, da. Right? So I think there's something to that.

[00:13:40] Henrik Werdelin: Can I, um, ask a little bit differently? You are kind of an expert in kind of communication, right? Your last company, it was kind of customers communicating to a company. Now you kind of sit in the middle of people communicating to people. As we talk about AI becoming this Ironman suit, where do you think that AI currently provides most value outside the pure kind of transcription?

Do you have like a way of thinking about how we should all be using this to really get the benefits?

[00:14:14] Richard White: My, uh, my friend Shear, who was CEO of Twitch and is now off doing a new startup, has, like, this good mental model where he kind of describes LLMs in terms of, ah, GPT-3 was, like, I think, like a sophomore in high school.

Right? Right. Yeah. Ah, GPT-4 is like a senior in high school. And so we talk a lot about, like, what would you, you know, now with the modern LLMs, you're getting into, like, okay, now we're clearly into college. Some of 'em maybe even postgrad.

[00:14:38] Jeremy Utley: Yeah.

[00:14:39] Richard White: And kinda like if you had an intern that was this, what would you have them do sort of thing.

Um, and I think what's been kind of exciting for us is, like I mentioned, we built this business assuming Gen AI would show up eventually, and it'd be like, we're gonna go build the rest of the car, and eventually someone's gonna just show up and drop off a new engine, and we're gonna drop it in. And two years ago, the state of the art for us was:

AI can just write really good notes on a meeting, right? It can do, again, things you would imagine a college intern scribe could do. Now we're like, cool, I think we can take really good notes for you, we can capture really good action items, we can do all that sort of stuff. Now the frontier for us is moving to the next phase, and I think it's even more exciting, which is now the scale of that college intern. It's not just one college intern. It's a college intern that actually has 800 hours a day.

[00:15:24] Jeremy Utley: Right?

[00:15:24] Richard White: And can watch every meeting that's happening at your company. Then can you each have one of these people and they can basically curate from that, those 800 meetings, the 10 minutes you need to see?

And so we're kind of moving towards this world where AI is actually a source of insight across an org as opposed to just being like a, it's a scribe in your meetings, right? That's just taking away some rote low value work. It's now kind of a second brain kind of thing.

[00:15:49] Jeremy Utley: Is that when, when you think about the future of Fathom, is that not to get you to kinda make forward facing statements or something, but like.

Do you see connecting, for, I mean, if you're taking notes in everybody's meetings, you probably are far better positioned to make insights across meetings than any individual, perhaps. Is that the future? How do you think about that?

[00:16:10] Richard White: Yeah, I think there's two dimensions that we are really excited about.

One is, look, people don't like meetings, right? Why do they not like meetings? Well, they hate having to like talk and take notes at the same time. It's kinda stressful. Cool. We've gotten rid of that. They also hate.

[00:16:23] Jeremy Utley: No, you haven't. I'm, I'm doing it right now. If I, well, okay. I would actually put that to you as a challenge, as a separate thing.

If you could help me and Henrik actually be more present, I find like I have to re-listen to these conversations because I'm scrambling so fast to take notes and think about that by the end of the conversation. It's as if I didn't even have it and when I listened to it, when we release, I'm like, man, that was amazing.

It's really, I mean, to your point, I mean, uh, it's the ultimate challenge of listening and taking notes.

[00:16:50] Henrik Werdelin: I, on the other side, don't ever take a single note and I am fully reliant

[00:16:54] Jeremy Utley: on the transcript. He's perfectly happy. He's perfectly happy.

[00:16:57] Henrik Werdelin: That's because of, among other things, like, uh, what's he called, Jensen from Nvidia.

He had this kind of pretty cool video out where he says, I don't even do long-term planning. The only thing I focus on is trying to be better at what I do right now.

[00:17:10] Richard White: Mm.

[00:17:10] Henrik Werdelin: And I actually thought, and if we know that, I think it was, 49% of everybody's day is lost in daydreaming, you know, like, throughout the day. And we know from happiness research that if you daydream of something bad, that's the worst. If you daydream something good, that's the second. If you do not daydream, if you're fully present, you are more happy than the two other ones. And so I am now on a 2026 kind of quest of just trying to be ever more present.

[00:17:35] Jeremy Utley: Okay. Now, okay, so as the rabbit trail coming back, Richard, to the future, you're saying nobody likes to be in meetings because they're simultaneously taking notes.

Please continue.

[00:17:45] Richard White: I will say, I didn't know that Jensen quote. That's fantastic. I actually, I'm a huge anti-fan of planning. I do not like planning. I think planning is false precision at most companies, right? Like, we have big goals about where we're going, but we don't do sprints, we don't do estimates, we don't do quarterly, na da da.

I try to get my team every day to be like, what's the highest leverage thing you could do today to, like, improve the org. Anyways, as an aside,

[00:18:08] Henrik Werdelin: I agree with

[00:18:09] Jeremy Utley: that. Okay. But hang, hang on. Sorry, this, we're gonna go on a tangent here for just a second. I agree. I agree a hundred percent. I was reading a book that one of our listeners actually recommended to me called Primal Intelligence, which shout out to Simon Wallace Jones who recommended it.

It's written by someone who runs a lab at Ohio State. I think. And he's been working with special forces in developing kind of a framework around primal intelligence. And I, I haven't gotten through the book. I kind of question the premise, but I just shared this with you all in case it's interesting as fellow nerds.

He said one of the core elements of primal intelligence is imagination, our ability to imagine possible futures and things like that. And he said he was asking these army rangers, they're in the middle of a war zone, they're responding instinctively, imaginatively, to unexpected challenges. And he asked, as if it was a difficult question:

How do you train your imagination? And the response he got from this army ranger colonel was: planning, planning, planning, planning. And what's interesting is, and I'm very much like y'all, I actually stopped reading the book right there, because I was like, I don't buy that. I don't know what you think about that, but I, I can understand how planning sparks imagination.

[00:19:25] Henrik Werdelin: Can I just cut in? One of the things, imagination, Richard, we're sorry, we're going more in tangents than we normally do. This is great. I'm also super fascinated by this book that I've talked about on this podcast a lot called, uh, Why Greatness Cannot Be Planned: The Myth of the Objective. It's by an AI researcher basically trying to talk about how we get to AGI.

But his point is all big systems come from open-endedness. This idea that we can kind of predict it and then just find the resources to get there is a fallacy in his mind. And I think in many ways for entrepreneurship, it's a very good kind of philosophy, because he basically says that you should follow, with great intensity, your interestingness, and that creates stepping stones.

And then you land somewhere good if you're lucky. And I actually think that's a better articulation of a strategy than this. If you can see it, you can make it kind of, uh, way of thinking about it.

[00:20:09] Richard White: Well, since we're doing literary references, I'll throw out, uh, one of my favorite quotes. It's Eisenhower, you know, another military guy.

And he's like, you know, planning is essential, plans are useless, essentially, right? And it's the same thing with Fathom, right? Like, we had a plan. We planned, we thought about how we'd do go-to-market, we thought about where Gen AI was going. But we didn't try to, it's the intermediate part where you try to make it too much of a concrete artifact, right?

Cool. We think a lot about the future.

[00:20:34] Henrik Werdelin: The plan becomes the funnel, the plan becomes the whole objective, instead of, like, what customers are telling you or what your intuition is saying.

[00:20:39] Jeremy Utley: That's cool.

[00:20:40] Richard White: Correct. I think that's, especially being in AI land, you know, we do product roadmap webinars for customers.

Our product roadmap does not go more than 60 days out, because we want to give ourselves also the license to, today, go: oh crap, this new model came out, the R&D team just dropped this, you know, just put this extra present under the tree. Like, we wanna be able to follow that. Right. I think it really behooves you to be very fluid in your kind of execution.

[00:21:02] Jeremy Utley: Can you talk for a second about when you think about organizing the team, there's a group of it. It sounds like your core engineering team is kinda working on core engineering challenges and then the AI team is more exploratory in nature. I think a lot of organizations, a lot of folks that we talk to, uh, especially more kind of conventional companies, it's very difficult to carve out resourcing at all for exploration.

You know, they're 99.999% core eng team. How do you think about the proportion of executing and exploring, just even from a high-level, conceptually? And how do you build the kind of ROI case to yourself, as the single fiduciary responsible for, you know, returning capital to investors, et cetera, et cetera?

[00:21:49] Richard White: It, it probably varies on what you're building, how AI-infused it needs to be, like how core AI is to what you're doing.

Right? I imagine that ratio of core engineering probably is based on how technically challenging the core product is, and, like, how much AI is in it, right? So for example, we have about a four to one ratio of core engineers to AI engineers. If we were just a more traditional SaaS app, if we didn't have this, like, big distributed system scaling problem every day,

I'd imagine that ratio would be more like two to one, right? We'd have fewer core engineers, more AI engineers. If we were more just like, you know, some other traditional software product that has small AI features, I imagine that ratio would go back to being four to one. I think the thing, though, if I'm running a bigger company, here's the actual thing I think you need to worry about when building AI features:

You don't know how to build them. And the reason you don't know how to build them is because AI has also broken QA. And what I mean by that is, like, historically, on the QA side, it's pretty easy: we write unit tests or integration tests, someone clicks the button, yep, you know, it did the thing we expected it to do.

It's really easy in AI land to get it to spit out something. The binary of, like, oh, it spit out something, used to be, ah, that worked, move to the next form, it submitted the field, sort of thing. But now you also need, almost like, there's artistry to AI, which is kind of why I like it. A lot of our AI team, like I said, they've got a background sometimes in machine learning.

They also have a background in data, and people with data backgrounds have done really well for us, because data people have a high tolerance for pain. By the way, that's because a lot of data people spend their time doing data integrity, and if you've done anything with data integrity, it's nothing but pain. Brutal.

Looking at hundreds of rows of things and figuring out why it's wrong 2% of the time. And it literally is still that. Yes, you can have LLMs evaluate other LLM outputs, and we've tried that as well, but at the end of the day, there's right now no substitute for someone who cares deeply about the problem

just eyeballing 200 answers. Sure, you can set up feedback loops with your users and stuff too, but you're gonna get so much more signal from a team that really cares deeply about the output and really scrutinizes it. And I think one of the reasons why all these large traditional software companies aren't very good at shipping AI features is that they have no taste, and because they have no taste, they just don't QA this stuff, so the model spits out something and they ship it.

They're probably also on deadlines, so they're not giving their teams time to fail and time to keep iterating. That's why I think all those features are pretty mediocre.
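The "eyeballing 200 answers" workflow Richard describes can be lightly tooled without automating the human judgment away: sample a reproducible batch of model outputs, put each one in front of a reviewer, and track the verdicts. A minimal sketch, the data shapes and sample size are assumptions for illustration, not Fathom's internal process:

```python
import random

def sample_for_review(outputs, n=200, seed=7):
    """Pick a reproducible random batch of model outputs for human eyeballing."""
    rng = random.Random(seed)
    return rng.sample(outputs, min(n, len(outputs)))

def record_verdicts(batch, judge):
    """judge() is a human reviewer (or, with caution, an LLM grader) returning 'good'/'bad'."""
    verdicts = [(item, judge(item)) for item in batch]
    bad = [item for item, v in verdicts if v == "bad"]
    return {"reviewed": len(batch), "bad": len(bad), "bad_examples": bad[:5]}

# Toy run: 1000 outputs, a judge that flags empty summaries.
outputs = [f"summary {i}" for i in range(990)] + [""] * 10
report = record_verdicts(sample_for_review(outputs), lambda s: "bad" if not s else "good")
print(report["reviewed"], report["bad"])
```

The fixed seed matters: it lets two reviewers, or the same reviewer across model versions, scrutinize the same batch and compare verdicts.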

[00:24:11] Jeremy Utley: Click one more level into "they have no taste," because if I'm at a big company, that could just seem like a gratuitous insult, but I think there's real substance there.

How do you define taste and how could someone develop it if they want to develop it?

[00:24:27] Richard White: I think we have to move to a world where you figure out how to QA with qualitative feedback, right? Not something that can be automated. And that kind of qualitative feedback in a QA process happens at smaller companies almost organically,

because you don't yet have the apparatus; you're not investing the time in these big unit-test suites. But you tend to move away from that as you scale. So when I say taste, I basically mean you need to come back and reemphasize the qualitative side of testing whatever feature or output you're getting from an AI product.

[00:24:59] Henrik Werdelin: I would imagine that the other thing is to have a little bit of consistency, you know, because it's so easy to produce so many different things. You end up having some of these tools thrown at you that just do a little of everything. And I've always been thinking about what a founder's role really is.

In many ways I think about it in the context of Steve Jobs. My sense is that Apple was very consistent for a long time because people basically asked themselves, "Would Steve Jobs like this?" It became a really easy way to QA stuff, because you could just go, he probably wouldn't like it, or he would like it, and if he would like it, it went in.

So in many ways I'm increasingly obsessed with this idea of narrative becoming source code. If you don't have a strong story about what you want to be as a company, and you just define yourself by what you do, you have a very difficult time sharing that articulation, and therefore people don't really know whether a feature that works this way or that way should go in or not.

[00:25:54] Richard White: It's funny you say that, because my knee-jerk answer to your earlier question, if you're a large company, how do you get taste, is that if someone in the executive suite has that taste, they need to be stepping into the QA process. Yeah. They need to start doing it themselves, basically being the backstop and creating a culture around that, like Steve did.

Just like Steve did. Like Steve did. Yeah. Yeah.

[00:26:14] Henrik Werdelin: It's so funny. I think it's so important. I remember at BarkBox, for example, we had this rule called Ban the Bone, and Ban the Bone meant that I would have a bit of a theatrical hissy fit if I saw somebody design anything with a bone.

Because obviously, when you design stuff for dogs, the most obvious thing to do is a bone, right? Or a paw print.

[00:26:31] Jeremy Utley: Ban the Bone.

[00:26:32] Henrik Werdelin: And I just didn't want designers to be that lazy. Yeah. It just meant

[00:26:35] Richard White: that they hadn't really done their homework, right. Somebody

[00:26:38] Henrik Werdelin: asked me to sign off on something.

Here's a paw print and a bone. That's so good. And so Ban the Bone became a thing, but people didn't remember the rule; they remembered that Henrik would get upset. So sometimes when a new designer came on board and designed something with a bone or a paw print, people would go, "Oh, have you shown this to Henrik?"

"Better not show this to Henrik." Right. And I think it's increasingly gonna be important, to your point, to create this almost folklore around these systems. Because obviously not only will people have to understand it, models will have to understand it too.

[00:27:09] Jeremy Utley: Yeah. Well said. So Henrik, I love your phrase; just to put a double click on it:

Narrative is source code. I think it's beautiful. And that word "code" reminded me of something, oh, there goes my notebook, something we talked about with Wade, actually, Richard, which I'd love to get your thoughts on. So back in, what was it, 2023, when GPT-4 came out, Henrik, that Wade declared code red at Zapier?

Yeah. And he basically said everybody take two weeks off work, because the capability improvement between 3.5 and 4 was so significant. Wade and Mike Knoop, his co-founder, basically said, we need everybody to stop everything they're doing and go deep. I contrast that slightly with your story.

It sounds like you had an intimation through the grapevine, so to speak, that AI was coming down the pipe, and you actually said, what will we build if we assume somebody's gonna do that thing? So you didn't declare a code red, in a way, but you kind of built with that mindset. And then there's a third piece of this puzzle I'd love for you to riff on, if you're willing.

One strategy I've heard people talk about in enterprises is: what would we be if we could start today, knowing that AI is now here? And that's actually, I think, a really difficult thing to imagine. But think about those three postures. You started from knowing it's coming: what should I build?

Wade said, oh, crud, it's here: code red. And then there's this kind of existential "what would we do if we started over?" How should an organization think about AI and its existential connection to the business?

[00:28:50] Richard White: Okay. So to go back to my "we don't like plans, but we had a plan," which was, you know...

[00:28:56] Jeremy Utley: Right, right.

Of course.

[00:28:57] Richard White: We had almost a four-step plan for Fathom, going back five years. One was: okay, we believe generative AI is coming. It's not here yet. What should we build before it gets here? We're gonna work on all the things around the engine of the car: distribution channels, video streaming, transcription, all the plumbing.

We think that part's really hard. We intentionally didn't try to build our own models, because we knew they were coming. We saw a bunch of competitors hire linguists and ML people, and we thought that was the wrong move. Then GPT-4 gets here, right? Great, we can drop that engine in: it can take notes, it can use all that infrastructure.

We always thought the step after that, as we were talking about earlier, was AI not just as a scribe but as a second brain. So we kind of built almost another car with another engine, a bigger car, a bigger engine cavity, thinking: okay, when it gets to the threshold where we can horizontally scale it cheaply enough to do massively parallel analysis of thousands of meetings at once, to answer any question you've got, great.

Now we're in this phase. And then there's a phase beyond that where it's not just answering questions you ask; it's gonna start pushing you the answers. The world we're really excited about is the world where you get off the meeting and it's done 80% of your action items for you. Also, by the way, you're in a third of the meetings you

used to be, because now you just have an agent that runs around, finds all the topics you care about, and feeds them back to you at the end of the day. So we haven't really had to remake these things, because we already had the plan: when the model hits the capability threshold, great, smash that red button.

It's time for us to pause the assembly line and rework things.

[00:30:34] Jeremy Utley: So you've kind of pre-built some of this infrastructure, so to speak, and you're waiting for the new model to drop in. So is that what happens? You know, Gemini 3 comes out, and you go, let's drop in Gemini and see if it's got enough horsepower?

Is that what you're doing?

[00:30:47] Richard White: Basically, yeah. There are certain limitations we run into, and we're always looking at whether a new model unlocks those limitations. A good example of this is GPT-5. GPT-5, I think, has been kind of panned in the marketplace, right? Not the most exciting launch. To us, it's the most exciting launch we've seen in probably 18 months.

[00:31:06] Jeremy Utley: Why so?

[00:31:06] Richard White: Because it solved hallucination rates. And for us, at scale, when we're trying to find needles in the haystack across 2,000 meetings, you're curious about something that only happened 0.1%, 0.5% of the time. The hallucination rates pre-GPT-5 were high enough that, well,

two thirds of what we'd return to you would be hallucinated. And it's interesting: hallucination rates are not a thing that matters if we're just writing notes, because it doesn't get that wrong. But when we're doing that search use case, it matters a lot. Mm-hmm. So we are on the lookout for certain things.
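The base-rate arithmetic behind "two thirds hallucinated" is worth making concrete: when the true event is rare, even a small per-meeting hallucination rate can outnumber the real hits. The numbers below are illustrative assumptions, not Fathom's actual figures:

```python
# Why small hallucination rates dominate rare-event search.
meetings = 2000
true_rate = 0.001             # the thing you asked about truly occurred in 0.1% of meetings
false_positive_rate = 0.002   # assumed: the model hallucinates a match in 0.2% of meetings

true_hits = meetings * true_rate              # 2 real matches
false_hits = meetings * false_positive_rate   # 4 hallucinated matches

precision = true_hits / (true_hits + false_hits)
hallucinated_share = false_hits / (true_hits + false_hits)
print(f"{hallucinated_share:.0%} of returned results are hallucinated")
```

With these rates, 2 of the 6 returned results are real: two thirds of what comes back is hallucinated, even though the model is "wrong" in only 0.2% of meetings. That is why a hallucination rate that is harmless for note-taking breaks needle-in-haystack search.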

The other thing is, a lot of times we know we can make something work today, but it's too expensive and too slow, which are basically the same thing in AI land, right? Cost and speed are the same thing. You don't want to ask Fathom a question and wait 30 minutes for the answer. But if we can make it work in 30 minutes, we can put it on the shelf, and if we wait another LLM cycle or two, it'll get down to 30 seconds.

And that'll be both a latency and a cost we can stomach. So sometimes we use expensive models to prototype: okay, if the priciest model can do it, we can see it. We can't ship it yet, but it's coming.

[00:32:13] Henrik Werdelin: Just going back to your insight about how we as users should use a product like yours better.

I read somewhere that if you have a partner for a long time and you do CAT scans of their brains, there are whole portions you've basically turned off because you assume your partner remembers it, right? So if you're really good at maps and I'm not, then I basically don't have to deal with maps anymore, because my partner handles it.

Is the way to think about it that these systems you're creating, these libraries of everything that's been said in meetings, are becoming this kind of extended memory? And if you have that, what's the stuff people are not doing today that they would get a lot of benefit from?

[00:32:58] Richard White: The thing we see a lot is that you come to Fathom and try to get Fathom to approximate your current workflow, which Jeremy illustrates well, right? Taking notes the whole time; afterwards he's gonna clean them up, fix them, et cetera; he's gonna spend a lot of time with them so it all sinks into his brain.

When people first come to Fathom, that's what they get excited about: oh wow, it took really good notes for me. But very quickly you realize, I don't even need those notes now. When I need them is when I go meet with Jeremy again in two weeks. So I don't even review my notes after any meeting.

I just have the meeting and go on to the next thing. When I have the follow-up, or when I need to do something about that meeting, I go back to it and ask, hey, what did we say we were gonna do here? Or I watch the last two minutes, or I read the summary. Then I go into the next call having just reviewed that.

So it's kind of a just-in-time memory system. Instead of putting in all this work up front to prime your brain to remember these things in the future, just come back to it when you need it.

[00:33:54] Henrik Werdelin: I tried a fun use case just recently. I write a Substack, like other people have a podcast, and it has the same format every time, these eight to ten points, very structured. But it's a bit of a chore to make, because I want it to be high quality, blah, blah.

So what I do now, as you suggested, is feed it my last 20 newsletters and then tell it: go through all the meetings I've had over the last month or so, take anything interesting enough that I can package in the same way, and give me a source. And hey presto, of course, it finds all these interesting things that Jeremy has said.

I'll then repackage them in my words, and it works very well.

[00:34:32] Jeremy Utley: I've wondered about that. I feel like a lot of my ideas are in your newsletter; what's up with that? Okay, when you say "go through," what is it actually going through? It's

[00:34:40] Richard White: good

[00:34:40] Henrik Werdelin: examples. It just goes through all my meetings. I just,

[00:34:42] Jeremy Utley: because I don't remember where some of it comes from, but Granola and Fathom, what does it mean to go

[00:34:46] Henrik Werdelin: through meetings. Yeah. Like, you know, like I use this local model because some people get freaked out. You've used cloud stuff and you probably have a solution for that. So I use Mac Whisper, which thinks no sort thing.

[00:34:56] Jeremy Utley: It's that kind of product; it's just taking notes all the time.

[00:34:58] Henrik Werdelin: It just records everything and stores it, right? So this conversation will be in there, the whole transcript.

[00:35:04] Richard White: I'm imagining

[00:35:05] Henrik Werdelin: To Richard's point, in two or three months, when I've forgotten all these smart things that Richard said, it will find that point, and I'll go, oh yeah, remember that?

That actually is a really interesting point; I should put that in my newsletter. It's a workflow that just couldn't have happened before, because, to your point, I didn't have the transcript, and I didn't even know this was something that could be done.

[00:35:26] Richard White: And I think this is the evolution of AI: first as an assistant, a scribe doing a specific job for you.

Then AI as a co-creator with you, right? Now I think we're very much in the co-creating phase: I'm good at some parts of the process, it's good at other parts. It's much better at memory than we are; we're much better at taste than it is, right? So I'm guessing you have some editing process: it gives you a bunch of stuff and you're like, eh, these two are good, those two are bad.

Right. Et

[00:35:50] Henrik Werdelin: cetera.

[00:35:50] Richard White: And then I think the next phase, honestly, is it starts maybe passing us, and it starts suggesting things with less co-creation. We become more like a Steve Jobs, just approving: thumbs up, thumbs down.

[00:36:03] Henrik Werdelin: Well, can I give you one thing that I just thought was interesting?

[00:36:06] Richard White: Yeah.

[00:36:06] Henrik Werdelin: I've been talking more on this podcast than you, so apologies for that. One of the companies I'm involved in is called Audos, and we help people build startups with AI. We had a journalist from a major publication go through this process, and he ended up building something that basically helped him navigate his parents getting older.

Like what kind of nursing homes there are, stuff like that. So the system built the website and all these different things, and then he started to get real customers through it. And I think what he then realized was: this might actually be a decent business, but not for me. I don't wanna build that business.

So it's not just about taste, it's also about preference, right? I don't want to be the one who builds that specific thing; it's just not for me. You used "taste" earlier, which I think is such a profound thing, and it very much dovetails with preference.

And of course, AI can't pick your preference because it doesn't know you well enough yet.

[00:37:00] Richard White: Yet. Like all things in AI, you gotta throw "yet" on the end, so you're in the future, right? Yeah.

[00:37:07] Jeremy Utley: One thing I'd be curious about, Richard: can you talk for a second about organizational rhythms, rituals, mechanisms to drive exploration and adoption?

One word that actually has not come up yet is adoption. You haven't spoken about it once, and that's not a criticism; for some people, their entire thing is adoption. Yours clearly isn't. As an example, we had Hunza Ani, who's the Chief Strategy and Innovation Officer at Maple Leaf Sports.

So, over the Toronto Raptors and the Maple Leafs and their soccer team and their arenas. He has a running list of ideas submitted by employees, with thumbs up and thumbs down on all of them, and once every quarter they have what they call a build day: they take the top three vote-getting ideas, bring the people who submitted them into what they call a V1 lab, and build each idea soup to nuts in a day. Every quarter they do that.

I thought it was a really elegant way of demonstrating what's possible and stimulating engagement. What do you do to spark the imagination of your team?

[00:38:12] Richard White: One of the things we got really excited about last year was this idea of AI ops. So we have an internal goal of getting to a hundred million in revenue with fewer than 150 employees.

That comes partly from interviewing all my startup friends who have gotten well beyond that point. Every time I tell them we're a 90-person company, their eyes kind of glaze over, they get rose-colored glasses, and they're like, oh, that's when it was fun. I asked them all, when did it stop being fun?

They all say somewhere around 150 to 200. But also, just from a conservation-of-momentum standpoint, how do you stay nimble and move fast? You wanna stay small. So for that reason, I got really excited about having an internal team that partners with various other functional teams, with

almost a consultant-esque mindset: let's look at all the stuff you're doing, what's the lowest-value 20% of it, and how do we automate that with AI, things like that. It turned out to be really challenging, actually, and here's why. We found two things. One is that the scope of these things was often somewhat ill-defined, and having poor scope makes it really hard to build an AI project.

The other thing we found is something we run into building AI features, what I call the LLM treadmill. The LLM treadmill is: you build something on Gemini 2.0, then 2.5 comes out. It's not really forward compatible; you generally have to build it again. And oh, by the way, as soon as 2.5 comes out,

which was six months after 2.0, Google starts removing capacity from 2.0 very rapidly, because there's finite compute in the world, right? Especially GPU compute. So you constantly have to keep this thing up to date.
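The treadmill Richard describes is one reason teams often route every feature through a single model registry instead of hard-coding model names at call sites: a deprecation then becomes one config change plus a re-run of the eval suite. A minimal sketch; the model IDs and task names are illustrative, not Fathom's actual setup:

```python
from dataclasses import dataclass

# One place to swap models when a version is end-of-lifed.
# IDs are illustrative; real ones change with each provider release.
MODEL_REGISTRY = {
    "notes": "gemini-2.5-flash",   # was "gemini-2.0-flash" before deprecation
    "search": "gpt-5",             # chosen for its lower hallucination rate
}

@dataclass
class Task:
    name: str
    prompt: str

def model_for(task: Task) -> str:
    """Resolve the current model for a task from the registry."""
    return MODEL_REGISTRY[task.name]

# When 2.0 capacity is pulled, only the registry entry changes;
# every call site keeps asking model_for(task).
print(model_for(Task("notes", "Summarize this meeting")))
```

The indirection doesn't remove the rebuild work, prompts usually need re-tuning per model, but it keeps the blast radius of an EOL to one file plus the qualitative QA pass.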

[00:39:53] Jeremy Utley: software update.

[00:39:54] Richard White: Yeah. It's the most aggressive EOL cycle I've ever seen in my life.

Right? Imagine having to rebuild a feature every six to nine months.

[00:40:02] Jeremy Utley: Hmm.

[00:40:02] Richard White: And oh, by the way, you have like three weeks to rebuild it. I mean, we're in a situation right now where Gemini 3 came out, they're already deprovisioning 2.5 capacity, and we don't even have an API for 3.0 yet. Right? Wow. It's insane.

A lot of our initial attempts to build stuff internally were using these kinds of models, and we very quickly found that we don't have the capacity for these things. They're nice little wins, not huge movers of ROI, but decent little movers.

The ROI wasn't worth trying to figure out how to keep them up to date. So we shifted: great, we're just gonna be a buyer of AI products. Lord knows there are enough of them out there, right? Why build internally when there are tons to shop? Instead we've moved to: how do we get better at evaluating AI vendors?

Interesting.

[00:40:50] Jeremy Utley: And getting

[00:40:51] Richard White: more rigorous about it.

[00:40:52] Jeremy Utley: Instead of having a team building, you've decided, build versus buy, we're gonna buy. The challenge is: how do we make buying so easy and so quick that it's frictionless? Is that it, basically?

[00:41:04] Richard White: Yeah, basically. I mean, like probably most folks,

18 months ago we were just buying everything we could get our hands on, right? We were like a kid in the candy store. And then, what's the

[00:41:14] Jeremy Utley: process to validate?

[00:41:17] Richard White: That's the thing. And I don't think we were as rigorous in the process to validate. Internally, we do a lot of work to validate AI features;

we take a lot of pride in the quality of our AI's outputs. What we learned in the marketplace is that that's not always the case, right? A lot of vendors maybe don't have that same pride, so they'll sell you a thing and you find out, wild, it can't do that thing.

So, similar to how we had to develop taste and a really good qualitative QA process for stuff we built internally, we had to do the same thing for stuff we buy.

[00:41:46] Jeremy Utley: Mm-hmm.

[00:41:46] Richard White: Right. And so

[00:41:47] Jeremy Utley: like QAing your AI purchases.

[00:41:51] Richard White: Yeah, aggressively. We almost won't buy anything unless you give us a 90-day pilot, at a minimum.

We just refuse. And we do the same thing for our customers, right? We now assume you won't trust us either; Lord knows there are enough vendors out there. So we do 90-day opt-outs as well, just by default. We make sure we have 90 days to pilot, we make sure we have a test plan, we make sure we know who's gonna be testing it, et cetera, et cetera.

Versus before, honestly, it was: the demo looked like it worked, right? Ship it, put it in front of customers. It varies from product to product, but we have a lot more rigor now in that buy-evaluation and POC

[00:42:24] Jeremy Utley: process. You know what's interesting, just as kind of a, this is a meta observation about innovation in general, but there's a question about what do we have to make really easy.

And I think that's a kind of first principle I gleaned from something I was listening to. If you haven't listened to Jensen's interview on Joe Rogan, which I think just came out this week, it's one of the best interviews I've ever heard. I mean, it's truly fantastic: Jensen's origin story and NVIDIA's origin story.

It's truly incredible. I'm kind of a sucker for origin stories, I love hearing how companies get made, and Jensen tells a story I've never seen written about anywhere. Basically, they got to the point where they had a new chip design and one purchase order.

They knew the typical way of fabricating: you send your spec to a fab, they send it back, it's almost always buggy, and then you've gotta redo it again and again, et cetera. And he said, we looked at our cash in the bank; we did not have enough money. We were gonna run out of money if we had to go back and forth.

So he said, we couldn't do it the traditional way. He said, I had heard of this product called an emulator, which would basically take your chip specification onto the device and act like it was the chip. And he said, the only way we're gonna preserve money is if we have an emulator and do all the QA ourselves,

then send the final design to the fab and tell them, put it straight into production. Which, by the way, no one had ever done. Had you heard this story? I hadn't; it's amazing. He said the emulator cost 50% of their remaining cash, 50%. And when they reached out to the company, it had gone bankrupt because it didn't have any customers.

He said, oh no, I can't buy it. They said, we have one emulator in inventory if you want it. So he bought the one they had out of their inventory, despite the fact that they were bankrupt. They did all of their QA on the emulator, then sent the final design to TSMC, who was small at the time, and said, go directly to production with that design.

I have chills, because he said it ended up working. And to me, the amazing insight there is recognizing the thing that has to be made as cheap as possible: the back and forth. In your case, I'm projecting, and I'd love for you to clarify if not, but it sounds like what you realized is it's too difficult to build.

We wanna buy, and what we've gotta do is make buying as easy as possible, then rigorously QA whatever we do buy, but we don't want anybody wasting time or effort on the new-tool purchase process. Is that accurate?

[00:45:02] Richard White: a hundred percent. Yep.

[00:45:04] Jeremy Utley: That's really cool. A hundred percent.

[00:45:06] Henrik Werdelin: Maybe that's a good time to wrap. I think we're coming to the end of the hour, although it sounds like Jeremy and I have a bunch more questions, so maybe we can invite you back to the pod another time. I hope we didn't scare you away.

[00:45:18] Richard White: I'd be honored to come on again.

[00:45:20] Jeremy Utley: It's super fun to get to talk to you.

Thanks for coming on. Yeah, this was super fun. We love what you've done, and hopefully this drives some more spotlight and attention to your great work. Take care. See you. It was

[00:45:29] Richard White: a lot of fun.

[00:45:29] Jeremy Utley: Thanks guys.

[00:45:30] Richard White: Thank you.

[00:45:30] Jeremy Utley: Thank you so much.

[00:45:31] Richard White: Bye now,

[00:45:32] Henrik Werdelin: Mr. Utley, interesting conversation. It's always so interesting to talk to entrepreneurs who are doing something cool, because they've obviously thought a lot about these things.

[00:45:42] Jeremy Utley: Well, you know, what I really like about a couple of the conversations we've had recently, if I think about Ilya, who anticipated ChatGPT five years too early, as one example: Richard is similar in that, because of the pools he swam in, so to speak, he was anticipating gen AI long before the rest of us were surprised by it.

And to hear him talk about the structural decisions he made about the product he wanted to build, basically assuming that model makers were going to drop in the engine, and building the product around that, I thought was a really fascinating approach to company building. And I couldn't help but wonder:

we talk a lot about existing enterprises and how they grapple with adoption and things like that, but "what should we do if we take as a premise that models will improve?" is, I think, a very different question from "what should we do given what's here?" And folks right now are still trying to catch up with what's here.

And there's a whole other stratum of strategy, so to speak, which is: what should we do anticipating what's gonna come down the pipe?

[00:47:01] Henrik Werdelin: What I think is such an interesting dilemma that a lot of companies, newer and older, are facing right now is that the first two steps of getting introduced to AI seem pretty clear.

First, you need to upgrade the capabilities of the organization, and second, you need to look at which workflows can be done with agents. But those two are almost all cost-reduction things. What we talked a lot about is: then what do you do with all this unlocked energy that you might get out of those exercises?

And I think there's this chasm between boxes one and two and box three, which is basically living in this new world where, if AI is electricity, you went from working without electricity to having a company in an age with electricity. Jumping that chasm is complicated, and some people do it as Richard does, where he basically sees where the puck is going.

Then he starts to build towards that already, even if it hasn't been released yet. And I think it's a way of saying you can't just build from the reality that's there today. With models releasing so fast and gaining so many more capabilities every time they come out, you already have to figure out what GPT-6 or GPT-7 might do and make some bets towards that.

[00:48:24] Jeremy Utley: It's such a fun and interesting and scary time for strategy, I think, because the last era trained folks to think very incrementally about what the business could be, and we're in an era of uncapped intelligence. You know, as Ilya says, or Dario, maybe one of them says: a country of PhDs in a data center.

When you think about a country of PhDs, a country of geniuses in a data center, being available to you as a company; I mean, what Richard said was we now have college seniors and doctoral students who have 800 hours a day. That's another way of thinking about it. But very few companies can even allow themselves to imagine:

What do we do? With somebody who has 800 hours a day with a totally, even if they only had one employee, a fully capable employee that had 800 hours a day.

[00:49:17] Henrik Werdelin: Let me ask you a question. As I was just saying that other thing, I thought, that might actually be wrong, because there are these two competing thoughts that we were just introduced to.

We talked about how you have to be able to figure out where the puck is going. Richard was talking about how he was building software knowing that at one point he'd be able to run models on the client side, even though at the time he couldn't do it, so he clearly was seeing where the future is going. Meanwhile, he, and we, were kind of freaking out about Jensen's statement about not having long-term plans.

So he talked about how he only has a 90-day roadmap, he doesn't do long-term planning, all these different things. How do you think people square that? I think I have an answer, maybe I'll just say it. I think it's complicated for people to square this thing of trying to be visionary about where things are going, but also being nimble enough to work in an agile way.

And so there seems to be this tension, right? And where I kind of come back to is: the only thing you can stay true to is being very good at having a narrative about what you want to be. Who do you wanna serve? Because that can stay consistent. And then you kind of have to have an overall narrative of, what is it, you know,

[00:50:39] Jeremy Utley: what

[00:50:40] Henrik Werdelin: is

[00:50:40] Jeremy Utley: the change you wanna bring to the world?

But not

[00:50:42] Henrik Werdelin: in a roadmap

[00:50:43] Jeremy Utley: terms, but more kind of like BARK. It's like, we want to make dogs and the people who love them happy.

[00:50:48] Henrik Werdelin: I say this as a statement, but I really mean it as a question. Yeah. 'Cause I think it's just so complicated with long-term strategy, where it's like: be agile, you shouldn't be an army, you should be like a SEAL team, but you also can't think too far ahead. How do you see that dilemma?

[00:51:04] Jeremy Utley: No, I really agree, I resonate with what you're saying. It reminds me of the Viktor Frankl quote, not to get too somber, but, you know, someone who has a why can survive almost any how. I can't remember it exactly, and I hate to invoke him so kind of, uh, superficially, but the why matters a lot. And I think one of the things I took away is a ritual that Richard didn't reference when I asked him about rituals, but he referenced it earlier.

He's meeting with the AI team once a month, and he's told them: 60% of the stuff that you do, I hope comes from my wishlist and from the team's wishlist, but fully 40% of the stuff you do, I expect you to do because I want you to follow your interestingness, right, to use your word. And so the why, obviously, is gonna be in service of the mission of the organization, but folks who are tightly coupled to the mission of the organization can probably be given a lot of leash, a lot of leeway, to explore lots of different kinds of things.

And especially if your expectation is, hey, 50% of the stuff you're gonna try isn't gonna work, it seems like you've created the conditions to succeed, but

[00:52:09] Henrik Werdelin: I think that statement, and this is why I think increasingly AI is both the capabilities but also a way of thinking, right?

[00:52:14] Jeremy Utley: Hmm.

[00:52:15] Henrik Werdelin: That statement is true also because he talks about basically what seems to be a Dunbar theory of the firm, right?

Which is basically

[00:52:23] Jeremy Utley: right.

[00:52:24] Henrik Werdelin: He believes that a company should be 150 people max. You'd like to get to a hundred million dollars of revenue with 150 people. That's why other founders seems that basically the intimacy breaks down, or everything gets complicated, right? So he can, I think, deploy all this trust because he kind of understand the 150 people.

And he said like one out of four people are these R&D AI engineers, right? And so let's say he only has a hundred people now; you know, it's probably a handful or two of people that he has to trust. So the trust is probably immense. And so there's this interesting kind of issue that when we talk about how to think about AI, about how we build our companies with AI, there are these underlying assumptions. A lot of companies, I think, have organizational debt to pay. They don't have trust, they don't have originality, they don't have these other things, you know, entrepreneurship and soulfulness. There are all these things that they'd probably be complaining about not having in the past, but it didn't really matter as much. Now, with AI, it suddenly matters a lot, right? And so it's fascinating.

[00:53:34] Jeremy Utley: Wait, what matters a lot? Say it again. With AI, what matters a lot?

[00:53:37] Henrik Werdelin: A lot of these capabilities that companies have always, uh, been envious that startups had. They didn't matter as much as they do now, where more people can do much more with less.

So suddenly the issue just becomes much more magnified, all these kinds of hidden issues: bad culture, a lack of originality or innovation or, you know, creativity, not having people who are resourceful and can do a lot with a little, very rigid departmental structures that mean people don't feel they can go into other swim lanes.

All these things that we've all talked about in innovation and entrepreneurship in a bigger-company context for a long time are now becoming super obvious things that companies will also have to fix. What's weird about it is, of course, these are not AI issues. These are secondary AI issues, because they become a much bigger problem in an age of AI.

[00:54:31] Jeremy Utley: Right, right. Well said. Well said.

[00:54:34] Henrik Werdelin: I think, you know, just to summarize some of the things that I got very fascinated about: he basically has a frontline-deployed R&D team, which is an interesting way of thinking about it, that basically doesn't speak in ideas and workshops. They work in code and prototypes.

So I think that was just a fascinating thing to hear about. I think, you know, he talks about the Jenga model, which is basically: you take AI models and use cases, the

[00:55:00] Jeremy Utley: Jenga

[00:55:01] Henrik Werdelin: and then, like Jenga blocks, you press a little bit and see which ones are loose and which you can kind of pick off easily.

[00:55:05] Jeremy Utley: I loved his point there, by the way, about the Hail Mary: that every once in a while you throw a Hail Mary and it works.

[00:55:11] Henrik Werdelin: Yep.

[00:55:11] Jeremy Utley: And if you aren't throwing those long passes, and if you aren't taking big bets, you don't see the stuff that could surprise you.

[00:55:18] Henrik Werdelin: The last thing that really resonated with me, which is kind of also a softer thing, is what he said about people who are not making good products. It's not that the QA process is broken in the age of AI; they have this issue because they have no taste.

And so it is a super interesting question: who in an organization has the permission to be the one whose taste we're following? And for startups it's easier, 'cause the founder obviously has a very big voice. But if you walk into a P&G or whatever, is it the CEO? Is it the CMO? Is the head of strategy the tastemaker? Or is it all of these people creating consensus?

Like, who is this person whose taste we're gonna go for? And how do we define that? And I think that's kind of back to this whole point about your narrative being your source code. If you can now make everything, what do you want to make, and what is the thing your taste picks out, saying: this is the thing that I want to put out into the world? I just find this to be a super fascinating issue and problem to solve for the future.

[00:56:30] Jeremy Utley: It was a great conversation. Thanks to Anna Eva for introducing us to Richard. And other listeners, if you have interesting guests that we should be speaking with, put us in touch. We love learning from the folks who are at the front lines,

[00:56:44] Henrik Werdelin: and for everybody who has been through the whole episode and is now listening to us go deeper, thank you for staying all the way through.

And for the ones who are just listening to this as a teaser, you should really go and listen to the podcast. The whole interview is really incredible, and it's filled with a lot of good food for thought, so you should go and listen. And with that, Jeremy and I only have one thing to say, and that is: bye-bye.