Bryan McCann, CTO and co-founder of You.com, joins Beyond the Prompt to discuss how AI is changing the way people search for information, work day to day, and think about productivity. Drawing on his experience as an AI researcher, Bryan explains how he shifted from measuring productivity by writing code to focusing on making machines work continuously by “keeping the GPUs full,” and how that mindset applies to individuals, teams, and leaders.
In this episode, Bryan McCann joins Henrik and Jeremy to explore how search is evolving from simple queries into more conversational and agent-driven systems, and why prompting is likely a temporary skill. Bryan shares how his definition of productivity changed as an AI researcher, moving away from doing the work himself and toward designing plans and experiments that machines could run continuously.
The conversation expands to leadership and organizational design. Bryan explains why helping others learn how to work with AI became his highest-leverage activity, and offers a simple rule of thumb: try to get AI to do the task first, and treat anything it can’t do as an interesting research problem. Henrik and Jeremy connect this to Bryan’s view that organizations may increasingly resemble neural networks, with information flowing more freely and decisions less tied to rigid hierarchies.
Key Takeaways:
YOU: You.com
Bryan's website: bryanmccann.org
LinkedIn: linkedin.com/company/youdotcom/
00:00 Intro: Keeping the GPUs Full
00:22 Meet Bryan McCann: CTO & co-founder of You.com
00:43 Why Search Is Breaking - and Why It Becomes a Skill
01:41 From Search to Agents
03:18 The Case for Proactive, Context-Aware AI
04:30 We Don’t Need New Hardware - We Need Trust
05:43 The Trust Problem of Always-On Listening
07:57 Trust as the Real Bottleneck (Not AI Capability)
09:52 Delivering Immediate Value to Earn Trust
12:13 Business Models and Escaping the Attention Economy
17:27 What “Agents” Really Mean - and Why the Term Will Fade
20:37 Productivity, Parkinson’s Law, and Keeping the Machines Running
23:52 Scaling Yourself vs. Scaling Your Team
29:57 Building Culture: Automate, Throw Away, Rebuild
35:46 Designing Organizations Like Neural Networks
45:02 Recruiting for Initiative in an AI-Native Organization
49:18 The debrief
📜 Read the transcript for this episode: podcast.beyondtheprompt.ai/heres-how-to-know-if-youre-getting-the-most-out-of-ai-with-bryan-mccann-cto-of-youcom/transcript
[00:00:00] Bryan McCann: My main measure of productivity that I learned very early on was my goal is to make the machines work for me as much as possible and keep the GPUs full. My goal during the day is to make a plan, I guess you might call it a strategy, although I didn't think of it as doing strategy, right, and maybe that's the shift we all kind of need to make.
Hi, I am Bryan McCann. I'm the CTO and co-founder at You.com. I was an AI researcher in a past life and a philosopher before that, so I'm super stoked to talk to Jeremy and Henrik today about AI, where we're going, and everything it means for our teams, our organizations, and ourselves as people.
[00:00:44] Henrik Werdelin: I was one of your early You.com users. I had the plugin, and it introduced me to this whole idea that search didn't have to be "I search something," but could be a bespoke answer created for me.
Right? Which was kind of neat. You've obviously spoken in many places about this change that's coming to search.
[00:01:08] Jeremy Utley: Mm-hmm.
[00:01:08] Henrik Werdelin: If you were to offer people advice on how to think about the best way, for now, of attaining the information that they need.
[00:01:18] Bryan McCann: Mm-hmm.
[00:01:18] Henrik Werdelin: And normally there would be only way you search better in Google.
Obviously increasingly we know that it's like more of a conversation, but you probably thought a great deal about. How do you quickest, most efficient, get the best out of this, this information now rendered through these agents? Um, what kind of your advice on, on how to become better at that?
[00:01:41] Bryan McCann: Well, it's a skill right now.
Just like many of us had to learn at some point how to use Google, and that was a form of searching for information people weren't really used to. And it's still true that depending on how you phrase your query, you're gonna get different types of information, and you can encode all sorts of biases into that.
But with prompts, how detailed you can be, and how important it is to get some of those details right, matters even more, given that agents now will run for minutes or perhaps even hours in some cases. So if you can describe accurately the problem that you're after, I think that's a good and useful skill, but I really hope that the need for that skill goes away very quickly as well.
Most of what I try to do on the deep research side of you.com and the automated research side is expanding your queries, rewriting your queries, discovering the unknown unknowns, iterating on searches over time so that you don't have to. My real hope is to move away from search to chat or prompting, and then move away from that entirely as well, so that there is no need.
[00:03:13] Henrik Werdelin: So it becomes proactive. Kind of like information presentation, served up.
[00:03:17] Jeremy Utley: Yeah, yeah.
[00:03:18] Bryan McCann: Yeah. It seems entirely intuitive to me that it could be inferred from most of what you're already doing and most of what you're already typing in. I don't see why you need to go to a place, a special interface, and type something in, in this very particular way, and embed or encode all of your thought process into a few sentences, and then hope that, you know, the magic thing brings information back.
It seems like based on everything you're doing, you could infer that already.
[00:03:55] Jeremy Utley: So if I can track with your vision of the future there, Bryan: is the idea that you have a context-aware AI that's basically saying, hey, I was listening in on your meeting, it sounds like you needed to do these three things,
here's a first draft I've done of whatever should be done? Right? It's basically context-aware, and all of a sudden, in the context of the conversation, it no longer requires you to take initiative. It takes initiative, and then offers you a draft, or maybe even iterates on the draft.
Is that the kind of future you're imagining?
[00:04:31] Bryan McCann: A hundred percent.
[00:04:31] Jeremy Utley: Yeah.
[00:04:32] Henrik Werdelin: I mean, I've hacked something together. I use this little device, Limitless, that records meetings, and it already picks up the to-dos. Mm-hmm. And then, to automate it, I created a little thing that talks to my to-do list, so it just puts them in there.
[00:04:47] Bryan McCann: Mm-hmm.
[00:04:48] Henrik Werdelin: When that happens, you're like, holy shit, this is just incredible. Where do you think we are in that? I mean, you guys were in many ways, as I understand it, before the Perplexitys and the Googles of the world, and I realize your business has evolved since my favorite Chrome extension, but you were quick to see that that's where the world would go.
Where are you, and how quickly will we move to what you're suggesting?
[00:05:15] Bryan McCann: I think it'll happen fairly quickly. I actually don't think that we need any new hardware. I'm not on the new-hardware boat. It seems like we have everything we need. There are already plenty of devices around us.
[00:05:30] Henrik Werdelin: You're thinking like the phone, like you don't need a little device to record.
[00:05:33] Bryan McCann: You already have a laptop, you already have a phone; everything's already happening there anyway. I don't see why you need a specific piece of hardware sitting on a table.
[00:05:43] Jeremy Utley: How do you think about... I agree with you that there are listening devices everywhere. There's something interesting about trust, where right now I have to have the wherewithal to open a Notion transcriber, for example, or, mm-hmm,
buy a Limitless or something like that. By way of analogy, I'm always somewhat skeptical whenever I see the popup come up that says, turn on location services. I'm like, why do they need to know where I am? You know? And so often I find myself not doing that. To your point that the hardware is already here: it does require a level of trust and a level of opt-in, that basically I'm comfortable being listened to all the time, or effectively so.
Right? And I agree that if that's true, then we have all the hardware we need. I wonder how much of a hurdle or how much friction it is. Correct me if I'm wrong, but I think basically what we need is for humans who've got these devices to say, sure, listen to everything.
Right? Is that what we need?
[00:06:46] Bryan McCann: Why would you trust a new hardware device more than the ones already in your pocket?
[00:06:50] Henrik Werdelin: Only because the form factor kind of carries a different user behavior and understanding of trust, right? Because I think what Jeremy is getting at, he used this example on another podcast, is that increasingly a lot of students now write their job applications with AI.
So they make thousands, and companies get thousands, so they read them with AI. The agent-to-agent workflow that is now replacing the human-to-human workflow basically makes the original workflow obsolete, right? And one of the issues: I assume everybody puts Granola or whatever on for every conversation.
So I would assume that every conversation that I have through whatever online medium is recorded, but I'm not sure the world has really caught up. And I'm not sure that the way I talk completely freely with Jeremy, when I think I'm not being recorded, is necessarily something I would truly enjoy having captured.
And so I do think there's an interesting use case here: we have this world with the notion that sometimes you're recorded and sometimes you're not, sometimes you're documenting and sometimes you're not. And now that we're documenting all the time, we'll probably have a bunch of workflows or trust issues that we'll have to figure out.
[00:07:57] Bryan McCann: No, I a hundred percent agree with that. Yeah. I think that's true regardless of whether it's a new device or not.
[00:08:02] Henrik Werdelin: For sure.
[00:08:03] Bryan McCann: But absolutely, this issue of trust seems like perhaps the opportunity for some sort of technical innovation that would bridge that trust gap, maybe even more so than AI at this point, right?
Like maybe innovating on some sort of model that people could trust. Because I don't know that that's there. I don't know that I personally have seen a model that I truly trust in that sense.
[00:08:35] Jeremy Utley: I wonder if there's almost like a... I'm just kind of entering the realm of product design with you here for a moment.
I'm wondering if there's almost something like: give us a day, turn this on. And by the way, the default setting is, it turns off after a day.
[00:08:51] Bryan McCann: Uh-huh.
[00:08:51] Jeremy Utley: But give us one day and see what we do for you. And then to you, I think the design challenge is: what can you proactively serve up? Yeah. That goes so far, you know.
I mean, I gave the example: I was interviewed for a magazine recently, and I had the wherewithal to think about taking that transcript, putting it into my Claude chief of staff, and getting brutally honest feedback. That's actually super interesting. It gave me a bunch of stuff that I can work on, which I'm gonna implement for a media interview later today, right?
Mm-hmm. So that's helpful to me, but it took me having the wherewithal to realize, wow, I have a communications expert available to me. Yeah. I wonder if there are three or four things like that where, you know, proactively, we're gonna listen to the meeting and we're gonna tell them: what are three things you could do to be a better teammate in your next meeting?
And you block it on the calendar, and you look in their email, and you offer the help text for the next meeting, right? Or whatever it is, right? But it's gotta be, to me, so tangible and so discrete that if somebody turns it on for merely 24 hours, they go, I am never turning this thing off.
[00:09:52] Bryan McCann: I think you're, I think you're a hundred percent right.
I was thinking about the same thing this weekend, and I tried to take it a little bit further and just say: okay, gimme one screenshot. What's the minimal amount of information with the maximum amount of impact I can have?
[00:10:12] Jeremy Utley: Right?
[00:10:12] Bryan McCann: So that you wanna turn it on again, so that you wanna give me a screenshot the next hour. Now, if you increase the frequency of the screenshots, eventually it turns into a recording of an hour, or whatever, every day.
But like, how do I do this in such a way that from screenshot one there's value provided immediately? Yeah. So much value that you don't want to turn it off. And I think if you can crack that, trust may become secondary in people's minds.
[00:10:43] Henrik Werdelin: We had Illia on the podcast a few weeks back, who was one of the co-authors of the "Attention Is All You Need" paper.
And he's working on this kind of blockchain hybrid of AI, where basically he feels that a lot of the trust will work if suddenly people can own the information themselves. They have full access to the models, but they don't necessarily pass on what happens. And so I think that's one area that is at least interesting.
And I think I talked to Dan Shipper, who's at Every in New York. He had an interesting observation the other day. He said that this trust is gonna come because, where social media was about just showing the most perfect version of ourselves, which then led to everybody clicking on ambulances,
AI is interesting because it has this ability to be insanely introspective, right? Like suddenly it can literally listen to everything you say and therefore understand you much better. And so he's coming at this thinking from: yes, trust will be overcome, because there's gonna be this ability to show your true self.
I know that you've written, or at least made subtle hints, about the attention economy and how certain business models fuel specific behavior. I was curious, in terms of adding business models to all this: where do you think it'll go, and what are you guys maybe trying to do to avoid the issues that you've written about?
[00:12:14] Bryan McCann: Well, with You.com, we first and foremost stayed away from the ads business. That was, you know, the most successful business model we've seen, with Google, of course, but it was one of the things I didn't like as much about the attention economy when I was starting You.com. So we've seen the world move more towards subscriptions, which was a major hurdle to get over on the AI side, because previously no one was willing to pay for search.
People are willing to pay for these AI summaries, which call out to the search engines and summarize them for you. So there's enough of a shift, enough of a value increase, for people to start paying for it directly. That enabled a lot. And the fact that people are even willing to pay more and more for deeper and deeper workflows and more and more automation is a great trend for this type of thing.
So I think if you were ever to enter this world of proactive search that we were talking about before: I would not want to give you all my data if I had any inkling that it would just be used to target me better. Right. So it might even be a timing question, in that we need a little bit of space and time from the world of super hyper-optimized targeting, so that you do have the ability to trust anyone with that type of data.
With You.com, we're an enterprise company now. You know, there are data retention policies, you can do a multi-tenant setup, you can bring your own key and encrypt it all. We don't have that issue, but we're not necessarily collecting everything that you're doing all day, every day, from your devices.
That would be a next-level set of data that you'd really want to have some protections around.
[00:14:19] Jeremy Utley: It starts to feel like you have to redefine targeting in some way, right? When you think about proactivity, it strikes me that sometimes what someone needs is a thing, or a recommendation, right? And then that targeting isn't motivated by the other side of an ad marketplace, but it is still targeting.
Yeah. You know, it is. You could say altruistic; it is benevolently intended. So you start to think about how we redefine targeting where maybe it's just one-sided. There's not a marketplace. You know what I mean? It's an interesting kind of,
[00:14:54] Bryan McCann: philosophical, a hundred percent
[00:14:55] Jeremy Utley: philosophical question.
[00:14:57] Bryan McCann: The only reason I was thinking about any of this constant context collection was, I don't know, I was trying to think about how you could actually realize a lot of the promises of social media and social networks in connecting people. That, I guess, kind of happened, but only ish, I don't know.
It seems like I'm not always being, again, proactively connected to the really best person at the right time.
[00:15:34] Henrik Werdelin: I think you probably very actively are not, right? Take, for example, dating: the algorithm is visual taste, right? Like, I like that picture, yes or no.
Whereas if you could read all their texts and conversations, you would imagine you'd have a better ability to connect someone with somebody they'd have a meaningful relationship with.
[00:15:53] Bryan McCann: That seems intuitive to me as well. So if you could break that trust barrier and get past that, and you could have all that data, maybe you start with a screenshot.
Perhaps you could slowly win the trust and do something genuinely good for connecting humans. Instead, even much of the consumer version of AI today seems to be going down the path of engagement optimization, not necessarily for your benefit, although you're certainly getting some benefit. You're getting enough benefit to pay for it, enough benefit to enjoy it and keep going.
But it's feeling to me a little bit more like the way of social media, in that we had this core idea that we really liked, and then it's getting wrapped up in engagement algorithms or attention-economy-style algorithms. They're kind of needed, I suppose, for these businesses to continue the kind of growth that they're pushing for.
But again, to bring it back to You.com a little bit: you know, we are in the enterprise space. We're successful when our customers are successful, not successful because we give you the right dopamine hits on the right cadences. Like, I want to get work done for you so that you can go do some of those more fulfilling things.
[00:17:27] Henrik Werdelin: On the enterprise stuff: I know you guys have a background in it, at Salesforce. When you look at your version, agents are very prominent. So I was kind of curious, and this is a slightly aloof question: everybody's talked a lot about agents, but I'm not necessarily certain that we all have a collective definition of what that actually means.
True. So right now there seem to be these semi-intelligent pieces of software that do stuff.
[00:17:58] Bryan McCann: Mm-hmm.
[00:17:59] Henrik Werdelin: Where do you think agents go? I know that you've talked about multimodal; you obviously now have this enterprise thing where you have deep access into data sources that foundation models don't have,
so you can elevate other pieces of information. But what's your two cents on the next thinking that we'll have around agents?
[00:18:25] Bryan McCann: Agents have grown in a weird way, haven't they? I mean, the original agent, when I was first hearing the term, was called that because it had the agency to choose which tools to use at any given time.
So you'd type in a prompt, and then it would enter a loop where it had the agency to choose tools. Now every piece of software is an agent, you know, and everybody's had to do that to some extent. But if you go back to the tool usage, if you go back to the relationship between that and automation, I would expect agents themselves to just continue developing: longer-running, more automation, more multistep.
In a year, though, I wouldn't be surprised if we didn't talk about agents, the way people already treat the term RAG, right? Retrieval-augmented generation. It obviously is still there, and it's incredibly important for so many businesses; basically every customer that I work with needs us to do that. But it's already becoming this table-stakes default thing that doesn't have that feeling of, oh, it's agents, right? Yeah. But last year it was RAG, and RAG did have that feeling. And it became
[00:19:52] Henrik Werdelin: MCP for a second, right?
[00:19:53] Bryan McCann: And then that's almost
[00:19:54] Henrik Werdelin: kind of gone now.
[00:19:56] Bryan McCann: That's almost gone now. So I think they all kind of fade into what ends up being this new version of writing software that has AI baked in.
And very likely, what we're thinking about as AI right now will itself fade away, and the dialectic, the discussion, will move towards a new term. The new term on the horizon is superintelligence, right? Everybody's gonna fight over what superintelligence is and the best way to get to it.
And those discussions will continue and kind of get compressed into these layers of abstraction, so that we can forget about them.
[00:20:37] Jeremy Utley: Okay, we've kind of gone existential, or future-facing. I want to get hyper-practical for our audience here, and I wanna talk for a second about productivity, specifically when you talk about your enterprise customers and helping them be more confident in their outputs, things like that.
One of the fascinating phenomena I have observed is: what do we do with our time? You know, and I dunno if you're familiar, but there's this phenomenon called Parkinson's Law, which states that a task will fill the time you give it to be completed. Right? So,
[00:21:12] Bryan McCann: yeah,
[00:21:12] Jeremy Utley: and we've all experienced that, right?
When you only get an hour, you bang out the thing in an hour. If you have eight hours, weirdly, it takes eight hours to do. Right?
[00:21:18] Henrik Werdelin: What's that called? Parkinson's Law? I hadn't heard that.
[00:21:20] Jeremy Utley: Parkinson's Law.
[00:21:21] Bryan McCann: Yeah.
[00:21:21] Jeremy Utley: Yeah. And so I think there's an interesting question about, you know... and I would argue, by the way,
after you gain efficiency, it's too late to think about what else you will do. You actually need to think strategically, before your productivity gain, about: what would I do if I had another eight hours this week? Sure. Because if you don't, you'll fritter it away. So all that to say, I'm just providing a little bit of fodder there.
How do you think about helping individuals, and then ultimately enterprises, make the most of the productivity gains you're delivering, if I set as a premise that the tendency of most people is to waste those gains?
[00:22:01] Bryan McCann: Mm-hmm. Well, there's a clear mindset shift, I suppose. I like to take a lot of lessons and analogies from AI itself, or even from when I was doing AI research. When I was an AI researcher, right,
and I was first training language models and making neural networks, I could have seen my job as writing and running experiments, and that could have been my measure of productivity. But my main measure of productivity, which I learned very early on, was: my goal is to make the machines work for me as much as possible and keep the GPUs full.
My goal during the day is to make a plan. I guess you might call it a strategy, although I didn't think of it as doing strategy, right, and maybe that's the shift we all kind of need to make. Like, I never thought I was doing strategy, but I would come in in the morning, I would write down all my ideas,
I would code up as many of them as I could, and I would hit go so that they would run overnight, because the time between me leaving the office and coming in the next morning was probably much more valuable, as long as I kept those machines busy. And on Friday, my goal for the week was to make sure that my biggest experiments were ready to go,
so I could hit the button and go away for the weekend. But I'd know that when I come back on Monday morning, I have results. I can process them; I have analysis to do to set myself up for the next week. So it's all about keeping the GPUs full, in every job now.
[00:23:43] Henrik Werdelin: I was thinking about this the other day as I was...
[00:23:46] Jeremy Utley: That's fascinating. That's, by the way, a great quote: it's about keeping the GPUs full in every job now.
[00:23:52] Henrik Werdelin: Because I think, when you think about it, a lot of people talk about being 30% more effective, stuff like that. I was sitting the other day in a conference call, not paying a lot of attention, 'cause I have like six or seven Claude Code agents running, and my job is like the clown in the circus, you know, with the plates spinning.
Mm-hmm. Basically, if one of them is not running, it feels like lost time. Oh yeah. But then you're not just getting 30% more; you're 5x more, right? You're literally taking your thinking abilities, oh yeah, and just multiplying them, because there are five things that are moving forward while you're just sitting in the meeting.
[00:24:31] Bryan McCann: That was a similar transition for me, I guess. You can, again, literally learn how to learn from how we've been teaching AI and making AI, right?
So take that same thought process. Okay: early-researcher Bryan was just trying to keep the GPUs full. Well, once I could do that, it was about expanding the number of machines, a network of machines and experiments running in parallel. Right up until my first paper. The week before I submitted my first paper ever, which was very important to me, because I had this deal: I didn't have a PhD,
but I'd gotten a research scientist job, and if I published a top-tier paper at a top-tier conference within a year, I could keep my research scientist job. If not, I had to go be a normal engineer, you know? So the week before I submitted that paper, I slept in the office all week. I had as many machines running as I possibly could, with as many experiments going as possible.
And we're talking about exactly what you're doing with Claude Code. I just had terminals open, with tmux or something like that, into 64 different AWS machines with eight GPUs on each of them, all running individual experiments. And then I ended up having to write a bunch of software to help me manage that, right,
and collect the results very easily and make plots, so that my brain could digest what I was getting. So I had to actually build up the levels of abstraction just to be able to scale myself up. And we all need to be scaling ourselves up in that way. You're doing this with Claude Code now; someone's gonna come out with some way to do
that next layer, and then you just need the most important bits of information to decide what to do next.
[00:26:25] Henrik Werdelin: What about you? Like, it's 9:00 AM in New York, right? What are the agents running for you first thing in the morning, after coffee? Do you have a "let's get going on these things"? Or is it more ad hoc?
[00:26:40] Bryan McCann: I have some longer-running processes and I have some ad hoc, you know; I like to keep a mixture of both. I would say also, because I have a team, I think about the team similarly, right? Mm-hmm. I think about managing people now in a very similar way.
[00:27:00] Henrik Werdelin: Like, prompt and output?
[00:27:02] Jeremy Utley: No, no. Prompt and output? Okay. Stop. No, that's not what you're saying. I'm gonna project upon you a better intent than Henrik just did. Bryan, what I heard you say is: not only do I think about scaling myself up, I think about creating an environment where others can be scaled up. And that's what you were saying, whether you said it or not. What I wanna know is, as a leader, what do you do pragmatically, when you're managing people, to one, help them scale themselves, and two, know whether they are?
[00:27:32] Bryan McCann: Yeah. I'll run with your generosity there, Jeremy.
[00:27:35] Jeremy Utley: Thank you. No, thank you.
[00:27:37] Henrik Werdelin: I'm coming back to it, though.
[00:27:39] Jeremy Utley: Stop it. No. Okay, folks, we're not gonna cut this out.
[00:27:43] Bryan McCann: No, it's true. The next step for me, to some extent in that scaling-up process, has been to say: eventually, I don't run the agents.
Right? Like, I shouldn't necessarily be the one pulling up Claude Code every day and having ten running, of course, because I have a hundred people that I need to teach how to do that.
[00:28:04] Jeremy Utley: Your ability to scale yourself up will become the bottleneck. But if you can help others scale up, that is exponential.
[00:28:11] Bryan McCann: Almost every minute of my time, yes,
is better spent on that than on doing the work myself. This is how I coach my engineers, right: every time you're gonna do something now, try to get AI to do it for you first. And if it can't, spend more of your time figuring out whether you can get it to do it. And if it still can't, now you have a very interesting research problem.
[00:28:39] Jeremy Utley: Mm-hmm.
[00:28:40] Bryan McCann: And if it can, then you've automated something. So if you find the research problem, tell me about it immediately, because that's maybe an area where we can grow and differentiate the company in the future. I want that bit of information back in my brain for my overarching strategy, right? So again, there's a management of an information passing process where I don't want all the results of every experiment and how many tests passed and failed in every pr, right?
We obviously don't want all the information, but you wanna be able to identify the most important information to make your next decision and run the next set of experiments that you're running and just instilling that mindset. Spending as much time with my team every time I see them. Trying to keep enough distance to critique the process while being close enough to understand the process.
[00:29:32] Jeremy Utley: Yeah.
[00:29:32] Bryan McCann: Is the balancing game that I play every day. Totally.
[00:29:35] Jeremy Utley: So maybe let's go from the IC level and then work our way up to the leader. If you start at the individual contributor level, you know, 90% of our audience, they're doing their job and they're hearing "scale yourself up." What is step one, maybe step two? Starting at the very most basic level: how do I, Bryan, scale myself up?
[00:29:58] Bryan McCann: So when I was purely coding entirely by myself, pure IC, just starting out: the first principle, which I think is still applicable and many of you have probably heard, is if you do the same thing two or three times, you should then take the time to automate it. For me, in the days before PyTorch and a lot of these things existed, a lot of what that looked like was trying to formalize structures around how I run experiments as quickly as possible, and how to make them as repeatable as possible, et cetera.
So I think it's tricky in startup life, because you're always doing these unscalable things, and there's this tension between those two principles. Well, I shouldn't bother automating some things, because I don't know if it's even worth automating. Maybe in three months I'm gonna throw all that away.
That's okay. Throw it away. That's what I would say. Every time I wrote a new paper, I threw everything away. I started from scratch and completely rewrote my framework, because I'd learned so much. The important thing for me was to get really good at building, so that I could build the system the way I needed it to be, not just be attached to my system.
Obviously you can't do that with entire companies, but I do encourage my team to do that too. I want a culture of building, a bias towards building, so much so that we're not afraid to throw things away, throw code away. We have a better idea. There's a better way to do it.
[00:31:32] Henrik Werdelin: Let's just do it. You talk about engineers.
What's your thought on these enterprises that you now work with, on vibe coding entering that domain, so that an engineer becomes more of a work researcher: somebody who can use code not because they know how to code in this area, but because they can use the agents to do that?
Are we going to think differently about who is a good, productive person in our organization when these things are more implemented into our lives? I'll give you an example. You know, our producer on this podcast is Emma. She used to do pre-research.
[00:32:13] Bryan McCann: How's your state of mind right now?
[00:32:14] Henrik Werdelin: That's her state of mind. Emma used to do pre-research on everybody who asked to be on the podcast, and then ended up vibe coding a research service that basically did the research, listened to some podcasts, and gave a guesstimate of whether this would be a good candidate or not, right?
Mm-hmm. Clearly a brilliant way of thinking about how to scale your own abilities: instead of spending three hours researching, how can I just automate that, right? Not necessarily something you would assume a podcast producer, or somebody doing a podcast producer's job, would think of. For me it was very much an example of how I imagine people inside the organizations you now work with will start to think: hey, I could just code this up myself.
So I'm curious where your head is at about vibe coding inside organizations, from non-coders.
[00:33:05] Bryan McCann: I think everybody should be doing it as much as possible, just like Emma did. They should have the exact same thought process. Just because they're not a coder, you know, it's not part of their identity, doesn't mean the same principles don't hold: if you're doing something three times,
[00:33:23] Henrik Werdelin: yeah,
[00:33:24] Bryan McCann: take some time to automate it; see if AI can do it.
Your time now is better spent trying to get AI to do it. And just like I said with the engineers, if the AI can't do it, you're gonna get some really valuable intuition about where AI is right now, in a way that you wouldn't otherwise get. It doesn't really matter if you're an engineer, a product designer, or a podcast producer.
Learning how to learn, learning how to create, these are essential, fundamental building blocks for learning how to scale yourself. So if you think about neural networks, organizations, and people, they're all very similar types of entities: they need to learn as fast as possible, they need certain building blocks and skills,
very good information passing. One day, you know, we can go more philosophical and talk about how all this applies to the individual as well, right? But anyone in an organization, any node in a neural network, should be looking to do these things. It doesn't matter if you're in
[00:34:39] Henrik Werdelin: charge.
I have to ask, and I know Jeremy's gonna kill me for this now 'cause I'm bringing you back into the dark side. Right.
[00:34:45] Jeremy Utley: Dude,
[00:34:45] Henrik Werdelin: let me, dude,
[00:34:45] Jeremy Utley: why'd you go there? This was such a positive conversation.
[00:34:48] Henrik Werdelin: I know, and we had many positive things, but we talk about organizational design that was made, probably, and you'll know this better than me, Jeremy, as I understand it, when the railroads were introduced to the US.
Like the whole classic hierarchical kind of way, right? Yeah.
[00:35:02] Jeremy Utley: Chain of command. Yeah.
[00:35:03] Henrik Werdelin: And it does seem that increasingly we'll have to redesign organizational structure. I don't know what it'll look like, but maybe it'll look more like SEAL teams, small groups that report straight to the president, than like the army right now.
I do think there is something interesting in thinking about humans also as basically a task of saying: hey, I have a prompt, I have something I need, and I need to get an output, and I can either ask an agent or ask a human. It's an "is this a human or an agent?" question. And some humans can take very abstract prompts, because they will be able to pass them on.
Mm-hmm. And so Jeremy, I knew that you kind of thought it was an evil way of thinking of people as just kind of nodes in a
[00:35:46] Jeremy Utley: Prompting humans does feel weird, but I get what you
[00:35:48] Henrik Werdelin: have. But I do think that there's something interesting in like if organizations are becoming more like the web with notes in a, in a oral network, in, in many ways with Pathways than this classic heretical organizational design that we have now.
So Bryan, I'm curious if I can lure you down that way of thinking. Oh,
[00:36:11] Bryan McCann: I, yeah. No need to lure me. Uh, he's already
down
[00:36:15] Jeremy Utley: there, he's already
[00:36:16] Bryan McCann: deep in it. I'm there and beyond. I think we should be designing organizations like neural networks. If you look at everything we've learned over the last five years about how to get neural networks to work really well,
We should apply that to organizations. And a lot of those things you kind of already do, but
[00:36:34] Jeremy Utley: Well, but for somebody who doesn't know, maybe take one step back: for somebody who doesn't know what we've learned about how to get neural networks to work really well, say what we've learned and then what the implications might be for org design.
'cause I don't think that'll be obvious to some people.
[00:36:47] Bryan McCann: Right, right. So here's a very concrete example. Again, something we kind of already do, but it's worth thinking about in this context, just to establish some credibility for the metaphor. Take a neural network: in the simplest case, we can think of one as having layers, and each layer has nodes.
And as you build it up, the data comes in at the bottom in this mental picture, and at the output we're gonna have a classification, like yes or no. Data comes in, yes or no comes out, with just layers in between. So if you have too many layers, then when you're training the neural network and you make a mistake on the yes-or-no question, the error signal tries to backpropagate down through the network, right?
But by the time you get to the very bottom layers, there's almost no signal left. That makes it very hard for those bottom layers to learn. So what is something that we do? We add residual connections, or skip connections, from higher-level nodes down to lower layers, so they pass signal directly and send error signal back directly.
So you might have heard about meeting with your skip-level manager. We do this. These are similar principles that have emerged independently, to some extent, in two different types of organizations of nodes, or people. But they are nodes in a network trying to learn as quickly as possible.
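[Editor's note: Bryan's vanishing-gradient point can be sketched in a few lines of Python. This is an illustrative toy, not anything from the episode: the weight value 0.5 and the depth of 20 are arbitrary assumptions. Backpropagating through a plain stack multiplies the error signal at every layer, while a residual (skip) connection adds an identity path that keeps the signal alive.]

```python
def grad_through_chain(depth, w=0.5, residual=False):
    """Push a unit error signal backward through `depth` identical
    linear layers of weight `w`. Plain stacking multiplies the signal
    by w at every layer; a residual/skip connection adds an identity
    path, so the per-layer factor becomes (1 + w) instead."""
    g = 1.0
    for _ in range(depth):
        g *= (1.0 + w) if residual else w
    return g

plain = grad_through_chain(20)                # 0.5**20: almost no signal left
skip = grad_through_chain(20, residual=True)  # identity path keeps signal alive
```

With 20 layers, the plain chain's gradient is below one in a million, which is why the bottom layers barely learn; the skip-connected chain keeps a strong signal, which is the mechanism behind the skip-level-meeting analogy.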
So if that one makes sense to you, maybe one that would be less intuitive, but that we could transfer over, is looking at transformers. Transformers have this really nice self-attention mechanism, where every node in a self-attention layer can attend to all of the other nodes, all the other vectors.
Let's think about it as a sentence. Every sentence has words, every word gets a vector, and every vector gets to pay attention to all the other vectors. That's what self-attention means, conceptually. They get to say: how similar am I to that? How useful is that for this example, relative to my job, my node in this network? Keeping information flowing well up and down the network with those skip connections is important, and so is keeping information passing horizontally in the network with a mechanism like self-attention.
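[Editor's note: the self-attention idea Bryan describes can be sketched minimally. This is a hedged toy, not a production transformer: real transformers learn separate query/key/value projections and use multiple heads, whereas here the vectors simply attend to each other directly.]

```python
import numpy as np

def self_attention(X):
    """Minimal single-head self-attention over a matrix of vectors
    (one row per word). Each vector scores its similarity to every
    other vector, the scores are softmax-normalized per row, and each
    output row is the resulting weighted mix of all input rows."""
    scores = X @ X.T / np.sqrt(X.shape[1])        # pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax: each row sums to 1
    return weights @ X                            # every node mixes all nodes

sentence = np.random.randn(5, 8)  # 5 "words", each an 8-dim vector
mixed = self_attention(sentence)  # same shape; each row attended to all rows
```

The organizational reading is the horizontal information flow Bryan mentions: every node gets to weigh how relevant every other node is to its own job before acting.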
And there might be other mechanisms too, but self-attention is a very popular one. So information flow, horizontally and up and down, is very, very important in neural networks. And whenever you see a bottleneck happen, it's almost always to our advantage to break that bottleneck. We do sometimes introduce an information-flow bottleneck on purpose,
to try to make hidden latent representations that mean something. But most of the time to date, that's not the line of thinking that has won. The line of thinking that has won has been: make everything as parallel as possible. To bring it maybe a little bit more forward into the people side:
Make sure everybody can talk to everybody. If you are gonna have a lot of layers, then you'd better have a lot of skip connections. But it's probably easier just to have fewer layers, which we see in the flattening of a lot of new AI-native companies. But
[00:40:33] Jeremy Utley: then you kind of get to, what's McGill's number or whatever.
Like there are only so many, call it, nodes that any human can actually attend to. I think the number is typically around 150, right? So then you're almost rate-limited by the... yeah, Dunbar, thank you. Who is at McGill University, which is why I thought of McGill. But you start being rate-limited by whatever the human's capacity to attend to other humans and other information is.
Right.
[00:41:00] Bryan McCann: For sure. But tooling can help with that, right? Just like we talked about before, I don't need all of the information from everyone. I think a lot of what will go into that is more and more of the information passing, the raw content of work, getting passed with AI, and the people being there to focus on the people.
So you're right, there's gonna be some upper bound.
[00:41:27] Henrik Werdelin: My business partner Nicholas from Audos, he has a thesis that AI is squaring Dunbar, so it's 150 squared, 22,500, mm-hmm, that you'll be able to oversee or have an emotional connection to with an AI. Anyway, I have one last question that I'm keen to ask you.
We had the founder of SIA on the podcast, and he talked about how they designed a new way of getting hold of the data in their team: not by doing a big data-refactoring kind of system with one big platform, which I think a lot of big companies have to spend a lot of time doing, but basically using agents to go and find the data wherever it was,
and then making sure, obviously, it could be presented to the viewer. As companies go through this process of figuring out what data they have and need, and how to make sure that information gets to the edge of their organization: how do you think about the requirements for doing a big data science project, and how much do you believe you can increasingly just leave the data where it is and have agents go and fetch it?
[00:42:38] Bryan McCann: So with You.com, as I described, we do a lot of public web search and a lot of private data search inside companies. And there's maybe a similar analogy here. At least with the public web, if we had a system where everybody typed the things that they put onto the public web
into one centralized database, and ChatGPT or something just had access to that database, it would obviate a lot of the need for a bunch of web crawlers, and to some extent for search itself. In that sense, it would be a much more efficient system. Something that's much more like this is Amazon, right?
You have buyers and sellers just inputting information, and you'll have services like Google Shopping, which need to know what's going on on Amazon, so they're crawling Amazon. But it means they're always behind, because on Amazon people are typing the information into the system, and Google has to, by definition, after that happens, crawl it, find it, discover it, and then incorporate it and use it.
So companies centralizing that data, I think it's still worth doing over time as an efficiency trade-off. But having methods to deal with the messy, more distributed, less efficient process of having data just be where it is, and you go find it, is gonna be incredibly important for growing in this new era.
I don't think you can wait to centralize, but it's still going to be an important part of the process: if you're a massive company, you eventually want efficiency gains, and you don't always want to be running your search-and-find systems to go get the data. So I think it'll be a mix.
But the fact that we haven't developed those go-out-and-find tools to the same extent suggests that, to date, it's a harder problem. It's much more complicated to have a system do that automatically. A lot of that knowledge is baked into the minds of data scientists who have already left those companies.
It's a mess. It's worth doing.
[00:45:02] Jeremy Utley: One thing I would love to hear, if you have any thoughts on it, is recruiting, the question about people. I was actually talking to Wade yesterday as I was driving to a meeting. We were catching up, and he was telling me about the conversations he has with people, where he's basically saying: no, you have permission.
Try it. Just do it, right? And there's a question about some people. We were actually talking about the challenge of hiring young people, and as I've talked with college students and other young folks who are looking for work, a realization has occurred to me: they are looking for a job, a lot of times, because their whole life a teacher has told them the assignment to do, and what they need now is a boss to give them their next assignment.
And I think that's actually a profound realization that I'm still grappling with. But it strikes me that it's an exceptional person who's a self-starter, who takes initiative, who can frame their own work, who can set their own objectives, et cetera. How do you think about recruiting in the age of the organization
that's more like a neural net? Because I don't think everyone is; at least, most people haven't been trained with the skills they need to be an effective node.
[00:46:20] Bryan McCann: It's a transition, but I think there are other proxies that often correlate well with it. I have had engineers come to me and tell me: I just need to know what to do next.
And I'm like: I'm not gonna tell you. The job here is for you to figure that out. It's a different type of job, and that's going to be the job here and in this AI world, I think, increasingly, for more and more people. I guess I do it a little bit trial by fire, you know? I'm gonna tell you the future.
My job, given all the information I have, is to tell you what the future's gonna look like one year, two years out. And then you can take that information and run with it, and you can decide what's gonna be the most impactful thing you can do, the most important thing you can do with your life's energy.
And as long as we're aligned on that, then you belong here on my team. And if you decide that you're no longer aligned, or you feel like you're outgrowing us, then I'm gonna support you for the rest of your life, because you worked with me in this very crucial way at this crucial time. And that's how I treat it.
All of the things that we're building right now, going back to something we said before, we're gonna redo it all in the future, right? We're gonna be building so much faster. Everything's gonna be rebuilt again, sometimes with each other, sometimes separately. And anybody who comes through my door and passes those individual tests and things like that, the ones that just
make sure they have some basics, right? They become a special node in my network, someone that I've hopefully trained to be way better than me and to do way, way better things than me. You know? And I don't just mean making multi-billion-dollar companies or something like that. I mean more like, for humanity.
I want people to have impact. And figuring out how to have impact is a skill. For me, it's this listening process: I get all this information all the time, and I'm trying to feel out where I have strong convictions, and whether, if I don't do it, maybe nobody else will. Now, someone always will kind of try to do it, but you know, will they do it the way that I'm gonna do
[00:48:39] Jeremy Utley: it?
Am I uniquely qualified, positioned, burdened to contribute in that way? I think it's beautiful. Well said.
[00:48:46] Bryan McCann: Everybody in my company should be doing that every day. Yeah. And if they're not, then we should fix it.
[00:48:53] Henrik Werdelin: Being the timekeeper here: Bryan, I very much appreciate this conversation. It's wonderful to have somebody who has thought so much about all these things.
[00:49:04] Jeremy Utley: This was super fun. You're
[00:49:05] Henrik Werdelin: amazing. Yeah. Very much appreciate it. Thank you so much.
[00:49:07] Jeremy Utley: I'm so inspired. I'm so inspired. I hope we might be able
[00:49:08] Henrik Werdelin: to get you on another day.
[00:49:10] Jeremy Utley: That'd be
[00:49:10] Bryan McCann: great.
[00:49:11] Jeremy Utley: Round two. Thank you
[00:49:11] Bryan McCann: guys.
[00:49:12] Henrik Werdelin: Best of luck with You.com.
[00:49:14] Bryan McCann: I had a lot of fun too. See you. Thank you
[00:49:16] Henrik Werdelin: sir.
[00:49:16] Jeremy Utley: Cool. Take care. Cheers.
Bye. Adios.
[00:49:18] Bryan McCann: Bye.
[00:49:18] Jeremy Utley: Wow, you know what, Henrik, that was a really fun one, because you and I had some disagreements about where we wanted to take the conversation, didn't we?
[00:49:25] Henrik Werdelin: Yes, I know. And I was asking the chatbot the other day how we can improve, and it suggested that we should disagree more.
[00:49:32] Jeremy Utley: Oh, did it?
[00:49:33] Bryan McCann: There you go.
[00:49:34] Jeremy Utley: Wow. And you did. Did it suggest that you suggest that to me, or did it suggest that you keep that knowledge to yourself?
[00:49:41] Henrik Werdelin: It suggested that we should have ideas or points that we were prepared to talk about in the podcast where we knew that we came down to different sides.
[00:49:50] Jeremy Utley: Oh, that's cool. Okay, so where do we come down to different sides in this episode, Henrik?
[00:49:54] Henrik Werdelin: Well, I think the reason why I really enjoyed Bryan, and how he's thinking about it, is because I'm, I'm
[00:50:03] Jeremy Utley: So thoughtful. What a thoughtful guy. So thoughtful. Wow.
[00:50:05] Henrik Werdelin: Really. And he's just clearly incredibly smart. I'm quite fascinated by this idea that the future of work, the future of organizational design, is going to mirror neural networks, which he talked about.
And I'll give you a little bit of background: I think a lot of companies are going through three phases. Phase one is how do you upgrade the capabilities of your staff. Two, how do you create agents or agentic workflows to be more efficient. And then I think the third one is kind of: what's next?
And it seems that even the people who are really on the cutting edge are kind of pausing between two and three, and they are hitting Parkinson's Law, where they go: I don't understand, we've implemented 40 or 50 agents in our organization, but we're not really seeing the efficiency gain.
My thesis is that you now have people who can do jobs that belong in other teams. Hmm. And because those gains aren't happening at the same time, you won't be able to reduce staff in one department. So you basically have this weird kind of jigsaw, a mismatch of skills.
And then what people do, of course, is just fill the time up: they've got 30% more time, and they get busy with it. So I do speculate that in the next year or two we will see more organizations that will need to ask themselves this fundamental question: how do we design ourselves for an organization that is increasingly agentic? When you see agentic workflows, for example the case I mentioned with people applying with AI and their job applications being reviewed with AI, it basically shows that the workflow was designed for humans in a world of scarcity.
And now that you have agents in a world of abundance, you need to reinvent a new workflow. So getting inspiration for what that might look like, how you might be more efficient and get an exponential upside from your AI work, I find fascinating, and I don't hear that many people talk about it.
And so when Bryan started to speak about this, I was like: you really have my attention now. But I think what you might have reacted to is that there's this built-in assumption that you're given a task, and sometimes you'll give it to an agent and it'll be as good as a human.
And then there is, I think, the unfortunate consequence that sometimes there will be people who won't want to upgrade, who won't be the people who understand how to put on the Iron Man suit. And at that point they'll just lose their relevance in the organization, because the agent will be able to perform that specific task more efficiently than they do.
And you can't upskill them to be the managers of many agents or tasks. And you, being such a kind, thoughtful person, I think sometimes come to the table with, I'll say this as a statement, but I mean it as a question, the assumption that people are upgradable, and that it's just about finding the way to teach them the skills.
And I think I might come from a more cynical perspective: if you are a startup and you don't have the resources to do that, there's a time, right, as Bryan was saying, where you just have to say: well, your way of working, or your expectation of how we work in this organization, is just not how it's gonna be.
So I love you and need you,
[00:53:45] Jeremy Utley: love me, leave me. I was really inspired by, I love the phrase "keep the GPUs full" as a kind of personal mantra and then as a team mantra. And I think right now the bandwidth limitation is mostly the reason most organizations can't grow more.
If you asked what has a stranglehold on your business, folks would say: well, if we had more smart people, we could do more, right? But we can't hire them; we gotta grow at the right rate. But intelligence is no longer the rate-limiting factor, and no one has reckoned with that. Why? Because you have to think in terms of: we gotta keep the GPUs full.
That's how you max out the intelligence of the organization. But very few individuals, never mind teams, are thinking in terms of: what can I pass off? What's a full night's worth of work that I can get ready to hand off by the time I leave work today? Because I value the nighttime cycle of experimentation so much,
I wouldn't dare go to sleep without giving an agent something to do, right? And when he talked about scaling himself up, and then said, quote, every minute of my time is better spent scaling my team up: maybe it's a humanistic thought, but set aside whether people can be scaled up; the fact that he now sees the best use of his own time as a leader
as enabling his team to get scaled up, to me, was not a humanist thing. It was more practical: how do you manage in a world where you're trying to give everybody the mindset, keep the GPUs full?
[00:55:28] Henrik Werdelin: One thing that he was mentioning, most people
[00:55:29] Jeremy Utley: wanna clock off and
[00:55:30] Henrik Werdelin: they wanna be done. Yeah.
But one thing that he also mentioned, and I don't do this, it's your trait to write down a sentence like that, he said: figuring out how to have impact is a new skill. Mm-hmm. And I do think that is so true. And I guess as an entrepreneur I get excited about it, because that is an innate entrepreneurial capability.
Like
[00:55:50] Jeremy Utley: mm-hmm.
[00:55:51] Henrik Werdelin: What do I have to do next? What is the next task that I throw myself at? How can I have an impact, how can I move things forward? The way that I look at entrepreneurship is often through these capabilities, which are not: do you code, do you design, do you product manage? It is more: do you have agitation,
do you kind of propel stuff forward? Do you have gravity, can you tell a story in a way that people understand it, and models understand it? Do you have resourcefulness, can you do a lot with a little? Those three things are often the trademarks that I look at when I invest in entrepreneurs, for example.
I think, in the same way, and maybe I'm just seeing it because I believe more entrepreneurship is good for the world, that these deeply entrepreneurial skills are what's required in the future organization, because everybody will be able to do everything. And so what you have to do is figure out: how do I as an individual have impact?
[00:56:42] Jeremy Utley: uniquely add impact.
Yeah. I mean, he said everybody in my company should do that every single day. Everybody should do that every single day.
[00:56:51] Henrik Werdelin: I really think that is such an important lesson for people who want to have a real, thriving career, and for organizations that want to figure out who to identify and promote. And, you know, I'm not an AI researcher, so I'm just so fascinated when you start to take these somewhat abstract models
And then you apply them as a philosophy,
[00:57:17] Jeremy Utley: right? To org design. Yeah. That part of the conversation definitely reminded me, I think it was of the team from Applied Intuition, Qasar Younis, talking about this idea of guilds, right? And what if the organization of the future is actually guilds, small groups of people executing experiments, as you say, with SEAL-team kind of accountability. What did he call it?
And not skip-level, that's the managerial term, but what was the neural network term? Is it redundancy or... I can't remember exactly what it was.
[00:57:49] Henrik Werdelin: This is where his agent, he talked about the next level of agents being these predictive agents that basically jump in. So in his world, an agent would jump in and give you the right term.
[00:58:01] Jeremy Utley: Well, I mean, that, by the way, is a great context-aware feature. And I love that vision of providing immediate value. I think it's super cool, a super cool vision of the future: to give me my next line. The ultimate, obviously, is to help me know exactly what the right thing to say is,
[00:58:18] Henrik Werdelin: but that's not far away, right?
You could imagine that.
[00:58:22] Jeremy Utley: No,
[00:58:22] Henrik Werdelin: You know, I have Granola running on these calls, right? So I could probably just ask it now, and from there it could come up with suggestions and say: hey, you should remember to think about this, or ask about this, or it's time to stop interrupting, or whatever the feedback could be, right?
[00:58:38] Jeremy Utley: Mm-hmm.
[00:58:38] Henrik Werdelin: It doesn't seem to be way into the future.
[00:58:42] Jeremy Utley: Going back to the management or leadership thread for a second, one of the things he said that I really enjoyed and wrote down was about how he works with his engineers. He said: step one is try to get AI to do the thing; step two, if it can't, try again; and step three, if it still can't,
commission a research project and come tell me, because the stuff you can't figure out how to get AI to do, that's the stuff I wanna know about. Mm. I thought that was a really, really cool way to think about it. At no point am I saying: don't use AI.
[00:59:16] Henrik Werdelin: Mm-hmm.
[00:59:16] Jeremy Utley: In fact, what I'm saying is, your job is to figure out how to use AI, and the only time I want to know is if you can't figure out how AI could help with this.
[00:59:24] Bryan McCann: Hmm.
[00:59:25] Jeremy Utley: That's like such a total paradigm shift.
[00:59:29] Henrik Werdelin: I agree. Do you have anything else that we should add?
[00:59:33] Jeremy Utley: No. I mean, I would say this: thanks so much for listening. If you enjoyed this episode, hit like, subscribe, and share the teaser or the full episode with a friend who needs to know they've gotta level themselves up and think about what impact they wanna have on the world.
Bye-bye.
[00:59:50] Henrik Werdelin: Bye-bye.