Beyond The Prompt - How to use AI in your company

How Do You Strategize in the AI Era? – with Martin Reeves, Head of BCG’s Think Tank

Episode Summary

How do you strategize in the AI era? Martin Reeves, Chairman of BCG’s Henderson Institute, argues that strategy has always meant playing two contradictory games: optimizing today’s performance while simultaneously creating tomorrow’s. In this episode of Beyond the Prompt, Reeves explains why AI often commoditizes competitive advantage, what leaders must do differently, and how imagination, experimentation, and human creativity remain central in a machine-augmented world.

Episode Notes

Martin Reeves has spent decades advising CEOs on how to think about strategy. As head of BCG’s Henderson Institute, he has built a career challenging leaders to balance efficiency with imagination and to prepare for the next disruptive shift.

In this conversation, Martin tells Henrik and Jeremy why AI alone will not give companies an edge and might even strip them of advantage. He unpacks the “two jobs of business”: playing the current game better than anyone else while simultaneously asking what the next game will be. He argues that AI only sharpens this paradox, forcing leaders to think faster, experiment more, and draw on human imagination in new ways.

The discussion covers the risks of over-optimization, the future of consulting, and the paradoxes of AI adoption. Along the way, Reeves explains how AI can accelerate exploration, why framing the right questions is the strategist’s most important job, and why times of disruption are when number twos become number ones or disappear altogether.

Key Takeaways: 

LinkedIn: Martin Reeves | LinkedIn
BCG Henderson Institute: Home - BCG Henderson Institute
Martin's books: The Imagination Machine // Like: The Button That Changed the World

00:00 Intro: Two Jobs in Strategy, Today’s Game and Tomorrow’s Game
01:33 Martin Reeves and the Henderson Institute
04:02 Defining Strategy in the AI Era
05:12 AI and Human Imagination
09:20 Efficiency vs. Competitive Advantage
13:18 Organizational Design for the Future
23:09 The Paradox of Imagination in Business
33:02 Harnessing Serendipity for Innovation
35:18 Devil’s Advocacy and Meeting Optimization
36:51 Where AI Helps and Hurts Organizations
38:16 The Limits of AI Training Data
42:56 How Martin Uses AI Day to Day
47:09 What’s the Next Game for Consulting
53:15 Final Reflections

📜 Read the transcript for this episode: Transcript of How Do You Strategize in the AI Era? – with Martin Reeves, Head of BCG’s Think Tank

Episode Transcription

[00:00:00] Martin Reeves: For a truly successful company in the long term, there've always been two jobs in business. We often simplify and think that it's about optimizing the performance of the company, but on long enough timescales, and I know those timescales are compressing, there were always two jobs. One of them is to play the current game effectively: play it better than your competitors, extract returns, drive efficiency. And you have to do that, otherwise you don't have the funds to pay for the future or the license to buy a different future. But you do have to have a different future; nothing lasts forever. So you also have to ask the second question, which is: what is the next game? And those games are not only somewhat contradictory, they're actually fundamentally different in nature. So, I'm Martin Reeves. I am the chairman and founder of the BCG Henderson Institute, which is BCG's, the consulting company's, think tank on new approaches to strategy, change, and management. I'm originally a biologist, and I'm a through-and-through generalist: I've always been interested in everything, in knowing a little bit about everything and the connections between things. I sort of banked my career on being able to deploy my skills as a generalist. And sometimes that felt like the wrong thing to have done. I felt like the last surviving generalist in a firm of specialists at certain points in my career. But for better and probably worse, I'm a generalist that's interested fundamentally in using the mind to solve problems. We might call that strategy. We might call that consulting.

We might call that innovation. But what am I doing? I'm basically saying: how do we think about that? What's the real question here?

[00:01:33] Jeremy Utley: Martin, maybe just as an opening question, to calibrate folks who are joining us and maybe don't know much about the Henderson Institute: would you talk for a second about BCG's Henderson Institute and the role it plays for the firm?

[00:01:48] Martin Reeves: Um, yes. So my discipline is basically strategy, competitive strategy, and competitive strategy is actually a relatively young discipline. It was founded in the early 1960s on the East Coast of the US. And one of the quirks of BCG's history is that our founder, Bruce Henderson, was one of the pioneers of the discipline of competitive strategy. So the firm has roots in strategy, and not just in commercializing the ideas of strategy, but actually in evolving and shaping the discipline of strategy. At first that was informal: essentially everybody in BCG at the very beginning was an originator of ideas and a practitioner employing those ideas. But at a certain point in our history, we decided to formalize that by setting up the institute. So the mission of the institute is essentially to continue Henderson's legacy of shaping the discipline of strategy. We extended that to all of our offering because, you know, it's been a long time since BCG ceased to be a pure strategy firm. And the mission is to inspire, so not just inform but inspire, the next game in business and its performance; not the current game but the next game; for the thought leaders in business; using the medium of ideas. So the key elements are essentially inspiration over information, the next game, the next set of ideas that we'll need, directed to the practitioner thought leaders in business, and, in this age where technology is important and change is important, sort of defending the traditional technology of ideas. That, essentially, is what we do.

[00:03:34] Jeremy Utley: You mentioned the word ideas several times there, and if you know anything about me and Henrik, you know we are lovers of ideas. How do you think about ideas in a world where we have this alien co-intelligence called artificial intelligence? How do you think about the nature of ideas, the dissemination of ideas, and even the metabolism of ideas for us as humans?

[00:04:02] Martin Reeves: Um, well, business, with all of its pressures for short-term performance, is often in the mode of doing. But I think to do strategy effectively, and I define strategy as any systematic pattern of thought or action which leads to an increased probability of advantage, you have to, to borrow a phrase from one of my collaborators, think strategically about thinking strategically while behaving strategically. So there's a behavioral layer, what you do, and you need to think about that behavioral layer strategically while still behaving. That ideas layer, that thinking layer, is essentially thinking about framing: what's the problem? Because the problem, the question, the task to be addressed is rarely the one that's given, and there are always infinite choices of how you frame a problem. It's partly about thinking how to address it, the problem-solving methodology and sequence. And it's also about innovating new approaches to that. So that in itself, I think, is an expansion of an execution- and performance-oriented, narrow view of business. And then we have this new form of cognition in town. One of the chapters in my book The Imagination Machine is called AI: Artificial Imagination, and it asks the question: will we ever have artificial imagination? I think it's an interesting question because the experts early on in the AI revolution came out and said, well, there are certain things that AI will never be able to do, and they're the most human things: imagination, ethics, and empathy, essentially. Well, I'd say that we don't really need to speculate about that, because we already have tools to potentiate and enhance human imagination. For any aspect of imagination, I can name a tool that already exists. I think the failing or the limit is more that we might use these tools not to their full potential, or use them unimaginatively, but the tools to assist human imagination already exist. In fact, in a clinical trial that some of my colleagues did, where we looked at a large sample of managers and consultants and gave them different types of tasks, the innovative tasks were actually the ones where the combination of humans and AI was relatively better. And the tasks where AI plus humans were relatively worse were what you might call business problem solving: fuzzy logic, fuzzy data sets, solve a shop-floor problem. Because AI is not particularly good at capturing the physics of the world, right? The training data set is what people said on the internet about a certain problem, not how to get things done in the real physical world. So absolutely, AI will play a major role in business imagination,

[00:07:02] Henrik Werdelin: also in ethics?

[00:07:03] Jeremy Utley: Now. Hmm.

[00:07:05] Martin Reeves: Um, ethics. Well, I think the limit there, it's not a technical limit, it's a definitional limit of AI, which is that at the end of the day, what are we doing? We're serving human ends. And unless we turn into automatons, only humans can specify their own ends. What problem do I want to solve? What change do I wish to bring about in the world? And which constraints, which ethical constraints, do I wish to place on that? It's our ethics, it's our purpose. So that's where we need human intervention. Now, at a technical level, we could ask questions like, what would be typical ethical issues with a solution like this, and get a fairly good survey. Again, it's a tool which can assist our ethics, but it can't replace our ethics. We get to decide what is right or wrong, and we get to decide what the purpose is. So I often laugh about the use of the word agent in AI, agentic AI. It's as if we are the objects and the agents are the artificial intelligence bots. But of course we have to be the agents. Otherwise we are the slaves of the AI agents.

[00:08:18] Jeremy Utley: Right. Right. We must be agentic. You wrote The Imagination Machine, one of my favorite books. It came out prior to ChatGPT, right? Was it 2021?

[00:08:29] Martin Reeves: I think it was 2021 or '22. '21, I think.

[00:08:34] Jeremy Utley: So the chapter Artificial Imagination was conceived prior to generative AI. What would you rewrite now that you've seen the impact of generative AI?

[00:08:48] Martin Reeves: Um, well, I could write more. It was just one chapter, and the book wasn't mainly about that, so I could write more. Actually, the book didn't really dwell on, well, there's a whole literature on what the latest technology can do right now, the technical performance of the AI; I didn't really deal with that. I dealt more with what is immutably true about imagination and what is technically possible, even if it's not the case today. So in that sense, I wouldn't change much. I think since writing the book, I've done a lot of thinking about AI and competitive advantage, so I would probably write more about that. Because I see a sort of grave oversight in that respect in the world, which is that as we get enthusiastic about the technology and its technical possibilities, we forget that those same possibilities are available to everybody. So if we all buy the tool, and it's trained on the same training data, and we're free to use it for any purpose our competitors are using it for, we haven't created competitive advantage. We may have created efficiency, but we've actually commoditized competitive advantage.


[00:10:01] Jeremy Utley: Right, there's just competitive parity, you could say.

[00:10:05] Martin Reeves: Parity. And that's a subtle thing: if somebody gets a 15% productivity lift by using new technology, you will be at a disadvantage if you don't do that too. But merely because you do that doesn't give you an advantage. So I've done a lot of thinking, with an academic strategist called Jay Barney at the University of Utah, about where the two intersect. Can you have the efficiency gains and the competitive advantage? And the short answer is yes, but not automatically. There are a couple of routes to that. One route, for example, is to reinforce existing advantage. So if Amazon, with its advantages in logistical and IT systems and knowledge of customers, enhances that existing advantage with AI, that probably could be a durable advantage. Then there's a very hard path, which is serial temporary advantage: you could have an advantage for a while by using the latest technology to a new end and being further down the learning curve than your competitors. But that's a very hard path, because you have to stay ahead; you can't stumble once, otherwise your competitors will overtake you. But I think the big area is this: competitive advantage, if I grossly simplify, is about doing difficult and valuable things which are hard to imitate. And one of the really difficult and valuable things to do is to have an aligned group of people that are collectively effective. And I think the difficulty increases if you introduce artificial intelligence into the mix. So imagine this sort of bionic organization of the future, where we have creative uses of AI, seamless alignment between humans and machines, partition of tasks, interfaces. That's going to be as difficult to replicate as culture.
So I think the basis for advantage in the future may, in that sense, be quite organizational. But what it absolutely isn't, in spite of what you might believe if you read most of the literature on AI, is the mere deployment of the best and latest technology. Especially for this technology, because it moves very quickly; whatever you do, your competitors are probably going to leapfrog you more quickly than with any other technology. And also because it was born open source. AI is not entirely open, but essentially you've got these large models, which are multiple and rather similar, they use similar training data, and they're essentially available to anyone for a price. So it's really closer to a story of commoditization than one of advantage.


[00:12:44] Henrik Werdelin: Can I just see if I can compute that? There are so many interesting elements in what you say. One thing, though, as somebody who is the practitioner, somebody who's building companies every day: it's really interesting, obviously, to ask ourselves, what is it that I'm great at that AI can make me better at? Because then you get the AI multiplier that you outlined. Could you touch a little, though, on, obviously with AI as we know it right now, it's not easy to think about how I enhance my culture. But as you think about how we design our organizations to be strategically aligned with the future we're about to get into, what are the organizational principles of that design? And how does that pertain to the culture you create within it?

[00:13:36] Martin Reeves: Well, you can ask the question without AI and then add it in, and I think you get to the same answer. My book The Imagination Machine deals with imagination and advantage. In order to have an effective, imaginative organization, that is, an organization which conceives of valuable things that are not the case and then causes them to become the case, counterfactual thinking and then the conversion of those counterfactuals into facts, you need a whole bunch of things to be true. You need people to be untethered from their current operating model, not seeing everything in the world through the lens of their current model. So you need a certain mindful flexibility. Secondly, you need the avoidance of complacency, because the backward-looking financial indicators may tell you things are fine, you're profitable, but that can be very misleading. I think complacency in formerly successful companies is a brake on innovation. It requires agility, because if you do discover a counterfactual that's interesting, you have to be able to move quickly, and large organizations have inertia; they tend to change very slowly and, on the whole, resist change. You have to have the optimal degree of alignment and diversity: you need enough diversity of thought to see the ideas, but you need alignment to actually get them done. And you need some way, if you're leaving a synchronized state to find a new synchronized state, to sort of increase the noise level and then reconverge. All of that is true whether you're talking humans or AI, and all of that is quite difficult. And in addition, if you're aligning two types of cognition, I think you've got to think about fitness for purpose.
If your complicated AI models cease to be relevant, or generate untruthful outcomes, how would you know? When would you know? There are famous financial collapses where very sophisticated models were so complex that people didn't really understand, at a granular level, exactly how they worked. How would you continue to audit fitness for purpose? Then there's ethical oversight, which is: we can do that, but should we do that? And the partition of roles: what do the humans do, what does the AI do? Maintenance of fitness: even if it were the case that humans were substitutable to some extent, if we become pilots that depend on the computer and forget how to fly, that's not necessarily a good thing in the long term. So how do we have tapered integration, so the AI does largely what it's good at and we do largely what we're good at, but we maintain enough overlap that we don't atrophy our essential capabilities? And bandwidth matching: humans can think more flexibly and subtly than an AI model. Humans are very good at meta-thinking, thinking about strategy while thinking strategically, while acting strategically, entertaining multiple perspectives and so on. But in terms of the amount and complexity of information processed, and finding weak signals in enormous haystacks, the AI is better. So that's two very different types of cognition trying to communicate with each other, and I don't think we've figured out yet what those interfaces look like. Add all of that together, and you would have, to coin a phrase, the bionically enhanced organization, where cognition was not only deployed, but deployed effectively for competitive advantage. And it would also be rather integrated.
You know, it's common that early in a technology revolution the early applications are spot applications: a particular step of a particular process, usually the things which are most tractable, the easiest things. But this would have to be for the organizational design as a whole. To put it another way: when people use the word organization in the future, what will they mean? They'll probably mean something, I would guess, like what I just described: some effective cognitive hybrid of humans and machines. And it's hard enough just to get the human cognitive surplus working properly, the collective intelligence, let alone adding to that the mixture of cognitive technologies.

[00:18:06] Henrik Werdelin: So if you sit and run a company today, all this is obviously very feasible and plausible. I guess two questions arise. One is, you know, is this short term, midterm, long term? And secondly, what do I change in my organizational design tomorrow? Do I still have a marketing team tomorrow? Do I still have HR? Or do I actually change something right now to allow me to walk the path that you've been describing?

[00:18:31] Martin Reeves: But the thing about innovation is you never fully know the answer to that question. You learn your way towards that answer. So I guess that translates to the question of where we are: are we at the stage where your question is the most important one? Which is, we know most of what we need to know, and we now have to redesign our organizations.

I think we don't, so some organizations are thinking about it, but I think it's very early days actually. It's primarily B2C; there's a lot of noise about B2B applications of AI but, you know, much less action according to the numbers that I've looked at. And I'm not sure we've found the killer app. I think we've found the spot applications. I'm not sure we've reconceived the enterprise, and I'm not sure we know how, but there's a bunch of companies out there thinking about that. So a bunch of companies are thinking about things like cognitive elevation, which is: how do we move human cognition to higher and higher levels, to more and more sophisticated tasks, and have AI deal with more routine tasks? I think people are thinking about this issue of validation. We've already had a number of scandals, in medicine, for example, where huge things were claimed of the AI. And since AI is not very traceable, it's very hard to say what's going on inside the model, because it's an emergent property of the model; you don't program the specific connections for a specific problem, they emerge through a learning process. So these scandals have been where a property was claimed and the models were too opaque or complex to say whether it was working, and we find out after the event that actually it didn't work well. There was a lot of enthusiasm, there were dollars poured into it. There are dermatology applications, for example, that diagnose cancers, that were thought to be a miracle of visual recognition and classification, and it turns out they don't actually work very well. And in medicine and many other areas, there are real consequences to that. So there are companies worrying about validation, because we don't really have standards of validation.
If I want to build an airplane, I have to comply with certain accident investigation protocols and the things we know about airplane design and safety. If I want to build a new drug, I have to do clinical trials, double-blind controlled trials. What are those validation processes for AI applications? We don't know. So I think any company needs to be thinking about these things; I'm not sure there are fully baked answers that they can adopt. I think a good candidate is that, yeah, probably whole swaths of functional departments may go away or transmute, because it's already the case that a large part of the traditional marketing department is now embedded in the algorithms of multi-sided market platforms. There have already been several revolutions, and we can learn from those. One of them is in social media, the AI algorithms that shape your feeds. Another one is in multi-sided marketplaces that recommend you the next product. So we already have algorithms that essentially do what marketing departments used to do, and do it at lower cost, at greater scale, and more quickly. A traditional marketing department might do a two-by-two matrix, sort of four segments of customers, and match products to different customer segments, with prices associated with those product variants that change every six months or every year. But we can now do that for every consumer individually with dynamic pricing

[00:22:06] Jeremy Utley: It's a two by two by two by two by two.

[00:22:08] Martin Reeves: At a lower cost, yeah. So, you know, with some things about AI, we ask, what's going to happen? And I think we're blind to the fact that, well, it already happened; there is history that we can learn from in these areas, as I've been saying. And also, it's actually proactive; this is not a passive affair. The future depends on what we do. Are we going to focus on cost-cutting applications or revenue-enhancing applications? Legal and regulatory constraints first, or legal and regulatory constraints later? You know, we get to decide.

[00:22:41] Jeremy Utley: That actually ties perfectly into where I was curious to go. I have two questions; I'll ask the first and then the second. You used this phrase earlier, I wrote it down: what's immutably true about imagination. Can you, for folks who don't know, or for folks like me who hear that phrase and it's catnip, say what is immutably true about imagination? And then I have a follow-on as well.

[00:23:09] Martin Reeves: Um, well, I think there are some things that are very strong findings from my research on imagination, either scientific fact or consistent observations. I'd say one of the interesting paradoxes is that it's an almost entirely, uniquely human trait: the ability to think about things that are not the case, hypotheticals and counterfactuals, and then use our agency to make them the case. I think that's almost entirely human. And every 5-year-old can do that. But the paradox is that, by default, it becomes extraordinarily hard for large groups of middle-aged people and companies to muster that skill that every 5-year-old has. So I think that's sort of why,

[00:23:55] Jeremy Utley: Why is that? What's the reason?

[00:23:57] Martin Reeves: I think there are so many reasons. I think companies start small and renegade. I mean, almost by definition, a small company, a new company, has no chance of succeeding in the world against incumbents unless it does something differently. There's no point in being a very, very tiny version of Procter and Gamble. You've got to do something differently: a different product, a different business model. But once that business model has been found, for the survival and flourishing of the company, so that the investors get a return, it needs to be perfected and scaled. So essentially you go from imagining and first realization to optimization and scaling.
And in the process of optimization and scaling, you deploy the Adam Smith principle of the division of work, which is you say, well, hang on, Jeremy can't do it all. Let's have a marketing department and a production department. Now they're each only looking at parts of the company. And if your company is still around for people to talk about, it will be very big, i.e., hard to change, and it'll be successful. Why would you walk away from something successful? Changing would involve realignment, the disturbance of a very well-established pattern of thought, personal and corporate risk-taking that may seem unnecessary, a sense of urgency that may be lacking in a salaried, formerly and maybe currently successful enterprise. So these are all barriers to collective imagination: complacency, scale, the mental bias of seeing things through the lens of your currently successful model, and the sheer complexity of changing the minds and actions of a large group of people. And then the other one is the cyclicality of imagination. It's always been the case that nothing lasts forever in business, so we always had to not only imagine but reimagine. It's just that the timescales were usually longer than a managerial career: the next CEO could think about the reinvention of the company; I'm just going to focus on optimizing total shareholder return.

[00:26:04] Jeremy Utley: I love that. The timescales.

[00:26:06] Martin Reeves: Whereas today, that's not the case. I mean, we can show, there's something called the advantage decay rate, which is the rate at which competitive advantage, the relative performance differential compared to competitors, fades. And that used to be roughly 10 years in most industries, and it's now roughly a year. So the world is moving 10 times faster with respect to competitive advantage. What does that mean? It means every large company, difficult though it is, needs to have that sort of startup mentality.


[00:26:38] Jeremy Utley: Okay, so this perfectly dovetails into my follow-up question. You said earlier that everyone misunderstands AI and competitive advantage, or rather that there's been an oversight. And you spoke about how you are disadvantaged if you fail to accumulate the efficiency gains available today, but you aren't necessarily advantaged if you do. So here's my hypothesis, and it's actually founded in something you wrote years ago, one of the seminal quotes I remember ever reading in my life. I attribute it to you; I don't know if you wrote it or one of your colleagues did, but it's got your name on it. You said imagination is sparked by unexpected inputs.

[00:27:20] Martin Reeves: Yeah, no, that's a major part of The Imagination Machine.

[00:27:24] Jeremy Utley: So here's my hypothesis that I would love for you to react to. While achieving efficiency gains doesn't ultimately result in competitive advantage, it is a necessary precondition, because it's what sparks imaginations of how to achieve competitive advantage.

[00:27:47] Martin Reeves: Right. Well, I think this is a very interesting subject. It goes to the heart of what is difficult and valuable and hard to replicate about both exploration and exploitation. So, yeah, for a truly successful company in the long term, there've always been two jobs in business. We often simplify and think that it's about optimizing the performance of the company, but on long enough timescales, and I know those timescales are compressing, there were always two jobs. One of them is to play the current game effectively: play it better than your competitors, extract returns, drive efficiency,

and you have to do that. Otherwise you don't have the funds to pay for the future, or the license to buy a different future. But you do have to have a different future, because nothing lasts forever. So you also have to ask the second question, which is: what is the next game? And those games are not only somewhat contradictory, they're actually fundamentally different in nature. The mental procedures and capabilities of exploitation of the current model are things like analysis, incremental learning, and deduction. It may be hard in its own way; it may be hard to find the 5% of efficiency. But you're dealing with data and well-established patterns. Innovation, on the other hand, which is the topic of my latest book, Like: The Button That Changed the World, is very serendipitous. We may set out with an intention, as in the case of Alexander Graham Bell, to find a multiplex telegraph, and accidentally discover the telephone. And we may have a certain vision of how the telephone will change the world, and usually it turns out to be completely wrong. What that useful thing does has far-reaching, serendipitous ramifications.

So this double game, what strategists call ambidexterity, exploring while exploiting, involves very different mindsets and very different skills.

[00:29:57] Jeremy Utley: To cut to the chase a little bit: is what you're getting at that the people who are responsible for playing the current game effectively have difficulty imagining what the next game is? And so my hypothesis, that achieving efficiency gains is necessary to unlocking actual durable competitive advantage,

the problem with my hypothesis is, I think you're saying, that those same brains that are good at playing the current game may not be the brains that can imagine the next one.

[00:30:25] Martin Reeves: It's not entirely true, but it's provably mostly true. In other words, for my first book, Your Strategy Needs a Strategy,

I looked at five types of strategy. The big idea in the book was that for different types of situations, you actually need fundamentally different approaches to strategy: visionary, classical, adaptive, and so on. And we actually created a game, using a sort of artificial intelligence if you will, a population of multi-armed bandit algorithms, to simulate any strategic situation. Then we had thousands of people playing this game over the years, so we were able to collect data on what types of strategy problems people are good at. It turns out that about 3% of people are good at both the innovation game and the optimization game. Not a lot. Why? Because they're fundamentally different skills. So if you want to do both of these things, you have to do some subtle things.

You've got to hire different types of people. You've got to help them get along with each other. You've got to balance the portfolio. You've got to have teams that incorporate both components. You've got to make choices like: do I try really hard to hire ambidextrous leaders? Do I hire people who are skilled in one or the other discipline and artfully combine them? Or do I sequence them in time, saying, right now we need to focus on optimization, and then change the people as the business progresses and match the skills to the situation? That is the hard part about dynamic strategy: this shift from exploration to exploitation, and then self-disruption from within, going back to exploration, needs to occur continuously. And that's a rather hard thing. And in competitive terms, that's

rather a good thing. Precisely because it is hard, it's a basis for competitive advantage.
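Martin describes the game as built on "a population of multi-armed bandit algorithms." As a rough, illustrative sketch only (this is not BCG's actual simulation; the payoff probabilities, the epsilon value, and the function name are all invented for the example), an epsilon-greedy bandit makes the exploration/exploitation trade-off he describes concrete:

```python
import random

def simulate_bandit(true_payoffs, epsilon=0.1, rounds=1000, seed=0):
    """Play an epsilon-greedy strategy against a set of arms.

    true_payoffs: success probability of each arm (the 'strategic options').
    Returns total reward earned and how often each arm was pulled.
    """
    rng = random.Random(seed)
    n = len(true_payoffs)
    pulls = [0] * n          # how many times each option was tried
    wins = [0.0] * n         # observed reward per option
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            # explore: try a random option, even a seemingly bad one
            arm = rng.randrange(n)
        else:
            # exploit: back the option with the best observed average so far
            arm = max(range(n), key=lambda i: wins[i] / pulls[i] if pulls[i] else 0.0)
        reward = 1.0 if rng.random() < true_payoffs[arm] else 0.0
        pulls[arm] += 1
        wins[arm] += reward
        total += reward
    return total, pulls

total, pulls = simulate_bandit([0.2, 0.5, 0.8])
print(total, pulls)  # most pulls should concentrate on the 0.8 arm
```

The epsilon parameter is the dial the conversation is about: at 0 the player only exploits what already works, at 1 it only explores, and performance depends on balancing the two, which is exactly the ambidexterity so few players manage.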

[00:32:23] Jeremy Utley: Okay, so say I agree. I really appreciate the thought that training your existing workforce to derive efficiency gains is insufficient for discovering a new competitive advantage, because so few of those people are capable of doing both. Let's take that as a given, even though perhaps we could develop it more. How does an organization structurally discover the horizon of competitive advantage if its current employee base is, for lack of a better word, likely incapable of imagining it themselves?

[00:33:02] Martin Reeves: Well, I think that question is equivalent to: how do you harness serendipity? We know that serendipity is not the same as randomness, but we also know that it's extraordinarily difficult to predict; it is hard to predict innovation. So how do you enhance the power of serendipity in an organization? I think you can do it in a number of ways. You can expose the organization to heterogeneity. An externally oriented organization is less likely to breathe in its own smoke and more likely to see something that doesn't fit, some anomaly that stimulates thought. So external orientation is one way.

Hiring for cognitive diversity is another: having different minds that are capable of looking at the same situation and coming up with different solutions. Having a culture that says that's actually valuable, the fact that we're not completely aligned. We agree to be aligned, for the most part, on the core business model, but in the new areas we are deliberately unaligned, because we're exploring. Not many cultures can do that. That's another thing you can do. You can also train people. As part of the Imagination Machine book, I spent some time looking at educational systems, and my big question was this: we're trained in many different styles of thought in our educations, right? We're trained in deductive thought and inductive thought. But in terms of counterfactual thinking, the only training I received was freestyle drawing and role play in kindergarten. Beyond that, I didn't receive any formal education in the art of counterfactual thinking. But you can train people to do things like mining analogies (what is this like?), or multiple working hypotheses (we could look at it this way, we could look at it that way), or discriminating between facts and choices of mental model. Often we'll say something like, I have a 2% share of the pharmaceutical industry, as if that were a fact. Well, it's not a fact. It depends on how you frame the borders of the pharmaceutical industry

and what you regard as within the share versus some other activity. It's a mental choice. So you can train people to question assumptions and discriminate between facts and mental models. You can have people who are skilled in deliberately manipulating the degree of divergence in a meeting, making choices: is this a convergent meeting? No, it's devil's

[00:35:20] Jeremy Utley: advocacy, red teaming,

[00:35:21] Martin Reeves: But also the choice of what this meeting is for. Is this meeting to optimize, or is it actually to question and challenge, and how do we do that? You can train people in that. I was fortunate to receive training in de Bono's Six Thinking Hats, which essentially is choicefulness about different styles of thinking, and deliberate agreement among a group of people on which ones to deploy. So there's a whole bunch of things you can do to up your odds of being able to deploy collective imagination.

[00:35:53] Henrik Werdelin: You guys have done a lot of research on the use of AI in organizations. You mentioned when we started the conversation that there were some things a person with AI was better at than others. What are some of the other results of that research that help people understand, if they were to allocate time and resources against some use of AI, where the good bets are, where the good hunting grounds are?

[00:36:28] Martin Reeves: I think the initial view was, so Kai-Fu Lee, when he wrote one of the early books by a true AI expert on what AI will and won't be able to do, essentially asserted the extremely plausible hypothesis that it was about ethics, empathy,

and imagination. That those were the human capabilities that would survive the challenge from AI, substitutability by AI. I think we've discovered that that's not the case with imagination, because we can have aids to imagination. And this mirrors a more general thought. In my book Like, I looked at the detailed evolution of the like button: this recognition tool that creates a currency of recognition, which enabled the targeting of feeds, which enabled a different type of advertising proposition, and therefore permitted the takeoff of social media and triggered the disruption of the marketing and advertising industry.

And one of the really interesting facts, when I interviewed the pioneer companies behind the advent of the like button, is that none of the pioneers of the button foresaw its eventual evolution. So it's natural that in the early stages of a technology we turn to the experts, the people who either purport to know something about the new technology or actually seem to, because they worked for Google or they had jobs or professorships or whatever. But the track record of experts early in a technology

is pretty disastrous, actually. You'd be better off not simply assuming they're right.

[00:38:05] Henrik Werdelin: Isn't that your other point, that somebody who was trying to come up with a better telegraph came up with a telephone? A lot of the time the score kind of takes care of itself, but often not where you expect it to. Right.

[00:38:16] Martin Reeves: So I think we're discovering that part of innovation is looking for the unexpected, and so is competitive strategy. If you ask me what is generally true of the world, I could make some assertions. For instance, my wife yesterday was looking for some sort of portable wifi device for the car. One of my daughters has a long trip this week and she has to do her homework in the car, so she needs a connection. And we speculated: wouldn't it be great if you could combine a hard disk drive with a battery pack, with a MiFi, with a router? Why couldn't you put all that in one package?

Because right now you have to buy the different devices separately. And I thought it was a really interesting discussion, because we could ask: is that generally going to be an easy thing to do? No, it's pretty hard. It's going to be hard to get all of those devices into one box.

But the thing about competitive strategy is that it doesn't deal with what is generally true. It deals with the ability to create the exception; strategy is all about exceptions. If you only do the things which are generally true of your sector, the things your competitors do, you probably wouldn't exist, or at best you'd be one of a number of commodities in your sector. So I think devil's-advocate thinking helps: saying, yes, it's generally going to be hard for AI to do ethics, but what can we do with AI and ethics? How could we support human ethics? Looking for exceptions, looking for analogs in other spheres. That's one thing I could say. In terms of judging what is going to be generally hard and what isn't, I think the training data is a good guide, isn't it? What is the AI being trained on? The AI is being trained on speech acts, what people say, in large corpora of textual data. It's not trained on the physics of the world. The closest it gets to knowing that if you push this object it's going to move, or it isn't, is what people say about the physics of the world. But between what people say about the physics of the world and actually making things happen, the social physics versus the actual physics, it's probably going to be hard for it to do actual physics. So there are certain areas where you could ask yourself the question: what is not in the training data? And within the training data,

there are going to be dominant patterns, right? If you pick something that's very old, for instance: I was doing some research the other day on the very old discipline, which came out of the military, of operations research. A particular form of problem solving that was pioneered before powerful digital computers, to assist with military problems like finding downed pilots in vast swaths of the ocean. Because it is a very old and now rather unfashionable subject, the AI struggles to find it. And the AI cannot tell you about your own purpose, right? What is my purpose here? Your purpose is yours; you get to decide. Some things are not immutable facts. They are choices made by human beings with their values and their agency. Now, the question is what you do with those signals of difficulty or ease. You're asking, where should I choose to apply the AI? Well, you could look at it one of two ways. You could say, well, I don't want to be stuck funding something that's going to be very difficult, because I might run out of money,

so I'm going to go for an easy application.

Or you could go the other way and say, well, precisely because it's hard, because it has never been done, there might be a very large prize associated with it. And that's always a tension in competitive strategy. It's a difficult, fuzzy, managerial calculation, because there's no data on what doesn't yet exist. So you learn your way to the right balance. That's where a lot of art and fuzziness and human judgment comes in. Because what you can't do with AI is say: look at the population of all things that don't exist and assign a probability to each of them. That just can't be done. But what you can say is: let me learn my way to the future with choices that combine the two properly, in the right sequence.

[00:42:22] Jeremy Utley: Neither can humans, right? What innovators do is create data through deploying prototypes and experiments. And actually, I've been working with organizations that are creating, for example, synthetic audiences. All of a sudden, if you replicate a million decisions, you can almost Monte Carlo simulate what a population will do. So the cost of creating data is coming down by multiple orders of magnitude.
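Jeremy's synthetic-audience idea can be sketched as a tiny Monte Carlo simulation. This is a hypothetical illustration only: the persona mix, the yes-probabilities, and the function name are made up, and real synthetic-audience tools work with far richer persona models than a single probability per segment.

```python
import random

def monte_carlo_adoption(personas, trials=100_000, seed=1):
    """Estimate how a population responds to an offer by replaying many
    simulated individual decisions.

    personas: list of (share_of_population, probability_of_saying_yes).
    Returns the estimated adoption rate across the whole population.
    """
    rng = random.Random(seed)
    weights = [share for share, _ in personas]
    yes = 0
    for _ in range(trials):
        # draw a persona in proportion to its population share,
        # then sample that persona's individual decision
        _, p_yes = rng.choices(personas, weights=weights, k=1)[0]
        if rng.random() < p_yes:
            yes += 1
    return yes / trials

# Invented audience: 60% skeptics, 30% pragmatists, 10% enthusiasts
rate = monte_carlo_adoption([(0.6, 0.05), (0.3, 0.4), (0.1, 0.9)])
print(rate)  # should land near 0.6*0.05 + 0.3*0.4 + 0.1*0.9 = 0.24
```

With 100,000 simulated decisions the estimate settles close to the weighted average of the persona probabilities, which is the sense in which replaying many cheap decisions "creates data" about a population before any real-world launch.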

[00:42:49] Henrik Werdelin: I think my final question is: how do you use AI on an everyday basis?

[00:42:56] Martin Reeves: What I've discovered is that I often do rapid exploration, rapid landscaping, with AI. If I'm beginning to think in a certain area, I ask the broadest questions I can. The thing that really takes time in getting to know a new area is the really broad questions, like: historically, where did the current perspective come from? What are the schools of thought in this area? How does this discipline interact with that discipline? What are 20 examples of the thing I'm studying? So I do rapid exploration. What I've found with the current technologies, and this may change over time, with GPT-5 and Grok and all of the models, is that when I really know an area deeply, the results I get are alarmingly fallible: in the details, there are a lot of things that are not right and true. And it really does depend on what people are calling prompt engineering, as if there were a formula for the right questions. But that was always the most important question in strategy: what's the question? When I had a PC in my office (now everybody has laptops), I used to have a sticky note pinned on my PC with the best questions, the questions I would use every day. The sticky note was there to remind me to ask questions,

and one of the questions was: what's the real question? Because things present themselves as a problem, right? The CEO says, please help me do a cost reduction. But unless you ask, yes, but what's the real problem here, why would you want to do that, you don't get to the right framing of the problem. Another of those questions was the divergent one: what is that an example of? That subject is an example of some larger edifice of thought; what is that? And then the convergent one: give me an example of that. What is that an example of? Give me an example of that. Another of these questions was: what are the best questions? Because normally, in a consulting assignment or an innovation project, you don't know any answers. You may think you do, but generally it pans out that you don't. What you can have is questions that force you to explore and bump into the anomalies that trigger the imagination. There's very firm research showing that the human mind is essentially an anomaly detector. It's like those children's puzzles: spot five differences between these two versions of a drawing. We're extraordinarily good at that, and that's what triggers the imagination. The question there is:

[00:45:44] Jeremy Utley: Do we dismiss the outlier, or do we give it valence? Right. And I think the tendency of the expert is probably to dismiss it, while the tendency of the novice, the beginner's mind, is to give valence to the anomaly.

[00:45:58] Martin Reeves: Yeah. I'm not sure whether you do this in your area, where imagination plays an important role, but I often learn from people who know nothing about the subject. Because being forced to ask a question of somebody who knows nothing about a subject forces me to frame it in common-language terms. If you describe, say, market share in anti-cancer drugs without using the words "share" or "anti-cancer drugs", it really forces you to think a lot about the language. There are a lot of hidden assumptions in the words we use. And then the reply from kids that begins, "Well, I know nothing about that, but...", often gives you a left-field, imaginative response that, upon reflection, has an interesting imaginative seed to dig into. I found today's conversation really interesting. It feels to me like we've essentially explored the underlying capabilities of purposeful human thought in many different areas. The surface questions are: what can the AI do? How do organizations work? Can you be ambidextrous? How do you innovate? But it all boils down, in the final instance, to: how do you think effectively about those sorts of problems? And that's one of the common threads of strategy, in the deep stratum: the question, how should we think about that?

[00:47:29] Jeremy Utley: I wanted to end our discussion there, Martin, because you run a think tank at a consulting firm, and your job is to think about what's the next game. We'd be remiss if we didn't ask you: thinking about where consulting is as an industry now, given the emergence of AI, and given that AI proficiency among consultants is insufficient for durable competitive advantage, what's the next game?

[00:47:59] Martin Reeves: Well, consulting is a business, is one thing I'd say. So that business has to renew itself, and we're in the midst of a disruption. Can AI write slides? Yes. Can it perform difficult calculations? Yes. Can it find weak signals? Yes. So consulting obviously will change, and by the way, it has been changing. I remember my first project when I joined BCG. It sounds absurd, because there would never be a project like this today: it was to calculate the size of the European automotive spring market. There are little springs in your car doors, in your carburetors, in your...

[00:48:43] Jeremy Utley: classic,

[00:48:43] Martin Reeves: That's a

[00:48:44] Jeremy Utley: classic case study, uh, interview question, isn't it?

[00:48:47] Martin Reeves: Right. And actually the statistics didn't exist, which is why somebody thinking of going into that space wanted us to have the answer to that question. So we had to interview people in order to calibrate a model; we had to build a model of the industry. You would never do that today, because data exists for most things, and that data is traded and monetized. And I could give you many other examples. For instance, the advent of personal computers: early in my career I went to the Tokyo office, and Japanese word processors existed, but they were cumbersome. The best was, I think, a program called Word, and it was incredibly complicated. You had to use shortcuts like Control-C plus Y to do something or other, and people would never learn it. So the accountant hand-drew the slides in calligraphy. It's true. And you could never redo a slide, because he would refuse. He's

[00:49:50] Jeremy Utley: sleeping

[00:49:51] Martin Reeves: And you could never have more than 20 slides in a deck, because it was too much for the calligrapher. So that changed, obviously. AI will change things too, and we have to be open to change. What's different this time is that it's faster moving, and there are probably bigger units of CapEx involved. If I'm hiring new types of people, that's easy, right? I just hire a specialist with a particular type of skill. If I have to build a computer network or a large model or something, there's more money involved, higher stakes. And we're at the stage where we don't really know. We expect great things; we're in the exuberance phase of the technology. But there have been previous exuberant phases of artificial intelligence. I remember when DEC, the Digital Equipment Corporation, in the eighties I think, had machine learning tools that simplified procurement, that were able to optimize your procurement, and people thought this was going to take over the world. But all of the companies involved in that revolution, about a hundred or so around that theme,

disappeared within 10 years. So we don't know the future, and we're precisely at that stage now. The second thing I'd say is that the skills will probably change, and they're already changing. Historically we hired generalists, and then, as clients became more demanding in different industries, we started to hire specialists in particular industries, and we built specialisms. Now we still need specialists, but we also need a new type of generalist who is able to think deeply enough to question the technologies,

and a lot of what matters about the technology is more about sociology and anthropology and the design of human systems. So it's almost as if we need a barbell model: more emphasis on human skills and the specialties around them, and more emphasis on technology skills. Already my company, and I think most of the consulting companies, are hiring for those new skills. And I think we need to experiment. There was a phase when our business, like many businesses, was a comfortable optimization and growth game. Not necessarily easy, but mostly a case of more and better. Now we're bumping into different questions, right? How do you have technologists, sort of like programmers, and anthropologists on the same team, who know how to talk to each other? How do you deploy that? How does that work? That's a new organizational model. So in short, I think consultants will be forced, like other industries, to adapt and experiment and learn their way through what seems like an inevitable disruption. And those times of disruption are when number twos become number ones, number ones become number threes, or they disappear altogether. It's a high-beta game: the chances of something bad or good happening to you, the competitive volatility, increase. And strategists have this rather perverse behavior of rubbing their hands in interesting times. So that may sound like difficult times if you're the CEO of a consulting company, but to me it sounds like really interesting times.

[00:53:09] Jeremy Utley: That's brilliant. That's a perfect place to end, actually. That was awesome. Thank you, Martin.

[00:53:14] Henrik Werdelin: Jeremy, that was so fascinating. A lot of stuff compacted into this one, huh?

[00:53:22] Jeremy Utley: It felt like a dog-years conversation: it's going to take me longer to unpack it than it took to have it in real time. One thing I'm reflecting on at this very moment is the idea that prompt engineering is fundamentally a strategy tactic, because, as he said at the end, the biggest thing in strategy is: what's the question? And prompt engineering is obviously about what's the question. I'd never really thought about prompting as a strategy tactic, but I do believe, and I've observed and experienced it with everyone I've ever worked with and taught, that the quality of your output is directly a function of the quality of your input,

with AI and with strategy as well. I'm oftentimes dismissive of conversations about prompt engineering, because it sounds too technical or something like that; the whole craft or art of it, there's something distasteful about it. And yet when you frame it as a tactic for better strategy, all of a sudden it's cast in a totally different light, which I thought was

pretty, pretty

[00:54:27] Henrik Werdelin: Maybe even to bring that home: one of the questions he kept on his PC, among the questions he would ask, was "what's the real question?" You would never imagine GPT-5 coming back with that, right? When you say, hey, can you tell me about this and this, could you imagine GPT coming back with:

yeah, but what's your real question? What are you really asking about? Right. But I think what you're saying there is that that's basically something you have to pre-bake into the question you're posing, right? That is the strategy.

[00:55:02] Jeremy Utley: You have to think about your thinking. It reminded me a lot of our conversation with Stephen Kosslyn, the former dean at Harvard, because it's really metacognitive. I mean, Martin's whole book is called Your Strategy Needs a Strategy, right? He's thinking about thinking. And what's interesting, Henrik, is you said GPT-5 would never do that, but you should say dot, dot, dot:

unless your custom instructions say, always ask me what's the real question. And then all of a sudden you have a cognitive prosthetic, cognitive brain testosterone, that always reminds you of the important question. Which leads me, which leads me...

[00:55:38] Henrik Werdelin: Which leads me to one of my observations: he talked about how innovation, or originality, you know, imagination,

ethics, and empathy were the three things that, in the initial wave of AI conversations, we assumed would be difficult for AI and reserved for humans. And I think he made the obviously compelling point that, actually, AI tools can be great assistants in those three areas.

Probably even ethics: it could probably help you think about what your ethics are.

[00:56:14] Jeremy Utley: for sure.

[00:56:15] Henrik Werdelin: But what's also underlying it is: yes, but it is the human in the end, as he pointed out. Those are all things that, at the end of the day, nobody can really answer but you.

What is your idea of a new idea? What is your idea of right and wrong? What is your idea of what somebody else might think? And so, yeah, I think what he's articulating is that it's not necessarily binary, a question of what humans can still be good at; the technology will be used for everything, but there are some things where the human is the

[00:56:53] Jeremy Utley: end. Yeah. One thing I found fascinating, and I'm kind of riffing here on stuff that stood out, is one statement: the track record of experts is very bad with regard to new technologies. And the fact that someone's an expert, say, in AI... you and I are supposedly experts in AI, right?

Do we have the humility, as we approach this transformation, to admit that we're very bad at predicting where it's going to go? I think there's an intellectual humility there that I find refreshing and also challenging, because I know you get asked questions a lot, I get asked questions a lot, and there's real pressure, when you get asked a question, to have an answer, you know?

And the last thing you want to say is, I don't know. And yet some of the predictions of technologists in the past look patently absurd and ridiculous, specifically because they were unwilling to admit how little they knew.

[00:57:47] Henrik Werdelin: Hmm. Yeah. The other thing that stood out to me was a simple question, and I'm still framing it in the context of how you get a competitive advantage, how you stand out, either as an individual or as a company.

And I think the simple question of what is not in the training data is an interesting thought. And then the next one: what do you have as an organization or as an individual that you might be able to add, either in your problems, in your questions, or in your decision-making, that will keep you competitive?

I think that's an interesting thought process that I'm probably going to carry with me for the next few weeks.

[00:58:32] Jeremy Utley: One, one kinda stat, which I great, you know, from the game associated with your strategy needs strategy. Only 3% of players were good at the double game. The ambidexterity. Just recognizing that, 'cause I do think in so many cases, the job of imagining what's next for an organization. Is given to people who are good at the current game.

[00:58:55] Henrik Werdelin: Yeah. The only thing I would say to that is that it's obviously more complicated when you actually sit in it. I was going to say a lot of companies recognize that stat, but actually that's not true; I think some companies definitely do not understand it.

They probably don't have the answer. Some people do. But the question obviously being posed to a CEO or head of innovation is: yeah, that's great, but we have to do something tomorrow; we can't just be pondering. And maybe then the whole thing comes back to experimentation, to figuring out how to experiment as cheaply as possible, so that you can afford quite a lot of shots on goal.

[00:59:33] Jeremy Utley: So many of these conversations come back to this idea of a culture of experimentation, don't they?

[00:59:38] Henrik Werdelin: They do, a lot. But what I can't figure out is whether that is just kind of an easy out, right? Like, uh, what are we gonna do? We have no idea. Let's just try shit. And you kind of go like, right, yeah.

But ideally you can try shit that has some directionality of success, or at least you had a reason for trying that specific thing. And so I'm a little bit torn between the: yes, we actually need to build a new operational model for a lot of these companies, because you're gonna have generalists that have the ability to try a lot, and I think he was talking about taking imagination into something actionable. But how you actually do that is something that I'm not necessarily sure we get wiser on when people say, hey, you should experiment more.

[01:00:25] Jeremy Utley: I think, uh, to your point, Henrik, insofar as experiment is, um, code for being willy-nilly. And what I hate, by the way, is when people ex post rationalize something as an experiment: uh, we were just experimenting, right?

But that being said, I do think an experiment should be deployed scientifically, as you said, purposefully, with directionality. I think the danger is: we don't wanna do anything willy-nilly, so we don't do anything half-baked or without full knowledge of how it's gonna go. Yeah. And I would say a negative, adverse reaction to kind of the wrong framing of experiment will lead many organizations to therefore not experiment at all.

[01:01:09] Jeremy Utley: I think that's very fair.

[01:01:11] Henrik Werdelin: You know, the last thing, and I really think this is something that I took back from the Acorn Method, my last book about entrepreneurship in organizations, which had nothing to do with AI, but I think it was similar in that, really, I think the best way to reduce the risk is to reduce the cost.

And so a lot of the time, the issue, I think, in an organization becomes that there's not a lot of room for origination. There's honestly often not a lot of new ideas, and once one materializes, it becomes the idea that then everybody's chasing and everybody's compounding stuff onto.

'Cause finally there's something new that is seemingly getting approved, and then it has to work.

[01:01:51] Jeremy Utley: It's like a reconciliation bill. Not to go political, but it's like everything's gotta be stuffed into this one thing rather than just being a simple, straightforward experiment.

[01:02:01] Henrik Werdelin: And so, like, then, you know, it has to work, and then people get nervous, and when people get nervous, they get back to what they know. And so therefore you end up with something that's actually not that much of a new idea anyway, because you kind of can't have it fail.

And so how do you get into the habit of the hundreds, the thousands of experiments? That is maybe something that we should figure out how to become better at. Obviously, that's part of your world.

[01:02:29] Jeremy Utley: Two questions that I find are very helpful for leaders. Um, one is: what are we trying? And the emphasis there is on the word trying. You know, we don't know it's gonna work, right? Try implies a lack of certainty. And then the second question, you ready for it? What else are we trying?

[01:02:47] Henrik Werdelin: Hmm.

[01:02:49] Jeremy Utley: 'cause then that implies options. That implies volume, it implies paralyzation, it implies potentially duplication of effort.

I think if a leader would be comfortable asking, what are we trying, and what else are we trying, that would unleash a lot more experimentation.

[01:03:06] Henrik Werdelin: Hmm, I like that a lot. The last thing I picked up from the conversation, or that I wrote down, was: what am I great at that AI can make me even better at?

Mm-hmm. And I think it kind of goes back to fundamental questions that a lot of companies don't really ask themselves: who do I serve? What's the problem I'm solving for those people? A lot of people just define themselves by what they do, right? You know, like, I'm a media company. But your customers don't care. Um, but I think increasingly it's: do I actually look inwards, not just at what I make my money on, but at what the organization is really good at? Where do I have a culture of excellence? Uh, for example, at BARK, we're very good at supply chain. Like, you know, people see us as a creative products-for-dogs brand, but we're very good at our supply chain.

And so when you then look at AI and say, you know, hey, I need to have an advantage; everybody else will have access to GPT-5. So how do I not make it my success metric that everybody's using GPT-5? 'Cause everybody's gonna use GPT or whatever model they use. Then what can I do where I'll be able to kind of magnify the impact that AI can have?

That's sort of an interesting question in my mind.

[01:04:20] Jeremy Utley: Well, if you think about it as a multiplier, uh, going back almost to our conversation with Nicholas Thompson, right? Yeah. About your unwired capabilities. And for folks who haven't listened to that, the basic premise was: your unwired capabilities still matter, because AI is a multiplier.

And if, say, the multiple is 10: you bench 20 pounds, then now you can bench 200, right? But if you can bench 200, now you can bench 2,000. I think another way to think about unwired capabilities times the multiplier of AI is to say, what am I great at? Or effectively ask yourself the question: which muscle set can lift more weight than others? Right? Is it my supply chain? Like, do we have a deadlift of 900 in our supply chain? Great, then the deadlift is actually worth, uh, amplifying with AI. Right. That's an interesting kind of assessment criterion there.

[01:05:13] Henrik Werdelin: Maybe that's actually one of the questions we should give people when they ask us, like, where should I go and try to implement AI?

I think historically I've said: go where you feel there are bright spots, where you have organizational pull. But another one would be: where are you already really good?

[01:05:32] Jeremy Utley: Excellent. Yeah. Yeah.

[01:05:34] Henrik Werdelin: Anyway. Yeah.

[01:05:34] Jeremy Utley: That's cool.

[01:05:35] Henrik Werdelin: Let's wrap it.

[01:05:37] Jeremy Utley: Folks, if you enjoyed this episode, and if you would like more big thinking from Henrik Werdelin and Jeremy, please hit like, subscribe, share with your neighbor, share with your kids, share with your grandparent, share with your dog. And, uh, until next time,

[01:05:58] Henrik Werdelin: bye.