Beyond The Prompt - How to use AI in your company

How to Make an AI-Driven Culture Transformation with Moderna’s Head of AI, Brice Challamel

Episode Summary

In this episode, Brice Challamel, Head of AI Products and Platforms at Moderna, pulls back the curtain on how AI is transforming not just workflows but entire organizational cultures. Brice introduces the concept of the AI "catalyst" - a role focused on accelerating safe, effective change - while sharing practical strategies for fostering widespread AI adoption. Drawing on examples from his leadership at Moderna, Brice explains why it's reckless not to integrate AI into your processes, how to avoid outdated approaches (what he calls "Fred-like behavior") that hold organizations back, and why every individual in a company should think of themselves as a five-person team, augmented by AI. This episode is a masterclass in how to lead through change, prioritize high-impact innovation, and rethink leadership in the age of AI.

Episode Notes


Key Takeaways:

Moderna's website: Pioneering mRNA technology - Moderna

00:00 Introduction to Brice Challamel and His Role at Moderna
00:45 What It Means to Be an AI Catalyst
02:30 Leadership in the AI Era: Moving Beyond Traditional Hierarchies
06:10 Avoiding "Fred-like Behavior" in a Modern Workplace
10:20 Augmenting Human Potential: The Five-Person AI Team
14:50 How Moderna Manages 700+ GPTs Safely and Efficiently
20:30 Democratizing AI: Why It's a Utility for Every Employee
27:15 Using AI to Transform Personal and Professional Growth
32:40 Frameworks for Incident Management in AI Integration
39:15 AI’s Role in Healthcare, Radiology, and Beyond
43:30 The Generative AI Champions Team: GCAT in Action
50:00 The Future of Work: Collaboration and Culture with AI
01:03:27 Reflections on AI Adoption and Leadership Insights

📜 Read the transcript for this episode: Transcript of How to Make an AI-Driven Culture Transformation with Moderna’s Head of AI, Brice Challamel

Episode Transcription

[00:00:00] Brice Challamel: Hi, I'm Brice. I'm the head of AI products and platforms at Moderna. I'm also in charge of AI innovation and transformation, which goes with it. My role is to promote every way to augment our people so that we can make the greatest possible impact with our medicines.

[00:00:16] Jeremy Utley: I'll just start as a fan. I remember seeing the headlines last year: "Moderna employees build 700 GPTs." And I was like, wait, what? So the fact that we get to talk to the leader who drove that cultural transformation is a privilege. The way I've thought about you is really more as a change management guru, and it's a totally different kind of conversation than the ones we often have. So I'm really looking forward to learning from you on the org change, the behavior change.

[00:00:48] Brice Challamel: I think of myself more as a catalyst than a leader, but we can discuss this on your podcast; we can discuss what leadership means, right?

At face value, it means having followers, because otherwise, what are you leading? If you dig a little bit, it means making sure that your followers walk by your side and behind you, because otherwise your lead is in danger. And if you dig a little bit further, it also means that you generate other leaders, that someone can start walking in front of you, and that you are someone so important to the tribe that what you've created is an entire community of leaders who know exactly what you're doing and who can do it in their turn.

But the truth is that all those different flavors of leadership, even though I love to dive into those considerations of what it means nowadays to be a leader, come from another time. We're no longer at the beginning of the industrial age, when people came from the fields and needed to be told everything to do, and yet we're still organized this way, which is crazy.

The notion of leadership still has a hint of the paternalistic, the condescending, the patronizing, and you can finish my sequence of words attached to it. I think we are very close to the moment where leadership becomes irrelevant as a behavior. Right? We still need it for legal purposes, but as a behavior, why would I tell you how to run a podcast?

Or what would you tell me? Even if I were a leader, I should really ask you, because you're the one doing this. And I don't want people in the company to look up to me for AI. I want them to look up to themselves. Right?

[00:02:18] Jeremy Utley: Ah, so that word you mentioned: you said catalyst, not leader. How do you define catalyzing AI?

[00:02:27] Brice Challamel: It's very chemical, and it speaks volumes to my people at Moderna, but I think everyone understands. There's a reaction happening, of a population with a new technology, right?

It's a physical and chemical reaction that's happening. And the question here is not if it's going to happen, because it's going to happen whether we want it to or not. It's how fast, how safe, and how useful this chemical reaction is going to be. Because maybe without the catalyst, it just goes slower, right?

And we take three years for something we could have done in a year. What a shame, because in the meantime, people are dying. Or maybe we would have had incidents along the way, which, if you're a good catalyst, you de-risk. You make sure that you have zero data retention, that you have a good user policy and code of conduct, that you have a good set of governance. Because, you mentioned 700 GPTs, all of those are software in their own right.

So people who make a GPT are literally digital product makers who put products on the market that other people use, with all the risks and consequences that implies. Let's say you make a GPT on travel and expenses, but this is not your job, and then the travel and expense policy changes. If people have taken up the habit of using your GPT to plan their trips, then they're planning wrong, because it's not your job and you didn't update the knowledge base. You have not behaved as you should, which is as a good product manager of your GPT.

And for some of them, it doesn't matter. If you're summarizing your meeting notes, it's okay. But some others can be critical to our business, and we can't just let you roll with them. So that's the role of the catalyst, right? To make sure that the whole chemical reaction doesn't explode in your face.

Right? Even though you want the chemical reaction to happen, and you want it fast and efficient, you also want it safe. That's my little chemist's perspective on the whole thing.

[00:04:20] Henrik Werdelin: But very practically, then: when somebody has made a GPT, do they, inside Moderna, feel responsible for maintaining it?

[00:04:30] Brice Challamel: I should certainly hope so. What we did is, as always, look at the categories that pre-exist, because people's minds are already shaped. You can't reshape minds so easily; it can be done, but it's really, really complicated. So I want to climb Everest by the south face, not the north face.

And the south face is to use concepts that are there already. There is something in the toolbox called incident management, where you have one axis, which is how many people are impacted, and another axis, which is how critically they are impacted. Then you categorize incidents in that grid, and that's a framework most people are familiar with one way or the other.

So I took that framework and applied it to what happens if your GPT doesn't work. Which can just mean, in the case we shared, that the information on travel and expenses is not up to date, right? It's working, but it's not working. And so on one axis we have: is it just you?

Is it a team? Is it the entire company? How many people are impacted? And on the other axis: is it a mild disturbance? Is it really annoying? Or is it catastrophic? And then we start rating the criticality of your GPT, depending on what happens if it doesn't provide an ideal outcome.

And we're going to tag every combination of those two axes with concrete GPT examples, for people to realize what they are. And then, for now (I have a better idea for what comes next), we ask people to assess themselves, their GPT, or their team's GPTs if they're a manager, and

[00:06:03] Jeremy Utley: To actually plot it.

[00:06:04] Brice Challamel: That's right. So we're not asking them for rocket science either. And then we have a series of consequences based on where they are on the three-by-three grid. If they're a one-by-one, meaning it's just me and it's a mild nuisance, then we ask you to follow the user policy, period.

It's the only thing that everyone has to do. But if you're a three-by-three, and it's the whole company and it's super critical, then we have 21 requirements for you, including change logs, a review committee, a compliance review, and so on.
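The grid Brice describes maps naturally to a small lookup. Here is a minimal sketch in Python; the axis labels and the one-by-one and three-by-three consequences come from the episode, but the requirements assigned to the middle cells are illustrative assumptions, since only those two endpoints are spelled out.

```python
# Sketch of the incident-management grid described above: reach (how many
# people are impacted) crossed with severity (how badly), each rated 1-3.
# Only the 1x1 and 3x3 consequences are stated in the episode; the
# middle cells here are illustrative guesses.

REACH = {1: "just me", 2: "a team", 3: "the whole company"}
SEVERITY = {1: "mild nuisance", 2: "really annoying", 3: "catastrophic"}

def governance_for(reach: int, severity: int) -> list[str]:
    """Return governance requirements for a GPT's self-assessed cell."""
    assert reach in REACH and severity in SEVERITY
    if reach == 1 and severity == 1:
        # the baseline: everyone follows the user policy, period
        return ["follow the user policy"]
    reqs = ["follow the user policy", "keep a change log"]
    if reach >= 2 or severity >= 2:
        reqs.append("register with a review committee")
    if reach == 3 and severity == 3:
        # the top cell, where Brice mentions 21 requirements in total, e.g.:
        reqs.append("pass a compliance review")
    return reqs

# A company-wide travel-and-expense GPT with a stale knowledge base
# might self-assess as reach=3, severity=2:
print(governance_for(3, 2))
```

The point of the self-assessment step is that the maker, not a central team, places their GPT on the grid; the consequences then follow mechanically.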

[00:06:36] Jeremy Utley: Okay, so I want us to roll back a little bit, because we've dived straight into the deep end, which is not atypical of this podcast. But can we start with your perspective as catalyst? I mean, I'm not a chemist, so I'm going to botch this analogy, but I'm just going to go with you.

You're about to add the catalyst to the reaction. What are the considerations you make before you catalyze this reaction? Which is to say, what are you thinking as you dive into the Moderna context? Because you came from Google, right? You've been involved with a bunch of AI change management programs.

What were your considerations coming in and what were some of the early moves you made as the catalyst?

[00:07:13] Brice Challamel: So I think first we need to understand why we're even here. And I always like to emphasize the why, not the how. A lot of people are thinking of what they do in terms of how, and it's very dangerous, because in terms of change, your how is going to betray you, whereas your why is going to uplift you.

Really, if you have a strong why, a strong purpose, any change, especially technology change, is actually an opportunity, because your why suddenly has a new avenue. But if you're a how person, any change is a threat, because the way you are doing things is suddenly under siege. So you really have to, for yourself and for everyone around you, go back to why.

Right, take a moment to separate the why and the how, and make people refocus, realign, re-anchor themselves in their whys, because that gives them a completely different spin on novelty and new technology. And you should do this yourself. So for myself, the why is not a given, right?

Why should I operate this change? So here the base notion is that we have waves of utility that come our way. We could go all the way back to language and tap water, but let's just take the ones at hand here: electricity, laptops, internet, AI. The magic quadrant of those four things. And maybe you think electricity is a given, but I have been in situations where you don't have a charger anymore and your digital life is under threat.

[00:08:42] Jeremy Utley: That's the base of Maslow's hierarchy in a digital era.

[00:08:46] Brice Challamel: You better have a power bank in your backpack, because otherwise at some point this whole digital beast that you're in is going to come to an abrupt stop. So electricity, devices, internet, and AI are the four things we owe anyone who works, at least in knowledge work, but I think in work in general now. And so there's a utility missing in the corporate world. I have it in my life. Everyone on my team has it. Everyone at Moderna, I would say at this point, has it. But I don't think you can work decently without it anymore.

And so if this is the baseline, then what I'm operating for is mass adoption. Because the revolution we're talking about is not so much the ML revolution; that's been there for a decade now, actually really for 40 years, but accessible for a decade.

But it was like a five-star hotel at the end of a dirt road: super hard to get to and super hard to use. And now the dirt road is paved. It's a highway, right? Because you can just ask in English and receive in English, or in Danish, or in any other language. So if this is a revolution, it's a democratization revolution, and then I need to listen to everyone, right?

This is not me driving some expert-driven transformation in this or that field. So I do mass discovery programs where I interview hundreds of people. I have a methodology for this, and I use AI for it, by the way. I use AI for AI a lot, because otherwise it soon becomes overwhelming. And I'm going to start with one of my favorite sentences: listen before you think.

[00:10:08] Jeremy Utley: I like that. I just want to make sure this is clear for the audience: you're talking about the attitude of the catalyst prior to entering the reaction. Listen before you think.

[00:10:19] Brice Challamel: Because otherwise you are very likely to replicate something that has worked for you.

Something you think was super useful. Say, AI adoption for a retailer in 2018, trying to get out of Amazon and build their own thing on a public cloud, either Microsoft's or Google's in most cases. And you're going to think: oh, that worked for me. That was super useful.

I have lots of key learnings from there; let's just apply this to Moderna. You would be dead wrong, because this is so different on every dimension. It's a whole new world. It's a whole different technology. It's a whole new way of thinking. It redistributes power in a very different way in the fabric of the company.

You really have to pause and think about what you're dealing with and what you're organizing this time around.

[00:11:01] Jeremy Utley: So you listen, you conduct hundreds of interviews, and you use AI for AI. What happens next?

[00:11:08] Brice Challamel: Well, next I anchor on three key notions. You're so wonderfully curious; no one ever asks me this. My first notion is the challenge, because that's what's going to drive the whole thing.

What are we trying to accomplish that would be extraordinary? So it's something to the effect of "how could we." And here I'm not reinventing anything entirely; it's very design-thinking-centric, right? At some point you want a user-driven challenge, which is to the effect of "how could we," right?

It doesn't have to be this, but in 90 percent of cases it's going to start with "how could we." And then you need objectives. I say this because otherwise you're going to be floating in an extra abstraction layer. You need concrete things that can anchor what success looks like.

So, as part of the discovery template, because there's a template to it, I ask seven questions. My sixth question, the one before last, is the most important question; every other one leads there, and the last one is kind of a way to ease out of it. My sixth question is: what would make you say that we have succeeded together beyond all expectations? Because this is how I anchor the why that people carry to something very concrete, and I get the key to unlock their ultimate hope and expectation, professionally at least. That sometimes informs me on what we're dealing with more than anything else. It's just one sentence.

So, having a collection of hundreds of those, I can start creating objectives, because they gave me something very concrete. I asked them "what would make you say," and it has to be something I can observe or measure.

[00:12:50] Henrik Werdelin: When you ask people about the why, and how we succeed, do you feel that everybody now knows the why? The kind of universe in which they could answer?

[00:13:01] Jeremy Utley: The possibility space, yeah.

[00:13:05] Brice Challamel: This is a perfect question, and that's exactly right: no. And they shouldn't. The last thing in the world that I want is to influence them. Maybe they say: what would make me say it is if we were on the front page of the New York Times, and that's their point. Or: what would make me say it is if I had a billion dollars in my bank account. Alright.

Or: what would make me say it is if we discover a new physics particle thanks to AI, or a new type of protein that revolutionizes life science. And even with my few examples there, you immediately see that we're dealing with different types of personalities, and different types of intrinsic and extrinsic motivators, right?

The ones that come from your self-realization, or the ones that come from your needs. So I am already there, starting to understand the components of the reaction. How many do you have of each category? What are we dealing with? You know how much is in change management, and you heard my school analogy with the five positions versus change.

[00:13:55] Jeremy Utley: I'm just going to put in a note that I would love for you to talk about bellwethers.

[00:13:59] Brice Challamel: I'm happy to, and I have other cow analogies, by the way. I have a series of cow stories.

[00:14:02] Jeremy Utley: Oh, good. Of course. Very French, very French, what you were saying.

[00:14:06] Brice Challamel: All this to say that as you listen to them and ask that question, the art of the possible in terms of answers is extraordinary.

I have probably done this type of interview more than 10,000 times in my life now, because I started doing them 25 years ago, and they have evolved, but I always ask that question. It was always one of my key questions, even 25 years ago when I was starting. So we're talking thousands and thousands of those interviews across the decades, right?

And no one has ever answered that question right off the bat. It's never happened. Everyone pauses: oh, that's a good question. Which tells you something: people don't quite dare to think about success beyond their wildest expectations. They live in the middle game of good success, or not-catastrophic failure.

[00:15:00] Jeremy Utley: Wow. That's huge.

[00:15:01] Brice Challamel: It's interesting, right? Because I can tell you, out of 10,000 discovery interviews, I have never, ever had someone who could answer that question straight off the bat. Never.

[00:15:10] Henrik Werdelin: And do you then feel that it's your responsibility to help them go there? Or is the answer that they provide the answer?

[00:15:20] Brice Challamel: It's my responsibility to try the best path to the highest self-actualization of the most people possible. Sometimes it happens; not everyone's going to get exactly where they want to go. When you interview a hundred people, and they're representing 6,000, you have a galaxy of objectives.

And I call this organizational awareness. I'm very anchored in organizational awareness: understanding the momentum and the profound achievement desire of the organization, in its individuals and in its collective aspect. And because I use that notion of organizational awareness, I never use the word politics. Because the good part of politics is organizational awareness: understanding who's trying to achieve what, and being mindful of it when you speak with people, when you act, when you communicate. Everything else really is the useless and toxic part of the behavior. So I want to anchor on that part of what other people might call politics.

[00:16:29] Jeremy Utley: Okay, so I want to make sure we get to the third thing, because you said you anchor on three key notions. The first one I wrote down was: what would we try to accomplish that's extraordinary? The design-thinking-oriented, "how might we" kind of provocation. The second thing is objectives: concrete things on which I can anchor success. What's the third key notion?

[00:16:49] Brice Challamel: Objectives will objectively give you the bar of success or failure. It shouldn't be subjective; it shouldn't be something that is subject to interpretation. You do or don't have a billion dollars in your bank account. You are or you're not on the front page of the New York Times. This is undeniable, right? And by the way, one last point on this for Henrik: we might end up with objectives that no one ever formulated, but when I hear a hundred of those, we elevate the whole conversation and think: if we do these three things, all of these people are going to be happy, right?

It is certainly not the art of the least common denominator, which would be terrible, but not even the art of the best common denominator. It's more like understanding the colors that you're painting with, and then painting. Because it's very rare that I actually take one specific sentence and put it there.

It can happen, when someone had a blaze of glory and gave me something that perfectly encapsulates everything you've been hearing, and I take it as it is. But most often it's a lot of work. And the third category is criteria. It comes from the Greek krinein, which means to sort the grain.

It also gave us movie critics, critical situations, critical diseases, right? So it's what differentiates the good from the bad, a way of sorting things: sorting the grain. And so, out of all the notions, ideas, and initiatives that could lead us there, which ones will I favor? Here come your criteria.

The ones that are the fastest to implement, the easiest to roll out at scale, the ones which will have the best impact versus their cost. Those are the most obvious. There's a little funny exercise to do, which I sometimes do with the leaders especially.

I present them with completely impossible answers to the challenge. So let's say you're running a pageant contest, Miss Universe, and you propose Einstein: will he work or will he not? For some reasons, I don't think Einstein would cut it as Miss Universe.

But from some other aspects, I think he'd be a perfect candidate. Very telegenic, pretty bright, lots of ideas, good coverage of both continents, US and Europe. I like him for Miss Universe. But then there's a gender question hanging there.

There's an age question: at what age? Maybe in his 20s, but we all think of him in his 70s. And you see, there I already have five criteria,

just there. And because I chose a purposely wrong solution, it forces judgment to activate. We have the same grid for right and wrong solutions, but it's harder to elicit the grid of our own judgment on something that is relevant, because it's implied. Whereas if you take an answer that is completely irrelevant, your whole grid of judgment is going to jump at the throat of that answer.

[00:19:45] Jeremy Utley: Yeah, that's great. So it strikes me, one thing that's becoming clear here, is that this is not the case of an individual who's already in the organization and just starts catalyzing a movement because they got excited about it. There's such deep consideration before you ever decide to, call it, jump into the Moderna reaction.

So do you feel that's actually an important part of the change, that you've done some, you could call it qualification, of the organization ahead of time? A lot of times folks think about changing where they are, and it strikes me that you were probably brought in as a concerted strategic decision.

"We need to accomplish a transformation." How important is that in your entire success story?

[00:20:32] Brice Challamel: I can't speak entirely to it, because this was not only my decision; it's a shared decision, right? To join. But I would say this: we met before the pandemic, with Stéphane, the CEO of Moderna, and part of the board, who visited Google.

And we had a professional crush. It wasn't on Valentine's Day, but I came back home and told my wife: oh, if I wasn't so happy at Google, I would be working at Moderna. And then the pandemic happened, and it was very obvious to me how important Moderna was, maybe one of the most important companies in the world at that time, and still today, whatever the rest of the world thinks.

And what they'd just done could never be done again the same way. Never again will a hundred people huddle in the corridors of a single place on earth to unlock the biggest problems in manufacturing a solution for mankind. In my line of work, you know when you're seeing something that is at the early stage of the maturity curve and is never going to happen again.

And one of the biggest dangers for those organizations is that they anchor on initial success. So I sent a note to Stéphane saying: you can do this again and again and again, but you're going to have to change the way you do it every single time. So if you're interested, I think I know how to give you a permanent first-mover advantage.

But it's going to take deep change at every step of the way; you're never going to be able to do it twice the same way. And that spoke to him. Twenty minutes later I had a reply from him, which I wasn't sure I'd get; it's like a bottle in the ocean. And two hours later, my children were registered in a school in Boston, right?

Because in case, in case it worked out.

[00:22:18] Jeremy Utley: Okay, so you talk about speed of the reaction. That's pretty quick: a 20-minute response from the CEO, and two hours later you're enrolling your kids across the country.

[00:22:25] Brice Challamel: Well, it was Valentine's Day. That's when you have to enroll your children, if you see what the calendar year looks like for schools.

February is when you need to start activating yourself. So I couldn't afford to wait whatever time it was going to take for interviews, and then find out that my children are not registered at school. You have to immediately think about the chain of consequences of a good decision.

It's very Moderna, by the way. I love this culture, because the Moderna way is to act with urgency. We're not reckless, we're very thoughtful, but sometimes we see the moment and we just seize it. So I would say that certainly Stéphane had in mind that initial contact we had. And then I did three rounds of interviews in a short sequence.

My first round of interviews was on the company's readiness for change. My second round, a year and a half later, was on digital practices at Moderna. And my third one was on AI. So three times I did a round of more than a hundred interviews in the time that I've been there.

And I'm about to do a fourth one, by the way, because now we have a fourth iteration of challenges, which requires going back to listening before you think. The first time was readiness for change, and we discussed. The second time was IT foundations, digital foundations, and then we discussed.

And I actually ended up being in charge of those, with a team called Digital Empowerment, for a year and a half. So electricity, laptops, internet, all of this, because it was broken under the strain of hypergrowth, and there's no AI transformation if you don't have electricity; it's game over.

And then the third one was about AI. And the one I have to work on now is ways of working. We need to transform how we work in profound ways, and maybe even the culture of how we work needs to change.

[00:24:11] Jeremy Utley: And that's even in an organization which has "act with urgency" as a foundational premise.

[00:24:17] Brice Challamel: Let me give you a simple example. There are no more individual contributors in a company nowadays. Those days are over. Now everyone is a team of five: themselves, an assistant, a coach, an expert, and a creative partner, right?

[00:24:33] Henrik Werdelin: Can I ask you a question about something I've been thinking a lot about in the organizations I'm involved in: the flattening of the hierarchy, right? The removal of the fat pillow in the middle. The change from a traditional, let's say, army structure to a Navy SEAL structure: Navy SEALs report straight to the president, there are seven people, they get an objective. I've been wondering if it's because of digital tools, the Slacks of the world, and now AI, that information flows are now available to people who sit close to the customer, or close to the problem that needs to be solved.

And so, to your point, it does seem that we are close to this complete re-architecture of organizational design. Where I got intrigued about this: I think we talked to Kevin Kelly, who mentioned that when electricity came online, suddenly a lot of stuff had to change foundationally; factories had to be rebuilt in different ways. And since you mentioned it at the beginning, and it's such an interesting idea, maybe you could put a few words to how you think about that.

[00:25:54] Brice Challamel: So we're going to get philosophical about the horizontal and vertical division of labor. It used to be that humans were holistic and could do everything, right?

Antonio Stradivari made his violins entirely himself. But then, when he died, the recipe got forgotten. It was never a collective achievement that mankind could pride itself on; it was a single-person topic that withered, and no one knows how to make a Stradivarius anymore.

And then we were thrust into the horizontal and vertical division of labor. I think the horizontal part has ever expanded, because the number of proficiencies and expertises, the deep rabbit holes of amazing knowledge in one field or another, I don't think I've ever seen it slow down; if anything, it grows exponentially.

Um, even though it doesn't have to be all embedded within the same organization, when you think of it at the scale of the fabric of work. But the vertical division of labor, I think, is different. Like, I think people spend too much time thinking about their next promotion. It's a completely toxic line of thought that is a permanent mental burden on their minds.

And we see it going back and forth. I think it was overused at first, when you had those huge hierarchies with like 20 levels. We've seen those in corporate organizations, with level 1 to level 20 and things like this. And you have to wonder what level 13 does, you know, apart from relaying from level 14 to level 12, and managing up to 11 other levels or whatever. So that part is much more under scrutiny.

I think even the excuse for it, which was vertical integration of conglomerates, is no longer there. Because now we live more in hives, right, of different, you know, small expert teams. I love that quote, you know: what changes the world, it's a small team of dedicated men and women. Because if you're by yourself, you're not going to change the world.

But if you're a crowd, you're not going to change the world either. You have to be one of those small teams that accomplish something fantastic. Even if we take the invention at hand here, right, like the Transformer, or the team of eight at Google. Right. That's the exact right team size that changes the world.

Like, none of them individually could have done something like this, but if they'd been a group of 40, in a committee with subcommittees and secretaries and recorded meetings, this is game over. So I call this the dinner conversation size, right? Like, at what size do you still think you can have a good dinner conversation?

[00:28:29] Jeremy Utley: It's the classic two-pizza team. It's Bezos's two-pizza team, basically.

[00:28:33] Brice Challamel: Yeah, but I'm more focused on the conversation than on the food. And I know that the food means the number. But it depends on the topic and the people. Like, for instance, this is the dinner table, the three of us here.

How many more people could even get a word in, with a speaker like me occupying the field all the time, you with all your brilliant questions, and Henrik being, you know, the sharpshooter who brings that little nudge at the right moment? That's the kind of conversation. So this is a good table, and two pizzas would be too much for the three of us. Now, on other topics,

I could have, like, you know, 12, 15 people with different roles, in a good organization. And I think SEAL Team Six is a hundred soldiers, right? Like, we've seen good commandos of a hundred people who know how to organize something efficiently. So I don't want to insist too much on the exact size as a fixed number, more on what's the topic at hand and who the people around the table are.

And when am I going to stop having a good conversation? We all have that feeling, right? Like, if I'm not going to have Jeremy and Henrik at dinner... And you know this yourselves, because at the extreme of this, if it's the most interesting romantic partner, you're going to want to have a dinner with only two people around the table.

[00:29:44] Jeremy Utley: Right, exactly. Nobody else can get invited. I wanted to go back to your four components. You said four utilities for a knowledge worker are: one, electricity; two, devices; three, internet; and four, AI. And then you made a statement. You said people at Moderna feel you cannot work decently without AI.

I mean, I want to make sure: one, did I hear you correctly? And two, how do you know they feel that way? And three, how did they get to the point where they feel that way?

[00:30:15] Brice Challamel: No, you're right. Like, I feel that way. And the outcome of my interviews with them confirms or informs how I feel about it. Right. And then at some point I reached a conclusion out of all those interviews that, no, this is not something we should reserve, say, to the developers of the company, so that they should use Gen AI to improve their code.

And maybe to a hundred scientists. No, this is something that not a single person I've ever met didn't speak of with glimmers in their eyes, of how it could augment them. So you start to see chief AI officers here and there, right? And this is going to have a very short shelf life.

It's like being a chief mobile officer in 2012 or something, right? The true name they should have is chief augmentation officer, right? It doesn't sound perfect and we're not used to it. But it's all about augmenting people, individually and collectively. You know, how high can I raise you?

I often joke to people at Moderna, when I do this in slightly public conversations, like town halls or things like this, that they all joined as heroes and that we're going to equip them to turn them into superheroes. And that they're all the Selina Kyles and the Bruce Waynes of the world.

And they were going to turn them into Batmans and Catwomen.

[00:31:40] Jeremy Utley: can you tell me about the first thing you did? I think there was a contest. Is that right?

[00:31:46] Brice Challamel: Yeah, it isn't exactly the first thing I did, but you're right. Like, in that third round. So remember, it was the third round of something, right? There's a round on change readiness and a round on, let's say, digital readiness.

[00:31:57] Jeremy Utley: That's kind of preparing the soil.

[00:32:00] Henrik Werdelin: Are they required, you feel? Cause I think a lot of people are now just sitting, you know, at day zero. They're not even, like... you're so many cycles ahead, but a lot of organizations are just sitting there going, hey, I understand that I need to augment people with this new superpower.

I just don't even know how to get going. What I hear you saying, a little bit, is that you kind of need to do that foundational work before you can even get to the AI part. Is that correct?

[00:32:23] Brice Challamel: Well, at least to have awareness of it. Because if my interviews had told me that this was a robust, change-ready, digitally ready organization,

I would probably go straight to round three. Right. Like, with very lean recommendations for the first two rounds, lean initiatives, lean catalyst reactions. Um, in this case, and it's an interesting point there, it's not that people aren't digitally literate at Moderna. They are all very, very good at digital.

Like, this is one of the smartest organizations, with some of the most educated people I've met in my life, as a company. The challenge, in their case, is that there's kind of a Gauss curve of the usefulness of digital culture. If you don't have any at all, we're going to have to build some.

If you have too much, we're going to have to unbuild some. Because, in my case, those are people who've spent 20 years of their lives dealing with bioinformatics. And this is going to be like the sound of thunder in a blue sky when they are exposed to Gen AI. Because really, we should acknowledge together that there's this kind and that kind of software.

This kind of software does the same thing all the time: if you give it an instruction, you get an outcome, and if you give it the instruction a million times, you get a million times the same outcome. And then there's that kind of software where, if you give it an instruction, it gives you an outcome,

and if you give it another, it gives you another outcome. And you can do this a million times and you're going to have a million different outcomes. That kind of software is easily anthropomorphized, because we're used to it from humans. We're just not used to it from software. And so we tend to anthropomorphize it, even though it's wrong on so many different levels, because it doesn't have intent, it doesn't have agency, it doesn't have an agenda.

Like, there's a lot of things here that are probably not human at all. But certainly the part of it that will never tell me a joke twice the same way, that I've met before, right? Like in my great friends. So we need people to grapple with the fact that there's this and that type of software.

Let me give you a super concrete and basic example. It baffles most people that Gen AI can't do mid-level math. If you say 2 and 2, it's going to say 4, so it does math. But if you say 258 multiplied by 725, it's game over. It's going to give you a completely wrong number. Why? Because it does guesswork. So it's guessing math, right?

As you know, it's like guessing the next word in the sentence and making kind of diversified guesses, which is the beauty of it. But diversification of guesses for math is wrong. It's plain wrong. You know, this is when you have to go from kindergarten to school, right?

At some point you can't guess math. You have to do the math. But people are baffled because they don't understand: they're thinking about this kind of software, and that kind of software is creative in its essence. It's going to be doing creative math, right, which is not what you want there.

So there's a lot of ways to go around this. You can trigger a Python script to run your calculations. You can use reflective models like o1 or o1 Pro that break down the construct of how they do a math demonstration and then deliver the outcome. We're not out of ideas for getting some form of reliability out of models. But the basic foundation, the cultural acceptance of software in our society, is not there.
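
The "trigger a Python script" route Brice mentions can be sketched in a few lines. This is an illustrative sketch of the pattern, not Moderna's actual tooling; the `exact_math` helper and its operator whitelist are invented for the example.

```python
import ast
import operator

# Whitelist of arithmetic operators the evaluator will accept.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def exact_math(expression: str):
    """Deterministically evaluate a simple arithmetic expression."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

# The example from the conversation: a model guessing this is plausibly wrong;
# the deterministic tool is exactly right, a million times out of a million.
print(exact_math("258 * 725"))  # 187050
```

This is the division of labor Brice describes: the generative model decides *that* a calculation is needed and delegates the arithmetic to deterministic code, instead of guessing digits.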

It is not there for that kind of software. It's there for this kind of software. And if you don't explain the difference to your people, you're immediately in danger of people starting to do calculations with concrete outcomes that have impact on the company and your customers, and that are just wrong. Right? So that's why we need those layers of culture to build in. I'm very optimistic, because if you look at chess, no chess player in the world would play without an AI. Another sentence that I like is: the end is the beginning. Everyone used to say that when Deep Blue beat Kasparov at chess, it was the end of chess. And if you look at what has happened, there are ten times more

license holders in the chess federation. The level of grandmasters has exploded. The number of grandmasters has exploded. We now have 2,000 grandmasters. Their level is like three to five hundred points higher, on average, than the best grandmasters of 20 years ago, because they were raised in the age of AI. And when the model says that the move is wrong, the move is wrong.

Even Magnus Carlsen doesn't say that the model maybe has it right or wrong and that he thinks otherwise. No: if the move is wrong, the move is wrong, and Magnus Carlsen knows better than to challenge it. But think of the way that informs how you learn and understand chess, the way that you grow with the support of that AI that always knows the right move from the wrong move. And then you can re-analyze any game and look back into scenarios and branching hypotheses of why that was the right move.

Only nine moves later are you going to find out that you had to move your rook there, because actually, nine moves later, that's where the queen is going to land on the other side. And then you learn, you know. And the game of chess itself: there's been more progress in chess in 20 years than in all the rest of the history of chess, 8,000 years, combined.

And I think this is about to happen almost everywhere. Think about medical diagnosis. You still have doctors who confront AI medical diagnoses, even though doctors are right 60 to 70 percent of the time on average, and AI software now, you know, is right 90 percent of the time on average on simple diagnoses.

Radiologists, you know, they've gotten used to it. Like, if you do an X-ray or a scan, the AI knows better, period. But some general practitioners do think that they know better than the AI. And maybe for now, in some cases, they do, right? We're edging towards it, but those days are numbered. You see, my point is: how do you make everyone into a team manager of those five people we discussed, and into a chess player, into the Magnus Carlsen of their own field, at a level that was unprecedented?

[00:38:29] Jeremy Utley: Right. And there's still... I was just talking with some friends about this this weekend. Right now, most people, if you ask the question, "Did you use AI for this?", the right answer is: "Don't worry, I did it myself." That's kind of the right answer. And I know, you laugh, right?

Cause you're like, you're from the future. Right. But what you know, and what we are increasingly learning, is that the right answer to that question is: of course. Do you want me to not use every tool available to me to do the best work possible? But the sad reality is, I bet, Brice, I don't know what the number is,

for 95, if not 99 percent of professional workers, the right answer in their mind is: don't worry, I didn't use any AI. That's the right answer. That's what their boss wants to hear. That's what their colleagues want to hear. And so there's this enormous... I mean, in a sense, Moderna is a bellwether. We didn't get to bellwether yet, but Moderna is a bellwether.

Because right now there are very few other organizations where the leader would laugh when I said, don't worry, I didn't use AI, right? You laughed out loud. That shows how radically progressive you all are. Well,

[00:39:33] Brice Challamel: No, but, like, you're spot on. I would never do anything without layers of AI integrated into it. Like, why would I? It is so risky and reckless and stupid and limited. I have no words for it. It's like asking someone, would you do this without a laptop? Or: I promise, I did the calculation by hand, on paper. Don't worry, I didn't use any software.

[00:39:55] Henrik Werdelin: We were thinking about this the other day. We were having an interview with somebody, and it was noticed that they were clearly using AI for all the answers.

Like, it was an interview and they clearly had the machine, you know? And today, if somebody came into an office saying, you know what, just before we do the interview, I want to let everybody know I don't do computers and I don't use the internet, because that's just not my thing,

I mean, people would just go, like, who are you? Right. But it seems weird that there is now, increasingly, a polarized world where there are people like you, Brice, who think and breathe this, and by osmosis, your organizations do the same. And then there are most of the organizations and most of the people who are not even starting to step onto that ladder.

Yeah.

[00:40:45] Brice Challamel: Well, there's another level of optimism there that's ahead of us. Because people are lazy, and I don't blame them. It's a Darwinian advantage. They try to get the best outcome out of the least effort. This is who we are as a species. And so when they see a way, it hits them in the face instantly.

So Gen AI, through the vector of OpenAI and ChatGPT, is at this point the fastest-adopted technology ever, right? So this is not something that has gone completely unnoticed. And by the way, you can hardly open a newspaper or listen to a blog or a podcast without reading about AI. The most valued company in the world today is NVIDIA, because of AI. So whatever your angle is, you're going to hear about it, right?

So Gen AI is not a stealth operation. This is very much in your face, wherever you turn your face. So I hear what you're saying, but at the same time, look at this amazing tsunami of change messages that we're receiving now as a society. I don't think I've ever seen anything like this.

Like, I can tell you, because I was there, that computing was not like this, and mobile phones were not like this either. It took maybe a decade for mobile phones to get some kind of footprint in our daily lives.

[00:41:57] Henrik Werdelin: So you're saying the problem will take care of itself.

[00:41:59] Brice Challamel: I'm saying that it's going to happen whether we want to or not.

[00:42:02] Jeremy Utley: It's inevitable. Yeah.

[00:42:04] Brice Challamel: I'm not here to make sure people use AI; they're going to use AI. I'm here to make sure they use it before the others, or before, you know, the diseases are upon us. Because we are competing more against the burden of disease than against any other pharma company, if I'm honest. You know, or the next pandemic. And if we're going to do this, it's in a way that makes people feel safe and feel elated, because I want them to do this in joy.

Change can be very enjoyable. We all feel and notice it intuitively. You love a new book. You love a new movie. You love a brand-new jacket. You love a new lover. You love a new job. You love a new political figure that's inspiring. So I just think it's deeply rooted in human nature to root for change and novelty. We're a neophile species, which is why we have all these different teeth, right? Like, we have teeth to cut fibers, to eat meat, to grind, you know, peas and so on. We are omnivorous creatures who can adapt to anything.

We have Eskimos at the North Pole. We have Bedouin in the desert. We have people who go scuba diving. We have people who go to the moon. So humans are so vastly adaptable, this is almost our defining trait, right? Like, this and running. And so we dig change, profoundly, naturally, instinctively. It's just that sometimes people are so bad at this that they find a way to make change painful. And it takes a lot, but there are three ways.

[00:43:33] Jeremy Utley: Okay, there are three ways.

[00:43:35] Brice Challamel: Yeah. You can make it isolated. Like: you, Jeremy, are going to change, but the rest of us are not. That becomes painful because you're going to be the object of scrutiny, mockery, you know, judgment and so on. I can make it brutal. Like: thank you, Jeremy, it was really nice knowing you, but bye-bye.

Right? So that's change, but it's very brutal. It's very immediate; there's no gradual way to ease into it, or to prepare for it, or to adapt to it. And I can make sure that you have no voice in it: this is how change is going to happen, whether you want it or not, period. And in this way, you can't influence it, you can't contribute to it, you can't find your self-realization in it.

So you can use any combination of those three superpowers of destructive change, and you're going to mess up change. But naturally, people love change. If it's done right, they are going to enjoy the new movie and the new book.

[00:44:29] Jeremy Utley: This conversation is reminding me of your sticker, which I know is on the back of your laptop, which says, don't be Fred.

, would you tell folks, what is it? What does that mean?

[00:44:38] Brice Challamel: Our CEO, Stéphane, famously had a boss in the early years of his professional journey. I know that boss's first name was Fred, and we won't say any more. But Fred had his emails printed, and then he would write the answer and give the handwritten answer back to his assistant, to put it back into the machine and send the email back to the person.

And this is, of course, the right technology with the wrong behavior, right? And we don't see many Freds anymore, but they were there for a while. People who had all the tools at their disposal, but just didn't get on board. And there's a notion there also that laptops must be some evolved form of typewriters, which is a tool of lowly people whom I can't compare to myself, because I'm a high thinker, right?

So there was a notion here, which is why I have an instant knee-jerk caution about vertical division of labor and all its ills. Because the moment you start to think yourself above the fray, because the fray uses typewriters and I think with my brain, and now there's a better typewriter called a laptop, but I think with my brain and they use a typewriter, you have somehow lost yourself along the way, right?

Like, something wrong has happened to you. And it's out of pride. It's out of misplaced pride. So it's very important to Stéphane that the senior leaders of the company drive change, and embody change, and champion it, and give the example for it. So we're not letting them get away with Fred-like behavior. And those stickers came about in a very fun way.

I didn't do them, even though I love them, because I don't have a budget for change: zero budget, zero time. Otherwise I would use it, and I would somehow start to impose change on others, which is not my role here at Moderna. So those stickers were done by the R&D team and by Rose Duffin, who's now our chief research officer.

And she launched a whole campaign of: don't be Fred, use AI. And I just found out because she came to meet me in the corridor and said, oh, Brice, I need a new sticker campaign. I was looking for you. Would you want a sticker? And I was like, show me, you know. And I was so blown away.

And I thought this was such a great initiative. And of course, I proudly have the "don't be Fred, use AI" sticker on my laptop. Even more proudly because I didn't do it. I didn't even conceive it or wasn't part of it.

[00:46:58] Jeremy Utley: It's brilliant. Okay. You said a phrase that I've got to follow up on. You said you can't let senior leaders get away with Fred-like behavior. One, what's an example of Fred-like behavior, and two, how do you not let them get away with it?

[00:47:12] Brice Challamel: Yeah. All right. So an example of Fred-like behavior: you're in your messaging application, let's say a Teams channel, where you have like 500 messages, let's say on AI.

And you have one of two Fred behaviors there that are possible. The first one is to scroll through them. And the second one is to type in the search bar and search for keywords. In both cases, you're doomed, because how are you ever going to find insights out of 500 messages in a Teams channel on AI?

Either way: can you scroll your way through them? Can you actually type, you know, a Boolean search for them? It's impossible. So the normal way is you open your AI Chrome extension, which we call mChat. We have this at Moderna. It understands immediately the entirety of your threads in Teams.

And you ask it questions; you engage with it in conversation. You used to have data research from assets, or conversations with experts. Now you can have conversations with assets. Isn't it grand? And those assets include long email threads or long conversations in a messaging app, right?

So I can say: what are the five most often met challenges with AI in this thread, which is open to the whole company? And it's going to immediately read everything and categorize for me the five most commonly found obstacles to AI adoption. And then I'm going to say: considering the way people have answered each other in this conversation, out of crowd wisdom, what would be the three best ways to solve each of those top challenges? And then I'm going to learn from crowd wisdom. And that, I think, is a modern behavior, right?
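
Under the hood, the "conversation with assets" pattern reduces to flattening the long thread into one context block and sending it, plus the analytical question, in a single model call. mChat's real interface is internal to Moderna, so the client call at the end is a placeholder; the helper, the authors, and the messages are invented for illustration.

```python
def build_thread_digest_prompt(messages, question):
    """Pack a message thread plus an analytical question into one prompt."""
    thread = "\n".join(f"[{m['author']}] {m['text']}" for m in messages)
    return (
        "You are analyzing an internal discussion thread.\n\n"
        f"THREAD ({len(messages)} messages):\n{thread}\n\n"
        f"QUESTION: {question}\n"
        "Answer with a short, categorized list."
    )

# Invented sample messages standing in for the 500-message channel.
messages = [
    {"author": "Ana", "text": "I never know which model to pick for a task."},
    {"author": "Raj", "text": "Hallucinated numbers keep sneaking into my reports."},
]
prompt = build_thread_digest_prompt(
    messages,
    "What are the five most often met challenges with AI in this thread?",
)
# `prompt` would then go to whichever model backs the extension, e.g.:
# answer = client.chat(prompt)  # placeholder call, not a real API
print(prompt.startswith("You are analyzing"))  # True
```

The follow-up question about crowd wisdom is just a second call over the same packed thread, which is why one extension can turn scrolling and keyword search into a dialogue.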

[00:48:45] Jeremy Utley: That's such a deeply personal thing. I mean, not to get Big Brother about it, but when you say don't let folks get away with Fred-like behavior... And I love those two examples, scrolling and typing, because they're things that all of us do every day; we're all Fred in that sense. How do you enforce a new behavior in that kind of context?

[00:49:02] Brice Challamel: You remember how I told you, probably half an hour ago, that I was entering a fourth round, which is a round on ways of working. So that's the topic ahead of us. There are obviously two different ways of working. I witness them firsthand.

We have people who are true AI champions, way beyond myself, at Moderna, who amaze us with what they do with AI. And we have people who still haven't boarded the train. And it's okay, but the train is gaining speed. So, like, I don't want them to rip their arms off as they grab on board the train.

So it's more a matter of keeping cohesion in our social fabric. I don't want to leave anyone by the side of the road. So I need to go back to listening before I think on this; I don't have all the answers there. I know that something has to be done. I know that there are precedents for it, but I don't want to paste one of my precedents onto the current situation, because this situation has never happened before.

And by the way, I would love the recording of this conversation, if you'd be so kind, because you asked me all the right questions, in all the right orders.

And so , I have things to do with it on AI.

[00:50:06] Henrik Werdelin: I mean, it's also incredible how often I now record a conversation, because then you can use that conversation as the input for GPT to output something.

I mean, like, all the time now. Because you also formulate yourself, at least I do, in these meetings, and sometimes you come up with stuff that you didn't even know you had in your mind.

[00:50:23] Brice Challamel: And that's another example of be Fred or don't be Fred; not recording is a perfect example of Fred-like behavior. So not only do I always record: I came to this conversation having been informed, before I joined you, that it was recorded, so I knew, and we never even had to talk about it. When it's not, I would immediately ask: is it okay if I record this conversation? Because I don't know how it's going to go, and it might be something that I want to look back at and re-examine with a different type of lens.

We can go back to chess. At the end of your chess game, you always want to run the analysis with Stockfish and look back at every move you made, and how right or wrong it was, right? Every conversation, for me, is a little chess game that I want to re-analyze with AI. And sometimes it was a very straightforward game and I don't need much depth of analysis, but in a case like this conversation, I really want to do it.

And, uh, then there's a third case in which, uh, people would like to, but we are like on a different platform. They don't know how or anything. So I always have my phone with other AI on it, which is post recognition. And then I would ask them permission to record it separately from the interface. They would say yes or no.

I have yet to come across someone who says no; normally everyone says yes. And then I just launch the recording on my phone, right, which is next to the speaker, and I'm going to have voice recognition and a complete recording of everything that was said, and use it afterwards, right? These are rare cases, because most often now the platforms allow for it and people know how to use them. But having those conversations without the transcript is like writing back by hand, with a pencil, to an email, right?

[00:51:50] Henrik Werdelin: And also, just as a tip: I was doing a brainstorm at the Danish National Bank the other day. We were 12 people discussing the Draghi report, like, you know, why Europe is falling behind. And I literally just put the phone on the table, 14 people, and I was like, no way it's going to pick everything up.

Same thing: I put it into Otter AI and just had to tag who the different people were. Okay, a full transcript. And then obviously, you know, half an hour after the meeting, I could send not just a summary, but also what the takeaway was for each of the different people. It's just such a brilliant experience.
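
Henrik's trick here, record, tag speakers, then ask for each person's takeaway, boils down to grouping a diarized transcript by speaker before prompting a model once per person. A minimal sketch of that grouping step, with names and lines invented for illustration:

```python
from collections import defaultdict

def group_by_speaker(transcript):
    """Collect each tagged speaker's utterances from a diarized transcript."""
    grouped = defaultdict(list)
    for speaker, utterance in transcript:
        grouped[speaker].append(utterance)
    return dict(grouped)

# Invented lines standing in for the meeting's diarized transcript.
transcript = [
    ("Mette", "Europe under-invests in scaling companies."),
    ("Lars", "Capital markets are too fragmented."),
    ("Mette", "We should pool venture funding at the EU level."),
]
per_person = group_by_speaker(transcript)
print(sorted(per_person))  # ['Lars', 'Mette']
```

Each speaker's bucket can then be summarized separately, which is how "what was the takeaway for each person" falls out of one tagged transcript.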

[00:52:24] Brice Challamel: So let me give you a personal version of this, just for entertainment. I'm just out of parent-teacher conferences, because it's the end of the year. I have two kids, they have some common teachers, and they're in a French-American school, so half of the teachers are French, half of them are American.

So I had 16 back-to-back parent-teacher conference interviews to do. My experience with those has been dismal, historically. Like, I take a few notes of a few words, I barely understand what they tell me. It's complicated, because you're focused on the conversation, but there's so little time.

So this time I did it with AI. I got every conversation recorded, with the agreement of the teachers, who were very interested in how that was going to go, and the outcome, and so on. Out of this, I did a synthesis for each of my kids, I have two children, and for our household: you know, what is it for us as a family?

Then I asked another AI model to present the key findings, conclusions and next steps to a 12-year-old. One of my children is 11, the other one is 12. So it reframed the entire summary and next steps of the meetings for a 12-year-old, and it did it beautifully. It was perfect.

And then I asked it to propose table games for us, because we're French, so we have family dinners here, and these are our exploration environments every evening. I asked it to propose new table games that would drive us, in a fun and friendly way, toward the best possible outcome for both my children in the things that their teachers had highlighted.

And I got out of this five table games, which are so brilliant, which we now really enjoy a lot and have a lot of fun with, and which hone in on exactly the skills where the teachers and my children said: I wish I would see more of this, or more of that, from them.

[00:54:11] Jeremy Utley: That's such a cool example.

[00:54:14] Brice Challamel: So it's a small thing. It's very natural, right? But you see, for this one time, I'm going to take this example as a reference: it has been my assistant in taking notes. It has been my coach, because I also asked it, for each interview, if I asked the right questions, and how I could improve myself as a parent in parent-teacher conferences.

And because you have 16 back-to-back, you can make strong progress there. It's a highly iterative sequence, the parent-teacher conferences. I think I was a much better parent at the 16th iteration of this than at the first, if I'm honest, based on the guidance from the model.

And then, of course, it was an expert on education principles and how to leverage them and so on. And then it was a creative partner to design table games. So the team of five has operated there, on something that couldn't matter more to me, which is the future of my kids.

[00:55:09] Jeremy Utley: Can I just say, 'cause I'm a doofus and I'm happy to be the lead doofus of this podcast: you said earlier that every team is comprised of five people. And you were saying AI-augmented individuals are now at least five people: the individual, the assistant, the coach, the expert, the creative partner. I'm just saying that because if I missed it, I'm sure an audience member missed it. Brice was not saying that every team at Moderna is five people.

He's saying every individual is now hyper augmented. I just wanted to put a fine point on that.

[00:55:45] Brice Challamel: Yeah. And I hope I gave you a very concrete, you know, relatable example of this.

[00:55:51] Jeremy Utley: It's brilliant. It's absolutely brilliant. There's one question that I have. I just want to hear a little bit more about the role of the champions. I've heard you talk about that. The G, is it GCAT?

[00:56:02] Brice Challamel: Yes. Generative AI Champions Team, which is also the four bases of the DNA: Guanine, Cytosine, Adenine, Thymine. So it's a little play on the acronym of the four bases of the DNA, uh, with our champions group, because we are that kind of company. And you need to create social momentum mechanisms, right? You need relays and amplifiers, and also thought partners at different levels of this. Very often I would compare management, and the vertical labor organization which we discussed in this conversation, to a resourcing system, like the bloodstream in the body. Right? Like, the core role, whatever you think of management, is to allocate resources and prioritize the allocation of resources. But then there's another system in the human body, which is the nervous system. And even though you could map it in a way that overlaps the blood system for some parts, it is not the same. It doesn't operate at the same speed and doesn't have the same relays. It doesn't have the same, you know, chemistry, and it just doesn't connect to the same central organ to begin with. And so we need a nervous system also in something as vital and critical as a change management program for a paradigm transformation, which is AI. And so I want to think of the nervous system of the company. I can't afford not to know something that is happening in very key parts of the company, and I think they can't afford not to know what we're doing at the platform level either.

And so we need to send signals to each other. This is unrelated to resources, so it doesn't have to map exactly to hierarchy or management. And by the way, it doesn't. It maps more to proficiency and time investment and willingness to share with others, right? What you need from your nervous system. So when we did that prompt contest at the early stages of phase three, which was the AI adoption phase, we asked everyone who had a Moderna identity to propose ideas on what to prompt with AI and how this could be useful to them.

And we got, um, 180 concrete winners out of that prompt contest, and we used AI to summarize and leverage the insights out of those. Which is, by the way, how I know of the team of five. We ended up finding that those 180 prompts fell in four categories, right? And we could have subcategories, but: the assistant, the expert, the coach, the creative partner. So out of all the thousands of propositions and the hundreds of winners, those were the four categories. And we wrote a blog post for the whole company. It was titled "180 Things We Learned From You on AI," because I wanted to highlight that I was not going to teach AI to anyone. Like, they were going to teach AI to me, or to everyone else.

And that's what's happening there, right? That collective momentum that you want to trigger, and you need relays for. Uh, so the GCAT, they are the winners of the prompt contest. And to win that prompt contest, you had to propose something that was really great, and then you put it out there on the Teams channel and you have to be upvoted by your peers.

So anyone who has a Moderna identity can vote for you. All they have to do is put an emoticon on your idea, an emoji, right? But the winners are the ones who have the best intersection of a great idea and being very popular. And those are my champions. Because if you have the best possible ideas, but no one wants to vote for you because somehow you haven't gained favor from anyone around you, I don't know that you can be my champion.

And if you're like the nicest person in the world, but you can't for the life of you put together a prompt, I don't know that you can be my champion either. Right? Like, so, so the combination, the Venn

[00:59:54] Jeremy Utley: diagram is really

[00:59:55] Brice Challamel: critical there. A little two-level Venn diagram, but I need people who are imaginative and good at prompting AI, and who other people want to vote for because they like them or they like their idea.

And those are the essential nodes of the nervous system. So we have some roles that are like, uh, if you will, intermediaries between the champions and us. We have AI integrators, who dedicate a part of their job, with the agreement of our leadership and their manager, to understanding the role that AI is going to play in their teams.

And they are a smaller team than the AI champions, and kind of more dedicated to it. Whereas my job is 100 percent this, their job might be 20 or 50 percent this. Then we have the AI champions, and we have the enthusiasts. So that's 3,000 people, simply the people who contribute to our forum.

We have this Teams channel, which is the biggest Teams channel of the company, by far and wide. It has 3,000 active members, which is half of our population, and I think of them as the AI enthusiasts. So one out of two humans at Moderna is an AI enthusiast who contributes actively to our forum on AI. And then we have the users.

It's everyone, right? Everyone is a user, like electricity and

[01:01:07] Henrik Werdelin: And what do the different groups kind of, like, get, or do, or get labeled as? Like you mentioned, obviously your full-time job is this, you have the people that have like 15, 20 percent of their resources allocated to this, and then you had like the, you know, other groups. Is that a label, or does it come with anything else for being in that group?

[01:01:30] Brice Challamel: So AI integrators are very, uh, they're very official. They're officially integrators, right? It's, like, acknowledged, and it's going to be part of their annual performance review and things like this. Uh, AI champions, uh, they're also numerus clausus; we only have a set number of them. So currently we have a hundred of them.

And someone needs to get out for someone to get in. So we did two rotations already; we're about to do a third one. So we numbered the GCAT groups, and we're about to do GCAT three. And when we think that the mission has shifted enough, we want to reactivate the social contract, because you didn't sign up for this.

You didn't sign up, if you're a champion, uh, from round one, from the prompt contest, to drive new ways of working in Moderna, which is the next objective. So I want to make sure that you have time for this, that you have a taste for this, that this is something that you want to be a part of. It's a volunteer group, and certainly they should mention this in their annual performance evaluation, but it wouldn't be held against them if they didn't, right?

This is a volunteer group, right? The journey is the destination. And then for the enthusiasts, uh, no, the forum is open to anyone. We keep talking about it all the time. There's a go link here. We use go links, so it's go slash AI, like you can't forget it: go/AI. And I mention it five times.

Every time I speak with anyone, it's in my signature, in my emails. Like, you can't miss it. Uh, it's a very broad and wide invitation to just be part of the conversation, to join at your pace, on your topics. So that last group is more like, uh, an open conversation, which you're always invited to be a part of.

[01:03:10] Henrik Werdelin: Brice, I think this is a good place to end. You talked about how Moderna and Google had a corporate crush. I think both Jeremy and I now have a corporate crush on you, because of your kindness and your thoughtfulness and what you do. Thank you so much for being on.

[01:03:27] Jeremy Utley: Wow. What a conversation.

[01:03:32] Henrik Werdelin: What a conversation. I mean, like, I'm really blown away by him. Like, everything: the way that he communicates, his way of crafting stories, obviously his methodology for doing something that we all know is very difficult, which is teaching people to ride the AI bike.

[01:03:52] Jeremy Utley: I mean, the problem is, dude, I've got like nine pages of notes here. I'm scrambling as you're talking. I'm just like, where do I start? But I was struck by so many things. I mean, the comment that people don't dare to think of wild success. The fact that no one has had a ready answer to the question of:

What would make you think we've succeeded beyond expectations? I thought it was so fascinating, just as a commentary on human nature, that no one dares to think of wild success. And then of course, if we want to go towards AI just for a second, I loved his comment that it's stupid, risky, reckless to be doing anything without AI.

I think right now there's a whole chorus, or rather a cacophony, because chorus implies melody, harmony, etcetera. It's more like a cacophony of accusations around the recklessness of using AI. And it was so refreshing, and I think just a message from the future, so to speak, to think about the recklessness of not using every technology available to us to do the best work possible. I thought that was really astounding. Any soundbites that stood out to you?

[01:04:59] Henrik Werdelin: I mean, there are a lot of different soundbites. Like, you know, there are so many, to your point, that I've tried to write a lot of them down and couldn't really. I think what he really brought forth for me is how this new technology, the alien technology that we now have access to, is really going to change, obviously, how organizations are changed, but also how we as humans can make ourselves better, like, as individual contributors. Like just his case of becoming a better parent by becoming better at listening to a teacher, being better at introducing games, you know, in his family, and all those different things, like being better at listening to the organization when you post a question.

I think the big takeaway is actually just how profoundly we will have to rethink our capabilities and how we conduct work. And if we do, how incredibly upgraded we can become.

[01:06:06] Jeremy Utley: Hmm. Yeah, I'm really eager. We should have him back in, say, a year, after he's conducted this ways-of-working set, you know, the fourth round of interviews that he mentioned. And, um, you know, I'm really eager to hear what comes of that and what the further transformation looks like. I mean, it's fascinating, right?

If on the one hand you look at a company like Moderna, you go, they're already so far ahead, and yet there's this sense of discontent. You know, a proper, humble discontent. I get the impression talking to him that they're just starting their transformation journey. And they're three rounds of champions in, and I love even the premise of updating the social contract with champions, because as the needs evolve and change, you're not assuming that people still want to give of their time.

I mean, there are so many incredible nuances to human change management there. I'd be really eager to hear it, if our listeners have questions or areas that they want to probe on. It just felt like that is a conversation that could easily become a five-part series, you know, on org transformation and change management.

And um, yeah, I'm,

[01:07:18] Henrik Werdelin: Even like, I mean, when you see the Moderna video that kind of made the rounds like a year ago, where he talks about the 700 custom GPTs they've done, and they have their head of legal kind of talking about how they use it, I think we were all kind of envious, in a positive way, of, you know, how the hell have you managed to do that? And so the initial kind of thought is like, well, you know, now in all the organizations that I'm involved in, I want 700 GPTs, right? But his whole point is that that is not what you need to start with. It's not about getting 700 GPTs in there. It's about understanding what it is that you can drive for people that will make them feel successful, and how do you use AI to do that. And so it's a kind of subtle but profound change in how you approach it.

[01:08:05] Jeremy Utley: Why, not how. Why, not how. Tons more. I would say to our listeners, we love hearing from you what stood out to you. Drop us a line on LinkedIn or Twitter or whatever your preferred means of communication is, and let us know what you thought of this conversation. And if you enjoyed it, please share it with a friend, someone perhaps who's trying to catalyze a change in their own life or organization.

And as always, let us know if you know folks who you'd like to hear us talk to on the podcast.

[01:08:33] Henrik Werdelin: And with that, so long.

[01:08:36] Jeremy Utley: Let's keep moving beyond the prompt. Thanks.