Beyond The Prompt - How to use AI in your company

The AI Playbook Every Leader Needs: A Chat With Adam Brotman & Andy Sack

Episode Summary

Adam Brotman and Andy Sack, co-authors of AI First: A Playbook for Future-Proofing Your Business and co-CEOs of Forum3, join Beyond the Prompt to share what it really means to build an AI-first company. They explain why leaders must treat AI as a co-intelligence tool and how adopting the right mindset can transform decision-making, culture, and competitiveness.

Episode Notes

Adam Brotman and Andy Sack sit down with Henrik and Jeremy to unpack their book AI First and the framework they have developed for leaders. They argue that AI is not just another technology wave but a leadership reset that demands new playbooks, new structures and new ways of thinking.

They explain why AI should be seen as an augmentation of human intelligence, an “Iron Man suit” for leaders, and how mindset, experimentation and governance are essential to adoption. The conversation also explores organizational redesign, the role of executives in fostering AI literacy and the urgency of adapting quickly as the technology advances.

This episode offers a practical and forward-looking discussion on how leaders can integrate AI across their organizations, build cultures of experimentation and future-proof their businesses in a rapidly changing landscape.

Key Takeaways: 

Forum3: Digital Strategy for the AI Era | Forum3
AI First book: AI First Book | Forum3
Andy LinkedIn: Andy Sack | LinkedIn
Adam LinkedIn: Adam Brotman | LinkedIn

00:00 Intro: The Urgency of AI
00:19 Meet the Authors & The Premise of AI First
03:43 Defining an AI-Forward Leader
05:02 Adoption, Resistance & the AI Wake-Up Call
08:01 Why Mindset Matters More Than Tools
09:39 Experimentation, Governance & AI Culture
14:09 Re-architecting Organizations for AI
28:42 Balancing Innovation and Safety
35:45 The Evolution of AI Safety
37:46 Open Source vs. Closed Source Debate
40:07 AI’s Role in Organizational Agility
41:32 Human Augmentation & Co-Intelligence
42:34 The Future of AI and Autonomous Agents
46:14 Prototyping, Vibe Coding & Rapid Innovation
54:02 The Future of Organizational Design & Final Reflections


📜 Read the transcript for this episode: Transcript of The AI Playbook Every Leader Needs: A Chat With Adam Brotman & Andy Sack

Episode Transcription

[00:00:00] Andy Sack: If you're not engaging with AI in your business, you're behind. If you're not having a code red, or having a holy shit moment, you're not paying attention. And I think there's gonna be massive resistance and chaos, and meanwhile, the technology will advance.

[00:00:19] Andy Sack: Hi, I'm Andy Sack, co-CEO and co-founder of Forum3, and also co-author of AI First: A Playbook for Future-Proofing Your Business. My background: I'm a career entrepreneur. I started internet companies in the nineties, became a venture capitalist during Web 2.0, and now, during the AI era, I'm back to my entrepreneurial roots with my dear friend Adam Brotman.

[00:00:46] Adam Brotman: I am Adam Brotman. I'm the other side of all the co's Andy just mentioned: co-CEO, co-founder, and co-author of our company Forum3 and our book AI First. My background is I was Chief Digital Officer of Starbucks for most of my career, and also the President of J.Crew. I love consumer brands at the intersection of digital technology and brand building, and that's what Andy and I are working on, both with our company Forum3 and with our book AI First.

[00:01:17] Jeremy Utley: AI First - for folks who aren't familiar with the book, who aren't familiar with your work, can you tell us: what's the premise of the book? What's your core argument?

[00:01:27] Adam Brotman: The premise of the book AI First is trying to answer the question of what it actually means to be an AI-first company. When gen AI really hit us a few years ago, it was moving super fast. We all knew, and came to know, what it meant to be a digital-first company and then a mobile-first company. But none of us, including Andy and I, supposed digital thought leaders and digital transformation experts, had any idea what it would mean to be an AI-first company. So we researched that question by talking to some of the world's leading AI lab leaders as well as business experts. And the premise of the book basically says AI-first companies start with AI-first leaders: you need to understand that this is a co-intelligence tool. Don't think of this as your parents' AI. Think of it as a completely different way of augmenting how you make better decisions faster as an individual leader, as well as how you integrate it into everything you do as a company. As a result of that insight, we developed a playbook that, in a future-proof way, irrespective of how fast and powerful the AI gets and all the new things that are gonna happen, allows business leaders to get their whole company to understand the insight I just mentioned: that this is like an Iron Man suit, that this is about individual co-intelligence. And then it allows them to integrate it and scale it within their organization. That's essentially the arc of the book and the essence of what we came up with.

[00:03:14] Andy Sack: I usually let Adam go first so that I can chime in at the end with the succinct version. The premise of the book is: AI is a holy shit moment. Brace yourselves, all of us, for the speed of development of AI, and you need a playbook. And so we developed the playbook by talking to lots of...

[00:03:36] Jeremy Utley: Maybe the playbook second. You mentioned something, Adam, I want to come back to, which is the leader. How do you define an AI-forward leader? What are the behaviors, habits, routines, et cetera, of, call it, the quintessential AI-first leader? What does it look like?

[00:03:56] Adam Brotman: It starts with a mindset. So it's less about individual specific actions. But a leader, particularly the CEO of a company, let's say - and we talk about this a lot in the book - it actually has to start at the very top of the organization. The leader is encouraging the company to have that holy shit moment that Andy just described, and everybody has to internalize just how powerful and fast-moving and dynamic, and sometimes jagged, this technology can be. So people need to get their hands on it. They need to be curious, they need to be willing to experiment, and that has to come from the top, as opposed to some set of rules or typical technology implementation training. This is much more about experimentation, a mindset, and a willingness to keep trying things and figure out what works and what doesn't. And that will lead to an AI-first culture and a rewiring of how people do things.

[00:05:02] Henrik Werdelin: How much do you think people are actually following the playbooks? I'm asking because we've done a bunch of interviews now with people from companies that are considered pretty advanced in what they do, and the feeling we get is that everybody's just getting started. Some talk about having 80% of their staff using it, stuff like that. But I think it was the CEO, the founder of Zapier - we asked him, how far do you think you are in your journey from zero to ten? He's like, probably a three. And compared to everybody else, they were probably a nine.

[00:05:42] Andy Sack: Who'd you ask that question of?

[00:05:43] Henrik Werdelin: I asked Wade Foster

[00:05:44] Jeremy Utley: CEO of Zapier.

[00:05:46] Andy Sack: Oh

[00:05:47] Henrik Werdelin: Yeah. And he's obviously a paranoid founder, so there's probably some element of that. But it does seem that a lot of people aren't really getting the effect out of it yet that they think they can get. I guess the question is: is that a fair statement?

[00:06:06] Jeremy Utley: He called a code red. I don't know, Andy and Adam, if you're familiar with what Wade and his team have done at Zapier, but they declared a code red where they told everybody, take a week off work and do a deep dive. To your point, Adam, I think an AI-first leader's gotta start by hitting the alarm bell. I think most organizations, though, don't feel it's a code red moment. If they're honest, the vast majority of organizations go, that seems a bit excessive. I was talking to a CEO yesterday. He said, I don't want to alarm people. That was actually his verbatim quote.

[00:06:39] Andy Sack: I think that's right. It goes back, I think, to the jagged frontier and the relativism of the deployment and usage of AI. Wade's probably right: in his mind he's at a three in his journey, because he has an understanding of how fast the technology's moving and how significant a change it is for the world and for his business, a totally digital business. And even that is after calling a code red and saying, take a week off. The number of CEOs who have declared a code red is probably two or three percent of the hundreds that we've spoken to. Then you talk to somebody else who's in, say, the professional sports business. They own a baseball team, the Seattle Mariners, let's say. And they don't wanna upset their employees, and no, it's not really gonna affect marketing, and yeah, I kind of want some training. For them it's just different, right?

[00:07:51] Jeremy Utley: Well, make the case, Andy, for the Mariners specifically. Why is it an existential moment? Or is it okay for them to say, we don't have anything to worry about?

[00:08:01] Andy Sack: I'll make the case. In my opinion - I really wanted to call the book The Holy Shit Moment. The first chapter is called The Holy Shit Moment. On this podcast I've talked a lot about the holy shit moment. What is an AI-first CEO? Because I think an AI-first CEO is different: it takes different skills, and succeeds in different ways, than the CEO of five years ago or ten years ago. And it's the ones that have had the AI holy shit moment. It's like Wade. If you were trying to take one thing away from the book, it's: holy cow, this is an era of advancement kind of like ones we've been through before, the industrialization era, et cetera, but it's gonna happen in a much more compressed time, and it is now - it's the next two to three years. And if you're not on the bus, you're behind. So the case for the Mariners would be: this is gonna affect every aspect of the way your fan base decides whether or not to attend a baseball game. If they do decide to attend, their entire customer experience is gonna be affected by their interaction, and there's an opportunity to personalize the experience for the type of fan they are in ways that have never before been possible. You should start experimenting with AI and changing the way in which customers experience their Mariners.

[00:09:39] Jeremy Utley: So this word experimenting is a word we hear a lot, and you use the word playbook - the book is a playbook. Can you talk for a second about the mechanisms? We understand the value of experimentation broadly in innovation, even pre-AI. What does it look like to create mechanisms for experimentation - permissions, resourcing, et cetera? How do you define success? What are you measuring?

[00:10:04] Adam Brotman: In the book we talk about this from the perspective of kind of a boring word: governance. It's funny, Andy and I are not big on committees and bureaucracies and policies and governance. Those are not words that my friend Andy in particular likes, and I don't like 'em either. And, um, he doesn't

[00:10:28] Jeremy Utley: strike me as a governance, as a, as a governance evangelist. Yes.

[00:10:32] Adam Brotman: No, but one of the things we came across quickly...

[00:10:35] Andy Sack: Jeremy,

[00:10:36] Jeremy Utley: I'm a little slow. I'm a little slow.

[00:10:39] Adam Brotman: But one of the things we realized in writing the book, and in talking to everybody, was that we've never had this thing called intelligence as a service. It's such a weird concept, because the closest thing we've ever had to it is human beings. The more you realize that this is like spinning up some number of human beings to help you, the more governance becomes critical. You would never just willy-nilly hire people with no job descriptions, no tasks, and no management. And when it comes to using AI, there's both a defensive and an offensive side. On the defensive side, you don't want people running around like chickens with their heads cut off, doing stuff that's gonna give up secure information, with no actual learnings being tracked and organized to push you forward. On the other hand, you don't want to be so fearful and so disorganized that you're missing out on how fast things are moving - on how you could totally reinvent and improve speed of decision-making, quality of decision-making, customer experiences, marketing effectiveness. So how do you balance that? We realized a company needs, for lack of a better analogy, a vehicle to put the company in and take 'em to the promised land, at a speed and in a way that makes sense, but is also smart. And so we realized: you gotta have a task force or a council or an AI champions team that is actually keeping up with this stuff. When GPT-5 comes out last Thursday, they can play with it and have an intelligent conversation about it: oh look, it does this better, but this is worse; I wonder if this is a good opportunity to use Gemini instead; hey, where are we on our AI use policy around Gemini instead of GPT, team? And what about this agent thing that just came out with GPT-5 - is it smart to connect it to our data? And yet we wanna experiment with agents in other ways. If you ask a regular person who's not AI-proficient, who's not keeping up with this stuff, all of that just goes right over their head.

Yeah.

And so you need a group of people that understand the business you're in, understand what you're trying to achieve, understand your customers - really understand your customers - understand your customer experience and your brand and your business model, but who also understand AI enough that they can connect those dots and make sure you have a responsible but proactive way of adopting AI. Experimenting in that context, from that group, makes all the sense in the world. Andy's always good at comparing science labs to businesses: you need a lab, right? A place where you can play with these things and experiment and learn.

[00:13:47] Henrik Werdelin: Can I ask you about the lab thing a little bit? Because I think there are kind of two types of organizations. There are the ones that are just getting started: they're getting the playbook, they're setting up the champions, they're following the steps. They should read the book. Then there are people who have been doing a lot of that stuff and are starting to understand some of the subtleties of AI in an organization. They might say, hey, as I'm doing this, I'm realizing the organizational structure we have right now might not be the one that should be there once we really start using AI. I'll give you an example, and maybe you can react to it. Somebody called me the other day, the CEO of a big company, and he said, I really need somebody to help me with social media who knows a lot about AI. I was struggling a little bit to come up with a name for him, until I realized that what he was asking for wasn't really somebody who can use Midjourney. He was asking for somebody who could help him re-architect what marketing looks like in an age of AI. Which is not a social media manager; it's something different. And that has the issue that if it reported into the normal CMO organizational structure, it probably would fail, because it wasn't just about adding another headcount; it was about rethinking how you do the day-to-day work. So I guess the question is: with the more advanced organizations that you advise, how much do you think we need to re-architect organizational design? Or do you think we can basically redo the plumbing with AI?

[00:15:19] Andy Sack: I mean, this goes right at the heart of the innovator's dilemma. I'm a big believer that the speed of the advancement of AI absolutely is going to change the structure of our world, our businesses, and our lives. So the notion that in two years we're gonna have marketing departments that look like they do today - that's why you should have a holy shit moment and get on it, because it's gonna look totally different. I'm sure you guys saw the Kalshi ad that played during the NBA Finals.

I mean, the notion that one guy did that ad in less than two days - that ad is incredible. If you haven't seen it and you're listening to this, check out the Kalshi ad; type in "Kalshi NBA" and you'll find it. What are the skills required to do that? There's a visioning of the output, there's skill with the AI tools, and then there's rapid, rapid iteration. And I find I struggle with this rapid iteration skill.

'Cause you just gotta keep trying it and trying it. Adam and I were talking about this this morning: when you try it the first time, it's non-deterministic. The second time you do it, it's different. And yet there's an impulse there. So I think organizational structures within existing companies are going to change. It's why the Ethan Mollick chapter, the last chapter in the book - it's why we did an epilogue to the book. Because when we did that interview, Ethan was telling us: oh guys, your playbook's really nice and all, but it's kind of out of date. You're not pushing business leaders far enough into the future. It's gonna be out of date the day the book is published. And we got off the call and we're like, were we just chastised by Ethan? And we kind of were.

[00:17:25] Adam Brotman: Yeah. Oh, we were.

[00:17:26] Andy Sack: And he was kind of right. Because he was saying, you need to be about the next moment. When he chastised us, I don't even know whether - certainly Veo wasn't out. And I don't know that o3, the reasoning models, were out. Absolutely not.

[00:17:44] Adam Brotman: We talked to him in September or around then, and they didn't come out until...

[00:17:50] Andy Sack: So long story short, organizational structures are gonna change radically. I do believe there's gonna be a bunch of small five-to-eight-person teams doing the work that 200-person teams do today.

[00:18:03] Henrik Werdelin: So would you start that already? I mean, there seem to be two ways of thinking about what you do today. If you're the chief innovation officer, or whatever the title is that's responsible for this, you can either say: I'm gonna try to upgrade everybody, I'm really gonna make us AI-first. Or the other one, which some people seem to be doing, is what I think of as re-platforming, where they go: this won't work, I'll have to basically rebuild, but rebuild on an AI stack, on an AI philosophy. So I chop the team down, and then I build up from that.

[00:18:37] Jeremy Utley: Well, remember what we heard from John Waldman, Henrik - one of our previous guests, CEO of Homebase. He mentioned how his board was pushing him to be hiring unemployed people for two reasons. One, they have the bandwidth to be re-skilling. Two, they have categorical evidence from the market that their current skills are insufficient. I think it speaks a little bit to Henrik's question about how much effort we should put into, call it, up-skilling and re-skilling, et cetera, versus where we just need to start over.

[00:19:11] Adam Brotman: So let me give you a couple of thoughts on that. Number one: we wrote about this in the book a little bit, and I'm gonna draw some parallels to when I was named Chief Digital Officer at Starbucks. There was really no such title then - we talk about this in the book. And I'm not comparing the AI-first wave to the mobile-first wave we were dealing with on the backs of the digital, computerization, and internet waves. I think this is different. I think this is more of a code red, and I think we make that point in the book. But I personally don't think a code red means burn it all down and reorganize right now. That would be ambitious. I think you might find yourself in sort of a Klarna situation, where you have that instinct - and I don't know the Klarna details, I'm just following

what I've read, and I actually really admire a lot of what he was saying. But I feel like you'll end up cartwheeling over the finish line if you call a code red and don't really understand that it takes time for an organization, particularly any sort of sizeable organization, to rewire itself. You're a real business, and it's easy to say something like, we need to burn it all down and rewire everything. I do think you will eventually need to do that on some level, but as we said before, you have to give yourself a way to do it. So what we recommend now is: yes, you call a code red from your CEO. This is different; this is gonna change much faster and come at us much faster than any other technology, and it's different from any other technology, and we're gonna start rewiring things. But if we tried to do that across the board right now, we'd end up in a situation where you just wouldn't be able to get your daily work done, even if it was augmented by AI. So you gotta find a pace and an arc. We recommend: call a code red from the top, so to speak. Get people AI-literate across the organization - create academies, create a license for people to use AI in a safe, secure, and responsible way - and build a core group of people who are gonna really invest themselves in staying at the edge of this stuff and recommend how your organization should make these changes. Because unless your head of HR or your CEO is proficient, who's gonna make the decision to rewire everything? You could say, well, the functional leader in marketing is gonna do it. Okay, but is she or he on that council? Are they up to speed? They do need to do that in the next few years, which is a pretty amazing thing to be saying. But I would just say: be careful about declaring things, because you could end up in a situation where you're gonna have to roll it back on some level. I'm a big fan of doing it in a somewhat organized, even if urgent, manner.

[00:22:24] Jeremy Utley: So say you're a CEO and you declare a code red. I'm speaking right now from an observation of a multinational enterprise that has appointed an AI committee, or a council - I think you said task force, right? It's got champions. My observation is, unless it's delicately managed in terms of permissions, expectations, accountability, et cetera, the task force can actually just become the bottleneck. People are waiting: oh, let's see what the task force says about X, Y, Z. How do you make sure, from a kind of structural

perspective, going back to Henrik's question about org design, how do you structure the task force so that they accelerate your learning rather than rate-limit your learning?

[00:23:07] Adam Brotman: Yeah - the task force, and the AI policy, should accomplish exactly what you just said. It should accelerate. If you think about how to implement our playbook, you have to give the organization permission to use generative AI - across the board, I'm talking about everybody on some level. Don't just roll out a tool without telling anyone what to do with it and how it can be used. That's why I say it starts with the CEO, who should be saying to the whole organization: this is a powerful new technology, and I encourage you all to use it. We have a policy, and we have a task force to help you. They're there to help you think through prioritization of pilots, and any office hours and training you need, and they're gonna administer the policy. If you have any questions - can you use NotebookLM? Can we use Gemini?

What's our core tool? How do I learn more about this? How do I get more involved? The task force is there to facilitate, not to be a gatekeeper. It has to be a message from the top that everybody should be using this and playing with it, and the task force is there as an engine to answer people's questions and make sure they're not just all over the place. That combination allows an organization, if it wants to, to have everybody moving forward. It's a delicate needle to thread, but the task force is not a gatekeeper. And you could say the same thing about the use policy. Well, the use policy is gonna tell me what I can and can't do - that's why you gotta craft the use policy well. A good council, a good AI leader within the organization, is gonna make sure all of this has the opposite effect: that it actually means everybody is freely using it. I mean, when Andy and I talk to leaders, one of the first questions we ask is: how often are you using AI? Which AI are you using? How many times a day are you using it?

When we start talking to them: why haven't you run that through AI? I actually think one of the core questions that will happen all the time in the next year or two - and it's a sign of an AI-first organization - is when a leader says to their team: I just expect that you will have run this through Claude, Grok, Gemini, or ChatGPT, at a good thinking and reasoning level, before you've come to me with some thoughts or an answer to a question.

[00:25:34] Jeremy Utley: Right? Failure to leverage every tool available to you will be considered professional malpractice.

[00:25:41] Henrik Werdelin: On the playbook - if you were to just list out the chapters, is that feasible to do? Just to give people listening a sense of the elements, so they can go: either I already thought about that, or this is something I need to read up on.

[00:25:59] Adam Brotman: The arc of the book is: it starts out with Andy and me talking about our meeting with Sam Altman and our holy shit moment. As Andy mentioned, the intro chapter is called The Holy Shit Moment, and that was all about: wow, these guys are going after AGI, artificial general intelligence. They're moving much faster than we thought, with a goal of having this very powerful AI in the next five years. We didn't realize how fast their ambitions were. That led us to a chapter with Reid Hoffman, where he was explaining the Iron Man suit effect of this: that this is about micro use cases, individual use cases, 10x-ing the individual functional leader. Then we talked to Bill Gates about productivity - really about how, if you're gonna get that level of productivity, it's not just gonna be quantitative, it's gonna be qualitative: the two dimensions of productivity, and how important it is to understand that. Then we talked to Mustafa Suleyman, who was not yet the CEO of Microsoft AI but was coming off his time as a DeepMind co-founder, and who was really thinking about what the next three or four years look like in terms of the speed of change of this technology - like we just talked about: how will it affect marketing, how will it affect org design, how will it affect the attention economy? That was the beginning arc of the book. And then we stopped in the middle of the book and said, wait a minute. This is an unusual book: we didn't do an outline, we just literally documented our learning journey. We stopped and had a chapter where we said, let's put this all together and start talking to business leaders about how they're using it. So then we talked to the CEO of suzy.com, we talked to Sal Khan from Khan Academy, and we talked to the head of AI at Moderna, who had led literally a textbook case study on how to adopt AI throughout an organization - I think they have 80% using it every day.

[00:28:09] Henrik Werdelin: Bryce?

You were talking

to?

[00:28:10] Adam Brotman: Brice Challamel, yeah.

[00:28:11] Henrik Werdelin: Yeah. We were such huge fans of his.

[00:28:12] Adam Brotman: Yeah, so we talked to Brice, we talked to Sal Khan, we talked to Matt Britton from Suzy, all in an effort to find the leaders out there that get it, that have actually succeeded at infusing AI into the culture, into the mindset. They were those AI-first leaders. Then, based on that whole arc, we ended with the playbook, and then we ended up talking to Ethan Mollick and saying, you know, how did we do? And as Andy said, he told us: you didn't go far enough.

[00:28:40] Henrik Werdelin: Can I ask a kind of odd question? Sometimes when you talk to people, there's a thing you feel they don't say. Did you get a sense, after talking to all these very smart people, who are probably as aware of what's going on as anybody else: what was the unspoken thing?

[00:29:02] Andy Sack: I have an answer to that, because as we've done the book tour we've been asked a similar question: safety. Nobody talks about safety. It's not in any of those conversations. And it's easy - I fall into this as well. I'm enthralled by the pace, the capability set of what AI enables, the organizational structure, all of that stuff. Super interesting intellectual fodder. But safety.

[00:29:32] Henrik Werdelin: Do you think you can do both? I'm asking because I'm currently in Europe, where everybody's trying to regulate themselves into innovation, right? And the US obviously is amazing, because you guys are literally the Wild West people who are like, we'll figure it out: ready, fire, aim. Is it just an impossible challenge to say, we'd like to move as fast as we can and try to get to AGI, but we also want safety? Is that doable?

[00:30:02] Andy Sack: You know, I think early on a number of people recognized that this is not technology - this is alien intelligence, a capability set powered by technology, I guess - and that the power of the tool is conceivably an existential threat to humanity. And I don't know what to do with that. Humans aren't great at that. The capitalist system is doing what it does: a bunch of very smart, hungry business people at probably six to eight companies are attracting capital, deploying it, and going as fast as they can.

[00:30:44] Henrik Werdelin: One thing that might stop it - I'll say it as a statement, but I mean it as a question. I had an op-ed in the Washington Post yesterday about what I thought was a positive use of AI: as we talked about on your podcast, how do you democratize access to entrepreneurship with AI? There were like 600 comments, and most of them didn't seem to have read the piece, but you could just feel an immense anger and an immense fear. It was all: yes, but it's gonna polarize all the wealth. Then I see the Duolingo people put something on LinkedIn where they're excited about something they've done with AI, and the comment track is just: I'm canceling; this is just because you wanna make more money; you don't care about people. There's a lot of what I'd call anger and deep fear. And then there are people who do these podcasts, like us, who talk about all this stuff and go, but it's awesome. Do you think that what I see as brewing resistance, brewing fear, brewing attention to the things that could go wrong, might in some way create a braking effect on this? Or will it be pushed to the side, and people go, let's just...

[00:32:12] Andy Sack: I think it'll be pushed to the side. I mean, it'll brew up and there'll be chaos. I'm sure you listened to that recent Diary of a CEO episode - I thought that was an excellent podcast - in which he talked about the five to fifteen years of chaos and then tried to make the case for utopia. I'm sort of on that page. We're in it; the bus has left the station. If you're not engaging with AI in your business, you're behind. If you're not having a code red, or having a holy shit moment, you're not paying attention. And I think there's gonna be massive resistance and chaos, and meanwhile, the technology will advance.

[00:32:57] Jeremy Utley: Yeah. Did you guys see there's a report just out where they studied human evaluation of work? It was the same underlying work, and the question was whether the evaluator thought it was human-generated or human-plus-AI-generated. The worst evaluations were given to human-plus-AI work by those least likely to adopt AI themselves. Which is to say: the less likely someone is to adopt AI, the harsher they are towards AI adopters. And in many cases, these are the middle managers. They were harshest, by the way, towards women and minorities. When they perceived that a woman was using AI, they were much harsher. When they perceived that a minority was using AI - sorry, they evaluated the same underlying output much more harshly and questioned the person's competence.

[00:33:50] Andy Sack: They knew who was making the output?

[00:33:53] Jeremy Utley: That was the experimental condition - same output. But they told the evaluator: this is human-only output by a man; this is human-only output by a woman; this is human-plus-AI output by a man; this is human-plus-AI output by a woman. And consistently, those least experienced with AI - who often are in positions of giving promotions, approving work, et cetera - were most harsh towards AI.

[00:34:16] Adam Brotman: They were testing for bias in this AI output, or AI-assisted, context. Yeah, that doesn't surprise me, in the sense that bias has been going on before AI and now shows up with AI, to your point. But getting back to where I agree with Andy: I do think we can draw comparisons to the internet in some ways. We're old enough to remember all this stuff - at least I am. The internet caused a lot of problems, and causes a lot of problems today: social media and fraud and scams. So there's this weird thing we've experienced, and it just gets more powerful, right? Computers to internet, to cloud, to mobile, to now. It's the same mixture: A, you can't put the genie back in the bottle; B, it causes real concerns. Andy's right - the whole time, we'd get done with a discussion and say, yeah, there was really no discussion of safety, or of why they're building these things when they don't know what's gonna happen. And there needs to be more - you're talking about society, about displacement and disruption. So in other words, this industry is happening and these tools are out there. I do think it's similar to the internet in the sense that, yeah, there's a dark side to this, there are problems, and we need to be doing more.

[00:36:02] Henrik Werdelin: When we talk about safety, we often talk about the foundational models. I'll give you a little backstory. I was part of the early social web, right? I remember at Social Foo Camp - where everybody who was building social platforms was, and all the founders of companies that then became well known were hanging out - there was not a lot of discussion about safety, because we were taking pictures of our lunch and putting them on a feed. We were like, what's the worst that can happen? Then obviously a whole generation's mental health kind of got washed away in the process. So when we talk about safety, and you talk to all these CEOs: obviously they can't change the foundational models, but I wonder if there's something they can do to address the element of safety just in the way they conduct themselves with everyday use in their organizations.

[00:36:59] Adam Brotman: CEOs of the labs or CEOs of everyday companies?

[00:37:02] Henrik Werdelin: No, I don't have an answer for it, so I'm not sure there is one. But I just wonder - we talk about safety, and then we look at these eight companies and go, you should fix it. And in Europe we look at the government, saying, oh, you should fix it. I wonder if there's an introspective side to safety.

[00:37:16] Andy Sack: Is there safety on the edge as opposed to the center?

[00:37:19] Jeremy Utley: Hmm.

[00:37:19] Henrik Werdelin: Like, what would we do there, right? Earlier I asked the question: should we basically just find new people if the old ones don't want to be upgraded? Is part of safety to say, well, actually no, it's your responsibility to make sure people come along on the ride? Those kinds of questions - I just don't hear them that often.

[00:37:42] Andy Sack: It's a good question. I haven't thought about it. My knee-jerk reaction is that really it is about the large language models. It's not about the user at the end, the CEO, et cetera. It's about the capability set of the tools, and really the debate is about open source versus closed source, and where ethics and safety fall in that responsibility. Because AI is such a profound step forward in its capability to solve problems that humans can't - for the good, as you know: we might actually solve cancer, we might solve Alzheimer's, we might solve fusion. AI is being talked about as a potential breakthrough in all of those domains. The underbelly of all of that is that the technology could also be used for not-well-intentioned things. And when you have a geopolitical debate about what intention even means - just because the US says one thing doesn't mean Germany agrees, and those are two closely aligned countries - it gets really complicated in the real world. So open source versus closed source is a really challenging question.

[00:39:07] Henrik Werdelin: And I think that's actually one I don't hear a lot of us discussing. My co-founder at Audos and co-author of the book, Nicholas, was in Bhutan last week, and one of the questions he got there from a group of high schoolers was: so, what do you use, OpenAI or DeepSeek? That's the first question, right? And obviously he's like, what do you mean? I just use OpenAI. I wonder how much of world politics will be influenced by the question you're posing - open source versus closed source, but also just who makes it.

[00:39:44] Andy Sack: Safety's not top of mind. Not given what's at stake - the prize that is perceived to be at stake, geopolitically and capital-wise. It's safety be damned.

[00:40:00] Jeremy Utley: One thing I would love to revisit is this question about both leadership and organizational structure in regards to the playbook. You had mentioned Sam Altman, and one of the things I've wondered about is: to what extent is this about AI-first versus, call it, agility, for lack of a better word? The reason I ask is, I think AI-first has a certain connotation. But when I talk to my friends at OpenAI, for example, the hallmark of their culture isn't only that everyone's using AI for everything - that's a non-negotiable; as Brad Anderson said on the podcast recently, every person, in every function, every day. Okay, granted. But I look at the way they're making decisions at OpenAI: it's about being clear about one-way versus two-way doors. It's a classic Amazon kind of thought process, right? They're clear that meetings must be valuable. They're clear that there's no email - there's literally zero email in the company except with outsiders, right? To me, those aren't necessarily AI solutions. They're organizational decisions around agility, velocity, pushing decision-making down, et cetera. How much of this is AI catalyzing the necessity of agility, versus AI bringing in a new concern or consideration?

[00:41:27] Adam Brotman: I think I understand your question. I'll try it; Andy, jump in. We wrestled with the title AI First, because we were purposely asking, what does AI-first even mean, before we knew the answer. And we ended up with an answer: it's about the human in the middle. It's augmentation. It's 10x-ing the human. It's the Iron Man suit - it's still Tony Stark in the Iron Man suit. At the end of the day, that's how I think of it, and I think Andy does too: you're going through your day, and now you have this always-on augmentation tool at your disposal. You should use it a lot, and you're gonna actually rewire the way you make decisions. And it's gonna get weird, because, as we talk about in the book, at some point agents will truly arrive. We have agentic things we do with AI now - even thinking models are somewhat agentic on their face - and then you've got these things called agents, but that's a really misused and overused term. At some point there are gonna be agents that are trusted, autonomous, capable, maybe even superintelligent. I don't think that's a world that's twenty years away; I think it's much closer than that. That's a different consideration. But for now we don't really have that. This is an augmentation tool - an incredible augmentation tool. You can automate workflows and you can do things, but it's humans that are being augmented. I dunno if that fully answers your question, but that's how we see things for the next couple of years. We do think, and would recommend, that people in the workforce, and leaders in particular, realize their people probably should be using it more - but it's to achieve what the people are trying to achieve. It's still human-powered, and the question is how much augmentation, how much 10x-ing, happens. That spectrum - it's like science fiction - is already starting to shift: how much do you let the AI do things, and how much do you rely on the AI? But I think it's human-powered.

[00:43:52] Jeremy Utley: Well, I think there's a lot there, Adam, just in terms of decision-making. Think about turns in an organization: you're my boss, I propose something to you, you've gotta review it, you come back. If we're both augmented, and I say, hey ChatGPT, you know everything about my boss Adam, you know how he makes decisions - will you recast this email in the way that's most likely to be persuasive to him? Right? And you have an AI that's trained to evaluate Jeremy's proposals. You could imagine, even if we just cut the turns in half, we've doubled decision velocity. Right? And that's in a world where it's only humans being augmented to interact with each other.

[00:44:28] Adam Brotman: I agree.

[00:44:28] Jeremy Utley: You see the organization accelerate dramatically.

[00:44:32] Adam Brotman: That's right. Yeah. Andy and I talk about it like: you've seen that movie Limitless, with Bradley Cooper? I still think that analogy holds. Everybody has a Limitless pill - not just one character in the movie, but everybody. What happens when you end up making better decisions faster? If you think about it, in most organizations our jobs are... we're just decision-making agents, to mix metaphors. Yeah.
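To make that exchange concrete, here is a minimal sketch of the kind of "recast this for my boss" helper Jeremy describes, using the OpenAI Python SDK. It is an illustration only, not something the guests built: the model name, the reviewer profile, and the recast_for_reviewer function are assumptions invented for this example.

```python
# Hypothetical sketch of the "augmented turn" Jeremy describes:
# rewrite a draft proposal so it matches how a specific reviewer
# prefers to make decisions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative profile; in practice this might be distilled from a
# reviewer's past feedback and decisions.
BOSS_PROFILE = (
    "Adam prefers proposals that lead with the decision being requested, "
    "quantify the expected impact, and flag the main risk in one sentence."
)

def recast_for_reviewer(draft: str, reviewer_profile: str) -> str:
    """Rewrite a draft so it lands persuasively with a given reviewer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's draft to be as clear and persuasive "
                    "as possible for this reviewer: " + reviewer_profile
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "I think we should pilot the new onboarding flow next quarter."
    print(recast_for_reviewer(draft, BOSS_PROFILE))
```

The evaluator on the other side of the exchange would be the mirror image: a second prompt that scores incoming proposals against the same profile before the human reviewer spends a turn on them.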

[00:45:05] Andy Sack: I have a slightly different angle - I think it's related, I think it addresses this question. I was watching some YouTube clip. Have you ever heard of YouTube? It's this great new platform.

[00:45:18] Jeremy Utley: Amazing. What's the URL?

[00:45:20] Andy Sack: On YouTube, there was Tom Brady, and he was talking to a soccer team, describing how every day in practice he would do the two-minute drill, and he'd be like, I'm gonna treat this practice like it's the Super Bowl. He would run the two-minute drill, and they'd score, and he'd jump up and down. He got his team to treat practice as if it was that Super Bowl moment. And I think that's very relevant to

[00:45:53] Jeremy Utley: this conversation. Yeah, that's brilliant. I absolutely agree. Folks don't really see all of the opportunities there are for snaps, as you're saying. Every email exchange is a snap. Are you treating it like a Super Bowl moment, or are you just casting it off?

[00:46:13] Adam Brotman: It'd be interesting - this is kind of a tangent, but I've always wondered: what if, for everything I did in the course of a week, I forced myself to use AI for every single thing? Which would be really weird, because I don't know how you'd do that. But it'd be interesting, and I wanna try it as an experiment. I have a feeling I would find myself in a weird but better place at the end of the week, which is a weird thing to say. But you'd have to force yourself; it's completely unnatural to do that. And it would be me prompting and me iterating, so it's still me. Again, until the agents become truly autonomous, it's still you prompting, still you interacting, you thinking about how you wanna react and how you wanna receive the information. So anyway, I think that Tom Brady thing is a great example. And it's interesting - we've landed on a point on this podcast, which is that in some ways that concept, which sounds so weird, almost overly artificial and inhuman, is the point we want to get across in the book. You can choose not to do it, and that's okay, but know that if you did choose to do it, you would probably make better decisions faster all week. And if your whole organization was doing that on a compounded basis, every day, every month - where is your company gonna be? We actually know the answer, by the way. There have been studies by Harvard and BCG and Wharton that say you're gonna be something like 50% faster and higher quality, at today's models. That's insane - most leaders would take 10%. Michael Dell was on a podcast that Andy sent me, BG2, BG Squared, and he was saying the same thing. He's like, I'll take 10%, if I can get 10% higher quality and faster decision

[00:48:23] Henrik Werdelin: making. You've probably read the book - I have read the book, for sure. I have one last question, and then we'll let you go, 'cause we know we're running out of time. I know, Adam, that you convinced Andy to write a new book, but what I'm really interested in is: what is the next thing you guys are pondering? What's the thing you might not have the answer to yet, but where you see an insane amount of interestingness?

[00:48:51] Andy Sack: I have two areas. One is the impact of this massive investment and scaling on the overall economy, globally and in the US. And that means: what really happens to labor, industry by industry, job title by job title, and to consumer spending? There's a massive gold rush being poured into the economy. So that's one topic. The second topic is the fundamental changing of the fabric of competition - in many ways your question about organizational structure is really fascinating to me, but what actually happens to competition in business in general? I compare a lot of what's happened with AI to this: suddenly, at least in the US, there are 20 to 40 tools for every industry, every application. Competition has been unleashed in massive ways; it hurts margins. And yet, what happens as a result of that? I think business will fundamentally change, and I'm fascinated by it.

[00:50:06] Henrik Werdelin: That's fascinating. Yeah.

[00:50:08] Adam Brotman: The only thing I'd add - and Andy and I have been talking about that - is we've also been talking about vibe coding. It's interesting: Andrej Karpathy sort of coined that term, and it's been around for six months or whatever now. But there's something about what's happening with the coding abilities for non-technical people in the last month, between Lovable and GPT-5's release with its highlighted coding. As Andy and I started to try vibe coding some things, it actually opened our minds: vibe coding takes that problem AI is solving, that we just talked about, to another dimension. That's another thing we're fascinated by, pulling the thread on that. These sort of science fiction ideas of organizations not just rolling their own enterprise software, but - I'm not a software engineer, but - taking a software engineering approach to non-software problems, and actually creating code to help with those problems. If you can combine a business leader's mind with functional expertise and software development capabilities, suddenly that's a really weird and interesting engine for propelling an organization forward. Andy and I are trying to get our heads around that, which is a whole other thing. It's like the series The Three-Body Problem - I don't know if you've read those books or watched the show, but the first one is about aliens maybe coming to Earth. Then you hear what the second and third books are about, and you're like, oh, it went in a weird place. I think that's where the sequel to AI First, if it's ever literally or figuratively written, is gonna go: how organizations don't just change their design. They're not just AI-first - they're completely different animals, because of AI.

[00:52:09] Jeremy Utley: Yeah, there's so much there. It's impossible to overstate - I mean, talk about a governance headache, or a safety headache, right? I told this story the other day. I was working with a company in Latin America, and they had like ten ideas. I'm the idea guy, right? So you start with a thousand ideas, you get down to ten. And before lunch, which is like 3:00 there, I said: hey, back of the envelope, how much time and budget do you need to pull off these ideas? Not to scale them across thousands of locations, just to prototype. And the mode response - any guesses? 50K, eight weeks. That's what everybody said: roughly 50K and eight weeks. While they were at lunch, me and my partner built one of the ideas. They come back from lunch: did you do this? And you could just see the blowing of minds, and then the CIO going, no, now you've really screwed me, because now they're gonna expect this. And think about every manager who hears a team say it's gonna take ten weeks, and goes, great. If a manager doesn't know, they don't say, absolutely unacceptable, I'll give you ten hours. They keep saying: great, ten weeks, let me know when you've got a prototype. Right?

[00:53:27] Adam Brotman: Yeah, that's right, Jeremy. I know we're at time - that's a great story. It's interesting, because Andy and I, in the book AI First, were all about: wow, the Iron Man suit, the 10x thing Reid Hoffman talks about - this is better decisions faster. But what happens when the execution of the better decision is also that fast? That's the part, like you're saying, that we're trying to get our heads around, and that's the next chapter.

[00:53:56] Henrik Werdelin: That's awesome. That's cool. I think that's a great place to stop. We really, really appreciate having you on.

Okay, Jeremy. Interesting conversation, huh?

[00:54:06] Jeremy Utley: I thought so. My head's kinda still spinning. At the end there, we were talking about how everybody's got a Limitless pill.

[00:54:12] Henrik Werdelin: Yeah,

[00:54:13] Jeremy Utley: And I think, for all of the hype around agents, and all the hype around scaling productivity, stuff like that - the kind of thought exercise we went through at the end, right? Adam's my boss; I'm working for Adam. If I'm just able to communicate better with him, and he's just better able to manage and understand me - just thinking about the decision velocity in that kind of paired relationship, if you extrapolate it... it doesn't take very much extrapolation to imagine it's not just a 10% better organization, it's a totally different organization. I wanna listen to that BG2 podcast with Michael Dell; it sounds like he was saying something similar. To me it's really profound to think about truly being AI-first, truly being augmented.

And after we hit pause on the recording, we were talking about maybe doing an AI-first challenge, where we and our listeners say: hey, all week long, we're gonna invite AI into every part of our week. We don't even know what that means, really. But even just to shift the frame from "what is AI a fit for?" to a starting point of "we're working with AI for everything," finding ways to do it, and then reflecting on that later.

I think it's a really powerful reframe, actually.

[00:55:31] Henrik Werdelin: It's also interesting when you suddenly figure out where it breaks. I'm not sure we talked about it, but I was wearing one of those little dongles, you know, that records everything. And I brought it around the family, and my wife looked at it and goes, what's that? And I go, oh, you know, it's an AI device I'm testing, I'm recording everything. She's like, no, you're not. And it started a really interesting conversation, because in her view, even though this was just for my own use, it was worse than taking a picture and putting it on social. That was the boundary that was broken there. And for me, since I was just using it for myself, it was like, it's just me and my AI friend, right?

[00:56:11] Jeremy Utley: Right.

[00:56:12] Henrik Werdelin: So I think the interesting thing with that experiment is not only, obviously, what it's gonna yield in an active way, what you can use it for, but also where you find out why you definitely shouldn't use it. The thing that really crystallized for me: I saw a Shopify head of design, or something like that, basically say something to the extent of, don't ever show me Figma files or anything, I only wanna see prototypes. I only wanna see where you mocked up the solution for something that you're debating with me.

He had many other points, but that was kind of the output from my brain of this conversation: are we asking our organizations to implement AI because we're trying to implement new technology, or are we trying to really create a new type of behavior? A sense of resourcefulness, or a sense of entrepreneurship among people, where organizations have more agency, where people go further in the chain of thinking about something and then executing on it. And I think what was interesting with this example is the ladder: I don't wanna see text, I wanna see a picture. I don't wanna see a picture, I wanna see a presentation. I don't wanna see a presentation, I wanna see a prototype.

[00:57:33] Jeremy Utley: Prototype that

[00:57:34] Henrik Werdelin: the prototyping is now a capability that anybody can actually exercise. And that was just not the case, even a year ago.

[00:57:43] Jeremy Utley: You know, I'm predicting folks are gonna enjoy this conversation a lot. You don't know this, Henrik, but we're gonna riff here for another minute or two, because one thing I've been thinking about is this idea of low-res prototyping, and now you can actually high-res prototype as quickly as you can low-res prototype, right? In Replit, you can build a working app as quickly as you could build a crappy, you know, construction-paper, pipe-cleaner version. And at Stanford, for years, we've been teaching students the merits and values of low-resolution prototyping. One reason is speed. Now you can get high resolution at the same speed, maybe even greater. The other thing, which I'd be fascinated to dig into, and which I don't think is addressed by, call it, the Replits of the world, is your own investment in your solution. When you're working with crappy materials like construction paper, your ego doesn't get attached. When something looks nice, it does. It also changes the customer's or the user's perception, right? If it looks nice, I'm talking about colors and button placement. Whereas if it looks crappy, of course I'm talking about the concept, 'cause you couldn't possibly wanna launch this, right? So all of a sudden we're anchoring differently, and these rapid prototyping tools, I think, will have a different impact on the user. That's probably good, because the more realistic or believable the prototype, the higher the quality of the data point. So I like that.

But from a user, or a designer, or an innovator perspective, I think part of the challenge is that it's really hard to throw away something that looks nice. And I'll give you one simple example. One thing I know to be true, from teaching hundreds of professionals now how to use tools like Replit, is that sometimes the best thing to do is open a new window, drop the PRD in, and just start over, because for whatever reason Replit went down a weird rabbit hole. But what I've noticed is the cognitive load on a human when they see all the work. Granted, it only took Replit 10 minutes to do this, but it feels like four weeks' worth of work. Trying to scrap that window feels like throwing away four weeks' worth of work, not 10 minutes.

[00:59:53] Henrik Werdelin: Okay. I have two things.

[00:59:54] Jeremy Utley: There's like a weird attachment there. Anyway, I'm just kinda riffing. Hit me back, come on.

[01:00:00] Henrik Werdelin: No, I'm continuing the riff, 'cause I'm into it. There's two thoughts. One is this kind of promiscuity of making. If you talk to people who are great at using people on Upwork, what they do is they hire six people to do the same work, and then they just take the best part and continue with that person. The workers are kind of disposable in that use, which is weird for most people, 'cause they paid somebody to do some work. What you're saying there is quite profound for me, and it has helped me a lot as an entrepreneur: I'm pretty promiscuous when it comes to concepts. I start two or three things at the same time, and if one doesn't work, I don't dig in harder. I basically go, oh, it didn't work, and then I go to the next thing. I was struck by my wife, she's a medical biologist, and she used this term: it was non-viable. It's not emotional. Didn't work, next thing. Right. The other thing I think is interesting, which you mentioned, is what happens when suddenly you can prototype in high fidelity. You were making the point that it used to be easy for people to do something that was kind of shitty looking,

'cause then people would focus on the concept. I think the flip side of what you're saying, though, is that now that everything looks like the final product, the idea suddenly actually matters, right? The idea is more important. So people now have Veo 3, and you see these amazingly produced videos, and you go, yeah, but it's not funny, it's not interesting, it didn't make me feel anything. You basically prompted something that was not a good concept, that didn't have an insight. The idea is more

[01:01:40] Jeremy Utley: important than ever. The idea is more important than ever. That is

[01:01:42] Henrik Werdelin: so fascinating, also, right. And the same thing applies to a lot of the product work I do now. I remember when I first started my career, I would use Balsamiq Mockups, I would mock stuff up in these kind of wireframe-y ways. Now I use, you know, whatever, GPT-5 or Replit, and I basically build whatever I want, and then I give it to an engineer and say, okay, I know this is not safe or anything, but this is roughly what I was looking to do. And then obviously the velocity of taking that into the actual code base is much faster than having to explain it to somebody, who had to explain it to somebody else, who had to code it. But if it's not working, that's very clear immediately, 'cause it just doesn't pop when you try to use it.

[01:02:27] Jeremy Utley: Mm-hmm. Yeah. You know, we talked in this episode about the future, and how the future's gonna look different, how our organizations are gonna look different. When it's technically possible to run that many experiments in parallel, you're gonna have all sorts of other, call it, coordination challenges, which maybe will be solved by AI. But I think folks will only earn the right to have that kind of elevated, abstracted conversation about innovation if they are at the edge of capabilities. Meaning, if they're still writing 20-page PR docs and reading 20-page PR docs, as we talked about with John Waldman a few weeks ago, they are going to be a hundred times slower. And if they're only building one thing with Replit, they're gonna be a hundred times worse than if they're building a hundred and happy to scrap the 99 for the one thing they wouldn't have discovered had they not done a hundred parallel versions. Extrapolating to the exponential, in a sense, is a wild way to think about product development, innovation, and management. And it's now more possible than ever, because, going back to the beginning of this conversation, we're no longer constrained by our intelligence.

The reason you could not make 10 versions of the landing page is 'cause it took 10 times as much time. Now it literally takes the same amount of time to make 10 as to make one. The only limitation is: do you, as the innovator, know you should make 10? That's the only difference.
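
As a concrete sketch of that "ten for the price of one" arithmetic, the snippet below fans ten landing-page briefs out in parallel. The generate_variant function is a hypothetical stand-in for a real code-generation call (a Replit agent, GPT-5, or whatever tool you use), so the only real claim here is the parallelism, not any particular API.

```python
# Minimal sketch: ten landing-page variants in roughly the wall-clock
# time of one. generate_variant is a placeholder for a real model call.
from concurrent.futures import ThreadPoolExecutor

def generate_variant(brief: str, angle: str) -> str:
    # Placeholder: a real version would call a code-generation model here.
    return f"<html><body><h1>{angle}</h1><p>{brief}</p></body></html>"

brief = "Sign-up page for an AI-first leadership workshop"
angles = [
    "Future-proof your business", "Better decisions, faster",
    "Your Iron Man suit for work", "Prototype before lunch",
    "Stop writing 20-page PR docs", "The 10x leadership reset",
    "Co-intelligence for teams", "Experiment without permission",
    "Velocity is the new moat", "AI first, not AI someday",
]

# Each variant runs concurrently, so total time tracks the slowest one.
with ThreadPoolExecutor(max_workers=len(angles)) as pool:
    variants = list(pool.map(lambda angle: generate_variant(brief, angle), angles))

for i, html in enumerate(variants, 1):
    print(f"variant {i}: {len(html)} bytes")
```

The limiting factor is exactly the one named above: not the effort of producing ten, but knowing to ask for ten and being willing to throw nine away.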

[01:04:01] Henrik Werdelin: I'll stop us now, because soon we'll make the commentary longer than the podcast.

[01:04:06] Jeremy Utley: No, no, it's, I mean, folks, hey, how about this? How about a code word? The secret code word this week is "más Henrik." That's M-A-S-H-E-N-R-I-K. Or, uh, what's the opposite, "less Henrik"? I dunno. I wanna know whether people like more of this kind of conversation or less. So whichever code word you use tells us whether you want to hear us riffing more, or just, get back to the guest, you jokers. Let us know.

[01:04:38] Henrik Werdelin: That's awesome.

[01:04:39] Jeremy Utley: As always, thanks for listening to this episode of Beyond the Prompt. I am your host, Jeremy Utley, alongside my co-host Henrik Werdelin, and we are delighted to learn alongside you. Until next time, signing off from Copenhagen and La Honda, California. Bye.