Beyond The Prompt - How to use AI in your company

New research: Get better ideas with AI (with Kian Gohar, Jeremy Utley, David McRaney, Henrik Werdelin)

Episode Summary

"You Are Not So Smart" podcast host David McRaney and Henrik Werdelin sit down to discuss the surprising results of a new study into what happens when groups of people work together to brainstorm solutions to problems with the help of ChatGPT. Based on their new research, Stanford's Jeremy Utley and best-selling author Kian Gohar have created a new paradigm for getting the most out of AI-assisted ideation, which they call FIXIT. In this episode, we dive into the research and explore how you can become better at getting good ideas with AI.

Episode Notes

"You Are Not So Smart" podcast host David McRaney and Henrik Werdelin sit down to discuss the surprising results of a new study into what happens when groups of people work together to brainstorm solutions to problems with the help of ChatGPT.

Based on their new research, Stanford's Jeremy Utley and best-selling author Kian Gohar have created a new paradigm for getting the most out of AI-assisted ideation, which they call FIXIT. In this episode, we dive into the research and explore how you can become better at getting good ideas with AI.

Co-production with David McRaney's podcast You Are Not So Smart

Including:

Jeremy Utley

Kian Gohar

Henrik Werdelin

📜 Read the transcript for this episode: Transcript of New research: Get better ideas with AI (with Kian Gohar, Jeremy Utley, David McRaney, Henrik Werdelin)

Episode Transcription

[00:00:00] Jeremy Utley: Hey there, welcome back to another episode of Beyond the Prompt, where we explore the most fascinating applications of generative AI in business today. I'm your host, Jeremy Utley, joined by my ever-curious co-host, Henrik Werdelin. Today's episode is special: Henrik turns the tables on me and invites me to be a guest on my own show. And then he hands over my normal co-hosting duties to none other than David McRaney, the brilliant mind behind the podcast You Are Not So Smart. Joining us as a guest is my research partner, Kian Gohar. Over the past year, Kian and I have been knee-deep in research, understanding the impact of generative AI on problem solving within organizations. What we've uncovered is both fascinating and a bit of a cautionary tale. A specific cognitive bias can seriously hamper AI-assisted teams from reaching their full potential. Luckily, this bias is as preventable as it is predictable for folks who are aware of the phenomenon. David is here to help us unpack the cognitive bias conundrum, and Henrik will, of course, dive into its implications on the organizational front. It's a crossover episode you won't want to miss. Let's get started. When we first started seeing results flow in, my first thought was, oh no, oh no. You know, and actually it's a good sign as an academic researcher when you get the data back and you think, oh no, because then you wait a few beats. And I remember that first day, Kian and I are on the phone looking at stuff and I go, oh, yeah.

[00:01:40] David McRaney: My name is David McRaney. This is the You Are Not So Smart podcast. And that was the voice of Jeremy Utley, a professor at Stanford University specializing in creativity and entrepreneurship. You may remember him from a previous episode of this podcast where we discussed his book Idea Flow. And since we recorded that episode, I have visited him at Stanford to see the inner workings of the d.school, where he teaches. Also known as the Hasso Plattner Institute of Design, except no one calls it that. They call it the d.school. And Jeremy Utley co-founded a very popular program there called Stanford's Masters of Creativity. They literally take students from all walks of life and put them through programs and activities designed to unlock their creative potential. Pretty amazing! And I visited it as research for my next book, which is going to explore just what the word genius really means and why that's such a hard question to answer. Scientifically, linguistically, culturally, historically. All the big A-L-L-Y words. It's going to be a weird book. I think you're going to love it. But yeah, the d.school is one of a handful of places around the world devoted to systematically, methodologically unlocking people's genius while also teaching them not only how to generate lots of ideas, but how to sort the good ones from the bad ones using the latest tools available. Which now includes AI, ChatGPT, which is the subject of this episode. And the reason you just heard Jeremy say, oh yeah: surprising results. That's what he's referring to, his research into how teams perform while using AI to assist them during brainstorming sessions. And that research delivered something amazing, very unexpected. In case you haven't heard, AI tools like ChatGPT are here, and they are going to change just about every aspect of how we work.
So if that's something you're thinking about, if that's something you're worried about, if that's something you are interested in, and I think you should be perhaps all of those things, this is an episode about some of the earliest research into that topic, which was just published. Here's Jeremy and his research partner, Kian Gohar, talking about why they wanted to do the study we're going to explore in this episode.

[00:04:17] Jeremy Utley: Both Kian and I, when ChatGPT came out, we started playing, and because of the roles we played advising organizations and building capacity around innovation, we started hearing kind of murmurs in our respective networks around: what are the implications for innovation? What are the implications for creativity? Is ideation dead? You know, all these kinds of interesting questions. And the truth is we had hunches ourselves, but we didn't really have anything more than a hunch. We had our own personal, call it anecdotal, experience, but not much more. For me, the question was, what can we demonstrate apart from kind of anecdotal observation, and what kinds of claims can we make about the impact of generative AI on the way teams work when they're trying to solve problems? But I've got to ask Kian, actually, because this whole project was his idea and I'm dying to know. Kian, I'll just put you on the spot. What were you thinking when you concocted this crazy research project in your mind?

[00:05:18] Kian Gohar: Um, it was a crazy idea. It was the beginning of 2023, January. In my work, I help teams become high performing through solving problems more collaboratively, through exercises and practices and behavioral changes that allow them over time to get better at what they do. And when ChatGPT became publicly available, I was really interested in exploring what happens to the world of problem solving, and how do teams collaborate when all of a sudden you have this other tool that can help come up with ideas for problems that your team may have. And so I was really interested in exploring the impact of generative AI on a team basis: how do teams use it, how will this improve their problem-solving capability, how will it make them feel, how will it make them a higher performing team when they have access to these kinds of exponential digital tools that they've never had before?

[00:06:20] David McRaney: So they did the study. And based on the results, Jeremy and Kian have developed a whole paradigm for getting the most out of team-based, AI-assisted ideation. It involves Taylor Swift, the Einstellung effect, GPT prompts, cognitive biases. We'll get into all of that in just a moment. You'll hear all the details, which have a really interesting You Are Not So Smart angle that I'm eager to tell you about, and you'll get some incredibly useful, science-based, actionable advice. But first, because this is relevant to what you're about to hear, you should know that Jeremy just launched a new podcast called Beyond the Prompt. The premise is that this is a show about how professionals around the world are using AI tools like ChatGPT at their companies right now, in their work. It's hosted by Jeremy and Henrik Werdelin, who is an entrepreneur known for starting BarkBox, a monthly dog toy and treat box subscription service. He also founded PreHype, a venture-building R&D group in New York City. Their show features conversations with experts about how to leverage AI to accelerate, in a word, business. And since Jeremy and his research partner, Kian Gohar, just published a fresh scientific paper detailing their study into how to do just that, I sat down with all three of them, Jeremy, Henrik, and Kian, to record an episode about some surprising psychology they discovered that we will all have to take into account when using ChatGPT and other AI tools. Here is everyone introducing themselves, starting with Jeremy Utley.

[00:08:05] Jeremy Utley: I've spent the last 15 years teaching at Stanford's d.school, wrote what I thought would be the world's greatest book on brainstorming and idea generation. That was published one month before ChatGPT came out, which I consider to be the equivalent of writing the world's greatest book on retail prior to the invention of the internet. Which is to say, the sands upon which my expertise is built are shifting rapidly. So about a month after ChatGPT came out, I left my operational responsibilities at Stanford to go all in exploring the implications of AI on innovation and entrepreneurship and organizations.

[00:08:47] David McRaney: And here is Henrik Werdelin.

[00:08:49] Henrik Werdelin: Uh, I'm an entrepreneur. I've been building companies for most of my career, and I'm very interested in, I guess, the art and science of how you build from scratch and how you make it a little bit less stressful to run. As for AI, I have had access to OpenAI and a bunch of the other foundational platforms for a few years. And I'm very intrigued by both applied entrepreneurship but also applied AI. So a little bit less of the bits and bobs and a little bit more on, like, how do you actually get humans to use it?

[00:09:22] David McRaney: And finally, Kian Gohar.

[00:09:24] Kian Gohar: Yeah, sure. Hey gentlemen, great to meet you all. Kian Gohar. I live in LA. I run a leadership development firm that helps organizations with high-performing team behaviors. I wrote a book that came out last year on the future of work, based on best practices that the most high-performing teams in the pandemic deployed and lessons learned from that. Formerly, I was an executive director at Singularity in Silicon Valley for many years, and also an executive director at the XPRIZE here in LA, designing moonshots to solve big challenges for humanity.

[00:09:58] David McRaney: Entrepreneurs, business, science, academia. So while recording all together, somewhere along the way, we brainstormed a bit, which, you know, makes this meta already. And we decided that I would make an episode about this study for You Are Not So Smart, and they would adapt that episode for their podcast, Beyond the Prompt. It's all very meta. So, let's get into it. AI-assisted ideation. The Taylor Swift method. The Einstellung effect. Before we get into just what all of that is, and what Jeremy and Kian found in their research, let's take a moment to let them explain: what exactly is ideation?

[00:10:47] Kian Gohar: Ideation, I think, is developing effective possible solutions to a problem. And teams can define that however they want, but coming together, virtually or in person or in the metaverse, to figure out what are the different possible ways a problem can be solved, with different modalities.

[00:11:09] David McRaney: And in my mind, tell me if I'm wrong, like, it's coming up with good ideas.

[00:11:13] Jeremy Utley: You're wrong. You're wrong. Stop. Stop. No. No. Okay. Go. This drives me crazy. Thank you, David, for the opportunity to get on my high horse for a second. "Good." Why "good"? Why introduce the concept of quality? And that's what everybody does, right? Everybody says ideation is coming up with good ideas, but the problem is "good." I don't think quality has any bearing on the conversation at the beginning. And effective ideation, insofar as ideation is effective problem solving, is generating ideas without regard to quality. And one of the key limiting factors for most teams and individuals who are trying to engage in effective problem solving is their fixation on "good."

[00:11:56] David McRaney: I love this. I love it. Effective in this scenario doesn't necessarily mean a commitment to the greatest ideas ever, as in: let's put our heads together until we come up with the greatest idea that's ever been had about this particular project or problem. In some of the materials in which you described the study, you had all these beautiful quotes. I think it's Linus Pauling, I've got a note here: to have a good idea, you need to have a lot of ideas. Keith Simonton, Mensa Lifetime Achievement Award winner, said quantity is the single greatest predictor of quality. Beautiful. I'm quoting things that you've sent me. You know, the more willing you are to have a bad idea, the more likely you are to have a good one. But what about this Taylor Swift thing? Hey, Kian, what did Taylor Swift have to say about this concept?

[00:12:40] Kian Gohar: Oh, Taylor. Uh, she is obviously the master of creativity. You know, when she was accepting her award at the iHeartRadio Music Awards this year for most innovative artist, she comes up and she says: I want to make all my fans aware that for the few amazing song ideas that I've developed, I've also had hundreds and thousands of terrible ideas. It allowed me to ultimately get to the few thousand good ideas that I've had that have gotten me on the stage. So, you know, the master of creativity says to her fans: don't be afraid of having lots of bad ideas, because the bad ideas are just cousins of a better idea on your next iteration.

[00:13:27] David McRaney: Okay, so that's ideation. And this is something people can do in groups. And this kind of group brainstorming works better when bad ideas are totally welcome. In fact, permission to have bad ideas means you can work in stages, so that sorting the good from the bad can occur down the line. So, how would that work when ChatGPT enters the mix, when teams are directed to use it as a tool in that process? This was the major question Jeremy and Kian set out to answer. So to do that, they got lots and lots of people at real companies whose real jobs are to come up with innovative solutions to real problems. They put those people into lots and lots of teams, some using ChatGPT to help, some not, and then they compared and contrasted the outcomes of their work to collect lots and lots of new, never-before-quantified evidence. See, prior to this research, no one had looked into how teams might use ChatGPT. Not scientifically. The hypothesis, the general assumption, was that if you give a group of people generative AI, it would, of course, lead to a revolution in corporate innovation. And that would lead to new automated workflows, vastly increased productivity, supercharged brainstorming, that sort of thing. But that is not what happened. Here's how the study went, as I understand it, and you fill in my gaps here. You're like: hey, here are some actual challenges that businesses are facing. How do we improve customer service in this particular division? How do I develop this new product? I need some new internal training resources. These are problems for which people are looking for some sort of solution, or ideas about how to work on them. And you hand these to groups, divided into some that are using AI and some that are not using AI to ideate, to brainstorm. So starting there, let's imagine that we're talking about this group not using AI, this group using AI.
You can go back and forth, Jeremy and Kian. What did you see, starting with the groups that didn't use it? What kind of things did they do in this situation? And the groups that did use it, what did they do in that situation? Well, importantly,

[00:15:53] Jeremy Utley: the groups didn't know of one another's existence, right? So part of the research is you can't let people know the jig, right? So as far as problem solvers are concerned, they've been invited to show up to a problem solving session. And that's why the problem was so important. It had to be relevant to the organization, not only because we wanted to study the impact of AI on problem solving in a meaningful context, not theoretical problems but real, practical problems that businesses are facing, but also because that's what it requires to get human beings engaged with a problem solving exercise. They go: oh yeah, our internal training materials do stink, we do need to improve those. Or: oh yeah, we have been wondering whether we want to enter this kind of adjacent market, and how would we do that? Right. So it's not only important for our study purposes that the problem be meaningful. It was also important to participants to say: hey, I'm going to spend a couple of hours of my life working with other people in the company, I want to feel like this is going to make a difference to the company. Right? So as far as they're concerned, they're not part of a study. As far as they're concerned, they're part of a problem solving exercise, right? And we scheduled them such that they didn't know of one another's existence, and we didn't tell either group of the other's existence, right? So the non-AI group got a world-class brainstorming activity. Kian and I happen to be world-class innovation facilitators, right? So they got a premier brainstorming activity. And the AI teams got a premier brainstorming activity and were also given access to generative AI tools and a brief primer on how to use them. Right. But in both cases, importantly, I just wanted to state that neither one knew, one, that the other group existed, or two, that we were studying how performance differs.
Now, as far as both of them are concerned, they've been invited to a problem solving exercise. It seems relevant to the business, and they're willing to, you know, devote a couple of hours of enthusiastic engagement for the sake of improving business outcomes. That's kind of like the starting point.

[00:17:51] Kian Gohar: I would also add that what we found was interesting: teams who didn't have AI brainstormed like how you and I think about coming together and developing ideas for a problem. You know, we are in person or virtual, and we whiteboard it out and we come up with ideas and we prioritize them. It's just like we've done it for decades. What we found was, with teams that had access to AI, they had different approaches to how they used it. Some teams were totally quiet. Like, they basically had resting AI face, because they were staring into the computer with ChatGPT, trying to come up with solutions on their own, and they weren't really talking to each other, even though we were right there.

[00:18:31] Jeremy Utley: It was like study hall. It was hysterical. And "ChatGPT face" is a phrase that Kian and I came up with, because you watch these people while they're in the session, and it's like they're oblivious to one another's existence. It's kind of fascinating. Yeah.

[00:18:43] Kian Gohar: So there was that modality, and then there were teams who were much more communicative with each other during the process of generating ideas with ChatGPT. And so just exploring how some teams are super quiet and solitary, and that depends on the dynamics and individuals and sort of who takes the lead and how they structure the exercise, versus teams who organically are much more iterative with each other and talk to each other and say: oh, this is what ChatGPT said, what do you think about this? And then they build on top of that. So being able to observe the two different kinds of styles was very different from the control group, which didn't have access to AI and did brainstorming like how we've done it for decades.

[00:19:30] David McRaney: We've got the study, we've got the idea of it, we've got these two different groups. The control group, they pretty much did the same sort of thing that people do, but the AI groups are doing all sorts of different stuff. And what this makes me feel like, as I'm reading the study, is: oh yeah, there's going to be such a cool takeaway at the end, which is these people with this incredible tool, putting hundreds of ideas together in all these different marvelous ways. And then, no. I didn't get the outcome I thought I was going to get. I'm not going to say it for you, because it'd be more interesting to hear you put it out there. What did you discover about the quality of AI-assisted groups versus non-AI-assisted groups in the domain of ideation, please?

[00:20:10] Jeremy Utley: You're exactly right, David. When we first started seeing results flow in, my first thought was: oh no, oh no. And actually, it's a good sign as an academic researcher when you get the data back and you think, oh no, because then you wait a few beats. And I remember that first day, Kian and I are on the phone looking at stuff and I go: oh, yeah. Right. Because at that point you realize we had assumptions about what we were expecting. Theoretically speaking, and you read a lot of the research right now, it's like the theoretical ceiling is very high for AI-assisted teams. And so I'm expecting, it's like, the question was how many multiples more ideas are AI-assisted teams generating? How much broader is the divergence of possibilities they're generating? To me, it was a question of what's the multiple, right? What's the expansion factor? And then to see: AI-assisted teams aren't generating more ideas. Often, in fact, they're generating fewer, substantially fewer. Furthermore, they aren't generating better ideas. But what was wild is they weren't generating worse ideas either. They were generating a modest amount of ordinary material, and that flies in the face of all of our expectations. I thought we were going to conduct a research study to talk about how much better AI-assisted teams do, how much broader their thinking is. And it just wasn't the case. Now, there are times where, as we've mentioned, AI-assisted teams do outperform. But that comes down to a particular orientation towards the technology, and we can dig into that. But the fundamental finding, it was one of those oh-no moments that really helped us spark a pretty interesting insight.

[00:21:56] Kian Gohar: Yeah. There were moments when Jeremy was like: Kian, this is crazy, we're doing this wrong. And I said: no, let's just wait and see what the results come out to be. And as we saw more and more of the teams develop the same kind of results, it became obvious to us that yes, there are limitations in how most humans use AI to get more ideas and better quality ideas, but the teams that did really, really well did it differently. And so this became very clear to us: this is not just a technology limitation. Rather, it is actually a limitation of how humans expect the technology to help us solve the problem. And that requires us to change our behaviors, our biases, and how we work going forward with man and machine.

[00:22:46] Henrik Werdelin: I mean, one of the things that's so fascinating about the study is that AI and the way that we interface with AI is such a seductive, easy interface.

[00:22:58] David McRaney: Just to remind you, that's Henrik Werdelin, co-host of Beyond the Prompt.

[00:23:01] Henrik Werdelin: that you kind of assume that you can't really do anything wrong, right? Like, it's a little bit like brainstorming assisted by whiteboards or Post-it notes. How can you not use these in the correct way? And it's just such a mind-blowing thing that, because it's seemingly so seductively easy, you kind of don't even realize that you could do it in a wrong way.

[00:23:26] David McRaney: So they have these corporate innovation teams, lots of them, several groups, and they sit them down in groups and have them engage in collaborative problem solving on the sorts of things these sorts of teams usually work on. While at the same time, other groups perform the same tasks in the same way without the help of ChatGPT. And they measured not only how many ideas these groups produced, but how good their ideas were, how clever and useful their solutions were. And they did that by asking, as they put it, the problem owners, the person or persons whose problem the team was trying to solve. And those people ranked the ideas as A, B, C, or D in quality. And what they found was the AI-assisted teams tended to underproduce a moderate stack of C-quality ideas. The solutions were mid: okay, not great and not terrible. Teams using AI converged on average-quality solutions. And this is what makes this such a great You Are Not So Smart kind of study: when asked to rate the ideas, the people giving out the grades, unaware of which had been AI-assisted and which had not, assumed the few great ideas coming their way must have been the ones produced by the AI-assisted teams, when they very much were not. As one grader of ideas in the study said, quote, I believed the same lie. I thought the AI-assisted group had more and better ideas, end quote. So what accounts for these results? Well, according to Jeremy and Kian, a pernicious cognitive bias was at play: the good old Einstellung effect. I can wax poetic about this for a minute, but I'd love to hear your take before I start talking a little bit about what this even is. I will just contribute that Einstellung means setting, or attitude. This is something that people do when they have done things kind of, sort of like that before. But go farther with this idea, if you will, Jeremy and Kian.

[00:25:48] Jeremy Utley: You know, you saying that they've done things like this before is actually a really helpful addendum. Chat is such a natural interface. It's almost like the deck is stacked against us as humans in collaborating with AI, because it seems like so much other stuff we've done before, which is exactly, as you know, David, what Abraham and Edith Luchins found back in 1942 with the water jug experiment. Yeah. Yeah.

[00:26:14] David McRaney: Okay. Let me drop in here to explain the water jug experiment from the 1940s. Subjects were split into two groups. One solved a series of water jug puzzles, and the participants would learn that all of them had pretty much the same solution, with many steps. The other group didn't do any puzzles like that beforehand, and they served as a control. Then, both groups were given a new kind of puzzle that required a different method of problem solving from the kind the first group had learned, though you could still use that previous method; it just would take forever. Long story short, the control group tended to settle on the simplest possible solution to the problem, while the other group tended to settle on the more complex, laborious solutions that required more steps, because that was the sort of problem solving they had learned from the previous puzzles. They had formed a cognitive bias, and they were biased in favor of using familiar methods rather than seeking out innovative approaches. Oh, and if you'd like to try to solve one of those puzzles, here's an example. You start with three jars. One can hold 8 gallons, another 5 gallons, and another 3 gallons: 8, 5, 3. Then imagine you fill the 8-gallon jar with water, and your task is to figure out how, only using these jars, pouring water back and forth between them, you can split that so there are exactly 4 gallons of water in the 8-gallon jug and 4 gallons of water in the 5-gallon. Feel free to pause here and try to figure that out. What does that have to do with the study? Well, Jeremy and Kian found that the groups who used ChatGPT to help come up with ideas tended to just use it like Google. They didn't leverage its abilities to chat, to iterate, to serve as devil's advocate. They didn't engage in any kind of back and forth, the sort of thing ChatGPT offers that would have been very useful in coming up with ideas and solutions to problems.
Instead, they tended to stick with its early, often mediocre ideas and answers. In short, they only scratched the surface of its potential and thus did not, though they could have, outperform the other groups.
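As an aside, the 8-5-3 jug puzzle David poses can be solved mechanically with a breadth-first search over legal pours. Here is a minimal Python sketch; the function name and state encoding are illustrative choices of mine, not anything from the episode:

```python
from collections import deque

def water_jug_solution(capacities=(8, 5, 3), start=(8, 0, 0), goal=(4, 4, 0)):
    """Breadth-first search over pour moves; returns the shortest pour sequence.

    States are tuples of water volumes per jug. A move (i, j, amount) pours
    from jug i into jug j until jug i is empty or jug j is full.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for i in range(len(state)):
            for j in range(len(state)):
                if i == j or state[i] == 0:
                    continue
                # pour as much as fits: until source empties or target fills
                amount = min(state[i], capacities[j] - state[j])
                if amount == 0:
                    continue
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(i, j, amount)]))
    return None  # goal unreachable
```

Calling `water_jug_solution()` returns the shortest list of `(from_jug, to_jug, amount)` moves; for this 8-5-3 instance a seven-pour solution exists, so the search comes back quickly.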

[00:28:47] Jeremy Utley: When human beings are given a task that seems like something they've done before, they end up settling. It's called the Einstellung effect, which is basically: they cease to search for better solutions. Even when a better solution is possible, and even when subjects are told that there's a better solution, what researchers at Oxford have demonstrated more recently with chess masters is they fixate on the solution that fits their established paradigm rather than even looking for the better solution. This is the Einstellung effect. Duncker studied it. Many do. But I love that; I'd never thought about that, that even the chat interface itself lends itself to this sense of: I've been here before. And the more someone feels I've been here before, the more in danger they are, paradoxically, of falling prey to the bias.

[00:29:38] David McRaney: Yeah, I've been here before. I've used Google to do things. I'll just use this as super Google. And that blows me away, because in all the studies you mentioned, people will do this even at very high functioning levels. The chess masters, mathematicians, and physicists will do this. They'll have a way that they have solved problems in the past that is laborious and difficult and requires an all-day sort of thing at the board. And then there will be a much simpler, easier, will-not-take-all-day solution to the thing, and they can know about that, and they're like: yeah, I'm still going to do it the way I usually do it, because that's safer to me. Hey, what do you have to add on this, Kian, before we move forward?

[00:30:16] Kian Gohar: You said, you know, everybody knows how to use Google. And I think actually part of the problem with the ChatGPT interface is that it does look just like Google: the search box, ask me a question. And that's the problem, because people use it like they would use Google, or like Wikipedia, and that's actually not the way it should be used. One of the CEOs of the companies that we partner with had this great insight. He said: when it comes to chatbots, the emphasis belongs on the chat, not the bot. And so we have to develop conversational interfaces that encourage us to have conversations rather than asking it a question like you would Google. And I think we're still in the early stages of designing for that.

[00:30:59] Henrik Werdelin: We've all used bots before, and they tend to suck. And so I wonder, you were mentioning all these studies, if people might even go into it expecting what they have always expected. I don't know how many times we've been on those kinds of chatbots online where you kind of have to almost guess the exact phrase that the engineer had put in there. So do you think there might even be kind of a built-in bias against getting a really, really good answer, because we're so used to chatbots not being very good?

[00:31:36] Jeremy Utley: I think it's a brilliant point. One thing I took from what Kian just said is that we've got to become fluent AI conversationalists. That's the capability gap that's got to be closed here. And by way of an anecdote, Henrik, I think it may address your observation. I feel that what's required to become an AI conversationalist, so to speak, is actually a personal epiphany experience, a deeply personal experience. Kian and I were at one of the largest financial services companies on the West Coast, and we asked the head of innovation, so what have you been using ChatGPT for? This person said they hadn't used it yet outside of our study, which is mind-blowing. They were kind of imagining a future project: I'm planning on using it because I know about this innovation project. Contrast that, by the way, with a personal story, because I think it really illustrates the point. My grandma is approaching her nineties. I was with her over Thanksgiving, and she was saying, hey, you're doing all this ChatGPT stuff, what is this technology? And I sat there thinking for a second and said, okay, what's a personal, emotional question you'd want to ask a friend about? And she goes, I thought this was technology. I said, just bear with me. Just humor me for a second, granny. We're in the car on a four-hour drive, so you've got nothing but time. And she said, well, I'd been wondering about when it would be appropriate to move into an assisted living facility. And first of all, I'm like, whoa, she's never asked me that. Okay, so honor the moment, this is very significant. And I said, okay, let's just use this. I want to use the technology to show you about it rather than tell you about it.
I opened up the chat interface with the, you know, whisper-like voice, and I just said, hey, we're trying to make a decision about whether we should move into assisted living. Would you ask us three or four questions to get a sense for where we're at before you provide any recommendations? And immediately it responds: sure, Jeremy, I'd be happy to help you with this decision. First question, can you tell me about any changes to your mobility recently? And I looked at granny, I'm driving, and I just said, just talk to it. And she's like, well, no, for the most part we're still getting around. We love going to the gym in the morning and papa loves to golf. And she's just kind of rambling, right, as a human does. And I said, that's good, that's good. So I just hit the upload button, and then it's, great, thanks so much. Next question: can you tell me about the caregiver relationships in your life? And I said, granny, answer. Yeah, um, we don't really have any caregivers. I mean, my daughter comes up from Dallas every couple months, but it's more just for visits. So this totally unstructured conversation, this is the premier use case for experiencing ChatGPT, right? A couple more questions come up, and then boom, ChatGPT says, okay, based on this conversation, here are a few thoughts for you to consider. And I handed her my phone and I said, I don't need to read this. This is your conversation. Just take a look at it. And her eyes got wide and she was like, I didn't know computers could do this. I said, yeah, they couldn't, until now. But what's crazy to me, and the reason I mention that story, is that all week long there were these moments where we're at the dinner table, we're in the kitchen, we're doing something.
And she's kind of elbowing me: hey, Jeremy, do you think ChatGPT could help with the recipe? She has all these ideas about, call it, applications in her life. The only way she, as a nearly 90-year-old, can have an imagination about ways to use ChatGPT is because she had this deeply personal epiphany experience. And when Kian and I were standing in that financial services company and the innovation leader who had never used it asked us, what should I do to get started, what I found myself saying, and what I believe is really the answer, is: have an emotional, personal conversation that you would want to have with a friend. Importantly, ask ChatGPT to ask you questions before you get started. And to me, there's something about changing the frame of the conversation. You know, Henrik, to your point about how we've all interacted with dumb chatbots: well, GPT is not a dumb chatbot, but you actually have to frame the conversation in a way that gives it the benefit of the doubt. And I think that's really the challenge for most users: they don't know they've got to give GPT the benefit of the doubt. So having some of those early prompts that empower a user to discover this kind of incredible interaction that's possible, I think that's a key point of friction that's keeping people from realizing the potential.
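Jeremy's "ask me questions before you recommend" move can be sketched programmatically. This is a minimal illustration, assuming a hypothetical `ask_model` call in place of a real chat API; the prompt wording and example answers are illustrative, not taken from the study.

```python
# Sketch of "interview-first" prompting: the model asks before it advises.
# The message format mirrors common chat APIs (role + content dicts);
# the model call itself is left out, since any chat backend would do.

def build_interview_prompt(decision: str, num_questions: int = 3) -> list:
    """Open the chat by reversing the interview: the model asks first."""
    return [
        {"role": "system",
         "content": "You are a thoughtful advisor. Before giving any "
                    "recommendation, interview the user one question at a time."},
        {"role": "user",
         "content": f"We're trying to decide: {decision}. "
                    f"Ask me {num_questions} questions to understand our "
                    "situation before you offer any recommendations."},
    ]

def add_turn(history: list, role: str, content: str) -> list:
    """Append one conversational turn, keeping full history for context."""
    return history + [{"role": role, "content": content}]

history = build_interview_prompt("whether to move into assisted living")
# Each model question and each spoken answer becomes another turn:
history = add_turn(history, "assistant", "Any recent changes to your mobility?")
history = add_turn(history, "user", "No, we still go to the gym most mornings.")
```

The key design choice is that the full turn history travels with every call, so the model's eventual recommendation is grounded in the answers it elicited, not in a single cold prompt.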

[00:36:12] David McRaney: I love this. I did this this week, because I'm working on a book about genius and I keep playing with GPT to see, oh, is there anything in here that's fun to do? One of the things I did recently is I said, okay, have this particular philosopher define this, and what might they say? And it gave a definition. Then: okay, now have this philosopher challenge that, but take it into account and refine it, and it produced theirs. And I did that over and over and over again. At the very end of all that I said, now imagine Wittgenstein, who's the philosopher of defining anything. Look at the entire conversation we've had so far and sum it up in a way that challenges everyone who has already spoken, but also takes that into account. I could iterate that a million times if I wanted to, and it just keeps getting more and more bizarre. It's like one of those AI images where they say, make it more comfortable, make it more comfortable, until eventually there's somebody in a recliner in the cosmos. It was doing that with words. But in that space I'm seeing all these different things that helped me brainstorm. So your study reminded me of this thing that I had already played around with, and I'm very excited that you put it to, well, let's actually scientifically investigate it. I was looking at it thinking, was that a good use of David's time?
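David's iterative persona exercise has a simple loop shape. Here is a hedged sketch of it; `model` is a hypothetical stand-in for any LLM call (the stub below just echoes, to keep the example self-contained), and the persona names are illustrative.

```python
# Sketch of the "each thinker challenges the last" loop: every round feeds
# the full running transcript back in, asking the next persona to challenge
# it while taking it into account, then ends with a Wittgenstein-style
# summary pass over the whole exchange.

def challenge_rounds(model, topic: str, personas: list) -> list:
    transcript = [f"Define: {topic}"]          # seed prompt
    for persona in personas:
        prompt = (f"As {persona}, challenge everything said so far about "
                  f"'{topic}', but take it into account and refine it:\n"
                  + "\n".join(transcript))
        transcript.append(model(prompt))       # one challenge per persona
    # Final pass: summarize in a way that challenges every prior speaker.
    transcript.append(model(
        "Sum up the entire conversation so far in a way that challenges "
        "everyone who has spoken, while taking their points into account:\n"
        + "\n".join(transcript)))
    return transcript

stub = lambda prompt: f"[reply to {len(prompt)} chars of prompt]"
rounds = challenge_rounds(stub, "genius", ["Kant", "Nietzsche", "Wittgenstein"])
```

Because the whole transcript is re-sent each round, later personas argue with the accumulated state rather than the original question, which is what produces the escalating "more and more bizarre" effect David describes.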

[00:37:35] Jeremy Utley: Even the way you're talking about using ChatGPT, we found, is mostly foreign to folks. Most folks interface with ChatGPT like a Google search query: a single query and a single result. And most of the time they go, it wasn't that good, oh, the technology is not there, and they kind of dismiss it. The vast majority of folks in teams have a cursory interaction where they enter a largely uncontextualized prompt and receive a very mediocre response. And they go, I knew it wasn't that good. Right? Even the description of you being almost recursive, going back and going back and going back, that's a behavior we saw that is exceptional, and it is the exception, but it delivers better results. What's wild is if you cross the performance data with sentiment data. I'm saying nothing right now about comparing with AI versus without AI, setting that aside for a moment, which is interesting. Even within the AI-assisted groups, the teams that approached AI like an oracle loved it, felt great, and they underperformed. The teams that approached AI like a conversation partner overperformed, but they didn't like it nearly as much. In fact, it felt more like work than magic. And so there's this fascinating dichotomy, if you will. If you want it to feel like magic, chances are you're going to derive a fraction of the value of the interaction. But if you want to derive maximum value, you actually have to invest effort and thought into the conversation, which all of a sudden doesn't really feel magical. It just feels like a different kind of work. And the truth is, going back to Einstellung or satisficing or however you want to slice it, most folks aren't that interested. They don't care enough to push.
And to me, what strikes me about your approach is you care so deeply that you're pushing and you're examining and you're cross-examining. What we observe in many organizations is that folks provoke a cursory interaction and they're satisfied with a cursory answer, in a sense because that sense of care, and also an understanding of the way effective interactions unfold, isn't really there.

[00:39:56] Kian Gohar: People, we found, are satisfied with a good-enough answer. And to really get better answers, to Jeremy's point, you have to rethink your workflow and change how you as humans work together, in person, virtually, and with generative AI. And reframing your behaviors takes work. It's like going to the gym and working out, and that's hard, and a lot of, let's just say, middle-aged adults don't want to work out and reframe and rethink how they have done work for decades. So that's really, really hard. And that is part of the work that's required, not just in learning how to use GPT, but in the human behavior: how do we work with it and with each other? There's amazing other research that came out this year showing that large language models can theoretically improve ideation and creativity very, very substantially. So there's a high ceiling on how they can help us with ideation. What we found in our research was that, yes, there's that theoretical ceiling, but when you actually apply it on teams in actual organizations with real problems, they land far below it in terms of how they actually use these technologies to develop better solutions. So there's this big gap between what's possible and what's reality. And in order for us to close this gap, we've got to change how we think, how we work, how we collaborate on a team, and how we use this technology. All of that is absolutely brand new to humans within the last year.

[00:41:29] Henrik Werdelin: Can I just ask: the study showed that when people posed a problem to ChatGPT and didn't quiz it, because they were lazy and just accepted the first answer, they didn't perform as well as other people. Is that the takeaway?

[00:41:48] Jeremy Utley: Exactly, and yet it's not quite as simple as that. Teams that don't have AI spend an hour working really hard exploring a huge solution space. They have terrible ideas and they have great ideas. Teams that do have AI, instead of spending an hour exploring a broad variety of ideas, really quickly get to good-enough ideas, call it in 10 minutes. But what do they do with the next 50 minutes? They don't push farther. They go, you guys want to go get coffee? Now, when someone heard that, they asked me a question: if they only had a B idea, why didn't they keep pushing? And what I realized was participants in the middle of a study don't know the grade; the grading doesn't happen until later. You don't know if good enough is an A or a B or a C, right? It feels good, and you go, wow, none of us would have thought of that in 10 minutes, it's pretty great, I think we're mostly done. But that decision is made without respect to the broader field of possibility. The bar is set sufficiently high by early output from ChatGPT, whereas with a human team, maybe there's hemming and hawing and struggling and fits and starts and good and bad, but what we found is the human teams generate a much broader variety in terms of quality of ideas, and many times they actually generate more ideas than teams with generative AI. Dude, all you have to do is say, give me a thousand ideas, and it will do it. You can literally just prompt it for volume and variation. The problem is teams don't approach problem solving that way.

[00:43:22] Henrik Werdelin: And probably you also have the built-in problem with an LLM that good is average, almost by definition. At least what I've found is that if you ask it, for example, give me a really shitty idea, it'll come up with something where it says, hey, this would be highly illegal, so it would be a shitty idea. And you're like, wait a minute, it's not that illegal, and you could actually change it a little bit. And so you can actually make a pretty good idea by prompting it out at the perimeters of its ideation space.

[00:43:51] Kian Gohar: And one of the things that we found is that sometimes good enough is okay, depending on the context and the problem you're trying to solve for. If the problem is not mission critical and you're optimizing for speed, then collaborating with AI to come up with a solution in 10 minutes might be good enough, and then you move on. But if the problem you're trying to solve is mission critical and you can't just have a good-enough answer, you have to have the right answer or the best answer, that's when AI has a limitation, in that it oftentimes allows human teams to settle for just good-enough answers. And that's really not good enough.

[00:44:32] Henrik Werdelin: Quick question on the premise of ideas versus problems. For somebody who deals in entrepreneurship all the time, one of our core beliefs is that ideas come way after having a good problem to solve. So as you were designing the study and thinking about how to phrase the questions, how much were you looking at problem identification versus ideation?

[00:44:56] Jeremy Utley: Yeah. We partnered with both US organizations and European organizations, and one of our European partners said something to me in the past: oftentimes the problem is the problem. When we're doing innovation in organizations, the problem is actually where we need to be focused. So, Henrik, you're exactly right. One of the things we did in the research methodology is, at one point in the exercise, we actually forced folks to redefine the problem in as many ways as possible. They were given a problem from a problem owner in the organization, they were given a prompt to generate some solutions through a series of exercises, and then at one point we had them almost backtrack and say, what are all the different ways we can view this problem? So it's not an exact analog to what you're saying from an entrepreneurial perspective, where the individual is often the one digging in, validating the problem, getting familiar with users, pain points, et cetera. We didn't have weeks and weeks for folks to undertake problem solving; it was more of a short sprint session. But even in that short sprint, we totally agree with your belief that the problem is really critical, and we carved out deliberate time in the study to make sure that folks explored other problems as a means of generating other solutions.

[00:46:17] Henrik Werdelin: Because one thing AI does seem to be good at: sometimes it can be difficult to get out of your own head and look at the problem from a different vantage point. But the nice thing about role playing with AI is that you can get it to play any character that you want. Thomas Wedell-Wedellsborg wrote this cool book called What's Your Problem? and it really digs into this reframing exercise. I think the point is that it's very difficult to do just by yourself, but by doing what David did, inventing all these different personas and asking, how do we look at the problem from this angle, you often redefine the problem so that the ideation becomes easier, because you might no longer be solving a problem that's very difficult to solve.

[00:47:00] Kian Gohar: And one of the things we actually found is that generative AI can be very useful for assessing those assumptions. We're having a conversation today about how it's difficult to figure out what the problem is, and that's because we have particular biases about framing that context or that problem. So one of the great possibilities for generative AI is that it allows us to think outside our own assumptions. We can use it to ask: what are some counterarguments, what are some alternatives, what are some potential scenarios that I didn't think about that would allow us to reframe what the problem is? Because, again, we're biased going into whatever that context is to begin with.

[00:47:41] Henrik Werdelin: This AI movement is very similar to the mid-nineties, when a lot of us started to use the internet, right? I started way back before the browser, but then suddenly the browser materialized, and I remember showing my mom the browser, and she's like, what can I use it for? And I'm like, you can use it for anything, right? And she goes, well, can I check if there's a book available in the library? I'm like, sure. It's probably not the thing that's going to transform the world, but it's so fascinating. I wonder, David, you've studied this, people's psyches. When you need humans to talk to humans in a different way, or learn to talk to humans in different ways, are there tricks from human-to-human interaction that we could use for getting to know this new thing?

[00:48:34] David McRaney: You have two big takeaways in the study, which I thought were really amazing. One was: don't look for answers, try to have conversations. This seems to lead to better outcomes, in this particular ideation space for sure. And we're so used to doing it the other way, like, I'm looking for one very specific outcome. I love that one of your big takeaways in the study is: try to have a conversation with this thing and you would be amazed at what happens. And this is also true when you're facing difficult conversations with other human beings. Try to become outcome-independent. And you should be attempting to ask questions that allow the other person to articulate their position on the matter, and in that articulation space, that's where we'll have a better outcome in the conversation. If I'm specifically attempting to get to a very specific goal in a conversation with another person, then if you get there, you've also achieved creating the world that you already thought existed, and you're not going to be surprised by a damn thing. And this also feels bad for the person who's on the other side of the conversation. And I think that's really important. You can do this with GPT. I've done it. It's incredible. You can say, hey, I'm wondering what you think about this, and you can really have one of these strange exchanges. They call it cognitive empathy; NYU coined that as a phrase. I'm sure there's a reason this person is acting, behaving, thinking, feeling in a certain way, and I have to have empathy for the fact that they are motivated reasoners and they have some way of seeing the world. What I'm more interested in is how they arrived at their answers than the actual answers I'm receiving. And if we can get both people on that page, like, oh wow, I've learned something about myself.
This is why I'm arriving at these answers in this way. Everybody levels up in that one conversation. It feels good. I've been astonished at how GPT has biases and will start to exhibit certain human biases in its conversations, but I've also been excited by interacting with it in a way where I get it to introspect. I get it to say, I wonder why you thought of it that way, or I wonder what's leading you to that. And it'll start talking itself out of certain ways of seeing things, sometimes the way a person will. It'll say, well, actually, when you look at it this way... and I'm like, wow, you're a robot. Yeah.

[00:50:51] Jeremy Utley: Okay, so I've got to tell you a couple of fun anecdotes; you just got me riffing here, David. One is, Henrik, your question made me think of something Kevin Kelly told me. Kevin Kelly, you know, is an amazing artist, visionary, technology futurist. I was talking to him about how he uses Midjourney and DALL-E, and I said, what are you going for? And he said, I'm waiting to surprise myself, which I thought was a beautiful description of the goal. David, you said most times we're trying to get what we've already premeditated. What if you realize your goal is not to get what you were looking for, but to get what you didn't know you were looking for? And to that end, one of the things that I've done recently, and Henrik has been involved in this project, is we built this coach, basically, to help people learn how to talk to AI. We've got it inside an organization where there's a pilot group of users who are playing with it, but no one's gotten to the end of the, call it, pre-programmed drills. And I had a question yesterday: what happens when somebody gets to the end? Because we only have so many drills. Ordinarily these drills take meaningful, thoughtful attention, and we imagined somebody would do one a day or every couple of days, right? Basically just drills on how to become a conversationalist. Well, I just kept hitting next: did it, great; did it, great; did it, great. And I got to the end, and I was texting one of our partners going, what's going to happen when we get to the end of this thing? No kidding: the bot keeps creating more drills. And I literally go, dude, my mind just got blown, because it came up with a totally great drill that I never thought of. It's like, what?

[00:52:34] Henrik Werdelin: I'm not sure any of you are familiar with the book Why Greatness Cannot Be Planned. It's an incredible book, but more so, I think, a philosophy. It's obviously written about AI, about how we'll get to artificial general intelligence and how we need to rethink it. But the premise of the book is that instead of pursuing a goal, we should pursue interestingness, and that becomes the stepping stone that leads us into new discoveries; the goal will take care of itself. And it's kind of fascinating, David, what you were just saying, and Jeremy, what you were saying the study showed: if you're not just trying to get to the idea, but trying to uncover new pieces of interestingness, it can lead you to a much better place than if you're just trying to get to the goal.

[00:53:29] David McRaney: Yeah, this is my experience. You can totally use these bots to say, will you please define the word superfluous for me? And it's like, hey, here you go. But you can go from that one prompt to some weird places. Like, why do you suppose that is the word we use for that? And are there other ways of expressing it? And it goes, well, yes. So then: do you think there could be a better way of doing this? And all of a sudden it's brainstorming, like, oh, there might be a better way. And you're like, has anybody else said that? There are so many ways to bounce even out of a pure definition into some bizarre-ass places with this thing. And you'll walk away excited, and you're starting to fountain out on your own, and it's prompting you to. And now you're doing this sort of, oh, we're going to prompt each other, for

[00:54:13] Henrik Werdelin: reverse prompting.

[00:54:14] David McRaney: Yeah, that's been my goal here recently is like, I want this thing to get me to a place where I feel like I just got prompted to ask better questions. It's prompting me.

[00:54:24] Henrik Werdelin: And you were mentioning something earlier about learning how to train. Could you talk a little more about that? When we go to the gym, we probably all know that we have to go, and we end up doing it. I heard this radio show once where they had all these experts and somebody asked, what is the absolute best form of training? And somebody answered, the one you get done, which I thought was kind of nice. And so when it comes to brainstorming, or using ChatGPT as a partner in idea discovery, what is the training methodology that people should apply?

[00:55:06] Kian Gohar: So I love this gym example, because we know how to work out because there are coaches who have done it for decades. Right now we're in a situation where the coaches don't know how to train people, and the coaches have to be re-coached and retrained. That's what's coming out of our research findings. So, to answer the question of how teams should think about using generative AI to come up with more effective solutions to a problem: we found that there is a commonality in terms of how we design innovation and ideation, and we came up with a very basic five-step mnemonic that I think will help people remember how to use generative AI in their team collaboration activities. We call it FIXIT, F I X I T. Very quickly: F stands for having a very focused problem. You don't want to boil the entire ocean. You don't want to ask ChatGPT, hey, how do I improve my sales by 10%? That's just way too broad. You have to be very, very narrow, to ask a question that allows it to open up dialogue around a narrow point. So you want a very focused area to think about. The second part, which I think is really critical, is the I: you have to have individual thought on your own as a human before you go to ChatGPT. So you've got the small, focused problem; it's important for you as a team member to think on your own: what are some solutions I might offer? Because we often have different solutions. The next step is then to go to ChatGPT or a large language model, and that's the X: provide context. You need to give enough information for the chatbot to ask you the right kinds of questions, to have the conversation that helps you get to the right point. If you don't give it enough context, it's just going to give you generic information that's maybe not even good enough, as we've all experienced.

[00:57:02] Jeremy Utley: If I may there, Kian: one thing that's very unexpected, in getting to the conversational dynamic, is that if you're not sure how much context to give, ask ChatGPT. Say, let me know if you need more context to be able to answer well, right? Even that, I think, flies in the face of much of our dogma of, I've got to be the one to decide. Henrik and I interviewed Dan Shipper about custom instructions, and we asked, what if you don't know what to put in your custom instructions? Dan said, ask ChatGPT to interview you about what it should put in its own custom instructions. So, getting back to this idea of context: it's one thing to say, I've got to prepare the perfect dossier that I'm going to upload. It's another thing to say, here's my goal; I can give you any context that you need; can you tell me the pieces of context that would be useful in helping frame this and narrow this appropriately? So really it's way more conversational than we knew even at the time of the paper's writing, probably, Kian.
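The "let the model ask for the context it needs" move Jeremy describes is just a prompt-construction pattern. A minimal sketch, with wording that is illustrative rather than quoted from the study:

```python
# Sketch of context-elicitation prompting: state the goal, then invite the
# model to list the context it needs, instead of guessing what to include.

def context_request_prompt(goal: str) -> str:
    """Build a prompt that asks the model to request missing context."""
    return (
        f"My goal: {goal}\n"
        "Before proposing anything, list the specific pieces of context "
        "you would need from me to frame and narrow this well. "
        "Ask follow-up questions if my answers leave gaps."
    )

prompt = context_request_prompt("reduce churn for our subscription box")
```

The same shape works for custom instructions: replace the goal line with "interview me about what you should put in your own custom instructions."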

[00:58:06] Kian Gohar: Yeah, exactly. And once you've given it context, the next part, the part that we've been learning about, is that you have to have an iterative conversation with it. As opposed to going to it like an encyclopedia, you want to treat it like a friend or a colleague and have a back-and-forth conversation. This is really, really critical. Once you've done this iterative conversation, then you go to the fifth step, which is the team. You bring this to the team and say, okay, I've had an individual interaction with ChatGPT around this particular problem; here are the particular solutions or ideas that were developed. You've all done this individually as well. Now, as a team, let's put this on a whiteboard or virtual whiteboard and prioritize what we've seen, given the resources that we have and the time that we have to solve this problem. So it starts with a very narrow problem; then thinking about it on your own; then giving the model context so that it can have the right kinds of conversations with you, iteratively; and then going to the team and having a prioritization exercise, where the team decides, that's the right thing, we should really focus on that. Or the team will say, this is interesting, we found an interesting pathway, let's do another loop and iterate even further. So let's go back. We found this five-step process called FIXIT to be an elegant solution for humans to start thinking about how they can bring AI in as a copilot on their team, to come up with more effective solutions to the problems that they face. [00:59:38] David McRaney: FIXIT. You can go to howtofixit.ai to see the whole deal. And here it is in summary. F: set a focused problem; be precise rather than abstract. I: individual ideation first; safeguard individual human creativity. X: provide your context; train the AI.
I: have interactive conversations; AI as a collaborative thought partner. And T: team incubation; facilitate your decision making. Each one of those has several steps and lots of explanation at howtofixit.ai. I love the idea of coming to this and not trying to answer the question, trying to get it to show you the answer I didn't know I was even looking for, or just to be surprised by it. That's a great use of a new technology. It's a new technology; what if we did new things all the way around with this guy? I love it. I had that epiphany you were discussing earlier, Jeremy. My epiphany was asking GPT about epiphanies. That's when I realized, oh, this is a powerful tool. I just pulled it up; I have it over here. I talked for like 15 minutes into speech-to-text about tiramisu moments, which is an idea I had for a Substack post, or maybe something I'd like to write about. Here's what I'm thinking: the first time I had tiramisu, I didn't know tiramisu existed. I was at a conference and they brought it out as a dessert, and I was like, damn, this is good, what is this? And everybody was like, uh, tiramisu, tiramisu. And I was like, oh yeah, yeah, tiramisu, I love this stuff. But I had just learned it existed. I had an accommodation moment, as they'd say in psychology. I now entered a universe in which tiramisu exists; before that, I wasn't in that universe. And I was like, I bet there's a way to write about this that would be fun. So I told this story to GPT with a bunch of other things, and then I said, could you now give me, from the perspective of a bunch of different famous thinkers, what they would say about this? And then I asked it to have them argue with each other and disagree. And it worked.
From that point forward, every time I gave it one of these ideas, I asked it: hey, have all these different famous people disagree with me; show me where they would be like, maybe you haven't seen it this way. And that started becoming a very powerful way to get started on something. I found that it was like an accelerant to whether or not I want to pursue something. And it made me more excited about the... You'd think they would squash it? Absolutely not. Every time they disagreed with me, I felt like, ah, okay, now I really know what I want to say about this. And that's been my epiphany moment. I love this. I love that everyone in this conversation has been excited about this. So many conversations about this technology have been like, well, did anybody see Oppenheimer? I do not deny that this is a tool, and a tool can be used for all sorts of things, but I'm excited that people are using it for good things, and we're doing research right now. And your research is in the domain of behavior: how will human beings interact with this, what are the pros and cons, and how can we get better at it? I commend thee. Thank you.

[01:02:58] Jeremy Utley: Thank you. You know, we're pretty stoked, because we heard from Harvard Business Review that they're going to actually feature the paper as some of the most interesting research that's been conducted in the last publication cycle. So hopefully by the time this episode is out, it will be hitting newsstands as well. But if, for whatever reason, this episode comes out before that, or it doesn't happen, folks can go to howtofixit.ai if they want to pull down the paper, and they can grab some of our early recommendations for teams that want to implement this model: howtofixit.ai. The one thing that just occurred to me as well is, I think, similar to the chatbot phenomenon of "oh, we've been here before," there may be a lack of appreciation of the nuance and the ability that has to be present in the organization, right? So whether it's investing in conversational ability, whether it's having outside facilitation. That's one thing we've seen, actually: teams that have outside facilitators to help them interface with AI tend to perform better. So just like you don't want to approach the ChatGPT window like a Google window, don't approach the task of ideation, to come full circle back to the beginning of the conversation, don't approach the task of ideation now, in this new world of generative AI, like we've been here before. Think about enablement, think about abilities, think about facilitation, and your team is going to get exponentially more benefit than if you just do it like you've always done it.

[01:04:28] David McRaney: That's a great point. Don't get me going; I thought I had already wrapped things up in my mind. Because if you're trying to discuss transportation, and all you've ever had are wagons and horses, your discussion of transportation, the very idea of how to have a discussion about transportation, is limited by that space. And then, when you have all this new technology, a discussion about transportation doesn't even look like a previous discussion about transportation would have. Ideation is a concept that will itself evolve thanks to this new tool. That is a really cool takeaway. I dig that a whole lot.

[01:05:08] Henrik Werdelin: Can I ask a final question, just as we're wrapping up? When do you know that you have the idea? When do you have this kind of contentment moment that David just expressed about the conversation, but as you're ideating?

[01:05:23] Jeremy Utley: Isn't there a book on that, Henrik, I think called The Acorn Method? Oh, you're just flirting.

[01:05:28] Henrik Werdelin: That is my book, which I could recommend reading, but like, no, I don't think that will

[01:05:33] David McRaney: end. No, no, no. Tell me about this acorn thing.

[01:05:37] Henrik Werdelin: The Acorn Method is a book that I wrote about how when you are trying to grow your business by growing new businesses, um, and the thesis is that companies should see themselves as not just becoming a very big tree. And so as you are trying to see, how do you not, how do you make sure that what happens to most of the fortune 500s, which is that the halfway house is getting shorter and shorter, how do you kind of like break that mold? The way that you do it is that you look at the way the trees have evolved over, you know, many hundred thousands years. And you say, well, instead of just trying to grow, grow, grow, we should look at how do we drop small acorns around ourselves and how do we make sure that we make sure that they aspire and becomes maybe a bigger tree than we are. And if you look at the Googles and you look at the apples, that's exactly the methodology they have completely stolen from the world wood web, uh, you know, of trees.

[01:06:38] Jeremy Utley: If you want more wood, drop more acorns. That was my takeaway from Henrik's book, and I didn't mean it tongue in cheek. To answer your question with your own book, Henrik: I think the way you have that epiphany moment is that the job's not done with ideation. The job's done when you implement a desirable, feasible, and profitable solution. And the way you discover which of those works is you drop acorns. You don't put all your eggs in one basket. You try lots of things in a low-resolution, high-velocity kind of manner that's entrepreneurial in nature, that's resourceful, that's resource constrained, et cetera. Over time, you start to see which of those trees grow. So there can be this kind of artificial declaration of victory at the end of an ideation session. The truth is, the next critical capability is around new business development, and the way you learn which of the ideas works is by trying a lot of them as cheaply and as quickly as possible, right? And so it's not that you get the answer really quickly, so much as that you get into the process of piloting and prototyping more, and with more confidence.

[01:07:45] Kian Gohar: I would also say that, you know, an idea has worked when, at the end of the activity or exercise, you've got a smile on your face and it makes you happy, as I have been this entire hour-plus talking to you guys, smiling and grinning, because it's been just very enjoyable. But it's also been a great idea for us to get together and have this conversation. So thank you so much.

[01:08:08] Henrik Werdelin: That's all for today's episode of Beyond the Prompt. If you liked it just a little bit, we'd really appreciate it if you'd share the episode, and maybe go in and rate it and like it on your favorite podcast platform. Until next time, be well.