This episode includes a bonus episode. See below for details.
Join us for an insightful conversation with David Boyle, an expert in audience research and data analytics, who is passionate about leveraging AI, especially language models, to enhance organizational decision-making. Drawing from his extensive background in corporate strategy and media, David reflects on how ChatGPT transformed his approach to research and decision-making. We delve into AI's profound impact across various industries, the crucial role of clear and effective AI prompts, and the iterative process of refining AI-generated outputs. David introduces his framework of the four P's—prep, prompt, process, and proficiency—as key to unlocking AI’s full potential. He also highlights the importance of human oversight in guiding AI to boost both efficiency and creativity, sharing practical applications and success stories from diverse fields.
This episode is followed by a bonus clip, where David uses this interview to do a full walk-through of how he uses AI to extract notes and insights from a meeting he had. It's really worth a listen!
Link to the files that David mentions: https://docs.google.com/document/d/1EpHmpuvb23wvK7zGHkrFt7hftz7ULgv6q5KQ6zpcTIs/edit#heading=h.nvj144pyb5e4
Link to find more from David: https://linktr.ee/david_boyle
David at work: https://www.audiencestrategies.com/
00:00 Introduction to David Boyle
00:38 The Impact of ChatGPT on Corporate Decision Making
02:14 The Surprising Capabilities of AI in Audience Understanding
06:49 The Four P's of Effective AI Utilization
09:51 Real-World Applications and Success Stories
12:12 Overcoming Scepticism and Embracing AI
22:51 Practical Tips for Integrating AI into Daily Tasks
37:16 Refining AI Responses: The CARE Method
38:01 Success Stories: AI in Innovation
40:12 Challenges and Insights in AI Implementation
41:22 Exploring Audience Research with AI
43:28 Final Thoughts and Reflections
📜 Read the transcript for this episode: Transcript of Using AI for Better Decision Making with David Boyle. Episode includes BONUS clip (see notes)
[00:00:00] David Boyle: I'm David Boyle. I'm obsessed by how organizations make better decisions. Usually that's using consumer research and audience understanding, but in the last year, it's been using language models and helping them to be as useful as possible to as many people as possible and as many organizations as possible. And I giggle every day when I find new and exciting use cases still.
[00:00:22] Henrik Werdelin: Thank you so much for coming on.
[00:00:25] Jeremy Utley: I know we want to get to prompt MBA and how you created it and the impact that you're having.
But before we get to prompt MBA, tell us about your background and the expertise that helped you see this opportunity.
[00:00:38] David Boyle: Yeah, so I'm really passionate about decision making in organizations. I started in corporate strategy work and ended up in loads of brands and media entertainment, like the BBC, HarperCollins in book publishing, and EMI Music. I'm really obsessed by how we can make better decisions about products, or about marketing, or about all these different things. And for me, that passion has always manifested in putting audiences at the heart of decision making, like helping people to really and deeply understand their audience, whether that's Snoop Dogg understanding his audience and doing things differently as a result, or whether it's brands like Diesel or Madame Tussauds in London or Nike. And then along came ChatGPT in late November 2022, and really it just stopped me and my colleagues in our tracks, because it was pretty clear pretty quickly that it changes everything about how we do our jobs. And I really mean everything about how we do our jobs. So we were in a fortunate position to realize this and to stop what we were doing to obsess over it, to work out all the ways it could be practically useful for people interested in corporate decision making. And we ended up writing books on this topic. We ended up doing lots of training on this topic and helping lots of people on this topic. So yeah, a bit unexpected.
[00:01:59] Jeremy Utley: Now, you said it was immediately clear to you. I think many people, we stopped in our tracks, but then people go, oh well, we've got the holidays and I've got my family parties, et cetera. What did you see or hear that made you go, hang on, we've got to do a more serious pivot right now?
[00:02:14] David Boyle: It's funny when you look back now, as you can with your ChatGPT logs, you can see what your original first ever prompt was. And for a lot of people, it's like, write a poem or tell me a story. And I'm really ashamed to say that my first ever prompt is: help me understand the audience for this brand and come up with an audience segmentation that brings to life the different audiences. And so being a real nerd and caring about that, I pushed it really hard to start with, and it gave me a great response. I was just blown away by what was possible. And then I think the key from there is systematically trying to apply it to all the things we do every day.
So, understanding audience needs, thinking through different audiences, planning off the back of that: therefore, what product innovation should I do? What marketing should I do? And we've got expertise, so we know how to judge it. And it turns out it's very good at every single one of those steps.
[00:03:08] Henrik Werdelin: As you know, this is also one of the areas of my interest. How did it make you feel, to start with, realizing that it had, you know, what for a long time we considered to be a very human kind of understanding of the audience, realizing obviously it's just predictive text, and so it doesn't necessarily understand the human? Was that something that you had to get your head around first, or was it just: hey, what it spits out is brilliant, so I'm going to just use this?
[00:03:40] David Boyle: I think it's an incredible surprise, and it still is to this day. I still regularly try something new and giggle when it successfully does something. So that's been one of the characteristics of the last year. I think on one level, though, technology has always pushed further and further into human thinking, and you know, in the old days of my field, it used to be people knocking on doors and doing surveys with people face to face, then phones, then the internet, then social media analytics and big data analytics. And done well, those things push further and further into human decision making and get people closer and closer to a decision and give them more and more insight. And this is the natural evolution of that, albeit maybe a radical step. So in many ways, it is doing something that I've always done with technology, but it's just better and quicker and more flexible. Also, it doesn't come with any kind of an instruction manual, so you've got to work it out yourself, right?
[00:04:35] Henrik Werdelin: One kind of provocative thought that I sometimes dance around is that at one point you realize that AI as it is right now might not be brilliant, but neither are most people. And so it kind of speaks a little bit to the regression to the mean, and how we are probably less unique than we thought we were. Because if you take an average of something, it actually becomes a pretty decent thing.
[00:05:03] David Boyle: Well, I think even brilliant people are only brilliant some of the time, and quite a lot of what they do is send not very good emails and write not very good papers and make not very good decisions. Even if they're very, very smart some of the time, we all have moments where we just rush through tasks and get them off our plate, right? And so one of the benefits is that it takes all those tasks you weren't really concentrating on and significantly elevates them closer to your level of brilliance. I think, though, if you don't have a lot of proficiency in the topic that you're working on, you're not very good at guiding the AI and you're not very good at judging the AI. And so you might end up with a response that's somewhat averaged, and that might still be a big step forward for you relative to your competency. I've been lucky to work with singers and songwriters at the absolute top of their game, screenwriters in Hollywood at the top of their game, psychotherapists who have practiced for years and years at the top of their game, chief marketing officers, and many other people across the spectrum. If you're very, very good at something, it can still help you to be better and quicker and clearer in what it is that you do. That's been one of my great joys: sitting with those experts, using my ability to prompt and to nudge the AI and using their expertise to both set an appropriate problem and to judge its response, and watching their eyes light up when they realize it can help a genius to be better and quicker and have more fun.
[00:06:32] Jeremy Utley: So there's kind of two things I want to hear. Help a genius: I feel like that's great, and I'd love an example of that, and maybe we can come back to it, because you said something that I don't want to forget. You used this phrase, guard the AI, and you have to use your expertise to guard the AI. Can you tell us, what does that mean?
[00:06:49] David Boyle: Yeah, well, I think the title of the podcast is absolutely brilliant, Beyond the Prompt, because a prompt is important, but it's only one of what I call the four P's that you need. Far too many people think that if you give me a prompt, then I'll be great at using AI. That's not how it works. Prompting is just one of four P's that I think are critical to using AI. So the first is prep, which is that you have to come knowing what it is that you want, knowing what good looks like, and bring some materials, some context to paste into your prompt. Maybe some brand guidelines or some audience insights that you've gathered by some other means. Then prompting is really important; we can talk about that. But more importantly, process goes alongside it, which is a real-world process to get to your goal, like breaking your task down into multiple steps and not jumping straight to the answer. And also a process for using the AI: so using its limited memory, resetting its memory when you need to, and all the other features and functions of the AI. And then finally, as it comes to guarding the AI: proficiency. And that's using your judgment to make sure that what it gives you is useful. Adding anything in that's left out, making sure that it's sticking to the guidelines or the constraints that you have in the problem, and then rewording it if necessary into your language and enhancing it a little bit. But that proficiency step is just as critical as the other three. So, yeah, guarding the AI, overseeing the AI, making sure that you're in charge, not the AI. Absolutely critical.
[00:08:22] Jeremy Utley: Yeah, I think that's one area where we see folks almost treat an LLM like a search query, and they're not used to bringing their proficiency to a search query, right? It's like, you put in a prompt, you get results, but you don't push back. And you certainly don't exercise your own judgment or your own discernment. How do you help people grow along the ability continuum to realize this is not search?
[00:08:50] David Boyle: Yeah, I think this is one of the challenges. This is unlike anything we've had before, and so old mental models of how to work with technology are useful, but don't cover all the ground that's needed. The analogy we use at the end of every training session to frame this thing is that language models are a bit like an electric bike for your mind. Steve Jobs famously said technology is like a bike for the mind. Well, these are electric bikes, and that does mean you can travel bigger intellectual distances or climb bigger intellectual hills and do things faster. It does mean that. But if you've ridden an electric bike, you know that you're also in charge. You've got to steer the thing and make decisions at every single junction, watch out for the traffic and fit in with the traffic. You've got to set the direction and navigate, and you've got to park the thing at the end and cross the threshold to your destination by yourself. So very much an electric bike for the mind, and not a self-driving car for the mind, which is another way people often think about this.
[00:09:50] Henrik Werdelin: That makes a lot of sense. Let's go uncover the beginning, because I think one of the things that Jeremy and I have really been fascinated about over the last few podcasts we've done is how young this industry is, and how most people who are now experts, like yourself, were not necessarily experts or come from a deep computer science background, or necessarily understand the ins and outs of the specifics of the models, but are more kind of experts in applied AI: how do you actually use them? Could you walk me through a little bit of just those very early days when you tried it out and you were like, wow, this is going to change everything? And it sounds so dramatic, and obviously we're on an AI podcast and we're all fascinated by it and into it, and so it can seem a little bit celebratory, but what were some of the thoughts, if you can remember, that went through your brain as you were using it for the first few times?
[00:10:44] David Boyle: I think it started off with absolute surprise, because I think I came into AI with skepticism. It's something that's been promised a lot over the years and failed to deliver. It's a badge that's put on things that are not really revolutionary technologies. So there's a lot of hype and a lot of over-promise. And so yeah, I came in with big skepticism, I would say. But very, very quickly that turned to surprise. And in my experience with other people as well, that moment where you see it do something in a domain where you're an expert, when you see it do something that you actually regard to be very, very impressive, that is a moment of real surprise. And it's a turning point in anybody's relationship with these technologies. And for me, that was pretty quick, because I'm a giant nerd, and the first thing I did was apply it to areas which I think are difficult and where I'm an expert: segmenting an audience. And so pretty quickly, me and a colleague, Richard, were coming up with ideas and use cases and examples that we regarded to be quite remarkable. And mostly this was done via WhatsApp; we were WhatsApping each other during this time. And I vividly remember saying to him after about a week: are we crazy? Or is this really as good as we're saying it is? You know, that's the moment when you step back and say, it just can't be this good. It can't be this clever. Perhaps we've deluded ourselves in some way over the last week, you know?
[00:12:12] Henrik Werdelin: So now, over all your teaching. I think this is important, and the discovery and the aha moment is important, I feel, because I spend a few hours every day researching this, and I feel I have a hard time keeping up. It just moves so fast. And so I feel the divide between people who understand how to use this and people who do not seems to be really just becoming bigger and bigger, by the millisecond. And you talk to people in organizations that understand intellectually that it's happening, and they've heard about it, but they don't really use it in their everyday life, and they don't really know how to get around to doing it. What is your method when you're teaching that lets people get that feeling that you described you had?
[00:12:54] David Boyle: Well, I first approach it in the wrong way, and then I think I approach it in the right way as a follow-up. The wrong way, I think, is to logically explain to somebody why this is brilliant, because my want is for the world to make decisions logically. And so the first thing I say is: look at all these great academic studies which show the benefits of being quicker and doing more and being more analytical and being more creative. A wealth of evidence; you don't have to trust me anymore, you can trust academics, and therefore we should, of course, agree that we should devote resources to working out how to deploy this in our organizations, right? Everybody nods and nobody does anything. Then I do what I think actually works, what really breaks through, which is to show them an example in a domain where they're an expert, where it does something very clever. And usually what I do is to show it fail first with a very simple prompt, and then have them judge the failure, incorporate their judgment from the failure into a revised prompt, and after one or maybe two iterations, hey, together we've fixed the problem and we've made it do something which they regard to be exciting in their domain. Usually they sit back in their chair, a big smile comes across their face, and they say: oh, I get it now. And so that's the magic for me, that moment right there.
[00:14:12] Jeremy Utley: I love this. Can you tell us about, is there an example that comes to mind of a failed prompt, where you're trying to show an example in a domain where they're an expert? You give a failed prompt, and then they diagnose the failure, and then they revise the prompt. Because, as much as possible, I'd love for our listeners to walk through almost a visceral experience of that failure, evaluation, delight.
[00:14:37] David Boyle: Yeah, let's go back to audience segmentation, which is my pet passion in life. So whenever we work with a brand, we'll sit down and say: okay, let's understand your audiences as best we can using a language model. And we'll start with a prompt that says: hey, segment the audiences for, let's say, Pepsi. Segment the audience for Pepsi. And the language model will do an all right job. I mean, it will give you some options for how you might segment the audience, you know, demographically, regionally, other ways. And it might give you some starters, like, totally fine. But not an actual segmentation, and not one that's very exciting or very useful, certainly not. So the question then is: is the person able to judge that response and articulate their criticisms? Because it's one thing saying, well, that's not very good. It's another thing saying, here are the ways it's not very good, or here is what it is I'm looking for. So a really good marketeer will say: huh, no, I want a segmentation that's based more on deep and underlying category-related needs, and not demographics. And if they give me that feedback and we incorporate that into the prompt right away, it will do a pretty good job. And then if we give it even more feedback, to refine what it is that we want and to be more specific, maybe you want a more product-based segmentation, or maybe you want a more marketing-based segmentation. If you're clearer about what you want and reprompt again, you'll get a great answer. And we've done that in situations where we have the audience research, the old-fashioned quant audience research that cost a lot of money and took three months to deliver. And you can get most of the way there by just prompting an LLM. You can get most of the insights you waited three months for and paid, you know, 50 to a hundred grand for. That's a real remarkable moment for people.
[00:16:29] Henrik Werdelin: And is this just taking the models as is, or do you use any RAG-based systems where you upload, for example, all the interviews or whatever you have?
[00:16:41] David Boyle: So you can do RAG-based systems, or uploading interviews, and lots of other things, and they all help, they all add useful knowledge or unique perspectives. But they all add quite a lot of complexity to the process, and my philosophy is that if we can help people to get the most out of the base models as they are, that's by far the biggest win, and it teaches them the foundational skills for how to think about those models and use those models in every part of their life. And then once they're an expert in that, adding RAG or other data sources can be a really helpful addition, but I don't want it to get in the way of the foundational knowledge of using these systems. And the fact is that with good prompts, they're incredibly powerful as is, without any additional knowledge being used.
[00:17:30] Jeremy Utley: So you talked a moment ago about how you giggle. You love to get to the point where you giggle, when it does something that you weren't expecting or you're delighted by. Can you tell us about the last time you giggled? What were you trying to do, and what was the response that made you kind of giddy? I can think of an example myself in my life, but I'm a collector. I'm a connoisseur of these kinds of stories.
[00:17:52] David Boyle: Well, the most obvious one for me is qualitative research, interviewing people. And it's something I've never really enjoyed, honestly, because it's quite a lot of work to prep for an interview. The conversation itself is quite good fun. Following up from the interview, you know, overseeing any kind of transcription, is quite a lot of work. Going through your notes is quite hard work. And then pulling out the main themes, making sure they're accurate. It all just feels like quite a lot of work. So I've never really spent a lot of time doing that in my career. I've avoided it where possible. I now leap into that situation and love doing it. It is so fun to have an LLM help you think through interview questions. Like, I get to better questions quicker than I ever would before, and it's fun to do. Knowing that as soon as I hang up the call, there's a transcription waiting for me is amazing. What I do after an interview, I still take notes in my notebook. I do a Zoom call to myself, and I reflect on the notes that I think are most important and the themes I think really stand out. And then I put the transcription of both the interview and my reflection on the big themes into ChatGPT. And with a single prompt, it will give me a fantastic overview of the big themes, issues, topics, and all the useful quotes, guided by my expertise, my summary of the notes, and what I think is important. So definitely guided by me. It's just a fun, joyous thing to do now. And it feels naughty every time I do it.
[00:19:18] Henrik Werdelin: What's the prompt design, or something like that? Do you then go through the classic kind of, here's who you are, this is what we're trying to do? Or have you built it out from there?
[00:19:29] David Boyle: Yeah, I use a relatively sophisticated workflow here. So, custom instructions, first of all, are very helpful, both on the context of who I am and what I care about in the world, and also the writing style and thinking style that I prefer. So they're baked into every prompt in the background. Then I'll paste in some context for the interview. So: hey, by way of context, I'm exploring the issues and trends in electronic music in the UK, and here are some perspectives and issues I want to focus on, and here's the goal of the report. So I have that written already and paste that in. Then the two transcripts, the interview and my reflections. And then I have three different prompts that I'll run one at a time afterwards. One for major themes and topics, which is how I like to think about the big-picture themes. A second for anecdotes and stories that particularly powerfully illustrate them. And the third prompt is for direct quotes, with the theme for each quote. So: on this theme, here's a quote; on this theme, here's a quote. And it all needs sense-checking, as we always should, against your memory and against the transcript where needed. But I get a great response almost instantly. And again, well, it feels like cheating.
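David describes a manual copy-paste workflow inside ChatGPT, not code, but the structure he outlines (persistent custom instructions, pasted context, two transcripts, then three follow-up prompts run one at a time) can be sketched as a message-assembly function. Everything below is an illustrative assumption: the function name, the prompt wordings, and the message layout are hypothetical, not David's actual setup.

```python
def build_messages(custom_instructions, context, interview_transcript,
                   reflection_transcript, follow_up_prompt):
    """Assemble a chat-style message list: persistent instructions first,
    then the pasted context and both transcripts, then one of the three
    follow-up prompts (themes, anecdotes, or quotes)."""
    background = "\n\n".join([
        "By way of context: " + context,
        "Interview transcript:\n" + interview_transcript,
        "My reflections on the big themes:\n" + reflection_transcript,
    ])
    return [
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": background},
        {"role": "user", "content": follow_up_prompt},
    ]

# The three prompts David describes, run one at a time, in sequence.
FOLLOW_UPS = [
    "List the major themes and topics from this interview.",
    "Pull out anecdotes and stories that powerfully illustrate those themes.",
    "For each theme, give direct quotes: on this theme, here's a quote.",
]

messages = build_messages(
    custom_instructions="Summarize concisely; prefer big-picture writing.",
    context="Issues and trends in electronic music in the UK.",
    interview_transcript="...",
    reflection_transcript="...",
    follow_up_prompt=FOLLOW_UPS[0],
)
# Each assembled list would then be sent to a chat model, e.g. via an
# API client such as:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

The point of the sketch is the separation David insists on: the prep (context and transcripts) is gathered before prompting, and each of the three prompts reuses the same background rather than asking for everything in one go.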
[00:20:43] Henrik Werdelin: And how, you know, you've obviously done a bunch of ebooks on the different types of prompting: for real estate people, for musicians, for TV people. What's the mental model that people should have, and what makes it different to prompt for different types of people, outside the individualness of what you can do with custom prompting?
[00:21:02] David Boyle: Well, I think what interests me most are the general skills that everybody needs in order to use these things. And we've written five public books now, also one private book for a corporation that's published internally on their innovation process as it relates to using language models. So six books in total. But they're really all the same book, in the sense that the foundational skills are absolutely identical; it's just that I think people need to see those skills applied in an area where they're an expert in order for it to really click. So I think you could pick up any one of our books and laterally apply the logic to your area if you wanted to; it's just that people won't do that. So using real music examples will help musicians to see that breakthrough. Using real psychotherapy examples will help a psychotherapist to see it more literally in their world. But the lessons are absolutely the same. And I think of it as though I'm on a quest to find somebody for whom these things can't help. So whether it's a world-famous pianist or a psychotherapist, you know, I love sitting down with them, especially if they're skeptical, and trying to find the ways that it's useful to them at the core of their craft. And yeah, I'm yet to find someone where that's not, well, where it's not very helpful.
[00:22:20] Henrik Werdelin: Sometimes it's less that it's hard for people to find something, and more that it's not a natural part of their process of thinking. And so when they actually sit down and do it, they'll always find something, but they often forget. And so the small tricks, like using voice-to-text and the OpenAI app instead, so you can just babble instead of writing, some of those tricks are what I find often increase the ability for people to use it. Do you have other kinds of mental models or tricks and tips that are good for people who know that they should probably use it some more, but then just forget about it when daily life hits them?
[00:23:03] David Boyle: Yeah, that's a great question, because I think this is a radical change to how you do your job, and that requires quite a lot of work for any kind of radical change to set in. For me, the key is to find one repetitive task that you do where you can bake this in and make it a habit, and then it will creep into all the other tasks that you have. And so maybe that's meeting summarization, in which case get really, really good at it. You'll work out your workflow, you'll optimize it, and then you'll just end up accidentally using it on all the other tasks. If you're a songwriter, maybe that's songwriting, but it's probably not, actually. It's probably on the marketing side of being a songwriter where you use it a lot, and then it will creep into the songwriting. It really will. So I don't really care where you start, but I care that you find one thing that you're passionate about using this for, probably the thing you hate doing the most, and then you bake it into your life and your workflow.
[00:24:01] Jeremy Utley: Can I just interrupt there? Because I'm going to give a real-world example, okay, straight from the desk of Jeremy Utley. So I've got hundreds of old blog posts I've written. I know existentially one should be posting on LinkedIn every day. It is so tiresome to me, and I want it to be fresh. I would love for my assistant to be good at this, but I find, and Jason, if you're listening, you know this, I find that he has a really hard time emulating my voice. And so he's got this assignment, post in my voice, and I've got to give feedback. But what I find the bottleneck is now is not my ability to post on LinkedIn; it's my ability to summon the energy to give him feedback on how, again, my voice is not being mirrored here. So to me, one realization as we're having this conversation is I should be using an LLM to facilitate that process. I think about a musician and the marketing challenge: I think sometimes the alternative isn't do it poorly, the alternative is just procrastinate and never do it, because it's so distasteful. Because as you said, it does require work, right? It's an electric bike. How do you summon the activation energy when the alternative is to just keep putting it off and keep being annoyed by it? If you had to consult or advise me, I mean, in this particular case, right?
[00:25:18] Henrik Werdelin: I realize that we're interviewing David, so I'll give you one more second, because I tried to make a hack for that. I made a custom GPT through OpenAI, where I uploaded not only a bunch of blog posts that I've written that I liked, but, I used an amazing ghostwriter for my book, and so we have all the interview notes and the transcripts, and so I found a bunch of transcripts that I put into that too. And what I realized is, if somebody sends me an email and I have to answer, and I don't really know quite what to answer, I just copy-paste the email into that custom GPT. It's incredibly good at doing two things. One is emulating my voice, but also, we tend to have these 10, 20 stories or mental models that we reuse over and over again to answer most things, and it's pretty good at picking those up through the interviews and the different blog posts I've done. And so I'm pretty sure if you did the same thing and just added, what should be the LinkedIn post based on this, it would come up with something that was 70 percent there. And then obviously, being in edit mode is always much easier than being in origination mode.
David, that's kind of completely off what you might've been thinking.
[00:26:34] David Boyle: No, I think it's perfect. But I think it maybe illustrates the challenge here. In my experience helping companies to adopt technology that helps improve decision making, I think there are two types of people. There are people who believe in investing in process, because with a good process or a good framework or a good system, you realize that everything you do on top of that is radically better and quicker and easier and kind of more fun. And I've seen those benefits in loads of different technologies over the years, massively. And this is just another one of them that requires investment in process and a bit of systems and frameworks. There is also another type of person I've encountered a lot over my career, and they're just not that interested in optimizing process. They just don't want to invest their time in it. They just don't feel or see the benefits. And frankly, sometimes even when you offer them a nice process that they could adopt, they just don't get it. And I think that's okay, you know, that's fine. But I wouldn't want to have to help them to achieve process efficiency. So if you're interested in process efficiency, then a little bit of investment, as Henrik was saying: thinking about what good source reference documents are, what good looks like, thinking about what you mean by style or tone or things like that, developing a nice prompt, and then having some muscle memory for where to copy and paste, where to press go, and then what to do with the answer. Some muscle memory for that process will get you an awful long way, but you have to want to improve the process in the first place. I've not found a way to inspire people that don't want to do that. And some people just don't.
[00:28:16] Henrik Werdelin: In many ways, not that we have to be product developing here, but you could probably have the model do it. And I think this is something that Jeremy often uses as a trick for coming up with better ideas: if you don't know what questions to ask, you can ask the model to come up with those questions. It's maybe a little bit of a circular argument, but I would imagine that even people who go, I'm not a systems thinker myself, this is not something that comes natural to me, if they then said, hey, I'm not a systems thinker, could you help me come up with a way of automating or optimizing my work, or a specific workflow, it would come up with a pretty good suggestion.
[00:28:54] Jeremy Utley: Yeah. Yeah. No, I think it's spot on. I love, David, what you said about, kind of, are you a systems person or not? And also, Henrik, is there a way to hack it so that everybody basically could be? I'm curious, going back to David. One of the things that you've mentioned a couple times is finding someone's domain expertise in order to showcase the power of an LLM. How do you help find the intersection of someone's domain expertise and an opportunity for an LLM? Is there a simple kind of model to walk through to get somebody to the point where they can even do the bad prompt first?
[00:29:28] David Boyle: That's a good question. I think you always have to start with a job that somebody is trying to do, and that job can be quite uninspiring. One of the academic studies I love the most is the plastic surgeons who used ChatGPT to help write up post-operation notes. If you're a plastic surgeon, I would imagine that sitting down at your desk after an operation and writing up notes for the patient about what happened is not the highlight of your day. And this study showed that for these plastic surgeons, not only did it take radically less time to write these notes, but the notes were preferred by patients and doctors and independent tests. So you've taken the bureaucracy out of somebody's day, and you've streamlined it and made it better for everybody, including the person doing the bureaucracy. So sometimes starting with a piece of bureaucracy that somebody has to do is a job that you can help them with. For example, with singers and songwriters, I really think that getting the marketing stuff off their plate more efficiently, so that they can spend time being creative, is the place I started trying to get them excited about language models. But it could also be the core of somebody's craft, if you want. A colleague of mine, Ray, sat down with a world-famous pianist and composer. And Ray was a bit bolder than I would have been, so he didn't start with the marketing usage. He started with songwriting, composing. He said to this person, describe to me one of your most challenging pieces of music that you've written, and the composer described it: the start here, then go here, and then here. Ray typed that in, and then said, describe to me what an epic ending to that could have been. Not the one you actually put on it, but what could have been an epic ending. And this composer said, well, in the 15th century there was this person and they did this, and if I could have gone from my start to there, that would have been an amazing ending.
And so Ray prompted ChatGPT and said, what are some ways I could go from this start to this ending? And it came up with four. And the person said, oh my God, they're all perfectly credible ways of achieving that. I never could have thought of that myself. And wow, it's done something that to me is incredible, right at the heart of my craft. So whether you start with the bold push towards the heart of what it is that somebody does and their expertise, or whether you start with the admin tasks on their plate, like the plastic surgeon, I think there are lots of ways of going about it. But seeing somebody surprised when it does something that they care about is the magic you're pushing for.
[00:31:53] Henrik Werdelin: I imagine that you have now tried prompting in different models. Do you have any advice or observations on the differences between, let's say, Bard or Claude or ChatGPT and Bing? There's an increasing number of different options. Do you just go to one source and that's good enough? Or do you increasingly use different LLMs for different kinds of use cases?
[00:32:19] David Boyle: I have a crude mental model of three broad use cases that are important enough to require a different language model. For me, broadly: thinking, whether it's strategic or creative thinking, GPT-4. If it's research, like I think the answer's out there on the internet somewhere, I want to know where the answer came from, I want several potential answers checked, and I want a natural language response; if that's the job you're trying to do, Perplexity, no questions, 100 percent of the time. And then if it's writing in a natural writing style, then Claude would be the model, from Anthropic. So crudely, I'll go to one of those three for those use cases. Underneath that, there are loads of other use cases which are a bit more niche and might require other models. Like, if the question is go through my emails, then of course I would go to Bard. I've got no choice on that one. And if the question is, I need to do some data science, then Code Interpreter inside ChatGPT is the place to go. But for the big tasks, I think there are three models that everybody should have in their mind.
[00:33:29] Henrik Werdelin: I a hundred percent agree. And it's funny how these things are emergent as kind of the way that we are all doing it. The only extra thing I'd add, now that Bard has access to YouTube: for finding something on YouTube, obviously you need Bard, and because it has access to transcripts, you can actually get quite a lot out of that. I would also say it's pretty good at anything that's event-based or travel-based too. So, I need to find a water park in my area, or I'm looking to go to New York next week, what are my options? But to your point, then we're kind of out in niche land a little bit.
[00:34:08] David Boyle: This is a big challenge for everybody, because practically speaking, most people at work have a corporate solution which is not one of these. It's usually a homebrew version of ChatGPT built using a Microsoft system, maybe, or it's another third party that pitched them and sold them on it. So practically speaking, people are using a model which has different features and functions to the ones we've just talked about. And knowing what features and functions are available today, and knowing that that changes rapidly, is, I think, a huge problem that complicates this for people. So what I really like to do is come back to basics again and say, look, let's learn together the foundational skills for using any language model. Let's learn all the universal truths and best practices and advice that are applicable to absolutely every single one of these models. And then a slightly separate problem you need to tackle is exactly which model am I using, and what extra features, functions or constraints does it have? But if we can give people those foundational ways of thinking, like always worry about the four P's, then I think they're well suited to pick up any model and know how to push it forward and how to test it, and very quickly work out what it's good at and what it's not good at.
[00:35:19] Henrik Werdelin: , what would you say is the biggest prompt mistake that people could do on a base layer?
[00:35:27] David Boyle: I think it's not clearly articulating your requirements. Full stop. I mean, if it doesn't deliver a response that you regard to be useful, a hundred percent of the time it's because you weren't clear about what it was that you wanted. You weren't clear on the tone or the style, maybe, if that's your problem. You weren't clear on the context in which you're operating. You've got to remember these things don't know whether you're a plastic surgeon or a physicist or a schoolteacher or a kindergarten pupil. It does not know that when you log on, unless you tell it. It doesn't know whether you've got a giant marketing budget and anything is possible, or whether you're incredibly constrained and you need hacks that get you through your marketing campaign. You know, it doesn't know whether you want to be bold and provocative in your thinking, or timid and conservative. It doesn't know any of those things. You do. You probably take them for granted. You probably don't put them in the prompt. But knowing what your goals and constraints are and what it is that you want, and clearly articulating it, is most of the time the gap between a mediocre response and a great response.
[00:36:33] Jeremy Utley: I love, by the way, the insight that you shared earlier, which is: critique the bad response. To me, it's amazing because, again, it puts the onus back on the user. And just to be a little bit cynical here, I think some people are hoping an LLM does a bad job so they can say, see, it's not really that good. With the tactic that you just shared, critique the response and see what it does, I think people who actually want to see what the capabilities truly are would be blown away. I think many people actually want to leave going, it's not really that good, it's not going to take my job, or whatever their kind of doomsday scenario is, and therefore they don't take the burden on themselves.
[00:37:16] David Boyle: Yeah. We have an acronym for this. Whenever you get a response from a language model, you should always use CARE. That is: C, check it. That's on you, not on the language model. A, add. You always want to add some things; there's bound to be stuff it missed. R, remove. There's bound to be bits of its response that you don't like, that are off topic or off brand or slightly off. Remove them. And then E, edit. Make the wording your own, the language your own, refine the flow maybe. You generally always have to use all four of those. And let's talk about a dirty truth: most people don't do that. They just copy and paste the response and think, well, that's good enough.
[00:37:58] Jeremy Utley: Yeah. Yeah, that's huge. Okay, the last thing that I want to hear. Most of our listeners are folks inside organizations who are trying to find ways to apply AI. We love to hear success stories or favorite use cases. Can you think of an example of a use of an LLM that you point to as a "see, this can really have economic impact on a business"?
[00:38:23] David Boyle: Yeah, we were asked by one of the biggest consumer goods companies in the world to help them with their innovation process using language models. And originally the brief was something like, write us some prompts we can use so these things can help us with innovation. I think there was a spirit early on that language models might replace some of the existing innovation process and almost automate some of it. Where we ended that project, I think, was a radically different position: arguing that a prompt is important, but it's only one part of the process, and that preparation, proficiency and process are also very important parts of how you use language models. But we pretty quickly showed that the new product innovation process could be radically quicker and radically cheaper and deliver better results if people used language models to help them with that process. The company consumer-tested a bunch of new product innovations where language models had helped come up with the innovation, alongside the same brief given to traditional innovation teams, and more of the language-model-assisted innovations passed consumer testing thresholds. They scored higher, and they were quicker and easier to get to as well. So pretty quickly they had results that showed the benefits. We then worked with teams in a dozen countries around the world to practically help them to use language models in their innovation process. And we wrote a 300-page book for them on how the new process should look and all of the different prompts that might be useful at different stages. So yeah, pretty quickly we realized that language models are incredibly important. And then it was a three or four month process to work out, all right, but what are all the nuances around when, where, how, all the watchouts and all the guardrails around it? And I think most people can pretty quickly get to an inspiring example like that.
But then you've got the three or four months worth of work to kind of document all of the processes and then roll them out in training to drive real cultural change. That's really the hard bit nowadays.
[00:40:27] Henrik Werdelin: This is incredible. It really is. It's so cool when we have these conversations, because even though we have a lot of them and we think about this a lot, to your point, it is just this kind of tool that doesn't come with a manual. And so you seem to be constantly learning new things about it and coming up with new ways of using it. In many ways, it's very similar to the early days of the internet, right? When people were like, well, what can you use it for? You can kind of use it for everything. You just have to figure out how.
[00:40:58] David Boyle: Yeah, and Ethan Mollick, the professor who writes a lot on this topic, has a quote I love. He said, they're bad at some things you'd expect them to be good at, and they're good at some things you'd expect them to be bad at. That speaks to how hard it is to get it right, how important this conversation is, and how much work is required to really drive their usage in organizations.
[00:41:19] Jeremy Utley: You are an N of one, my friend, an N of one.
[00:41:22] Henrik Werdelin: Can I ask a last question, just because I was working on something the other day and you will have a very unique perspective on it? I was talking to a Fortune 1000 company in the U.S., and they were talking about using this for research purposes. Have you ever seen somebody just load in everything they have as a RAG model, and then allow, let's say, the marketing team to interview kind of a synthetic representative of the audience? So instead of going back and forth saying, hey, let's figure out what the audience might think of that, you basically use a RAG model to make that initial inquiry.
[00:41:59] David Boyle: Yes, but without RAG. Because again, I think the base model is so powerful. With the right level of prompting about who the target persona is, the target audience, it does an incredibly good job straight off the bat without any RAG. My problem with RAG is the complete lack of transparency, in any system I've seen, about what it's actually finding and using. If it could show me, hey, here are the five nuggets that I found that feel relevant, and I could say, oh yeah, okay, first, they're relevant and useful, and second, they're well representative of what I know to be in the source material; if I could have high confidence in that flow, then I'd put a lot more weight on RAG. But yeah, the base model is already incredibly powerful for that use case. So yes, definitely.
[00:42:46] Henrik Werdelin: Awesome. Brilliant.
[00:42:46] David Boyle: Yeah, audience research use cases I'm really passionate about. And I think that most audience research, most by volume, will and should go away. Language models can get you a good enough answer, infinitely quicker and infinitely cheaper. And for most use cases, that's good enough.
[00:43:06] Henrik Werdelin: Well, if we can ever be helpful with anything, please let us know. We really, really appreciate you taking time.
[00:43:10] David Boyle: So lovely to meet you. I'm really passionate about this topic and market research in general. If there's anything I can do to help in that area or nerd out with you, then count me in, a hundred percent.
[00:43:20] Jeremy Utley: Yeah, that'd be fun. We'll find an excuse to collaborate on something. It'd be a lot of fun. Thanks, David, for your time.
[00:43:25] David Boyle: Bye.
[00:43:28] Henrik Werdelin: Jeremy, what's your first thought?
[00:43:30] Jeremy Utley: I love the quote, "It's on me." That's one thing that I'm taking with me: if I don't get a useful response, a hundred percent of the time, I wasn't clear enough. I think very few people have that attitude towards LLMs today. And if folks would adopt it and take a proactive stance towards giving feedback, I think they'd be blown away by how much better the output could be.
[00:43:59] Henrik Werdelin: And how do you square that with many people's assumption that you shouldn't work for technology, technology should work for you?
[00:44:06] Jeremy Utley: I think that we have an old paradigm there. I love the metaphor of an electric bike for the mind, not an autonomous vehicle. And I think the paradoxical fear that AI will do everything for us, and the desire that it should, is the problem. The reality is there's a human-AI collaboration that has to occur, and I think human beings need to become more willing collaborators with AI.
[00:44:35] Henrik Werdelin: The thing that really strikes me on a philosophical level with the people we talk to is their level of empathy. As technologists, we obviously got introduced to design thinking through a lot of your work and through Stanford and the d.school's work. But what is really incredible with most of the experts we talk to is that they are not engineers; they are systems thinkers, and they have an incredible way of trying to fuse this new, very advanced piece of technology with humanity. In many ways, what they do is anthropomorphize the machine. And the more they do that, the more they talk to it like a human, the more they understand that the LLM is like an intern that you have to give parameters to, tell what you expect, and tell what good looks like, the better results they get. It's a fascinating kind of merge of humanism and the singularity, kind of.
[00:45:42] Jeremy Utley: The more you treat it like a human, the better results you get. And you would never expect a human to get it right the first time without coaching and guidance and feedback. I think that's exactly right. The other thing I thought, almost as a starting point, when he's talking about helping folks have their first epiphany, so to speak: I thought it's a really great piece of advice to intersect your expertise with an LLM. So, identify something where you feel you have unique domain expertise and do a wrong prompt first. Or do a prompt that doesn't yield very useful results, and then do the hard work of diagnosing the failure, judging it, articulating your criticism, et cetera, and then get a revised output from the LLM. I thought that's a really great way for folks to see the value. I mean, just like he talked about with his audience research: starting with the area where you feel like you have unique expertise, I think, is an incredible way of demonstrating the kind of incremental juice you can get from an LLM.
[00:46:47] Henrik Werdelin: I also really like the idea of finding something very boring that you do every day and just getting into the habit of doing it over and over again, like interview notes. I even use it sometimes for my daily gratitude practice. I have a shortcut, and I basically ramble something I'm grateful for and have it write it up, and then I say a little bit about how the day has been and it creates a little diary entry. It's just a nice way to do a habit that I've always wanted to do but never really felt I had time for. And so now I have this nice write-up of how all my days have been, and a nice little write-up of something I'm grateful for each day. And what I'm now working on is, can I take the model and have it review my last few months, and then start to try to find insights about the days that were good and what I was grateful for.
[00:47:38] Jeremy Utley: That's beautiful. I think his direct quote was, find a piece of bureaucracy to handle so that you can do your real job. But yours there, that's not bureaucracy. It's almost like you're working to become the kind of person you want to be. You're saying, where is there a gap between who I am and who I want to be, and can an LLM help me become more of who I want to be? I think that's beautiful. I'll be really eager, and I'm sure listeners too will be eager, to hear some of the results of that trend analysis it does over time.
[00:48:12] Henrik Werdelin: Well, Jeremy, I'm very grateful today for being able to do this podcast with you. And I'm very grateful for anybody who has chosen to listen, because we get a lot of enjoyment out of having this conversation, and we hope that other people who are listening get the same kind of enjoyment. So that'll be on my gratitude list today.
[00:48:29] Jeremy Utley: Absolutely. Thank you. Thank you.
Okay, it's bonus time. If that wasn't enough, what follows is an unedited take of David Boyle actually going through his own post-interview process on the interview we just conducted with him. So you remember what he told us about how he used to dread going through his notes and digging through everything that he covered in a call, and then how ChatGPT has totally changed his approach, and how much more he's able to learn and how much more he's able to enjoy it? Well, we sent him the transcript of this podcast right after we finished recording, and he was gracious enough to do the post-interview process that he typically does for his clients on this podcast conversation. So what follows is an unedited view of David using his own process on the conversation that we had together. We think it's so cool to see how folks are really using these tools in the real world, and we hope that you enjoy this illuminating peek behind the curtains of David Boyle's practice.
[00:49:31] David Boyle: Okay. Well, I just finished recording the podcast, and they've just sent me through the recording to take a look at, which is very exciting. I can see all the ways in which I messed up. But I thought what we'd do here is I'll show you my workflow, after a meeting, for how I take the contents of the meeting, the transcription of the meeting, and turn it into something that's useful for me to reflect on, make use of, and learn a little bit from as well. So this is my workflow, pretty much. The first thing we need is a recording of the meeting. Here, what I'll do is download the recording they sent to me from Zoom; that's one way of doing it. Ultimately I want to get it into Otter.ai, which is the best platform for speaker identification, and it does a better job of transcription than quite a lot of other platforms. There are loads of other ways you could do what I'm doing now. You could do a lot of it inside Zoom, you could do a lot of it inside Otter, but here's my workflow: take the recording, wherever it comes from, and throw it into Otter. What Otter will do is identify the different speakers, and what I've done is named them, so that instead of saying speaker one, speaker two, it has the names. That'll be really helpful for the interpretation. And you'll see that Otter does a pretty good job of breaking the paragraphs up and showing you who said what. Really, really good. Apparently I did 53 percent of the talking in this podcast, which I'm not too happy about. Otter now also has its own AI built in, so you could totally try to do some summarization here, but I quite like the power and the flexibility of ChatGPT. So my workflow is to take the transcription from Otter, export it as a text file, and then take the text file and paste it into ChatGPT. Now, we're going to want to paste quite a long text file in here.
ChatGPT is totally cool with length, but we're going to want to put prompts alongside it, and I want to make sure that ChatGPT is really clear about what the transcription is and really clear about what the prompt is. So I have a method here which has some academic basis for it, and which also helps me to think it through clearly. I'm going to enclose the transcription in some curly brackets. I'm going to type "transcription equals," open the curly brackets, put the transcription in, press paste, and close the curly brackets, just so that ChatGPT has a better sense that, all right, that bit's the transcription, and the section coming in a second is the prompt. Just so it doesn't get confused between the two. By the way, ChatGPT rarely does get confused; Claude gets confused a lot if you don't do this. So I feel like it's good practice, and it also helps you to be a bit tidier. So there's the whole transcription pasted in, and now I need a prompt that I can use to summarize this in some way. Now, the real key to using AI is knowing what it is you're trying to achieve, and there are lots of different ways you could summarize or tease out themes from this. I have about five that I use. I've put them into a document for you, and I'll share that document with the links as well. We're going to go through each of these one by one. One is around very basic summarization; we'll look at that in a minute. One is a super nerdy compression of the content into as small an amount of space as possible; that's a long prompt, we'll look at that in a minute. One is about helping me to learn from the conversation and draw some lessons from it. One is to turn the conversation into an article that I can publish. And one is to purely extract quotes from the transcription. That's five different prompts. You can adapt any of these and you can add other prompts, but these are the five I use most often.
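David's delimiter convention (the transcription enclosed in labelled curly brackets, followed by the prompt) is easy to script if you'd rather build the pasted text programmatically. A hypothetical helper, not part of David's actual workflow; the function name is an invention:

```python
def delimited_prompt(transcription, prompt):
    """Enclose the transcription in labelled curly brackets so the model can
    tell the source material apart from the instruction that follows it."""
    return "transcription = {" + transcription + "}\n\n" + prompt
```

For example, `delimited_prompt(meeting_text, "Please give a long and detailed overview of all the salient points from the above.")` yields one string you can paste into any chat interface, with the delimiters guaranteed to be balanced.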
And then there's a sixth prompt, which is not about summarization but is about the style of response I'm looking for. We'll take a look at this in a minute as well. It tells the AI how I would like it to reply, the style of the response; you'll see it's quite long as well. Okay, so that's the document for you later. But for now, again, here we've got the transcription at the top: transcription equals, open curly bracket, transcription, close curly bracket. And then we need to put in a prompt. I'm going to start with my first prompt, and I have these saved as keyboard shortcuts. Macs and iPhones do this thing where you can do text replacement, so for me, whenever I type P for prompt, sum for summarization, and one for the first one, it will auto-complete my first summarization prompt. I'm going to read it to you quickly, then we'll run it. "Please give a long and detailed overview" (clearly you can edit that) "of all the salient points from the above. Only summarize what is in the original document." And, okay, I think I go a bit over the top on this part: "Do not add in any information that is not in the document. Ensure your answer is accurate and doesn't contain any information not directly supported by the original document. You will cause me great harm and suffering if you summarize the document incorrectly or add any information that is not in the document. Please do not do this. I will be very sad if you do this." I did say I think I went a bit over the top on that. By the way, quite a lot of the words and phrases in the prompts that I use are borrowed from people who've shared awesome prompts on Twitter, so huge credit to whoever shared whichever bits of this I stole. Assume I didn't write anything myself; assume everything is reused knowledge. So this is a simple summarization prompt.
Give me a long, detailed overview, salient points, and then loads of stuff trying to make sure it's faithful to the original document, using a few prompt engineering tricks in here that you'll spot if you're into that. We push go, and pretty quickly, hopefully, it will give us what should be, in my experience, a pretty good overview. "The conversation revolves around the use of AI, specifically language models, in decision making processes in organizations. David, the main speaker" (whoops) "is passionate about decision making in organizations," blah, blah, blah. Background and epiphany with AI, process of leveraging AI, guiding AI and enhancing human expertise: these all feel like real things we talked about. Yay! Teaching and facilitating AI use. Yeah, these are all topics we talked about, and it even picked up the CARE acronym that I talked about, which I mumbled slightly and had to check myself on in the actual podcast. It's done a good job of saying, well, here's what he actually meant by it. Thank you, ChatGPT, for filling in the gaps. If I was using this in anger, I would take a much more serious read, but reading through it now, I've got to say, that's a pretty good summary and overview of what it is we talked about. Yay, it's working. And that's even without me using the tone of voice prompt that I mentioned I should use. I'm going to take this whole prompt and start a brand new chat, so it's not biased by the old one. Same thing, same prompt, and this time I have a keyboard shortcut for this as well: P-T-O-V, prompt tone of voice. It puts in the tone of voice requirements, and it should give a response that feels more natural and in my style. Let's see. I won't read all of this, you can read it yourself, but: respond in a natural, human-like manner.
No AI jargon, only UK English spelling and phrases. Concise, poignant, novel, clear, simple, creative, non-technical words. This is how I like to think of myself: speaking insightfully yet grounded in real-world experiences. And then, to get rid of some of the AI-ness: avoid adjectives that might seem overly enthusiastic or embellished. I do like embellishing, I do like enthusiasm, but AI takes it to a whole new level. You can read the rest of this in your own time, but you can see it's a mishmash of things I've read on the internet and things I care about and value. So that tries to come up with a really thoughtful style, and usually I would run it alongside any prompt, although it's baked into my custom instructions as well. Let's see what that comes back with; hopefully it's a bit more in my style, although the other one, since it used my custom instructions, did a pretty good job anyway. You can see here it's taken a totally different writing style: instead of lots of bullet points, which is a standard part of my custom instructions, it's decided to write in long paragraphs, with much more of a flow to it, much more natural sounding. So actually, you know what? I prefer the first draft, the really structured, just-give-me-a-list version. But again, it depends what it is you're trying to achieve. Right, let's start a new chat, and this time, instead of summarization prompt one, let's try a second summarization prompt. This is super nerdy and was copied off (apologies, I don't know who it was) someone on the internet. This time it will generate increasingly concise, dense summaries, and it loops through this process five times: first write a summary, then check what's missing from the summary, then rewrite the summary, trying to squash back in things that were missing from the original. So it iterates through a loop. It's dead fun.
And it outputs in the format of a code block, so it's a bit easier to see what's going on, although you don't need to. So, here is a summary; then what's missing (like, it didn't have these concepts in it); so it wrote a denser version; then it said what's missing from the original transcription; then it wrote a denser version; then it said what's missing; and then it wrote another dense version. And there should be one or two more, I've lost count, and these are increasingly dense versions. As I say, it's written in a code block, so you'd have to scroll to the right to read each one, or copy and paste it somewhere else. Oops, now I've just changed the tab by accident. There are all the summaries. And if you want to read it, let's copy this and paste it here so we can read it a little. So here's the densest of all the summaries: "David Boyle, speaking with Henrik and Jeremy on their AI podcast, covers language models in decision making and audience insight. He discusses using AI for transcribing qualitative research, outlining the four P's for effective AI use and likening AI to an electric bike. Boyle's insights into a consumer goods company showcase AI's role in innovation." Pretty good summary, albeit short. And if you read through the previous ones, you should see each one is slightly wafflier than the last, before it gets to the very dense one. The person I stole this off, I forget who it is, apologies, said that they preferred, or in tests people preferred, about level three. So not too dense, but reasonably dense. So that's kind of fun. If you really want to squash as much information into a small number of words as possible, that's quite a good way of summarizing it. But let's try some other methods. That was summarization shortcut two for me. Let's try number three. This is a summarization for different reasons.
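The "increasingly dense summaries" loop David describes (write a summary, check what's missing, rewrite folding it back in, five times) can be sketched as plain code around any ask-the-model function. Everything here is an illustrative assumption; `ask` stands in for whatever model call you actually use:

```python
def chain_of_density(transcription, ask, rounds=5):
    """Iteratively densify a summary: write one, ask what's missing from it,
    rewrite it folding the missing points back in, and repeat.
    `ask` is any callable that takes a prompt string and returns model text."""
    summary = ask("Summarise this transcription:\n" + transcription)
    versions = [summary]
    for _ in range(rounds - 1):
        missing = ask(
            "List salient points from the transcription that are missing "
            "from this summary:\n" + summary
        )
        summary = ask(
            "Rewrite the summary at the same length or shorter, folding in "
            "these missing points:\n" + missing + "\n\nSummary:\n" + summary
        )
        versions.append(summary)
    # David notes readers tended to prefer a middle version (around level
    # three), so returning all versions lets you pick the sweet spot.
    return versions
```

The single prompt David pastes asks the model to run this loop itself in one response; the explicit loop above is just an equivalent way of seeing the structure.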
This one is: help me understand myself, and also challenge me to learn and think and see the world differently. I'm not afraid of tough feedback. It's better for you to be bold and wrong than timid, unheard, and correct. Write in a way that makes your response useful as a memory aid and a record of who I am and who others are, how I and others behaved, and what I could do differently to improve as a person. And then there are, you know, three sub-parts to the prompt, which we'll see in action in a second. So this is geared around saying: all right, I've had this conversation, we talked about some stuff, you know, what can I learn from it? And I think this one probably needs a bit of polishing actually, because it's a pretty new one; I've not used it a lot. But I had it output in a very specific format, with the idea that I would gather these as a knowledge base to help me to learn. So: very specific format, date at the top, then overview, then analysis, all the headings. So this is an analysis of a transcription of a conversation between all of us. A brief overview, then the analysis: David passionately discusses how AI can revolutionize decision making. Very good. Your interaction shows a deep intellectual curiosity. Well, sometimes. And a willingness to challenge existing paradigms. Hell yeah. This trait is vital for continuous growth. The conversation indicates your readiness to adapt to new technologies. Yeah. Adapt or die. Openness to change, blah, blah, blah. What's missing? Here we go. This is sometimes interesting. Connecting to personal experiences: there was less emphasis on how these could be applied to personal life or self-improvement, which is exactly what we're doing right now. So, yay. Exploring this could provide a more holistic understanding of AI's impact. Yes, very much so. And we probably should have mentioned it on the podcast. Number two: human-centric perspectives.
The conversation heavily focuses on AI and technology; incorporating more human-centric perspectives, like the impact of AI on team dynamics and personal relationships, could provide a more balanced view. Yes, it could. Good point. There's lots to talk about on that topic. We could do that in a follow-up episode. Third piece of information about what's missing: feedback loops. It notes the limited discussion of the feedback mechanism between users and AI systems, highlighting the importance of iterative learning and adaptation in using AI. Yeah, a totally valid, useful point as well. Absolutely could have discussed that on the podcast. So: some helpful analysis of how I behaved and what I focused on, and some helpful evidence about what I could have done differently. Very nice. All right, let's copy that and try the next prompt. That was, I think, three. Yeah, so let's try four next. This is dead simple as a prompt: I'd like to turn the above into a short article. Please write this for me using the above transcription as a basis, and stick as closely to this content as you can. Restructure it to make it clearer and flow better. Use headings to break up the text. And since we care about the writing here, rather than the thinking, as some of the other prompts were geared around, I'm going to also add my tone-of-voice prompt here to have it respond in a style of writing that hopefully is closer to what I want in the world. Let's see how it does. Unveiling the Potential of AI Decision Making: Insights from David Boyle. Sounds fancy. In a world increasingly influenced by AI and data analytics, understanding the potential of AI in enhancing decision-making processes is critical. David Boyle, an expert in audience research and data analytics, shares his insights into the transformative role of AI, particularly language models, in various organizational contexts. Ah, pretty good actually. Pretty good.
Usually, without the tone-of-voice prompt, it tries to make you sound like the cleverest genius in the world. And it's not done that. Yay. There is one telltale sign of AI: "in various organizational contexts". But otherwise, you know, pretty natural; a good start. We've got AI's impact on corporate strategy, the four P's broken out and listed, something about empowering decision making across domains, the future of AI, a call to action, and a conclusion. I'm not going to read it through in detail now, but on a glance and a skim-read it looks pretty good to me. A pretty good draft of an article on that topic. All right, let's try the last and final one. Five is my shortcut for this one. This very simply gets you a list of quotes: please identify and list a number of direct quotes from the above that support and bring to life key themes. List the quotes accurately and in the voice of the person speaking, but with slight improvements to clarity and flow where needed. Precede each with the theme, on the same line as the quote, e.g. "on the importance of blah blah". And in my experience it does a pretty good job of finding you useful quotes. You're definitely going to want to fact-check all these quotes against the transcription. You're definitely going to want to use your brain to see what's missing and maybe add some things in; and some of these won't be useful, so take them out. But it's a really good start at finding a list of quotes from a piece. So, for example: on the use of AI for decision making, "I'm obsessed with how organizations make better decisions, and in the last year it's been using language models." That sounds a lot like something I said yesterday. Excellent. On the necessity of clear articulation in AI prompts, "if it doesn't deliver a response that you regard to be useful, it's because you weren't clear what it was you wanted." Yes, that definitely sounds like something I said as well. "It's like a bike for the mind."
"You've got to steer the thing." That's a poor version of a quote. I actually describe it as being an electric bike for the mind, and I think there's a much richer following sentence about the impact on you. So I would say that one's not very good. On facilitating AI epiphanies: "show them an example in a domain where they're an expert. Do a field prompt first, then critique the response." No, I get what it's saying. The first part is correct, but "field" is wrong, which is probably a transcription error from Otter. And ChatGPT is usually very good at saying, huh, that word doesn't fit, so I know what they meant. But it's done a bad job here. So it's the right quote, but some of the wrong words. As is this one: it's the right quote, it's a powerful quote, but it's not the right wording. So, thank you for pulling these out. They are the quotes I want, but I'm going to have to edit them myself. So: two perfect ones out of the four that we've looked at, and two that need some work but are really good. That's a pretty fair summary of how the direct quoting works here. It's much better at finding underlying themes and patterns than it is at finding direct and precise facts, or quotes, that kind of thing. So anyway, I put all these prompts into a doc. We'll share the link to the doc as well. You should tweak them all to make them do what it is that you want. And definitely take my advice on setting up keyboard shortcuts to make these much easier to deploy day to day. But typically that's what I'll do after an important meeting, or a thoughtful meeting, or a podcast, or anything like that: throw the transcription into ChatGPT and just spend 10 minutes or so pulling out some of these summaries and noodling on it a little bit, reflecting on it, learning a little bit. Seeing ChatGPT summarize what I said always makes me understand it more clearly than I did when I said it in the first place. So, incredible learning opportunities.
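The shortcut workflow David recommends amounts to keeping the five prompts as reusable templates that a keyboard shortcut or text expander can paste. A rough sketch of that idea, with the shortcut names and prompt wording paraphrased for illustration (these are not his exact prompts):

```python
# Five reusable summarization shortcuts, paraphrased from the walkthrough.
# A text expander or hotkey would paste build_prompt(...) into ChatGPT.
SHORTCUTS = {
    "sum1": "Summarise the key points of the transcript below as a bullet list.",
    "sum2": "Generate five increasingly concise, dense summaries of the transcript below.",
    "sum3": ("Help me understand myself and challenge me to think differently. "
             "Analyse how I behaved in the conversation below, what I focused on, "
             "and what I could do differently to improve."),
    "sum4": ("Turn the transcript below into a short article. Stick closely to the "
             "content, restructure it for clarity and flow, and use headings."),
    "sum5": ("Identify and list direct quotes from the transcript below that bring "
             "key themes to life. Precede each with its theme on the same line."),
}

def build_prompt(shortcut, transcript):
    """Combine a stored shortcut with the pasted transcript."""
    return f"{SHORTCUTS[shortcut]}\n\n---\n\n{transcript}"
```

The point of storing them is the ten-minute ritual: after a meeting, paste the transcript once and run each shortcut in turn.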
And if you think back to the world pre-ChatGPT, you know, I almost never did that. I love taking paper notes, I always do that, but very rarely would I reflect on them; very rarely would I spend time with them. And now, because this is fun to do, it feels like cheating to have that level of insight into a call. I do this regularly and really love it. I hope it's useful to you. Yay.
[01:09:32] Jeremy Utley: David, that's incredible. Useful? I'd say so. It's incredibly useful. We love hearing stories like that: how you take a process that someone used to avoid because of the drudgery of it, and you turn it into something that almost feels naughty. We would end this episode by exhorting you, admonishing you, dear listener: find an application of generative AI that feels naughty to you. Not that it's bad, wrong, or illegal, but that it's so fun, and feels so magical in terms of its ability to expand your capabilities, that you find yourself feeling a little bit guilty that you get to use this world-changing technology. That's the kind of thing that we want to be bringing to you on Beyond the Prompt. So if you have experiences with it, if you want to share with us, we would love to hear from you and to hear what kinds of stuff you're experimenting with as you go on your own journey of exploration. Have a great day. Thanks for listening.