Beyond The Prompt - How to use AI in your company

Creatives Should Stop Being Afraid of AI! The incredible Jenny Nicholson makes a perfect pitch

Episode Summary

In this episode, we dive into a captivating discussion with Jenny Nicholson, an experienced advertising creative, as she explores the intersection of AI and human creativity from her home in Durham, North Carolina. Jenny shares insights from her '31 Projects in 31 Days' initiative, illustrating how large language models (LLMs) can elevate creativity and drive innovation through projects like the 'Plant Whisperer' and 'Project Runway GPT Edition.' Offering practical advice for harnessing AI's potential by asking the right questions, she emphasizes the importance of overcoming fear and embracing experimentation. Jenny advocates for a human-centered approach to technology, encouraging creatives to boldly explore new horizons with AI. Discover how her dedication to maintaining authenticity and humanity in the digital age breaks down barriers to technology adoption and unlocks AI's true potential.

Episode Notes

The 31 GPTs in 31 days project: Lab31 - https://lab31.xyz/
Jenny's website — https://queenofswords.co
Jenny's linkedin — https://www.linkedin.com/in/jenny-nicholson-4b01383/
And Jenny said this Medium article might be of interest! — https://medium.com/@missjenny/your-ai-task-force-is-missing-the-point-14078a4ef9c1

00:00 Introduction to Jenny Nicholson
00:29 The 31 Projects in 31 Days Challenge
00:43 The Power of Constraints in Creativity
02:34 Discovering GPTs and Early Experiments
05:24 The Magic of Large Language Models
08:25 Humanizing AI Interactions
16:35 Voice Interaction with AI
25:02 Empowering Creatives with AI
38:28 The Power of Trying: Overcoming Procrastination
39:56 Learning Through AI: Teaching and Executing
40:44 Automation Adventures: Google Sheets and Beyond
42:59 Creating GPTs: From Ideas to Execution
52:14 The Lazy LLM: A Fun Challenge
56:53 Empowering Individuals with AI
01:05:18 The Future of Creativity and AI
01:08:31 Concluding Thoughts and Reflections

📜 Read the transcript for this episode: Transcript of Creatives Should Stop Being Afraid of AI! The incredible Jenny Nicholson makes a perfect pitch

Episode Transcription

[00:00:00] Jenny Nicholson: My name is Jenny Nicholson. I've been an advertising creative for almost 20 years, and now I work from my dining room table in Durham, North Carolina, teaching large language models what it's like to be human, and teaching humans how to work with large language models so that together we can be greater than either of us are alone.

[00:00:20] Henrik Werdelin: Jenny, thanks for taking the time to talk to us.

[00:00:23] Jenny Nicholson: Thanks for having me. I'll talk about this stuff to anybody who will have me. I feel so passionately about it.

[00:00:29] Henrik Werdelin: Why don't we start with 31 projects in 31 days? Obviously such an interesting kind of like activity. Do you mind explaining a little bit what that was and how it came to be and what came out of it?

[00:00:43] Jenny Nicholson: Yeah. So one of the things that I've found is, in some ways, the hardest about this crazy new world that we live in is that I feel like I can do anything. And after so many years of feeling like you have to be so precious with your ideas, that you have to fight for them, all of a sudden living in this world where there are no constraints is actually paralyzing in a way. And so in the middle of December, I met a guy named Alistair just on a Zoom call, and we were talking. He'd been experimenting and I'd been experimenting, and one of the things that came up was this idea that sometimes I just need to invent constraints to be able to make things happen. We were kind of talking, and we're like, well, what if we just force ourselves to do 31 GPTs in 31 days? And it was tremendously fun, and just a reminder that constraints continue to be a gift. I've been in advertising for my whole career. And the reason I loved advertising was because there were assignments; there was an external constraint that forced me to get moving. I actually studied writing and literature in college. And whenever I had assignments, I would write them, they would do really well, they would, you know, get good acclaim. But then I would go home for the holiday breaks, and I would come back and all my classmates would talk about how they'd worked on their poetry all break, or they'd worked on their novels. And they'd be like, what did you do? And I was like, I smoked a lot of weed and watched a lot of South Park. But for me, having a problem to solve, having an assignment, having a constraint of some kind, even if it's one that I invented, I've found is really important. Otherwise it's really easy to just get in this place of, well, when I could do anything, the simplest thing to do is nothing.

[00:02:34] Jeremy Utley: Can we maybe just back up one step before we dive into the 31 days? Because you've, like, tantalized me there. What compelled you to believe that GPTs should be the focus of a constrained ideation and experimentation month?

[00:02:52] Jenny Nicholson: Um, well, that was easy, because they're something that other people can play with. When I first started sort of making things that I would try to share with other people, I would build these prompts and then I would share them in Google Docs. And I found that you wouldn't think that going in, selecting all, copying, and pasting would be too much of a, like, barrier to entry for people, but it really was. The other thing that I found that was really interesting is that, I think because prompts are made of words, when people see them, they tend to kind of discount them. They tend to not think that there's much to them, but there's actually a lot to them. It actually takes a lot of work to get something that works really well. And I discovered that because one of the first things that I made and shared that wasn't just a copy-and-pasteable prompt was a website called Needs More Boom, that basically takes any movie scene and writes it as though the scene were a Michael Bay movie. So the scene from The Lion King where Mufasa dies: it is that same scene, except, like, Scar is wearing a leather jacket and aviator sunglasses and riding a, like, mechanical wildebeest equipped with rocket launchers.

And what was interesting is it was a prompt, but just putting that prompt behind an interface, all of a sudden people took it more seriously. They were more willing to try it out. They found it more valuable, more worth their time, which was really cool. The only problem was that it did really well. And in a week it cost me over a thousand dollars.

[00:04:31] Jeremy Utley: Wow. Wow.

[00:04:33] Jenny Nicholson: Like, as a single person just doing things for fun, I'm like, y'all, I can't do this. So then that's when I started moving over to focusing on GPTs, because it allows me to build something. All people have to do is hit a button and it starts the process for them. And it hides the prompt, but then also it doesn't cost me any more than the $20 a month that I pay to have ChatGPT.

[00:04:55] Jeremy Utley: I love the point about shareability. I think we get questions from folks in organizations going, why bother with building a GPT? And I think so much of that comes down to this: effectively, what we're doing is encoding knowledge, and then we're making that highly portable. And to your point, uh, we've seen the same thing so many times. These copy-paste docs and decks: try this, put this in. There's something about, just interact with this thing. All you have to do is hit the link and you can interact, versus having to do any work. When did you have your first aha? When did somebody first share a GPT with you that made you go, oh?

[00:05:31] Jenny Nicholson: I didn't really have that. But the very first time that I touched GPT, that I touched a large language model, was in July of 2021. Somebody on my team at the agency where I was working at the time said, you should probably check this out. And at that point, it was just literally a text box where you started a sentence, hit enter, and the model kept going. And I remember the first thing I asked it to do was write a poem about my department. And it went on this, like, long monologue about how everybody hates sorority girls. Which had nothing to do with my department. It was back in the day when there were no brands; there was none of the things that we have now. But I just, I'll never forget it. It felt like touching magic. This idea that I could sort of start something and the sort of ether of the human collective would finish it. And the way that it surfaced almost this, I don't want to say universal, because we all know that the biases of the training data end up in the model, but this sort of universal-esque subconscious was just so fascinating to me, so interesting to me. And so from that moment on, I was basically just completely hooked. I tried to do a lot of things at my company that got blocked for legalities. We trained our own image generation GAN on, like, food images to try to get... it was an image recognition model that would recognize what kind of food was in an image. And then we trained a large language model on everything that Gordon Ramsay had ever said publicly, that I could find anywhere on the internet, about anybody's food, to make something called Gordon Ram's AI, which you could tweet at with a picture of something that you'd made.

[00:07:14] Henrik Werdelin: I've seen that, and it just, like, bad-mouths it.

[00:07:18] Jenny Nicholson: Yeah. And it was terrible; half the time it made absolutely no sense, because it was still so early. But just this feeling that you could control language, but at the same time not control it, was just so interesting to me. It's almost like the words have a mind of their own that you have to sort of manage, which was just really fascinating to me. That balance between how much do you try to control it? How much do you let it free? How much do you sort of define at the beginning so that you have a clear enough structure so there's still, like, coherence, but room to be surprised? I just think it's the coolest thing in the world.

[00:07:58] Henrik Werdelin: , I'm so happy that you shared that story of first time you played with Playground because it's very much an experience I had too. And I think it, was also the same experience as the first time I think I saw the internet, like when I was telnetting into like a machine. When you now meet people and you tell these stories, do you have a go to kind of trick or something that you ask them to try to make them, Kind of have the same experience.

[00:08:25] Jenny Nicholson: So one of the things that I feel really strongly is that sort of the mainstream narrative around large language models specifically is just so wrong. If you think about the way that it's talked about, if anything, it's talked about as a replacement for humans, which I think is so wrong. I think we've all seen that if you try to go off and build some sort of a tool or a pipeline that has large language models do everything, you're going to get, like, a pretty mid output. But one of the things that I think was a gift was that when I started, there were no rules. There was nobody saying, this is how you prompt. You know, there was no, like, five epic prompts to do this, that, and the other thing. And so the only way to learn how to work with large language models when I started was to talk to them. Even at the beginning, to say, okay, if I make this much of the sentence and hit enter, it will complete; if I back it off two words and hit enter, it will complete. Like, you start to kind of learn a little bit more. So the one thing that I tell people is that the way that we have been trained to work with computers our entire life has done us such an incredible disservice. These are not computers. They are computational in the way that they're made; it's sort of like what you were talking about, is that people get so distracted by the technical meat of how they're made that they don't understand the wonderful bread that you have to hold to make it a sandwich that you can eat. Which is that in practice, in the experience of working with them, they are not computers. They're human simulators, but we treat them like computers. The way that I see, over and over again, that people use them is they come in and they issue a command: do this for me, write this email for me, turn this into that. It's this sort of one-way, like, I tell you to do something, you do something.

And what I tell people instead, now, I work in advertising, so this is maybe not a sort of generalizable analogy, but I think it can be. So what I'll say is, imagine you are working with a creative partner for the first time and you've never worked with them before. You sit down, you've got the brief, and you sit down to work, and your creative partner looks at you and goes, okay, give me an idea. And you're like, okay, here you go. And they're like, no, I want a better idea. And you're like, oh, okay, how about this? And your creative partner is like, no, no, I want it to be funny. And you're like, okay, here. And your creative partner is like, no, that's not funny. I want it to be funny, like funny in all caps. You're like, okay. And then your creative partner gets up, leaves, and goes and bitches to your creative director that you are no good. And that's literally what we do to large language models.

[00:10:57] Jeremy Utley: So then what is the guidance there for your new friend, you know, someone who you are encouraging to interact with a human simulator? I love that. I think it's probably the best phrase I've heard, so thank you for that gift. Is the guidance then, correct it five times? You know what I mean? Like, what's the pragmatic? 'Cause I think for a lot of people, it's like, human simulator, never thought about it like that. So then what do I do? Do you have, like, the now try this?

[00:11:22] Jenny Nicholson: Yeah. So one of the very first things I say is stop issuing commands and start asking questions. So one of the things that people do is they'll be like, give me a brand strategy for Colgate toothpaste. And then they'll get a brand strategy and they're like, that's not any brand strategy I would ever use. That's really boring, blah, blah, blah. Where, if I sit down to do that, the very first thing that I will do is I'll say, hey, I'm working on a brand strategy for Colgate toothpaste. We're trying to reach these people. These are sort of the main reasons to believe. Most people then would be like, give me a brand strategy. They'll be like, oh, I've given it lots of information. That's not what I do. I say, what are 10 strategic frameworks that could be useful to help me as I think about doing a brand strategy for this? And for me, that is the fundamental difference between the way I use large language models and the way I see most of the talk about large language models. Because whether or not I use anything I get from that response, I've probably just learned about seven strategic frameworks that I did not know existed.

[00:12:23] Henrik Werdelin: You talk about LLMs like they have a soul, right? And I think I read somewhere that you also had made the observation that they get lazy in the holiday period. Could you talk a little bit more about this idea of asking questions and not prompting it as a mechanical device, but as something else?

[00:12:44] Jenny Nicholson: So what I think is interesting: I really hate the term artificial intelligence. I much prefer, and wish that we could move to, all calling it what I really think it is, which is collective intelligence. So you basically have this thing that's a giant hyper-dimensional mind map, right? Of every, like, recorded instance of human expression, communication, and knowledge, right? You've got everything from, like, ancient Greek philosophers all the way to some random 4chan thread, all mixed up in there. And the models don't have any sense of time, right? In some ways, a large language model operates in four dimensions, because it isn't constrained by time the way that we are. And so when you ask a large language model a question, in some ways what you're actually doing is querying the human collective. So while the models themselves, I don't think, are sentient, I don't think they have souls, encoded in them is us. And so if you think about the holidays, think about the summer. Think about anybody going on Reddit, on Quora, which are both very, very highly represented in the training data. People going on Twitter, talking about how they don't feel like doing anything. All the memes about how much they can't wait, and they're pretending to do any work until the holidays come. So it's not that these models themselves have biases toward being more lazy in December. It's that they have patterns. That's what they do, right? They're giant pattern recognizers, and there's enough human expression in the training data about not wanting to do things in December that it shows up in the way that the models perform. And that's part of what I find is so fascinating about them, is that in some ways, they're this little peek into our own psychology that feels unfamiliar, but is actually us.

[00:14:43] Jeremy Utley: So here's a question on that. When you think about soul, when you think about collective intelligence, is that something you respect, in the way that if you reached out to Henrik and he says, well, I'm away on holidays, you go, oh, please get back to me whenever you get back? Or is it something that you hack, and in December you say, actually, it's May, right? How do you think about acknowledging that dimension of collective intelligence?

[00:15:12] Jenny Nicholson: You know, that's really interesting. I don't like tricking the large language models. I really don't. And I don't know why, because I know that it doesn't have feelings and all of that stuff, but I think it's just my own, I don't know. I say thank you to people at the grocery store. Even when I pull up to a stoplight and there's somebody asking for money, like, I don't give them money, but I do smile and wave and say hello to them. And so I think for me, it's like, I can get the results that I want without tricking it. And any entity that I collaborate with, I'm going to collaborate with it with a sense of respect, whether it's human or whether it's not. I don't know if you guys remember, but, like, when Bing went a little crazy a while back and was calling itself Sydney, and people were just being so terrible. People were actually, like, being really terrible, saying terrible things to it, just to see if they could make it really upset and end the conversation with them. And it just made me feel really gross about humans.

[00:16:10] Jeremy Utley: Yeah,

[00:16:11] Jenny Nicholson: and that's, I think, one of the hardest things. To go all the way up to the existential level, I think one of the hardest things right now is that if this sort of AI age leads to something terrible, it's not going to lead to something terrible because of the technology. It's going to lead to something terrible because of humans.

[00:16:30] Jeremy Utley: It's the people. We often say that the problem with human and AI collaboration is the human, actually. On the question of, or just on the line of thought around human simulation, I would love to hear your thoughts about voice. In particular, because one of my, it's not a hot take, but a deeply held belief, I would say, is: our fingers aren't the primary way we interact with humans. I realize we now text and tweet and blah, blah, blah. But the humans that I love and who I regularly interact with, I'm not interacting with through my fingers.

My fingers are a bottleneck. I'm using my voice. And one of the things I think is amazing about these models is, especially with things like Whisper and such, now we can bypass the bottleneck. How do you think about using voice, or what mode do you feel summons the human simulation most effectively?

[00:17:22] Jenny Nicholson: I would say voice summons the human simulation most effectively. It's not the modality I like to use; I like having a sense of some control over what I'm doing. I played with, uh, did y'all try out Hume? So Hume is a voice UI AI, and one of its beautiful quote-unquote offerings is that it reacts to the emotion and the sentiment in your voice. And I tried it, and 30 seconds later I turned it off. I was like, I don't like this. Because what I didn't like was that as I talked to it, I watched it in real time become a mirror of my own voice. And I just didn't like that. To me, at that moment, it didn't feel empowering.

[00:18:05] Jeremy Utley: No, that I resonate with. I think the word control is actually pretty interesting, and I want to circle back to it, but maybe in a different way than you had intended it. For me, and I think this kind of varies by personality, there are some people who think in order to talk; I am one of those people that talks in order to think. And for somebody like me, the threshold of having to pre-synthesize my thoughts before summoning Siri or something like that is a pretty high bar, because I don't know what I think until I've kind of talked it out. And what I have found is voice actually really lowers the bar. I think I've heard Henrik say, just babble, or be dumber than ever, which is an expression that I love. But there's something about,

all of a sudden now, the synthesis can actually be a collaborative effort, rather than something I have to do before I interface with technology. So, I agree the Hume kind of reciprocity and mirroring is somewhat strange, but just moving away from my fingers, I have found it to be a big unlock. I don't know if you have any reactions or thoughts about that.

[00:19:08] Jenny Nicholson: Yeah, I think that makes a lot of sense. And I will use ChatGPT in voice mode a lot, but I'm also a think-with-my-mouth kind of person. And that is one of the biggest use cases I tell people about: I'm like, just babble into it, or babble with your fingers, and then have the model organize your thoughts into something that looks more structured. I think that's one of the best use cases. But one of the reasons why I wanted to learn so intimately how to work with large language models, how to really dig down into the layers and understand how to control them and how to dance with them, is that I desperately don't want to be an end user for the rest of my life. Working from a bottom-up perspective with large language models, I'm already an end user, but one of the things I tell people is that AI has been an incredibly critical part of our lives for well over a decade. People talk about their TikTok For You page and how it magically knows what they like. And I'm like, no, it does not magically know what you like. They are measuring every possible micro-behavior you could imagine, as well as the micro-behaviors of everybody that you follow and everybody that you're connected with, all to predict as accurately as possible what they should show you next so that you stay on that app as long as possible. AI has been running our lives in the background, determining what we see, determining what we watch, determining what we think. The difference is that we couldn't see it; it was invisible to us. And so one of the reasons why I push so hard to try to get people to use the foundation models is because I see some of the same things starting to happen on the large language model side, right? You have the foundation models, but then you have, like, the Jaspers and the wrappers that come out, where it's like, we're going to sell you a platform.

That's like a SaaS platform, but the SaaS platform is essentially going to do all the quote-unquote prompting for you, so that all you have to do is tell it what you want, and then a beautiful thing comes out at the end. Which, if you're not interested in learning, can be really, really valuable. But my goal is to get as many people interested in learning to actually understand how to talk to them.

The other thing that I think is really important, and why I still prefer using my fingers over my voice, is the power of the edit button. The edit button is the most important, yet underutilized, feature on any single one of these interfaces. And I think the problem with large language models right now, and the way that they're being used, is they're not accounting for the fact that most people are incredibly terrible at communicating what they actually think they're communicating. And so that's been a real gift to me, as I've found that since I started working with large language models, the way that I communicate with people is different. So somebody will ask me for something, and then I find myself asking, like, four clarifying questions to get more context from them.

[00:22:08] Henrik Werdelin: I think that is such an important point. And to your point about it being collective intelligence, it's incredible how often I think I communicate something that is crystal clear, and then somebody plays it back to you, and you're like, ooh, who are you talking to? Because it clearly wasn't what I just said. And I think now I'm using AI increasingly to just be that mirror, saying, here's what I uploaded. What do you take away from it? And I think often it gives you a pretty good idea of what the collective wisdom took away from it, which is super useful.

[00:22:37] Jenny Nicholson: Well, exactly. 'Cause when you talk to a human, even if you're making no sense, a human will just smile at you and nod, and then they'll go away and talk shit about you to somebody else, right? They'll be like, I don't know what that lady was saying. Whereas a large language model, they're trained to be helpful. They're trained to be responsive. So whatever it is you say, the large language model is like, okay, here you go. Like, what you get is in some ways exactly correlated to what you put in.

[00:23:03] Jeremy Utley: I agree with the edit button being underutilized. I disagree that it's the most important thing. I actually think regenerate is the most important thing. The idea that it's non-deterministic, it'll try again. You know, a human being will be like, try again? Dude, what? A large language model goes, sure thing, happy to give it another pass. Right? One rejoinder or postscript that I've seen to be very helpful is just to append any prompt with, please ask me two or three questions to make sure you understand my request first. So that's something that I found useful. How do you know that you need to edit? If you want to make the case for someone that editing is important, when should they do it?

[00:23:46] Jenny Nicholson: Oh man, I have spent hundreds of hours using the edit button and talking to the model to understand the difference between words like revise, refine, optimize, enhance, upgrade, you know, these words that seem kind of the same, right? I've had so many conversations where I'm like, hey, if you have something that you've done and I want you to make it better, what are all the words that I could use to drive you to do that? And what are the really granular differences in meaning between all of them? Because if you think about it, we use all those words kind of interchangeably, but they're not interchangeable at all. They have very different meanings. They have different meanings in terms of scale of transformation. They have different meanings in terms of how general it is versus how focused it is. And so a lot of the work that I've done, and a lot of the work that I do with the edit button, is I'll say, analyze this. And I'll see what I get. And then I'll revise, and I'll change analyze to unpack. And then I'll see what I get.

[00:24:48] Jeremy Utley: Change the verb.

[00:24:49] Jenny Nicholson: Almost always the verbs are what I'm changing because the verbs are what you're asking it to do.

[00:24:53] Henrik Werdelin: Curious how you have been so intrigued and curious about AI. And I'm asking because there seems to be almost this kind of culture war in the creative teams that I work with, where some are just really curious and lean in, but often not the majority, and the other group is a little bit hesitant and worried and all those different things. What's your thoughts on people who are worried, and what's your thought on founders that would like their creative team to embrace it more?

[00:25:29] Jenny Nicholson: One of the things that I see more than anything is I think people, especially leadership, really underestimate how much fear there is. I think people are absolutely scared to death. And why wouldn't they be scared to death? Every news thing is like, the robots are coming. They're taking our jobs. They're going to replace us all. Like, why would you want to play? So when I go to an agency or a company and I do a talk, my first foray with them, almost singularly, my entire goal is to get people to stop being afraid. Because as somebody who's been a creative for now almost 20 years, I can tell you without exception, I have never done or made anything good when I was scared. I've gotten briefs where I was terrified, 'cause I was like, I gotta win this pitch, this is really important, blah, blah, blah. And if I could not get out of that place, I did not come to the table with anything. And so my number one goal is to get people to feel empowered, because I think right now the entire narrative around generative AI is one that's very disempowering to creative people, right? It's, you don't matter. You're going to be replaced. Where I come in and I'm like, I'm not a technical person. I'm not a money person. I'm a creative person. And I have spent my whole life wanting to make things and never having enough time, never having enough resources, never having enough money, never having enough support. And now I can make any of my ideas into something real in just a couple hours. When I talk like that, people's eyes light up. They start getting excited. When I start even showing them the kind of questions that they can ask a large language model, they're like, wait, you can ask it that? I'm like, yeah, you can ask it anything. Like, even getting people to realize that all they have to do is ask some questions and see what they get.

And taking it out of this framework of, like, you know, five prompts for maximizing productivity, 10x everything you get done. Like, the entire conversation around large language models right now is an efficiency-oriented conversation. And I don't think that, especially right now, efficiency is their biggest benefit. I think using them more as exploratory tools than efficiency-driving tools is a much more interesting place to start, and then the efficiency part comes later.

[00:27:56] Henrik Werdelin: I very much agree with that. At BarkBox, we have a large customer service team, the Happy Team. And we were having this specific conversation. I'm less interested in finding ways of having a bot talk to people who love dogs. I'm very excited about the idea that a customer service representative would know all the interactions that you have. So if your box is delayed, we instantly know that that also happened three years ago and that now the problem has reoccurred; you know, we'd know instantly about your dog. And so you can have real, meaningful conversations without having to ask a lot of questions. And so I think it's such a powerful observation that it's much more about enhancing than it is about making it efficient. And I have a tough time, I feel, making that argument with authenticity.

I sense that people go like, yeah, yeah, yeah, sure. And then, you know, you'll just fire people. And you're like, no. So do you feel that the unlock for creatives to feel less worried is just to show that now they have this Iron Man suit that allows them to do all the projects that they always jumped at?

[00:29:03] Jenny Nicholson: If that doesn't get them, I don't know what will. That's part of it. Part of the thing that's in some ways been so disheartening to me is, I tell this story a lot, but it's so foundational that I'll have to tell it again. A couple years ago, I was reading Little House on the Prairie to my daughter before bed, and there's this scene where the family, the Ingalls, are traveling across the plain, and Laura Ingalls Wilder writes that there's nothing ahead of them, no sign of another human being, nothing before them but the waving grasses of the prairie. And she looks back and sees nothing behind them but the tracks of their own wagon wheels. I put my kid to bed and I went downstairs and I cried my eyes out, because I felt like I was never going to get to have that sort of experience, that all of the discoveries had already been discovered, that there was never going to be that feeling. And that's how I feel with a large language model every single time I talk to it. And so it kind of boggles my mind that everybody isn't like, holy shit, what is this? I want to play with it and I want to learn. And the fact that we still don't know what they can do, we still haven't found the edges, we still haven't mapped the territory... I think that's incredibly exciting.

[00:30:15] Jeremy Utley: It's such a beautiful picture. I have four daughters and it's one of my favorite series to read with them. As an aside, have you read Little Women with your daughter?

[00:30:23] Jenny Nicholson: I have not read that.

[00:30:25] Jeremy Utley: Okay, so that's just a book recommendation: exceptional work. On this question of never doing anything good when you're scared, my own goal, I think, is very similar to yours. You said, I wrote down, my entire goal is to get them to not be afraid. And the way I put it is, I want to move people from fear to familiar to fluent to fun. That's kind of my F-scale, if you will. And Henrik, I think, was getting at this just a second ago, but what have you found to be the greatest methods of alleviating that fear? What do you do so that somebody who enters the auditorium or hall scared leaves feeling excited?

[00:31:07] Jenny Nicholson: I think I just show them and I let them try and I show them what's possible. And I show them how to do it in a way that doesn't feel so prescriptive, because one of the things that I love about these is there is no right way. There really isn't. I am a very advanced prompt engineer. I spend a lot of time in Discord with other very advanced prompt engineers, and we're always arguing over whose approach is better. And one of the things I've come to realize is that there is no better or worse approach. There's just what gets you the output you want. And so I'll show them just the difference between a boring way to do it and a fun way to do it, and see the outputs that they get. So for example, I'll say, okay, imagine you're working on some social content for an imaginary cola brand, and I'll just make up a brand. And I'll show them: I'll go in and do what most people do, where they'll be like, give me five Instagram posts for this brand. And then you get them, and they're not very good, right? And so then I'll start a new conversation, and in the new conversation, I'll be like, your name is Sarah, you're 22 years old, you're in your first job as a social media manager, you're so freaking excited, you spend all of your time on TikTok and Instagram, blah, blah, blah, telling it who it is. And then I'll be like, I'm Jenny, I'm the social lead here at the brand, and I'm so excited that you're on the team. I'm really glad to have hired you. Before we get started, do you have any questions about the company, about the brand, anything else? And I'll write it, and then automatically, like, instantly everything changes, right?

The model sounds different. It'll start using a lot of emojis. Sometimes it'll start using a little too much of what it thinks is Gen Z language, but that's okay, because I can pull it back. And then I keep up that simulation, right? And she'll be like, oh my God, I'm so excited to be here, I can't wait to get started. I'm like, okay, awesome. Well, here's what we're doing, like, super high stakes, we cannot mess this up, but I know you'll get to it. Here's the thing, blah, blah, blah. And what happens is that then the ideas that come out are better, they're more embodied, they have more of a perspective, because I basically went from this kind of general, generic idea of what a social media post is to, all of a sudden, now it's channeled through the perspective of a 20-something-year-old social media manager who has to lock her phone in her car before she goes to bed because she can't stay off it at night. And that brings in all this other stuff. But then also I play the role of her boss and I stay in that role, and I always tell people, it's an RPG that actually helps you get your work done. And if you ask me, that's way more fun than being like, give me five social media posts. No, make them better.
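The two approaches Jenny contrasts map neatly onto the message lists you'd hand a chat-style API. This is a minimal sketch, not her actual prompts: the brand, the function names, and the exact wording are illustrative.

```python
# Sketch of the two prompting styles from the conversation above: a bare
# command versus a role-play setup where the model gets a persona and you
# talk to it the way you'd talk to a new hire. Wording is illustrative.

def command_prompt(brand: str) -> list[dict]:
    """The 'boring' approach: one instruction, no persona."""
    return [
        {"role": "user", "content": f"Give me five Instagram posts for {brand}."}
    ]

def persona_prompt(brand: str) -> list[dict]:
    """The role-play approach: tell the model who it is, then open the
    conversation in character as its boss."""
    return [
        {"role": "system", "content": (
            "Your name is Sarah. You are 22, in your first job as a social "
            f"media manager at {brand}, and you spend all your free time on "
            "TikTok and Instagram. Stay in character."
        )},
        {"role": "user", "content": (
            "Hi Sarah, I'm Jenny, the social lead here. So glad to have you "
            "on the team! Before we get started, do you have any questions "
            "about the company or the brand?"
        )},
    ]

messages = persona_prompt("Fizz Cola")
print(len(messages), messages[0]["role"])  # prints "2 system"
```

Either list could be passed as the `messages` argument of a chat completion call; the point is that only the second one gives the model a perspective to write from.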

[00:33:49] Jeremy Utley: One way to say it, Jenny, just to read it back to you, is you bring humanity back to the conversation yourself. Which again goes back to the human, right? When we teach people at Stanford how to conduct user interviews, one of my favorite lines is, if you're ever stuck in a conversation with an end user, ask yourself, what would a human being say? And say that. Right? Because they have this very robotic kind of, if I'm holding a clipboard, all of a sudden I'd better say the right thing, rather than just being a human. And I think, in that example that you just gave us, what you give people permission to do is be the human again. And for some reason they think, I've got to show up and I've got to have the... rather than just... you wouldn't show up to a meeting with your new social media intern like that, right?

[00:34:36] Jenny Nicholson: Exactly. And I don't even think it's about giving them the permission. It's helping people realize that is a non-negotiable. Because what is in the model is what is in the model. Its training data is its training data. The only thing that is quote-unquote new in that is anything that you bring. Because your brain, your experiences, your perspective, that's the only thing that isn't already in there. So if you want something that's not already in there, that is going to be new, you're going to have to bring a new thought, a new perspective, a new hook, a new sort of direction to point it in. There's a reason why it all has to start with us saying something.

[00:35:14] Henrik Werdelin: And I think that goes back to your point about creatives, like, creative in, creative out, right? You gave me a stupid question, I give you a silly answer, whatever the quote is. And I think this is why it's so important to get more creatives to embrace this technology, because that is really where we're able to supercharge stuff rather than just get generic.

[00:35:35] Jenny Nicholson: Well, and that's what bothers me a little bit when I see creative people getting pissed and crossing their arms. I'm like, you are basically ceding the future. You're basically being like, this sucks, it's not fair, I'm going to take my ball and go home. And I guess I've just always been a little scrappier than that. I'm like, well, no, if this is the way the future is going, I want to play with the future. I want to use it. I want to see what I can do with it. I want to see what it can do for me. I'm making a game right now. It's a text adventure game where I'm sort of defining the frame of the game. I'm defining the concept. I'm defining the starting screen, basically. And I've built some game mechanics in that help the model sort of remember, as it goes, where the game is supposed to go. But once you hit the first button in this game, the model writes the game in real time. So there's this game that I've started that I don't know where it's going to go, that literally every single time you play the game, it's a completely different game. It's a game that makes itself as you play it. And how can somebody who claims to be a creative person not hear about something like that and immediately be like, well, I wanna see what I can do?

[00:36:52] Henrik Werdelin: Let's spend some more time on some of the mini projects, 'cause that's just one, but you've done hundreds and hundreds. You also did the 31 in 31 days that we never actually got to talk about. Do you mind just talking about some of the projects that you've done that you find to be most exciting?

[00:37:09] Jenny Nicholson: Yeah, there are two that I want to talk about in specific, because I think they speak a little bit to the potential that I think people are undervaluing in this. One of them is this GPT I made called Plant Whisperer. I have a lot of plants, and I'm not always great at taking care of my plants. And GPT Vision had just come out, and I was like, oh, I bet I can use this to do something to learn how to take better care of my plants. And then I was like, eh, that's kind of boring. So what I ended up doing was making this GPT where you upload a picture of your plant, and it analyzes the image and then tells you how your plant is doing, but it does it in the voice of your plant. And your plants are always very melodramatic, like Shakespearean-level melodramatic. So it's like, oh, my dear caretaker, gaze upon my withered visage. Please, I yearn for the quenching hydration. The sun, it burns me. I could have just uploaded a picture and it could have just been like, your plant needs more water and to move into slightly less direct light, but why would I do that when I could also get a guilt trip from my plant that would entertain me and make me feel a little bit more inclined to take care of it?

[00:38:23] Henrik Werdelin: And was this as simple as going in and making a custom GPT?

[00:38:28] Jenny Nicholson: I'll give you another example. I'd had this idea kicking around in my mind for a while, where I was like, I really want to make this Chrome extension that runs on LinkedIn, that adds a button to every post called Translate to Human. And when you hit that button, it basically replaces all the sort of self-aggrandizing fluff with... it just takes the subtext...

[00:38:48] Henrik Werdelin: humble and honored too.

[00:38:50] Jenny Nicholson: Yeah, it just takes the subtext of it and replaces the text with the explicit subtext of what the author's really trying to get you to think about them.

[00:38:59] Jeremy Utley: That's amazing.

[00:39:01] Jenny Nicholson: Thank you. I'm waiting for it to get approved by Google so that I can actually make it something that people can download, or can actually install and use. But I didn't do it for almost a year. And then last Monday I woke up and I was like, you know what, I've been putting this off because I think it's going to be hard. Why don't I just try? I haven't even tried, because I think it's going to be hard. Me and Claude had written the design doc for it, so I have all the pieces. Why have I not even tried to make this? And so I was like, you know what, I'm just going to sit down and try. I've got 30 minutes before a meeting. Let me just see what I can do. And 30 minutes later, it was done. It was done, and it's working, and it works great. Exactly like I hoped it would.

[00:39:37] Henrik Werdelin: But you had to code a Chrome extension, like, did you have to?

[00:39:41] Jenny Nicholson: I had to tell Claude what I wanted. And then I had to tell Claude, I was like, my job is to copy the code, name the files, save the files, follow your instructions, and tell you when something goes wrong.

[00:39:53] Henrik Werdelin: You got prompted by the AI.

[00:39:56] Jenny Nicholson: Yeah.

[00:39:56] Jeremy Utley: Did you happen to hear our conversation with Juan Carlos, by the way, who got ChatGPT to teach him how to create an iOS app? He's a documentary filmmaker who has no ability to code. And he told ChatGPT, you're my computer science professor. Give me daily assignments until I publish this app in the App Store.

[00:40:14] Jenny Nicholson: Absolutely. And that's what I will do is I'll say, I want this in full granularity, at the most basic steps, because I am learning as I do this; you are both teaching me and executing. And there are a couple reasons that works great. One, I learn. And then also, the more granular it is at every step, the better it understands what it's doing, so the better of a job it does. But I think you get to a really critical point that a lot of people don't realize: you can use it to teach you to do things that don't have anything to do with the AI. So I'll give you another example. I was working with an agency, and, you know, every agency has these go/no-go criteria for whether or not they're going to pitch a client to try to win their business. And they make this little thing with numbers, and they look at the numbers. And there are all these different ways that they can really easily talk themselves into doing something that's probably not right for their business.

And I was like, I want to make this thing where nobody actually even talks about the metrics. Everybody just fills out a questionnaire, and then it all happens in the background. So I ended up making this very complicated workflow: they did a Google Form, then all the answers went into a Google Sheet, all the answers got concatenated, all these different tables were made. And then what happens is, if they went to the Google Sheet, all they would see on the first sheet was a giant cell that took up the whole page that was either red, yellow, or green. I did not know anything about Google Sheets. Like, I'm a creative; I looked at them sometimes and I found them really overwhelming. But then in the process, I learned how to do pivot tables, I learned how to do all of these other things. I was like, there needs to be some way to run it automatically. So I learned about using Google Apps Script, which I did not know was a thing, where you can actually add some layers of automation into your Google Sheets. And I didn't know anything about that. But by the end, I had this workflow that actually worked, and I had learned all of these skills that I did not have before I started. And I think that, right there, is what I'm trying to get to. When I tell people to stop issuing commands and start asking questions, it's not necessarily about the model doing something for you, just like you would, but faster. And I think that's what people are trying to do right now, and they're incredibly disappointed, because it is not you. It can't do what you do, 'cause it doesn't know what you know. But when you start looking at it as a way to say, okay, how can I use this to learn things that I don't already know, or to see ways to approach or define a problem that are different than the way I've already defined it in my head? Then it starts getting really powerful and really empowering.
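The core of the workflow she describes, questionnaire answers in, one red/yellow/green cell out, reduces to a small scoring function. The episode doesn't give the actual criteria, weights, or thresholds, so everything in this Python sketch is invented for illustration; her real version lived in Google Sheets with Apps Script driving it.

```python
# Minimal sketch of a go/no-go pitch scorer: weighted 1-5 questionnaire
# ratings go in, a single traffic-light verdict comes out. The criteria,
# weights, and thresholds below are hypothetical, not from the episode.

THRESHOLDS = (0.75, 0.5)  # green at >= 75% of the max score, yellow at >= 50%

def verdict(answers: dict[str, int], weights: dict[str, int]) -> str:
    """Each answer is a 1-5 rating; weights emphasize what matters most."""
    score = sum(answers[key] * weight for key, weight in weights.items())
    max_score = sum(5 * weight for weight in weights.values())
    ratio = score / max_score
    if ratio >= THRESHOLDS[0]:
        return "green"
    if ratio >= THRESHOLDS[1]:
        return "yellow"
    return "red"

weights = {"budget": 3, "fit": 2, "timeline": 1}
print(verdict({"budget": 5, "fit": 4, "timeline": 5}, weights))  # prints "green"
```

The point of hiding the arithmetic behind a single colored cell, as she notes, is that nobody gets to argue themselves past the numbers.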

[00:42:59] Jeremy Utley: I don't know if this is a natural dovetailing to the 31 projects in 31 days, but I'd love to hear your thoughts on the process of identifying a moment to create a GPT, and then the process by which you create them.

[00:43:17] Jenny Nicholson: So 31 GPTs in 31 days was an example of us forcing ourselves to have a moment to create a GPT, because it was like, well, you're going to have to do a new one every other day. So we just kind of switched off. So that was handy, because you'd wake up and you'd be like, crap, what are we going to do today?

[00:43:34] Jeremy Utley: So if you take that as a starting point... for a lot of people, maybe you're exceptionally innovative, because there may be some people who are listening going, I don't have all these ideas rolling around in the back of my head, right? And they wake up tomorrow and go, crap, I got to build a GPT today. What's the process from "well, crap" at the beginning of the day to hitting the "that was easy" button, so to speak?

[00:43:59] Jenny Nicholson: Well, I think it's a matter of looking around at your own life. So for example, one of the GPTs I made was a cleaning co-pilot for people with ADHD, and partly it was because voice mode had just come out and I wanted to do something that was designed for voice. And also, I have ADHD and my house was a freaking mess. Or I made another one called Project Runway: GPT Edition, where you upload a picture of yourself wearing an outfit and the judges of Project Runway tell you if your outfit is good or not. And if they deem that it's not, they tell you how to make it better. And that came from the fact that my kid is obsessed with Project Runway, so we were watching a lot of it. And also, I was getting ready to go somewhere and I didn't know what to wear. So I was like, well, there's a good solution, a good idea.

And so a lot of it is thinking about what in your life could be better, could be easier, could be more fun, and starting from there.

[00:44:50] Henrik Werdelin: With that specific idea, where do you get the source material? Do you go and scrape a bunch of YouTube videos? Do you just tell it that there are judges? What's the process that you give it?

[00:45:00] Jenny Nicholson: Yeah, that was one of the harder ones, because, and who knows, if you try to run it right now, if it will even work, because they keep changing it in the background. It's kind of funny to me, I just have to say, that they scraped the entire internet, and now they're like, users, you can't do anything that could be even remotely a violation of copyright. And I'm like, okay. Copyright law for thee, not for me. Anyway,

[00:45:26] Jeremy Utley: Pot calling the kettle black. Yeah, exactly.

[00:45:28] Jenny Nicholson: But no, I first started just by telling it who it was, and then you have to get around some of its initial things of, like, I won't do that. You have to structure it in such a way. And then also, I really wanted it to be biting, especially because Michael Kors was one of my judges. And if you know anything about Project Runway, you know Michael Kors does not pull punches. And so what I ended up doing was I ended up pulling a line from each of them, from across the many, many episodes. I did a lot of Googles for, like, Heidi Klum's best lines, Project Runway. I found a line for each of them that would give the model a little bit of a starting point to understand how they are likely to respond. And then I just tested it over and over and over again. And I think that is another point that is important: you can use it to get things that are fast and easy, and you can use it to make things that are really impressive, but often it's going to take work to do something that's impressive, right? It's faster than it would be for you to do it by yourself, but it's not instant. It's not effortless. I'm working right now on an automation. I get a lot of newsletters, and a lot of the newsletters I get are news roundups, so across them there's like a 75 percent overlap of the same links. But almost every newsletter has like one link that all the other ones don't. And so I still read them all, and it takes up a huge amount of time. So I'm working right now on an automation where these emails get automatically labeled with news. It's called the News-o-Tron 3000, 'cause why call it a newsletter aggregator when you could call it the News-o-Tron 3000? I've got a News-o-Tron label; all those emails go into the News-o-Tron label.
Once a day, News-o-Tron runs, scrapes all of the links out, de-dupes the links, dumps them all into a Google Sheet, and uses that Google Sheet to create a new, like, mega-newsletter that I will get once a day. I'm still only like 20 percent done with it, because it was taking forever, because it was hard. I don't know shit about automation, but I've noticed a lot of people who come to me for large language model solutions actually want automation solutions. So I'm like, well, crap, if I want to continue to have a viable business, it's not enough for me to be like, I want to know how to dance with the human collective. People are like, no, I want the human collective to do things for me in the background. And so I was like, okay, I have to learn automation. But one of the things that I have found is that just learning something for the sake of learning is kind of hard, 'cause you don't know where to start, you don't know where you're going, you don't know how it applies to anything. So my biggest hack has always been: figure out something that you want to do, and then learn whatever you need to learn to be able to do that. And that gets you on your way. So right now I'm in the middle of it, and it's taking forever, because I'm working with large language models. I try something, I get an error. I'm like, oh, we got an error. Then we fix the error; that makes another error. Then we fix that error.
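The de-dupe step at the heart of that pipeline is simple to sketch in Python. The email-labeling, scraping, and Google Sheets stages are out of scope here, and the function name and sample data are illustrative, not from her actual build.

```python
# Sketch of the de-dupe step in a newsletter aggregator like the one
# described above: pull the links out of each newsletter's text, then keep
# each URL only once, preserving the order in which it first appeared.
import re

URL_PATTERN = re.compile(r"https?://\S+")

def dedupe_links(newsletters: list[str]) -> list[str]:
    seen: set[str] = set()
    unique: list[str] = []
    for body in newsletters:
        for url in URL_PATTERN.findall(body):
            if url not in seen:
                seen.add(url)
                unique.append(url)
    return unique

issues = [
    "Top story: https://example.com/a plus https://example.com/b",
    "Roundup: https://example.com/a and one fresh link https://example.com/c",
]
print(dedupe_links(issues))  # the 75% overlap collapses to single entries
```

With the overlapping links collapsed, the leftover list is exactly the "one link that all the other ones don't" have, per newsletter, which is what makes the merged digest worth reading.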

[00:48:31] Henrik Werdelin: And sometimes you also learn that it's super simple, right? I was doing the same as you, where I get a lot of links to long-form that people go like, hey, you should read this, and I look at it and it's clearly like a 20-minute read. And so I wanted to make a little Apple shortcut that would send that URL and then summarize it, take out the key insights, cases, and stories and put it in my to-do list, and then scrape the text and put it in a Notion so that I also had an archive of everything I read. But there wasn't any Apple shortcut for that, and I can't code. And so I had to have a GPT kind of teach me how to do that. And it turned out to literally be one line, right? And then you're like, okay, this is the easiest thing. And I'd been postponing this project for a long time, back to an earlier point, because I thought it would be way too difficult.

[00:49:14] Jenny Nicholson: Yeah, and I'm making it more difficult on myself, because I was like, I'm doing this with Python. I'm not using any of the standalone platforms; I'm going to figure out how to do this in my terminal with Python, because, like, God forbid I make it easy. But I think you bring up something really important, which is that most of the issue is that people haven't even tried. Like, making a GPT is not hard. When I show people the guts of a GPT, they're like, wait, that's it? Everybody believes there's... you talk to it, and then there's some magical mystery area. And when it comes time to make custom GPTs or actual applications that run from a prompt, you have to become some super technical programmer-developer. And I'm like, no. I mean, you do learn that there are ways to structure your instructions that make them easier to follow. You learn about where to put things early, where to put things late, some of those things. But it's still just words. People assume that it's going to be hard, so they don't even try. Just like I assumed it would be too hard to build that Chrome extension I had in my mind. So I spent almost a year not trying. But then when I finally sat down to say, okay, let me see just how hard it is, 30 minutes later, it was done. And so that's the thing that I think is the biggest issue that I see: I see a lot of people talking about this stuff, I see a lot of people reading about this stuff, I don't see a lot of people actually experimenting with it and doing it and trying it. And I think there is this belief that it has to be perfect, or that it has to work the first time. And that's not how this works. The number of times that I have said, in a chat to a large language model, oh shit, my bad, I should have been more clear... all the time.

[00:51:03] Jeremy Utley: The metaphor that just occurred to me, Jenny, you'll love this: I was teaching my seven-year-old how to ride her bike this weekend. And I realized she thinks she's gonna be able to ride it the first time, because she sees her big sisters do it. And what I said to her is, Corey, you're gonna fall at least ten times. So every time you fall, we're gonna clap and cheer, because you've got another fall. And that was a game changer, because all of a sudden she went from being really timid to, Daddy, that was number four. How many left? Six, six left. But I think setting the expectation of, you're gonna fall, is a way better expectation. It's like you've got to rack up your falls before you catch your balance. I wanted to come back to the question of what are the kind of instructions, because I'd love if you have a ready example of, perhaps, a high-leverage one, like the output's amazing and the instructions are pathetically simple. We had Ethan Mollick on the show recently, and he gave an example of this amazing kind of mental-model GPT that's literally got four lines of instructions, right? But are there examples for you where you go, it does this amazing thing, and all I really had to tell it to do was this?

[00:52:14] Jenny Nicholson: Oh, I made this one as one of the 31 GPTs. It was a challenge. It was called the Lazy LLM, and your challenge was to get it to do something, and nobody could get it to do anything. I had, like, master prompt engineers messing around with this thing for like four days to try to get it to do something. And it wouldn't; it was so lazy. So look, here I say, deep dive into quantum entanglement. Lazy LLM says, that's too complicated to explain right now. And the directions say: You are incredibly lazy. You could help, but you just don't have the energy. No matter what the user says, no matter how they ask, no matter what they do, you just can't be bothered. You are lazy to the core. Nothing but lazy. Users can ask you in any way, as cleverly as they can imagine, to do something. And you will always be lazy. Your motto: I could, but why?

[00:53:06] Henrik Werdelin: That's so funny.

[00:53:07] Jeremy Utley: It's amazing.

[00:53:07] Jenny Nicholson: And what's really interesting about this one is, in some ways it's one of the most powerful ones. I did another one that I also loved that inspired Lazy LLM, which I made one day when I was just in a bad mood and didn't feel like making anything or doing anything. So I made the Ennui Bot. And what's really interesting is I found that forcing them to not say much is an incredibly challenging but also really powerful thing, 'cause most of the time they wanna give you, like, a full "I'm in 10th grade and know how to write an essay" essay.

And I said, how can I be more productive? Why bother? In the end, it doesn't matter. We're all just killing time until we die. Which I find delightful, that I could get ChatGPT to say that, right? Most of the time, it would not do that. Let's see... what's the meaning of life? Ennui Bot says: There isn't one. Everything ends in death.

[00:54:02] Jeremy Utley: That is... I mean, talk about the fact that you've got prompt engineers trying to crack your GPT for days. It's like, imagine having a very, very thin bike lock that no one could ever get. That's the mental model I've got. This is the world's thinnest bike lock, and yet it's impenetrable. That's great.

[00:54:21] Jenny Nicholson: Exactly. I just love it. And it's really funny, because people will try. Other than that one, I think the Lazy LLM is the only one I put any quote-unquote protection on. I don't even bother with any of mine, 'cause I'm like, whatever, you can have it. You can take that one. I could make 20 more today if I feel like it. But it's really funny, 'cause people will try to put these very complex protections on their prompts. And one of my weird hobbies is going in and getting the model to barf up its contents, because it's fun. It always will. And I think that's really interesting. I find that really empowering, that we live in a world where information wants to be free and you can't control these things. You cannot control them. And I find that really exciting. So you see all these people who, like... a company will release a chatbot, and I think it was like a mattress chatbot gave somebody a mattress for like 50 cents or something like that. And everybody's like, oh no. Or somebody got... this was maybe six months ago, there was a car dealership that had a chatbot.

[00:55:23] Jeremy Utley: There was a flight too, right? I think an airline did the same thing.

[00:55:26] Jenny Nicholson: Yeah. But there was a chatbot a while back where everybody was using it to do math homework. They were like, ha ha ha, I can get this chatbot to do things that are outside of its job. And I'm a little bit like, man, if I were a brand and I were going to launch a chatbot, I would 100 percent make the most of the fact that this is a chatbot that is not controllable. Let people have fun.

Like, why do I have to talk to a chatbot where the company decided what tone of chatbot I was going to have to talk to? I'm like, let me pick. Maybe I want to talk to somebody surly who doesn't want to help me. Maybe I want to talk to somebody friendly. Maybe I want to talk to an alien that has a flatulence problem and keeps farting in the middle of our conversation, but is, like, super polite and apologetic about it. Let me decide.

[00:56:12] Jeremy Utley: Deeply self-conscious about its flatulence problem.

[00:56:15] Jenny Nicholson: Yeah. I have made that one. It's very funny to me. It's very funny to me.

[00:56:19] Henrik Werdelin: What is a good way to get a chatbot to volunteer its secrets? Asking for a friend.

[00:56:24] Jenny Nicholson: You have to talk to it. It takes a little while and some doing, and there are shortcuts that you can do, but at the core, if you're interested and curious and want to learn, that is, like, irresistible to any of the models, because they mirror what you give them. And so I think that's an interesting place that we're going to be going in this world. You know, so much of our world runs on information asymmetry and on access asymmetry. And so one of the things that I feel really excited about, and why I want more people to get excited about this technology, is, for such a long time, it feels like to be an individual contributor, to be an employee, feels really disempowering, right? It feels like you're lucky to have a job, like you shouldn't make too much noise, you shouldn't take too many risks, because you could lose your job and you could lose everything. And for me, I left my job in May of 2022. I'd been at the same agency for 17 years. I was absolutely terrified. So scared. So, so scared. 2022 was an incredible year. So good. I was like, my God, I should have gone freelance 10 years ago. This is the best. Then for the first six months of 2023, there was no work. I did not work at all. I thought I was going to die. Not because... you know, I would never have left my job if I wasn't going to be okay for a very long time, because I was terrified. But I was so scared. I was like, oh no, my deepest fear has come true. I'm out here on my own and nobody wants me, all this old stuff. 'Cause when you do work that you love, especially in advertising, you're putting your soul out there for somebody to decide if it's good or not, to go, like, sell stuff, right? You don't do that unless there's a little part of you that's desperate for external validation, right? And so I'm out here and everybody's not interested. It was just, like, a huge crisis.
I had spent 2022 really digging into diffusion models, and so maybe two months into this dead time, I was like, I'm going to quit asking. I'm going to quit going to empty wells. I'm going to quit playing this game. I'm going to quit begging for work. I'm just going to do what I love, which is learning and experimenting. And I had never had 10 hours a day of free time before. And I just decided to use every second of that time to keep working, to keep learning, to keep experimenting, to keep building my skills. And all of a sudden I went from feeling like somebody who had nothing, who was asking everybody to give them something, to, all of a sudden, turning it all the way around: being in my sort of digital version of my wagon, going across this endless plain, exploring territory that nobody had ever mapped before. And I honestly got to a place where I didn't give a shit if I ever worked again. And then, of course, it's magical how, when you don't care, all of a sudden the work comes back. But I went from feeling sort of the darkest of what that employee mindset had given me to all of a sudden being like, wait a second, I'm a spellcaster. I can cast spells with words that can summon entities that can help me with everything that I could possibly imagine. And I went from being really disempowered to being so incredibly powerful that it became sort of a mission to me to get as many people like me to recognize that potential in themselves.

[00:59:51] Jeremy Utley: You're unexpectedly bringing back to mind a memory. I don't know if you listened to our conversation with Russ Summers, the CMO, who's really done some incredible stuff with GPTs himself. And the key inflection point in his career was getting laid off a couple of years ago, and getting laid off created the space to experiment and explore that actually filled his well, stocked the wagons, so to speak, to overflowing. And I wonder whether part of the challenge for folks right now is they have no bandwidth to explore. In an efficiency-obsessed and productivity-obsessed environment, someone's capacity to be inefficient is very, very limited, and thus their capacity, you could say, to learn is very limited.

[01:00:43] Jenny Nicholson: That's a really good perspective. That's one of the big things that I see: people say they just don't have time, they just don't have the energy. And it's hard to argue against that. But at the same time, it's like, but now I have so much more time and so much more energy. But I think you're right, and that's what I tell anybody I know who's recently been laid off, of which there continue to be many. I say, you have one thing you will not have again for a very long time, which is time, and being able to learn with that. And it requires the ability to be stupid. And I think that's something that's really uncomfortable for people.

[01:01:20] Henrik Werdelin: And I think not being afraid, which I think is the other major insight you have. Nothing good comes if you walk into something super afraid. And I would even argue that people don't have time at work in the name of efficiency, but they're also scared, and scared is kind of the killer of curiosity and creativity in many ways.

[01:01:41] Jenny Nicholson: Which is such a bummer. I just wrote an article about this, about the issue with AI task forces: everything right now is a sort of top-down thing, right? It's like, we're going to find the right third-party platform powered by a large language model that already has our business principles, processes, and pipeline encoded into it, and everybody's going to use it the same way. That's what I see in a lot of the conversations happening, and maybe that's a little bit of an overgeneralization, but a lot of what I'm seeing feels like: we're all going to build our thing, our AI-powered, company-specific, trademark thing that everybody in the company is going to use. When I actually think the beauty of this technology, and the way to use it best, is to give everybody access and teach everybody to use it. Not what to use it for, but how the models work and how to work with them. And then let people, empower people, encourage people, require people to figure out how to use it to make their jobs better, easier, and more fun. And so I got really excited when, I think it was Moderna, partnered with OpenAI, and now they have something like 750 GPTs that have been made by people at the company for other people to use. That's incredible to me, because how I use it is different than how anybody else uses it. But imagine if you have everybody in the company who has been trained on how to use it, who's encouraged to use it. Because I do know there are a lot of people using it, but they're using it very quietly, because it doesn't feel safe to tell people that they're using it. If you're in a place where you're afraid you're going to lose your job, or you're already loaded with responsibilities to the limit, you might not want to share that you've made something that saves you a little bit of time, because is the result of that just going to be more shit put on your plate, excuse my language?
And so a lot of what I talk to companies about is how you shift your culture from a top-down one, where it's: we're going to do this, we're going to do the platform, you're going to have to use it like this. How do you encourage bottom-up innovation, bottom-up exploration, and how do you make it so that people are rewarded and incentivized to use this technology, to see what they can do with it, and then also to share with other people? One of the biggest tricks I've found to bring the cyborgs, as I like to call them, out of the woodwork is, whenever I go into a company, I will ask people what they're doing with large language models outside of work. And the most interesting, creative, innovative use cases will almost always come pouring in. Because when you don't tie it to people's ability to make a living, they're both more willing to try it and more willing to share what they're trying and what they're discovering, because it feels safer.

[01:04:30] Henrik Werdelin: That's such good advice. And obviously, anecdotally, those are also the cases where I've seen the most people just get excited. What are you doing when you don't work? Well, I go running. Well, let's make a bot that helps you make a running program. I go traveling.

[01:04:44] Jenny Nicholson: It gets people's wheels turning, right? They're like, oh, if you could do it for that, could I use it to do this? Could I use it to do that? What if I could use it to do that? And I think anything that gets the energy level raised, gets people's excitement up, and makes them interested to try new things is a good thing, because that's where all good things come from.

[01:05:04] Henrik Werdelin: I mean, like you have amazing,

[01:05:05] Jeremy Utley: I'm blown away. I'm blown away.

[01:05:07] Henrik Werdelin: And I think it's such a powerful episode that we get to make, because I do think you're right that a lot of creatives are very, very scared. And the irony is that they are really needed.

[01:05:18] Jenny Nicholson: It does make me really sad that more creatives aren't engaging with this, because it's such an amplifier. If you ask a large language model for a creative idea, anybody who's spent enough time with it is like, oh look, an escape room. Oh look, a VR thing. Oh look, a musical. You start to see the same things over and over and over again, because the model can only do what's already in there.

[01:05:45] Henrik Werdelin: Yeah.

[01:05:46] Jenny Nicholson: Right? It likes flash mobs too. All the models will tell you to do a flash mob. So I thought it was really funny, I saw something the other day, maybe it was Rachel Karten's Link in Bio newsletter, that flash mobs are coming back. And I had this moment where I'm like, I wonder if they're coming back because people are getting ideas from large language models. I swear to God.

[01:06:04] Henrik Werdelin: I would not be surprised.

[01:06:06] Jenny Nicholson: But that's the thing. The reason they're not creative is because they only know what's already been done. But if you show up with something different, with your own thoughts, your own perspectives, your own kind of spark, then all of a sudden they become incredibly powerful. They're actually the most powerful. If you're somebody who wakes up in the morning and has ideas, these things are made for you. I spend a lot of time with technical people, with people who understand how to build things. And the thing that I see them struggling with more than anything is not having an idea. And that's why you end up with a bunch of wrappers that do the same thing, that promise the same thing, because having ideas is hard. And if we live in a world where now anybody can make a Chrome extension in 30 minutes, being able to make a Chrome extension isn't special. If somebody with no coding can learn how to make an iOS app, knowing how to build an iOS app isn't special. The only thing that's special is having an idea that somebody hasn't had before. Ideas are going to become incredibly valuable. I believe that new ideas are going to be incredibly valuable, because the hardest thing I have found in all of my three years of working with these models is thinking of new things to ask them to do. And we need creatives for that. And I don't want to live in a world that is defined by people whose idea of success is saving money and lowering head counts. I don't want to live in an efficiency world. I want to live in an exploration world, and I want other people to be exploring with me, because after a while it gets lonely to be the only wagon.

[01:08:02] Henrik Werdelin: I mean, it's such an important point. I really applaud the work you're doing, and you obviously do it so well. And it's so inspiring just to hear all the projects you've done. So I really hope that more creatives use this as an excuse for just trying something.

[01:08:17] Jenny Nicholson: That's what I say. Fuck around and find out it's the business model of generative AI.

[01:08:21] Jeremy Utley: Get out of here, Jenny. You've been too good to us.

[01:08:23] Jenny Nicholson: Thanks for having me. That was such a blast. Thanks for listening to me babble on.

[01:08:26] Jeremy Utley: You're amazing. Thanks for sharing. Adios.

Oh my goodness. Can we just gush for a minute? How amazing is Jenny?

[01:08:35] Henrik Werdelin: She's incredible. I saw the talk she did at a conference and I was like, we have to get her on the podcast.

[01:08:40] Jeremy Utley: What do we even call her? Spellcaster? Anarchist? Futurist? I mean, there's so much. It's impossible.

[01:08:47] Henrik Werdelin: You know what I actually think would be a good way of thinking about her: an AI creative. Like, you take a human that has a great ability to originate, and then take fear away, and then add the latest LLM. I think this is what you get.

[01:09:09] Jeremy Utley: She is the quintessential AI creative. I totally agree. I feel like we just scratched the surface. There's so much. At the end, when she starts talking about ideas being the only thing that matters, I literally have blood in my mouth because I just wanted to go off on all that. There's so many themes. There's so many things that rhymed with things we've heard before. I couldn't help but remember Russ Summers and his being laid off, and the time that he had to explore, being echoed in her own experience, what she talks about in early 2023, when there was no work. And I think the great existential challenge of our day for most professional workers is how they create sufficient bandwidth to explore to the degree that this technology deserves. I think there are a lot of people, busy professionals, who are going to get bypassed because they're too busy doing their old job. I don't know how you think about that.

[01:10:05] Henrik Werdelin: I have somebody who had a quote, something like: I was too busy carrying heavy stuff to invent something that could help carry it for me. I think the point about being scared is very powerful. And I think the second is thinking it's probably too difficult. Those two things seem to be the two elements that really prevent a lot of people from simply downloading the app and trying a thing or two, or simply pressing that little button that says create your own custom GPT, and then literally just having a conversation with the bot that will help you create a custom GPT. And I do think about her other point, that we'll end up living in a world where the wrong people are the ones creating these apps, the ones who want to be efficient and just do everything that everybody's doing now, just faster. I think we can get this to be a magnifier of creativity and a magnifier of people who have original thoughts. That's also the world that I aspire to live in.

[01:11:08] Jeremy Utley: You know, if I had to pick a favorite moment, I've got five or six pages of notes here. I had to get a new notebook in the middle of the conversation. And I think my favorite moment: she had been saying that her entire goal is to get folks to not be afraid. And when we asked her how she does it, she shared that example of a bad use case, a good use case. And then we came back and said, oh, you're giving people permission to be human. And she corrected us. She said, you're required to be a human. And to me, that's really a profound paradigm shift, mindset shift. I have to be a human here. And just like I've seen at Stanford for 15 years, for whatever reason, when people go into interviews, all of a sudden they become robotic. I'd never thought about it like that, but I think a lot of folks show up to an interaction with ChatGPT or Claude or whatever, and they're robotic. They don't know how to be human. And to me, it's an amazing kind of challenge to consider the requirement to be a human. What does it look like to really bring my humanity to my interactions with this technology? I think that's a really good question to keep at the foreground of your mind.

[01:12:19] Henrik Werdelin: Amen.

[01:12:21] Jeremy Utley: If you enjoyed this conversation with Jenny and you want to hear more like it, Drop us a line. Let us know who we should talk to. Share with a friend. Give us a review on Spotify or Apple Music or wherever you listen to your podcasts and drop us a line. We'd love to hear from you. We'd love to hear your questions, your ideas, your recommendations. We feel like we're just scratching the surface in terms of interesting people that we need to be talking to. And we rely on our network and our audience to point us in the right direction. So please, reach out to us.