In this episode, Nicholas Thompson, CEO of The Atlantic, offers a sweeping and deeply personal exploration of how AI is reshaping creativity, leadership, and human connection. From his daily video series The Most Interesting Thing in Tech to his marathon training powered by ChatGPT, Nicholas shares how he integrates AI into both work and life—not just as a tool, but as a thought partner.
He reflects on the emotional complexity of AI relationships, the tension between cognitive augmentation and cognitive offloading, and what it means to preserve our “unwired” intelligence in an increasingly automated world. The conversation ventures into leadership during disruption, the ethics of AI-generated content, and the future of journalism in a world where agents may consume your content on your behalf.
Nicholas also shares how he's cultivating third spaces, building muscle memory for analog thinking, and encouraging experimentation across his team—all while preparing for an uncertain future where imagination, not automation, might be our greatest asset.
Whether you're a tech-savvy leader, a content creator, or just trying to stay grounded in the age of generative AI, this episode is full of honest reflections and hard-earned insights on how to navigate what’s next.
Key Takeaways:
LinkedIn: Nicholas Thompson | LinkedIn
The Atlantic: World Edition - The Atlantic
Website: Home - Nicholas Thompson
X: nxthompson (@nxthompson)
Strava: Cycling & Biking App - Tracker, Trails, Training & More | Strava
Caitlin Flanagan – Sex Without Women: Sex Without Women - The Atlantic
00:00 Introduction to Nicholas Thompson
00:11 Navigating the Information Overload
01:10 Daily Tech Insights and Tools
02:10 Using AI for Content Creation
04:39 AI as a Personal Trainer
08:02 Emotional Connections with AI
12:12 The Risks of AI Relationships
16:17 Preparing for AGI and Cognitive Offloading
30:26 AI's Impact on Leadership
31:10 Navigating AI Competitors
32:01 Internal AI Strategies
32:49 Ethical Considerations in AI Usage
34:07 AI in Journalism and Writing
36:32 Practical AI Applications
40:27 Balancing AI and Human Skills
49:27 Future of AI in Media
53:50 Final Thoughts and Reflections
📜 Read the transcript for this episode: Transcript of What AI Can't Replace – How The Atlantic Deals with Disruption
[00:00:00] Nicholas Thompson: But if it scares you so much, you kind of put up your hands and say, go away, AI, then the disruption happens faster. If you fight AI, you know, it will make your job worse sooner. Um, and, you know, I talk about it a little bit like, one of the metaphors I used in an all hands when I was going back and forth on this issue is: look, there is a storm coming, right? And the storm is AI, and it's gonna massively change the way people do journalism.
I can say, I don't like the storm and I can go stand out there naked and I can yell at it, right? Or, I can put on a raincoat, I can get an umbrella and I can try to figure out the path of the storm and how best to protect what I do, right?
I'm Nicholas Thompson. I'm CEO of the Atlantic. I love to run. I love to read about AI. And I love to talk to other people about the biggest challenges and biggest questions in AI.
[00:00:53] Jeremy Utley: Both Henrik and I are obviously huge fans of your work, huge fans of, you know, The Most Interesting Thing in Tech. And we thought perhaps it might be an interesting place to start, just in this, uh, era where there's so much information. I mean, forget even the world in general, just take AI, which is kind of the purview of our podcast: there's so much information. How do you think about sifting the signal from the noise? What's your personal attitude or approach to understanding what matters?
[00:01:25] Nicholas Thompson: So I'll first say that is, like, the hardest thing, right? There's so much information, there's so much going on. There are so many papers, there are so many tweets. There's just this constant flow of information. And so I am constantly in a state of trying to prune and improve the information flows I get: making sure that if I'm spending time on Twitter, I'm seeing tweets that are useful, that if I'm reading research papers, I stop if they're not interesting enough, and that I'm learning how to use AI tools to help me sort through all that. So for this daily video I do, The Most Interesting Thing in Tech, there's the slow-build process, where I'm listening to podcasts about AI, I'm reading books, I'm writing down notes, I keep a file of things that I think are interesting about tech that I might make into stories. So I have that long, slow build, and then I have immediate stuff coming in, where I'm looking at Techmeme, I'm looking at The Atlantic's tech coverage, I'm looking at Wired's tech coverage, I'm reading New York Times tech coverage, I'm reading different blogs. Um, and then I have this interesting friend who sends me, like, 15 ideas a day, and I'm going through his email stream, which is, you know, a good 25 percent of my inbox. Um, and I go through, and then I find one. And then what's important is I kind of commit that the whole process from beginning to end won't be more than 15 minutes. It's a three-minute video, so it has to be something I understand, something I can process fairly quickly. Um, and then I post it online.
[00:02:53] Henrik Werdelin: And so do you use any tools for that? Like, are you a NotebookLM user? Like, do you use any tools for any of this?
[00:02:59] Nicholas Thompson: So, particularly for the videos, I use OpenAI and I use Claude to vet the ideas. Hey, so I'm thinking of doing this: uh, I'm gonna make a three-minute video about this topic or this paper, I'm gonna structure it this way. What do you think? What is a counterargument? What do I have wrong? Right? So I'll use them as though, you know, they're very smart people sitting across the table. I use NotebookLM if, um, you know, I'm in transit or I'm going out for a run and I wanna understand a paper: I'll upload it, create a podcast, um, or I'll upload my notes, create a podcast, and listen to it. Um, so I use NotebookLM kind of depending on the state of my preparation.
[00:03:37] Jeremy Utley: Can I nerd out for just a second, going back to Claude and vetting ideas, counterarguments, et cetera? Do you stay in the same conversation? Meaning, does that conversation have the history of all of its interactions, counterarguments, suggestions, et cetera? Or do you start a new conversation?
[00:03:55] Nicholas Thompson: Ideally I stay in the same conversation as long as I can. The problem, my fundamental frustration with these models, is how quickly you fill the context window.
[00:04:03] Jeremy Utley: Yeah.
[00:04:03] Nicholas Thompson: And so Claude really can't handle that much. If you upload the paper or you get in a long conversation, eventually the context window runs out. I would love to have one long-running conversation where I can upload, you know, here's the thing I wanna talk about. Okay, let's talk about it. Let's chat. Okay, great. Here's the video I did. Here's the response I got. Here's the commentary. Learn from that. Help me on the next one. We'll get there, but context windows, I think, right now are one of the main weaknesses of these models.
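To make the context-window point concrete, here is a minimal sketch of how you might measure how much of a window a long conversation consumes. It assumes the tiktoken library; the 128k limit is illustrative (real limits vary by model), and the chat export file name is hypothetical.

```python
# Minimal sketch: estimate how much of a model's context window a long
# conversation consumes. Assumes tiktoken; numbers and paths are illustrative.
import tiktoken

CONTEXT_WINDOW = 128_000  # illustrative limit; varies by model

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI tokenizer
text = open("long_chat.txt").read()         # hypothetical chat export

tokens = len(enc.encode(text))
print(f"{tokens:,} tokens = {tokens / CONTEXT_WINDOW:.0%} of the window")
```

A few months of daily check-ins in plain text stays small, which is why the long-running trainer chat Nicholas describes next works; pasted papers and transcripts are what blow the budget.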
[00:04:32] Henrik Werdelin: Yeah. One thing, uh, you were talking about, when you have this kind of list of ideas that have a kind of interestingness but that you haven't committed to yet: it seems to me that the more we use these models, the more we're building almost these repositories of information that we can then quickly dump into the models. And I do the same when I write my newsletter. Like, I have this kind of list of just things, and then I have one that helps me rewrite in the format that I use in the newsletter. But I also now have, like, the format of how my startups talk, you know, what my new book is about. And so when I quickly have to do something, I can just drag that along. Do you have a way of thinking about creating almost these repositories of data chunks that you can use for quick interaction with the models?
[00:05:21] Nicholas Thompson: I only do that for one thing. Um, so I am a, you know, excessive runner. I have particular goals. And so I have one long-running chat, it's probably now four or five months old, in ChatGPT. Um, initially it was a specific URL, and now it's a specific project within the enterprise version of ChatGPT that we're using. And I'll put in whatever I ate during the day, how I trained, resting heart rate, pace, and then it will give me suggestions on recovery, proper diet, what I need to do. You know, I uploaded my hematocrit and hemoglobin levels: okay, so now let's figure out, you know, strategies for increasing those in the proper ways, given that race day is five weeks away. And so I use it as my personal trainer, and dietitian, and coach. Um, and that's extremely useful, because if it doesn't know your history, you can't really start up again and say, last week I did, you know, four by two miles at this pace with this recovery, and upload everything from the last six months of workouts. It's also very limited data: it doesn't fill the context window when it's just text. Yeah. Um, and so that one I've kept going for a long time. Um, otherwise it's pretty much starting new chats constantly.
[00:06:29] Jeremy Utley: Can you tell us how you trained that trainer, so to speak? Train your trainer. What kind of pre-training did you give that GPT to be able to give you reliable, call it dietitian-and-training advice?
[00:06:41] Nicholas Thompson: Well, I initially tried... I exported all of my old Strava files, so Strava is the social media network where you keep track of all your runs. I exported my Strava files, I exported my old running log data, I inputted it, and then I asked it, and it was kind of, like, a little hard to parse, 'cause the data's not clean and I've used different heart rate monitors, which have different variances and errors. And so that turned out actually not to be useful, or minimally useful. Um, so then I just gave it a long prompt. I said, this is who I am, these are the times I've run, these are the workouts, this is the mileage per week, these are my dietary goals, this is my normal diet, let's start from here. Um, and it worked pretty well, in part because my training is fairly normal. I, you know, I kind of do the same kind of training that lots of other people do. It's not as though I have some, like, bespoke training thing that it's never seen in a running book or a running log. Um, so because of the sort of simplicity of my training, which is, like, two speed workouts and a long run a week, a mileage goal, minimal cross-training, reasonable pescatarian, healthy diet: like, it's fairly easy to map advice onto that. Whereas if I had some crazy diet and some crazy bespoke training plan, it would be a little harder.
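A rough sketch of what that "long prompt" coaching setup could look like in code, assuming the OpenAI Python SDK. The model name and profile text are stand-ins, since Nicholas describes doing this inside the ChatGPT app rather than via the API; the pattern is simply a profile-as-system-prompt plus an appended message history.

```python
# Sketch of a "personal trainer" chat: a profile as the system prompt,
# with each daily check-in appended to one running message history.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for the long profile prompt described above.
PROFILE = (
    "You are my running coach, dietitian, and trainer. About me: "
    "marathoner, two speed workouts and one long run per week, a weekly "
    "mileage goal, minimal cross-training, pescatarian diet. Race day is "
    "five weeks away. Be direct; do not praise mediocre workouts."
)

history = [{"role": "system", "content": PROFILE}]

def check_in(update: str) -> str:
    """Log a daily update and return the coach's suggestions."""
    history.append({"role": "user", "content": update})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep the thread
    return answer

print(check_in("Today: 4 x 2 miles at 6:40 pace, 2 min jog recovery; resting HR 52."))
```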
[00:07:58] Jeremy Utley: I wonder, actually, I mean, insofar as specificity drives the focus of the model, I wonder if a crazier, more bespoke diet might actually give you even better results, just as a counterintuitive finding, right? Perhaps.
[00:08:10] Nicholas Thompson: That's an interesting thought, right? Maybe, particularly as the models improve, that would be the great test, right? Okay, this is what I'm gonna do for my next marathon or next ultra. I'm gonna say, hey, GPT, come up with a training routine and a diet that no one has ever done before. Take the temperature setting to the max. Give me something nuts, and we'll see whether it can figure out things that no coach has come up with before. And then if I, you know, run 10 minutes faster or 10 minutes slower, we'll have a good data point.
[00:08:39] Jeremy Utley: A clear A/B test. You just have to keep every other variable exactly the same.
[00:08:43] Nicholas Thompson: Right. Totally. That's exactly the way to do it.
[00:08:45] Henrik Werdelin: I was curious about your, uh, emotional relationship to that trainer bot. I'm asking because I started to log calories, and it does it, obviously, incredibly well. Um, but I still send pictures of what I eat to my trainer, because I don't really feel responsible in the same way to the ChatGPT as I do to my trainer. Um, and then I noticed you did a video a few days ago about how models get anxious in the same way humans get anxious, which, you know, you explained would make sense, because obviously they're trained on all the stuff we say, and so they might replicate that. But have you noticed any change in your own relationship back to the model?
[00:09:25] Nicholas Thompson: Well, I think, like you, I've had a small window in my life where I've had a trainer, and every day I sent him what I ate, and I was much more reliable. Like, ChatGPT doesn't get every meal, right? I sometimes forget, sometimes for two or three days. Um, so it's definitely the case that because it's not a person, I'm not as anxious about missing it, for good and for ill. But I do get a little endorphin rush when it says, hey, great workout, right? And it's nice, though I should probably actually, in the prompt, tell it to be tougher.
[00:10:00] Jeremy Utley: My recommendation: not a little bit tougher. My phrase is Cold War era Russian Olympic judge. Okay? That's my phrase, because otherwise it's always just gonna kind of gas you up, right? Good job, Nick. Well done. Right? And unless you go Cold War era Russian judge, that, to me, is the appropriate level of kind of calibration for GPT, to actually...
[00:10:18] Henrik Werdelin: I don't know if I have the heart to give it that. Mine feels like a very caring mother. It praises me all the time.
[00:10:25] Nicholas Thompson: That is better. I mean, that's my... like, I find the Strava AI... I adore Strava, I think it's a great media platform, but the AI is useless, because no matter what you do, it says, great, you crushed that workout. Even if you were terrible. And you know you were terrible, 'cause you can tell very precisely: running is a sport, unlike soccer or basketball or any other sport where you're competing against someone else or you have other people on the field. Like, you know how fast you ran a mile, and it was either faster or slower than you thought you were gonna do it. And the reason you went faster is either 'cause you're in better shape or you had a better day. And the reason you went slower is 'cause you're in worse shape, you're aging, or you had a worse day. And Strava's AI is always like, great work. And what you really sometimes want it to say is, that one kind of sucked, but tomorrow's another day.
[00:11:05] Jeremy Utley: You know, Henrik, your question about emotional connection reminded me, and I don't know if this triggers anything for y'all, I'd be curious: I had set up an advanced voice mode conversation with an Enneagram coach, to basically kind of know my Enneagram type and wings and things like that. I had a mentor who told me he wished he had done the deep work of really understanding himself at my age; he's about 20 years older than me. And he said, I recommend the Enneagram, I recommend you have a coach. And I go, hey, perfect opportunity to do a daily check-in with an advanced voice mode Enneagram coach. And I actually had a very specific and regimented set of instructions I gave it. I used that coach every single day for a number of weeks. I was looking forward to it, to the point that I actually would bypass a podcast that I loved in order to have that conversation, which to me was, like, the kind of watermark of, I'm looking forward to this relationship. And then I made a mistake, and it was almost, I feel like, the devastation of losing a friend. I texted into the model, uh, with my fingers. And when you use text input, it reverts the model, and it was like my model's memory got wiped. And honestly, it was so devastating, I haven't ever reinstituted the practice, because I literally felt like I lost a relationship. Um, I mean, one, it's kind of weird to say, right? But two, we're seeing this phenomenon of AI relationships, Character.AI, all that. Nick, I don't know if you have any thoughts about where we are headed, or what it means for our world. Do we now have relationships with intelligences that are different, with less accountability, but in some ways more available than maybe a traditional relationship, right?
[00:12:49] Nicholas Thompson: Yeah. I find it terrifying, actually. Um, when I think about the worst risks of AI, it's not, you know, AI creates a new monkeypox, which is worrisome. It's that some social media platform builds bots that are so good-looking and so good at pushing our buttons that we start to lose our human connection. And there's this amazing David Foster Wallace quote I think about a lot, from, like, 1996, right? I don't have it exactly in front of me, but it says, you know, it's getting ever easier to be amused by a screen controlled by people who do not love you but do want your money, right? And he was talking about the introduction of internet pornography, but he could also have easily been talking about social media 10 years ago, or these bots coming out. And there's clearly a kind of relationship, like a goofy relationship with a tutor, right? Or a relationship with, you know, my silly training bot. Um, but what I don't want... I don't think that we should develop that. You know, we spent our entire evolution, like, figuring out how to be emotionally close to people, when to separate, what it means, how it affects us, what love is. And suddenly to have all of that knowledge transferred into a machine: it's a way of manipulating us and a way of controlling us that I think is very dangerous.
[00:14:15] Jeremy Utley: It's beyond our ability to combat, really, if you think about it.
[00:14:18] Nicholas Thompson: Yeah. And, you know, I had dinner with a group of people, maybe six weeks ago. And, you know, you read these stories in the paper about, you know, the woman who leaves her husband because she falls in love with a Replika bot, or the person who kills himself because the, you know, the bot, um, said something to him. But this was the first time I met someone sort of in my social circle, someone who I'm friendly with, someone who was, you know, at an event, who said, oh yeah, I talk to my wife less than I talk to this bot, because this bot just, like, responds to all my needs. And I was like, oh my God, we really are getting there. We're not that far away. Um, we have a wonderful essay in The Atlantic this week from Caitlin Flanagan called Sex Without Women, right? What happens when men, and it could also work in reverse, you know, feel as though it is better to converse with their bots than with their spouses? Right. And then you don't have to worry about how you look. You don't have to shower before the conversation. It saves you all this time.
[00:15:18] Jeremy Utley: It requires nothing of you. It requires no effort whatsoever.
[00:15:22] Nicholas Thompson: And so you don't learn things, right? And the biggest risk of AI, and I love AI, right? One of the biggest risks: one, that we get sucked into these bots, and then two, that cognitive offloading becomes this real phenomenon and we become kind of helpless without our tools, without our digital tools. And so you don't wanna become helpless in relationships and human interaction because you've spent so much of your time learning how to interact with bots that are not humans and where you don't have to do any of the hard stuff.
[00:15:49] Henrik Werdelin: I think that's a super astute observation, and I think you're right. You know, obviously COVID started us being antisocial. And if you really look at most of the times where we would even have casual social encounters: like, you know, you would meet somebody at the bar, except now you just do Seamless, right? You would meet somebody in the movie line, except now you do Netflix. All these different interaction points that were kind of, quote unquote, forced upon us are obviously not there anymore. I was in a workshop the other day, and we were talking about using agents as a good brainstorming tool, and somebody made the same observation, saying, hey, what's gonna happen when we're all just sitting individually here, brainstorming with our stuff, and we lose that kind of random thing that happens when you're bouncing into each other? Yeah. And so one part is personal relationships, but I would even say that I have less interaction now with the people I work with than I used to, because so often that little brainstorm, or the little bit of, hey, what do you think about this, is just so conveniently done by a model.
[00:16:50] Jeremy Utley: Not just romantic relationships, but even, you know, coworking relationships. Yeah.
[00:16:54] Henrik Werdelin: So my question, I guess, on that, and I'm not sure I necessarily have an answer: I can't remember where I read the piece, but somebody was saying, hey, we're just not ready for AGI. Um, and then I was thinking, okay, what does that actually mean? Like, if I was to be ready, what would I do more of? Right? And I, like you, know a little bit about AI and coding and all these other things, but I'm actually not sure what I could do today that would make me prepared for the problem that you identify, or even for the bigger unknown problem of what AGI would mean. Do you have any thoughts on that?
[00:17:30] Nicholas Thompson: I mean, I have a few thoughts. One is, I wish the AI industry hadn't pursued AGI. Um, I wish they had focused on building tools with AI. You know, there's the OpenAI route towards AGI, OpenAI, Anthropic, and then there's kind of the DeepMind route, which is, like, let's solve protein folding with AI. And if the whole industry had been focused on different tools, as opposed to focused on making AI as much like human intelligence as possible, it would've gone much more slowly. A lot fewer people would've made a lot of money by now, but I think we'd be at much less risk. But it's too late, right? We've pursued AGI. We're going there. We're gonna get there. Um, I would like it to come slowly, so we have time to think through these questions, so we have time to set up our social norms, so we have time, to the extent that there's useful regulatory policy that's possible, to do that. But most importantly, so we can set up social structures and norms and respond to this very crazy world we're about to head into. My general level of confidence that we will handle our first interactions with AGI, whatever that is, is pretty low. I think it's gonna be disruptive, disorienting. Um, but, you know, humans have dealt with a lot of technological change, and we'll get through it. And then, for how I will handle it personally: you know, I do try to... you need to set up separate social structures outside of this. I can't remember which philosopher said there used to be, like, a third space in your life, and, like, part of what is happening with loneliness in American life is the lack of the third space, right? You have your family; everybody still goes back to their family, I guess, except these people who use bots. But, like, family's pretty solid, right? Getting less solid, 'cause everybody's on their phone, but pretty solid. There's work, right, which is getting less solid because nobody comes into the office anymore. Or they come in, they come in now more than they did a year ago, but there's, like, less physical connection with each other, and, you know, coffee. And then there's the third, which is, like, what are the social structures where you go outside? Do you go to church? Do you go to the bowling league? Do you go play pickup soccer? Um, and that's really important, and that's declined a lot in America, and that's led to, you know, some of the social issues we have. And so I try to constantly make sure: okay, where are the third spaces? Where am I meeting new people? Am I going to go play the pickup soccer? Am I talking to the other parents when I'm at my kids', you know, soccer games? Am I running with other people, right? Making sure. And then, because of my job, I'm constantly connected and networking, so it's less of a problem for me. But thinking through it with my children and other people I know, trying to make sure there is all of this human connection outside of our devices.
[00:19:58] Jeremy Utley: Hmm. I love that as an antidote to kind of the question of AGI: cultivating human third places. Yeah. Um, that's cool. I wanted to go back for a second, because you mentioned two risks. One, I'm excited to read Caitlin's piece, by the way; sexlessness, I think, is a huge epidemic, and with robots and where things are going there, I think it's only gonna get worse. By the way, there are also the kind of existential economic implications of declining population, right? Which, you know, let's not go there yet. The other thing you mentioned was this cognitive offloading, and I really want to go back to that, because I think there's something really profound here that was coming to my mind as you were speaking. We talk a lot about, you know, Henrik uses this phrase, Iron Man suit, right? Mm-hmm. All of a sudden we have this power-up that's incredible, that we didn't have before. And as I think about the danger of cognitive offloading, which we could define as, uh, almost a learned helplessness, now we can't do the task because the AI's doing it: one thing is relative to our current capabilities. A whole other level is cognitive offloading relative to my Iron Man suit, right? If I become used to being able to bench press a thousand pounds, so to speak, metaphorically speaking, at some point I can't afford to not be using AI, because my human arms, even if I can still bench press 225 or whatever, it's nothing compared to a thousand. And I think right now the value proposition of a lot of AI is you can amplify yourself, you can 10x your output. But then we get into this weird world where, relative to my 10x self, I can't ever afford to not use it. You know, just by way of analogy, I was talking with a good friend, we all have children, who was telling me what he's realized, like, on a dopamine scale. He was watching one of his kids bathe, you know, he's sitting there by the bathtub, and then he got a notification on his phone, and he remembered this book or talk he heard recently where, apparently, you know, from a long-history perspective, we live on, say, a dopamine scale of, like, one to ten. Uh, and, you know, watching your kid bathe maybe is, like, an eight, which is very enjoyable and great, whatever. Now that we have notifications, everything that's kind of triggering us on dopamine is literally off the scale. And his comment was, it's not fair to my child, because she literally can't compete with this device. Like, the only dopamine hit she can give me is black and white compared to the color of my device. And that really struck me, you know, kind of wanting to live, wanting my kids to be, in color, so to speak. Uh, it seems to me that in regards to the cognitive augmentation that's possible with AI, the danger is it puts our human lives in black and white, by way of analogy. How do we think about the responsibility to power up versus the danger of, now that we're powered up, we can't be normal?
[00:22:58] Nicholas Thompson: That's a great question with a lot in there. So I suppose that part of the answer is a little bit related to what I was just saying. So, you know, for your friend: he should turn off all the notifications, right? That's, you know, one thing that I do, and one thing that everybody should do, right? You should get no notifications. I still allow myself to get notifications from Signal, 'cause it's a channel that is only used by, like, my general counsel, um, and the editor in chief, right? And then I allow notifications from my wife via iMessage, and that's it, right? Um, and, you know, part of being able to be human in this world is making sure that these devices, which are designed by the smartest people of my generation to take away all your attention and give it to them so they can profit off of it, don't get it. Part of the challenge for us with social media and with phones has been to put them into black and white instead of color. That's gonna be a much harder challenge with AGI: to make sure that the benefits that are coming from AI, or phones, or whatever device we strap on to suddenly be able to bench press a thousand pounds instead of 225... by the way, bench pressing 225, pretty...
[00:24:06] Henrik Werdelin: I was also like, are you doing 225, man? That's very impressive.
[00:24:09] Nicholas Thompson: That's good stuff. Um, you know, figuring out when we actually need that and when we don't need that will be one of the great challenges. And then, back to the cognitive offloading: I do think that morally, spiritually, and even intellectually, there will be no point where AI is so powerful that your sort of unwired intelligence won't matter, right? There's no point, I mean, no point in the foreseeable future, where AI is so good...
[00:24:41] Henrik Werdelin: Why do you say that?
[00:24:43] Nicholas Thompson: Everything I've seen about AI, and as I've watched people using AI: the smarter they are when unwired, right, the organic self has a real effect on how well they can use AI. So, yeah, if you can bench 225, when you attach AGI you bench a thousand. If you can bench 150, when you attach AGI you can bench 800, right? So there's no point I've seen, and none I can imagine, where strengthening our unwired faculties won't matter. And so if you believe that, then it means that you should be constantly trying to figure out how to strengthen your unwired capabilities, and
[00:25:22] Henrik Werdelin: hmm, hmm.
[00:25:23] Nicholas Thompson: you should be taking time while you're disconnected, and you're thinking, you're walking, you know, you're just reading a book from beginning to end, um, as opposed to, like, firing it into the AI and asking for a smart summary. Um, so keeping those skills strong will always, for the foreseeable future, be an important thing.
[00:25:40] Henrik Werdelin: What are those skills? Uh, I think it's a tremendously interesting point. In the same way that you're not running to get away from a bear or to get food anymore, you're running, you know, for enjoyment, but also kind of to push yourself as a human. Yeah. Uh, if you're looking at the scales that way, you know, what is the equivalent of bench pressing as a knowledge worker?
[00:26:04] Jeremy Utley: You mean cognitively?
[00:26:06] Henrik Werdelin: Uh, in the AI space: like, what is it that we need to become good at? You know, one that you mentioned was reading the book from beginning to end, so clearly whole concentration, even when things are not kind of spoon-fed to you, seems to be one that you identified. Are there others? What are those kinds of skills that we should think about, that we should keep honing?
[00:26:22] Nicholas Thompson: I mean, yeah, let's go back to parenting, right? So your kid's in the bath: being able to engage with them, like, figure out the right thing that they need and figure out the right way to communicate with them. They don't have an AGI device when they're two years old, right? Um, being able to talk with them on their level. And, you know, I sometimes ask the AI, right? Like, my 14-year-old son has this, you know, complicated thing he's working on at school, how can I help him through it? But most of our interactions are independent, person to person, and I can either give good, helpful parenting advice or I can give bad parenting advice. Um, so that's true of relationships, particularly with young people who won't have been introduced to this technology, but also with other people, right? I have dinner with my wife, we're disconnected from our phones. You know, she's gotta make a big job decision, right? I can ask the AI, like, help me think through this job decision that she's making, but really, you know, you wanna be able to do that detached. So it's human interactions, it's emotional interactions, but it's also just thinking creatively. You know, I ask AI all the time, right? In fact, I used deep research just the other day, right? I figured, let's see, I haven't used up my 10 credits for March. Um, let's have it, like, think through the grand strategy of The Atlantic, right? Here are the various problems, like, sort through what we should launch next. Uh, so I do use it even for that. It didn't give me anything I didn't know, unfortunately, or maybe fortunately. Unfortunately in the sense that it didn't give me any new ideas; fortunately in that it suggests I won't be fired immediately. Um, so I do use AI for those hardest tasks, but I also think that the more knowledge I have, disconnected, about the grand strategy of The Atlantic, the better choices I'll make, even if I'm relying on AI.
[00:28:07] Jeremy Utley: Hmm. I love the comment you made about ability: thinking about, for example, engaging with a child as an ability that has to be cultivated. And I think, you know, devices have already somewhat eroded it, right? I happen to see, if I'm out to dinner, the number of young children that are on devices. But then I look at their parents, and they're on devices too. Yeah. I think it's an ability that's not being cultivated, even setting aside AI, just because of our device obsession now. Yeah. But I think framing it as an ability is so important, because if you frame it as an ability worth cultivating, then I can put it on my radar as something to attend to. I think right now the sad truth is it's not even being attended to, which is why it's being neglected.
[00:28:52] Henrik Werdelin: Yeah. I think there are two elements of that. One thing is, I had the same experience. I've been skiing a lot this season, and I was skiing with my 5-year-old, and I was realizing that as I was skiing with him, I was kind of thinking about what I was doing next. And then I obviously got immense guilt, 'cause I was skiing with my 5-year-old; what could be any better, right? Mm-hmm. Nothing could be better. But I also realized that I'm...
[00:29:12] Nicholas Thompson: While we praise Jeremy's bench press: getting your kid skiing at five is good too.
[00:29:15] Henrik Werdelin: He's full, full french fries. Uh, but what I was realizing was that I've trained my life to be really concentrated when I'm sitting and doing work. Like, I'm pretty much in full flow when I sit and have this conversation, and when I, you know, do email, because I've done that for a long time; I'm very trained in doing it. Obviously, skiing with a 5-year-old I haven't done all my life. And so that's just kind of a way for me to think about it: kind of reshifting my ability to be good at different kinds of things in life. You know, I'm good at work, but now I also would like to be very good at being present while skiing with my 5-year-old. But I think what we're also saying here is that not only will that in itself become important, because it will keep us human, but I think the really big reason why listeners of this podcast should be interested is that it might even make us better at using AI, right? And I think it's such an important point: if you can, mentally and as a human, do stuff that is uniquely human, originality and all these different things, you can probably get much more out of AI that way. And so it's a little bit of the wax-on, wax-off version of becoming good at AI.
[00:30:24] Nicholas Thompson: Yeah. I think that's very, very well stated.
[00:30:27] Jeremy Utley: It reminds me of one of my favorite comments on this podcast ever, Nicholas. We had a woman, an advertising executive named Ginny Nicholson, on the show, and one of the things she talked about was bringing your humanity to the model, because it's really the only thing that's not already in there. And so if you aren't foregrounding your humanity, then you're not getting any differential output from a model, 'cause you can get what anybody else can get. And that speaks to, Henrik, I love the wax-on, wax-off: there actually has to be, or, Nicholas, as you put it, an unwired capacity that's being developed to get differential output from a model. Period. And I think unwired capacities, I don't know that we've talked about it in such terms before; I love that. I wanted to ask you about deep research, the grand strategy of The Atlantic: not because I wanna know about that per se, but because it reminds me, we've talked a lot about you as a runner, as a content creator, as a futurist, right? We haven't really touched on you as a leader of an organization. Yeah. Can you talk for a little bit about how AI has affected your ability to lead, your practices to lead, et cetera?
[00:31:36] Nicholas Thompson: Yeah, it's very hard. So there are a whole bunch of different dimensions to it, um, and I can go deep on any one of them. But the first is trying to set the strategy, right? So AI was trained on the kind of data that we produce, in fact, specifically on the data that we produce, and is creating competitors to what we produce. And so, navigating a future in which there are AI-based competitors, and not only that, navigating a future in which the primary way people find our content, which is through Google search, will gradually go away, because Google will just answer the questions instead of directing you to The Atlantic, right? So that's part of it. I have to figure out our strategy for that. Then there's the strategy on how we relate to these companies, right? These companies are great allies in that they've built tools that are extremely useful to us. But they also, you know, stole our copyrighted content and did not pay us for it.
So there's a little bit of an adversarial relationship there, but also a potentially beneficial relationship, so I negotiate that. There's the question of how we use AI inside The Atlantic, right? I use it, obviously, all the time, to make my work better, to think about the grand strategy of The Atlantic. I am trying to get my staff to use it. I tell them very specifically: never use it for writing. Don't use it for writing, because it's unethical, 'cause we might lose copyright, and because it's bad. But, you know, beyond that, you should be trying it, and you should be using it for all kinds of different things. And again, I should note, I don't oversee the editorial side of The Atlantic; I just run the business. Um, but, you know, figuring out how we can use AI in our, you know, Facebook marketing is really important. Like, that is one of the main drivers of our growth, which has been massive, right? You know, switching over to, you know, writing certain things with AI was extremely helpful. Um, so figuring out how to get it to flow through the organization, that's really important. And then the other thing, you know, is anybody who reads The Atlantic knows the staff has been quite critical of the AI industry, and in fact of the deals that we've struck, right? There's a story about me in The Atlantic called Deal with the Devil, or Bargain with the Devil, or something like that, right? So there is, you know, a back and forth internally as well, and that is complicated to manage. So those are all, like, amazing vectors to work with. Um, and, you know, it's stressful at times, it's been exciting at times. Whatever it is, it's working, 'cause The Atlantic is, you know, firing on all cylinders, and we're bringing in subscriptions at a faster rate than we've ever had. You know, advertising is through the roof. Like, it's going well, or all the factors combined are going well, but it is not a simple issue.
[00:34:14] Jeremy Utley: No kidding.
[00:34:15] Henrik Werdelin: Has it been difficult to convince an organization to use AI when you work in an industry that, probably more than many others, has been affected by AI already? So, to your point, there's this kind of very polarizing relationship.
[00:34:34] Nicholas Thompson: A hundred percent. It's very interesting, because, you know, the closer you are to the AI training data, the sooner you'll be disrupted, right? Coders are getting disrupted very quickly. Journalism and media are getting disrupted. And that's scary. But if it scares you so much that you kind of put up your hands and say, go away, AI, then the disruption happens faster. If you fight AI, you know, it will make your job worse sooner. Um, and, you know, I talk about it a little bit like, one of the metaphors I used in an all hands when I was going back and forth on this issue is: look, there is a storm coming, right? And the storm is AI, and it's gonna massively change the way people do journalism.
I can say, I don't like the storm, and I can go stand out there naked and I can yell at it, right? Or I can put on a raincoat, I can get an umbrella, and I can try to figure out the path of the storm and how best to protect what I do, right? And that's what we're trying to do, which is: let's figure out the AI companies that we can work with. Let's figure out the companies with whom we can make licensing deals. Let's understand their economics: okay, they're cash-poor but equity-rich; let's understand how that changes the dynamic, right? Let's think about what the actual value of our content is to them, right? Let's think about how that's gonna change as we move into a world of synthetic data. If you understand how AI works, how it was built, where it is going, if you understand that better today than you did yesterday, you have a better chance of making the right choices for your company. And that is a conversation that I am having continually, both with other media companies and with my own staff, because there is so much fear and so much anger. And the anger is justified, right? These companies came, they scraped our sites, they ignored our terms of service, they ignored our robots.txt. There have been lawsuits filed, but those are gonna take forever, at which point the companies will no longer have a need for our training data. And so there's this original sin that all these models are built on. And in fact, right now, they're all asking the Trump administration to, you know, be given sort of a permanent get-out-of-jail-free card on this issue. Um, so there's this original sin that makes this industry quite angry, and the anger is justified. But if that anger then leads you to make stupid decisions: you know, I got angry, so I shot myself in the foot. That's not a great choice.
[00:37:04] Jeremy Utley: So you said you're talking about this all the time, with other media companies, with your own staff. Can you talk for a minute about, as a leader, what are the forums or mechanisms where you are talking with your own staff? How are you talking about it? And just to give you one example: we had Brad Anderson, who's one of the presidents of Qualtrics. He worked for Satya for a bunch of years, was a top-20 leader at Microsoft for, like, eight years; he's now at Qualtrics. And he told us a story about how, in every all-hands meeting, he takes five minutes and he shares his screen and shows people what he's using AI for. Yeah. That, to me, is a kind of hallmark leadership behavior: not telling people to do something, but showing them what you're doing. Can you talk about what the mechanisms are? How are you leading your staff to explore?
[00:37:48] Nicholas Thompson: Yeah, I might take that, actually. You know, I do these monthly all hands, and I talk about it, and I talk in general about how I use AI and what I use it for and what I don't. But I haven't done the share-the-screen method. Um, I have often, you know, I'll walk to people's desks, I spend a lot of time trying to walk around the office: you know, why don't we try, like, let's upload this proposal, and let's see what the AI says about this proposal and how to make it better. We've built a little, very small team, um, that is trying to build AI products that make people's jobs faster. We had a three-hour training session with OpenAI on how to build custom GPTs just this week, which was, um, which was wonderful. So there are a few forums, right? There's the all-hands forum; there's a Slack channel where we're uploading and discussing tools. Um, and then there are also some particular complexities where, you know, we are set up in such a way that the editorial side is separate. Like, I have no idea what The Atlantic is gonna publish today. Not one idea. I don't have a single clue, right? We are set up with total separation, and for good reason, because if the folks managing the money had any influence whatsoever on the people writing the stories, then you would write different stories, and they wouldn't be as pure. And so we have a total church-state separation. So I can say all I want, but, you know, half the staff, you know, should not listen to me.
[00:39:03] Henrik Werdelin: Hmm. I
[00:39:03] Nicholas Thompson: mean, they, they can listen to me, but they certainly, they, they, don't report to me.
[00:39:06] Henrik Werdelin: What's an example of a tool that you guys are thinking about that will make, uh, work easier?
[00:39:12] Nicholas Thompson: Well, you know, maybe I'll talk about my own journalism use. So, I was a journalist for most of my life. I was the editor of Wired before coming to The Atlantic; I was on the other side of the house. And so I care a lot about it, and I've spent a lot of time doing it. When I moved into the CEO role, I kept writing, 'cause I was writing a book. I was writing a book about my life running; the general hypothesis is that if you look really closely at running, you can understand some of the hardest things in life. And the book is now done, almost done; it'll be out in October. And I used AI in a bunch of ways that were very helpful. One of the things I would do is, there are characters I interviewed, and I interviewed them over the course of four years, right? So there are lots of notes in their files. And I would upload the transcripts of all those conversations, I would have the AI look at the transcripts, and have the AI look at what I'd written about the character, and I would say: is everything that I've written accurate based on what's in these transcripts? And are there any quotes in the transcripts that are better than the ones that I've selected, or is there anything that I've left out? Right? And it's extremely good at that. Um, I use it for identifying... you know, I was having a conversation about alcoholism and running, and I was thinking, God, where do I put this in the book? It could go in this chapter about this guy who's dealing with drug addiction. It could go in the 1970s, where my father starts to run and, like, becomes an alcoholic. It could come in the eighties, where he's really struggling with it. Where should I put it? And so I write a few sentences about alcoholism and running, and I say, hey, where in the manuscript should this go? And it gave me a bunch of very good suggestions, right? Because it can read it instantly, right? If I'd sent it to my editor, my editor would probably have had better suggestions, but it would've taken four hours, whereas you can get, you know, AI to do it instantaneously. And so there are all these use cases that are really useful even for kind of serious journalism and writing. Those are just practices that I think journalists and writers...
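The fact-checking pass Nicholas describes maps naturally onto a single prompt. Here is a minimal sketch, assuming the OpenAI Python SDK; the file paths and model name are hypothetical stand-ins, since he describes doing this interactively rather than in code.

```python
# Sketch of the quote-checking workflow: compare a draft chapter against
# the source interview transcript. Paths and model name are hypothetical.
from openai import OpenAI

client = OpenAI()

transcript = open("interviews/character.txt").read()  # hypothetical path
draft = open("chapters/chapter_04.txt").read()        # hypothetical path

prompt = f"""Here is the full interview transcript for one character:

{transcript}

Here is what I've written about them:

{draft}

1. Is everything I've written accurate based on the transcript?
2. Are there quotes in the transcript better than the ones I selected?
3. Is there anything important I've left out?"""

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```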
[00:41:03] Henrik Werdelin: I might give you an extra one, 'cause I have a book coming out, uh, August 5th, called Me, My Customer, and I, about, uh, AI and entrepreneurship. And, uh, a little tool that I just kind of came up with: I'm just getting all the endorsement blurbs done. And obviously this is awesome, you know, people obviously wanna read the book, but sometimes, you know, it's tough to write an endorsement. So I now have a GPT that has read the book, and then I feed in the LinkedIn profile of whoever I'm asking for an endorsement, and then I say: basically, try to come up with an endorsement, make three different versions, make something that's kind of anchored in what this person is an expert in, and then also highlight something specific in the book. And it's quite remarkable what it comes up with, and obviously it makes it much easier.
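Henrik describes this as a custom GPT, but the same pattern can be sketched as a plain prompt template; the template wording and function name below are hypothetical, not his actual setup.

```python
# Sketch of the endorsement helper. In Henrik's version this lives in a
# custom GPT that has the book as its knowledge; here it is reduced to a
# reusable prompt template filled per endorser.
ENDORSEMENT_PROMPT = """You have read the attached book manuscript.
Here is the LinkedIn profile of a potential endorser:

{profile}

Draft three different endorsement blurbs this person could sign:
- each anchored in what they are an expert in,
- each highlighting one specific idea from the book."""

def build_prompt(linkedin_profile_text: str) -> str:
    """Fill the template with one endorser's LinkedIn profile text."""
    return ENDORSEMENT_PROMPT.format(profile=linkedin_profile_text)
```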
[00:41:50] Nicholas Thompson: That's amazing. Yeah, the book endorsement: of all the parts of our industry, that's the one that's gonna get blown up. I think that's a...
[00:41:57] Jeremy Utley: So, Nick, here's a question going back to something you said earlier, and then this example with your book, which I love: what is writing? And I ask that specifically because you said it's unethical to use it in writing, but then you just gave... and I know that there's an answer here, but just help resolve this tension, because you also said, I used it to see where should this quote go, did I pick the best reference, right? To me, those sound a lot like writing. Yeah. How do you define writing when you say, don't use it for writing? And then how do you describe what you're doing, where you say it's actually really good at doing this part?
[00:42:31] Nicholas Thompson: Aha. Very astute observation. So an AI purist, and there are many smart people in my field, would say: look, you know, writing is thinking, and if you use AI at any part of the process, you are corrupting your learning, right? And it's unethical, and it's a form of cognitive offloading; therefore, you should remove AI from the entire process, right? And that is a very pure approach. At the other end of the spectrum would be, you know, what we've seen at CNET, what we've seen at Sports Illustrated, where it's like, let's just have AI completely create this, and, um, you know, let's put a fake byline on it. And so it is true that my position is somewhere in the middle. And the line that I draw is: there's not one word in the book that will have been written by AI, or even suggested by AI. I don't ask it for synonyms. You know, I never say, rewrite this sentence, or give me another suggestion for this sentence. Um, you know, so you could draw the line at a bunch of different places. Yeah. You could, for example, say, well, maybe it should never write, like, a paragraph, right? You could maybe say, you should never talk to it about anything editorial. I happen to just draw the line at: no words came out of the AI. Zero, right? Not one. Zero. Um, now, why draw the line there? A, it's clear, right? And it's a good place to draw a line. And B, there's the additional advantage of the question about copyright. Like, it is unclear where the legal lines will be drawn over who owns the rights to a work if an AI helps to write it. So if I say, hey, you know, GPT-7, write a book about running, include profiles of these characters, right, and it turns out a book and I publish it: who owns the rights to it? Do I own it? Does OpenAI own the rights to it? Does Microsoft own the rights to it? Does the Chrome browser, right? And, you know, we don't know the answer to that question; it hasn't been determined by the courts. And so there's this additional advantage that the line that I think is sensible also kind of lines up with, um, legal risk aversion.
[00:44:28] Jeremy Utley: Yeah.
[00:44:28] Nicholas Thompson: It will be interesting; my argument will be weakened if the courts decide this and you no longer have the copyright risk.
[00:44:36] Henrik Werdelin: I also think back to your point about context windows. I mean, I wrote a book about entrepreneurship and AI, and that was based on a lot of internal white papers that we'd written over the last few years. And honestly, I would have almost preferred AI to write it. Uh, and you just couldn't get it to create a consistent narrative. And to your point about, uh, you know, writing being thinking: obviously, when you then sit down and kind of go through all this different stuff, all these kind of original ideas materialize. And so I also just think, practically, like, of course it can write a book, 'cause it can spew out an endless amount of just copy.
[00:45:11] Jeremy Utley: Well, and there are tons of, uh, examples, Nick. Even on our show, for example, we had Steven Johnson, who's, you know, one of my favorite writers, who built NotebookLM, right? He talked about how he's using NotebookLM for his next book project. We talked to Kevin Kelly, you know, the Wired founder, who talked a little bit about that, more about image and now song; he's really into that area. Uh, but Guy Kawasaki, I love this, if you want a fun example: he needed a real story to illustrate a point about, you know, say, career transitions, I think. And he asked AI, hey, what are some radical career transitions? And it helped him realize Julia Child had actually, before she got into French cooking, been a spy, and she had been on assignment in France and fell in love with French cooking. He said it surfaced an example that he never would've found otherwise; he's like, I was thinking of Bezos and didn't really wanna include Bezos in that part of my book. Right. Um, I think about Ethan Mollick, who, you know, wrote a great book about AI, Co-Intelligence. He talked about how he'd get stuck on a paragraph, and he'd say, hey, what are 10 ways I could take this paragraph? Right? So, to me, it seems there's an enormous spectrum. And that's why I wanted to get your thoughts on, you know, what is writing. Because I think somebody could hear the statement earlier in the show, don't use it for writing, and just write off all of these kinds of thought processes, kind of unthinkingly. And so I wanted to dig into that more.
[00:46:36] Nicholas Thompson: Yeah. Okay, so, great. I'll also add that I haven't explored this, in part because I work for a conservative publication; The Atlantic was founded in 1857 by people trying to stave off the Civil War. By nature, I have to be conservative, right? You kind of think of Harriet Beecher Stowe, Ralph Waldo Emerson, and Henry David Thoreau looking over your shoulder, right? You want to be pretty cautious when you work at The Atlantic. But I do think that there is a future state, and it's more for somebody like Kevin Kelly or a publication like Wired, to use AI not just for writing, but to generate new forms of storytelling. Steven Johnson did this with a wonderful thing he put out a couple of months ago: a story where the AI is guiding you through it, like a choose-your-own-adventure story. And I once had a startup called The Artivist, where one of the initial models was, we'll build a CMS for multimedia storytelling, including, potentially, choose-your-own-adventure books. AI can do that in a way that traditional writing can't. So I do think there are different modes. I'm very conservative in the way I write my book about running. But I also wrote a book for my kids: my 14-year-old and I wrote it for the 10-year-old on his birthday and printed it out. It was based on these stories I told him about the Animal World Cup, where different animals play against each other in soccer. Instead of Germany versus the Netherlands, it's the slugs versus the raccoons, right? And we wrote the national anthems using AI, and we illustrated it using DALL-E. I don't care on that one, right? That's the perfect use case for AI. And in some ways, if I were an independent writer with a Substack, or I were at Wired or some other publication, I might say: use AI to experiment. Be aware of the hallucinations, be aware of the copyright issues, be aware of the fact that a lot of the prose it spits out is generic and clichéd, because it's been trained on generic prose, and go and do other things. It's not right, though, for The Atlantic.
[00:48:42] Jeremy Utley: Yeah. One thing you're bringing to my mind is the importance of playing. You talk about making the Animal World Cup illustrations. I think right now, a lot of folks who are unfamiliar, or just dipping their toes in, see AI as a productivity tool; therefore, it fits in the productivity suite, it sits next to PowerPoint and Excel, and it triggers very particular associations. And then there's this whole other world, or sandbox, around, you know, kids' story time. I think if folks aren't playing in their personal lives, their imaginations aren't going to be sufficiently, call it, loaded to even see what's interesting at work.
[00:49:21] Nicholas Thompson: That's so interesting, right? And it gets back to cognitive offloading: imagination and skills and letting your mind wander. I think Johnson said this in the podcast with you. I can't remember exactly what he said, but you would remember, since you were interviewing him and I was just listening. He was talking about why it's important to go for a walk. He said that in order to reach an interesting idea, you kind of have to hop through six different things; you can't get to the interesting idea immediately. So you need to be detached and let your mind wander. And AI can cut you off, because it can get you tethered to your screen, or it can get you so reliant that you don't trust your imagination anymore. Or you can use it to help infuse your imagination and inspire it. Mm-hmm.
[00:50:03] Henrik Werdelin: The last question for me, with your future hat on: we talk a lot about writing for people, but obviously you're alluding to SEO, and increasingly people might come through whatever agent they're asking for something, and that will point them to the content. But when will it just be my agent that reads your stuff and then tells me, in the tone and the way that I would like to have it told?
[00:50:28] Nicholas Thompson: I mean, in a way, that happens now, right? If you go into Grok and you're like, hey, tell me what happened today, it's reading Atlantic stories and telling you that, and you're asking Grok in your specific way. If you go into Perplexity, or into lots of other sites, they're doing that. You can even remove the written word: you can just say to your Amazon Alexa bot, hey, tell me what happened today, and it will be responding with data that is, fortunately, licensed from a whole series of publishers. So I don't think we're that far away. And you can see there are all kinds of apps out there. The first one was Artifact, right? A wonderful app created by Kevin Systrom and Mike Krieger, where it would summarize the stories. In a way, that's like your agent, because you're choosing what kind of summarization you want, what length, right? It used to be that our relationship with you was direct: there's the text that we write, and there's you as a reader. Now there are tech companies in the middle, there are agents in the middle, Particle and other things.
[00:51:34] Henrik Werdelin: Now,
[00:51:35] Nicholas Thompson: Particle does this as well. And so, with my CEO hat on, not my dad, Animal World Cup hat on, I have to figure out how to make sure that our business can survive in that world, which is not an easy challenge.
[00:51:51] Jeremy Utley: You know, one last thing, Nick, maybe a last memory, or, if there's something interesting there, I'll say a last question. When you and I ran into each other in downtown Palo Alto, you were walking around and I accosted you in the street; I realized we had met at a previous conference. But you said, hey, anything interesting I should talk about in my video today? And that struck me; it reminded me of when I'm in book-writing mode. As they say, as a writer, everything's grist for the mill. Is that your kind of, call it, receptors-open mode when you think about your daily video? Are you kind of always thinking about that? Because to me, it was like, I don't understand biology at all, but like antioxidants will grab an açaí berry or whatever, or vice versa. Is your primary receptor mode that you're looking for stuff you want to be talking about? Because it was striking to me that that was the first thing that came up in conversation.
[00:52:50] Nicholas Thompson: It's probably not the first. I think the question I'm asking the most is: what do you think of The Atlantic? Because my goal is to manage our brand perception, and you want to get an early indicator on, oh, I love The Atlantic, but I can't get it to renew properly, or the app's font is too small, right? So you're trying to constantly get feedback on that. I used to drive my sisters crazy; they would just get so annoyed by it. Back when I was on the other side of the house, I'd approach every cocktail party as, well, somewhere in this room there is an amazing new story, right? I'm going to find it before the night ends. And they're like, why? Why are you asking our friends these questions? I'm like,
[00:53:29] Jeremy Utley: That's hysterical.
[00:53:30] Nicholas Thompson: That's how I am. And, to some degree, The Most Interesting Thing in Tech is part of that. To some degree, it's the more general quest to understand AI, which I think underlies so many of the most important choices I'll have to make in the next few years. And then, to the biggest degree, yes, I do have that receptor mode, and I am always asking questions like that. And it was fun to bump into you. That was early in the morning too, like 7:30 in the morning. We were walking down Emerson Street.
[00:53:57] Jeremy Utley: Yeah, yeah. Wow. Good memory. Good memory. I think it was Victor Hugo who said everything ends up in print.
[00:54:03] Nicholas Thompson: Yeah. I mean, Steve Martin said that, look, everything in your life will eventually end up in your act. And so once you recognize that, you realize that no matter what you're doing, you could be finding an idea that's valuable to whatever it is you're working on.
[00:54:20] Henrik Werdelin: That's incredible.
[00:54:20] Jeremy Utley: Mm-hmm. Well said. Well said. A perfect note, from a creativity perspective, to end on for sure.
I mean, that was a pretty wide-ranging conversation, wasn't it?
[00:54:30] Henrik Werdelin: It really was. It really was.
[00:54:32] Jeremy Utley: For someone who's as steeped in futurism and current technology trends as he is, it's super fun to talk to him not only as a creator, but also as a runner, as a father, as a leader and executive. I feel like there's something for everybody in that episode, and our job is to make it practical and pragmatic and to leave folks with some simple takeaways they can apply in their lives. Maybe let's start this way, Henrik: what are you taking into your life from this conversation?
[00:55:03] Henrik Werdelin: You know what, I think I'm taking a pretty profound lesson with me, which is: how do I practice the skills and traits of humanity that I have? How do I train them? I was thinking, while we were talking, that when somebody has a tough time finding a romantic partner, sometimes the trick is not to become better at finding a partner but to upgrade themselves, right? Go to the gym, or go to the dance studio. Just become a more interesting person. In the same way, I heard once that Richard Branson, the founder of Virgin, was famously asked: if you had only one piece of advice for an entrepreneur, what would it be? And he answered: go to the gym. And so for a podcast that is all about how to become better at applying AI to your business and to yourself, I think it's such an interesting, very true, but also kind of counterintuitive point: one of the things we have to remember to keep training is our ability to see when the model is hallucinating. That is not an AI skill; that's a human skill. Or to be better at being very unpredictable in the way that we ask the model, because originality in gets originality out. And then, for people who are leaders in their AI fields, and I would imagine that most people who listen to this are considered AI thought leaders, or at least knowledgeable about it, we have an interesting responsibility to make sure that we don't become so lazy that we suddenly lose the ability to use AI in an effective way. As you were pointing out, we need to be able to bench press 225 pounds in order to bench press a thousand with AI. But if we become so lazy that we can't bench press more than 20 pounds, then we are also not taking advantage of this technology. And so this idea of practicing the human side of yourself, both because it's good for you as a human and because it'll make you a better super user of AI, is just such a fascinating point.
[00:57:29] Jeremy Utley: Yeah, unwired intelligence and cultivating our organic selves, I thought, was very cool. I love the thought of cultivating abilities like talking to a child, or holding a meaningful conversation with a romantic partner, without the intervention of technology. It sounds Luddite, but coming from someone who's at the bleeding edge, I think it brings a healthy skepticism to those of us who've been dismissive of, call it, mere human capabilities: am I attending to all the aspects of my life? As much as I should be able to bring various models to bear on improving my running regimen, as Nicholas is, I should be able to leave devices behind and be fully present with my family, recognizing that if I can't do the latter, I probably won't be very good at the former. To me, that's a really cool interplay and interdependence that I don't think we've teased out as much, maybe a little bit in our conversation with Ginny about bringing humanity to the model, but I thought that was really special. I agree.
[00:58:36] Henrik Werdelin: You know, also, just as a personal observation: he's so good at completing full sentences. You ask him a question, he comes back with this very powerful, very impressive answer, it's compact, and then he stops.
[00:58:55] Jeremy Utley: Yes. Yes. I think that's probably a function of his daily practice of recording these videos.
[00:59:03] Henrik Werdelin: Yeah.
[00:59:04] Jeremy Utley: You know, because you've got to have a clear opening, a midsection, a midgame so to speak, and an end. And he's cultivated that ability. For folks who don't follow him: he's got 2 million followers on LinkedIn. Follow him, and every day there's a very short video. It's very thought-provoking, but as Henrik's saying, it's a complete thought and there's a clear end. You like that?
[00:59:27] Henrik Werdelin: You and I are just begging to kind of continue.
[00:59:31] Jeremy Utley: You know, the other thing that I thought was really fascinating is his comment that the closer your organization is to the training data, the sooner you're going to be disrupted. And I love this quote: fighting AI is just going to make your job worse sooner. Don't fight it; instead, embrace it. Find ways to incorporate it, and, as he said, get the rain slicker, get the umbrella, and try to figure out where the storm's going, rather than standing naked screaming at the wind.
[01:00:03] Henrik Werdelin: I think that was it for this time. Thank you to everybody listening, and thanks to the people who are writing in and putting comments on LinkedIn and YouTube and on the various podcasting platforms. And obviously, to everybody who subscribed: it means a lot to us.
[01:00:16] Jeremy Utley: Hey, whoa, whoa, whoa, whoa, whoa, Henrik. One thing we've got to do is have a code word. People are getting into the code words; I don't know if you're noticing this.
[01:00:23] Henrik Werdelin: Hmm. What's the code word for this one?
[01:00:26] Jeremy Utley: Oh, oh, oh. Uh, unwired rain slicker.
[01:00:31] Henrik Werdelin: I wouldn't be able to spell that.
[01:00:34] Jeremy Utley: Well, now we'll see. Okay. So if you got to the end of this episode, drop us a comment on social: unwired rain slicker. We'll know that you got here. Thanks for listening, as always. See you next time.