Beyond The Prompt - How to use AI in your company

You Can't Outsource Wisdom: Bestselling Author Ryan Holiday on What the Stoics Have to Say About AI

Episode Summary

Ryan Holiday, bestselling author and writer on Stoicism, joins Beyond the Prompt to explore what the Stoics can teach us about living and working in the age of AI. As technology makes it easier to outsource thinking, writing, and decision-making, Ryan argues that wisdom still comes from doing the work yourself. The conversation explores cognitive offloading, agency, change, and why the defining skill of the AI era may be developing a finely tuned “bullshit detector.”

Episode Notes

Ryan Holiday argues that while AI can generate outputs, it cannot generate wisdom. Drawing on a story from Seneca about a Roman who used educated slaves to sound intelligent, he compares outsourcing thinking to outsourcing exercise: the value comes from becoming the kind of person who can do the work, not simply producing the answer.

The conversation explores the difference between useful cognitive offloading and surrendering judgment entirely. Ryan explains that while tools like GPS may replace navigation skills without much consequence, writing, decision-making, and critical thinking shape the person on the other side of the process. AI, he argues, tends to amplify existing tendencies. People satisfied with mediocre work will settle faster, while people pushing for exceptional work can use AI to refine and challenge their thinking.

Throughout the episode, Stoicism serves as a counterweight to both panic and hype. Change and uncertainty are constants throughout history, not exceptions. Ryan reflects on leadership, family, adaptability, and skepticism, arguing that in a world where AI can confidently produce both insight and nonsense, the ability to question, verify, and think independently becomes increasingly valuable.

Ryan's website: ryanholiday.net
Daily Stoic: dailystoic.com/podcast/

00:00 Intro: You Can’t Outsource Wisdom
00:29 Meet Ryan Holiday
02:03 The Dream Was To Work Less
03:07 Who Actually Gets The Time?
06:32 Leadership, Culture, And Family First
08:38 How Will You Measure Your Life?
10:11 The Stoic View Of Change
14:44 AI Hallucinations And Shameless Confidence
17:21 You Cannot Outsource Wisdom
19:08 Cognitive Offloading Vs Real Understanding
20:22 Ego, Flattery, And AI
22:52 AI As Editor And Thought Partner
24:59 Mediocre Vs Exceptional Work
31:15 Why Bullshit Detection Matters
38:06 Stoicism, Agency, And Adapting To Change
43:31 The Debrief

Episode Transcription

[00:00:00] Ryan Holiday: He hires these educated slaves. Instead of having them teach him, he just used them. If he needed to say something smart, he would have them whisper it in his ear. A friend comes up to him and says, "You know, I noticed, you know, you're so smart." He says, "Well, uh, have you ever thought about taking up wrestling?"

And the man says, "I'm way too old to pick up wrestling. What are you talking about?" And the guy says, "Ah, but your slaves are still young." You think you can outsource wisdom, but you can't outsource wisdom just as you can't outsource exercise. 

I'm Ryan Holiday. I'm an author. I live here outside Austin, Texas. I write about ancient philosophy and, and what it can teach us about modern life. And in this moment of, of change and disruption, I, I actually think that the Stoics have quite a lot to teach us about AI and the skills required to sort of thrive in this new world.

[00:00:49] Jeremy Utley: I had written a newsletter a few weeks ago that was more personal than usual.

I was basically talking about the premise of AI as kind of more, more, and more, and more, and more. You know, you can do more work, you can do all this stuff. And I just asked myself the question, just me personally, if I think about what I value, for example being a father- Yeah ... am I more father?

[00:01:09] Ryan Holiday: Yeah.

[00:01:09] Jeremy Utley: I- in the age of AI.

I mean, supposedly I'm able to be more. Am I more of what I value? So I wrote this post, and, um, I got a bunch... I mean, as you know, in the newsletter business you get a bunch of interesting replies, and somebody wrote me and said, "You know, Stoics have been grappling with this question for thousands of years."

And I think it's kind of a fascinating moment to be considering our humanity when, in a sense, the age in which we're living, the opportunity cost of just being an unwired human-

[00:01:37] Ryan Holiday: Yeah ...

[00:01:37] Jeremy Utley: in a sense is higher than ever before, right? You know, like, you can sit down with your family, but the kind of work product that you're sacrificing by not being at the desk, it's like an arms race that we, in a sense, aren't equipped for.

It's an unfair fight. Yeah. And so I just thought it'd be really cool to hear your perspective on how do we preserve the things that are uniquely, you know, human for us in this day and age. Does that spark anything for you just right off the bat?

[00:02:03] Ryan Holiday: The idea of sort of what... It is interesting, right? When you look at new technology, our sort of first instinct is like, how can this allow me to do more work, to work harder, when not that long ago, the sort of dream of, of certain kinds of technologies was that it would allow you to work less, you know?

Hmm. Hmm. They were sort of fantasizing about a 20-hour work week. You know, like there, there, there once was this fantasy that technology would allow us to live in a better world where we, where we weren't so much a slave to our professions. And then the funny thing is when the technologies do come around, we focus on how it makes us better and more effective at what we do, but, but never in a replacement sense, only in a sense of like, um, doing the same or more than we currently are, which is kind of...

I guess it's understandable on one level, but it's sort of tragic on the other because, like, we had this dream of sort of- freedom or autonomy or we have this sense of what our priorities are, but then when push comes to shove, that's not actually what we use it for

[00:03:07] Henrik Werdelin: Do you think that's always true, if I may ask?

And I'm asking just because- Yeah ... I last week had my first staff member call me up and saying, "Hey, I'm probably four or five times more effective with AI." And I go, like- Yeah ... "Yeah?" And she goes, "So I'm starting to take Fridays off." And like, because why would, why would that extra time belong to you and not me?

Yeah. And I- Yeah ... and do, don't you think you might see more of that where people are starting to have that debate on like, you know, this extra resources, who do they belong to?

[00:03:39] Ryan Holiday: Well, I guess the question is, h- has it ever worked that way historically, right? Exactly. And I guess a, a, a lot falls to you as the, as the boss or the manager, how do you respond to that?

And I think I, I, I've actually had to have some, some conversations with my team. Like I was talking to someone who does customer service for us, and I was like, "Hey, I just want you to know," because I could tell there was a little bit of like paranoia or doubt, but I was like, "If you can figure out how to automate this stuff or have it make you more effective, let's just start the conversation by me letting you know that I'm not going to hold that against you."

Right? Like, if, if you find out how to get twice as good at this or you can respond, uh, to twice as many inquiries or do it twice as fast, I'm not gonna reward you for making the business better by laying you off, right? I was like, "I want you to be upfront with me and talk to me about it. There's always gonna be some use for your creativity, your judgment, your understanding of how the business works.

So let's just open that line of communication." But I, I, I thought it was interesting. I could tell, and when we were talking about it, that her hesitation towards automating some stuff and using AI was that she was worried about making herself irrelevant. And I, I do think that's the sort of default assumption from, from most workers.

And probably rightfully so. Sure.

[00:05:05] Jeremy Utley: Sure.

[00:05:06] Henrik Werdelin: And do you think... 'Cause I think I've heard other people say that, and then there is a creeping assumption that let's say that this person then come back and saying, you know, "I do 500 tickets every week-

[00:05:22] Ryan Holiday: Yeah ...

[00:05:22] Henrik Werdelin: and now I'm doing it in two days." How do you think you would react if you suddenly then walk in and, and she's gone- Yeah

Wednesday through Friday? Yeah,

[00:05:32] Ryan Holiday: yeah. No, it's, it's funny the, the way I operate my, my company is we're, we're in person three days a week on average. I'm there every day. But it's come in three days a week. If you wanna take a four-day weekend every, every weekend, I don't really care, as long as the tasks get done.

And that's what I say having understood that I put on this person's plate, you know, roughly a, a full-time amount of work. But yeah, as, as the efficiency increases, there is the temptation to just sort of throw more and more at the person and, and effectively to punish them for getting better at what they do.

And so-

[00:06:06] Jeremy Utley: Well, but what, what constitutes a full-time effort of work, right? I mean-

[00:06:10] Ryan Holiday: Sure ...

[00:06:10] Jeremy Utley: if you think about, you know, a college student coming out now, what they're able to do with their full-time job if they're AI augmented, right, and they're proficient, right? The problem is we're not, we're not competing in a vacuum.

We aren't working in a vacuum. Yeah. And what somebody's able to do in a 40-hour job, if it's now 3X, that... At some point you actually have to recalibrate as a business owner, I think.

[00:06:32] Ryan Holiday: Yeah. You know, it, it was interesting that we were talking about Popovich as we were just getting started. I, uh, one of my favorite Popovich stories is about him and Mike Brown when Mike Brown was an assistant, uh, on the Spurs.

Mike Brown, uh, he was going through a divorce, he had his kids that weekend, and, um, he was, he was gonna miss the team plane. He was gonna be like five minutes late, and he, he asked, he texted Popovich and said, "Hey, you know, my kids are having trouble. Can you hold the plane for five minutes?" Which is a big ask in professional sports, as you know.

And Popovich said, "You know, what's the problem?" And he said, "You know, it's my kids. They're having trouble. They're, they're taking this hard. I, I just need five minutes to, to handle this." And Popovich said, "We're not holding the plane, and if you try to make the plane, you're fired." And he's like, "What?" He's like, he was like, "Don't take five minutes.

Take as much time as you need. Just catch a commercial flight and come see us." He's like, "Your family's more important," right? Or he, he was trying to instill in the organization not just sort of lip service to taking care of your family, but sort of actually meaning it. And so here he was sort of intervening.

The person is trying to put work over family, and, and he's like, "No, no, no, no, no. Not on my watch," right? And so I do think, uh, part of what we're talking about and part of leadership and part of creating a good culture and part of being able to ride the ups and downs of trends and things is to put your people first, you know, and to actually think about what they need and what's going on with them, not just to see them as sort of widgets or sort of numbers on a spreadsheet.

And so I do think depending on the job, depending on the role, depending on the health of the organization, you gotta sit down and think like, "Hey, uh, if, if I was in this person's shoes and I was suddenly getting more and more effective at what I did- And, um, my boss was effectively punishing me, uh, for, for having done that."

What kind of overall message am I sending to the rest of the organization where I'd also like to pick up efficiencies? And, and so I, I think re- really thinking about not just the individual, but how you treat that individual and the message that sends throughout the culture really matters as well.

[00:08:38] Jeremy Utley: You know, one, one thing I've been thinking a lot about, I do- Are you familiar with Clayton Christensen's- Mm-hmm

classic article called How Will You Measure Your Life?

[00:08:45] Ryan Holiday: Yes.

[00:08:45] Jeremy Utley: It's one of my favorite pieces ever. For folks who aren't familiar, the basic premise is humans tend to orient towards that which is measurable. And in- Yes ... work, you're, you get rewards on a quarterly basis, right? Whereas at home, maybe you'll get the reward of having raised a good family for like 20 years, and he- Yeah

observed that his classmates never implemented a strategy of ending up in jail or being estranged from their children, and yet 20 years later they effectively implemented that strategy because on the margin they would invest time where they could get a more kind of visceral return, right? Yeah. And the more visceral return is at work.

I feel... And, and the reason I keep thinking about that, and you, you made me think of it with the Popovich story, is he wrote that before AI.

[00:09:29] Ryan Holiday: Yeah.

[00:09:30] Jeremy Utley: Yeah, sure. Right? Before social media even, right? And at, even at that time, humans were so grappling this, with this question of how do I invest the marginal unit of my time?

I think now AI has only turbocharged that, the trade-off that people are dealing with.

[00:09:46] Ryan Holiday: Yeah. I mean, that's right.

[00:09:47] Jeremy Utley: And, and, and to me, the, the tradition of stoicism I think has a lot to say. Can you talk about some of the practices that the stoics have advocated as our kind of resident stoicism expert? What are some of the practices that stoics have been advocating for millennia to stay grounded in that which you could say is much harder to measure, but much more important?

[00:10:11] Ryan Holiday: Well, I think the, the, the first thing I would point out, and this is something that can be easily missed, is we tend to think of the past as stable, right? So you, you look back and you go, "Okay, this is what ancient Rome was like. This is what ancient Greece was like. This is what the past was like." Except for the past doesn't exist, right?

The, the past was always the cutting edge future at every moment. E- every time in history was basically, with the exception you might argue of the Dark Ages, but the, you know, obviously they don't understand they're living in the Dark Ages, the most advanced time in human history, right? Like, they are dealing with technological disruption.

They're dealing with the collapse of the old way of doing things. It feels like traditions aren't as true anymore. It, it feels like there's a new generation coming. It feels like things are speeding up. This is, like, the, the most perennial of all the emotions and feelings, like, the idea that, like, "I don't know how things are gonna turn out."

That's how every time in human history- It's always been ... has ever existed. Mm. And this is something, you know, when you, when you look at Marcus Aurelius' Meditations- He probably talks about the theme of change and the pace of change more than just about anything else, and it's pretty remarkable, right?

He's just over and over again, he's like, "Time is, is sort of racing past us, and nothing is stable." And he says, like, "When you're frightened of change," he says, "you should remember that the status quo that you're trying to preserve was itself a product of change." He was like, "You didn't exist, and then you came into existence.

You yourself are a change and just like you're no longer a 25-year-old or a 15-year-old or a 10-year-old or a 5-year-old or a fetus." Like, we are ourselves always changing and evolving. And he's sort of trying to meditate on change, and one of my favorite little riffs he has is he says, um, "It would take an idiot to feel distressed or indignant about change," he says, "as if any of it lasts."

It reminds me of an expression we have here in the South, which is, "If you don't like the weather, just wait a minute, it'll change," right? And the idea that, that the things that we're stressed about, the things that we're, "Oh, I'm grappling with this, I'm trying to adjust," as if itself is stable and not going to change.

Just a couple of years ago, we were wrestling with one version of AI, and now we're wrestling with another version of AI. And meanwhile, they're hard at work on a- another version that we can't even comprehend yet. And so there's a certain amount of presumptuousness and arrogance and sort of recency bias in, in how we react to what's happening around us.

So I think that's, that's one thing to focus on.

[00:12:50] Jeremy Utley: Does that, does that mean that concern is almost just normalized? Is just to, just to put a word on it, it's like it's perfectly normal to be concerned, and now in this moment where maybe AI is kind of hyper-present in the media and such, it's just that's what people latch th- this feeling of- Yeah

general existential concern. They go, "Well, now I have a name for it. It's AI." Yes. Is it?

[00:13:09] Ryan Holiday: I, I, I think that's right. I, I think first off it would be like, okay, um, i- is it common to be concerned and worried? Yes. Like, we go, "This is unprecedented," as if in fact it, i- it's never happened before. But some version of this has happened forever and always and always will.

Like-

[00:13:26] Jeremy Utley: Continuously,

[00:13:27] Ryan Holiday: yeah ... all, all of this is very precedented, right? The names and the dates and the people and the types of things are changing, but, but the change itself is, is ceaseless. So I think that's first and foremost. The second part that Stokes would have us think about is like, well, where did all that worry get people?

Did fretting about it and worrying about it and catastrophizing about it, did it help any of those people actually adjust to the changes that were, uh, in front of them? Did it arrest any of the change? And you, you might say, like, did it set any of those people up to take advantage of that change?

Probably in some cases, but in, in most cases, also probably not. And so I think what, what Stoicism is trying to be, and sometimes people mistake it for kind of emotionlessness, I think a better way to describe it would be- Kind of a, an even keel, not getting too high or not getting too low. Marcus Aurelius in Meditations talks about being like the rock that the waves crash over, and eventually the sea falls still around it.

It's actually similar to a thing that the Buddhists talk about, which is, you know, you take a cup of, say, water from the river. At w- it, it looks clear in the river, then you grab it in a cup and you can't see through it. But if you let it settle for a second, you know, all the silt goes down to the bottom and then it becomes clear again.

[00:14:44] Henrik Werdelin: Would you then argue that agents, AI agents are very stoic? I guess they're always in the moment because they only respond to a prompt you have, and they are pretty even-keeled 'cause they don't have emotions. I mean, that seems very stoic.

[00:14:59] Ryan Holiday: That's

[00:14:59] Jeremy Utley: true. They are stateless in a sense. Yeah, that's

[00:15:00] Ryan Holiday: a good point.

There, there, there is an emotionlessness to it. I, I mean, it... My favorite and also most exasperating part of AI is the complete shamelessness with which it will make mistakes. I was with my, my boys the other day, we were walking back from breakfast and, and we f- they found this quarter on the ground and that they'd never seen a quarter like this.

And I go, "Oh, I wonder if it's, like, rare." And so we take a picture of it and we ask ChatGPT what, who was on the quarter. We hadn't recognized the person. And, uh, it promptly goes back, "Oh, um, you're not holding a quarter. That's actually a penny." And I go, "Uh, no, it's definitely a quarter." A- and it goes, "I, you know, I could see why you would think that it's a quarter, but it's in fact a penny."

And I go, "Look, man, it's a fucking quarter. I don't know what to tell you. Like, uh it says on the front it's a quarter." And then it goes, "Oh, you're right to push back. Okay, it is a quarter." You know? And then, and then it's like, "Uh, that's, uh, that's, uh, Eleanor Roosevelt on the quarter." And I go, "I'm pretty sure it's not Eleanor Roosevelt, you know.

It doesn't look like her at all." "Oh, y- you're right to push back. It's not Eleanor Roosevelt. It's, it said some other person." Finally, I was like, "Just give me a list of all the women who have ever been on commemorative quarters." And then I scrolled through, I figured out who it was. But the, the interesting thing to me is, is like, I was getting frustrated having been bullshitted, like, so many times in a row.

But it not only didn't pick up on any of my frustration, 'cause it's not a person. Like, if I was talking to an actual human assistant, this exchange would've been getting heated. But also, just the confidence and the shamelessness with which it gave me the wrong answer every time, like a fresh slate, right?

Because- Did you

[00:16:38] Henrik Werdelin: see the, did you see the meme where, a meme where, uh, somebody asks an AI where, uh, the wives asks the husband, "So did you get the breakfast done for our kids?" And he goes like, "Sure." And she walks to the kitchen, there's nothing there. She goes like, "But you haven't made anything." He's like, "Oh, my bad.

I

[00:16:53] Ryan Holiday: mis- Yeah, yeah ... I made a mistake." Exactly. And, and like, um, the way it senses our emotions in that it's like, "Oh, you wanna be congratulated for proving me wrong." Mm-hmm. Mm-hmm. And, and so, so anyways, it is a, it's an interesting insight reflected back of human psychology, both like- Why we have things like shame and responsibility and accountability, but also how much our emotions can distract us from, from the point as well.

[00:17:21] Jeremy Utley: Okay. So one, one quote I wanna give you here. You've written a lot about this. Seneca's quote, "No man was ever wise by chance."

[00:17:30] Ryan Holiday: Yeah.

[00:17:30] Jeremy Utley: Just tell us why is that so important right now?

[00:17:33] Ryan Holiday: Well, I, I, I actually use that quote. He- It's part of a larger story that he tells, and I, I, I talked about it at length in, in the book I just did on wisdom.

He was talking about this wealthy Roman who wanted to seem smart. So instead of, you know, doing the reading or going to school or getting tutors, he hires a, a collection of educated slaves. And i- in Rome, often, uh, slaves were captured from other countries, you know, sort of smart people, and then they were used as tutors.

So most wealthy Romans would've, would've had slaves as their teachers or freed slaves as their teachers. So anyways, he hires these educated slaves. Instead of having them teach him, he just used them. So if he needed to say something smart, he would have them whisper it in his ear, or he would have them feed him answers like at dinner parties and stuff.

And so finally, uh, you know, he kind of thinks he's getting away with it and, uh, a friend comes up to him and says, "You know, I've noticed, you know, you're so smart. You, you've been entertaining us all at this, at this dinner party." He says, um, "Well, uh, have you ever thought about taking up wrestling?" Uh, that being o- obviously one of the other sort of dominant areas of ancient life, the sport of wrestling.

And the man says, "I'm old. I, I'm way too old to pick up wrestling. What are you talking about?" And the guy says, "Ah, but your slaves are still young." Um, and, and his point was you just can't have people do this stuff for you. You think, you think you can outsource wisdom, but you can't outsource wisdom just as you can't outsource exercise because the point is, is that it's a byproduct of the work that you do, not a, a thing that, that you're a conduit for.

[00:19:08] Henrik Werdelin: But do you think that is the same thing for technology? I mean, like, you can still be smart and use your GPS. You know- Yes ... there are cer- Mm-hmm ... there are some technologies that we've adopted that then have meant that we've lost the ability to do that thing, but we- Yeah ... are probably not less smart off it.

[00:19:24] Ryan Holiday: I, I think that's a great point, right? They call this sort of cognitive offloading or cognitive surrender. And so there are some tasks that are not really worth doing that you can probably surrender. Uh, I, I've certainly surrendered whatever ability I had to navigate before GPS, uh, not that it was particularly impressive, a- almost entirely to, to my phone.

But I can't call myself a navigator, and I wouldn't consider myself good at navigating just 'cause I rarely get lost, right? Like, that's a function of the GPS, not anything that I have. And I think wisdom, uh, is a good example of something that, you know, you can have access to lots of information- It's not quite the same thing as being wise or, or, or, or even smart.

And so to me, it, it's, um... It's like, look, you can have ChatGPT write your essay, but the point was never to write the essay. The point was to be the person on the other side of writing the essay who has clarified their thinking.

[00:20:19] Jeremy Utley: The person who could write that essay.

[00:20:21] Ryan Holiday: Exactly.

[00:20:22] Jeremy Utley: Exactly. Exactly.

[00:20:22] Henrik Werdelin: I have another question on that.

I did, uh, did a keynote last week, and one of the points was that the way we can scale ourselves as humans is that we can start to understand ourselves better. Because if we can- Mm-hmm ... understand our principles and our values and stuff like that to an agent, it can do most of us while we're asleep, right?

And then I was, uh, preparing to this conversation, you know, I obviously read through the synopsis of your books, and they ... I read Ego is the Enemy, and I was like, "Huh." And I realized the book has, has a slightly more refined kind of like concept to it. Yeah. But I was curious on your observation about this idea of the role of the ego and the importance of it in a world where you're now trying to make mini versions of yourself by using AI.

Well,

[00:21:06] Ryan Holiday: I haven't, I haven't thought about the sort of, uh, uh, sort of replication connection to ego, but I'll... I will give you an example of where I think ego and, and AI are, are really interesting. One of the things you find about ego and egotistical people in real life is how easy they are to manipulate, right?

Like, how do you get an egotistical person to do something? Uh, you make them think it was their idea, right? Like, they're susceptible to flattery, they're susceptible to bullshit, they're susceptible to wishful thinking. The ego would... I would make distinct from confidence. Ego is a, a vulnerability, right?

That most sort of astute, uh, observers of other people sort of lock onto very quickly. And so one of the things that I think is interesting, we were just talking about AI and how it, it tends to tell you what you wanna hear, or couch its answers in kind of flattery or enthusiasm as a mask for sort of a, a bad answer.

Um, I, I think it's gonna be interesting to see how that interplays with these agents. O- One, I think it, it, it probably makes you susceptible to some of these sort of, uh, weird- Delusions ... parasocial relationships and delusions that can happen. But I also think it makes... It's like one of the things you, you learn as...

uh, I've learned as a writer is like, often when you find exactly what you're looking for, you have to be really suspicious. You have to look for disconfirmation of, of, of what you're after, because you can end up finding something that's too neat or that you're missing some obvious part of it. And so, so how do you maintain that sort of skepticism, that healthy distance, when really smart people have coded a really smart machine to, to prey on precisely that vulnerability?

I think that's gonna be interesting.

[00:22:52] Jeremy Utley: Well, well, one, one thing I would say, Ryan, I, I read... You've got a great post of, uh, 26 kind of habits for 2026 or, or something like that. And one thing I was thinking about is y- I don't know that you framed them as necessarily anti-AI or opposite to AI, but one of the things I thought of is there's actually a number of them which are amplified by AI.

For example, embracing contradiction, seeking out people who disagree with you, creating a second brain, assembling a board of directors. All these things, they aren't opposite to AI necessarily, a- and in fact, they could be augmented or even amplified by AI. For example, give red team this idea, play devil's advocate- Mm-hmm

to poke holes in my thinking. AI's incredibly gifted at that. The problem is the human operator has to be proactively looking for it.

[00:23:36] Ryan Holiday: You have to want it. And- Yeah. Like, I think people think like, "Oh, I can use AI and it'll write for me," right? And I actually find the writing tends not to be very good. But what I do use it for is I'll upload paragraphs and I'll go, "Tell me what's wrong with this."

Mm. "Tell me how it could be better. Tell me where I have errors. You know, tell me how it could be shorter. Tell me how it could be tighter." Which, by the way, is the same set of prompts that I give my human editor and my research assistants and my friends- Yes ... right? Yes. Like, it, it's just, what it really is, is a way of scaling something, uh, that like, you know, I might hire an external editor on my books.

You know, it's not cheap, but it, it makes the project better, and I can afford to do that. I'm probably not gonna hire five, right? I'm not gonna hire 10 external editors. Um, but if you told me that by hiring one, uh, I, I might get two or three for free, I'd be interested in that, right? And so, so I think what AI can allow you to do is scale some of this stuff.

Again, you have to have the desire, you have to have the interest, you have to have the openness to it. Um, and there probably is, at this point it's not currently as good as some of the best people in the world at these things, but sometimes

[00:24:48] Jeremy Utley: you don't- But very few have access to the very best, right? Very few could afford- Yes

the very best. And for, for most people, the choice isn't between the very best editor and a mediocre one. Right. The choice is between no editing whatsoever, right? And- Totally ... but this, this I think actually, it, it, it's emblematic of something I've observed, which is AI tends to amplify the underlying bias, right?

Yeah. So one of the, one of my obsessions in kind of the problem-solving innovation space is this bias of satisficing or being satisfied by that which is good enough, right? Herbert Simon. Mm-hmm. And humans tend to, to, just to settle, right? Which is fine most of the time, but when you're trying to innovate, it's actually, that's a, that's a problematic cognitive bias.

And what AI does is it gets somebody to mediocre much more quickly. And so if you're the kind of person who's gonna settle for mediocre, AI is your enemy. However-

[00:25:35] Ryan Holiday: Yeah ...

[00:25:36] Jeremy Utley: if you're the kind of person who's gonna push for exceptional, you've never had a better thought partner than AI, right? Which is to say it amplifies what's underneath.

And I think- Sure ... with cognitive offloading, I get this question all the time in keynotes, what about cognitive offloading? And to me, what I always say is, "It's gonna make you more of what you wanna be." If you wanna be lazy, it will help you be far lazier than ever.

[00:25:57] Henrik Werdelin: That'll

[00:25:58] Jeremy Utley: be me. But to your point... No, you don't, Henrik.

You definitely don't. Ryan, to your point, if you want a thorough teardown of where this paragraph sags, it will be more brutal than your most brutal human editor, right? Because in a sense, it's a mirror of your underlying desire. What do you think about that?

[00:26:16] Ryan Holiday: Yeah, I think that's right. And, like, look, if you're gonna be satisfied with the first thing that you get from ChatGPT, you're probably the same person who was just satisfied with whatever Google kicked up five years ago, right?

So- Right ... whether your base AI answer is better than your base Wikipedia answer, I suppose, is interesting, and you could compare them side by side. But basically, the person who's gonna settle, or take things at first glance, or not question appearances, they're gonna be doing this either way, right?

E- exactly. And so- Yeah ... in a way, what's interesting for me is, you know, I started as a research assistant for a great writer, and I watched him sort of throw out most of what I brought to him, because it either wasn't good enough or he had enough prior knowledge to know, hey, that wasn't accurate enough.

I worked for this great writer named Robert Greene. Watching him be so exacting and diligent in how he worked obviously shaped me, working for him and then as a writer, and it's something I try to implement with the research assistants that I use. And now, as you bring AI into the toolkit, it's the same thing.

It's like, "Hey, this answer is interesting, but, like, show me where it's from. I don't just want this quote, I wanna see the context the quote is in. Oh, okay, actually it's from a book," you know? And now I'm looking at the book, and I trust the author, and... Okay, I'm gonna use this. And then it's like, oh no, this is actually a quote that you just found on azquotes.com and isn't actually real, or the person saying it meant something very different than what the sort of perverted understanding of it means.

So one of the things you just learn is that the research people can surface for you, or the things people track down for you, is really just the first step in a several-step process before any of it can actually be used. The funny thing is, like, I know this when I'm researching, you know, when I'm writing something about Lincoln or whatever, um, and I'm very diligent. And then I'm like, "Hey, I've got this rash," and ChatGPT tells me that it's this.

I'm like, "Okay, that's what it is." You know? And

[00:28:28] Jeremy Utley: you don't push, you don't push for the level of depth- Exactly ... that you did for Abe Lincoln. Is that what you're saying? There's a term

[00:28:33] Ryan Holiday: for that, right? Yeah, 'cause you don't know. You don't know what you don't know, right? And there's something, I think it's called the Gale Amnesia effect or something like that-

[00:28:40] Jeremy Utley: Yes,

[00:28:40] Ryan Holiday: yes

where you tend to question media coverage of things you have some level of expertise about, and then you're pretty deferential when it comes to topics you know nothing about. Can I ask on

[00:28:51] Henrik Werdelin: the, that? As I understand it, Stoics, uh, warn against outsourcing judgment, right? And what I think a lot of people now is that they ask ChatGPT, "Should I do X or Y?"

Yeah.

[00:29:05] Ryan Holiday: Yes.

[00:29:05] Henrik Werdelin: What can we learn from the Stoics on becoming better or more comfortable in our own decision-making?

[00:29:13] Ryan Holiday: Yeah, it's a good question. It's actually funny. In the opening pages of Meditations, one of the things that Marcus Aurelius, um, acknowledges he learned from his adoptive father, who was his predecessor as emperor, was his, uh, willingness to cede the floor to experts and to listen to them and ask probing questions.

Which, you know, again, we're talking 2,000 years ago, it's interesting how sort of timeless some of the skills can be. But I think judgment is an interesting thing because everyone thinks they have it, and then on average, obviously we don't, right? Like, most people don't have good judgment, and, and that's why most people struggle to do things or get themselves into trouble or make bad decisions.

And I think even individually, our judgment... we can have great judgment in one area and pretty poor judgment in another. And so, you know, this idea of, "Oh, I'm just a very gut-based person, I trust my gut," I'm always interested in whether that person has done the work to have that trust.

We'll make these sorts of snap decisions that feel like they're based on a lot of experience or a lot of knowledge. Then you kind of get under the hood and you go, "Actually, you don't really know anything about this or anything like this." And that deference, or that certainty in the judgment, is really problematic.

[00:30:30] Jeremy Utley: By the way, Ryan, you know what we've learned as hosts? We probably only release 75% of the interviews that we do. Oh, interesting. Because we realize fully a quarter of the... Which, by the way, is a better hitting percentage than most VCs, right? Sure. So I don't mind. But fully a quarter of the people who get recommended to us... Henrik and I get halfway through the conversation, and we go, "I'm not sure there's actually a lot of depth here."

[00:30:55] Henrik Werdelin: Yeah.

[00:30:56] Jeremy Utley: And this is a person who is a, you know, they're the head of blah, blah,

[00:30:58] Henrik Werdelin: blah. It'll

[00:30:59] Jeremy Utley: be so

[00:30:59] Henrik Werdelin: awkward now, Jeremy, if we don't realize this. Yeah. This will be the most awkward interview ever done. If

[00:31:03] Jeremy Utley: we don't release it. You know, Ryan really didn't have that much to say about, uh, you know, stoicism and wisdom in the modern age.

It's weird.

[00:31:09] Henrik Werdelin: Speaking of our own egos being inflated, you were like, "What did he know about stoicism?"

[00:31:14] Jeremy Utley: Uh, okay, hang on. One, one thing I wanna, I wanna get at. Yeah. I saw, I, I saw you say this the other day, which I thought was great. The essential skill of our time isn't prompt engineering or coding. It is having a finely tuned bullshit detector.

[00:31:28] Ryan Holiday: Yeah.

[00:31:30] Jeremy Utley: How does one develop the detection ability?

[00:31:34] Ryan Holiday: I mean, I wish I could give you a great answer. I do think that is sort of the skill of our time, and it was clearly a skill most people didn't have well before AI. We live in a time of conspiracy theories and- Misinformation and nonsense and sort of popular delusions of all types.

And so to add in there, you know, a very savvy sort of technology that is ultimately a consumer product, and consumer products do not wanna tell you no, or I'm not sure, or I can't do that. Um, and so you do end up getting these sorts of hallucinations or these fudging, hedging answers.

And being able to separate something that merely seems true, or is close to true, from what's actually true is gonna be really important. I would say, though, practically, you know, one of the interesting things about AI is, because it's not a human and because you can't hurt its feelings, just some simple habits, like, "Are you sure?"

You know, or, "What's the evidence for that?" Or, "What I heard was X, Y, and Z," right? One of the things I do when I'm messing around with it is, not only do I instinctively check all the work, but it doesn't cost anything to make it do it again. Right. Right? Like, there's a famous story about Kissinger where he asks someone to give him a report, and they give him the report, and he sends it back and says, you know, "This is wrong.

Do it better." The guy comes back. The report's tighter, better researched, whatever. He says the same thing: "No, this is still wrong. I told you to do it better." And he sends it back again, and finally the third time Kissinger says, "Okay, now I'm gonna read it." Because humans, just like machines, don't always give you everything they've got the first time, right?

Wow. And that there's always- That's

[00:33:31] Jeremy Utley: so good ...

[00:33:32] Ryan Holiday: s- some room for refinement. So that's kind of one of the things I think about: there's no cost in me calling bullshit. If I say, like, "No, no, that's not right," the worst thing that's gonna happen is it's gonna waste some compute doing it again.

[00:33:46] Henrik Werdelin: I think that's super interesting.

I was- That's brilliant ... I was, uh, having dinner with a friend of mine, and he's a movie director, and he was saying the same thing but about actors. Mm-hmm. He used to work with a very famous movie director, and sometimes the actors would do a take, and he would go like, "Do that, but just better."

Right? And then he said, "I often feel bad about doing that, and so I'll go in and, like, act out the whole scene like I want it," right? Which obviously is dumb 'cause he's not an actor. Yeah. But I really like that idea of just, you know, saying to the AI, "Hey, I like what you just did there, but just make it better."

[00:34:24] Ryan Holiday: Yeah. That was awesome. Yeah.

[00:34:26] Jeremy Utley: But there are tons of tactics like that. Uh, to me, this is critically important. I feel the question is often leveled as a conclusion. "What about cognitive offloading?" is not an honest question, meaning, "What do I, as a conscious, sentient, well-intentioned human, do, knowing that this collaborator, like every other human I've ever worked with, is liable to trigger cognitive offloading?"

It's framed as a question, but it's really a conclusion. Sure. "Therefore, I'm opting out."

[00:35:01] Henrik Werdelin: Yeah.

[00:35:02] Jeremy Utley: And I think that is really dangerous to say, you know- If I

[00:35:05] Henrik Werdelin: could just add something there, Jeremy, on that. Yeah. Like, I now use all these different agents that run all the time, so I have, like, whatever, 10 agents that do stuff.

And my kind of uber agent, my agent of agents, is called Iggy, and Iggy has this script now that I run every week where it looks through all our transcripts and tries to answer that question. Like, what do I always offload? Where are my biases, without my knowing it? And then sometimes it will prompt me back.

So for example, if I say yes to too many decisions, it'll basically go, "Ah, you seem to be just offloading decision-making to me right now. Are you tired? Do you need to go have a break?" or whatever. And so ironically, I think increasingly, because it's so seductive, you need to create all these protocols for yourself to avoid falling into this super seductive trap, because it is

[00:35:52] Jeremy Utley: awesome.
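The weekly self-audit Henrik describes could be sketched, purely hypothetically, like this. His real setup has an agent review the transcripts; here a simple phrase match stands in for that LLM pass, and the marker phrases, threshold, and names are illustrative assumptions, not details from the episode.

```python
# Sketch of a weekly "offloading audit" in the spirit of Henrik's Iggy agent.
# A real version would ask an LLM to review the week's transcripts; a crude
# phrase match stands in here just to make the protocol concrete.

AGREEMENT_MARKERS = ("yes, do that", "sure, go ahead", "whatever you think")

def audit_transcript(user_lines, threshold=3):
    """Return a nudge if the user blindly agreed too often this week."""
    hits = [ln for ln in user_lines
            if any(marker in ln.lower() for marker in AGREEMENT_MARKERS)]
    if len(hits) >= threshold:
        return (f"You deferred to the agent {len(hits)} times this week. "
                "Are you tired? Do you need a break?")
    return None
```

The point is the protocol, not the matching: the trigger condition could be anything, as long as the check runs automatically rather than relying on you noticing your own offloading.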

Well, that, yeah, and, and that's, that's kinda what I was getting at. To me, the answer is not opt out. The answer is get educated, right? There are techniques. I mean, like the Kissinger, right? That's one of 100 techniques, Ryan. But it's like if Kissinger always said, "Do better"-

[00:36:09] Ryan Holiday: Yeah ...

[00:36:10] Jeremy Utley: that should be a part of AI training programs.

And then every human that collaborates with AI and doesn't do it is uneducated. Sure. And every human that opts out is opting out of tremendous augmentation. But to me, it's very important to realize the answer is education, not opting out. Yeah. And I feel like right now there are all of these issues, which, by the way, have always been present with people.

AI is just trained on the way we think and work and all that stuff, and there's kind of a class of people that are putting up this smoke screen. What about, you know, environmental impact? You know, blah, blah, right? And to me, those aren't actually the issue. It's just a way to justify disengagement, and that, I think, is actually dangerous.

[00:36:53] Ryan Holiday: Yeah. And I would say that even if you don't wanna use it, right? Which, I think there are some ethical reasons to not wanna use it, or you're worried about atrophy, or whatever. You don't wanna use it. I get it. That doesn't mean that everyone else isn't, and so to not have any familiarity with it actually makes you vulnerable to the same sort of sloppy, crappy work product, right?

Mm-hmm. So if I hadn't spent the time I've spent sort of messing around with AI, trying to figure out how it works, using it in some cases and finding it insufficient in others, I would be a lot worse at recognizing, for instance, when I'm getting, like, an AI-written pitch. Or when- Could be a mark

someone on my team sends me something where it's very clear they offloaded this task. So instead of sitting down and thinking about how to actually do this best, they had AI write the CTA for this email or this outline or whatever it is, and that's why it's problematic. And so you have to understand it both offensively and defensively- Yeah

I think. And to not do that is to, i- is to just, as you said, sort of opt out.

[00:38:06] Jeremy Utley: Circling back to the beginning of this conversation, we talked about how the Stoics took change as a given. Yeah. Change as a constant. Recognizing that folks right now tend to think we're in a moment of particular change, are there any mindsets or behaviors, rituals, routines, et cetera, that you would recommend, inspired by the Stoics, in terms of how to successfully navigate change?

How do we do it? What are the things we can implement in our lives that the Stoics knew?

[00:38:37] Ryan Holiday: Well, so the, the basic idea in stoic philosophy is we don't control what's happening, but we control how we respond to what happens. And so, um, the idea for the stoics, it wasn't like, "Hey, these are the things you have to do.

This is how you're successful." Instead, what they tried to be is really adaptable and adjustable to circumstances. That's what Epictetus said he was trying to teach his students: to be the kind of person that, whatever happens, could say, "Oh, that's just what I was looking for," or, "Oh, I can work with that."

And so I think, you know, thinking again about what are the sort of meta-skills, what are the things that aren't going to change, what are the things that are going to remain valuable, even if some of these trend lines continue. That's kind of where I would go as a person: hey, what are the kinds of things that, throughout historical moments, throughout moments of flux and change, have remained unchanged?

And that, that's probably the kind of, um, you know, investment I would make.

[00:39:40] Henrik Werdelin: If I can make an add-on to that question.

[00:39:41] Ryan Holiday: Yeah.

[00:39:42] Henrik Werdelin: If it is about, you know, basically saying there are things I control and things that I don't control, and what I can change is how I deal with it. But increasingly, doesn't AI give you control of a lot, right?

Like suddenly you can write the legal brief without being a lawyer. You can ship an app without being an engineer. And so the nicety of being able to say, "I am not in control of all this, so therefore I just have to deal with my own emotions," is that being kind of degraded now that you are in much more control?

You mean

[00:40:17] Jeremy Utley: almost evasive or like burying your head in the sand? Is that what you're getting at,

[00:40:20] Ryan Holiday: Henry? I, I, I don't think the stoics are saying, "Hey, uh, just focus on your emotions. Nothing is in your control." I, I think when you look at the lives of who the stoics were from, from, you know, emperors to generals, and entrepreneurs, and executives, all these people throughout history, they were always high agency people.

Mm. And so I think, you know, seizing the increased amount of agency that certain technology might offer us doesn't strike me as outside the bounds of Stoicism. I think what they mean is, like, hey, you don't control whether you're living in the middle of an AI revolution or not. You do control how you respond to that revolution: whether you decide to learn about it or not, how you deploy it inside your business or your life or not.

You decide whether you despair and give up and go, "Everything's pointless. We're all gonna be replaced," or not. And so to me, that's what the idea means: most of the macro trends in the world are not up to us, you know, including the weather and the economy and the political situation. We can have little bits of influence here or there on some of those things, but for the most part, the larger macro world is not up to us.

But our sort of, our inner emotions, our actions, our decisions, our, our priorities, these are all things that are up to us.

[00:41:34] Jeremy Utley: Can we wrap maybe on this idea of agency? Yeah. Because I'd love to drill into that. I happen to believe... Henry and I were having a conversation earlier today, actually, um, and agency came up as kind of a defining characteristic.

Not bullshit detection- Yeah ... but agency is right up there- Yeah ... neck and neck with BS detection. You have kids. Mm-hmm. Similar to our kids' age. Henry and I both have kids as well. One of the things I've been grappling with personally, and I'd love to hear your thoughts: how do we cultivate agency both in ourselves, and how do we cultivate an agentic attitude in our children?

You have any thoughts on that?

[00:42:09] Ryan Holiday: Yeah. You know, one of the things I'm always really impressed with with my kids is e- even if I, I don't like the thing, like I'm not a big video games person, I'm always impressed at their ability to suddenly be playing a thing or get good at a thing that I certainly didn't tell them anything about.

Like, the way that they heard about a game from one of their friends, and then watched videos about it, and then fiddled around in it, and then by trial and error got really good at a thing. So my, my son was just... He just asked me for this new computer game, and we were getting it, and then, you know, he was like, "Well, how do I do this?

And how do I do this? And how do I do this?" And I was like, I was like, "I don't know, and I'm not gonna figure it out, 'cause I have no interest." I was like, "I, I'm not gonna help you with this." But I was like, "I know you know how to do this, because I've seen you do it before. I didn't teach you how to play Minecraft."

And so how do we get good at teaching them how to get good at figuring stuff out? To me, that's the main skill. And that's one of the reasons I started, you know, sort of messing around with some of these things: I wanted it to be a thing that we could do together, but that in the process they would take away the meta-skill of being able to use these tools to figure out how to do things.

[00:43:25] Henrik Werdelin: I think maybe that's a good place to wrap. Really, really, really cool stuff.

[00:43:29] Ryan Holiday: This was fun, guys. I really appreciate it.

[00:43:30] Jeremy Utley: Drop the mic. Professor Werdelin, what are, what are your big takeaways from Ryan Holiday, Modern Stoic?

[00:43:38] Henrik Werdelin: I'm so fascinated by the topic, uh, and so was excited to talk to him.

The thing that got me thinking is when he was talking about how, you know, there have been changes before. Everything inside me was like, "Yeah, but not this." And then I talked to my dad the other day, who's 87, and he was telling me about growing up here in Europe during the Second World War, and he goes, "Life was pretty... like, a lot of changes there."

And so I guess the takeaway, maybe not necessarily in how to use AI... I think for me the big takeaway was: this is going to be a big change, but there have been big changes before. And I think the second thing he said is that we can either sit and kind of sulk about it, or we can try to figure out how to do something positive.

And I'm reminded, when I did work trying to get entrepreneurship into organizations, I often would say, "Well, if you want to see yourself as an entrepreneurial person, one of your jobs is to take anxiety on your shoulders for other people." And so when they ask you- Hmm ... "Are you sure that this is gonna happen?"

You have to say, "I'm absolutely 100% sure," even though in your heart of hearts you're like, "Hmm, there's a lot of things that can go wrong here." And so I

[00:44:51] Jeremy Utley: do- No, wait. I'm sure about what? What is... What was your dad's advice? Are you sure

[00:44:54] Henrik Werdelin: about what? Whatever it is. I mean, like entrepreneurship, uh, is always about thinking about a potential future that hasn't happened yet, and then basically articulating to other people, right?

Like, that's often what entrepreneurs do. And so we're kind of like the imposters that go around imposter-ing all the time, right? But I think it's part, part of it is our job to say, give people comfort in saying, "There is a potential path, and while it's not, uh, crystal clear that we should walk it, let's try to walk it."

And I think it's a little bit the same thing with AI. Yes, there are many ways AI could be used, and yes, there are things that could happen that are not ideal for society, and I think we can all worry about that, and we do. But there are also a lot of positive paths we can take. And so we have the choice right now to say, "Do I wanna try to use AI for something good, or do I wanna just sit and be very worried about all these bad things happening?"

And I think that is an individual choice that people will have to make. And so I think the big thing I took away is that that is a moment you can take every second, every day, right? Mm-hmm. Mm-hmm. Mm-hmm.

[00:46:00] Jeremy Utley: And,

[00:46:00] Henrik Werdelin: and I guess I'm choosing to do the former.

[00:46:03] Jeremy Utley: Well said. Well said. I think the point about change... it's interesting that your remark is, "Well, but what about this moment?"

Like, this moment does feel particular, and I think there is a sense in which that's true, right? That this is a different kind of change. However, every kind of change before was, relative to the changes that came before it, a different kind of change. And I find it somewhat uplifting and heartening, the opposite of disheartening.

Can you be heartened? I don't know. But the opposite of disheartening: it's somewhat heartening to realize humans have been dealing with this. The future is unknown and uncertain, and there's great upheaval, right? For example, take the realm of politics. If you think that we're polarized now, and there is data that suggests we're more polarized than ever, when you look at newspaper clippings, even going back to the 1800s, 1700s, whatever, there's vitriol there, and it's the most important election in the history of the nation then, right?

It's almost like every election is the most important because it's the only one that's unknown. And we tend to put excessive valence on this particular unknown. Like, forget the deal you won last year. The deal you lost, you probably don't spend any time thinking about.

Mm-hmm. Why? 'Cause, like, everything worked out, right?

[00:47:27] Ryan Holiday: Yeah.

[00:47:27] Jeremy Utley: But this deal now, that's unknown, there's this anxiety, right? So to me that was a little bit of a heartening message. I think humans have actually been grappling with the question, and the uncertainty, of change for quite some time, and it helps to know that there are traditions that have been developed to help humans grapple with that. As I mentioned at the top, somebody replied to my newsletter saying, "You know, the Stoics have been grappling with these questions for a couple thousand years."

And that's actually what got us in touch with Ryan. To me, when I got that reply, it was a realization: oh wait, there are tools for this. Because right now what's being hyped in the media is that this is unprecedented, there's nothing like it. There's something reassuring about the reminder: oh no, actually, people have always been feeling that way.

Yeah. And they've gotten through it. You know, I, I, I find that uplifting.

[00:48:21] Henrik Werdelin: And also sometimes... I mean, I'm definitely somebody who defaults to worry, so, you know, the concerns about AI are not lost on me. Mm-hmm. But then you start to look at some of the stats on different things, right? You look at when ATMs were introduced and everybody was worried about the people that work in banks, or when McDonald's introduced their self-serve kiosks, and then you read that basically both of those industries hired more people instead of fewer afterwards, right?

Mm-hmm. Or-

[00:48:50] Jeremy Utley: Same with radiologists now, right?

[00:48:52] Henrik Werdelin: So I think there is, like, this tendency to worry, and I do think if we look at stats... I had another person the other day who made the argument that the market, like the financial market, which is this very, very intelligent machine where everybody pours a ton of money in to basically predict what the future will be.

And it thinks it's going to be better than it used to be, right?

[00:49:16] Ryan Holiday: Yeah.

[00:49:17] Henrik Werdelin: And so there are all these data points that we get that things are probably gonna be fine, and then... but then you still kind of worry about it. I guess his point was that it doesn't really matter. We are here today. The weather tomorrow's gonna change, but make sure to look up and check out the sun.

[00:49:34] Jeremy Utley: You know, you know what I was... I haven't studied Stoicism much myself, and so I don't know some of these things, but I was actually... I don't know how we could approach the conversation. One thing I find myself wishing for is some more kind of practical, tangible... Like, for example, I think one Stoic practice is kind of a daily reflection.

[00:49:53] Ryan Holiday: Mm.

[00:49:53] Jeremy Utley: I think that's probably a great idea for all of us, right? Yeah. I think another Stoic practice is something about contemplating your own mortality, recognizing you are going to die; like, the only thing that is certain in your life is that it will end.

[00:50:06] Henrik Werdelin: Yeah.

[00:50:06] Jeremy Utley: You know? And yet, as they say, we treat death as if it's uncertain and life as if it's certain, whereas the opposite is true.

Life is uncertain and death is certain. So to me, if nothing else, I want to pay Ryan the compliment that, uh, he has created in me a, a suction for... I'm curious to go and see what the Stoics have to say about this stuff, 'cause we didn't get into too many tactical details. One thing I did wanted to mention that I absolutely, or one thing I did want to mention.

Wanted. Actually, now that I say it again, it does make sense. Both work. One thing that I want to mention, maybe let's just leave it. Let's just, let's just see what happens. Uh, drop in the comments, Jeremy's a doofus. Um, the Kissinger anecdote. That was worth the price of admission for me. I mean, I'm a, I'm...

And Ryan actually, you know, Ryan will read entire books. I know this because I've read that he does this. He'll read entire books for a single anecdote. He'll read an 800-page book. And I was sitting in a one-hour conversation. The Kissinger anecdote is worth the price of admission alone for me.

I mean, that is such an exceptional practice for AI.

[00:51:20] Ryan Holiday: Yeah.

[00:51:20] Jeremy Utley: Whether he knew it or not. And I would say I don't implement it nearly enough. But if you want the playbook here, if you want to write, maybe just have your AI take dictation for a second. Whatever the AI says, say, "This is wrong. Do better," without reading its response.

And then when it comes back, say, "This is wrong. Do better." And then when it comes back, say, "Now I'm going to read it for the first time."
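The "Do better" playbook Jeremy describes could be sketched, purely as an illustration, like this. The `ask_model` function here is a made-up stub simulating a model whose drafts improve when pushed; a real version would call whatever chat API you actually use.

```python
# Hypothetical sketch of the "Kissinger loop": reject the model's first
# answers unread, then read only the final one.

def ask_model(messages):
    # Stub standing in for a chat-completion API call. It simulates a model
    # that produces a better draft each time it gets pushed back.
    drafts = ["first draft", "second draft", "third draft"]
    pushes = sum(1 for m in messages
                 if m["role"] == "user" and "Do better" in m["content"])
    return drafts[min(pushes, len(drafts) - 1)]

def kissinger_loop(task, rounds=2):
    """Send the task, reject the answer `rounds` times without reading it,
    then return whatever comes back last."""
    messages = [{"role": "user", "content": task}]
    answer = ask_model(messages)
    for _ in range(rounds):
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": "This is wrong. Do better."})
        answer = ask_model(messages)
    return answer
```

As the hosts note a moment later, softer phrasing than "This is wrong" may steer a real model better than a blunt rejection.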

[00:51:46] Henrik Werdelin: That's funny.

[00:51:46] Jeremy Utley: And that to me, it's not, I mean, it is funny, but it's so true.

[00:51:50] Henrik Werdelin: Yeah, yeah, yeah.

[00:51:51] Jeremy Utley: It's so true. And insofar as I- Do

[00:51:52] Henrik Werdelin: you think technically if you say to an agent, "This is wrong.

Do better," that it would be so, uh, ready to please you that it might actually go the wrong way because you're telling it it's wrong?

[00:52:05] Jeremy Utley: Maybe what I would say is not "This is wrong," 'cause I agree, it might go the opposite direction, which could be interesting. But what I would say is, "It feels like you didn't exert sufficient effort and thoughtfulness- Hmm

to do an exceptional job here.

[00:52:18] Ryan Holiday: Do

[00:52:18] Jeremy Utley: better." I think you can do better. Or even... Like, I've actually heard a lot of people in our friend group say things like, "I think you can do better," and AI does better, right? Yeah. I would say I've known about that as a tactic, but I haven't incorporated it into my standard practice.

And granted, what does it cost? Basically, it's only tokens. Yeah. And, you know, to environmentalists perhaps that's not zero, but meaning, to a human it's another two weeks of work. Yeah, yeah. To an AI, it's literally another 15 seconds. And if you can actually improve the output in the proportion of two more weeks of work for 15 more seconds, then the only thing that's keeping you from getting that better work output is having the instinct of Henry Kissinger to tell the AI you think it can do better.

Uh, you know what Steve Jobs used to do? Have you heard this? I read this in Isaacson's book. When he had an audacious goal, he would often tell someone, he'd look them in the eye and say, "Don't be afraid."

[00:53:17] Henrik Werdelin: I now have, like, a lot of people sitting listening to the podcast just shouting at us, "Do better."

So, like, that's where I'm going. Maybe, maybe we should end it there, and then if people send us an email with "do better," we will send them a book. Isn't that like-

[00:53:30] Jeremy Utley: That's a terrible- ... the sort of like pact? Well, hang on. There's, there's one other thing that I wanna cover- Okay. Uh- ... just while we're rattling through- Yeah, yeah

while we're rattling through kind of... Maybe, just like to a hammer everything's a nail, to a professor everything's a, you know, education problem. This became clear to me this week when, again, for the thousandth time, somebody asked me... I'm in a room of 400 people and there's a 10-minute Q&A, and five minutes are devoted to "What about cognitive offloading?"

And again, to the honest question asker, that is a legitimate... There are legitimate strategies for addressing that. My feeling is most of the time that's a conclusion someone is leveling about why they're disengaging-

[00:54:08] Ryan Holiday: Hmm ...

[00:54:08] Jeremy Utley: not a question they're earnestly asking. And I think it's at, at the very least, it's a good kind of personal check.

I mean, it's probably impossible that anybody listening to our show is in that camp, but it is worthwhile for folks to realize, you know, listeners to our show are probably the evangelists in their company. When people ask that question, you have to realize they actually aren't interested in the answer.

They're telling you... Or your approach will change, maybe. The way you engage with that person is gonna change when you realize what's actually going on: it's not that they need tactics; it's a different mindset. I'll leave it at that.

[00:54:47] Henrik Werdelin: Amen. I think, uh, I think that is all for the show. So I guess we only really have one thing left to say, and that is...

[00:54:57] Jeremy Utley: Bye-bye.

[00:54:57] Henrik Werdelin: Bye.