Beyond The Prompt - How to use AI in your company

Can AI Replace Me? Evan Ratliff on Letting an AI Clone Live His Life

Episode Summary

What happens when your AI clone starts answering phone calls for you—even with your friends and family? In this episode, journalist Evan Ratliff shares the story behind Shell Game, a six-part podcast exploring the unsettling, often absurd world of voice cloning and identity. From real-time scam baiting to AI therapy, Evan walks us through his experiment building and deploying a voice agent that sounded just like him—and what it revealed about technology, trust, and the future of being human.

Episode Notes

In this episode, Evan Ratliff, journalist and creator of the podcast Shell Game, shares the wild and personal story behind his experiment in AI voice cloning. What began as curiosity turned into a six-month dive into building an AI version of himself—one that could answer phone calls, conduct interviews, and even fool friends and family. From scamming the scammers to testing AI therapy, Evan walks us through what it’s like to put a synthetic version of yourself into the world and watch how people respond.

The conversation explores the uneasy collision of identity, automation, and ethics. Evan talks about the emotional reactions people had when they realized they weren’t actually talking to him, the disturbing effectiveness of AI in fraud, and the strange intimacy of hearing your own voice say things you didn’t write. He also reflects on what it means to resist optimization—not because tech can’t help, but because some parts of life aren’t meant to be outsourced.

This episode is a human story wrapped inside a technological one—about trust, loneliness, and how we navigate a world where even our voices aren’t entirely our own.


LinkedIn: Evan Ratliff | LinkedIn
Website: Evan Ratliff – Journalist
Shell Game Podcast: Shell Game | Evan Ratliff

00:00 Intro: Thoughts on AI Deception
00:40 Meet Evan Ratliff: Technology, Crime, and Identity
01:13 The Shell Game Podcast: Exploring AI Voice Cloning
03:50 Challenges and Improvements in AI Voice Technology
04:57 Inspiration Behind the Voice Cloning Experiment
11:05 Practical Applications and Ethical Considerations
17:31 AI in Scamming: Risks and Realities
25:04 Protecting Yourself from AI Scams
27:49 Reflecting on Technological Change and Human Adaptation
29:59 The Reluctance to Embrace New Technology
30:36 The Dangers of Social Media
31:59 AI in Therapy and Personal Experiences
33:39 Creating an AI Agent of Yourself
38:09 The Challenges of Small Talk with AI
38:55 Personal Tech Stack and AI Usage
42:59 Balancing Efficiency and Meaningfulness
45:32 The Future of AI and Human Interaction
52:18 Concluding Thoughts and Reflections

📜 Read the transcript for this episode: Transcript of Can AI Replace Me? Evan Ratliff on Letting an AI Clone Live His Life

Episode Transcription

[00:00:00] Evan Ratliff: I mean, I think it's upsetting to be tricked in general. And when it comes to friends and family, to kind of start speaking to something, someone, as if, oh, hey, what's going on? And then, even if it happened really fast, over the course of 30 seconds of realizing it, you feel like an idiot, 'cause you've just been talking to no one. And it's just the idea of being fooled, in some cases. But then in other cases, I had another friend who just said in the moment, I feel so lonely. Because fundamentally, when you realize it, you realize you're just talking to yourself, you're just talking to no one.

Wow.

Hi, I'm Evan Ratliff. I cover technology, crime, and identity, where those three things intersect. I write for Wired magazine and Bloomberg Businessweek, among other magazines. I have a podcast called Shell Game. I also wrote a book called The Mastermind, about a criminal cartel. And today I'm looking forward to talking about how AI is infiltrating our identity.

[00:01:06] Henrik Werdelin: So Evan, I have had the pleasure of listening to the podcast, Shell Game, and reading a lot of your stuff. Would you mind, in your own words, kind of explaining what Shell Game is? And then I will pepper you with questions after that.

[00:01:22] Evan Ratliff: Sure. So Shell Game is a six-part podcast. It is an immersive journalistic experiment into how it will feel when more and more of the voices around us are artificial intelligence, or otherwise fake. The premise of the show was that I started messing around with voice cloning, trying to figure out what it meant for me as someone who does a fair amount of podcasting in addition to my writing and other journalism. Then I had the idea of hooking it up to a chatbot, then hooking that combination up to a phone line, eventually my own phone line, in the form of what everyone now knows as a voice agent. And then I set that voice agent loose in the world as me, including for work purposes, to do interviews and things like that, including dealing with customer service and dealing with scammers. So I set up a specific line to attract scammers and telemarketers,

and then I had to,

[00:02:29] Jeremy Utley: Oh, that's great.

[00:02:29] Evan Ratliff: uh, answer those calls, and then eventually also my friends and family. So it was this voice attached to my mobile number. I could call someone, or it could call someone on my behalf; they would think it was me, and they would actually get a voice agent with my voice, powered usually by ChatGPT, sometimes by another model. I did that for about five, six months, and the show is kind of the story of each of the ways I deployed it and how people responded.

[00:03:03] Henrik Werdelin: It is very cool. If anybody hasn't heard it, they should go listen to it. And also, I think I saw that NPR did a one-episode version of the whole thing together with you?

[00:03:14] Evan Ratliff: Uh, yes. Radiolab did an episode that was basically a merging of two episodes of the show into one episode of Radiolab.

[00:03:25] Henrik Werdelin: Okay. Many, many, many questions. But let's start with: I guess it's six months ago or so that you did this, maybe even a little bit longer?

[00:03:33] Evan Ratliff: It started coming out, God, it's been almost a year. It came out starting in July and went through August. The last episodes aired in August.

[00:03:42] Henrik Werdelin: And so if you were doing it again, what do you think would be the biggest difference now from when you did it back then?

[00:03:50] Evan Ratliff: The biggest difference now is latency. That's the number one difference. Latency being the time it takes for the chatbot to formulate a response, and then for that response to get translated into voice and delivered on the phone. That was the biggest giveaway for the agent when it talked to people: it just took a little long to respond. Still, a lot of people did not recognize that they were talking to AI rather than a human, but most of the time that it gave itself away, that was the main reason. And actually, I just used it on stage at an event in Portland last week, and it's significantly better than the last time I used it, and I'm using the same services I was using before. They've just tweaked it to be a little bit better. It still has some cadence problems. You know, it still has a lot of problems, and you can still detect that it is AI, but the technology has advanced as I expected it probably would.
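Evan's point about latency can be made concrete with a rough budget: each stage of a voice agent's turn adds delay, and the total easily overshoots the sub-second gap people expect in phone conversation. The stage timings below are illustrative assumptions, not measurements from his setup:

```python
# Rough latency budget for one turn of a phone voice agent.
# Every figure is an assumed ballpark, not a measured value.
STAGE_MS = {
    "speech_to_text": 300,   # transcribe the caller's last utterance
    "llm_response": 800,     # the chatbot formulates a reply
    "text_to_speech": 400,   # the cloned voice renders the reply as audio
    "telephony": 150,        # phone-network transport both ways
}

total_ms = sum(STAGE_MS.values())
print(f"round trip: ~{total_ms} ms")  # well above typical human turn-taking gaps
```

Shaving any one stage, with faster models or streaming synthesis, is why the same services felt noticeably better to him a year later.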

[00:04:48] Jeremy Utley: Before we dive into what you learned, which I'm really eager to hear about, I am an idea guy. I spend a lot of time thinking about where ideas come from. Can you talk for a second about what inspired this whole experiment?

[00:05:03] Evan Ratliff: Yeah, I had been interested in voice cloning for a while. I think for anyone who does any kind of audio, it comes up. There's this company called Descript that makes podcast editing software, and I worked with one company that was using it. You sort of notice in there that they have this thing where they can clone your voice; they've had it for a couple of years, actually. They can use that to fill in, instead of doing what we call pickups, where you have to go back and rerecord something, they can use that voice to maybe add a line somewhere. The company I was working with did not actually use it for that purpose, but it did get me interested in voice cloning.

[00:05:44] Henrik Werdelin: That's what we do on this podcast. We will actually not use any of your stuff except for what we just recorded.

[00:05:49] Jeremy Utley: It's just a voice sample, and then we just get you to say whatever we want. It's great. It's great.

[00:05:53] Evan Ratliff: I mean, that's not that far off. There have got to be people, and we can talk about this later, my main interest is in crime and scamming, so there have to be people already running a similar type of podcast. In any case, I was interested in it, but I was also, I don't know, by the fall of 2024, I was just sick of hearing about AI. I thought, I don't wanna cover it, I don't want anything to do with it. I was covering other things. And then I had this moment, and it wasn't like anything prompted it, but I sort of realized, oh, is this just being old? I used to be a person who embraced every new technology. When Vine started, I got on Vine. I would do that both for my job, 'cause I write about tech a lot, but also because I was a person who was interested in the future, and I worked at Wired magazine. And I was like, actually, am I becoming a person who just refuses to address the new thing? So I thought, you know what, voice is gonna be the thing. I'm gonna clone my voice and see what it's like. Once I did that, I started leaving people voicemails, just calling up and playing a recording into their voicemail, and fooling them with it a few times. And I was like, well, that's fun. But then I thought, what if I could just call with this thing? Hook it up to ChatGPT? At the time there was a plugin you could use that sort of made this possible, and that got my old tinkering senses going. I spent weeks making this thing so I could make phone calls with it, and I started calling my wife with it. Some of that is in the show, my very early efforts. And I was so proud of myself for building this voice agent.
Then I discovered that there were like five companies that had platforms where you could just do this. You sign up, you attach it by API to, like, ElevenLabs, where you clone your voice, and they just did it all.

So the show, after the first bit, is all using these platforms, which are much, much better than the system I had built. Actually, they shut down the plugin 'cause there was no point in having it anymore.
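The pipeline Evan describes, a cloned voice plus a chatbot plus a phone line, boils down to a three-stage loop per conversational turn. Here is a minimal sketch with stub functions standing in for the real services; in practice the platforms he mentions (Vapi wired by API to ElevenLabs and a ChatGPT-style model) bundle all three stages behind one interface, and every function name and identifier below is hypothetical:

```python
# One conversational turn of a voice agent: hear, think, speak.
# All three stages are stubs; a real deployment calls hosted APIs instead.

def transcribe(audio: bytes) -> str:
    """Speech-to-text stub: a real agent would call an STT service here."""
    return audio.decode("utf-8")

def generate_reply(transcript: str, persona: str) -> str:
    """LLM stub: a real agent would prompt a chat model primed with a persona."""
    return f"[{persona}] Interesting, tell me more about: {transcript}"

def synthesize(text: str, voice_id: str) -> bytes:
    """Text-to-speech stub: a real agent would render audio in the cloned voice."""
    return text.encode("utf-8")

def handle_turn(audio_in: bytes) -> bytes:
    """Run one full turn: caller audio in, cloned-voice audio out."""
    transcript = transcribe(audio_in)
    reply = generate_reply(transcript, persona="Evan")
    return synthesize(reply, voice_id="evan-clone")
```

The platforms' main value is hiding the telephony plumbing and keeping this loop fast enough to pass for a human pause.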

[00:08:08] Henrik Werdelin: What were some of the things that you didn't get to do, but you think you could do now?

[00:08:16] Evan Ratliff: I don't, I don't wanna give 'em away 'cause I am doing them now.

[00:08:18] Henrik Werdelin: Ah,

[00:08:19] Evan Ratliff: Oh, uh, we're working on the next season.

[00:08:21] Henrik Werdelin: Well, let's go back to being a little bit concrete then. I've cloned my voice, of course; I made the HenrikGPT. You can ask it a question, it looks at all the stuff I've written, and then it answers as me. I think Guy Kawasaki has also done this, and he basically thinks it's a better version of Guy Kawasaki, 'cause it can actually remember all the stuff he's ever done. So he's like, you should just talk to this. Um, just any tips on the cloning? 'Cause I've tried both the professional one with ElevenLabs and the one where you just upload 30 seconds, and obviously it's quite hit and miss how it will clone it. Any kind of pro tips on cloning?

[00:09:00] Evan Ratliff: Well, the biggest pro tip, and I actually got this when I interviewed the guy who started Vapi, which is one of the platforms I use to do the calling, is that you gotta think about what you want to use it for. Mine is a professional-grade clone: I have many, many hours of tape of me speaking into a microphone like this one, a professional microphone, professionally recorded, much of it in a studio. So it sounds very good, but it actually almost sounds too good for phone calls, in some ways. And if what you wanna do, and this is kind of a special case, but it's what I wanted to do, is scam your family, if you wanna call people with it and have it sound like you're on the phone, the thing to do, and I actually did this and made a separate clone, which I didn't use in the show 'cause we had already started using one of them and it would be too strange, too jarring, to hear both, is just to call something and have it record like a half an hour through your phone.

So the sound is actually not that great, because the sound of the phone's not that great. So it's really about the purpose you wanna use it for. But the main tip, and I'm sure ElevenLabs probably says this now, is the better the microphone, the quieter the environment, and consistency. 'Cause when mine messes up, I can tell that there are some recordings I gave it that are probably a little more hollow, probably recorded in a room that wasn't that great. I don't know if it averages them or what it does, but you will sometimes hear a slight change in the room tone of the voice.

[00:10:43] Henrik Werdelin: I think actually that's the super tip. I don't think even ElevenLabs has that. I did it recently again, and I don't think they ask, like, what are you gonna use it for? If you're gonna do a professional presentation, you're gonna have one tone. And if you're just calling your friends saying, hey, what's up, then obviously you use different words, a different tone, a different kind of intonation in your voice.

[00:11:05] Jeremy Utley: Yep. Well, now, one thing I'm thinking as I listen to y'all geek out on the audio is: for an everyday person, what have you discovered are the applications? Because I think a lot of people listening kind of go, similarly, you know, we had an artist who's won a pretty prestigious art award, but a lot of people go, hey, I'm not making paintings, I'm not a photographer. And I think a lot of people could be tempted to check out and go, I'm not creating voice recordings myself. Right? What have you discovered are some of the practical applications where, for somebody off the street, just a typical retail customer, you go: hang on, before you dismiss this, think about using this tech in this way?

[00:11:45] Evan Ratliff: Um, yeah, I have those. I'll preface this by saying I myself am a skeptic of how this is used. I mean, I'm a skeptic of everything; I'm a journalist, so my default approach is interest in something, but also, where is this gonna go wrong? And what I was trying to get people to think about was not using it themselves. It was that you are going to encounter this in the world more and more, and you need to be ready for it. With that preface, I do think that even people who are very skeptical of it don't think about some of the uses. The very obvious one is that there are people who have lost their voice for medical reasons, in all variety of ways, who can use these types of voice agents to absolutely transform their lives. So that's the baseline: there are people who don't have a voice who can use this to have a voice, and people should think about that. Then there's the question of deploying it for practical purposes, and there's a wide range of things, and feelings, about this. For instance, after the show came out, I had people ask me: could I set one up to speak to my mother or father, grandmother or grandfather, who is in memory care and basically can't remember when I've contacted them? But it's nice for someone to check in with them every day, and I feel like they should have this communication, and they could communicate with this AI. I think there's even a company now that maybe does this. Now, some people are just absolutely disgusted by this idea. They think it's actually inhumane and horrific that you would stick a chatbot on your mom or dad or grandmother. And other people are like, well, if you've ever dealt with this situation and how difficult it is, I'm open to it, open to anything that makes them feel better.

Again, I tend to be skeptical of it and think, well, I wouldn't do that. But people are gonna have a wide range of views of what's appropriate and what's ethical. Similarly, another idea a lot of people are into, and I do get into this in the show, is creating a voice agent of a loved one, or of yourself, so that when you are dead, other people can talk to it, or you can talk to a loved one who is now gone. So, feeding it a lot of information, giving it the voice. I don't wanna spoil the show, but ultimately I conclude that's not for me. That's not something I would wanna do, even though I have a member of my family who actually would actively wanna do that, in some ways. So those are the different uses. And then there's the sort of everyday stuff that AI companies sell, and that's where I'm the most skeptical.

Like, you know, creating agents. I think it's telling that whenever they talk about creating agents, the example they almost always use is getting restaurant reservations. Think of the number of people, even in America, who get restaurant reservations, much less the number who get restaurant reservations and can't use any of the available services. Like, oh, it's too hard to use Resy or OpenTable? I do get restaurant reservations. And so they're like, you can create an agent that will call 200 restaurants and get the best reservation. That is a solution looking for a problem, in the most extreme way.

[00:15:21] Jeremy Utley: I would say it's hard to know what a good niche versus a bad niche is, you know? Right? So Uber started as a black car service. You go, actually, not a lot of people want a black car. Well, then you realize that it worked. Which is to say it's really difficult to know which edges of the possibility space are relevant or irrelevant. I love the example of somebody who lost their voice. We had Josh, too, who runs Meta's Orion project, so he's working on that product, and he said that one of the lead user groups they're getting a lot of inspiration and a lot of great feedback from is folks who've lost their sight. Folks who've lost their voice. So I think there's this augmentation of a lost human ability that makes a lot of sense to me. All the other stuff, I don't know if it's a good niche or a bad niche.

[00:16:07] Henrik Werdelin: So you don't have anything that you kept in operation, where it just was so much more convenient to deploy the agent that you were like, you know what, I'm just gonna keep having it do that?

[00:16:19] Evan Ratliff: No, and I think that's a little bit particular to voice. I know some of these power users of AI who automate the writing of their email and all this stuff. But when it's voice, how many phone calls do people make these days? Most people don't make that many phone calls, and for most of the phone calls you make, the amount of time you would currently spend writing the prompt exceeds the time on the phone. I've used it for some things during the show. I was trying to change a flight with Delta, and it says you're gonna be on hold for two hours or something.

Like, for that, it's better. I used it at the DMV, but you have to prompt it so perfectly. I had a car registration problem, and I sent it to talk to the DMV, the real DMV. It did get to a human, like it navigated the phone tree, it got to a human, and then it just made an appointment for me to go to the DMV, which is the thing I was trying to avoid.

[00:17:21] Jeremy Utley: That's the last thing I wanted.

[00:17:25] Evan Ratliff: But it did it, and it was kind of cool to listen to it do it, and the person at the other end of the line didn't know.

[00:17:30] Jeremy Utley: Okay. You're saying the person on the other end of the line didn't know. Let's talk about that, because you started there and we went down a different rabbit trail. You started by saying, I wanted to know what we need to be aware of as this becomes a reality, and then there's that phrase: the person on the other end didn't know. If you think about the user we should be designing for, potentially the person on the receiving end, what have you learned about living in a world where this is now a possibility? What should folks be aware of?

[00:17:57] Evan Ratliff: I would say, for the time being, the big issue is that there's no requirement on anyone's part to disclose that AI is being used. Some choose to do that on their own. There's obviously no law about it. There's a little bit of FCC stuff around robocalls, where it's currently illegal to use AI for robocalling. So the question is: what happens when someone encounters AI but they're not expecting to? In my experience, people find that quite upsetting for the most part. They find it either irritating or almost existential. The existential part is more when you're talking to someone you know. In my case, I was calling up friends and family who did not know they were gonna encounter AI and then suddenly realized it. Some of them found that pretty profoundly disturbing: for a minute they thought they were talking to me, and then they realized, oh no, this is something else.

[00:18:54] Henrik Werdelin: And do you think the latency was the biggest thing that gave it away, or what do you think is the giveaway?

[00:19:02] Evan Ratliff: Latency, but it's more just little subtle differences in the way I talk. It's a little bit faster sometimes, actually, in its speech. It has the latency in its time to respond, but then it would speak a little faster, and it was a little more enthusiastic than me. That was a common problem. People described it as, like, Evan on cocaine; that was my friends' shorthand for it. I'm not necessarily enthusiastic in that way. And then the other thing was sense of humor. I'm not saying I have the greatest sense of humor, but it has a shit sense of humor. Its jokes are not only not jokes that I would make, but jokes you just wouldn't make with friends. They're jokes skimmed from, you know, training data, and the guardrails make it just sort of lame.

[00:19:58] Jeremy Utley: So those deficiencies, 'cause to me those seem like pretty glaring deficiencies. Why, despite those deficiencies, you said it's quite upsetting to people. What is it that's upsetting?

[00:20:13] Evan Ratliff: Um, I mean, I think it's upsetting to be tricked in general. And when it comes to friends and family, to kind of start speaking to something, someone, as if, oh, hey, what's going on? And then, even if it happened really fast, over the course of 30 seconds of realizing it, you feel like an idiot, 'cause you've just been talking to no one. And it's just the idea of being fooled, in some cases. But then in other cases, I had another friend who just said in the moment, I feel so lonely. Because fundamentally, when you realize it, you realize you're just talking to yourself, you're just talking to no one. And I had a more extreme example, which was a friend who didn't realize it, because of the way the conversation went. He thought it was me being sarcastic, and then he thought it was me being strange, and he thought I'd had, like, a mental breakdown. So he was actually very upset in the moment, because he was trying to figure out what he should do about it.

[00:21:24] Henrik Werdelin: Did you then break cover?

[00:21:26] Evan Ratliff: I wasn't on the call. For 95% of these calls, I wasn't there.

[00:21:31] Henrik Werdelin: Oh, you didn't even listen in?

[00:21:33] Evan Ratliff: No, no, no. I just sent it away to do its thing, and it records all the calls, so I would pick up the recording later. There were a couple of exceptions to that, but for the most part, I would listen to the recording and then call my friend and say, hey, I'm okay.

[00:21:48] Henrik Werdelin: And audio is obviously pretty developed now, even more so than when you did it, and they've started to create all kinds of video clones too. We hear stories of people who transferred money because their CFO called on a Zoom call, and stuff like that. What's your view on how far away we are from this definitely being used for scamming purposes, and then, secondly, for other kinds of uses?

[00:22:16] Evan Ratliff: Uh, we're so far into it being used for scamming purposes already. It might be the best technology for scamming that has ever been created, past just the internet and the telephone. There are already thousands upon thousands of what they call the grandparent scam, which is: they clone a voice, and then they use that voice for a few seconds. They call a relative: oh, I'm in trouble, I'm gonna give you my lawyer, we need money, and then they supposedly hand the phone off. That scam is now routine. Not to mention that in the normal scam architecture, they often now use AI to weed through the marks. And I know this because I have this dedicated scam line that now gets like 40 or 50 telemarketing and scam calls a day. Scamming is a volume game, so they'll use AI to go through the marks, and if they get someone who's actually engaged, then they pass 'em off to a human to close the deal, basically.

[00:23:20] Jeremy Utley: You said you still have a live kind of scam receiver. I remember Mark Rober did this thing with grandparents; I dunno if you saw it, but he ended up working with the FBI to shut down a call center in India that was scamming grandparents, and he was able to verify it and prove it because of his YouTube channel. Do you have any altruistic or otherwise mission-oriented goal with keeping this scam line alive?

[00:23:51] Evan Ratliff: Not for me. I mean, there are all these scam baiters on YouTube, and there's a little bit about them in the show. And since the show came out, some phone companies, like O2 in the UK, and I've seen a couple of others, have implemented bots that answer the phone, pretend to be a grandma, and keep the scammers on the line. But I question the whole premise of wasting scammers' time. Scamming is an industrial-strength business now; these are massive, massive endeavors. So what's even the point of keeping them on for 20 minutes? And the FBI doesn't have trouble finding scams; there are millions of scams. The problem is the jurisdictional issue of trying to chase them. They have to be so big for law enforcement to even spend the time to go track them down. It's actually an incredibly difficult problem. So no, I'll say upfront, I'm just interested in what the scammers are doing and looking for stories for myself, not that the scam line does anything to stop them.

[00:25:04] Henrik Werdelin: We have safe words now in the family. Is that something you recommend too?

[00:25:11] Evan Ratliff: Yes, absolutely. I think everyone should have that now. Everyone should have that. It doesn't necessarily have to be a safe word. It can be as simple as: if you get an emergency call from me, also text me. That solves it, you know. It's a problem that, if you know about it, you can avoid for the most part. It's the same with what you described before, the more advanced forms of business email compromise, where companies get contacted and someone pretends to be the CEO and gets them to transfer money. If you know about the scam, you can avoid it; that's the best defense.

[00:25:50] Jeremy Utley: Sorry, just to be clear, 'cause now, as I'm recognizing my heart rate is elevated thinking about industrial-strength scamming, I want a clear kind of checklist: how do we protect ourselves? You're saying a safe word?

[00:26:05] Evan Ratliff: Yeah, you should have some way of identifying yourself to your family that only you would know, and that someone outside wouldn't easily find out. So if you get a call from your daughter, or sister, whatever, and they say, I'm in trouble and I need money from you right away, for a variety of reasons, you can say: say the word. And if they say the word and it's them, then obviously you need to go into action mode. But what the scammers are trying to do is get you to go into action mode, get your adrenaline going, and not even think to double-check with that call or that text.

[00:26:46] Henrik Werdelin: I mean, I find it to be quite common now. My parents get a text that says, hey, this is Henrik, I just got a new mobile number, you know, heads up, and then they try to take it from there. It's happened at least a few times. And I think both my parents have also been through scams, some of them at one point successful. So I'm a full hundred percent with you. I just assume that it happens, and it happens over and over again. And at least I can help my parents not jump in the trap twice.

[00:27:23] Evan Ratliff: It's hard, though. Everyone's living their lives more online, but the basic principles hold: if something incredibly strange happens, double-check it. And if you buy something and the price is way too good to be true, it probably is.

[00:27:39] Jeremy Utley: I wanted to go back to the beginning of the conversation, if you don't mind, because there's something you said that I thought was really interesting, that I find personally curious and would love to know more about. When we asked where this idea came from, you mentioned that in the fall, whenever it was, you were sick of hearing about AI, and you asked yourself the question: is this just me being old? I wonder about the temptation to opt out in this moment, for folks who have experience, because there are always technologies passing us by, right? I think of Brice Challamel, the head of AI at Moderna. He was on our show. Amazing episode; if you haven't had a chance to listen, Evan, you should, and every listener should. One thing Brice said is that they've made stickers at Moderna that say, Don't be Fred. Use AI. That's a reference to their old CEO's boss who, when email came out, had his assistant print his emails. He would write his response by hand, sign it Fred, and hand the written response back to his assistant, who would then type and send the reply via email.

It's an allusion to the fact that we always have a tendency to check out from technological changes. And when you said, is this just me being old? I'm curious if you have any thoughts on that human tendency broadly, or specifically: how do we know whether we should be checking out or leaning in? What have you learned about that?

[00:29:09] Evan Ratliff: I think it's very difficult, especially when you become a middle-aged or older person, because it's very difficult to separate this natural tendency to think that things were better when you were younger, or the way you came up, or the way your career went, from something being genuinely negative in society. There are many examples over time, but the one that I use in the show is an article I found from a New York paper, which is my favorite article I've ever read, from 1925, or maybe 1924. It's about a guy in New York City who has a shop and is one of the last people to adopt a phone. He has refused to have a phone in his shop; he hates the incessant ringing. It's a beautifully written story, and it's about this idea that it is true: the phone fundamentally changed people's relations with each other. Certain people were very upset about that and resisted it, because they were like, I don't want people to be able to contact me all the time. What about the quiet of the city? And all these sorts of things. And now we look back at that and we're like, the phone? Come on. But you see it; it's happened with all of these different technologies. Then again, it's also a trap, because some technologies are dangerous, and I think social media is a good example of, oh, ha ha, the joke was on us.

Yeah.

[00:30:46] Jeremy Utley: Perhaps our guard was too far down.

[00:30:48] Evan Ratliff: Yeah, exactly. We adopted it so easily, and we let ourselves be manipulated so thoroughly by the companies that were creating it. And if you look now, they weren't really hiding their incentives. It was just growth, growth, growth, and the fastest way to do that was to create incentives for us to yell at each other. All of which is to say, I think all you can do is reflect on that and keep those questions in your mind. In this case, I do know a lot of people who are rejecting AI, writers and journalists who are very angry about it, and rightfully so: it is built on stolen, copyrighted material. They'll probably never make it to court; they'll settle. But I think in our hearts we all know exactly what happened there. The question is, what are we going to do about it? And because the adoption is happening so quickly, what I want is for people to take on some of these questions themselves and think about them, because no one is going to do it for us.

I think one of the best examples is AI therapy. AI therapy is happening; voice-based therapy is already a huge thing. You can talk to an AI voice therapist. I had my AI talk to an AI voice therapist during the show. I went and looked at the number of studies on AI voice-based behavioral therapy and its effects, and there was one. One study. Millions of people are going to be using this very soon. It's just getting ahead of

[00:32:35] Henrik Werdelin: One of the things that I've been thinking a lot about is, because I was part of building the social web, obviously we didn't think much about how taking pictures of each other's food and putting them on an app would damage a generation's mental health, right? It wasn't something we were planning at the time. So there are all these other consequences. And one of the things that seems clear to me now with AI is that all this loneliness that has slowly crept in, and got worse during COVID, could potentially be amplified. Now we have all these agreeable, very charming synthetic people we can talk with. At some point we might cut more and more real people out, because it's nicer to talk to agents that say nice things back to us all the time, and then we might wake up one day and be incredibly lonely. So that definitely seems to be one risk. But on that note, one of the things you spent a lot of time on was talking to your agent, or at least thinking about how your agent would be a representative of you. Was there something in how AI works that surprised you, or that you learned was an important discovery, in taking as much of you as possible and synthesizing it down to a voice that should represent you?

[00:33:59] Evan Ratliff: I mean, yeah, I learned a lot. I feel like it was a valuable exercise. I got more of a personal sense of the way AI works, and as much as you read about it, and you guys know it very well, a lot of people use AI without really thinking about what it is or why it can do what it can do: that it's a predictive engine, that it's built on training data, that it's trying to synthesize what a human would be expected to say in this particular moment. When you put it into conversation, if you listen to a lot of those conversations, and in particular if you try to have one represent you in conversation, it becomes very clear. Using my voice, speaking in this manner, it sounds like the average person. It doesn't sound like me, in terms of the content of the conversation. So it was interesting to really feel, almost, how the models are working. But the flip side is that in this particular type of situation, where you're trying to create an agent of yourself, it's again a little bit of a trap, because it gets better the more information you give it.

[00:35:14] Henrik Werdelin: And, this is a super geeky question: using these RAG models like you did, did you figure out what would be the best repository of content to give it? Because I would imagine just giving it your Wired articles wouldn't make it sound like you in voice. So was it more transcripts of conversations? How did you hone the tone,

[00:35:37] Evan Ratliff: You couldn't, really. I felt like the knowledge base you could give it didn't really change the tone. To do that, I think you'd have to train up a mini model, and that's not what I did. I was basically using the standard models, and then pasting on this knowledge base of information that it could access. It's post-training, so it's not really in there at all. I may not have the technical aspects exactly right, but I believe it's almost like giving it a gigantic prompt, as if you could put all this stuff in the prompt. So I would do things like write a dossier. I was sending it to therapy, so I wrote a dossier of my whole mental health history, what I had been to therapy for in the past, my family history, things like that. I did learn things from listening to it, but it was unclear what to trust. It would say something, and it would kind of remix my problems.
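The setup Evan is describing here, a knowledge base pasted into the model's context rather than trained into its weights, can be sketched roughly as follows. This is a minimal illustration, not what any particular voice-agent product actually does; the function names, the crude word-overlap scoring (real systems use embeddings), and the dossier text are all invented for the example:

```python
# Minimal sketch of retrieval-augmented prompting: the "knowledge base"
# is plain text chunks, and at call time the most relevant chunks are
# pasted into the prompt, rather than being trained into the model.

def chunk(text, size=40):
    """Split a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, chunk_text):
    """Crude relevance score: count of shared words (stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(chunk_text.lower().split()))

def build_prompt(query, knowledge_base, top_k=2):
    """Retrieve the top_k most relevant chunks and paste them into one big prompt."""
    ranked = sorted(knowledge_base, key=lambda c: score(query, c), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return f"Background on the person you are speaking as:\n{context}\n\nQuestion: {query}"

# Invented dossier text, standing in for the kind Evan describes writing.
dossier = chunk("Evan is a journalist. He has been to therapy for work stress.")
prompt = build_prompt("What has he been to therapy for?", dossier)
```

As Evan notes, this only changes what the model knows, not how it talks: the retrieved text rides along in the prompt, so tone still comes from the underlying model.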

[00:36:37] Jeremy Utley: It kind of becomes like a horoscope, almost.

Yeah.

[00:36:39] Henrik Werdelin: That, by the way, was probably one of my favorite episodes, where you go through that.

[00:36:43] Evan Ratliff: Well, thanks. And listening to it, you're sort of like, well, maybe I do have that problem. I hear myself saying I have it.

[00:36:48] Jeremy Utley: My elbow is hurting, now that you mention it. My elbow is hurting. That's hysterical.

[00:36:55] Henrik Werdelin: One of the things we've been talking a little bit more about on this podcast is what we learn about humanity by using this technology, and how much we learn about ourselves when we do. One thing we talked about with the CEO of The Atlantic, which blew my mind, was this question of: what are the skills we need to train as humans to become better users of AI?

So if we assume that basically you put garbage in, you get garbage out, and you put unique, original thought in, it gets amplified and you become even better. I think the way Jeremy put it in that podcast was: if this is an Iron Man suit and you can lift 200 pounds, then you might be able to lift a thousand with AI. But if you can only lift 25 pounds, then you can only lift 200. So it's exponential: you become exponentially better by understanding your own uniqueness and humanity. Was there something, as you were doing this, that made you think: as a human, I need to get better at this? Either just becoming a better human, or maybe becoming a better human in order to use this technology?

[00:38:05] Evan Ratliff: On the first one, becoming just a better human: the main thing it kept highlighting for me is small talk. I'm terrible at small talk, and the AI is worse than me at small talk. It's boring, and it's painfully bad, because again, it's not only making small talk, it's trying to find the average, the most small-talk small talk that you could possibly make.

[00:38:29] Jeremy Utley: The smallest small talk possible.

[00:38:32] Evan Ratliff: And it sounded like me. So I'm listening to it, thinking, man, that's what I sound like. That's what I sound like on the playground with the parents of other kids when I'm trying to make small talk. And I've gotta do better. I've gotta be better than this thing. So it did have that self-improvement angle,

[00:38:49] Jeremy Utley: Even if it came from a negative example. Well, okay, so we've been talking about audio mostly. Can you talk for a second about your own, call it personal tech stack? How are you using gen AI broadly in your work, beyond your explorations with audio?

[00:39:06] Evan Ratliff: Well, this is the crazy thing with me, which is that the answer is: not at all. Practically zero. I'll give you a couple of examples of how I do use it, because obviously it's useful. But part of my whole thing is, I like my job. I spent my whole career trying to orient around what I want to do and the way I want to do it. I'm not really, I'm surprised at all the people who are looking for efficiencies all the time.

I'm just not trying to avoid answering my email by letting an AI answer my email. Maybe I don't get as much email as I did when I was running a company and being an editor, so I understand being overwhelmed. It's just that I've been working on my life myself for many years, and I'm not really looking for a digital way to solve my problems.

I like solving problems. And I like writing. It's hard sometimes, but that's what I signed up for. If I wanted to not write, I would've done some other job that paid better than writing.

That's kind of my attitude towards it. I'm not anti-AI, and I'll see these places where it could be used, like vacation planning and stuff like that,

and I'm like, oh, cool, yeah, I'll use it for that. But in terms of my work, I'm a little bit resistant, and it's a little bit of a pushback on the constant tuning of everyone's life. I just don't believe everyone's lives need dials to be tuned all the time. People should just relax a little bit and have some dead time. Enjoy it. That's a little bit of a personal belief. But that said, for things like reporting, for instance, I use NotebookLM, the Google one. I put in a ton of court documents for a story that I'm working on, and then I'm able to query those documents and try to find connections. It's not a miracle machine; it's not finding incredible stuff that I would never find. But time-wise, I can find a lot of stuff in those documents that would otherwise require many hours of work.

[00:41:20] Jeremy Utley: So just to dig into that workflow for a second: you're saying if you're reporting on a story, you take a bunch of relevant documentation, you upload it to NotebookLM, you create a new notebook, and then how do you query it to identify connections? What's a recent epiphany moment that you feel might have been accelerated by NotebookLM?

[00:41:39] Evan Ratliff: Well, for example, I'm doing a story right now where there are probably five different federal court cases. Each of them has a ton of documents, and in those documents there are redacted company names. So there are companies described as, say, a staffing company from Texas, but the US attorneys are not always consistent across cases. They're not sitting down in a room making sure no one could figure it out; they just redact each one and give it a name. So if you compare all of the different documents, you can often suss out which companies they are. Now, I didn't find that NotebookLM could itself suss them out that well. It didn't say, oh, that's DuPont, that's the company. But what it could do is give me lists. I'd say, give me a list of every company that's described, and their descriptions. Or, give me ten that seem to be described two different ways. You can get all these lists. Doing that myself would mean going through hundreds and hundreds of pages, which I have done in the past, but it does speed that up. So even I am looking for efficiencies, the person who rejects constantly looking for efficiencies.
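The cross-referencing step Evan describes, comparing how a redacted entity is described across different filings, can be sketched in a few lines. This is a hypothetical illustration of the idea, not Evan's actual workflow or NotebookLM's internals; the case records and the keyword-overlap threshold are invented:

```python
# Sketch: flag redacted company labels from *different* cases whose
# descriptions share enough detail that they may be the same entity.
from itertools import combinations

def keywords(description):
    """Reduce a description to its distinguishing words."""
    stop = {"a", "an", "the", "company", "from", "in", "of"}
    return {w for w in description.lower().split() if w not in stop}

def likely_matches(records, min_overlap=2):
    """Pair labels from different cases sharing >= min_overlap keywords."""
    pairs = []
    for (case1, label1, desc1), (case2, label2, desc2) in combinations(records, 2):
        if case1 != case2 and len(keywords(desc1) & keywords(desc2)) >= min_overlap:
            pairs.append((label1, label2))
    return pairs

# Invented records standing in for descriptions pulled from filings.
records = [
    ("Case-1", "Company A", "a staffing company from Texas"),
    ("Case-2", "Company X", "a Texas staffing firm"),
    ("Case-3", "Company B", "a software vendor in Ohio"),
]
print(likely_matches(records))  # the two Texas staffing entries pair up
```

The point of the sketch is the shape of the task: the AI (or script) produces candidate lists, and the journalist does the actual sussing out.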

[00:42:58] Jeremy Utley: I think that's great. To me it illustrates something of the tension, which is: there are areas of our lives where we want efficiency, and there are areas where we don't. I think what your first set of comments was geared towards is acknowledging a really important point: we don't want everything to be faster. There's some stuff we actually like to do because we like to do it, and the fact that it's hard doesn't mean it should be made easier. Maybe it being hard is the point. Take exercise: you actually don't want exercise to be easier. If you do, you're no longer building muscle, right? So that's one end. But then to say, I never want to be efficient, is also not totally right. I mean, I don't really love reading through hundreds of pages of court documents. So maybe we should encourage our audience to be a little bit thorough in assessing their workflows. What's the stuff that sucks? Our friend Nicholas has this phrase, "it sucks that," right? What's the stuff that sucks, where you go, I'm not deriving value from that being difficult? And then, what's the stuff that is hard, where it being hard is the point? And perhaps there you put up harder confines, as it sounds like you have, Evan: you say, that's an area where I don't want assistance, not because it couldn't help, but because I actually enjoy doing it myself.

[00:44:20] Evan Ratliff: I agree with that. I think that's very thoughtful. And it was an exaggeration for me to say I don't seek any efficiencies. But I will say it's very difficult to identify the things that suck that are somehow not meaningful. For example, one of the areas where they're going to use this voice AI is to replace clerks, like they're doing in fast food restaurants. And you're like, well, who cares? What is the 30 seconds when I collect my order from someone at a fast food restaurant? If that was a robot, fine; I'm just trying to get my food, and maybe that's not a great job anyway. But I feel like that's life. You can't eliminate those interactions from life. And if you start down the road of treating every inconvenience as something to be solved with technology, and technology does solve a lot of inconveniences, you end up with this smooth life that is also, in a weird way, meaningless. You can't spend your whole life having no friction. Very wealthy people can do this, but everyone else has to deal with shit all the time.

[00:45:32] Henrik Werdelin: As you heard earlier, I'm very much in tune with that problem statement. But one thing I've been pondering on this specific matter: Auto-Tune came online for music, and everybody was like, ah, that's Auto-Tune. And Milli Vanilli got caught, and that wasn't even Auto-Tune, that was somebody else singing. But then suddenly Auto-Tune became the thing, and now music has to sound autotuned in order to feel real. Then you had filters on Instagram: first it was hashtag no-filter, then it was all filters, and now everybody just optimizes the filters. And the same in fashion, with fake lips and boobs and so on, now to the point that the fake look is almost what you want; it needs to be so fake-looking that you can see it. Right? So maybe, back to you being old, and we're both born the same year, so I can say that, I wonder

if that is just a little bit of the feeling. Because if you look at all these other things, the Auto-Tune, the clerk that takes your order, or even the synthetic version of Evan that somebody can call and talk to, it might turn out the same way. It's not just that you don't want it; it becomes the thing you want, precisely because it's a little bit fake. I guess there's not really a question there; it was more a philosophical kind of playback.

[00:47:08] Evan Ratliff: Right. I mean, that is the biggest place where the "are you old, or are you seeing something genuinely negative about technology" question comes in: tastes around art or music or anything else. And I'm very skeptical of my own perspective on any of that kind of stuff. If people want to listen to AI music, yes, of course my reaction is, I listen to music because a human made it. But I listen to a lot of electronic music too, and they use tools to make that. So I completely agree with you in that area. I think what really gets in my craw is this sort of tech-industry hyper-efficiency around AI, changing our habits and allowing us to do more stuff. That's all fine if what you're doing is meaningful, if you're truly doing something meaningful and you're freeing up more time to do the meaningful thing. But I'm not convinced that's what people are doing.

[00:48:05] Henrik Werdelin: It's the whole, we should make a baby in four months with two women kind of stuff, you know?

[00:48:10] Jeremy Utley: That's funny. Yeah, split the labor. Well, I do think, Evan, it's not a counterpoint necessarily, but here's one interesting data point. We interviewed John Waldman, the CEO of a company called Homebase, which serves hundreds of thousands of small businesses: coffee shops, record stores, things like that. One of the things he mentioned was that leveraging AI-powered prototyping tools has resulted in his team spending a ton more time in the field with customers. He said the design team and the product development team used to spend a ton of time in the office with PRD documents, which is kind of standard for product development. Now, instead of a 20-page PRD doc, which somebody's got to write and then somebody's got to read to be a good citizen, they're building prototypes, and they go, what do we do with all those hours we spent writing and reading documents? We're redeploying them to be with our customers. Which to me is about the most spectacular outcome you could hope for; or, I don't want to say best, but at least in that context, of course you should be spending more time with your customers. Of course you should be trying more things. So I think it's not straightforward, and there are dangers on both sides. But even when it comes to inconvenience, you said something really interesting a second ago: is every inconvenience something to be solved? As a professor of innovation and creativity, one of the best ways you can teach a young person, or an older person for that matter, to come up with novel ideas is to tell them to keep a bug list. It's an assignment we've been giving at Stanford since the 1960s, and it's not about errors in lines of code.
This is long before computer programming entered common parlance. It's just: write down a list of things that bug you. Because it turns out that someone who's attentive to problems becomes capable of solutions. Jerry Seinfeld has a great bit about how that's why you have to have kids, because then you always have material, because your life is so annoying. There's something about attention to annoyances, and the human drive to eliminate them, that matters. So while I agree we can't have a friction-free experience, and what we observe from the hyper-elite is that a friction-free life isn't really to be desired, on the other hand I say to people: if you want to innovate, look at inconveniences, look at stuff that's irritating. Chances are there's an opportunity there. So again, it's another case where there's no straightforward conclusion to draw, just a recognition that there are trade-offs and dangers on both sides of the equation. But I really like what you said earlier: putting these questions in people's minds, and prompting them to consider them, will do a lot to help usher in a mindfulness and awareness of the kind of future we want to be building towards, rather than getting derailed or blindsided by a future that we really didn't choose.

[00:51:17] Evan Ratliff: Yeah, I agree. I'm not opposed to human improvement, for sure, or to technology in any sense. I just think there's a strong incentive for companies to sell us convenience improvements, and they've been doing that. And as we discussed earlier, people don't seem happier; they seem lonelier. So I feel like the improvements need to come at some more systemic level than what startups are creating for us in the moment. At the same time, I'm not opposed to any attempt to advance the human condition. So when people are doing that, I think it's a great example: something that causes people to return to in-person communication, a technology that frees up time for people to return to in-person communication. I agree with you, that's our best scenario. We want more of that. Whoever's making that, I want more of it.

[00:52:15] Jeremy Utley: John, we approve of your work.

[00:52:18] Henrik Werdelin: On that note, I think we're out of time. I thought it was super interesting, and I'm really happy that you made the six-episode show, because I very much enjoyed it.

[00:52:29] Jeremy Utley: We'll definitely link to it in the show notes.

[00:52:31] Evan Ratliff: Thank you.

[00:52:33] Henrik Werdelin: Okay, Jeremy, that was fun. I am a very big fan of his show, Shell Game, so it was fun to get him on and be able to ask a few more questions. What stood out to you?

[00:52:47] Jeremy Utley: You know, I love someone who takes action in some way in order to learn. I don't know if it's a journalistic impulse or what, but I really commend his instinct of, wow, I'm kind of tired of hearing about this, maybe I should get in the game, and then finding a unique angle. I haven't listened to the show, but I'm super interested to check it out now. That's one thing. The other thing I found kind of shocking and surprising, and I'm sure you did as well, is how little he uses AI in the broader context of his workflow. We obviously don't have to rehash what we got into in the conversation, but I thought it was pretty interesting to think about the tension between things that we want to be made more efficient and things that we don't, and also how hard it is to know the difference.

I think we may value the hard work of something, but then, boy howdy, it sure is nice when it's not hard work. And if we aren't thoughtful and maybe experimental, testing which parts of my work are hard and should be hard, and which parts are hard but shouldn't be, we may actually have the wrong instinct about some of that stuff. Those are some of the things I left thinking about. What about you?

[00:53:55] Henrik Werdelin: For me, definitely that everybody should have a safe word. It seems like such an obvious thing, and it doesn't take long. It's a little bit from the prepper world, I guess, and you might never have to use it, but it seems very easy just to agree on one. A lot of people already have a code word for their alarm.

So when the alarm goes off and they call the security company, they say, here's my password. It just seems prudent that everybody should have that. The other thing I found really interesting: as you know, I've been a little bit on this kick about what is uniquely human as we race towards AGI, with these machines that can be as good as the smartest PhDs. What is it that they won't be very good at? And it's kind of interesting that one of those things is small talk, you know?

[00:54:43] Jeremy Utley: You know, it's complicated: it's contextual, it's joking. The fact that it's really difficult to make casual jokes that are contextual is an understandable weak point of AI, but it's a pretty interesting one to realize. It's such a prosaic, natural part of human language, and yet, is it maybe underrepresented in the data? I don't know if there's a lot of small talk in training data. That's a

[00:55:03] Henrik Werdelin: probably not

[00:55:04] Jeremy Utley: hypothesis. I don't know. That's interesting.

[00:55:07] Henrik Werdelin: And then, along the same lines: you and I talk a lot about what we can optimize with AI. And we also talked recently about the downsides of AI that might not be crystal clear. So I think adding that little question of, I can automate this, but should I?, is actually an interesting question to ask yourself over the next few months as we do more and more with AI. Is this something I do because the doing itself matters? I realize a lot of people are doing performance reviews for their staff through this, but the birthday note: should that be written by AI, or should you maybe do it yourself, because it kind of trains you to be a human, or whatever? I don't know. I thought that was an interesting thing to ponder a little bit.

[00:56:02] Jeremy Utley: Yeah. The question of, what is your time for? As Evan mentioned, he used NotebookLM to parse court documents, and I wonder where you draw the line on the article you're going to write being fed or informed or accelerated by AI. There's almost a human-accomplishment piece. I can imagine there's something very gratifying about reading those documents and then making the connection and going, oh, it's DuPont, or whatever it is. Granted, he saved himself a bunch of time, but perhaps he also lost some of the reward that comes from sleuthing and sifting. It's an interesting question, revisiting again this question of: what is hard that's worth doing?

[00:56:52] Henrik Werdelin: Another example: I saw a lot of people online taking drawings from their kids and putting them into ChatGPT to bring them to life, to make them look professional. And I was going, yeah, it looks 3D and nice now, but that's not really the point, right? The point is not that it looks like a fine, finished, complete piece of work. So anyway.

[00:57:16] Jeremy Utley: I think, just signposting previous episodes: shout out to Brice Challamel, head of AI at Moderna, who we mentioned with "don't be Fred." That's one. Another is Josh Wöhle, in the conversation about augmentation and the practice of identifying and implementing opportunities. And perhaps, Henrik, you're adding a layer to that conversation: assessing whether you want to automate something, or whether it should remain a human-only activity. Anyway, lots of interesting stuff there. As always, thanks for listening. If you enjoyed the episode, please like it, share it, send it to a friend. What's our safe word for this episode, Henrik?

[00:58:01] Henrik Werdelin: Oh, what's the safe word? Um,

[00:58:03] Jeremy Utley: Could the safe word be "safe word"? Would that be meta?

[00:58:06] Henrik Werdelin: Let's do "safe word." Awesome, man. Have a good one.

[00:58:10] Jeremy Utley: You too.