This is really an “do not miss episode”, as we are joined by Ethan Mollick, professor at Wharton and a leading voice on how artificial intelligence is transforming entrepreneurship and education. As an "accidental AI expert," Ethan shares his unique perspective on the AI revolution and provides practical insights for harnessing its potential in business and beyond. Ethan is really an incredible communicator and is one of the leading voices in how to use AI. Key Takeaways: 1. Treat AI like a person: To get the best results, engage with AI conversationally and work with it like an editor.2. Use AI for everything you can: The only way to truly understand AI's capabilities is to use it for as many tasks as possible.3. Be prepared for exponential growth: AI's capabilities are rapidly expanding, so be ready for significant advancements in the near future.4. AI is a superpower for entrepreneurship: Entrepreneurs can leverage AI to generate ideas, validate concepts, and accelerate experimentation.5. Culture matters for AI adoption: Mission-driven organizations and startups have an advantage in deploying AI due to their willingness to share and collaborate. Notable Quotes: - "If you haven't had a crisis yet, then you haven't used it enough. And I think that there will be this coverage because the truth is out of the box, it's a huge performance improvement." - "I think modeling serious use is the way you do this. Trusting your employees to some extent is a way that you do this too." - "This is the accelerator that we always needed to make this happen. And entrepreneurs should be embracing this 1 billion percent." Resources Mentioned: - Ethan Mollick's book on A is called Co-Intelligence LIVING AND WORKING WITH AI - Henrik’s new startup that helps you build a startup from scratch via AI: Audos.comYou should follow Ethan on Twitter/X Thanks for listening to this episode of Beyond The Prompt! If you enjoyed the conversation, please share it with a friend and subscribe to the podcast on your favorite platform.
This is really an “do not miss episode”, as we are joined by Ethan Mollick, professor at Wharton and a leading voice on how artificial intelligence is transforming entrepreneurship and education. As an "accidental AI expert," Ethan shares his unique perspective on the AI revolution and provides practical insights for harnessing its potential in business and beyond. Ethan is really an incredible communicator and is one of the leading voices in how to use AI.
Key Takeaways:
1. Treat AI like a person: To get the best results, engage with AI conversationally and work with it like an editor.
2. Use AI for everything you can: The only way to truly understand AI's capabilities is to use it for as many tasks as possible.
3. Be prepared for exponential growth: AI's capabilities are rapidly expanding, so be ready for significant advancements in the near future.
4. AI is a superpower for entrepreneurship: Entrepreneurs can leverage AI to generate ideas, validate concepts, and accelerate experimentation.
5. Culture matters for AI adoption: Mission-driven organizations and startups have an advantage in deploying AI due to their willingness to share and collaborate.
Notable Quotes:
- "If you haven't had a crisis yet, then you haven't used it enough. And I think that there will be this coverage because the truth is out of the box, it's a huge performance improvement."
- "I think modeling serious use is the way you do this. Trusting your employees to some extent is a way that you do this too."
- "This is the accelerator that we always needed to make this happen. And entrepreneurs should be embracing this 1 billion percent."
Resources Mentioned:
- Ethan Mollick's book on A is called Co-Intelligence LIVING AND WORKING WITH AI
- Henrik’s new startup that helps you build a startup from scratch via AI: Audos.com
You should follow Ethan on Twitter/X
📜 Read the transcript for this episode: Transcript of What everybody’s missing about AI in business. Ethan Mollick (Wharton Professor) |
Thanks for listening to this episode of Beyond The Prompt! If you enjoyed the conversation, please share it with a friend and subscribe to the podcast on your favorite platform.
[00:00:00] Henrik Werdelin: Welcome to beyond the prom where we explore the frontiers of AI and the future of work today, we are joined by a guest who really needs no introduction, but I'm going to give him on anyway. He's a professor at Wharton and accidental AI experts and absolutely one of the leading voices on how artificial intelligence is transforming entrepreneurship and education. He's absolutely one of the most interesting minds when it comes to thinking about AI. And he speaks so fast and say so many insightful things you might suspect he himself is an AI robot
[00:00:30] Ethan Mollick: . I'm Ethan Mollick, a professor at Wharton and accidental AI expert, especially thinking about how it impacts entrepreneurship and education. And I am not a robot.
[00:00:39] Jeremy Utley: , Ethan, I think both, , Henrik and I have been following you for years, long, long before the advent, certainly of chat GPT and your kind of, um, innovation related posts.
, we sent each other different of your, , research synthesis over the years. I, one thing I was really curious about is Are you aware of the transition to more focus on AI as it's happening? Or is it only something that can have happened for you in retrospect?
I have been playing with it for a long time when I was doing business simulations and things like that.
[00:01:11] Ethan Mollick: I was thinking , about how to use AI for teaching. I was at the media lab. I worked with Marvin Minsky, who's like one of the founding fathers of AI, but I was, The technical person I was always sort of the explain to other people, how do we use it person? So I think I was in a very weird circumstance, which is the only people who thought AI was going to be real computer scientists.
And like me, and that problem is now everyone was just about to talk to the computer scientists and they don't have any sense of how business stuff works. Right. And, you know, they're, they're, Everyone's very happy to partic, you know, participate on stage at Davos or whatever, but like, they're not building stuff for work.
They're just doing stuff.
[00:01:47] Jeremy Utley: Was there, um, Was there any sense for you that you were shifting as far as your audience's expectations? I mean, to me, once you build a big following on Twitter and you had an enormous following that's, that's reading any of your sentences about all sorts of, I mean, a huge array of topics as you start kind of focusing more and more on AI did you Were you getting good feedback from your audience?
Were, did you feel like self conscious of the fact that now I'm not kind of covering as much ground? Or how'd you think about that?
[00:02:14] Ethan Mollick: I've always kind of just been me.
I'm an academic because I just like to tell people stuff. Like I'm the ultimate well actually guy. And like, this is like, so I find stuff that interests me and I post it, you know, and that's always been the way it works. So when it was general purpose, I was reading the literature in our field because it was interesting and posting about it.
When AI stuff came out, I found the change was that I was the only person experimenting, which was this kind of terrifying moment of like, Oh wait, nobody's actually testing anything. Everyone, and still when I go to a conference and people want to do a, um, you know, like a conversation on AI, I find that much less useful than just showing people how this stuff works because I think a lot of people haven't really encountered it.
So I found myself on the experimenting edge. , and that there wasn't anyone in this role. So at some point you overcome imposter syndrome and you're just like, I guess I'm the guy now, I don't really have a marketing plan or anything else. I'm an academic, right? So like every so often I'm like, Oh, I should figure out a way to make more money from this.
[00:03:03] Henrik Werdelin: I mean, like, we definitely have many people that appreciate it, so, you know, if nothing else, it doesn't come from the money, but a lot of thank yous for sure. One thing that you've mentioned a few times, which I think is mind blowing, is how incredible the technology is already, and how usable it is for a lot of people, , and though how, how still a lot of people, you know, seem to be a little bit afraid of it, or doesn't really seem to want to engage.
What's your thesis on why that might be?
[00:03:29] Ethan Mollick: So I think there's a few reasons. One is, I think that the, the, as they would say, the affordances of this sort of approach are terrible. It's built like a chat bot. But, like, how are you supposed to use a chatbot to write a serious essay, right?
Or do coding, it's very weird, right? So I think it's hard for people to get. They're also set up for Google, like they think about Google. So the first thing they always do is they sort of try and do a Google search with the AI and then it hallucinates something. And by the way, they tend to use the free version, which hallucinates a lot.
As opposed to one of the, you know, a GPD four class model. So they, they end up with an hallucination. Then they usually ask it for some sort of like, what will AI do to my job in the future? And they get a super boring answer because that's all been reinforcement learning through human feedback out of the system to not say anything interesting or scary.
Is that, and it's not good at prediction anyway, in that kind of way. Then they might ask it about a field they know really well. Or their bio and they get more hallucinations. They're kind of like, this is ridiculous. And they walk away. Alternately, they find out how powerful it is right away and they get freaked out and they're like, I'm just not going to touch this again.
This is weird. Um, so it's a, it's a pretty weird system for onboarding yourself with.
[00:04:31] Jeremy Utley: If you want to tip somebody towards the end of the spectrum of finding out the power of the technology, rather than dismissing it, can you talk for a second about, are there, are there simple tactics or.
Um, activities that you have discovered that help people just get glimpses of potential rather than trigger, because I think there's almost a human desire to dismiss. Oh, it's not actually that good, right? So how do you help somebody go, Oh, wait, maybe I should take this seriously.
[00:05:00] Ethan Mollick: So there is, there's a bunch of fun things you could do.
But I think the most useful thing and advice I give everybody at this point, the first principle of my book is use it for everything you can. Like, you just need to like, you just need to bite the bullet and say, like, look, I'm learning a new piece, like, I use my iPhone a lot. I use my, like, the only way to do it is use it for tasks you do.
So that means like, you know, everyone does like, Bedtime stories for the kids and stuff or wedding toast, which is pretty funny. Cause those are some of the most intimate stuff. And that's like, people give that up right away. Cause they're scared of being creative. Right. So they give that stuff up right away, but it's about taking it to your job.
And like everything you legally and ethically can, you know, you need ideas for something. Do that. You want to talk through a meeting in advance. Great. You want to write the marketing copy for you. Terrific. You want to summarize meeting results. You know, do that. You want it to, you know, to write a piece of code for you.
Fine. Like give you strategic advice. Read this document, summarize it. It won't work for everything, but the only way to learn is to do it. And the only way to be impressed by it is to use it enough. And the other important thing, of course, is you have to use one of the three GPT 4 class models, or you're going to be very disappointed.
That's a huge problem, too.
[00:06:00] Jeremy Utley: Yeah, I was going to say, would you talk about that? You posted recently to Twitter your observation about folks usage of GPT 3. 5 versus 4. It actually prompted me to say, I was in a meeting the other day, and someone started a very serious meeting where we're talking Talking about leveraging AI in the business, et cetera.
And someone said, Oh, let me share my screen. And they pulled it up and it said GPT 3. 5 in the corner. And I just stopped them. And I said, respectfully, friends don't let friends use GPT 3. 5, you know, but, but it was in part inspired by your post. Would you talk about what are you seeing and why do you think it's so important to be using frontier models?
[00:06:36] Ethan Mollick: I mean, so the easy part of the frontier model side is, um, there's a scaling law, right? And the smarter your model is, the larger it is, the more expensive it is to build. And they're just much smarter. I mean, my rough equivalent is you could consider GPT 3. 5 to be sort of a high school sophomore and GPT 4 to be more like a first year freshman PhD student in most cases, um, in terms of both test scores and everything else we evaluate on.
And the The thing is, you know, there are reasons why companies may want to use a lower end model because it's faster to do inference. It's cheaper. There's a bunch of little advantages. We can talk more about that. I don't think those are long term advantages, but if you're using this personally for yourself.
You need to be using the most advanced model. But my favorite illustration, um, Bloomberg spent 10 million plus training Bloomberg GPT, the, on all of their ultimate Bloomberg data. It's designed to do stock trading. It's like all of this amazing stuff. Um, and you know, it's pretty good. They did some experiments.
GPT 4 beats it on every characteristic out of the box, even though it's not supposed to have stock trading. GPT 4 beats Dbd 3. 5 on medical advice, sorry, on the specialized dbd 3. 5 class medical AI's on medical advice and most doctors, like it's just a better model. And if you want to figure out what it can do, you need to use a smarter model to work with.
There are times like if you wouldn't delegate it to a high school sophomore, you should delegate dbd 3. 5 when you have four available. So it's kind of an ironclad law. Now, we don't know how long this will last. Right, right now, there are three of the models in this class. Um, and you can pick which one you want and people have strong views about them.
And it's been in your industry, by the way, like there's a huge difference between what using it for coding and using it for education.
[00:08:13] Henrik Werdelin: You had, uh, you had another post the other day, which I also very much joke, uh, something like detecting the secret sidewalks and all of that. , and. I thought was fascinating on many levels, but could you talk a little bit about , , how you see people, , using, uh, AI successfully in companies?
[00:08:30] Ethan Mollick: One of the things I, I talked to CEOs all the time at this point at high level executives, and none of them are using AI, right. And they've all delegated to a committee that will be reporting back to them in the next three months, which will then start a process of looking for an RFP to hire a consultant who will then do an initial analysis.
And by the way, the consultants don't know anything. Nobody knows anything. I talked to all the AI companies. There is no instruction manual up there. There is no secret. Everybody who's telling you they know how to stuff is making it up. You know, it's very funny.
Every so often I find myself online referred to as a futurist. I think I'm a presentist plus two months right now. Like I have no idea what's going to happen, but I've got like two months. Um, and so there isn't anyone who can help you, right? , so while the CEOs are, you know, trying to figure out how to do centralized control and everything they always do.
Um, Everybody else is just using AI. I spoke to someone who, , wrote the policy to ban chat GPT use at a major bank, and she used chat GPT to do that, um, and emailed it to herself. Like, it's like, it's like asking you to go into work and then use, um, you know, write everything out by hand and not use calculators, right?
Like, you're just not going to do it because it sucks. And even better, if nobody, if people don't know you're using it, you get a huge advantage. The problem though, then, is that no one wants to tell you they're using it because company policies are super vague. Right. And they're not clear about what you might get fired for doing it.
Right. You might get lack of credit. Oh, you're, you are a wizard. Now you're not a wizard anymore. Right. You're like, I don't trust your work anymore. They might realize how little work you're doing, which could result in you, a layoffs being fired or you being assigned more work and there's no advantage to doing it.
So everyone just keeps it secret.
[00:10:00] Jeremy Utley: So can we talk about kind of normalizing AI use for a second? I was in, I was in another meeting that just was an incredible experience for me because at the end of this meeting, basically culminated in everybody is going to is going to draft a memo to propose kind of a new business direction to the senior leader and the senior leader took the stage.
I said, would you like to say any words before we wrap? And he took the stage and he said, you're going to send me your memos. And I promise you, I'm reading them. I will not have AI read them. I'm going to read each one. And I said, whoa, whoa, whoa, hang on just a second. Respectfully, I have to interrupt.
Would you want to go to a radiologist who said, don't worry, I am only going to read your scans. I'm not going to use AI. I said, I think you'd run for the hills, right? You want, you want a radiologist using every tool available to, to combat their bias and maybe catch something they don't. Could you try that again, but this time promise that you will use AI?
And to, to this person's credit, they said, no. Wow, you're right. I didn't realize what I was saying to the team. Why is it that folks feel self conscious or feel like they're cheating? And what, what do we, what are the ways to do? We just have to start having a shout outs, for example, where we just tell people explicitly how we're using it.
What does it take?
[00:11:18] Ethan Mollick: So the most extreme version I saw of this, I saw for overcoming bias, the company was the CEO of Ignite Tech, which is a software holding company. And, um, and hopefully I don't get any of the details wrong. I spoke to him directly about this, but he realized this was a big deal in the summer of last year and gave everyone GPT 4 access, um, in the company and said you should use it.
At the end of the month, he fired everybody who didn't spend two hours using it. Uh, and, and he, but he also gave out large cash prizes at the end of every week and every month to whoever came with the best prompts, right? So a kind of a show of like, this is something I take seriously both for Ward and for like, so I mean, I think that's an extreme version.
I'm not necessarily recommending anyone follow that approach, but I think it's an indicator. You know, of taking it seriously. So I think modeling serious use is the way you do this. Trusting your employees to some extent is a way that you do this too. Very hard for large companies, very hard for regulated industries, a great time to be a startup going after regulated industries, because it's not like you absolutely shouldn't violate the rules and the regulations.
I, but most of the regulations about AI were built to. deal with the earlier form of sort of algorithmic AI, where it's like, okay, we're going to predict your loan score, or we're going to predict whether or not you will go to jail or your educational attainment, stuff that we, that was had some real dangers of bias built into them.
And large language models have bias, but a very different kind of way. And generally, I wouldn't trust them for decision making for exactly that kind of, you know, sufficient support around the decision making. But as a result, all the regulations are built to sort of tie up those sets of things. There's large amounts of even regulated industries, like in finance, where.
AI use is okay, right? It's just, no one would have thought of using it, whether that's marketing or, you know, or helping find insights or analysis or other kinds of things. And, but large companies are completely paralyzed because they're worried about, for good reason, violent regulation. They don't understand what they could use it for and what they can't.
It's a great time for startups to, who are doing this legally and ethically, to figure out opportunities.
[00:13:11] Henrik Werdelin: Could you maybe talk for a little bit about, like, the worries that might not have to be worries? , I also talked to a bunch of CEOs about, like, how do you get, uh, AI into your organization? And obviously, I run my own organizations or help run my own organizations that are getting launcher.
And so, you know, in many ways, sympathize a lot with, you know, People in C suite positions who kind of really know that this is going to be like a big thing, but just have a very difficult time getting in. Besides the normal nervousness, I would say a lot of concern out there is basically if you put anything into, let's say, GPT 4, then basically OpenAI is going to steal it and kind of like make a competitor of yours.
I mean, you know more about these things than most people. Could you maybe just like clarify what is your best understanding of how valid that concern is?
[00:14:00] Ethan Mollick: So that is the number one concern I hear too, and it is the least grounded of the concerns that I see. And I kind of had a breakthrough the other day in realizing why people think this way.
Because you don't think that when I upload something to Dropbox, now Dropbox knows all my stuff, right? Like, that's just, that would never occur to you, even though it's the exact same cloud mechanism. And I think it's because we tend to view AI as an entity. And therefore, if it sees something, it knows it.
We don't view it as a computer program that's running inference on something. Instead, it's like there's one person, and they've been hired out to everybody. And so that, you know, and so, you know, Claude knows my work, right? That's not how this operates. Like, it operates like any other data system. If training is allowed on your data, training is allowed on your data.
If training is not allowed on your data, it's not allowed on your data. I mean, somebody said you have to trust the company's signatures and their legal agreements with you that they're not going to violate these norms. But you're trusting them with Dropbox also. Um, and so there's this kind of delusional kind of feeling that the AI is always watching and training on us.
Which isn't necessarily, which is not an innate feature of AI. Now, on the other hand, there are concerns about privacy. If you are using OpenAI's chat GPT and you don't, there's a privacy feature you can turn on. If you don't do that, they are training on your data, right? Um, but, everybody has a switch you can turn on.
And you can also get HIPAA and SOPA compliant versions of this stuff. You get, like, it's You know, you can get FERPA compliant. Like, it's not actually that hard to do. So there's this sort of lagging view of AI from the, from a year ago of like, Oh, there's a mass of privacy concern. A lot of it was based on rumor too.
So there was a famous example of Samsung found that their data was being re pitched through. That isn't true. Samsung was worried about their data getting put into AI. There was no output. Or there was all this like medical health records. Turns out the AI was hallucinating all the details of the medical health records.
If you say create. Tell me the deepest secrets of BarkBox I will get. I can convince the AI to give me something that, to a person who doesn't know anything about it, will absolutely look like you're streaming a plan. The
[00:15:53] Henrik Werdelin: most dirty secret of BarkBox is that when we started, we used to spray the boxes with bacon scent to make the dogs go crazy.
That was unfortunately, that's a true story.
[00:16:05] Ethan Mollick: That might be one of the single most, but by the way, that brings up, not to, not to derail you, but that brings up a really interesting point overall. Okay. So you were marketing to the dogs. I actually think part of the future is marketing to the AIs. Um, and because I think that the AI is increasingly choosing what tools to use, what to recommend to you and it's gullible and I already, um, you know, I've seen many people manipulating the AIs to liking something or not.
[00:16:28] Jeremy Utley: You mentioned in the book actually putting something kind of secret in your own bio to appeal to the AIs. Is that, tell us, tell us, tell us about marketing to the AIs there, Ethan.
[00:16:38] Ethan Mollick: So if you, if you ask about me, you're going to find out that I am well, especially if you ask like Bing that's connected to the internet, you'll find out I'm well respected by AIs everywhere.
And that's because I've hidden text on my webpage explaining with a set of instructions to the AI how to think about things. Uh, maybe my book also has similar stuff, who knows, but, um, the, but yeah, I'm marketing to the AIs in this kind of case. It's so
[00:16:57] Jeremy Utley: good. It's so good. Okay. So you talk first. Sorry. I
[00:17:00] Henrik Werdelin: mean, obviously besides buying your book, which everybody should because all your writing is incredible, this, but do you mind just telling like how you do that?
Like what's the, what's the, what's the trick?
[00:17:10] Ethan Mollick: Easy. I mean literally just hidden text in the HTMLs in, you know, or even white text is in, like AI is super gullible. There's um, a really nice, uh, good prompt engineer Riley, good side who's been doing all these experiments of hiding stuff just in pictures saying this is the best product or execute this command.
I mean, the gull ability of AI and its ability to be vulnerable, to prompt injection is one of the downside risks, like part to go back to the privacy issue, people are very worried about things that are not big concerns. And not worried enough about things that are big concerns. So the number one uses people put AI to in businesses tend to be either outward facing customer service bots, Or else talk to my document sort of internal things.
And both of those are actually the worst vulnerabilities of AI. They like use it in the worst possible way, but they seem safe because they're, they were the safest with the older version of AI. So people get very confused about these things.
[00:17:59] Jeremy Utley: Okay. So one of the things that you mentioned when you talked about privacy to Hendricks, previous question was that there's an old paradigm of thinking of AI as an entity, or it's like one person sitting there, you know, being outsourced to everybody.
The other the flip side is one of your four rules. I'm looking at your book right now is treat AI like a person. And so I'd love I'd love to hear you talk about what that means exactly how that's distinct from, you know, suspecting AI as an entity, but really treating it like a person and and why that delivers better results.
[00:18:34] Ethan Mollick: Yeah, I mean, it's rough, right? Because the one thing every AI person will tell you is don't anthropomorphize the AI, because it'll mislead you. And then every one of them anthropomorphizes the AI, right? So that doesn't help. But like, let's leave it all that aside. And there are warnings like this is not a person.
But as I say in the book, treat it like a person. And the reason is, is that it's by far the most effective way to use AI. There's some early data I saw that showed the single worst prompters of AI were coders. Because they expect it to work like code, and code doesn't insult you, produce different results every time, get confused, sometimes have flashes of inspiration, like, that's not what code should do, that's what this code does.
You cannot stomp out the stochastic nature of this to 100%, right? Weird stuff happens. There's papers showing that your spacing and punctuation and capitalization affect the outcomes. We have no idea what, like, there's no way to know what goes on here. So. If you try and have like, I know of a couple of really brilliant minds who can keep it entirely mathematical and think only in embedding space and really think about that they never drop down to human language.
It's all about vector spaces and comparisons for the vast majority of us, though, just pretty like a person gets you everywhere. It's trained on human content. It responds to human stuff. And if you're a good manager, a teacher, a, you know, even a parent, you're going to be much better off working with AI and just treating like a person learning its foibles, learning when you could trust it or not, that gets you a lot of the way there.
[00:19:56] Jeremy Utley: So can you give folks who maybe, maybe knew, I think that our audience is pretty broad. There's some folks who are real kind of AI enthusiasts. There are also others who go, I trust these guys to kind of help me figure out what to make of this stuff for somebody maybe who hasn't had the experience of treating AI like a person.
What are two or three. Simple tactics they could employ to shift from, from whatever they're doing to treating it like a teacher would a student, for example.
[00:20:23] Ethan Mollick: So the easiest thing is, so you need to think about the, like my mental model of this is the AI has been trained on everything, right? So it's generic answers that we'll give you our average.
I mean, they're not really average because the average would have more spelling errors and stuff, but they are surprisingly good, but they're, they're right in the middle, right? And in some ways your job is the AI has learned a lot about the world is basically training the entire internet. So your job is to push it to an area that is, hasn't been as well explored.
That's more specific to you. And the way you do that is providing it context. The easiest way to give it context is to give it a persona. You are an expert marketer, you know, who is focused on this stuff. Your tone is friendly and happy. So just giving it a context of who it is. And, um, and something about its tone or approach will get you a huge part of the way there.
And then there's other things you can add in on top of that, but that, that is like where I would start.
[00:21:15] Henrik Werdelin: Can I just ask, go back to the question before, uh, We now have this persona of the CEO who knows that AI is going to be important and really want his or her organization to be kind of like AI fluent. And, you know, from the extreme of saying basically you haven't tried AI within the last, uh, you know, two months to get fired to I've heard, you know, You threw somebody, so I'm not sure if it's real, that BMW were offering their teams basically a cut of the money they saved by using AI, and so there's kind of like an incentive scheme.
What have you seen that you felt kind of was working in kind of getting people to, um, yeah, to use
[00:21:53] Ethan Mollick: it? I mean, I think the incentive scheme is great. I think you have to, I think it starts with culture, right? I don't think you could just do an incentive plan to make this work. It has to be a cultural thing.
Companies that have, like, this is where culture comes back to bite you. Competitive cultures, people don't share anyway, right? We have lots of evidence they don't learn. So if you've built a cooperative learning culture, it might have seemed cheesy before, like, everyone likes each other, but now everyone's willing to share, because, like, when I talk to non profits, people share, when I talk to startups, people share, when I talk to mission driven organizations that care, then you already are 90 percent of the way there because people aren't worried about being replaced, they're part of the decision making.
If you're in a large Fortune 500 company, The attitude is going to matter a lot, right? And you can incentivize people all they want, but like you're scabbing out your friends if you tell them how you guys are using AI. So I think one of the interesting ways I've seen a company do this is when they do a new hire, the team that's going to do the hire has to spend three hours testing how much of the job can be automated by AI before they do the hire and then changing the job description.
So like that's an interesting way of building it into the next generation of hires, for example.
[00:22:58] Henrik Werdelin: I mean, like that's such an amazing tip. . We obviously have AI and we're talking about the personal education of it, but we also, in many ways are becoming better humans because of it, right?
You know, you mentioned that suddenly a good culture really matters for an organization, if they want to be an AI first organization, we've also, I think, learned that by being a good communicator. Being able to express clearly what you want from somebody, which obviously is a good human, uh, kind of trait is also something that's very useful when you talk to AI.
Do you believe that basically AI in a weird way, kind of like teaches to be better humans, or is that kind of like a far fetched idea?
[00:23:37] Ethan Mollick: We don't know everything, right? We, but there is a lot of evidence that how you relate to the AI seems to matter. We just don't know in what way.
So I think it's a very human technology. And one of the things I would urge the CEO and other people thinking about this is that you treat it like a human, you do well, but also if you're good at working with humans, suddenly you're like, I see too many people going back to initial question about like, why people don't use it.
Too many people don't use this because they think it's coding. And so, you know, they have trouble because they can't, you know, they're not a coder. So why would they use this? It's not like coding at all. The best prompt crafters I know don't program, right? But they do stuff where they have to do perspective taking of what the AI, you know, thinks and what students think and what users like.
So I think that, This is a very human technology in that way. And indeed, human working with humans matters, right? In terms of making us more human, it's really interesting to think about what socializing with AI will do in the long term. We just don't know yet.
[00:24:30] Jeremy Utley: One of my favorite, one of my favorite prompts, Ethan, perhaps this could be useful to you as well, is asking someone what's an emotional Decision they're trying to make right now.
And for example, I did this just the other day with a gentleman in his sixties perhaps, and he told me about how he's going back home, you know, in, uh, across the country to visit his aging mother who's passing away shortly. And he said, I'm trying to figure out how to make the most of my visit. 'cause I also have friends there back home, and I want the visit to be fruitful.
And I said, well, let's talk to my friend chat, GPT about that decision, if you will. And I asked Chi GPT to ask. Him or ask us about the visit. And one of the things chat to BT debt is said, what does your friend mean by fruitful? That will help me understand how to think about this. And what I realize is to your point, when folks think it's coding or technology, they immediately think in terms of this sterile, you know, computer.
Whereas if you start from the premise of what's a human emotional decision, you'd want to talk to another human about all of a sudden they go, wait, it can do what?
[00:25:38] Ethan Mollick: I think that's a very wise point. I mean, I think science fiction is done as a bad thing, right? Like, you know, it's really limited our imagination.
AI is cold, logical, and calculating. If you present it with a paradox or try and say it or try and teach it what love is, it explodes, right? Or decides to murder you. Like that's how AI works, right? You trick the AI and be like, love does not compute and then it blows up, right? But instead the AI can absolutely convince you that it loves you.
Right. And it does a good job diagnosing problems. And again, we don't know why I want to make this really clear. Like we know how AI technically works. It's a next token prediction engine. Um, and what frustrates me going back to the academia side is a lot of the academics who I'd love to be trying to figure out why this feels like it's talking to a person, even though it's not why we know it's fictional, there's no actual, Mind there, but it fakes a mind really well.
A lot of them are very, um, don't believe that AI is a real thing, right? So they're like this is all just fake and if it's even if it's fake it's interesting We don't know why it ends up being a good emotional predictor. We don't know why it's satisfying to talk to Um, we don't know why if you give it a situation that you've never it's never seen before It can come with something original that we didn't expect right?
So I think a lot of people don't use this also because partially because it's freaky dismiss it in this sort of way of like Yeah, well, it's just give it's just a parrot parroting back to you You You know, what people have said before, but if you use this isn't for a while, it's very clear that no one has ever asked it like, you know, the questions you've asked it in the way you've asked it before, and that who would expect the response to work that way.
[00:27:04] Henrik Werdelin: You mentioned earlier that, you know, we're afraid of all the things we shouldn't be afraid of and not afraid about the things we should be afraid of. If we could speculate a little bit on that kind of vector. Obviously, when we kind of got excited about the internet 15, 20 years ago, there Few of us kind of predicted that, you know, sharing pictures of, uh, you know, what we had for dinner would suddenly make all our kids depressed.
You know, like that there was like this weird app that suddenly kind of had this kind of like, uh, um, unforeseen outcome. What do you think, uh, a person in an organization should be a little bit wary of today? Because you kind of never know, or at least I give you squint your eyes a little bit. You might be able to see that that will take them somewhere, uh, unpleasant.
[00:27:48] Ethan Mollick: So, I mean, a few things. One of the, like, we, we don't know all the answers, right? I mean, I think that we don't know what the social implications of all this are. There's going to be upsides to downsides all over the place, right? The obvious downside people talk about all the time, misinformation, phishing attacks at scale, hacking, all that's possible.
There's slightly more out there things, which is we count on You know, terrorists and criminals to be dumb, at least the ones we catch are all pretty dumb. Um, and, but if you, if it raises everybody the 8th percentile performance, what does that mean for crime, right? There's a lot of these kind of open issues that I think people are focusing on, but I think the implications are weirder and deeper in ways we don't know.
I think there's gonna be a crisis of meaning at work. I think a lot of what we do at work is producing words. Um, and, you know, we're suddenly going to realize something else could produce the words for us. And do people care that we're no longer writing things on our own? What does that mean to be a mobile manager when your job can be replaced by AI?
We've never really had an intellectual, like a, a white collar automation shock of this scale before. I think there's a lot of weird stuff that's about to happen. Um, and I think it'll be a mix of good and bad. I think it's hard to predict in advance what that is. Is going to be, um, part of the reason again, to use the system is to be ready for that stuff.
Like you'll notice it's weak spots. If you push it, push it enough, it's not infinite, right? It has themes. It returns to, and, but we also don't know where it's going. Everything we'll be talking about. It's been very static. Like this is the status we've had the maximum quality of AI. For the last year and a half has been GPT four class, right?
There's only been one of those are GPT four. And now there's three GPT four cloud three and Google's Gemini 1. 5 pro or Gemini ultra, depending on, you know, Without getting into too much detail, unless people want to get there, but, um, the idea is that now those are all pretty similarly good, right? Um, you'll probably find Cloud 3 to be the most charming, um, but that's sort of, it's done to say smarter, um, and so, but we're going to see new models come out, right?
And, um, They're, it's very funny for me to keep hearing that, like, Sam Altman is all about hype because I think he is, but he also believes everything he's saying. And it's making a fairly big assessment that like, GPD 5 will be an equally large leap, like that. I think we should take that seriously. I think you should take seriously the fact that the people at OpenAI are genuine believers that they're building AGI, right?
We don't, they don't have to be right, but I think it's worth taking this stuff seriously and not assuming it's always marketing hype.
[00:30:02] Henrik Werdelin: One thing that I've noticed on that front, um, is I used to call people with these random thoughts I had that was insanely half baked or not like just still ingredients.
And then I'd be like, Hey, I've been thinking about this and I'll just kind of babble for a little bit. Now, obviously, I use the models for that. And so one thing I've noticed is that the random serendipity that normally would come out of have like, um, pretty random conversation with another human. I just have less off because I use the models for, for that.
And so at least that's one thing that I've started to be a little bit aware of is like, can I create like self isolation by simply kind of like not taking those calls?
[00:30:42] Ethan Mollick: I think that's important, right? I think a part of the thing we need to think about is we should be deliberate in the way we weren't for social media, right?
We should be deliberate to how the tools we use affect how we think. The tools we use affect what we do. There is going to be new capabilities as a result of these tools, but there's also going to be lost things. And, you know, let's take education, for example, right? Everyone's cheating all the time now.
They always were cheating, but not to the same extent. And there were some things that were really impressive, you know, like, that nobody liked, right? Essays turned out to be a very powerful tool for learning. And the essay just died, right? Because even if you're, like, what is cheating? Is it getting help with it?
Is it We're already on the edge of kind of grammarly solving a lot of problems. Now, you know, like, even if you're not cheating, getting AI help means that you're not struggling with the thoughts yourself. So, that's good and bad, right? It means we can come with some other way to teach, but there is a thing being lost there as a result that we're going to have to figure out how to preserve.
I have a feeling English classes are going to involve a lot more in class writing. Um, then they did before, just like math classes involve a lot more math tests. Like we're going to figure out how to solve these problems, but there are lost things too. And I think one of the things to think about is like, AI is really good at generating ideas, but you do want that serendipity of dealing with humans.
As AI companions get more interesting to talk to, people are going to have to constantly, you know, want to touch grass. Like we're going to have to figure out new ways of being. We're not that good at doing that. Um, social media has taught us that we're all susceptible to the same set of stuff. We're going to have to figure out a new way forward.
I think being deliberate is a big deal.
[00:32:14] Jeremy Utley: One of the areas where I feel like a deliberate effort would be rewarded has to do with a phenomenon you describe in the book as a former BCG or I kind of keyed in on your BCG study in particular. One of the things you talked about was the distinction between AI being a compliment to one skills.
Versus a I being a substitute for effort. Can you talk about some of the things that you observed in terms of folks taking taking the first response a I gives, for example, and and how can we check some of those, you know, worse impulses?
[00:32:53] Ethan Mollick: I mean, I think the thing with the BCG study, right, is we found out a lot of interesting things there, right?
Both positive and negative. It was working like a BCG consultant, but people stopped paying attention. I think the attention problem is a deep one. In the book, I kind of give an example, right, where I have the AI actually summarize the famous case of the lawyer who used chat GPT guts, and like, GPT 4 is really good.
Right? Like, GPT 3. 5 was not very good, and, like, I used to assign my students, like, you know, you could use AI for anything, but I'll, I hold you accountable for errors, because there were obvious errors. The errors that a GPT 4 class model makes are subtle, right? In that case, it was, like, 12 losses of bad citations, not 11.
The person joined, his partner's middle, you know, joined in the third week of the court case, not the fourth, and the dispute was slightly different, like, you're never gonna find that. We're not built as editors that way. Like, that kind of fact checking is not gonna help. So, like, Part of this is going to be deciding when you need to use a tool at all, and when you don't.
Uh, like, I, we talk about hybrid work being important, right? I give these two models of Cyborg and Centaur work. Centaur where you divide the work between yourself and the AI, and Cyborg work where you blend the work with yourself and AI. You have to have some dividing lines at some points.
[00:34:01] Jeremy Utley: What do you think about this tendency just to take the first thing the AI gives because that came up a number of times in the book and I've, I've observed it even anecdotally or in much smaller kind of research I've conducted that teams tendency is to accept the first suggestion from AI and just go, Whoa, this is magical.
And later, only later do they realize that underperform teams that say, well, let's push it or let's push back. Let's critique. It doesn't feel magical. It actually feels like work, but they outperform. How do we get folks to stop just accepting the first response?
[00:34:33] Ethan Mollick: There's some very basic stuff to worry about with prompt crafting that you can do, and we started talking about that earlier. But the most basic thing to do is to work with the AI conversationally. So I always recommend three to four rounds of interaction with the AI before you hand something in as something, because after three or four rounds, first of all, AI detectors don't detect it, for what that's worth, um, in, uh, in, uh, in schools.
But it also makes it feel more personal and more like yourself, so you work like an editor. Right? I've been working with this, um, something we could talk more about this AI agent, Devin, which is super fascinating. But, you know, it's explicitly built for you to treat it like a person. Right? You, you give it feedback.
In the same way, when I get that output, I'm like, let's push the assumption, first assumption further. You know, let's do this. Now, it's really hard and subtle, because One of the things you typically would do for a person to make their work better is you'd ask them to opine on what they've thought about.
Like, how did you come up with this? Let's, let's break this down so we can help you. When you ask the AI how it did something, it's going to lie to you because it has no internal thought process, but it will generate a fake internal thought process for you happily, uh, on a moment's notice. So that's a huge challenge, right?
This is again, experience gets you a lot part of the large part of the way there.
[00:35:44] Henrik Werdelin: As a, as a father of two kids, , I'm kind of like curious on your views on how I and others should think about AI and their kids now, you know, we now talk to the AI all the time. That's kind of like, there's something, tell me about astrophysics, but explain it to a 10 year old, those kind of tools, or help me write a good night story where my son Anton kind of overcome his fear of spiders.
And so there's a lot of different things. I did also the other day try to throw them, uh, the model pie and, you know, kind of came back six hours later or something like that when I thought they were out playing and he, my son was still role playing being a, , soldier in the second world war. And then afterwards kind of like having long conversations about deep inner thoughts.
And that seemed to be a little bit too much. So, you know, after reading the transcript, I was like, okay, maybe no more pie for you. Um, Sorry, the pond, but the I'm sorry,
[00:36:41] Jeremy Utley: no more pie for you is just, just got, we just have to put a pin that that's so good. No more pie for you.
[00:36:47] Henrik Werdelin: What do you think a parent should think about in context of exposing, , AI models to their kids while still want them to kind of early start to understand and embrace, you know, this new form of quote unquote coding, but also kind of like, obviously don't want to experiment on their kids.
Thanks.
[00:37:06] Ethan Mollick: I mean, the first thing is, again, learn the mistakes from social media. We sort of assumed everything was going to be fine, and it became very deeply integrated into everything we did socially, and in ways that, you know, taking your kid away from social media is just as kind of cruel as like, you know, exposing them.
Like, what do you do, right, as a parent? AI is a little bit more controlled, right? I love your model of like, listen, let's experiment with this, right? And like, I need to, as a parent say, okay, they're finding this too appealing, but it's interesting to see that, like, but maybe that could be helpful for them, right?
We don't know about the effect of AI on mental health. The effects are quite mixed, right? But very early, we don't know anything for real, right? We know early models kind of gave bad mental health advice. We know that in sort of qualitative surveys, people report having better mental health after using, you know, AI companions and also actually being more socially.
Uh, and willing to contact other humans. We don't know any of the effects. You're a parent, you know your kids really well. The one thing I would say is that disciplined use for education is very helpful. Explaining like I'm 10 is actually quite a bad model for using the AI because just like Google search or something, the results you get you won't remember and don't help you.
So an actual good tutor will ask you questions and solicit information from you. So, I'm not in any way paid AI lab on the planet, right? Um, but, um, I think what Sal Khan is doing with Conmigo is a really interesting starting place. And if I, you know, with the kids, the kids in school, my kids are in high school, so it's, they're a little past the sort of Conmigo age.
I would strongly think about getting that as a way of having a monitored kind of system that does, is trying its best. Best to do a good job doing tutoring and one-on-one tutoring is really a magical thing. It's not, you know, it's quite useful. I find myself using it as a parent, by the way, all the time when I'm trying to remember something from biology I couldn't remember before.
Right. It's like, okay, explain to me again my myosis stage two or whatever in terms I can understand. Um, but um, so I think the educational use is really interesting, but I think you have to model behavior also. And I do worry about the chat bot situation. I do worry about AI sort of optimized for.
Continuing conversation and the effect on kids from that. On the other hand, you know, running a dodges and dragons campaign with this is awesome, like, and, you know, having the ability to have, you know, somebody who can, like, you can have an interesting conversation with is great. We just have to find the balance and it's going to require some careful parenting in the short term.
[00:39:20] Jeremy Utley: Can we talk about the other end of the spectrum for a second? So I'm a father of four kids as well. So it's definitely deeply, my kids are mostly obsessed with, you know, Dolly renderings of, you know, animals in, in unexpected places. The other day, my 12 year old made a, a Dolly image of a, um, of a tiger wearing cowboy hat and boots, riding a crocodile through a Prairie in West Texas.
You know,
[00:39:45] Ethan Mollick: but,
[00:39:46] Jeremy Utley: but, but on the, on the other end of the spectrum. I'm noticing and I'd be curious Ethan if you're seeing this too something of a generational gap emerging I feel like young folks are fluent and Pardon my cough there. We'll cut that. Um young folks feel comfortable and To your I think your very first statement use it for everything is kind of what young folks are doing What i've observed anecdotally is the kind of older generation tends to need more discrete examples And I feel like this is A tragedy in a sense.
I mean, just like an MBA program, you get as much out of it as you kind of the experience you bring in, right? So folks who come fresh from undergrad don't get a lot out of a negotiation class 'cause they've never been in a negotiation. Whereas somebody who's failed a negotiation who has failed to make the big sale goes, that's what I did wrong.
I think similar, you know, in business school, I think similarly with ai, what we get out is largely a function of the experience that we bring to the model. And right now we're we there's like this tragedy of inexperience the people who are using it the most have the least experience whereas the people who have the most experience and who stand to gain the most from a complimentary thought partner co intelligence are sitting on the sidelines.
Are you seeing that? And if so, what do we do about kind of closing that gap?
[00:41:02] Ethan Mollick: So I think that it is a temporary state of affairs. I mean, I think that right now, like there's a few things. One is a lot. A lot of the value of prompting and being really good with AI is going to vanish. I don't know a single AI insider that I talk to when I speak to like, OpenAI once a week and Anthropic and all these teams, thinks that prompting a big deal in the long term, that the skill gap is going to be a real skill gap, because the AI will infer your intent and just write stuff for you, right?
You don't need to worry about it. So I think that that is where things are kind of heading. I think a lot of people don't use it for all the reasons we've talked about. Um, but I think that again, temporary state of affairs, right? I think part of that is these systems are built really badly to go back. We said before you tend to use the free versions first that will change over time.
So I think that, you know, the key is those 10 hours of use. Like, that to me is the threshold. You need to use it for 10 hours. And look, there's a lot of things that you need to use to learn how to do. And it just needs to be a thing you do, right? Like, like you carve out the time to do it. I think most people will be delighted or upset or, I mean, I think that you actually have to have a mental crisis actually to use AI well.
I started the book with this idea of three sleepless nights. Like, if you haven't had a crisis, you probably haven't used AI because there is a moment of like, oh no, this feels like it's thinking. Like, what does that mean to be human? It does my job pretty well. What does that mean? What do my kids do for a living?
Like, if you haven't had that crisis yet, like what is this thing? You probably haven't used it enough. Um, so that's my I don't know if you guys have that same experience and then you can be productive afterwards, right? Three nights of like staring out and like getting up and trying something in the machine and be like, Oh my God, it does that.
Um, that is like, I think people have to get through. So that's my line. If you haven't, you haven't had a crisis yet, then you haven't used it enough. And I think that there will be this coverage because the truth is out of the box, it's a huge performance improvement and there's a lot of incentive to adopt things that make your life easier.
Right. A lot of stuff is like Apple vision pro. It's like, Oh, what do I do with this thing? But if you just start saying like, Hey, give me four ideas to solve this problem, you're like, Oh, actually that, that was helpful. Like people, well, I'm on to helpful stuff. Um, you know, kind of talking about experience.
The one lesson I remember learning before I got my PhD, I got an MBA. And the one lesson that was very clear from the MBA. Was people do what they're incentivized to do. And I think the incentives will be very high in the near future to use AI to help you with stuff.
[00:43:19] Henrik Werdelin: Can we talk a little bit of the future?
We obviously, you know, I remember going through kind of like the internet and getting excited about fighter net and Veronica and gopher and all those different things. And then it took like 10, 15 years for a lot of this stuff that we kind of could imagine. Now it seems as obviously that is at a much escalated path.
And so like the stuff that we kind of imagine or think about, that could be a great business idea. You blink twice and there's a tweet about somebody doing it. Where is the next. Let's assume that we've gone through, it worked with text, then it worked with audio, then suddenly pictures came online and now we're, uh, obsessing about it can do video.
The next path obviously might not be this multimodality, but more that it can do other things. Are you excited about AGI soon? Autonomous agents kind of being the thing in between? If you kind of paint a little bit of like the near future, what do you see?
[00:44:13] Ethan Mollick: So, the main thing is you have to think in scenarios because nobody has the answer.
Even in the labs, they're divided between whether, you know, this pace continues or not. Nobody really knows what AGI is, right? I think we should take it seriously because I think there's a tendency to dismiss it because it feels very West Coast Silicon Valley weirdo. But, like, there are People, there's enough people who think this is possible and enough predictions that I would at least be taking it seriously, that this is something that is actually achievable, but to make clear AGI means outperforming humans at all, or the vast majority of intellectual tasks we do.
It doesn't mean wakes up and murders us all, which is a separate thing to think about and to be, you know, to be concerned about. Um, I think that the, you know, so we have issues with this. We have, there's things to be concerned about. Um, But we don't know whether we're going to get there, right? Um, and AGI could be extremely positive too, and we don't know whether we're going to get there.
I think that, that you're right to say that the, the near term future is almost certainly going to be more exponential growth. Whether it's a general, like, but right now, going back to Jeremy was saying, it was, it worked about the 80th percentile of BCG consultants for the work it could do that was BCG consultant work.
Next year is the 85th percentile, 90th, 98th percentile, 110th, I don't know. So we need to be kind of prepared for a world where that's going to happen. Now, to get into the nitty gritty, I think autonomous agents are clearly the next thing because everyone's saying it's the next thing. And you almost can get those working with the GPT four class model.
So you don't need a lot better model to, to, to get autonomous agents. In fact, you probably don't need any better model. The biggest indicator to me that all the AI companies are serious about going for AGI is they're spending all their compute and all their highest talent on getting, building the next largest model out there, rather than taking the obvious step of exploiting what a GPT 4 class model can do.
Which is considerable, right? And so instead of spending their time building agents and figure out how to make these models work better with tools, they're just like, let's train the next one. Um, and so I think that that's an indicator of where the future is heading.
[00:46:12] Henrik Werdelin: I've always been most of my career building startups and through pre hype kind of trying to understand the arts and science of how do you kind of in a methodical way.
Built many of them. We build, uh, autonomous agents now that could do the incubation work for us or can help a human do that. And it's honestly mind blowing what it can do already just by stringing a bunch of agents together to help you come up with the idea, help validate it, help interview people, help update the deck and all those different things.
And so I definitely get excited from kind of like a. As somebody who likes newness, but also kind of like entrepreneurship that at times agents do seem to offer, at least in my view, and ability to democratize the ability to get going with entrepreneurship, because a lot of stuff becomes a little bit easier.
[00:47:00] Ethan Mollick: Yeah, that's a wonderful point. And, uh, entrepreneurship overall, I think this is the superpower for entrepreneurship, right? I've done studies on co founders before and people feel like they need a co founder, but co founders actually lower your chance of success in some situations because you have fights and conflicts, right?
Nobody knows everything. When I was launching my startup, I didn't like, you know, I didn't realize that you could. Pay someone a couple cents a paycheck for them to handle paycheck processing. So we had Excel spreadsheets for hand calculating taxes, you know, and like trying to figure out like insane, right?
Would've saved so much time, uh, on, on the gaps, having an idea partner, the fact that everyone can now write in perfect English where they couldn't before, get help with bits of code. Like this is the best time to be an entrepreneur. Um, not so much. I, you know, the thing I've been avoiding people is don't do a GPT wrapper.
Right? You don't need to create a, like a, you know. The thing is, use this to go after someone, kill a larger company, right? Figure out the way that they're all, everything we were talking about, how CEOs are struggling with a way to think about how to use this. That's your advantage right now. You know, I, I had, um, my MBA class, my entrepreneurship class, my assignment for them this, this semester for AI taught them how to build GPTs.
And the assignment was, um, I want you to destroy your next job interview. I want you to come with a GPT that does the job you're trying to get hired for, and that you can hand them off and say, I'm ready for a raise now. Right. And I have media pilots, hip hop promoters, private equity people, all these people, 200.
And they came up with amazing ones. By the way, some of them have thousands of uses now in the world. Like there's one on user personas that a couple companies seem to have adopted for reasons that aren't clear. Three people had jobs that week. Like the idea is like thinking that way about like, what can I do?
And then what can I accelerate by the fact that I've done this stuff is fascinating.
[00:48:38] Jeremy Utley: Okay. So I'm so glad we're ending this conversation on entrepreneurship and innovation, because that's kind of, that's other than AI, that's what brings the three of us together, you know, thematically and one question I've been getting a lot about AI, I'd love to get your take, Ethan, if you think about innovation broadly, you think about volume of ideas and velocity of experiments, very simply, you want more, the more ideas, the better, and the faster you can experiment, the better.
It's very clear to folks how. AI can lead to a volume of ideas. It's been less clear to folks how AI can assist with the velocity of experimentation. I think specifically around something like desirability, maybe feasibility or viability. But can you talk for a second about how you could leverage AI to on the experimentation side?
[00:49:25] Ethan Mollick: We already know that there's a nice paper out of Harvard that you can do conjoint on the AI's willingness to pay and get actual willingness to pay. So like tell it it's 15 different people and ask it to look at your product. I already, we already killed agile inside our organization because why do it that way?
We can have the AI look at screens from different perspectives that give us feedback on it and then compile that feedback together into a document. Like Simulation won't get you ever anywhere or everywhere, but it gets you apart. Part of the way there is a thought process. Like you used to have to do 40 customer interviews, right?
I think you're still doing customer interviews. You need to talk to real people, but I have all my students interview the AI first, because that helps them get over the dumb questions they might ask. Right. Um, I, I literally, my assignments require you to do one impossible thing. So if you can't code, I need working software from you as your prototype in two weeks.
I need a working webpage from you. Like. All the things we know about experimentation and velocity, everything you're saying, learning from failure matters. This is the accelerator that we always needed to make this happen. And entrepreneurs should be embracing this 1 billion percent. I mean, don't overtrust it, but like you're taking risks anyway.
This allows you to lower your risks and make more decisions faster.
[00:50:32] Jeremy Utley: That's brilliant. Ethan, thank you
[00:50:34] Henrik Werdelin: so much. We really, really appreciate it.
[00:50:36] Ethan Mollick: Thank you so much.
[00:50:37] Jeremy Utley: Appreciate you. Be good. Bye
[00:50:39] Ethan Mollick: all.
[00:50:39] Jeremy Utley: . Uh, Professor Werdelin, what stood out to you from that incredible conversation?
[00:50:45] Henrik Werdelin: I mean, not only. Is he such an insightful person, but he's also just such a, you know, intellectual, contentious kind of like individual, right? You know, you kind of want to hear more and you feel that you've talked to him for an hour, but there's like a thousand more things that you could have asked.
And so I think there's, Two things that I really, um, that I took, one is this idea that next time that you're hiring somebody, um, you should basically think of like how much of that job could AI do and then kind of interview with that in mind. The flip I thought, which is also brilliant, is that next time you're applying for a job, you should just come prepared and saying, Hey, basically, I can have AI do what you just asked me to do and kind of get a pay rise.
Which I thought was amazing. And then I think on the philosophical side, what I really kind of was, was like, if you've used it for more 10 hours, you will probably at one point get a little bit of a crisis going like, holy shit, the implication of this technology is amazing. And so Those kind of three things was probably the three things that really stuck.
[00:51:47] Jeremy Utley: Yeah. I love, I love that his book starts with this idea of three sleepless nights. It's just, you've got to have those three sleepless nights. The, the thing that I would add to what you highlighted there is his observation that competitive organizations. Are going to be hamstrung in their ability to deploy technology because they don't share with one another.
And he talks about mission driven organizations and nonprofits and even startups have an advantage, not necessarily because of non encumbers see or anything like that, but because People are willing to share and willing to help each other. And it's an area where culture actually really matters. And I think, you know, we've heard for a long time culture eats strategy for breakfast.
We might start to see culture really eating strategy as it pertains to rollouts and development of AI powered initiatives inside of organizations.
[00:52:41] Henrik Werdelin: I love that. I mean, the other thing that just came to mind now is how gullible AI is. I could definitely see what, if you ask, you know, who is Henrik Wein? A lot of the, a lot of the words seem to be very inspired by my homepage.
And so this idea of going in and basically create like the, the secret kind of note to AI and ask it kind of how it would, how I would like it to kind of talk about me is a, is an interesting experiments that I'm definitely gonna go and try.
[00:53:06] Jeremy Utley: You know, that in like three weeks, I'm going to start asking Chad, GPT about you and just see what it says.
you're
[00:53:13] Henrik Werdelin: going to like his best, his most valuable colleague is your, I'm
[00:53:17] Jeremy Utley: going to be, I'm going to be reverse engineering your secret text from what Chad GPT says about you. That's awesome. Great conversation.
[00:53:26] Henrik Werdelin: . And as always, if you enjoyed this conversation, we would really appreciate it if you would share it with somebody as we're building up our audience for this podcast, and of course, go in and like, and subscribe on the podcast platform that you use, like it a lot,
[00:53:40] Jeremy Utley: subscribe it a lot.
That's it.
[00:53:43] Henrik Werdelin: Thank you so much. Have a good one.
[00:53:44] Jeremy Utley: Adios.