Mikkel B. Rasmussen, founder of the Human Activity Laboratory, joins Beyond the Prompt to explore what it really takes to understand people. Drawing on decades of applied anthropology, he shares why true insight requires surprise, struggle, and embodied experience. He also reveals how AI is already helping us get there. From LEGO to synthetic data, this episode is a masterclass in uncovering hidden assumptions and designing for real human behavior.
Mikkel B. Rasmussen brings a rare lens to the AI conversation. As an applied anthropologist, he has spent decades helping companies like LEGO uncover what is really going on beneath the surface.
In this episode, he shares how deep insight often begins with being wrong, why surprise is the clearest sign you have found something meaningful, and how the pain of not knowing is essential to breakthrough thinking. He also explains how AI is transforming his own research, from pattern recognition to video ethnography, and introduces a provocative idea: Anthropology Without Anthropologists.
Jeremy and Henrik reflect on what it means to teach AI how to surprise us, how synthetic data might reshape experimentation, and why better insights begin with better questions.
HARL: humanactivitylab.com
00:00 Intro: Why This Conversation Matters
00:25 Meet Mikkel: Founder of Human Activity Laboratory
01:14 Understanding Anthropology and AI
03:32 Applied Anthropology: Tools and Techniques
04:56 The Role of Narratives in AI
07:06 The Importance of Sensory and Social Dimensions
13:06 Case Study: LEGO and the Anthropology of Play
21:07 The Role of Surprise in Anthropology
27:51 AI and Human Synergy
31:26 Exploring AI's Limitations and Potential
32:46 Anthropology Without Anthropologists
34:17 AI's Role in Generating Insights
37:23 Human Bias in AI-Generated Ideas
42:05 Synthetic Data and Its Applications
47:34 The Future of AI in Anthropology
49:25 The Debrief
📜 Read the transcript for this episode: why-ai-gets-people-wrong-the-real-source-of-insight-with-anthropologist-mikkel-b-rasmussen/transcript
[00:00:00] Jeremy Utley: Folks, a brief note: take this episode seriously. This is one of the most invigorating conversations we've had in a long time, and it ranges far beyond your typical AI use case exploration to get into the meat and the heart of innovation and problem solving. You do not want to miss a moment.
[00:00:22] Henrik Werdelin: And with that, over to our guest. Here's Mikkel.
[00:00:25] Mikkel B. Rasmussen: Hi, I'm Mikkel, and I'm the founder of something called Human Activity Laboratory, where we work on understanding anthropology and AI and how humans deal with this whole new world of artificial intelligence. And I'm excited to talk about how AI can be used to understand people better, and how anthropology and AI can become something that supports each other rather than being opposites.
And: when do you need a human being to understand another human being, and when can a machine actually help you do that?
[00:01:00] Henrik Werdelin: Mikkel, very excited to have you on, and I look forward to this conversation, and very much looking forward to introducing you to Jeremy, 'cause I think you guys will really hit it off.
So also just excited to be a fly on the wall while you guys kind of geek out, right? Mikkel, maybe, you know, you and our mutual friend Christian Madsbjerg have kind of introduced me to the world of anthropology, and I am increasingly convinced that it will have a huge and very important impact on how we understand how to use AI better.
And I can go into more detail and have a bunch of questions around that. But for those who have not studied the science of anthropology: would you kind of just give a little bit of an explainer of what that is, and some of your research and your work, and give a little, kind of like, 30-second introduction to yourself and the domain?
[00:01:57] Mikkel B. Rasmussen: Yeah, I mean, anthropology is something I've worked with for 25 years, and I have to say applied anthropology, which means, uh, studying human culture and studying human beings, and particularly the social world. So what is play? What is, uh, illness? What is traffic? What is AI? And how do we construct the world around that?
And it's used to understand the meaning of things, particularly in the corporate world. Uh, corporations use it when they have no hypothesis about the problem they're solving, particularly. So it's what's called pre-hypothesis science. So imagine Darwin when he went and wrote his amazing piece on evolution, before he knew there was a hypothesis about evolution: he went and studied, you know, nature, biology, and then he created the theory of evolution. And anthropology is a little bit the same. We just study people: instead of studying flowers and animals, we study human culture. So children, doctors, hospitals, uh, car drivers, dog owners, you know, uh, it could be any kind of human culture.
The key is that it's not psychology. We're not trying to understand the human mind; we're trying to understand humans as social beings. So not so much the study of individuals, but the study of, um, when we are together. So family and kinship and class and, uh, gender and all those things are important in anthropology.
[00:03:32] Jeremy Utley: Could I ask you, Mikkel, what are some of the tactics that you employ, or some of the tools in your toolkit, so to speak, as an applied anthropologist? If someone's doing blank, I know they're practicing applied anthropology.
[00:03:45] Mikkel B. Rasmussen: Mm. I mean, if you're doing what we would be calling field work, which is going out and studying a particular, uh, group of people by being with them, by observing them in their natural habitat like a scientist would do, then you're doing anthropology.
That means if you wanna understand, let's say, how kids play, for example, you'd go to kindergartens and playgrounds, and you'd study kids, and you'd talk to parents, and you'd actually engage in the real world. What is called participatory observation in the scientific world, which means you are participating in a social activity and studying it, like looking at what's happening.
And I've been in wards, I've been in hospitals during operations. I've been working with train drivers and car drivers and golf instructors and kindergarten teachers, all kinds of people, to understand the world from their point of view.
[00:04:47] Henrik Werdelin: The reason why I was so keen to get you on is, and you can then debunk this thesis, and then the podcast will be short.
Yeah. One of the things that you guys do well is that you form narratives about people, and help people understand, it seems sometimes, the self-narratives they have. And if you look at how AI models understand the world, they understand the world through language. And so I'm increasingly convinced that the way that we get more out of AI is to become better at articulating the narratives that we have for ourselves and for what we want. Because without being able to have a very clear narrative about, for example, what we do as a company, and not just what we do, but what we want to be and what we feel that we serve, and the purpose and all those different things, we have a tough time having AI help us with that.
I'll give you an example. Let's say that I describe BarkBox, one of my businesses, to an LLM, and I say, what is the next product I should build? It would assume that it is two treats, two toys, and a chew that go in the box. And because it wants to be statistically accurate, it most likely will recommend me something that is similar to what I do, not who I want to be.
So it will suggest to make a cat box, for example. Now at Bark, we would never make a cat box. We're not in the business of doing stuff for cats. We're in the business of making dogs and their people happy. So simply by reframing the way that we talk to a model, that we want to make dogs and their people happy, suddenly the model will much better understand what the next step should be for us.
And so it seems to me that anthropologists have had a lot of training in really understanding this narrative creation. By using the tools of anthropology, we will be better at articulating a company's purpose and strategy, and our own purpose and strategy as individuals, for how we'll use models better.
Yeah, but does that make sense? Tell me that I'm just directionally correct.
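Henrik's BarkBox example boils down to a concrete prompting move: swap a catalog-style description of the company for a purpose-framed one before asking for ideas. A minimal sketch of that reframing, assuming the OpenAI Python client and an illustrative model name; both framing strings paraphrase the conversation, not Bark's actual copy:

```python
# Illustrative sketch: same question, two company framings.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "What is the next product we should build?"

# Framing 1: the company described by what it currently ships.
catalog_frame = (
    "We are BarkBox. Each month we ship subscribers a box with "
    "two treats, two toys, and a chew for their dog."
)

# Framing 2: the company described by its narrative and purpose.
purpose_frame = (
    "We are Bark. Our purpose is to make dogs and their people happy. "
    "Today we express that through a monthly box of treats, toys, and "
    "chews, but the box is a means, not the mission."
)

def suggest(frame: str) -> str:
    """Ask for a next-product idea under a given company framing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": frame},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# The catalog framing tends to pull answers toward statistical
# neighbors of the current product (a cat box); the purpose framing
# steers the model toward the company's intent instead.
print("Catalog framing:\n", suggest(catalog_frame))
print("\nPurpose framing:\n", suggest(purpose_frame))
```

The question is identical in both calls; only the self-narrative changes, which is exactly the lever Henrik is pointing at.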
[00:07:04] Mikkel B. Rasmussen: No, uh, well, yeah. I think there are some nuances to this, because if you wanna understand human behavior and human culture and human activity, language is one road. So talking to people is one, and understanding what's been written, and the words they use, and what the meanings of the words they use are.
There's a lot to learn from that. And anyone who's been in a company for more than a couple of months knows that every company has its own language. Ford has a very particular language about cars; it's very different from Volkswagen's, for example. And so there is a lot of, you know, gold in understanding language.
Um, and I think AI generally does a very, very good job at that, and increasingly better as there is more data. But language is only one dimension of human nature. There is also the body. Like, how do things feel? How does the sensory system work? You know, how do things smell? Try to explain how something smells; it's very, very difficult to do with language.
Or how does something look? How does, you know, early morning in October in Copenhagen look? You can do it, but you almost need to be like a poet to describe it, 'cause it's not just, you know, words; it's also emotion, and what you see with your eyes. And then there's a whole thing around how things feel with your hands.
So, the sensory. And there's a whole lot of science on this that basically says that, you know, we don't think with our brains alone, we think with our bodies. And that's probably something that is not yet fully understood by AI, or why AI has a little bit of a weakness, because it doesn't have a body yet.
So I think that's really interesting. We can talk about some of the things I think will be coming soon that I'm excited about. For example, some of the stuff that Google is doing with their Pixel phone that has what's called contextual video. So if I look at you, Jeremy, it would be able to see what books you have in the background
and instantly understand what you've read, for example. It can recognize, if a dog comes in, that it is a dog and not a cat, et cetera. And when we get that sort of intelligence, I think it's sort of an exponential increase of what we can call intelligence or, you know, smartness. That's really interesting.
So you have language, Henrik, which is definitely important, but there are other dimensions. And then there's also the whole thing about sociality, that I think particularly engineers often forget: that people are extremely connected. That we are born into a world that was already made for us, for example Denmark, where I'm from. And that gives you, you know, your ethics, your language, your boundaries, um, what a family is, religion, all kinds of things that are non-negotiable.
You are born into that. And you know that all Danes are social democrats, Henrik, and that's because we're born and raised with that, right? It's sort of, you can't escape it. And, um, that's not in language; that is in social pressure and what's called being cultured. So you are being cultured into a specific way of thinking and morals and, um, behaviors and routines and rituals and things like that.
So those things are, uh, not just language; there are other things to study. And when we study things, for example, uh, dog owners, we would of course study what they say about their dogs, but we would spend a lot of time figuring out what's happening between a dog and a dog owner, which is super interesting, by the way: how we give them names, and how we make them almost human.
And we talk about him and her and all of those things, you know. And that is very, very difficult to do just through written language; you almost need to see it in action to understand what a dog owner is.
[00:11:23] Henrik Werdelin: I think that makes a lot of sense. And I think we had a guest on not too long ago that talked from a technical point of view about that specific thing, right?
It was, uh, you know, that the next generation of models, and the way that we get closer to AGI, is those kinds of things.
[00:11:37] Jeremy Utley: Em, embodied learning, right? Yeah. AGI doesn't smell or taste. No, I was actually... you mentioned, uh, Christian Keller, who joined us from Meta, and he was saying that's Yann LeCun's entire criticism of the paradigm that says large language models will get us to AGI: they don't know how something smells or tastes. They know what words we use to describe it, you know? Yeah. And I was actually thinking about, for example, being full. It's one thing to describe being full, or being hungry.
Mm-hmm. You can talk a lot about hunger. Right. But have you been hungry? Right? You can read a book about Abraham Lincoln. Do you know Abraham Lincoln? Right? Those are very different. Like, to know someone is very different than to study them. To know an experience is very different: you can read all about what it's like to be hungry, about desperation, but unless you've actually experienced hunger, it's a totally different kind of knowledge.
So I'm right there with you, Mikkel. Where I went immediately is this question of: to what end? Meaning, if you think about applied anthropology, you mentioned it's fascinating what we learn about dog owners and dog people, and my question is, to what end? And maybe just to level up one level, for folks like me who are new to this field, I'm gonna ask a very dumb question, which is: who hires an applied anthropologist?
Like who? Who do you do this work for and why do they hire you?
[00:13:06] Mikkel B. Rasmussen: So I've done around 250 studies over the last 25 years for a lot of big corporations. But I'll give you an example, which is Lego, the toy company that we worked with for over 20 years, and the problem we were solving there was that they were almost going bankrupt.
You know, maybe it's about 20 years ago, and they were going bankrupt because they had the wrong assumption about what success is and what growth is, which at that point was: it's just about the brand, and it's about creating more and more diverse products. And they got a new CEO, who's a smart guy, and he said, well, maybe we should figure out why kids play.
And if we are the best in the world at that, it would be a competitive advantage for us to know more about kids than any of our competitors. So they engaged us in this nine-month study to understand not, you know, how kids play, when they play, or where they play, but a much more interesting question, which is: why do kids play?
So think about that question if you have, do you have kids?
[00:14:17] Jeremy Utley: Yes. Yeah. Yeah.
[00:14:18] Mikkel B. Rasmussen: So think about it: why do they play? And it's actually a little bit of a mystery why they play, 'cause they don't have to. It's not like a utility; play does something. And that's one of the things where senses and understanding the embodiment of it is really, really important.
So they studied kids for nine months, and that led to a complete change of what they thought about why kids play. They thought kids played in order to be satisfied. They had a concept called instant gratification, which is: a great toy is something that satisfies you very, very fast.
So it blinks and makes sounds and has wonderful colors; things should be easy. Um, that, you know, PlayStation was taking it all at that time, that all play would become digital, and so on. And when we then went and did anthropology with kids, so lived with kids for, you know, a long period, and stayed with families and talked about what it's like to be a parent, and saw kids when they were playing, we discovered that the assumptions they had were completely wrong.
Kids actually have depth when they play. It's not instant gratification. They have the same interest for seven to nine years; they know everything about dinosaurs or football or basketball or, yeah. And so there's a great deal of depth and complexity and mastery to play. And that led to a different product roadmap for Lego, where they basically cut away around 70% of their products that weren't necessary,
and spent that money on doing toys that had a very clear proposition around mastery and depth and being engaged in something for a long time. And also the role of creativity in growing up, like really honing in on that, which wasn't really a thing before. Because we basically discovered that play is a way for a kid to discover themselves, you know?
And it's a way to socialize, and to discover: how hard can I hit somebody? What's the hierarchy in this school, in this playground, et cetera? And Lego wasn't built for that sort of play. So they did all sorts of things. For example, they built bigger boxes with much, much more complex building instructions.
They, um, built the first social network for kids. They created a movie that's all about that; the studies led to a movie, the first Lego movie, which is all about what it's like to be a kid in a world where adults want to control kids, and how you break free from that through play. So that's an example of a company that used anthropology.
[00:17:04] Henrik Werdelin: If you were to go to a company... let's say, like, I called you up and said, and maybe you would say this is a flawed question, but said: we think that AI will have an increasingly big impact on play in our world, and we are uncertain how we should figure out how this technology should play a role in our world.
How would you kind of compute that question and where would you apply the anthropology kind of toolbox? Like first you would need to figure out who you should study, would you not?
[00:17:45] Mikkel B. Rasmussen: Yes, and I would have to figure out a little more about why you see that as a problem. Is it really a problem?
And what is the problem, and what are your assumptions? So all great insights, all great insights in my mind, come from a gap between how you think the world is and what it really is, what reality is. So in all companies there are these assumptions you build your business around, for example instant gratification in the case of Lego, and then there's reality, which is often the opposite. And it's that gap.
So the first thing I'd do is talk to the company about what they assume AI will do. What is it they fear? How do they think it'll play out? And then I would actually go and study how kids use AI today. What is it for? Particularly things like AI agents, I think, are really interesting to study, because kids, and I know this because I've studied it a little bit,
think about AI in a very, very different way than I, for example, do. It's a much more, um, natural extension of their world than it is for me. So it's not another thing; it's a very big part of being. And then I'd study what they actually do, and how it's connected to creativity, play, imagination,
um, how it unfolds: activities like writing, using your hands; how it, you know, affects those things, because we know play is very, very embodied. So play is very much something you do with your hands; it's not just a mind game. And AI isn't there yet. But maybe, you know, there are many ways that kids, I mean, what we find always with kids is they change the rules.
I mean, I once did a study of a digital playground. They made this digital, physical playground with nine stations, and then you were supposed, as a kid, to go to station one, and then station two, and station three, and do things, right? And they had a system to calculate it, and it should take around half an hour to go through all nine stations.
And it did, like, the first day and the second day. But the third day there was a kid that did it in 25 seconds. They couldn't figure out why, and it turned out that the kids collaborated. So they figured out what each station needed to solve the task; they stood there, nine kids, and said, 1, 2, 3, all go, boom: that's 25 seconds.
Right? And that's what childhood, I think, and playfulness and creativity are really like, and kids are just amazing at that. So I'd be careful having too many assumptions around how AI is dangerous or destroying things, and I'd be much more interested in figuring out how it is used as a companion to kids in their play.
I think that'd be really interesting to study.
[00:20:47] Jeremy Utley: I want to go back to this idea of insight, and I will unabashedly proclaim: I don't care if we talk about AI at all in this conversation, um, because I think your expertise is far more interesting. And I do think Henrik and I may actually have some AI-powered ideas that we could contribute if we understand your world better.
But I want to come back. I, I wrote this down. You said all great insights come from a gap between how I think the world is and how it actually is. And I, I just think that's so beautiful and so profound and so important, and I'd love to give you this word surprise and ask you to talk about in your practice of applied anthropology, what is the role of surprise?
What role does that play, and what do you do with surprises?
[00:21:32] Mikkel B. Rasmussen: That is an amazing question. That is a really good question. I wrote a book about that, called The Moment of Clarity, about that moment. When you do studies like we do, where you study 90 kids over, you know, nine months, and you get millions of data points, like thousands of photos and what's called field notes and conversations and videos of kids, you look at all of that, and you sort of bombard your brain with all of it.
There is a point where the data emerges into a sort of pattern. There are patterns, uh, in the data that show you what's going on. And then there's a moment where all of a sudden you see it. You see it, if you're smart enough. And, um, I've seen CEOs do this several times, where all of a sudden they see what's going on.
They're surprised, like you said, and it's like a crazy moment, because from then on, it's fairly easy to see what we should do about it. When the CEO of Lego discovered that play is about mastery and depth and sociality, it was fairly easy for him to see, you know, okay, we need to make the boxes bigger.
Like, that's not super creative. But the surprise moment was very difficult. And another one, Jeremy, that I think is so interesting, that I've observed also in myself, is that I have never gotten to that moment of surprise without pain. It has never happened without sleepless nights, doubting myself, struggling with: can that be true?
How does this connect to that? And, uh, there are so many people that do research that say, oh, so it's like a linear process: first you get the data, then you find the patterns, and then you have your conclusions. I think it couldn't be further from the truth. It's almost like doing a painting, or writing a book, or probably making a film, or something like that.
With all of this, there's a moment where you doubt: will this project ever succeed?
[00:23:49] Jeremy Utley: Yeah, yeah, yeah. One of my favorite little books on creativity is by an old advertising exec named, I think, James Webb Young. It's like the best $5 you can spend on Amazon. It's called A Technique for Producing Ideas, and one of his critical ingredients or steps, if you will, is hopelessness.
Or at least that's my read of it. I don't think he actually uses that word, but basically he talks about how you have to, to your point, bombard your brain so much with information, and then you have to get to the point, you say pain, I can't remember what word he uses, but my interpretation of it is hopelessness: you have to feel it's impossible.
Then he says, you know what you do next? You go to the theater. You do something totally unrelated, which is just brilliant for another reason. But the point is, I think most problem solvers or creatives, you know, when they encounter the roadblock, they think that's evidence: I'm doing the wrong thing.
And what you just suggested is it's an essential precondition to insight. And that to me is so powerful to realize: the pain, just like the pain of exercise is essential to health or fitness, the pain in the problem-solving process... you can't deliver truly breakthrough products and services without that pain.
[00:25:12] Mikkel B. Rasmussen: No. And you know, what's really important with this anthropology thing is that you're not doing it to describe what's going on in itself. You're doing it to think newly about something, to think about something in a new way, right? So I have teams of people with PhDs, anthropologists, that go out and do this field work.
If they come back and say, oh, it's fairly easy, we think we've got the insights already, then I get really nervous. Then I know this is probably gonna be superficial, banal, uninteresting. I'm much more interested in them coming back and saying, it's a really hard problem to crack, because there are these complexities and all of that.
Then, then I know this is gonna be great.
[00:26:02] Jeremy Utley: Wow. "If it's easy, I get nervous." It's such a great leadership heuristic: I'm concerned if my team says this is too easy.
[00:26:12] Henrik Werdelin: One thing that we talk a lot about on this podcast is: what is the thing that humans will do when AI becomes better and better? And different people have different vocabulary around it, and it's all about humanity.
And some talk about taste, some people talk about originality or creativity or whatever it is. You have talked about, and I know it's mentioned also in your colleague Christian Madsbjerg's book Sensemaking, and this was about big data, but you talked about thick data in that book.
[00:26:47] Jeremy Utley: Yeah.
[00:26:47] Henrik Werdelin: And as I read it through an AI lens, it struck me as kind of a pre-AI articulation of the things that people are talking about when it comes to what AI currently is not very good at.
And some of the examples that are mentioned in the book, as I recall them, are things like: "The mood in the office is kind of off," or, yeah, "The party is just getting started." Yeah. And those are things that are, you know, innately human, and we will all understand what people mean when they say that, but it's probably difficult for, for example, a model to just read a transcript and then extract that from the conversation.
So with all that in mind, when you think of people saying, yes, there needs to be a human in the loop because there are things that AI won't be good at: do you, one, buy that? And two, if you buy that, how would you describe or articulate what those things are?
[00:27:51] Mikkel B. Rasmussen: I think it's an evolving field. You know, I'm very impressed. I use AI personally to do what's called pattern recognition.
We used to do that on walls: we would hang pictures of people and field notes and try to find connections, and you'd spend maybe three or four weeks making sense of it. And now I can put it into a model and get a rough description of what the 12, 16 patterns in the data are. Not the insights, not the aha surprise,
'cause it can't create that; it's mostly bullshit when it does it, right? But I think it's all getting better. So, just one note: I think it's super interesting, when you're not an engineer, not from the AI world, to read about AI. Because as an anthropologist I would say AI is a human-constructed phenomenon.
It's something we created, it's not made by a machine, and we did it on purpose, right? And often we talk about AI as something that came from space, you know? It didn't. It's something that's been evolving for years and years and years, and there's a culture around it. So that's important.
And then I think it's interesting to read, you know, I dunno, the Wall Street Journal, where the same paper, on the same day, says this is gonna displace all jobs in the world, and then on page three it'll say it's a bubble that will burst soon. And you kind of go, I kind of go: that's interesting. It can't be both.
Right? And I think that model of AI is very destructive, that idea of discussing whether it's a bubble or whether it's completely disrupting everything. I think it's a very destructive way of thinking about AI. I think it's much more interesting to look at AI as synergistic to human skill and human capability, meaning something that makes us greater and smarter.
But it doesn't replace us.
[00:29:56] Henrik Werdelin: But why not?
[00:29:57] Mikkel B. Rasmussen: Because we are humans, and AI is a machine, and a machine does not have a body. It does not have contextual understanding. It's not born with morals. It takes 18 years to train a human being; it's very, very complex, and, um, I think the whole idea...
[00:30:24] Henrik Werdelin: I'm just thinking: so you can't make a baby with three women in three months.
Exactly,
[00:30:31] Mikkel B. Rasmussen: exactly. Just think about how long it takes to make a baby. There's a reason for it, but that's another story. But I think there's a general tendency to what's called anthropomorphizing, I think it's called, when you assume that the machine you're building is human. You give it a name, and you call it "it," and "him" and "her," and so on.
I think that's a mindset that will probably disappear over time, when AI becomes a more natural part of our world. And then I don't think we'll even discuss, you know, what's human versus machine, because it'll be very obvious,
[00:31:11] Henrik Werdelin: a little bit like, I think, some people on the podcast have described it: as electricity.
Yeah. And we don't talk about how it makes the light come on in my room, right? It's just like, I turn on the lights, and, right, it's there.
[00:31:26] Jeremy Utley: Can we talk for a second about the tension that you described? You used to spend three weeks doing, you know, I picture like a crime scene investigation, you know?
Yeah. Everything's up on the wall, you're looking for patterns. You say now AI can give you the 12 to 15 kind of patterns, but you said, and I quote, it "can't do the surprise thing." One, I would just submit to you as a fellow practitioner and student: anytime we find ourselves saying "it can't," I have learned myself to say, "I haven't taught it how to blank."
And so I would just offer that as a gift and as a paradigm. And I wonder what you, uniquely, with your expertise, might be able to do if you thought in terms of: could I train an AI, like I train a junior applied anthropologist, to be able to do this thing that I think it can't do? So that's kind of just a gift to you.
Yeah. And my question is, talk for a second about the difference right now. Like what is the surprise thing in you and how is that different from what it seems right now the AI can't do? Because one, I'm just interested in understanding, and two, I think if you can articulate it, that might be part of the key to training an AI to do this interesting thing.
[00:32:42] Mikkel B. Rasmussen: Yes, I think there, I mean, we could go quite far with that. Um, so, you know, we have a, uh, Christian and I, the other founder I work with, we have a pretty big project called Anthropology Without Anthropologists, which is about what you are talking about, like using AI as much as possible to gain insight on human activity and uh, understand, uh, social situations, et cetera.
That includes using video. So we are experimenting with body cameras right now; we are attaching body cameras to people that have a specific disease, without an anthropologist being in the room, without a topic guide, without questions. Just raw video. Because the problem with anthropologists is that they're in the room, and they're biased, and they're trained.
Most of their theories are Marxist, really, to be honest. It's about power and struggle and dominance and class, and, you know, it's quite good for analysis, but sometimes it's biased, which I think we've seen plenty of the last couple of years. And so we would love to have an unbiased, clean view of what people do, and how they see the world, and how they connect, and so on.
I think that's a fairly ambitious project, and, uh, I'm super excited about it, because I hope I'll get super surprised and see things that, you know, I couldn't see before. In order to train an AI to get to this moment of clarity or moment of insight, you'd have to understand the assumptions, the problem we are starting with.
So you'd have to understand...
[00:34:31] Jeremy Utley: And your assumptions. Yes. And, like, with surprise, there's always a question of: relative to what? And just even as you're talking about surprise, I realize it's relative to my own expectations and understanding, going back to your definition of insight, right?
[00:34:44] Mikkel B. Rasmussen: Yeah.
[00:34:44] Jeremy Utley: There's a gap between what I think the world is, yes, and what it actually is. But the important thing there is the "I." And the reason perhaps that AI can't deliver the surprise, I'm just riffing right now, but the reason it can't deliver the same surprises is it has a poor approximation of "I." And then you go, okay, well, if the goal is actually to
ask AI to interview you... for example, if I'm you, I'm just gonna role-play as you for a moment: Claude or ChatGPT, interview me as an anthropologist about my understanding of this space and all of my assumptions as an anthropologist, seeking to understand an anthropologist's kind of priors, right?
Interview me, ask me every question, and then formulate a psychological profile of me as the person reviewing this data. Now review the data as me: tell me what's surprising to me. I wonder whether that would start to get you there, 'cause what's interesting is you're saying AI is good at doing the pattern matching.
What it's not good at doing is mimicking you. But that's just because you haven't thoroughly thought about how to get "me" in there. Anyway, I'm just riffing, but I think there's something interesting there.
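What Jeremy is riffing on amounts to a two-pass prompt chain: first elicit the researcher's priors, then review the field data through that profile and keep only what contradicts it. A rough, untested sketch of that idea, assuming the OpenAI Python client; the model name, prompt wording, and placeholder inputs are all hypothetical:

```python
# Illustrative two-pass sketch: capture the researcher's "I", then
# review field data as that researcher and flag what would surprise
# them. Model name, prompts, and placeholder data are hypothetical.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

def chat(system: str, user: str) -> str:
    """One system+user call, returning the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

# Pass 1: build a profile of the researcher's assumptions from a
# transcript of the model interviewing them (placeholder here).
interview_transcript = "..."  # the researcher's answers, gathered earlier
profile = chat(
    "From this interview with an anthropologist, summarize their "
    "assumptions, expectations, and likely blind spots about how "
    "kids play. Be concrete.",
    interview_transcript,
)

# Pass 2: read the field data through that profile, keeping only the
# findings that contradict it, i.e. what would surprise this person.
field_notes = "..."  # raw field notes, photo captions, video summaries
surprises = chat(
    "Adopt the point of view of the researcher profiled below.\n"
    f"{profile}\n"
    "Review the field data you are given and list only findings that "
    "contradict the researcher's stated assumptions.",
    field_notes,
)
print(surprises)
```

Whether the second pass yields a genuine "aha" or, as Mikkel warns, mostly bullshit, is an empirical question; the sketch only shows where the "I" would plug in.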
[00:35:54] Mikkel B. Rasmussen: Yeah, absolutely. But my point was just, like, let's say you are a company. Then you could imagine, zoom, you'd be able to say: go and talk to 2,000 employees with an AI audio interview, the kind of conversation thing that you can do with ChatGPT, and record all the transcripts. Then take everything we've ever written about the company, all brand guides, all yearly accounts, all, you know, internal meetings, everything we have, and feed it into it.
And then: tell me what are our three most important assumptions about our customers, for example, or our business, or the world, or the market. And then you would, uh, go and study, you know, people in their context, the way we talked about. Um, but again, not so much by talking to them or interviewing them, but observing them,
trying to get into their shoes as much as possible, including the body, how they see the world, how, you know, how they're cultured, et cetera. I think we can get very close to that. And then you connect those two things: where are the biggest gaps?
[00:37:14] Henrik Werdelin: That'll be so fascinating, huh? Yeah. And there'll probably be big gaps.
[00:37:21] Mikkel B. Rasmussen: Yeah.
[00:37:21] Henrik Werdelin: I mean, yeah.
[00:37:23] Mikkel B. Rasmussen: I think there's one thing that's important to say, which is that part of doing what we do, what I do, is also an embodied process. It really is. So when I work with a CEO or leadership, it's important that they see the field, or the people that we are studying, themselves, and that they get an embodied sense of surprise.
It's not just intellectual
[00:37:49] Jeremy Utley: That is... so here's an interesting kind of experiment. I've done this in a number of my entrepreneurship classes and programs. I'll have people brainstorm by themselves, and then I'll have people work with AI to brainstorm, and they generate lots more, you know, volume and variation and things like that.
And then I have them select what they think the highest potential ideas are. It's not all the time, but the vast majority of the time people select one of the ideas they came up with. They rarely select an AI driven idea. Interesting. Even though in separate laboratory experiments, according to kind of third party evaluators, AI's ideas are almost always better from an objective perspective than the human's idea.
This is kind of well established now in the literature: that AI is capable of generating as good, if not better, ideas than even experts in particular fields. But the problem is when the human is the one selecting. If the human evaluator knows that it's an AI versus a human idea, they tend to choose the human idea.
They tend to overweight the human. And because of our own kind of familiarity with ourselves, we overweight ourselves. It's an interesting question, going back to this question of a senior leader having to have the embodied epiphany. I'm just riffing with you out loud again, but I wonder if the question is actually: how do we leverage AI to facilitate the human having the embodied epiphany?
Yeah. And whereas... so, like, for example, you know, there's the IKEA effect, right? Something that you put a little bit of work into, you value more. So here's an idea: what if you get an AI to generate 95% of the kind of requisite conceptual material, and then you give it to the CEO to then say, what would we call that?
And then if they name it, I'm just making this up, right, but if they name it, would they feel that embodied sense of epiphany? You've taken out 95% of the work, but you've learned that if I give them a hundred percent fully baked insight, they reject it.
And if I give them 95% and they've gotta, like, turn the screws on the chair legs, they feel like: that's mine. I don't know, I'm just riffing, but to me it's an interesting question.
[00:40:10] Mikkel B. Rasmussen: But it's sort of the feeling: that's my idea. Like, it's my insight.
[00:40:17] Jeremy Utley: But what does it take to get to the point that it's your idea?
Yeah. And my personal humble point of view is there's probably a lot of wiggle room to figure that out. To me, that's a really fascinating question to actually answer.
[00:40:27] Mikkel B. Rasmussen: I can give you, empirically, a very good example. When we did that Lego study almost 20 years ago, one of the things that fascinated them was this kid in Hamburg, in Germany.
He was a skateboarder, he was 11 years old, and we were talking about, you know, what are the rules of skateboarding? How do you do it? And he was talking about the hierarchy of skateboarders, and when you are what's called a king, which is when you can teach other kids how to do a kickflip, for example.
And it's a sort of hierarchy that's hidden. It's not written anywhere; it's in human nature. But every kid knows it: like, that kid is a king. And then he had a pair of shoes, and we took a picture of those shoes, and I asked him, you know, what do you think those shoes are worth? And he said, probably a million euros.
So, why? And then he told me, well, they are torn in the right places to show I'm a king. Uh, so it just, it was an insight. And when you tell the story to most people and you see the picture, you see, oh, there's so much depth to play, and to the small hidden symbols that kids have to show each other hierarchy, and the pure interest they have in something year after year.
The opposite of instant gratification, you know, this instant thing. That you have a shoe that's worn in the right place... that picture of that shoe still hangs in the lobby of Lego's headquarters. That's cool. 'Cause that was a moment of surprise, right? They went, what? And that became the symbol. Yeah.
Beautiful. When you mention these things... that's beautiful. Beautiful.
[00:42:05] Henrik Werdelin: I'm curious what your thoughts are on synthetic data. Like, you said you were studying 90 kids for nine months. Let's assume that I have some models running, right, and I ask them to come up with 90 different kids' personalities, and then basically have them render, in text and image and voice, the things that they would say.
And over those nine months, would there almost per definition not be an insight there, because humanity would need to be injected into it? Or do you think there might be an insight, but obviously it'll be synthetic, because it was not rendered by real humans? Like, what happens to all these studies when suddenly they could be rendered?
[00:42:58] Mikkel B. Rasmussen: I believe that already today, the technology is there to do quite a lot with synthetic data and understanding, uh, your customers and stuff like that. I do think that the insights you get from that are a very particular set of insights: it's small problems, not big problems, you can solve with that. But there are plenty of small problems.
So it's things like pricing and, uh, things you would do A/B testing on, basically, you could do with synthetic data. I think in a couple of years it'll probably be improved, when we get what you talked about before, what you call thick data or thick description. Which is: if you take the eyebrow, it has millions of different ways of moving.
Each of them signals anger, love, disgust, surprise, and you can instantly see it, because we are human beings, right? It's super hard for an AI today to recognize just the eyebrow. And then take the mouth, and the smell, sound, history, upbringing, language, and you put that all together and you get a human being.
And I don't think we are, at the moment, in a place where we have anything; we are not close to even 1% of the mystery of what a human being is, in terms of training an AI to do that. That doesn't mean that we can't get there, but we need the body. Like, we need senses: you need eyes, smell, sound, touch, and you need an understanding of social connections.
So what does it feel like to be a family? It's super hard to describe. And once we have that, I think you can probably build some synthetic models of particular people, let's say golf players, and predict how they would react to quite a number of things. You know, what if we changed the rules of golf, what would happen?
Like, you don't have to ask people; it would be able to predict that pretty precisely. And I think you can do the same thing with healthcare in a number of ways. You could do it with traffic, you could do it with public policy. So I think there are a lot of cases where synthetic models of human behavior and human culture can help us progress faster and understand people faster, and also understand things that would just take forever for human beings to understand.
Because we can't put the patterns together, right? And a good example is, I mean, most companies, I think most listeners that work in a company know that they have over the years built what they would call insights. Could be marketing research, could be studies, could be surveys, could be interviews, et cetera.
And they've probably done it a thousand times, but the data doesn't connect. It's like, it's there and there and there, you know. One thing that I think is super interesting with synthetic data is: what if a company could take everything it knows about its customers and build it into a synthetic model that you could then improve over time?
Right? I think that is quite interesting, where you have what's called, you know, multimodal data: video, audio, texts, sales reports, all kinds of things, and you put it together and make sense of it. I think there's a lot of potential there. And you know, I don't know how many are doing that, but I haven't seen it.
[00:46:45] Jeremy Utley: I know a couple of folks who are beginning to test the fringes. Maybe folks we should have on the podcast, Henrik, we can talk about, and Mikkel, I'm happy to introduce you as well. To me, thinking practically, what are the practical implications of that? One thing I would say is that the rate of experimentation is somewhat limited by, you know, test panels and market research and things like that.
And if a person or an organization could leverage synthetic data to accelerate the rate of experimentation... innovation is basically rate-limited by how many experiments we can deploy. Yeah, and synthetic data represents an incredible opportunity to accelerate experimentation and therefore, theoretically, accelerate innovation. TBD whether that can be realized, but I know some people who make very strong claims that it can be, so it'd be interesting to study.
[00:47:34] Mikkel B. Rasmussen: Yeah, but also, I mean, today we are doing experiments on AI interviews, like using real voice and conversation to talk with, for example, doctors about disease or healthcare and so on. And what's surprising to me is, we've tested being interviewed by a human being and being interviewed by an AI, and if you tell people it's an AI that's going to interview you, and they know the premise of it, it's actually better data than if a human being does it.
It's better at improvising answers. You can guide it and say: please ask open questions, please ask follow-up questions that build on the research and on what they're talking about. And you know, my anthropologists can't do that. They can't all of a sudden know everything about diabetes, you know?
[00:48:26] Jeremy Utley: Wow. Right, right.
[00:48:28] Mikkel B. Rasmussen: And another thing is very practically. You know when you're doing an interview and you need to do a test, let's say you want to test a concept of something, a prototype, you'd have to call people, figure out when are you available. You have to enable a video connection. You have. I mean, there are all these things and it take weeks.
Here is just press a link. So that means in 24 hours you could test something on 200 people. Pretty deep. And I'm, I think. People find it, like the people that are being tested find it entertaining and interesting and they open up, what's the cost of that? Like, that's great. So I think there's a whole, I mean, already now there's a whole lot of potential where we don't have to build synthetic models, but use the power of some of these AI tools like audio and video, um, to do things that used to take like forever, uh, and cost a fortune.
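The guided interviewer Mikkel describes can be approximated with a system prompt that enforces open and follow-up questions, run as a conversation loop. A minimal, text-only sketch, assuming the OpenAI Python client; his team's actual voice setup and prompt wording aren't specified, so everything below is illustrative:

```python
# Illustrative sketch of a guided AI interviewer: a system prompt that
# enforces open and follow-up questions, run as a plain text loop.
# Model name and prompt wording are hypothetical; the real setup
# described in the episode uses voice, which is omitted here.
from openai import OpenAI

client = OpenAI()

INTERVIEWER_PROMPT = (
    "You are interviewing a doctor about treating diabetes. Ask one "
    "question at a time. Ask open questions, never yes/no questions. "
    "Ask follow-up questions that build on what the participant just "
    "said and on established research about the condition."
)

messages = [{"role": "system", "content": INTERVIEWER_PROMPT}]

# Each turn: the model asks a question, the participant answers, and
# the growing message list becomes the interview transcript.
for _ in range(3):  # a few turns, for illustration
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = response.choices[0].message.content
    print("Interviewer:", question)
    messages.append({"role": "assistant", "content": question})
    answer = input("Participant: ")
    messages.append({"role": "user", "content": answer})
```

The loop is synchronous and text-based for simplicity; a production setup would swap input() for a voice interface and log the full transcript for analysis.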
[00:49:25] Henrik Werdelin: Hey, Mikkel, thank you so much. I find that every time I meet Mikkel I kind of leave the conversation with a ton of new thoughts. I hope you enjoyed, you know, having a conversation with him too. Oh, thank you.
[00:49:40] Jeremy Utley: I mean, it's so, so rich. I love talking to experts. Actually, in between recording that conversation and, right now, recording our debrief, Henrik and I were just talking about the power of
folks whose expertise lies outside of AI. It, to me, was incredible, and I realized halfway through that conversation, Henrik, we should have been recording it, 'cause you know, we're kind of learning and evolving live before our audience. But I think it's so gratifying to be learning from world-class experts like Mikkel.
I can think of many others that we have as well. But it was really fun. I mean, to me, his whole definition of insight I think is so profound. You know, the gap between what or how we think the world is and how it actually is. The idea of surprises, the, the moment that you see it, the embodiedness of that moment.
I thought the notion of pain as a prerequisite to solution and insight was really fabulous. So to me, there's something for everybody here, not just about AI, but definitely far beyond: about human flourishing, human creativity, problem solving. And then of course, I loved, he said at one point, I wrote it down:
We don't even understand 1% of what it means to be human and what it means to be a part of a family. I thought, wow, this is just, there's so much rich stuff there.
[00:50:53] Henrik Werdelin: Yeah. I think a lot of the conversations lately that we've had on this podcast are about all these things that we don't understand about humans, and how AI is this magnifying glass,
'cause we kind of have to tell a robot all these things that we take for granted. Right? You know, we mention those things, like "the party's just getting started," or "the mood is kind of off," or "he seems to be a little bit under the weather," whatever it is. Mm-hmm. We all know what that means, and I think now we are trying to understand how we explain this to robots.
We have to understand it ourselves, which is just fascinating. So I, you know, just completely agree that it'll be fun, increasingly, in some of these podcasts, to talk to experts that know something very specific, and then ask them about AI, and really start to kind of evolve our thinking about AI. Not necessarily just talking about AI, but talking about the world in general through the lens of AI.
[00:51:55] Jeremy Utley: I mean, what's interesting, what I see again and again and again, is: the fact that someone's an expert in some area means,
in some ways, they're uniquely qualified to comment on AI, but in other ways they're just as prone to misunderstandings and bias as the rest. And I think that, you know, it's generally true that humans want their expertise to be unique to them and want it to be irreplaceable. And I'm always skeptical when I hear an expert say, but it can't do this really special thing that I do, yet.
Mm-hmm. And you know, you heard me challenge him, I think, hopefully, respectfully. But I hear this from so many experts and so many people: you know, it can do all these other things, but this unique thing that I do, it can't do it yet. And I really think that paradigm shift, you know, from assuming it can't do it to taking responsibility and saying, I haven't
thought about training it, or I haven't sufficiently trained it. At the very least, it's a really powerful reframe to improve the performance of models. Um, but the reality is, AI will perform to your expectations, and if you have low expectations, it will perform poorly. Not because it can't perform well, but because you don't want it to.
Yeah. And so to me, it's almost like a Rorschach or whatever, right? Interesting. It ends up being a fulfillment of what you see and what you expect. And I just think, like, one message that I find myself reiterating again and again and again is some version of: expect more. Raise your expectations.
Assume: what if it could do it? And by the way, if you think about a world of disruption and startups and things like that, imagine for a moment there's a team of hungry, scrappy entrepreneurs who are spending all of their energy trying to get an AI to do that thing.
The default isn't the status quo, and the default isn't that nothing changes. The default is everything is changing, and you get to be a cause in the matter and contribute to it, or get blindsided by it, potentially.
[00:53:54] Henrik Werdelin: Yeah. Meanwhile, I think the world of anthropology is probably a pretty conservative kind of group of folks, right?
And probably, like, very human-based. Sure. And knowing Christian, I mean...
[00:54:05] Jeremy Utley: Anthropology Without Anthropologists is, like, that's such a profound frame, even knowing some of the projects they're working
[00:54:12] Henrik Werdelin: on. Like, they're definitely, I think, very ambitious with, uh, what they can use AI for.
So I think you're right, though, that you have to really be playful with what you want to use it for, to kind of start to see its limits. But obviously there are limits too. I also, um, I was fascinated... I was kind of ready for him to poo-poo all over synthetic data. I would imagine somebody spending his whole career looking at people would be like, well, I have to look at people.
And on the contrary, it's like, oh yeah, I think there are a lot of, maybe, small problems to start with, but over time, bigger problems. I thought that was fascinating too.
[00:54:50] Jeremy Utley: And even his comment that actually AI interviewers are better than his own, you know, human anthropologists. I was not expecting him to say that.
I feel like we could have an entire follow-up conversation just on that topic. That's awesome. Maybe you should get his partner, Christian, to continue the conversation. Oh, that's a good idea. Can I make a request? Can I make a request live on the air? I don't know how much their expertise overlaps, but if they're working together, I think this conversation just stimulated so much more curiosity in me.
I would love the chance to continue.
[00:55:21] Henrik Werdelin: No, I'd love to get Christian. I'm a big fanboy of his partner, so, uh, I'll go and do my appropriate begging. All right,
[00:55:28] Jeremy Utley: Sounds good. Lemme know if you need to take my reputation for a spin. Let's go.
[00:55:33] Henrik Werdelin: Okay, Jeremy. Alright, I think with that we will wrap up, and we will try to beg people to share this episode with somebody that they think might not really think about AI in, like, classic terms,
and then want them to kind of, uh, yeah, maybe go, like, huh, what are you doing? You're doing something there.
[00:55:54] Jeremy Utley: I wanna make sure you get the hashtag, the code word. The code word is "without anthropologists."
[00:56:00] Henrik Werdelin: Okay, so that's the code. If you hear it all the way to the end, send us a note and we'll send you a gift.
We'll send you one. We'll send you one of our books each. There you go. There you go.
[00:56:11] Jeremy Utley: Each? Wow, can we not alternate, for crying out loud? Okay, that's a good idea. We'll do it. We'll do it. Send us a note. We'll do it. And with that, bye-bye. Bye-bye.