In this episode, we engage in a thought-provoking conversation with Stephen Kosslyn, former Harvard professor and dean, who has spent decades at the forefront of psychology, neurology, and educational sciences. Kosslyn shares his journey from academia to leading AI-driven educational startups, highlighting the critical differences between passive and active learning. He emphasizes the importance of 'learning by using'—a method where knowledge is applied in real-world, open-ended situations, which leads to better retention and creative application.
Kosslyn discusses how AI can serve as a cognitive amplifier, helping learners by storing vast amounts of information and enhancing critical and creative thinking. He also addresses the limitations of AI in handling context-specific, open-ended problems, which humans still navigate better. Drawing from his extensive experience, Kosslyn shares insights from his upcoming book, 'Learning to Flourish in the Age of AI,' set to be released in December. He underscores the enduring value of the humanities in an AI-driven world. This episode offers deep insights into the future of learning, where AI and human creativity intersect, fostering a new era of education.
00:00 Introduction to Stephen Kosslyn and His Career
01:01 Active Learning
02:08 Retention in Learning
03:40 AI's Impact on Learning
04:18 Early AI Experiences
09:56 Cognitive Amplifier Loop
15:09 AI in Learning: Challenges & Potential
23:18 AI Personalization Complexity
28:45 Transfer Problem in Learning
30:55 Problem-Solving & Cognitive Limits
31:58 Evolutionary Learning & AI in Creativity
35:45 AI Context Switching Challenges
37:34 Student Motivation
39:50 Innovative Teaching Methods
42:22 Intrinsic vs. Extrinsic Motivation
54:11 Humanities in Learning
55:36 Final Thoughts & Reflections
📜 Read the transcript for this episode: Transcript of How AI Transforms Learning - with former Harvard professor and dean Stephen Kosslyn
[00:00:00] Stephen: I'm Stephen Kosslyn. I was on the Harvard faculty for three decades. I was chair of the psychology department there. I was dean of social sciences. I was a co-director of the Mind of the Market Lab at Harvard Business School. I was also on staff in the neurology department of Massachusetts General Hospital. I left Harvard to go back to Stanford, where I'd done my graduate work, to run the Center for Advanced Study in the Behavioral Sciences, which didn't work out as well as I'd hoped it would. So I left Stanford to join a startup company called Minerva, which is a startup university. I was their founding dean and chief academic officer for about six years, which was fantastic: to be able to take a step back and start with a blank slate and decide what's important and how to teach effectively. From there, I started another startup for working adults, teaching them skills and knowledge that would not be easily automated. And now I run a company called Active Learning Sciences, which uses AI primarily to develop educational programs all over the world.
[00:01:01] Jeremy: I know that today you've been diving deep into AI, and I'd love for folks to appreciate a little bit of the background context that you bring to this conversation, since you're an expert in learning and specifically this idea of active learning. So just for folks who maybe aren't familiar with that concept, could you do kind of a 60-second introduction before we then get to the topic at hand, which is AI-accelerated, abetted, or inhibited active learning of some kind?
[00:01:28] Stephen: Yeah, so a lot of people talk about active learning as learning by doing, which I think is not accurate. I think a better way to think of it is learning by using. So you've got some kind of a goal, which in formal settings is typically characterized by a learning objective, and you've got material that you need to use in a specific way to achieve that objective. So active learning is about using material. It might be in a debate or a problem-solving session or a role-playing game; there are many different ways to do it. But the point is, you're not just letting the information sit in your head. You're making it come alive by using it in some way.
[00:02:07] Jeremy: And why is this important? I mean, just for the uninitiated, what's the difference between passive learning and active learning in terms of learning outcomes?
[00:02:15] Stephen: Well, a lot of passive learning is actually a misnomer. There's no such thing; it didn't happen. If you passively sat there, you may have had the information pass through you temporarily, but it didn't stick. And if it did stick, it didn't get absorbed and integrated in a way that you can actually use productively and creatively. So in my view, the most important thing about learning is to be able to use it in open-ended situations moving forward. And if you just passively encounter information, it's unlikely that's going to be the case.
[00:02:49] Jeremy: It's the classic case of listening to a podcast or something, and somebody says, well, what did they talk about? And you go, uh, and you realize: I can't remember. I don't know if you can clarify this. I heard a statistic sometime that something like 80 percent of knowledge is forgotten within 24 hours of the quote-unquote learning moment.
[00:03:09] Stephen: Yeah, I've seen 70%, but we don't have to quibble over the details. It's an enormous amount. If you wait about three days, it's about 10%; that's a figure I've seen for what's retained from a lecture, from passively sitting there listening to it. Yeah.
[00:03:26] Jeremy: And so the goal, obviously, of active learning is for the learner to retain more of what they've learned. And your point is not just so that it stays in your head but, I love your phrase, so that you can apply it in open-ended situations.
[00:03:40] Stephen: Yeah. In fact, I just finished writing a book, which I turned in; it's coming out in December, apparently. It's called Learning to Flourish in the Age of AI, and it's got two central themes. One of them is that one of the things humans are good at, and are likely to stay relatively good at compared to AI, is the ability to respond in open-ended situations where context must be taken into account. AIs are not as good as humans at this, at present anyway, and probably won't be for the foreseeable future.
[00:04:18] Henrik: We tend to start by asking how you got introduced to AI, and often kind of like the moment. So maybe we start there.
[00:04:31] Stephen: Well, in the early 90s, I was doing neural net modeling with simple three-layer backpropagation-based feedforward networks. So I was kind of doing this a long time ago, and I wrote a book with Olivier Koenig called Wet Mind, which was based on the idea (this is the early 90s still) of being inspired by the brain to think about what the brain does. So you think of the mind as what the brain does, which is how I think about it; mostly that's what the brain does (it does other things too, but that's a major thing for human brains). Being inspired by the brain is an old idea, but it wasn't obvious what to do with that idea. And when neural nets started being developed, it really opened up a whole new way of looking at things. So that was a long time ago. And then when I found out, in December of 2022 I think, about ChatGPT 3.5, I mucked around with it, and I discovered that I could ask it to pause and wait for me to do something before it responded. Wow. That one ability just opened up all kinds of things you could do with it.
[00:05:41] Henrik: I seem to remember another book with that thesis from a few years back, On Intelligence? Was that in the same space? Did you ever read that?
[00:05:50] Stephen: I don't know. Who wrote it?
[00:05:52] Henrik: It was Hawkins. The one I remember was Jeff Hawkins, who I think created the PalmPilot, right?
[00:05:59] Stephen: Yeah, he did. Yeah. The Treo. Yeah. No, I didn't read it. I don't think I've even heard about it.
[00:06:03] Henrik: Because he wrote this book, and that was the first time I got introduced to this idea that one way being used at the time to build AI was potentially flawed, and that the more you could mimic the brain and create neural networks, the more that yielded. And so when you got access to OpenAI, was that part of the 2020, 2021 kind of phase?
[00:06:28] Stephen: No, it was 2022, at the end. I was just an ordinary person. A former colleague from when I worked at Minerva University, Jonathan Katzman, who is now running AI and education for YouTube, pulled out his phone and showed it to me. At first, I thought it was just kind of a fancy Google, so I didn't think much of it. It was only later that I figured out it was a lot more, and in some ways less, than a fancy Google: the whole hallucination issue.
[00:07:03] Jeremy: Why was 'wait' so important to you?
[00:07:05] Stephen: Because you could set up branching activities where, depending on what they said, it would do different things in response. And then, when I figured out it was actually quite good at responding to qualitative input, if you combine those, you can start doing active learning.
[00:07:24] Jeremy: And so you're in December '22, ChatGPT 3.5, and you realize all of a sudden we can start branching. I think in the introduction to this book, actually, you mentioned that you had been intending to write a different book. When did you realize that your life was now derailed because of this new potentiality?
[00:07:45] Stephen: I'm not sure I'd say derailed; maybe rerailed, if that's a word. Yeah. So it's interesting. There's Jonathan Katzman, who was the guy on the West Coast, and there's John Katzman, and these are different people. John Katzman is the founder of The Princeton Review, founder of 2U, and founder of Noodle, which he's currently running. As a favor, he had read an early draft of an update of a book I'd published in 2020 called Active Learning Online. That one was in response to the pandemic: how to use active learning with Zoom and all this stuff, so it wasn't just an attempt to give a lecture through a camera, but to take advantage of the medium and do interesting things with it. And then I was going to update that to focus on hybrid, where some of it was in person and some online, some synchronous, some asynchronous. I'd written that, and John read it, and he made this one little comment. This is probably in December of 2022. He said, you know, this AI thing is going to be huge. You've really got to say something about it. Because there wasn't a word about it in the book.
[00:08:49] Jeremy: That's great. By the way, I wish I had a John in my life, because my book came out in October of '22 without any reference whatsoever to AI, one month before ChatGPT comes out. So I wish John Katzman was in my life. But so he said, you should really say something about it. I assume you pulled on that thread a little bit. How long until the sweater unwound?
[00:09:10] Stephen: How about two months? I mean, that book got thrown away. I preserved part of one chapter in an edited volume, which is still not out yet, on hybrid education. I'd come up with this elaborate algorithm for how to allocate which parts of instruction ought to be asynchronous and which parts ought to be in person with people, with all these different factors. It was so complicated. I hated to see it just go, so I converted it into a chapter. But yeah, your analogy of pulling the thread is right, because the thread really unraveled the other book, and it just got completely redone.
[00:09:45] Henrik: What's your current thinking on what AI, as we know it now, will mean for learning?
[00:09:56] Stephen: So this book that I just turned in has two central themes. One of them is this idea that humans are better at open-ended situations that require taking context into account. And the other is this idea that we should think of AIs as cognitive amplifiers. So don't think of them as copilots or collaborators or tools. You know, they don't have their own goals. They're not really capable of taking your side on something in the way that a human would. What they really are good at, though, is boosting what we already do well, with proper direction, and helping us compensate for what we don't do very well. So a lot of what's in that book is how to use AI to help you use AI.
[00:10:45] Jeremy: I love that. I love that. Yes.
[00:10:47] Stephen: Yeah.
[00:10:48] Henrik: Do you have concrete examples?
[00:10:51] Stephen: Yeah. I set up something I called the Cognitive Amplifier Loop, the CAL, which is this thing where you start off with a goal, why you're dealing with the AI (and the goal can be vague, by the way; part of the goal can be to firm up the goal), and that leads to a prompt. That leads to a result, and then a variety of things happen. You either loop back, realizing you don't have what you want but it's close, and you update the prompt; or you realize it's really far off, so you may update the goal, realizing you weren't asking about the right thing; or it might turn out that it's close enough that you end up refining it. So the idea was to take each of these stages and think about what kind of thinking is required. So, for example, I broke critical thinking into a couple dozen specific types. You can think about categories of critical thinking, like deciding whether a source should be taken seriously or not, deciding whether an argument actually makes sense or not, deciding how the alternatives you're faced with in making a decision should be weighted, and so forth. These are all different sorts of critical thinking. Those are categories, which got broken down further. It turns out that, A, there are too many categories for somebody to keep in mind, and, B, some of them are pretty complicated. So you can use the AI to store all these different things for you. I have this table, it's literally three and a half or four pages long, of just types of critical thinking. So I drop that into the AI, into the context window, and I write a prompt and say: help me with this problem by drawing on appropriate kinds of critical thinking in Table 2.1, or whatever it is. Which it does very nicely. So I can offload a lot of the cognitive load, a lot of the things that were straining me as a human, onto the AI, while having it help me with my goal of using the AI.
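To make the Cognitive Amplifier Loop concrete, here is a minimal sketch of it in code, following Kosslyn's description: goal, prompt, result, then loop back to revise the prompt, revise the goal, or accept. The function names and the `ask_model` wrapper are hypothetical stand-ins for whatever chat interface you use; this is not his actual tooling.

```python
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat-completion API you use."""
    raise NotImplementedError("wire this to your LLM of choice")

def build_prompt(goal: str, reference_table: str) -> str:
    # Dropping the table into the context window is the expanded-working-memory
    # move: the AI stores the taxonomy so you don't have to keep it in mind.
    return (
        f"Reference table of types of critical thinking:\n{reference_table}\n\n"
        f"My goal: {goal}\n"
        "Help me with this by drawing on the appropriate kinds of critical "
        "thinking from the table above."
    )

def cognitive_amplifier_loop(goal: str, table_path: str, max_rounds: int = 5) -> str:
    table = Path(table_path).read_text()   # e.g. the multi-page critical-thinking table
    prompt = build_prompt(goal, table)
    result = ""
    for _ in range(max_rounds):
        result = ask_model(prompt)
        print(result)
        verdict = input("close enough [y] / update prompt [p] / rethink goal [g]? ")
        if verdict == "y":        # close enough: refine details and stop
            return result
        if verdict == "p":        # close but off: revise the prompt
            prompt += "\n" + input("What should change? ")
        else:                     # far off: the goal itself needs updating
            goal = input("Restate the goal: ")
            prompt = build_prompt(goal, table)
    return result
```

The key move is in `build_prompt`: the multi-page table rides along in the context window on every round, so the model, not the human, holds the taxonomy.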
[00:12:50] Jeremy: So one way that I've heard it described, which I think is pretty apt, is that it's working memory. And the reality is, you can't keep a table in your working memory, right? But an AI can keep books. I mean, I think I heard Eric Schmidt actually say the other day: hey, remember these 20 books, and then just upload the books, can you tell me the dah, dah, dah. And effectively, the way to think about the context window is that you're telling the AI what to put in its working memory. Which, when you think about it like that, is so profound. Because the reality is, you're an expert, you've got four decades of teaching experience at elite institutions, and yet just keeping a table of types of critical thinking in mind while seeking to apply that table is a difficult cognitive feat, right? Imagine the average person. But for an AI... So what do you call that type of thing? If it's not co-piloting, what do you call it?
[00:13:44] Stephen: Cognitive amplifier.
[00:13:46] Jeremy: It's like, I know this, but I forget that I know it, almost, kind of.
[00:13:49] Stephen: Yeah. But you know, I use books that way. It's not a new thing. When I read a book, I kind of remember what's in it so that I can go back and look it up later if I need it. I don't remember the stuff itself; I just remember vaguely what's in it. Similarly, I know what to ask the AI to do. That I can store in my working memory. You're quite right, Jeremy, that's exactly right. It really is a way to expand your working memory by having fingers into it, as it were, sort of pointers, so that you can quickly access it and have it help you. So, going through that loop: when I first get the result back, I need to examine it from a critical point of view. And then if I have to update it, I've got to do creative thinking. So another big thing is, it turns out creative problem solving is also complicated. You've got this initial stage of divergent thinking, where you try to come up with as many alternatives as you can. And then you do the second stage of convergent thinking, where you narrow them down within the constraints of what the solution is supposed to accomplish. So the AI can help you a lot with that too. But it's more than that. I mean, AI can help you with all kinds of stuff. So, I'm sorry, I'm going to use the book as a structure, since I just finished it and it's on my mind. The first part was just laying out what this cognitive amplifier loop is and how to use it. The second part was about people. So I read somewhere that, you know, a lot of people are worried about being put out of a job by an AI. That's probably not what's going to happen. They'll get put out of a job by somebody else who's using an AI, right? Do you know who said that? It's not me. I picked it up from somewhere.
[00:15:25] Jeremy: No. To me, at this point, it's almost a meme. It's so ubiquitous as to be sourceless. Going back to critical thinking, I'm not sure where you rank a meme-level, uh, tautology, but that's probably where it is at this point.
[00:15:40] Stephen: Yeah, I think you're probably right about that. Yeah, so the second part of the book is about people. So it's about leadership, followership, collaboration, emotional intelligence. You might think at first glance that people have it all over AIs with emotional intelligence. That's not so clear to me, by the way.
[00:16:00] Jeremy: Well, you even read studies of bedside manner, right? Patients actually prefer an AI clinician to an actual doctor, right? Speaking of emotional intelligence.
[00:16:09] Stephen: Yeah, but I'm not sure whether that says more about the AI or more about the actual doctor. Touche. Touche.
[00:16:14] Henrik: Obviously, because you've done a lot of work on mental imagery, I was keen to ask you for any advice on how best to explain to an AI what you're seeing with your inner eye, to help it help you visualize it as you're prompting something. Have you been playing around with that?
[00:16:39] Stephen: I have not. I've used it as a way to induce meditation. This is part of, again, that same book, and it involves mental imagery of a certain type. And I've had it adapt to the user, because not everybody's especially visual. Some people may prefer more auditory or even tactile imagery, you know, lying on a warm beach with the sun on you kind of thing. One of the big advantages of AI is just how flexible and responsive it can be if you set it up right. So I have used imagery in that context, but I've also had it do mnemonics. I think in that first book on AI there was a section on mnemonics, where I did some imagery-related stuff. But what sort of things do you have in mind?
[00:17:26] Henrik: It was just that I think one of the superpowers of these models is, of course, that it makes an entrepreneur or a maker out of people who don't necessarily have that as a craft. So, if you wanted to generate an image for an ad, or to send to somebody, or you're sitting and talking to your son and he's explaining something he has in his mind, I find it to be a fascinating tool for trying to bring some of that imagery to life, because I can vocalize it with words and then obviously the AI can render it for me. And so I was curious if there were ways to increase the fidelity between what I had in my inner eye and what the AI might understand.
[00:18:12] Stephen: I think there will be, but it's not there yet. I have been incredibly frustrated with the image generation abilities of AI. Especially when it generates something that's close to what I want, and I ask it to try again and just do this or that, it just doesn't do it very well.
[00:18:28] Jeremy: No, I didn't mean start all the way over and do something totally different. Okay. Okay. Well, do you have another thought on that?
[00:18:36] Stephen: No, no. But eventually what I would like is what they keep saying they're about to do. Which is, you know, think of Michelangelo releasing the statue within the block of marble, where you can direct it to lop off this little piece here and massage that one over there. That's what I want. But you were right. I mean, it's incredible. It's close. You say, just make it a little... nope, starts over again.
[00:19:00] Jeremy: Nope. Entirely new. That's not even the same character.
[00:19:03] Henrik: I was trying to render a dog the other day, sitting in an airplane, and then I wanted it to move the dog to another seat in the row, which turned out to be impossible. I was like, the AI did not want this dog to have a window seat, whatever it was doing.
[00:19:19] Stephen: I'll stop on this in a second, but Beth Callahan, who works with me, tried to get it to generate an image of the two pans of a scale, an old-fashioned scale, one of which had like five iron balls in it and one had one, so the heavier one would be lower. Impossible. She could not get it to render it so the one with more weight in it was actually lower. I tried. She was right. I couldn't do it.
[00:19:45] Jeremy: I wonder if that has something to do with having too strong an idea in our mind about what we want, perhaps. I think a lot of times folks misuse GenAI when they expect it to read their mind. And I wonder the extent to which we don't realize it, but we're basically trying to get it to read our minds, and perhaps because many of us aren't visually inclined or painters or whatever, we lack the requisite vocabulary. Which actually, in a way, leads me back to the question I was wondering about: how far upstream can you take the AI's ability to teach you? So for example, I'm giving feedback to someone, or I'm attempting to, and I might say, you know what, would you give me feedback from the perspective of Dale Carnegie, How to Win Friends and Influence People? What would he say about this feedback? A lot of people hear that and go, oh, how'd you think of Dale Carnegie? That's a great idea. And I go, if you don't know to think of Dale Carnegie, just ask the AI whose perspective you should get. Which is to say, I've suggested they go upstream of their activity. In this case, I happen to know of Dale Carnegie, whatever. But the point is, at some point, I feel we lack the requisite context to even appreciate an upstream recommendation. I do believe that one ability of AI is to complement what we don't know. But the challenge is, how do I prompt for what I don't know? How do I realize, for example, in this imagined interaction, that I need to get feedback on my approach? And for me, the challenge of actually knowing that is part of the metacognitive load, right? And so I have often wondered how we can push folks and encourage folks to move upstream in their thinking and leverage AI as a thought partner in that way. But even as I try to describe it, I realize it's hard to describe. I don't know if that triggers anything for you.
[00:21:37] Stephen: Yeah, I like very much the way you're thinking about AI, because that's exactly what I do. I will do things like, when a prompt doesn't work, I'll give it to the AI and I'll say, here's what I want it to do, here's what it is doing; can you redesign it? It actually is pretty good at that, most of the time. So, stuff like that. I very much like this idea of trying to push it upstream. But why not just ask it? Just say: here's the situation, what should I be thinking about? Here's a frame I really like a lot, from a guy named Ackoff. I think it was published in the 60s. He had a hierarchy of ways of thinking about knowledge. The idea was that data is uninterpreted observations of one sort or another. It doesn't have to be quantitative; it can be qualitative. And information is when you've interpreted the data. Then knowledge is when you've integrated that into what you already know. And wisdom is when you take perspective on it all. So what you're talking about, Jeremy, is somewhere, I think, between knowledge and wisdom, where you've got to have it integrated in so you can get the associations out, because often you don't know what you don't know, so you're dependent on the kind of associations that happen to trigger things. But wisdom is this idea that, sitting on top of it all, you've got it all in perspective, which takes a huge amount of knowledge and experience to get to. But I think, from Ackoff's perspective: think about how we could use the AI to supplement our own knowledge, to supplement those kinds of associations. Yeah.
[00:23:16] Jeremy: It requires a humility.
[00:23:17] Henrik: If we take that as a way of doing self-learning, and you obviously know a lot about the science of learning: how would you best talk to a human, or in this case the AI, to help it help you understand that better?
[00:23:37] Stephen: So this brings up a problem which I expect to be solved in the relatively near future. To do it well, the AI should have a model of you.
[00:23:47] Jeremy: Right. I immediately went to custom instructions, right? And I don't know if that's exactly what you're getting at, Stephen, but Henrik, when you asked that question, my immediate thought was: you've got to tell the AI to interact with you in a way where you want to learn and you're aware of your learning gaps, or, you know, cognitive gaps potentially. I don't know.
[00:24:05] Stephen: Yeah, that's right. But it also needs to know enough about you to know what motivates you, what interests you, something about your background, things it can use as foundational material, and so forth. I have this thing I do in talks, where I start off with a picture generated by AI. This is a good example of what we're talking about, by the way. So I have this picture of a log, with a Greek philosopher sitting on one end and a student on the other. And I say, back in classical Greece, this was the ideal educational environment: a log with an instructor who knew everything relevant, including about the student, what motivated them, what they already knew, et cetera, et cetera. The problem is we can't duplicate it. First of all, there's too much known. Nobody knows everything. But even if we have specialists, it just won't scale. However, I say, we can distill a lot of the active ingredients here and substitute for the human instructor. Then I click to the next slide, and it goes forward to another log, with an AI robot sitting on one end replacing the Greek scholar. I could not get it to use the same log.
[00:25:13] Jeremy: I was going to say, and you probably couldn't get it to place the AI on the right spot on the log.
[00:25:17] Stephen: And I couldn't get it to use the same student. I mean, I kept feeding in the other one. I said, this is perfect. This is exactly what I want. Just substitute this. Wouldn't do it.
[00:25:25] Jeremy: So, the irony notwithstanding, it's a great teacher, right?
[00:25:31] Stephen: But the fact is, the AI does have some advantages over the old Greek scholar. It can be set up with a lot about the science of learning, so it can actually know a lot about how to get somebody to learn, in a way that the Greeks didn't know systematically the way we do now. And also, of course, as a subject matter expert it knows a ton more about lots of stuff.
[00:25:49] Henrik: How much does it matter how it speaks to you? You were mentioning earlier the difference between having either kinesthetic or auditory or visual kinds of words being used in a narrative, right? As I understand it, that's the neuro-linguistic programming kind of thing. But as we are developing bots, and as we're interacting with ChatGPT, obviously it will have a default way of talking to us, and then there's the stuff that we design for other people. But it does have the ability to change its way of talking to anybody, right?
[00:26:27] Stephen: Yeah.
[00:26:28] Henrik: Do you have advice on how to ask it to talk to me, for example, in a way that would be best for me? Because obviously I might not be aware that I use, you know, kinesthetic things like 'standing on firm ground' or 'that's solid' or whatever. But somebody with your background would instantly understand how I like to be communicated to.
[00:26:51] Stephen: Well, not instantly. That's part of the point, actually: it takes some experience interacting to figure that out. And that's part of the problem that I was zeroing in on a minute ago. You can't do fine-tuning on them, you know, for a large language model. And even if you could, you'd need tons and tons of data. Clean data. It would take quite a while. So you can't really customize them so it knows Henrik.
[00:27:12] Henrik: Do you think we're that complicated? Or are there more like five different behaviors and five different language syntaxes that people enjoy?
[00:27:21] Jeremy: Like, is there a quick five-minute online test Henrik could take, for example?
[00:27:25] Stephen: Yeah. So do you know about the Big Five? Sure. Are you familiar with that? Yeah. So there are these five big personality dimensions, each of which has, I don't know, a couple of dozen specific traits. Okay, take conscientiousness; it breaks down into... so, OCEAN, right? That's the acronym: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Those are the five big dimensions, each of which unpacks like crazy. And each of those interacts: they interact within a dimension, but also across dimensions. And here's the part that's really fun: they interact with situations. And the way they're going to interact with situations depends partly on how you interpret the situation, which is a function of both your previous experience and your genetics, your temperament. So you put all that together, it's ridiculously complicated.
[00:28:24] Jeremy: Wow.
[00:28:25] Henrik: So there's no cheat sheet there.
[00:28:28] Stephen: No. Well, this is the second part of that book I just wrote. I actually have a chapter on personality, where I go through this stuff and talk about ways you can use the AI as a cognitive amplifier to help you with some of this. But it's not quite there yet. I'm expecting that in another couple of years it probably will be.
[00:28:44] Jeremy: Can we talk about this idea of what we're getting at between knowledge and wisdom, and the ability for the AI to talk to us, et cetera? It reminded me of one thing that you mentioned in your book: the single greatest problem in the science of learning. And I'd love for you to talk about this idea of, I don't know if it's 'association', or how you would describe it.
[00:29:04] Stephen: Yeah, it's called transfer.
[00:29:06] Jeremy: Can you talk about that a little bit? Why it's a problem in the science of learning and how AI either helps mitigate or exacerbate that problem?
[00:29:14] Stephen: Yeah. So the problem is that you learn something in one context, like a classroom, maybe. And then you just don't use it in other contexts. It's a failure of transfer. So transfer means from one context into another context. And there are two types: there's near and far. Near is where the contexts are similar. Far is where, on the surface, they don't seem to have much in common, which is the way it usually is. The problem is that the only way to really learn something in a way that'll help you transfer it is to see lots of varied examples. Mm. Yeah. And you know what AI's really good at?
Jeremy: Yeah, I was gonna say: bingo.
Stephen: Yeah. Yeah. It's terrific at it. Really good.
[00:29:59] Jeremy: It reminds me a little bit, as you describe it. Are you familiar with Duncker's radiation problem?
[00:30:03] Stephen: Yeah, sure. Of course.
[00:30:04] Jeremy: You know, that classic one where there's...
[00:30:06] Stephen: Yeah, yeah. In fact, that's a good example of failure to transfer. For people who don't know about this: you've got a tumor. If you beam the radiation in directly, you'll kill all the cells between the tumor and the outside of the body, and that's not so good for the human. So the solution is to break up the radiation into a bunch of smaller beams that intersect at the tumor, so that they're weak enough that they don't actually damage the healthy tissue as they go in. So people like Keith Holyoak did this classic study where you give people this problem and get them to understand the solution, and then you switch them to other problems. Like, there's an army that wants to conquer a castle, but if they all come in at once down the main road, the defenders will see them coming and will have the fortifications up and do them in. And the solution is to break up into small bands of soldiers that come from different angles and then assemble at the end, right? People don't see it. People don't automatically transfer. That's the point of that paper.
[00:31:06] Jeremy: To me, the insane thing is, and you probably know the statistics better than I do: something like, I don't know, call it 10 to 30 percent of people can solve a problem that's analogous, right? If they know about the radiation thing, they should be able to solve the army thing, right? The thing that's wild is, if you ask someone, does the radiation problem have any bearing on this military problem? All of a sudden, something like 90 percent of people go, right! But the key is actually prompting them; I mean, you could say it's prompting them to transfer. The question is how you replicate that kind of experimental finding. Is AI capable of helping us become aware of...
[00:31:45] Henrik: Blows my mind. Why is that?
[00:31:48] Jeremy: It's wild, right?
[00:31:49] Henrik: Why is that? Why is it so difficult for humans to understand the abstraction of the system that was applied to one thing?
[00:31:57] Stephen: I don't think anybody really knows, but here's a hypothesis. Imagine our ancestors discover some fruit that is edible. But if it's a slightly different color, it's not ripe yet. And in a slightly different shape, it's actually this other one that's poisonous. So what you want are what's called narrow generalization gradients. A generalization gradient is the extent to which, if you learn one thing, you'll extend it to things that are similar. So humans in general have pretty narrow extensions, as it were, maybe because of the conservative way that evolution built us, so that we don't get knocked off by doing things that are inappropriate. But unfortunately, that has unintended consequences, I would expect.
[00:32:41] Henrik: I wonder if AI has the same thing, because obviously it's trained on all of our knowledge. So it'll be interesting to try to figure out. Because one of the ways that I try to get AI to produce more creative kinds of results is to try to do that kind of context switching for it, where you say: these ideas are great, now give me something that would be considered wildly illegal, or what you would do in a different industry. And obviously it'll answer. But because you suddenly prompted it with something that was, you know, not common, it suddenly spits out something that is not standard.
[00:33:17] Jeremy: Well, and here's the great thing, by the way: Henrik's enlightened enough to know that far-reaching analogies have greater combinatorial capacity, or potential for a creative solution. So he knows to ask for it. Going back to the earlier part of our conversation: is there a way to get the AI, or to prompt the AI, to metacognize for somebody who doesn't appreciate distant analogies? What do they have to ask in order for the AI to say, I should try distant analogies to help you with this? You know what I mean? Because that's an example of upstreaming one's thinking. And I feel like the challenge is actually knowing. Do you always need to ask the question, right? Should we humbly perhaps presume, I'm probably thinking too far downstream; is that just a safer place to start? Because we don't default that way, right? We default to: I'm at the right spot in the stream, and I just need you to answer my question, rather than question the question.
[00:34:12] Stephen: That's great. So one of the things that we do is we have a set of guardrails that we put in along with whatever else we're doing in instruction. The guardrails are things like: don't give the answer if they ask you for the answer, because the students will do things like that. That's so good. Yeah. We also have one on the science of learning, where there's a summary of the various principles. These are always in the background; they're always dropped in, and the prompts are written in a way to take advantage of them. So I could imagine compiling a set of meta-prompts that are always thrown in that'll help you do that kind of thing. But there is a more fundamental problem, though, I think. So that's one part of it. I have a friend named Gary King, a Harvard professor, who made a comment I thought was really pretty insightful. He said that AI is really good at interpolation but bad at extrapolation. And I thought that was really quite insightful, because what they've done is they've been trained up on all this stuff. We're talking about LLMs here, which are essentially trained to anticipate the next token, which is a serious understatement of what they're really doing, given the way attention works in these things. It's much more complicated than that, but it's a simple way to think about it. And nevertheless, they pull out all these dimensions and dimensions and dimensions and so forth, a lot of abstractions. But it's still based on the data set, and so interpolation between things that have been trained is something they're going to be able to do pretty well. But what about when you have to jump contexts? That's the point I was making earlier, that we're pretty good at that. And I've got some vague ideas about why, and why AIs are not. But it's interesting to think about why they seem to be limited when the context requires being changed. You can tell them to change it, and it doesn't always work; and it won't necessarily figure it out on its own. So this context thing is really interesting.
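As a rough illustration of the 'always dropped in' guardrails and standing meta-prompts Kosslyn mentions at the top of this answer, here is one way the assembly might look. The guardrail, principle, and meta-prompt texts below are invented examples, not what Active Learning Sciences actually ships.

```python
# Invented examples of standing material that is "always dropped in":
# guardrails, a summary of science-of-learning principles, and meta-prompts
# that push the model to question the question (e.g., try distant analogies).

GUARDRAILS = [
    "If the learner asks you outright for the answer, do not give it; "
    "guide them toward it with questions instead.",
]

LEARNING_PRINCIPLES = (
    "Summary of science-of-learning principles goes here: deep processing, "
    "deliberate practice, varied examples to support transfer, and so on."
)

META_PROMPTS = [
    "Before answering, consider whether the question itself should be "
    "reframed, and whether a distant analogy from another domain would help.",
]

def build_system_prompt(task_specific_instructions: str) -> str:
    # The standing material sits in the background of every session;
    # only the task-specific part changes.
    parts = [task_specific_instructions]
    parts += GUARDRAILS
    parts.append(LEARNING_PRINCIPLES)
    parts += META_PROMPTS
    return "\n\n".join(parts)

print(build_system_prompt("Tutor the student on the radiation problem."))
```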
[00:36:11] Henrik: What do you call that lacking ability?
[00:36:15] Stephen: I don't have a name for it. Can you think of one?
[00:36:18] Henrik: No, but I mean, it's akin to a discussion we had the other day, where I was rendering a ton of images of myself, because why not? And my wife looked at them and basically observed: these all look like you, but they're clearly not you. It was lacking a sense of soulfulness, for lack of a better word.
[00:36:39] Jeremy: I think the right expression is je ne sais quoi, isn't it? Haha.
[00:36:42] Henrik: Don't take the bait. Haha.
[00:36:44] Jeremy: Sorry. Sorry. We'll have to explain the French jokes to people, because I don't know if we caught that context in the conversation.
[00:36:52] Stephen: Yeah. It's related to the old uncanny valley idea, where there's something that gets really close but is missing something. So it seems a little creepy, off in the uncanny valley.
[00:37:04] Jeremy: Can I put a cynical view out there? I'd love just to play with this, because we've been talking about it like the problem is with the AI. And I thought you were going to give this hot take, and then you went a different direction, so I'm just going to come back and tempt you, perhaps. I wonder whether part of the problem with AI is the human, and specifically the question of learning. I wish we could kind of rewind the game tape to the exact moment that sparked this thought. I have been somewhat disenchanted at times by even students' manifest disinterest in actual learning. It turns out most people just want the outcome. They don't actually want to learn. And I think a lot of my challenge is, I show up wanting to teach, and people just want the outcome. They don't want to learn. Right?
[00:37:54] Stephen: Yeah.
[00:37:55] Jeremy: How do we deal with that? I would love for you to react to that. But then also, how do we account for that? Is that true? What do you think?
[00:38:02] Stephen: Absolutely. I've taught for decades; how could I not think that's true? I had this really disappointing moment. I taught at this university that used a flipped classroom, and it was super selective, really, really good students. And after I left there, one summer there were about a dozen or so interns in New York City from the university. So I organized a lunch, took them out to lunch, in a separate room in a Chinese restaurant. And I asked them. I said: so, this was a flipped classroom. You're supposed to do the reading, watch the videos, get all the information transfer, the content delivery, before class, so that in class we can use it in some way: you know, active learning, discussions. How many of you actually did the readings? What's your guess?
[00:38:49] Jeremy: Why would you do that to yourself, Stephen? Why would you do that to yourself?
[00:38:52] Stephen: I was curious. I was genuinely curious. 10 percent? Okay. What do you think, Jeremy?
[00:38:57] Jeremy: Oh, I'm a cynic as well, so I would probably guess single digits, just because of my own experience. A very, very small percentage. Zero. Zero. For those not watching the video: zero.
[00:39:09] Stephen: Zero. One said, well, I kind of skimmed through them, and I remember... Yeah. So, these were very smart students, by the way, so they could rely on their raw intelligence to fake their way through the seminar, which they did. And some of the time it was kind of obvious that that was what was happening. So I actually thought about that a lot and decided that we're making an assumption that's just unwarranted, which is that they are interested in learning. Just what you said, Jeremy: they're not. They'll learn insofar as they think it's going to help them with the exam, and they want to do well in the course because they want a better life and all that, and they realize they have to do well in school, and so forth. So here's what we've developed now, in the current stuff that we're doing. All the courses we design at the company (it's called Active Learning Sciences; we build educational programs, and we've been building a university in Seoul, South Korea, TGA University, which is now accredited and going great, working with them, and various other things) use a four-phase sequence with the AI for every class session. The first phase is a short video; we call it a teaser video. The point is not an overview of what's coming; it's not to give them a cognitive structure so they can organize everything and so on. That comes later. The idea is to get them interested, and we don't assume they're interested. We usually have the AI generate them, and occasionally we can find one on YouTube, where the whole point is to come up with something counterintuitive, interesting, enticing, that'll make them see that there's something potentially relevant and interesting about the subject matter. So that's how we start. The second thing we do, immediately after that, is have the AI figure out what they already know that's relevant for the learning objective. So we've programmed it with a detailed rubric, which guides it in an interview or some other way; we have lots of different ways of figuring this out. The idea is that we're not going to teach them stuff they already know. That's not teaching; that's just bludgeoning or something. You already know it, why do it? Instead, we're going to focus on what they don't actually know and give them deliberate practice, so we're going to use learning science here. That alone is an advantage: you realize coming in that you're going to actually learn stuff you don't already know, so you'd better pay attention. And we've already set you up to see why it's sort of intriguing and interesting. Then we go through the content delivery, and at the end we have some kind of active learning, where they do one of dozens of different kinds of activities to pull it all together and use it in some way. And the AI is lurking in the background with the rubric. It grades them and gives them feedback in real time on which specific things they were doing well and which they weren't doing so well, and offers a tutorial on the points they weren't doing well on. The point is, because we're leading up to active learning, they know they're going to get assessed. And it's competency driven, so they've got to get through it or they're not going to be able to go on.
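One way to picture the four-phase session structure Kosslyn just walked through is as a small data model: teaser, prior-knowledge check, content delivery, and active learning, with a single rubric driving both the check and the final assessment. The field names here are invented for illustration; this is not the company's actual schema.

```python
# Invented field names, for illustration only: the four-phase class session,
# with one rubric driving both the prior-knowledge interview (phase 2) and
# the graded active-learning exercise (phase 4).

from dataclasses import dataclass, field

@dataclass
class ClassSession:
    learning_objective: str
    teaser_video: str                                        # phase 1: spark interest
    rubric: list[str]                                        # drives phases 2 and 4
    content_units: list[str] = field(default_factory=list)   # phase 3
    activity: str = ""                                       # phase 4: debate, role play, ...

    def plan_content(self, already_known: set[str]) -> list[str]:
        # Phase 2 outcome: skip what the interview showed they already know,
        # so deliberate practice lands on what they don't.
        return [u for u in self.content_units if u not in already_known]

session = ClassSession(
    learning_objective="Explain near vs. far transfer",
    teaser_video="counterintuitive_demo.mp4",
    rubric=["defines transfer", "gives a far-transfer example"],
    content_units=["definition", "near vs. far", "varied examples"],
    activity="role-play: coach a peer through Duncker's radiation problem",
)
print(session.plan_content(already_known={"definition"}))
```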
[00:42:16] Jeremy: Just on this real quick, before we go to a new topic, Henrik: does it have something to do with intrinsic versus extrinsic motivation?
[00:42:23] Stephen: Yeah, exactly right. So for the intrinsic stuff, if we use self-determination theory, which I still think is the best thing out there for understanding this: you've got competence, you've got autonomy, you've got social relatedness. Those are the three big dimensions that they pick up on. So for competence, you need to set it up in a way that it's challenging, but not too challenging. It can't be too easy, or it's boring and people lose interest. So you have to get it right at the top of the inverted U. That's going to be different for different people, and AI can figure that out, by the way, which is cool. So you get a sense of making progress, of competence. The autonomy thing is some choice; that is, the AI will branch, which is where the importance of that pause thing comes back again. So you have interactivity between the AI and the person, where they have some control over what's going to be happening. And then social relatedness: I really do think the hybrid, going back to that again, is important, that at some point you need human interaction. What the AI is doing is a lot of the content delivery, which can get you up certainly to the information part, and some of the knowledge part. But I think there really isn't a substitute for people interacting with other people. But I want to structure it, so I have a learning objective. I have the AI do that, by the way.
[00:43:44] Jeremy: What is that called, you say, self determinism theory?
[00:43:47] Stephen: Oh, self-determination theory. I think it's in that book, if I remember right. SDT, self-determination theory. Deci and Ryan are the two guys who did it, and they've developed it over many years now. It's quite well articulated.
[00:44:04] Henrik: Say you're somebody sitting in an organization, not in a classroom, and you want people to learn about AI because you think that's where the world is going, to your earlier point: people won't get outcompeted by AI, they'll be outcompeted by people who are using AI. So as a business leader, you understood that, you got the memo, and you're like, okay. But your organization is hesitant, because they're scared or whatever the reasons could be. Is there stuff from the classroom that we could bring into the corporate world, where business leaders can go and say something to make it easier to get people either to pay attention or to open their minds, so their ability to learn becomes greater?
[00:44:54] Stephen: So I was with you until the very last part about abilities.
[00:45:00] Henrik: I went back in my brain to BJ Fogg's behavior change model: you have motivation and you have ability, and you either make it very easy or you create very high motivation. And so in my mind, I was basically trying to figure out: if you have an organization with 500 people, and you basically have the sense that five, ten percent are actually using it, you would like that to be higher than 50. And you do the classes, and you write the motivational kind of emails, and you still kind of feel there's a little bit of, yeah, probably not going to happen on my watch, but I hear you. And you two are great educators, and I would imagine this struggle is real, because every day in the classroom you walk in and want people to learn, but there seems to be hesitation. I was just trying to figure out if there's a way of taking that thesis into the corporate environment instead of the classroom.
[00:45:53] Stephen: Oh, I think without question. But I think in a corporate environment, incentives and extrinsic motivation are also really important. So on the intrinsic side (Jeremy and I were having a little fun there earlier), occasionally there is a student who's self-motivated. It's not the norm, but you do find people who really are interested, and those are probably largely future academics or something like that, which is a low percentage. So it's not zero. Similarly, in a corporate environment, I suspect there are people who want to learn just because they want to be more competent, getting back to intrinsic motivation. But I think a lot of people are motivated because they want to move up. So the extent to which you can define clear pathways where developing certain competencies is going to help you, that's going to be a major kind of extrinsic motivation in a corporate environment, I would think.
[00:46:53] Henrik: That's great feedback.
[00:46:55] Jeremy: I'm a student of Teresa Amabile at your institution. I had her on a different podcast a couple months back, and she talked about (I can't remember who conducted the studies, but she said it was demonstrated even, you know, 50 years ago) an activity which is intrinsically motivating, once there is an extrinsic reward attached to it...
[00:47:16] Stephen: Yeah.
[00:47:16] Jeremy: The person is no longer motivated. Right.
[00:47:19] Stephen: Yeah.
[00:47:20] Jeremy: So I wonder a lot about that: about how much it actually has to be not work? How much does it have to be not related to achieving or moving up, versus... right?
[00:47:31] Stephen: That literature is a mess. It's called undermining intrinsic motivation, and it originated with Mark Lepper at Stanford. I think Teresa was involved in some of that early on. I was a graduate student at Stanford; she was three years behind me, so I knew her back in the day and stayed in touch with her over the years. But I think if you look at that literature, which I'm not necessarily recommending, it's complicated. It doesn't always replicate. It's another one of these things where it kind of depends. So it's not every case where you teach somebody something in the context of an extrinsic motivation that it's going to undermine their intrinsic motivation. And I'm not even sure how powerful intrinsic motivations are for a lot of the contexts we're talking about now.
[00:48:15] Jeremy: Yeah.
[00:48:15] Stephen: I do think that the Ryan and Deci stuff, the SDT, is probably the best out there: competence, autonomy, social relatedness. Those are the three big kinds of factors that seem to be intrinsic. But even they talk about how the extrinsic stuff can sometimes get absorbed and become internalized, and the intrinsic stuff hooks in with the extrinsic. It's a complex system, right? Here's something that's interested me a lot. If you've had any statistics, you know about the difference between a main effect and an interaction effect. Most people seem to want main effects. You know, a single dimension, like height, where you're measuring along a single dimension: that's a main effect. But in an interaction, the value on one dimension depends on values on another dimension. So maybe you're comparing males versus females, but depending on the age, you're going to get an interaction: pre-puberty, females will tend to be taller than males, but vice versa post-puberty. Or if you look at temperature, summer versus winter, you might think that's a main effect. Well, it depends on northern versus southern hemisphere. So whenever you see the words 'depends on,' that's an interaction lurking in the background. Okay. Most things in the real world are interaction effects. They're not simple main effects, this thing's better than that, or this thing's bigger, whatever. It depends. And people hate that. We're really only good at maybe two factors; some of us can do three. More than that, forget it. And in nature, this is where AI can help us. It's like Plato's cave, right? With lower-dimensional projections, the shadows on the wall: that's what we see, that's what we appreciate. But AI can actually help us by doing different lower-dimensional projections and helping us try to appreciate, et cetera.
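A toy numeric illustration of Kosslyn's height example, for readers who haven't had statistics: the numbers below are made up, but they show why 'it depends' signals an interaction rather than a main effect.

```python
# Made-up numbers for the height example. Whether females or males are
# taller depends on age group: the sign of the difference flips, which is
# an interaction effect, not a main effect.

mean_height_cm = {
    ("pre-puberty", "female"): 150, ("pre-puberty", "male"): 147,
    ("post-puberty", "female"): 163, ("post-puberty", "male"): 176,
}

for age in ("pre-puberty", "post-puberty"):
    diff = mean_height_cm[(age, "female")] - mean_height_cm[(age, "male")]
    print(f"{age}: female minus male = {diff:+d} cm")
# pre-puberty: +3 cm, post-puberty: -13 cm. The sign flips across age
# groups, so "it depends": an interaction is lurking in the background.
```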
[00:50:08] Jeremy: There are two quick things on my mind that I'd love to hear about, and I'll tell you the second one just so you can be thinking about it: what's changed in the last year for you as you've dived deep? Because I know we were looking at a book that's been published, and now you've already got another one under your belt. So that's the second question. The first question is: you mentioned in your book the single most important principle in the science of learning, the principle of deep processing. And I would love to learn how folks can practically leverage AI in regard to the single most important principle in the science of learning.
[00:50:41] Stephen: Yeah, that's in that book, the one that came out at the end of last year, Active Learning with AI: A Practical Guide is what it's called. Yeah, there's a chapter on just that. Look, you can think about the principles (at least I can think about them this way; other people have organized them differently, so I'll say it my way) as two big bins. One of them is: if you pay attention to something and think about it, you're very likely to remember it, even if you don't want to. So think about, you know, somebody asks you about a movie you saw that was engrossing. You didn't try to memorize it, and a week later you can still go through it. Amazing. At the end of the day, you're lying there in bed reviewing the events of the day. I mean, what percentage of what you remember do you think you tried to memorize at the time? I've actually asked that question of a lot of people. It seems to be like five percent. Most of what we remember is a byproduct. It's a consequence of having paid attention to it and thought about it. It's called incidental memory, by the way; it's been studied in some detail. So the idea is to come up with techniques or methods: turn it over in your mind, compare it and contrast it, try to think of special cases, try to think of exceptions to the rule, whatever. There are a whole bunch of these kinds of little things (and again, you can have an AI help you here, because it's a lot of stuff to keep in mind) that'll juice you to think deeply, which basically comes down to getting to the point of making associations to it.
The other big principle, by the way, is that when you start making associations, you want to forge mental connections, which not only help integrate this stuff so that you'll retain it, but also give you a hook so you can pull it out later when you want to use it. So these two things: think deeply (pay attention and think it through) and forge connections (make associations). For those two, there are a lot of special cases, a lot of special cases about how to do that. I mentioned deliberate practice. Do you guys play music, by any chance?
[00:52:45] Jeremy: Badly.
[00:52:47] Stephen: Well, okay. So if you play music at all and you go through a score, the first time through you'll notice where the hard parts are. You don't want to spend an equal amount of time going through everything each time; some of it is easy and doesn't need much time. You want to figure out where the hard parts are for you and give disproportionate effort to those. That's deliberate practice, and you need feedback of some sort to identify those spots. What it comes down to is: pay attention and think it through, but be selective about it; the effort isn't distributed evenly everywhere. I could go on, but this is in those books. I'm obsessed with trying to come up with principles, and they, of course, interact with each other; that last book has some of that. So what changed is this: the previous books were focused on teaching. They were really designed for people who design courses, teach, and so forth. The book I just wrote is designed for learners. It's designed for the rest of us, who are going to have to learn in order to adapt, grow, and function in this emerging world. So that was a big change in perspective.
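(A minimal sketch of the deliberate-practice idea: use feedback to find the hard parts and allocate disproportionate effort to them. The score sections and error rates below are invented for illustration.)

```python
# Deliberate practice as Stephen describes it: use feedback to find the
# hard parts and give them disproportionate effort. The sections and
# error rates are invented.
passages = {"intro": 0.05, "bridge": 0.40, "coda": 0.25}  # error rate per section
total_minutes = 60

# Allocate practice time in proportion to each section's error rate,
# instead of splitting the hour evenly across the score.
total_error = sum(passages.values())
plan = {
    name: round(total_minutes * err / total_error)
    for name, err in passages.items()
}

for name, minutes in plan.items():
    print(f"{name}: {minutes} min")
# The "bridge", where mistakes cluster, gets the lion's share; the
# feedback (here, the error rates) is what makes the targeting possible.
```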
[00:53:58] Jeremy: And what did you see as you shifted perspective from teaching to learning? Is there any key realization that struck you?
[00:54:06] Stephen: I realized that the humanities were a lot more important than I had given them credit for. For a lot of the learning we need to do, you can get a leg up by reading good literature, putting yourself in certain mental states, listening to music or seeing good art, reading philosophy and standing on the shoulders of giants, and so forth. Thinking about what's required to come up with and revise life goals as the world changes, the role of the humanities was much more important than I had previously appreciated. That's in the last part of the new book, which I didn't really think I was going to be getting into, but that's where I came out.
[00:54:49] Jeremy: What a beautiful conclusion to a conversation about learning in the age of AI: that the humanities are even more important than we ever knew. It's beautiful.
[00:54:57] Henrik: Wow. It's incredible to get you on the show. Really, really appreciate it.
[00:55:02] Stephen: My pleasure. You guys ask great questions. I had fun with this.
[00:55:05] Jeremy: You can tell, Stephen, we're huge nerds. We love the learning. We could easily geek out for another hour on more learning science, more application. We're passionate about this, so thanks for spending time with us.
[00:55:18] Stephen: This has been fun. You guys are great.
[00:55:20] Jeremy: This is an exceptional conversation. I'm excited for our audience to get to hear it. Thank you. Thank you.
[00:55:24] Stephen: Well, thank you guys.
[00:55:25] Jeremy: All right, Stephen, take care. Have a great day. Bye. So, okay, Henrik, we geeked out there, didn't we?
[00:55:35] Henrik: We geeked out. Well, you geeked out; that's what happens when you have someone who understands education as much as you do. I was just watching the game. But what I did take away, which I thought was fascinating, is that students are apparently as difficult to get to learn something new as people who work in organizations. A very simplistic way to think about it is to make sure there's a carrot at the end of the AI road for people. But I also liked the three-part model, and I'll butcher it, so you can repeat it much better than I can: autonomy, competence, and the social layer, relatedness. There's a three-letter abbreviation, SDT. I think those two were probably the areas I took away the most.
[00:56:23] Jeremy: Yeah. I thought it was a really insightful conversation with someone who spends a lot of time thinking about learning. The challenge of making associations, and of sparking ourselves to think, is really interesting. I keep coming back to that idea of moving upstream, and just how do we get upstream of our own thinking. And the fact that AI is an incredible interpolator but not a great extrapolator is a really interesting paradigm to spend some time with. I'm really excited to read his next book, because I just got up the learning curve on the last book, and then he drops at the beginning of the conversation that he's just written a new one.
[00:56:58] Henrik: And also that point you brought out about context switching: how difficult it is for people to observe one system or one idea in one space and then apply it to something else. I would guess, without knowing much about it, that AI would probably be very good at helping people with that. So something I'm going to play around with in the next week or so is: can I take a very brilliant observation or system that's been applied to the military or medicine, ask the AI how we would apply a similar approach to this other problem, and see if it does it better than most people do?
[00:57:37] Jeremy: You know, if you want a nerdy rabbit hole you can go down, Henrik, this is something I've thought about. I haven't spent a lot of time exploring it, but perhaps our listeners can, and you can let me know. Analogous thinking is a really powerful way to spark new connections, new ideas, et cetera, and the farther afield the analogy, the better. One of the most interesting parts of the research to me is the realization that the hardest thing to do is to identify an analogy to leverage. So a lot of times what we revert to is just getting out into the world, and what's amazing is the world starts to speak to you in different ways. So my hack is, effectively: don't think about the analogy at all, just get out there and allow the analogy to speak to you. If AI could assist with analogy identification, I think that could be a really interesting missing piece of the combinatorial cognitive puzzle.
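(If you want to try AI-assisted analogy identification yourself, here is a minimal sketch using the OpenAI Python SDK; the model name, the problem statement, and the prompt wording are all assumptions, and any chat-capable model would do.)

```python
# AI-assisted analogy identification. Requires the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY in the environment; the model
# name and problem statement are placeholders.
from openai import OpenAI

client = OpenAI()

problem = "How do we get employees to keep learning new skills on the job?"

prompt = (
    "List three systems from far-afield domains (military logistics, "
    "emergency medicine, ecology, anything surprising) that solved a "
    f"structurally similar problem to this one: {problem} "
    "For each, explain the structural mapping and what it suggests we try."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```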
[00:58:34] Henrik: One thing that I've used recently, and I'll leave you on this: I'm writing this new book, as you know, called Me, My Customer, and I, about how entrepreneurship is changing in the age of AI. One thing I've done recently is take all the podcast transcripts, and when I have a point I think is good but don't necessarily have a super good case for, I put the whole thing in the context window and say, hey, I'm trying to make this point; is there a quote or a story we talked about that would help people understand it better? And often there is, of course. So it's been a useful little hack.
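(Henrik's hack translates to something like the sketch below, again using the OpenAI Python SDK; the transcript directory, model name, and example point are hypothetical placeholders.)

```python
# Henrik's hack: drop every past transcript into the context window and
# ask for a supporting quote. The directory, model name, and example
# point are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Concatenate every episode transcript into one block of context.
transcripts = "\n\n".join(
    p.read_text() for p in sorted(Path("transcripts").glob("*.txt"))
)

point = "Entrepreneurs can now test demand before building anything."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any long-context chat model works
    messages=[{
        "role": "user",
        "content": (
            f"Here are our podcast transcripts:\n\n{transcripts}\n\n"
            f"I'm trying to make this point in a book: '{point}'. "
            "Is there a quote or story in these conversations that would "
            "help readers understand it? Quote it verbatim and name the "
            "episode."
        ),
    }],
)
print(response.choices[0].message.content)
```

The whole corpus rides along in the prompt, which is exactly the "drop it into working memory" framing Jeremy picks up next.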
[00:59:13] Jeremy: It's interesting to think that with AI, you can now drop into the context window what you might call a walk around the block, or a walk around the podcast. It gets back to the conversation with Stephen about working memory: you can effectively drop the history of the podcast into your working memory and ask, have we talked about anything like that? Which is very, very cool. I think most folks could make better use of that working-memory-like function of generative AI.
[00:59:41] Henrik: I think that's a good point to end this podcast on.
[00:59:45] Jeremy: I mean, it's as great a point as the realization that the humanities are way more important than we ever knew. How perfect a conclusion for a learning scientist to arrive at in the age of AI, right?
[00:59:56] Henrik: We'll take that. Until next time, thank you so much for listening. And as always, if you enjoyed it, please share it and subscribe and like and upvote and all those things that podcast hosts normally beg you to do.
[01:00:09] Jeremy: Where do they upvote, Henrik? Tell people where they can upvote. It's on Myspace. Adios.
[01:00:17] Henrik: Thank you. Bye.