In this episode, Brad Anderson, President of the Product Group at Qualtrics and former Microsoft executive, explores the transformative power of AI in reshaping businesses. Drawing from his time working with Satya Nadella, Brad reflects on Microsoft’s cultural shift from a 'know-it-all' to a 'learn-it-all' mentality and how curiosity and customer focus drive innovation. Brad shares how Qualtrics has evolved beyond surveys, using AI to analyze unstructured feedback and enhance customer and employee experiences. He highlights the importance of embedding AI into daily operations, fostering a culture of experimentation, and upskilling teams. With engaging stories, Brad discusses how hackathons and AI summits at Qualtrics inspire innovation and predicts that organizations embracing AI will outpace those that resist. Packed with insights, this episode is a must-listen for leaders navigating AI adoption and business transformation.
Key Takeaways:
Qualtrics' website: Qualtrics XM: The Leading Experience Management Software
Claude.ai: Claude
NotebookLM: Google NotebookLM | Note Taking & Research Assistant Powered by AI
ChatGPT: ChatGPT
00:00 Introduction to Qualtrics and Brad Anderson
00:21 From Surveys to AI: The Evolution of Qualtrics
01:41 Lessons from Satya Nadella
03:48 Building a Customer-Centric Culture
06:51 Adopting AI in Business
13:29 Upskilling for AI Integration
22:15 Organizing for Innovation
27:13 AI Summits and Hackathons at Qualtrics
36:42 ChatGPT Obsession
37:19 Notebook LM and Team Alignment
38:57 Innovation and Unexpected Wins
40:42 Using AI in Negotiations
44:18 The Future of AI and Large Language Models
45:25 Driving AI Adoption
48:47 The Rapid Growth of AI
56:42 AI-Powered Surveys at Qualtrics
01:01:13 Generative AI and Feedback Mechanisms
01:05:54 Wrap-Up and Key Takeaways
📜 Read the transcript for this episode: Transcript of Why your organization needs to embrace AI—Or get left behind, with Brad Anderson
[00:00:00] Brad Anderson: Hi, I'm Brad Anderson, and here at Qualtrics I have the privilege of being the president of the product group.
So my team is the team that's responsible for building the tools that, you know, more than 20,000 organizations around the world use on a daily basis. Now, the way our tools get used is people use our tools to understand the lived experiences of their customers, their prospects, their employees. Now, historically, when people think about Qualtrics, they think about a survey platform.
And of course, that's where we got started. But the majority of our innovations over the last five years have been focused on NLU, NLP, and AI, because the reality is there's far more that can be learned from the unstructured, unsolicited data where customers and employees are already talking about their experiences,
without even having to fill out a survey. And so the largest part of our growth, in terms of usage and revenue, has been in helping an organization collect, and then understand, and then act on this mass of information that is unsolicited and unstructured. And that is where I've been spending the bulk of my time over the last four years.
And so that is where I've had a chance to really dig deep and understand where can you put AI to work to really transform an organization and how that organization serves their key constituents.
[00:01:15] Jeremy Utley: It's so good to see you again. I so enjoyed our chat over dinner. That was fun. And I told Henrik about how insightful you were about getting AI adoption and culture change. And I said, we've got to get Brad on the podcast. I want to start at a different place, Brad. I was thinking about this conversation. I know we want to talk about AI, and you've written eloquently, you've spoken eloquently about it. I wanted to take a step back in your career, because it's really cool to get to talk to somebody who's worked for so many incredible leaders.
I know, for example, you worked with Satya for a bunch of years at Microsoft, and I just thought it'd be fun, actually, to start outside of AI for a second and just kind of get your thoughts on what you learned from Satya about leading an enterprise.
[00:01:55] Brad Anderson: You know, it's actually a really common question that I get, believe it or not. You know, to this day, and I've been at Qualtrics for years now, at least once a week I get the question of: why did you leave Microsoft and come to Qualtrics?
And what was it that changed Microsoft when Satya became the CEO?
You know, so I had a chance to report directly to Satya for the four years before he became CEO. And then after he became CEO, you know, I was definitely a trusted advisor and a trusted leader for him.
He's a very close personal friend, someone I admire and respect and love deeply. But in terms of how I would describe Satya, the first thing I would say is, you know, everyone knows him as this incredible leader. But even before that, he's just an incredible human being, like his ability to learn and listen and understand who you are as a person.
I mean, that really is core to who he is. And a lot of that came from experiences that, you know, he and his wife had as they were raising their son Zain, who was disabled. But then, you know, I got to work with Satya for those four years before he became CEO, where he was really kind of proving out a lot of the concepts that he then took to the entire company.
And, you know, first and foremost, everything was about culture, and how do you drive the right culture, and specific things like: how do you make sure that at Microsoft we are learn-it-alls, not know-it-alls? Because I would argue that for the bulk of the company's history, they, you know, kind of acted like know-it-alls.
Yeah, that was one thing.
[00:03:23] Jeremy Utley: Can I ask just on that? How do you discern a learn-it-all versus a know-it-all? It seems like it's easier to discern a know-it-all, but what's the evidence that someone's got a learn-it-all mentality?
[00:03:34] Brad Anderson: Yeah, are you asking questions and are you inquisitive? You know, one of the things I always look for when I'm interviewing leaders is: are they inquisitive?
That's a big, big part of it. So, learn-it-all versus a know-it-all culture. The second thing that I tell people, how he really changed the culture in the engineering teams, is that all of the celebration and reward system moved away from shipping to usage. So think about that for a minute: when you're incented, you know, your equity, your bonus, is all based upon
did you ship something, it doesn't matter if what you shipped is good or not. You know, there's the old proverbial "wait till Service Pack 1" for Microsoft, and some of that played into there. When you shift the whole reward system to usage, now the customer is the center of the entire focus of the engineering team, and,
you know, when something's happening with a customer, a customer doing a deployment is blocked, whatever the case may be, it's all hands on deck and you get the customer up and running. So, you know, related very much to culture and this know-it-all versus learn-it-all is also the concept of:
how do you make sure that the customer is the core part of your celebration and your reward system? And I would argue in a SaaS company, the best way to do that for the engineering teams is to have their metrics all be based on high-quality usage. So, for example, if you were to go look at my teams and how I reward them, it's not about revenue.
It's not about shipping. It's: are people using what you built?
[00:05:04] Jeremy Utley: Simple but powerful paradigm shift when you realize all the other ways teams can be managed and incentivized.
[00:05:12] Brad Anderson: The last thing I would say is, if you look at the first couple of years Satya was the CEO, he largely had the same leadership team around him that Steve had, but the company felt very, very different.
And I love Steve just as much as I love Satya. But Satya's focus on culture, his focus on driving the right culture, was just remarkable, and so was what it did for the company.
[00:05:31] Henrik Werdelin: Could you give an example of how that kind of came to life?
[00:05:36] Brad Anderson: Yeah, I can give some specific examples, you know. So let's talk about this example of putting the customer at the center of your reward and recognition system around usage, right?
I'll never forget the first time I really understood that Satya was serious about this. I think it was 2011, and we were having a conversation about a bunch of our senior people and whether they were going to get promoted or not. And there was one individual who came up for a conversation about a promotion. You know, this would be like a technical fellow, I mean, one of the most respected individuals in the company.
And as we were talking about this individual, you know, Satya made the comment: this person is great. They do great work, but what they've built in the last year or two is not being used. And it doesn't matter if it's the most beautiful architecture on the planet. If it's not being used, it's not creating value for customers.
It's not creating value for Microsoft. Therefore, why should we reward this person if they've not created value?
[00:06:32] Henrik Werdelin: I mean, it's such a strong point, and obviously one that normally is a little bit ignored, because the other metrics are easier.
[00:06:41] Brad Anderson: you know, and this is one of those ones where everyone walked out of that room going, okay, Satya is serious about this. The recognition and reward system is changing.
[00:06:49] Jeremy Utley: Yeah.
[00:06:50] Brad Anderson: And it drove culture change.
[00:06:51] Henrik Werdelin: Do you, I mean, from the outside, obviously where Satya in the last few years has really gotten to shine is this understanding of AI and its importance, and the speed at which it seems he pushed the organization to embrace it. Are you able to tell a little bit about how you've seen that?
[00:07:13] Brad Anderson: Well, I've been watching from the outside as well, even though I still have quite a few friends after being there for 18 years. I think it just comes down to focus. I mean, I remember sitting one time in a conversation with Satya where, back at that point, 95, 98 percent of Microsoft's business was on-prem software, and, you know, Satya had come in to lead what was the cloud and enterprise division.
He had come from Bing. So he had been building SaaS for seven or eight years while none of the rest of us had been building true SaaS, right? And I remember one of my first conversations with him, and he asked the question: Brad, what if we completely stopped any investment in the on-prem solutions for the next two years?
Did a student-body shift, right? Get everybody working on the cloud server, and then, you know, we can come back and add some value to that. What would that look like? But that's the kind of focus he would bring to any conversation. It's not like you can go into something and be half into it.
If something is going to be changing, like this transition from on-prem software to SaaS, it was a fundamental transformation of the platform; you have to focus. He participated, like many of us, in the platform transition from the mainframe platform to the PC platform, saw that firsthand.
He saw the transformation as you went from on-prem software to SaaS, and you pattern-match, and AI is just the latest platform transformation, but it's going to be happening at the fastest rate it's ever happened. So he knew that the company had to be pivoted quickly.
[00:08:41] Jeremy Utley: Maybe that's a cool kind of pivot point. Now, going back to your comment that Satya was laser-focused on culture: I know from our conversations privately that's a big passion of yours as well, obviously. Can you talk for a minute about how you think about an AI culture? Or maybe start with: do you think of an AI culture? And if so, what are the hallmarks of an AI-first, or -forward, or -oriented, however you define it, culture?
Sure.
[00:09:08] Brad Anderson: Yeah, great question. You know, at its simplest form, if a company aspires to be an AI leader, I would argue every individual in the company should be using AI on a daily basis. You know, so if I was sitting with a leader and the leader would ask me, "Brad, what would you do if you wanted to really make sure your company
is going to be one of the leaders in this new platform of AI?", my response is: the first thing I would measure is, are my people using AI every single day? Every function. Yeah. Accounting, legal, engineering, product, UX: is every function using AI every day? Because, you know, you learn as you do. And so I think that is the most important thing.
That's the first thing. The second thing, I'm sure we'll spend some more time on this: like any platform transition, there's fear and there's anxiety. There's cautiousness. And so you see a lot of leaders saying, like, hey, we don't want our people using, you know, AI right now. We don't understand it.
We don't know what the risks are. And so they say no. But what happens is people just bring it in from their homes. And so they're actually in a worse position, because then what's happening is they're using their personal, you know, AI interfaces and apps, which is going into the large language model.
It's not protected. And so I think you have to create a playground. We have that at Qualtrics; we have what we call the AI playground. And the AI playground is a place where all of our people can go in and use AI, and we can assure our customers that none of the data being used as we're putting AI to work is ever used externally to train any other large language models.
And so, you know, that's another big part: having that playground.
[00:10:49] Jeremy Utley: Can I just ask a couple of questions here? I don't want to interrupt your roll, I just want to slow your roll, but I already have a thousand questions. AI daily, every function, every day. As you think about that at Qualtrics, and as you're rolling out that kind of call, I love, by the way, that you defined culture as behavior.
I'm a hundred percent aligned with that. You started immediately talking behaviorally. I learned from Mary Barra: culture is just how we all behave. That's her quote. I think it's dead on. So you immediately went to: every single person must be using AI, in every function, every day. What are the kind of on-ramps that you have found useful in helping somebody in, say, finance or procurement or marketing?
Because I think a lot of people go: use it to do what, exactly? Yeah. What have you found useful in accomplishing or sparking that behavior change?
[00:11:38] Brad Anderson: Yep. So I'll give you an example of what I do. At least every week, maybe several times in the week, we'll have a few minutes, you know, as people are joining a meeting, or maybe we're kind of wrapping up a little bit early.
I'll be talking to the team and I'll say, hey, let me show you what I've been working on in my notebook. And I'll pull up, you know, a shared notebook that myself and a few people are working on, show how we've imported data into it, start asking questions of it, and actually just show people how to do it.
You know, even when we were trying to help organizations transition from on-prem to the cloud, and then, for example, when you move to the cloud with Office 365: are you still attaching documents to email, or are you actually sharing shared documents? That's a behavior change. And the only way that I've ever known to drive behavior change is you have to show people.
And so, you know, most organizations know who their formal and their informal leaders are, and you need those formal and informal leaders constantly showing and demonstrating the new ways in which business can be done. I'll give you an example: our legal team. You can imagine the redlines that we're going through with customers day in and day out, you know, across 20,000-plus customers. With the way that you can put AI to work to answer all the RFPs, all the redlines,
it dramatically improves the speed and the accuracy with which you can do things like that. So that's an example where people say, well, how is legal going to benefit from using, you know, generative AI? They can't live without it. Wow. Wow.
[00:13:09] Henrik Werdelin: When you talk about leadership and you talk about like showing them, can you talk to us a little bit about how, maybe how you started, but also like how do you yourself then upgrade your own ability to use it?
Because obviously it seems that you're showing the way forward by, by example.
[00:13:29] Brad Anderson: Yeah, you know, this challenge of how do we upskill our people. Like, I'll give you an example. I've got a really specialized team of AI and ML scientists; the bulk of them have come from Toronto. You know, it's kind of like the center of gravity for AI, it seems like.
But that team is the largest constriction point for me right now as I think about how I scale all the innovation that we're doing in AI, and as you can imagine, the competition for this talent is fierce. And so, you know, I think in every organization you have to have a core like that that really understands the technology and all the models and stuff behind that.
But then you've got to figure out how you take and upskill 15 percent, then 30 percent, then 50 percent of your engineering and product team. And so we have very specific programs that we're putting in place that enable us to have engineers who are kind of riding along with the specialists in the team that we call DICE, that's our AI team.
They get a chance to go along on the journey. They're learning, and then they can go and start doing a bunch of the work that, you know, two months before could only be done by this specialized team.
[00:14:33] Jeremy Utley: How do those ride-alongs work? Are they actually invited to all the meetings?
It's like a legit old school apprenticeship. That's what I'm imagining. It's like, sit here while I weld this and watch me put the beads, right? Is that what's happening? Like, what does it look like?
[00:14:47] Brad Anderson: We hope to minimize the number of meetings our people are in for sure. And so it really is just about understanding the coding and the practices, the methodologies, all those pieces in it.
So it's probably less meetings, and it's probably more one-on-one, where, you know, two different individuals are working on an end-to-end scenario. And then the trained AI and ML engineers that we have are showing and demonstrating to these other people how to do that. So they learn as they do it. And then what happens is they can go and teach the next set.
[00:15:18] Henrik Werdelin: Back to how you do it yourself. Like, what's your own kind of...
[00:15:23] Jeremy Utley: That's what I was about to ask too. So we're on the same page. That's exactly it.
[00:15:26] Brad Anderson: Yeah. And it's a super valid question, and it's all really about time, and where do you spend your time?
The only way that you really can learn is to do. You know, whether it's trying to understand what the concepts of coding are, whether it's trying to understand what the concepts of inclusion are, I think you have to have experiences. And I say experiences; some experiences are of the heart and some experiences are of the mind. So I really believe that you learn through experience. And so the first thing that I ever did when I opened up ChatGPT, I guess two years ago now, was:
write me a product vision document for Qualtrics. And then I started to give it all the specifics, you know, that I wanted it to include and stuff. And when it came back, I remember looking at it going, like, you know, the strategy and the top-level pieces are all very, very close. The depth is not there. But, you know, with any of these AIs, especially generative AI, the depth and the quality of the prompt that you put in completely dictates the quality and the depth of the output. And so that was my first experience. And then, you know, my children, who are in university, would come home and they would show me how they were doing their coding assignments and their writing assignments in minutes. Like, I'll never forget, our youngest son was home on a Sunday night, and he said, yeah, I've got three papers that are due at midnight.
I'm going, like, you know, hey, it's 10 o'clock right now, you've got two hours. Don't worry about it. ChatGPT wrote them for me. Turned them in 30 minutes.
[00:16:49] Jeremy Utley: Okay. So now we have to take a slight diversion. This will not be a surprise, Henrik, but the specifics will be. I'm often diverting.
I don't know, Brad, if you and I talked about this at dinner the other night, but I actually think a non-obvious hack for success as a senior executive is having children in their mid-twenties who aren't afraid to say: Dad, you're a doofus. Dad, you're doing it wrong. Dad, have you heard about this thing?
So many CEOs and senior leaders that I know have created literally billions of dollars of market cap because of products that their kids told them about: you guys aren't in this field, you aren't in this category. And none of their employees could tell them that, none of their employees could talk back, so to speak, but you can't fire your kid.
And I actually think a non-obvious hack is: talk to your kids about stuff they're doing and what they're using, because it's a way to get much more unfiltered feedback. That's my hot take. Your reaction? What do you think?
[00:17:47] Brad Anderson: Yeah, it's more than feedback. It's actually direction. You know, these kids are seeing things and doing things, you know, at scale and on a daily basis, a year before the industry is.
[00:18:00] Jeremy Utley: So it's a way of getting in front of extreme users, so to speak, but you've brought the extreme user to your dinner table, and all of a sudden it's not work, right? I think a lot of times, actually, part of the problem is the construct of work, right? Am I working right now? All of a sudden I act a certain way, I think a certain way. But when there are more casual, more laid-back environments, it goes from a waffle to a pancake, right? The compartments disappear; stuff starts spreading around and mixing. I love pastry, so of course I think in terms of pastries, right? But I think that's actually really important, to have that kind of uncompartmentalized interaction with lead users.
[00:18:35] Brad Anderson: 100 percent agree. And that's another example, you know, of where are the communities and the sources that you're learning from. So, just continuing on the question from Henrik: you know, I listen to a lot of podcasts. Like, there was a podcast a couple of months ago that Bret Taylor had done about the work that they're doing in the agentic space. It was incredible to listen to, right? So, you know, you find the podcasts, and I have a dozen of them that I listen to on a weekly basis.
And it's funny, my family listens to a bunch of the same podcasts, so, like, the entire family listens to All-In, and then we have chats in the family chat about, you know, the All-In podcast. So that's another thing that I do. Another thing that I ended up doing: really, my role and the role of my leaders right below me is trying to understand where the market is going to be
two, three, and five years from right now. The teams are all executing on what I call horizon-one investments, which are things that are going to come out in the next year and have a financial impact in the next year. But we're trying to see around the corners and understand what this looks like in three to five years, and the pace at which AI is driving change is like nothing we've ever seen before.
I mean, look at what was announced last week at AWS re:Invent, and then the 12 days of shipments coming out from OpenAI. It's remarkable how much innovation is coming out. And so I block out specific time to just go read and experiment and do things that are going to help me understand and give me a basis for where the market may go.
Like, right now I'm spending a lot of time learning about: what do agentic AI agents look like? How is that going to change businesses?
[00:20:11] Henrik Werdelin: Would you explain what that is?
[00:20:13] Brad Anderson: Yeah, so the word agentic comes from agency. Here's about the easiest way to describe it.
What I see in the future is you'll go to a website and there's going to be an agent that you will interact with, and that agent will be able to take action on everything that that website can do. So rather than you having to go in and navigate a website to do whatever you're trying to do, book a hotel, book a flight, you'll be able to go in and, in simple language, describe what it is that you're wanting to do.
The agent will go do it all for you, including going across different websites. That's a fundamental change.
[00:20:44] Henrik Werdelin: Do you think you're actually going to a website or do you think your agent, whatever that is, is talking to their, like my agent is talking to your agent kind of thing.
[00:20:54] Brad Anderson: It's probably both, right? And it's probably a question of when in time. You know, one of the questions that you kind of have to ask yourself is: are these things going to be the primary form factor that we're still working on? And if they are...
[00:21:07] Jeremy Utley: Note to listeners: Brad held up a mobile device, just for those who aren't watching.
[00:21:12] Brad Anderson: A mobile device or a PC, you know. If those are going to be the predominant devices that you're working on, you know, then you're probably still going to have apps and those kinds of things. But if the primary way that you're interacting is glasses, or ears, voice, then it won't be apps anymore.
You'll be talking to an agent. So I think a lot of it will also depend on where the experience is centered going forward.
[00:21:34] Henrik Werdelin: If you were betting, do I hear that that's kind of where you would bet, that it's augmented vision and audio? Because three and five years is just a long time away from now, right?
Is that where you think the puck might be going?
[00:21:49] Brad Anderson: In my mind, three to five years. I still think we're largely using the same form factors. And so I, I think we're still gonna be in an apps and a web-based model, but it's just gonna be fundamentally different because there's gonna be an agent there. And I may have a personal agent on my device that understands my preferences, my, you know, what I wanna do, and then interacting.
But I think we'll still be in, in a similar model. Just the way that things will be automated and done will be just much faster, much easier, much quicker.
[00:22:15] Jeremy Utley: So Henrik, we can just make this a slight, you know, cul-de-sac diversion for a second. Let's talk about innovation more broadly, set aside AI for a second.
We had a really fascinating conversation earlier on the podcast with David Okuniev, who's one of the founders of Typeform, who talked really eloquently about organizing for innovation. And when you talk about horizon one, you said most of your folks are really focused on the near-term here and now, but then you have folks, you know, who are focused on the next three to five years, kind of typical horizon-two stuff.
Can you give us a sense for how you think about organizing the organization for the various horizons? Is each horizon the responsibility of a different group, or how do you attend to both executing the current business and exploring what's coming?
[00:22:58] Brad Anderson: Yeah, that's like the trillion-dollar question, right?
You get that right and magic happens. You do it wrong and there's trouble you have to wade through. So here's how I think about it. I think about four horizons: H0, H1, H2, H3. Another way to think about H0 is KTLO, keep the lights on. What is the work that you have to do, even if you're not adding any incremental value, just to make sure that the product is scaling and meeting the needs of what you've already promised?
Okay, so let's call that H0, KTLO. A world-class SaaS organization has about 20 percent of their capacity inside of that KTLO or H0 work. Then as I think about H1: H1 is innovation that's going to come out in the next year and have a financial impact for the company in the next year.
H2 is the same thing, but on a two- or three-year horizon, and then H3 is on a three- to five-year horizon. And so the way that I've always tried to operate is I try to have about 50 percent of the organization working in H1, 35 percent working in H2, and 15 percent working in H3. So that's the first thing that I do, because you've got to make sure that you're investing for the now, the near future, and then the far future.
[00:24:09] Jeremy Utley: But are they different teams? And do they know that they're focused on different things or is it within teams and product owners and things like that?
[00:24:16] Brad Anderson: It's a combination of the two. The majority of the organization is focused on H1 and H2. With H3, that is where we usually start to pull a separate team out. Like, you know, you talk about the traditional one-pizza-box team or two-pizza-box teams: moving quickly, iterating, co-designing and co-innovating with customers, and so you're just moving very, very quickly. And then what you want to have is stage gates where you can see how the progress is being made.
And then what you want to have happen, optimally, is when something gets to the point where it's ready to actually have the rest of the team work on it, you graduate it out of that team into the masses. The thing that I always try to avoid is having an A team and a B team. For example, if you go back to the Microsoft days, there was the Windows 95 team and the Windows NT team.
And the Windows 95 team always kind of felt like they were inferior to the NT team. And so I really try to stay away from this concept of "this is the dream team and this is everybody else." But I do believe you have to have people who are focused, who are also zero-to-one people. And so with all of your people, whether they're an engineer or a UX designer or a product manager,
you've got some people who are world-class at zero to one. It's actually hard to find those. You've got people who are world-class at one to two, or at version 1.1 to 1.2. And so you need to make sure you've got the people with the right skill sets in the right places. So that horizon-three group is a really critical group for us.
[00:25:41] Henrik Werdelin: I think it's important that you mentioned this thing about the zero-to-one people or, you know, the originators, the people that kind of stare at the blank piece of paper and then think of something that's useful for the customer, in a world where it's easier and easier to start stuff and where obviously entrepreneurship is something that a lot of people strive for.
How do you attract, and how do you create a culture for, the originators, to make them kind of stay in an organization rather than go and do it by themselves?
[00:26:15] Brad Anderson: Well, my experience is I've seen two kinds of originators. There are those who just want to innovate and then basically not worry about taking it to scale.
And that's, for example, like the people that I worked with at Microsoft Research for so many years, and they were just incredible at what they did. And then there are other people who say, listen, hey, I've got this idea. I want to take it forward, but I also want to take it to the point where it scales and it's a billion-dollar business.
And so I think having the ability to understand what each individual is motivated by, what they're trying to accomplish in their lives, in their career, and then setting them up to do that. I love having some people who are just that rapid innovation, you know, running on one-week sprints. They're experimenting.
80 percent of the things that they're doing are failures, or they're not correct, but ultimately, if something's going to scale, it's got to come into a team that knows how to take it to scale.
[00:27:05] Jeremy Utley: Can we shift gears back to AI? Thank you for that slight detour down the cul-de-sac of the innovator persona. Um, and now for our regularly scheduled programming. Can you take us back to the early days of the gen AI, call it, adoption curve at Qualtrics? What did you do to get people aware, to build excitement? What are some of the early-stage activities, from a top-down perspective, that you sponsored or encouraged?
[00:27:32] Brad Anderson: Yeah, so I held what I called an AI summit. And so in the spring of 2023. I mean, really, ChatGPT, a lot of people knew about it, but it hit critical mass in February, March of 2023, right? So in the spring, I held a two-day AI summit. What I asked everybody in my organization, I call my team PXE, for Product, User Experience, Engineering.
I said, listen, just take a couple of days off, put your pencils down, and all I want you to do is go experiment and play around with our AI playground. I was being very specific, because I wanted everyone experimenting where I knew all the data was going to be secure and safe and private. And then what we did is we had two days, and we just sat there and we looked at demo after demo after demo of functioning code, of how gen AI was being put to work. There were 80 demos that were put together that we looked at.
[00:28:19] Jeremy Utley: Wow. After two days of playing, you had a two day summit and you had 80 demos of stuff that people had built.
[00:28:25] Brad Anderson: Well, so the, playing had occurred before the summit.
[00:28:28] Jeremy Utley: Yeah. Okay.
[00:28:29] Brad Anderson: And so, you know, people had taken time before that.
And once you get working on something that is interesting and unique, I mean, everyone was working later than 5 or 6 o'clock, if you will. But we sat there for two days, and we had, let's say, 70 of the most senior leaders in the PXE organization in that meeting, watching example after example. Because, you know, it was good for everyone to learn, but then all of the leadership was able to see and understand what the experiments were in the different categories and the different things.
And then I followed that up with a second AI summit about four months ago, a similar kind of thing. And then, for example, Jeremy, when we were down at that event that we were at, and I showed a few demos, those demos were all part of that AI summit V1 that we did. Those were all things that were started there.
And then we took them to product.
[00:29:21] Jeremy Utley: You know what, I'm going to give you a word for it: you declared recess. And I love that you actually used a playground, so it totally works. It's like a perfect metaphor. And the reason I mention that, I think I told Henrik this story, but I was at a big conference, and, you know, a global research leader had just done, like, a huge deck, a global presentation.
And the CMO who was running this conference said, hey, so-and-so, you just gave this enormous state of the union. Would you summarize it for the audience? There were like 2,000 people there. And he effectively said, sure, I'll do it in three words: recess is over. And he said, you know, it's time to get down to business. And he had a very stern look, and everybody in the crowd is kind of solemnly nodding, you know, and I'm thinking, okay, what am I going to do?
Because I know I'm right down the line in the panel, and I said, um, I love what he said about recess. That's great. Can you raise your hands if your company has declared a recess? And no hands across 2,000 people. I said, I'm sorry, I didn't define recess: time to play without regard for the consequences or day-to-day responsibilities. Hands up, please.
Again, no hands. And I said, I think the problem is not that recess is over, but that in most of these organizations, recess hasn't even begun. Right? And how can you possibly imagine, you know, groundbreaking possibilities if nobody's playing with the stuff? And so to me, I actually think it's huge to call a two-day recess, to say, take two days to experiment.
And then 70 senior leaders are going to take two days to review your experiments. I mean, just talk about the investment from a people-time perspective. It's enormous. And yet most organizations can't even get there, because they go, we can't let up from the H0, you know, the keep-the-lights-on; everybody's got to work at a breakneck pace all the time.
And nobody can even look up from it, right? How did you rationalize using that time? Prior to, I mean, now you can look back and go, it was worth it, but looking into it, you go, dude, am I really going to give everybody two days to play? How did you, as a leader, rationalize that?
[00:31:22] Brad Anderson: You know, I honestly had no doubt that it was going to be a success.
We've run hackathons for a long, long time, and this is just a different kind of flavor of a hackathon. And, you know, in the hackathons that we run, we'll always set up categories and say, like, here's a category: most useful for customers, most useful for engineers, the most innovative use of AI.
And then we have financial awards that we give to people for winning these hackathons, and they've been incredibly successful. And so this was just an iteration on that hackathon concept. It's been so successful for us. You know, I talked about the version one that we had of an AI summit and the version two a few months ago. I also held what I called a PLG summit, product-led growth, where I had all the teams coming in and showing the things that we can do in the product to help customers understand what are the things that they can do to get more value from us. Similar kind of thing.
Take a couple of days, experiment on it, come in and show us what you've learned, what you've done. And then, you know, we green-light some things; some other things are being worked on. But for us, for our culture, that has been a very, very successful way to focus and really emphasize the importance of something.
[00:32:29] Jeremy Utley: And is there a... I'm just imagining here, based on a conversation Henrik and I had recently with the head of AI at Moderna, who you've probably seen; they've done a lot of really cool stuff. Bryce was telling us that out of one of their early kind of hackathon-type activities, they kind of dubbed or knighted a bunch of champions, folks who are early-stage adopters who've got cool ideas.
I'm just thinking of your statement about rewards or incentives. What kinds of incentives do you find are motivating to people? And how do you leverage the momentum that's created from that event to start to cascade into sustained behavior change? Because it's easy for that just to be a blip on the radar, right?
It's like, we did that two-day hackathon; now back to the day-to-day. And there's some kind of sustained accountability structure I'm sure you've got in place. What does that look like?
[00:33:17] Brad Anderson: Yeah, let me see if I can answer your question, and tell me if I don't answer it entirely, Jeremy. Sure. Um, so two things come to mind.
The first is, you know, you take a look at all of these innovations, these pilots, these proofs of concept, and out of, let's say, the 70, we say, like, hey, these 15 right here, we want to prioritize those; let's insert them into your roadmap. And so, in a really well-run SaaS organization, the product leaders are the ones who manage the backlog.
And so on a daily basis, the product managers can go and shift the priority, insert things above and insert things below. So then when the team has capacity, they go to the next important thing. That's how we interlace things in, and you rely upon the product leaders to make the decision on what's going to have the biggest impact for the customer and for the company.
That's the first thing. Then the second thing that I believe in deeply is, take a look at this transformation to AI. Every engineer who's thinking about their future knows they're more valuable if they're skilled in AI. So a really big part of this is: we're helping you grow your skills, your capabilities, your career.
We hope you stay with us for a long, long time, but some of you are going to leave. And if we can help you improve your salability, your capabilities, by giving you AI experience, people will engage in that. They will absolutely engage, and they'll engage hard.
[00:34:34] Henrik Werdelin: In your organization, there seem to be three groups, at least in the ones I'm involved in. You have the people who are completely on board.
They are evangelizing it, and they send you stuff all the time, stuff they've done with AI. And then you have the group that's kind of, it's intriguing and they use it a little bit, or they're open for it; it's just that they have day-to-day tasks and they feel they're already stretched.
And then you have a little bit of the group that's kind of like, this is bad, or I don't want to engage with this. Have you, one, seen the same kind of dynamic? And two, do you have a way that you deal with these different groups in a different way?
[00:35:14] Brad Anderson: I think every company of size has got people that are in each of those three categories.
Um, you know, when I think about those three categories: the last one, the ones who are opposed, they're not being passive. They're opposed; you know, they can be like a poison. Yeah. And so I think leaders have got to have enough signal coming in to understand who those people are and go have a conversation with them. And sometimes, you know, the person just needs to leave the organization or the company. But you have more people in that middle one, in your category two, which are the ones who are more passive.
And that's where I think the greatest opportunity and risk is. You know, the organizations that are able to get that middle category skilled up and really driving are the ones who are going to deliver more innovation than anyone else. And so that's where I put my focus, because that group of individuals, those product managers, those designers, those engineers, are going to be the ones who are going to make the difference.
Because that's the scale, you know. So, Henrik, you can kind of think, like, maybe 20 percent are in that first one, you know, they're sending you stuff, they're all in; 70 percent are in that category two; and 10 percent are the detractors. The 70 percent is the lever. Yeah, that's the group you've got to upskill.
[00:36:29] Jeremy Utley: What are some of your personal favorite AI tools that you find yourself using on a regular basis? Or maybe the latest thing: in one of these meetings when folks are coming on, you say, hey, let me show you what I've been doing. Like, what have you been doing in the last week?
[00:36:42] Brad Anderson: Yeah. So if you asked that question of my family, they would say, dad is obsessed with ChatGPT. Uh, I don't remember the last time I went to Google or Bing, honestly. Um, that's the Microsoft coming out of me, saying Bing there, right? I just love how fast I can learn and get information with ChatGPT. So, I mean, it is probably 20, 25 times a day that I'm in ChatGPT. Then, from Google, this thing called NotebookLM.
Yeah, with the audio thing. It's incredible. Yeah, it's incredible. Um, I'll give you an example of something that, you know, I had then showed to a bunch of people about how I'm using it. So inside of Qualtrics, we have two centers of gravity. There is the product team that builds the product, and then there's the go-to-market team. And each of us, as we're doing our annual planning, we put our plans together: here are the priorities and stuff.
Well, what I did is I went and pulled those two documents into NotebookLM. And I said, tell me where we're aligned and where we're not aligned. And it was remarkable. It found the two or three places: hey, here are some places where there's misalignment. So we were able to go correct that. But it's remarkable how fast it can help you get to a point where you can make decisions and take action.
And so NotebookLM is probably the thing that I use second most, pretty much on a daily basis. My leadership team, whenever we're sitting in a meeting, people are taking notes. Everything's feeding into a notebook. And then we've got a notebook for each of the topics that we're working on.
[00:38:04] Jeremy Utley: And that's, I mean, it's great as a source of truth, right?
So, you know, what you get from a conversation with NotebookLM is actually grounded in the source documents. And for listeners, this might sound familiar: we had Steven Johnson, who's the creator of NotebookLM, on the show several months back. He's one of our favorite authors, who Google then pulled into the company to build this product.
So if NotebookLM is kind of triggering your memory, it's because of that Steven Johnson conversation.
[00:38:32] Henrik Werdelin: One thing about that, which I don't think, Jeremy, we've talked about, is that he of course came in to Google to create this kind of beautiful AI-based tool to help people to write. And what's fascinating is obviously that product suddenly got its kind of mojo when it added the audio thing.
It's great for the other stuff, but it really got notoriety with the podcast, like the on-demand podcast stuff. As we're talking about innovation, it is kind of a fascinating thing, because sometimes, um, you don't really know where the innovation kind of hit will come from. And I would imagine that very few would have predicted that NotebookLM would have been an audio hit, right?
Like, cause obviously the whole premise of it was that it was all about writing. And so I do think it is fascinating sometimes, when we work with innovation, that you kind of need to be in the process of making all the time, because otherwise you're just not in the game of discovering those kind of magical features. And the feature is kind of a little bit awkward in the interface, and, like, the whole... it's clearly something they just dropped in there, almost as like a little party trick.
[00:39:41] Brad Anderson: Yeah, that's true. So it's funny, going back to the, you know, what are the ways that we do things? How do we get people to understand, learn, and experiment? After that PLG summit we did a couple of months ago, the person who ran it took all of the documents (every one of the teams that had a proof of concept had a presentation or a document in prose), put that all into NotebookLM, and then had it create a podcast.
We sent it out to the engineering team and said, hey, here's a podcast of the summary from the PLG summit.
[00:40:10] Henrik Werdelin: It's so good.
[00:40:11] Jeremy Utley: And now, I don't know if you've played with it, Brad, but now you can actually guide the podcast, the audio overview, to focus on specific things. And so you can say, hey, I want you to do an audio overview, but only about the points of difference, or only about the themes that emerged, or focus on these teams. And it's actually able to do that now, which is pretty cool.
[00:40:29] Brad Anderson: Yeah. It sounds like I've got to get back in and do that. I haven't played around with the audio as much. For me, it has become just an invaluable asset for search, and for looking for insights.
[00:40:42] Jeremy Utley: I've got to tell you all one thing that I have found incredibly valuable: asking Claude to critique replies that I have drafted but have not sent, against my intent. So I'll say, Claude, I'm trying to accomplish this goal.
Like right now I'm in the middle of a negotiation with a client about a substantial piece of work. Here's the background context. I want you to leverage, you know, insert favorite negotiator. I want you to leverage this framework; rate my reply per this framework. And twice (what's fascinating, I mean, I think I've got a pretty good beat on people, and I've negotiated a lot of deals like this, so I kind of think I'm okay at it), twice now in the last week, Claude's been like, I would not say that. That is not what, you know... and emphatically. And then I'll kind of go, okay, I'll change that.
And then I go, now notice how I tweaked it per your feedback. And it says: you did not incorporate my feedback. And, I mean, the ball's still in the air, so to speak, on whether it works. But at the very least, I would say my level of confidence has gone up tremendously. Not confidence in myself, but I think a lot of times you do stuff like that and you're like, unless I, like, get Brad Anderson on the phone, I have no idea whether this is a good strategy, right?
Like, you know, the average person off the street doesn't have a long Rolodex of amazing mentors who are capable of nuanced feedback, right? But now I can feed that into Claude and get, I mean, what feels like (again, TBD whether it actually is), but at least from the perception of the user, it feels like incredibly thoughtful coaching that shifts the trajectory of those interactions.
[00:42:27] Henrik Werdelin: Do you ever worry that we get to a point where the prism of the models makes us blind? And what I mean by that is, I do as you do, Brad: I take a bunch of content into a notebook, or any of the models, and I have it help me to basically figure out what are the core insights of it. Um, to the point now where one of my lazy new behaviors is that I take whole books and I basically just shove them in there, like stuff that I don't have time to read, but it's on my read list, and I just couldn't get to it.
I know, it's not approved,
[00:43:00] Brad Anderson: Not approved! Everyone on the podcast: not approved, not approved. Pay for
[00:43:04] Henrik Werdelin: it first. Um, but you sometimes obviously worry then that, you know, you take like a book like Idea flow. Written by and you say, you know, Hey, here's the book, please summarize the major frameworks, the most interesting stories and any kind of anecdote that is good of showing how do you come up with great ideas and it'll spit out something and you can compute it pretty fast.
But obviously I've lost the fidelity of the thing. And so, as you're now doing this more and more, how do you navigate in your brain the desire for getting things condensed and consumed fast and efficiently, with the anxiety of letting something that is kind of alien technology become, basically, the glasses through which you now consume all this?
[00:43:59] Brad Anderson: You know, what comes to mind for me is, so first of all, going back to what Jeremy was talking about: the way that I internalize what you described that you're doing, Jeremy, is you've got a methodology that you've built over years of practice here about how to negotiate, et cetera, et cetera.
It's a methodology. And then you're taking that methodology and you're applying it in different actions, right? This, to me, is why I believe we're going to be living in a multi-large-language-model, multi-small-language-model world as we go forward. It's not going to be one large language model that rules them all. You know, we should talk a little bit about how you make a decision on when you use something like ChatGPT versus when you use something like Llama, because there are a lot of decisions that you have to go into as a software engineer there.
But fundamentally, you know, what I believe is there are going to be all kinds of organizations on the planet. They're going to have unique data sets or unique methodologies. They will be able to schematize those methodologies, take that unique data, and train a large language model that is specific to health care, to financial services,
to experience management. And then it's going to be multiple models working together that are going to be able to bring the expertise from around the world in a way that is not compromised. But to me, that's the key here: how do we enable all of these models to work together? And I think that also kind of leads back into the conversation we were having about the agentic future and how that works, cause that's going to be a core part of that as well.
[00:45:25] Jeremy Utley: Going back to the question about human behavior and, you know, Henrik's kind of bell curve of lead users, the laggards, and the resistors, and going all the way back to the start of the conversation where you talked about everybody using AI every day.
Are there mechanisms that you use to actually... is that a part of people's KPIs? How do you know whether people are... is it just an opt-in, or is there some kind of accountability? Or is it a natural selection process? How do you think about understanding what the workforce is doing?
[00:45:56] Brad Anderson: So we're debating this right now as a leadership team. And I have a black-and-white thought on this. I want people using our AI playground every day. And I know who comes into it every single day. And so I believe that we should have OKRs that we have, you know, everybody in the playground once a day, using it.
And we should publish that to the managers and to the individuals.
[00:46:18] Jeremy Utley: Yeah. And why do you believe that?
[00:46:19] Brad Anderson: Cause I think that, you know, you get what you measure. And so if you really want to kind of push an organization to do something, you have to figure out a way to measure that, so that you can course-correct, or you can nudge where you need to go nudge.
So I could walk around there... like, if I were to pull up the list of people who have been in our AI playground in the last week, it's half of the company. Half of the company has been in it in the last week, but you can see pockets of different functions that are not spending time in it.
And so then what I would do is I would go to that function and say, listen, we did a hackathon where we had the engineers and the product managers and designers go look at how they could put AI to work to make their jobs... to make themselves more efficient. Let's go do the same thing with you.
We will assign one of our AI specialists. Let's come in and have a hackathon for the finance department, a hackathon for the legal department, where, you know, you take a day or two off and you just have people experiment with subject-matter experts who can show them how to do it.
[00:47:15] Jeremy Utley: And what I'm hearing you say there, Brad, is it's not punitive. You're not measuring to punish, or at least not to begin with, right? You assume it's probably ignorance, or they need help sparking their imaginations, right? I assume there's probably a different action to take if somebody is an active resistor and refusing. But the first step is: we clearly need to do a better job of communicating the value here and helping people see that.
And something like a KPI around, you know, time in playground is a good heuristic for: where do we need to deploy our specialists to go unleash the human capital here?
[00:47:48] Brad Anderson: You got it.
[00:47:50] Henrik Werdelin: I wonder how fast this is going to go, and if it's just going to go so fast that the resistors, like, honestly, just will get washed out. Like, one of the things... you talk to someone like you, Brad, who's obviously had such a long and successful career, but you've clearly just picked up on this immediately. Like, the second it kind of hit: yep, this is a new tool, you know, it's the next kind of platform.
And I think we all kind of just believe that the wave of AI is going to move so fast. And so we're now doing all these kind of different exercises, I think in all our organizations, of how do we get people to use it. But I wonder if it's a problem that honestly is going to solve itself a little bit, because honestly, like, in even six months,
if you are not using it, you're just going to be so disadvantaged. It's going to be like you saying that you don't use email, or something else that just would be kind of silly in today's world.
[00:48:47] Brad Anderson: I get this question all the time from family and friends: is AI going to replace humans? Um, what I tell them is, you know, AI is not going to largely replace humans.
There'll be some functions where it absolutely will. But one thing I can tell you is: the humans and the organizations using AI are going to replace the humans and the organizations not using AI. It's that black and white.
[00:49:10] Jeremy Utley: Unquestionably.
[00:49:11] Henrik Werdelin: And probably fast, right?
[00:49:12] Brad Anderson: Yeah. , let's have a little bit of fun with some numbers.
I think one of the reasons why my mind, you know, sees these platform transitions is, back in the early nineties, I was participating in the transition from the mainframe to the PC, and I saw how the PC just completely transformed things, doing them in a much more efficient way, at a much more cost-effective rate.
Right? I then saw failures. I saw where Microsoft missed out on the iPod (you know, the Zune is not something I think anybody in the company was ever proud of), missed out on mobile. You know, they had a commanding presence in the browser and lost that. Those are experiences that you have, where you see the lack of action, or the lack of investment and focus, can have a detrimental effect.
But then I also saw the move to the cloud, and I was on the front row of the move to the cloud, as we moved Office to the cloud, as we moved all the management and security things that I was responsible for into the cloud. And so, you know, you see how fast these things can pick up. But let me ask you a couple of questions.
Um, let's talk about the moves to the browser, to mobile, to ChatGPT. How long did it take for Netscape to have 100 million active users?
[00:50:26] Henrik Werdelin: Years.
[00:50:28] Brad Anderson: Okay.
[00:50:28] Jeremy Utley: Did they ever get 100 million? I don't even know.
[00:50:31] Brad Anderson: Good for you, Jeremy. The highest that I can see is they were up at 85 million, and that was after five years.
Wow.
Okay. All right. How long did it take before there were 100 million iPhones in production and being used?
[00:50:46] Jeremy Utley: I'm going to go with three years.
[00:50:49] Henrik Werdelin: I'll take that.
[00:50:50] Brad Anderson: Yeah. A little bit more: four and a half years.
[00:50:52] Jeremy Utley: Okay. Okay. Order of magnitude? That's not bad. Okay. We're in the ballpark.
[00:50:56] Brad Anderson: All right. Now it gets interesting.
How long did it take for ChatGPT to have a hundred million users?
[00:51:03] Henrik Werdelin: From, like, 2022, when GPT-3.5 kind of came out?
[00:51:08] Brad Anderson: When 3.5 came out, yeah.
[00:51:10] Henrik Werdelin: Six months?
[00:51:11] Brad Anderson: Weeks.
[00:51:13] Jeremy Utley: So wait, but so so sorry I just I wrote the question down and I want to make sure that we draw The line from your answer here to the question So I'm just gonna repeat it Because I think the question you frame for yourself and then we're answering is how fast will it?
take for AI-augmented humans to replace AI-refusing humans? That's what I wrote down as the question. And so what do you take from... you just mapped out that timeline. What are the implications of that, call it, steepness of curve on the question of how long before AI-augmented humans just replace non-augmented?
[00:51:46] Brad Anderson: Yeah. Thank you for holding me accountable on that. I appreciate it.
[00:51:49] Jeremy Utley: I didn't want to let you off the hook easy, right, to leave it to the audience to infer what they will. It's like, what's the statement? Let's hear it.
[00:51:55] Brad Anderson: Yeah. I think that the examples we used, of Netscape and the iPhone and ChatGPT, do show the rate at which new technology is being adopted. And this move to AI is going to happen faster than anything we've ever seen in our lives before. You know, certainly a lot faster than the move from the mainframe to the PC, a lot faster than the move to the browser, a lot faster than the move to mobile. And so the way that we think about it internally is: if we are not staking out a position right now, in 2025, that we believe is where the long-term position of the company needs to be, we're going to lose. You know, you don't have years.
[00:52:33] Jeremy Utley: And just to put a fine point on that for listeners (we're preaching to the choir here, but for listeners): if I'm not staking out a position personally, I'm going to lose. I mean, I think it's true organizationally, and I think it's true personally. So now, that being said, I want to go back to this question of KPIs and measuring. You said it's clearly black and white.
You're clearly in favor of actually measuring folks' engagement. What's the argument against measurement? Or can you steelman that argument? Why are folks, I would assume on your leadership team, arguing against doing everything we can to get people up this learning curve?
[00:53:09] Brad Anderson: Yeah.
So the arguments, or the debates, we're having aren't around, you know, do we want our people using AI or not. It's a matter of: if we're going to measure five things for the company, what are those five things, right? Where's the focus? And so, you know, it's more of a prioritization than it is a lack of belief that we need to do this. It's interesting, as I think about where we're at in kind of the AI adoption model right now, it has so many similarities to where the move to the cloud was. You know, even as recently as 2017 and 2018, the number of times that I would talk with CIOs or chief security officers about moving them to Microsoft 365, moving from on-prem onto the cloud, where the response would be something like, over my dead body.
Are we ever putting our email in your cloud? Because I can protect it better than you can. Um, and then that just melt it. That just melted away and everybody's in the cloud now. You see the same thing happening with the adoption of AI. There's fear, there's unknowns, there's anxiety, there's caution.
And so you have some organizations who are just blocking it outright: no gen AI, you can't use it. Big mistake, in my opinion. I think this is actually a call that goes out to the chief security officers and the CIOs: they have got to be experimenting more than just about anybody else in the company, to understand how these things work and how they can be put to work in a way that makes everything better, more efficient, more productive, but still honors the privacy and the protection that brands give to their customers. That's the way I think about it.
[00:54:44] Henrik Werdelin: That makes a lot of sense. I do think it's interesting that a lot of chief innovation officers, chief digital officers, and tech teams are being given the task of figuring out how to get AI into the organization, and a few of them are mostly compensated on making sure that the company is safe.
Right? It's like when you give this to the legal team and ask, how much AI should we put in? I had a conversation with a CFO the other day. We were talking about different ways of making sure the wrong data didn't end up in the wrong foundation models.
And he said, you know, absolutely, we have to, and so on. And I said, that's fine, so just give me a number. How much is that worth to you? What is the amount of money you're willing to pay to make sure that happens? And then, obviously, the conversation completely changed.
He wasn't necessarily feeling it was a 20-million-dollar thing. It might be a few-million-dollar thing, or at least enough to show that we're doing it, in case there was ever any liability. So the conversation became much more nuanced. And I'm fascinated by your thoughts on who should really be driving this kind of adoption in an organization.
[00:56:01] Brad Anderson: I think that varies by industry. For example, I'm a software guy, so I think the product and engineering teams should be the ones primarily driving it, but that's going to be biased. The way I would answer your question is: if you think about any brand, any company, any division, there's going to be one aspect that is the core thing being sold, whether it's a physical product or software. Whatever that organization is, that team needs to be engaged on a daily basis on how they're putting AI to work.
Let me give you a couple of data points that may help you understand where a lot of my learning on this has come from. Here at Qualtrics, when people think about Qualtrics, the first thing that comes to mind is that we're a survey company. And that's where we started 22 years ago, when the company was founded by a father and his two sons in the basement. But our investments over the last five to seven years have been much more weighted toward natural language understanding, natural language processing, and AI. To give you an idea of the scale at which we're operating, which gives us this unique data to learn from: in 2024, we'll have about 1.2 billion survey responses come through our platform that we analyze. That's growing at about 16 percent. We will have over 2 billion conversations, the majority of which are calls with the call center, come through our platform for analysis, and that's growing 57 percent year over year. So we will have over 2 billion calls that come into our system.
We transcribe them from voice to text, put them through our AI, and come back to help an organization understand: what can they do to improve the experience of their customers? What can they do to improve the experience of their products? What can they do to improve the experience of the people who are supporting the customers?
And as you do that, you build loyalty, you build connection, and that leads to long-term value, LTV. But the scale at which we're operating requires us to have these incredible people who understand how to put this new technology to work, because the scale is just immense.
Let me give you another interesting data point. The average survey is 13 questions, and we've had more than 15 billion surveys answered on our platform since our inception. That means we have over 200 billion human-answered questions that we can anonymize and aggregate to learn from.
[00:58:20] Henrik Werdelin: That's incredible. Are you getting to the point where you can predict what people will ask?
[00:58:27] Brad Anderson: That's where I think it's going. For example, think about how research has historically been done, whether it's research on products or research on customers. Three years from now, 50 to 60 percent of that is going to be done synthetically.
And so you will have providers like Qualtrics with this immense data set we've been able to train our AI on, and an organization will be able to ask what their users would want, or what this community, this category, would want, and get an answer without even having to go out and do research at all.
[00:58:59] Jeremy Utley: Okay. So, I was actually just talking with a consumer brand yesterday at their headquarters here in Palo Alto, doing a talk, and my talk was focused on experimentation as a means of creating data. My big thing with surveys is that they're great for the past, but terrible for the future, because you don't want a user hypothesizing about the future. What would you do? No. As Henry Ford said, if I had asked my customers what they wanted, they would have said a faster horse. So surveys are great for "what was your experience like," but bad for imagining a future.
Within a world of synthetic data, what I believe in, just to close the loop on experimentation, is that you want to afford someone an opportunity to make a decision. A decision is the kind of data that's highly credible. An opinion or a sentiment is less credible, because those go like: excuse me, mister,
I just wanted to know, if I brought you a beer every day at the end of work, would you want to drink it? It's really easy to manipulate that. Everybody says, sure, buddy, put me down for a yes for your home beer service, or whatever. And that's how a lot of surveying about prospective opportunities gets done.
It strikes me that synthetic user personas could be useful. How do you think about leveraging a synthetic data set for novel use cases in the future? I realize that's kind of a broad question, but you just sparked it for me.
[01:00:23] Brad Anderson: Oh, I'm a big believer that organizations like Qualtrics, like us, who have this unique data set to learn from, are going to be able to help organizations get answers to questions.
Now that you say decision, I term it as actions: decisions or actions. What we're trying to do here is reduce the amount of time it takes to get to a decision and reduce the amount of time it takes to take an action. And if it takes you weeks or months to go out and do research, and there's another way to do it that is instantaneous,
that's where I believe the market is going to head, and it's going to be organizations like ours that are able to pull that together and provide those experiences.
[01:00:58] Jeremy Utley: I like that: reducing the time to action. That's cool.
[01:01:05] Brad Anderson: Eventually, you want to take actions in the actual interaction itself.
And that's why I think this agentic future really comes into play. But Jeremy, let me build real quickly on your point that surveys are good for the historical but may not be good for predicting the future. This is a place where I think generative AI is going to have a profound impact, because one of the things we have built, which is in production right now and being used broadly, is generative-AI-powered surveys.
So what that means is, when you're asking an individual a question, say for feedback, the survey, through generative AI, is automatically able to adapt and actually have a conversation. It's like a digital interview, but at the speed and cost of a survey. And it's been fascinating what we have found. This is being used at scale right now. We call it conversational feedback; I didn't want to call it a survey, because it's no longer a survey. It literally is like a mini focus group, or a digital interview. And here's what we find: 40 percent of the time, when we see a vague or unactionable response and we prompt the individual to share more, they engage and share more detail, with no drop-off in completion rates. What's fascinating then is that there are all these different measurements we use to understand the quality of that second response relative to the first. You can use things like the number of words and the number of syllables; there's a bunch of things you can look at in the lexicon and the language being used, and you get a quality score.
And in all of those measurements, that second and third response is double the quality of the first response. So it completely transforms how quickly an organization can learn what they're seeking, because they now have generative AI put to work in a way that brings in much higher quality feedback, much faster.
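[Editor's illustration] The quality measurements Brad describes can be sketched with simple text heuristics. This is only an illustrative approximation: the particular word-count, syllable-estimate, and lexical-variety signals and their weighting below are assumptions for the sketch, not Qualtrics' actual scoring model.

```python
import re

def syllable_estimate(word: str) -> int:
    # Rough syllable proxy: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def quality_score(response: str) -> float:
    """Heuristic quality score for a free-text survey response.

    Combines length, estimated syllables, and lexical variety,
    loosely mirroring the kinds of signals mentioned in the episode.
    The weights (0.5 and 10) are arbitrary illustrative choices.
    """
    words = re.findall(r"[a-zA-Z']+", response)
    if not words:
        return 0.0
    n_words = len(words)
    n_syllables = sum(syllable_estimate(w) for w in words)
    lexical_variety = len({w.lower() for w in words}) / n_words
    return n_words + 0.5 * n_syllables + 10 * lexical_variety

# A vague first answer versus a detailed follow-up after a prompt:
first = "It was fine."
followup = ("The checkout flow kept timing out on mobile, and support "
            "took two days to respond, which made the refund painful.")
assert quality_score(followup) > 2 * quality_score(first)
```

Under these toy weights, the detailed follow-up scores more than double the vague first answer, consistent with the "double the quality" pattern Brad reports for real responses.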
[01:03:00] Henrik Werdelin: On that note, do you think this will mean the end of NPS as a measure of feedback? NPS was rudimentary, just a number that was easy for us on the receiving end to compute, but now that you can ask high-quality questions in the conversational survey format you just described, we can actually get much more fidelity in the feedback.
[01:03:23] Brad Anderson: I think NPS will always be used, at least directionally: is my customer satisfaction improving, decreasing, or staying the same, in terms of what is happening over time. But now, I'll give a very specific example.
Someone gets the classic NPS question: hey, would you recommend us to a friend? And they give a six. Well, using adaptive feedback, this conversational feedback, you can ask: what could we have done better? And let's say the individual comes back with a response of "nothing." That's not actionable. But if you can then ask again and go a little bit deeper, you start getting feedback. Now you're actually going to be able to move the satisfaction of an individual while they're giving you feedback.
Let me give you another really interesting example. I was talking with one of our large customers, an organization that delivers an experience, and what they shared with us is that when a customer is having some kind of problem during the experience, the NPS drops. If they're able to solve the problem during the experience, whether that's while you're at Disneyland, at a hotel, or on a flight, they get half of that NPS loss back. If they don't solve it by the end of the experience, it drops by another 20 points. Those are examples of being able to understand what individuals are experiencing, whether it's a prospect, a customer, or an employee, and then take the action in the moment.
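[Editor's note] For listeners less familiar with the metric behind Brad's example: the standard Net Promoter Score is the share of promoters (scores 9 to 10) minus the share of detractors (0 to 6), yielding a number from -100 to 100. A minimal sketch:

```python
def nps(scores: list) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither group.
    Scores are expected on the standard 0-10 scale.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# A six, as in Brad's example, counts as a detractor.
assert nps([6]) == -100
assert nps([9, 10, 7, 6]) == 25.0
```

This is why the follow-up question matters: a six is not neutral in NPS terms, so learning what would move that respondent has direct impact on the score.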
[01:04:59] Jeremy Utley: Right. Contextual.
[01:05:00] Brad Anderson: Yeah, one last example using this conversational feedback. One of our customers using it in production is a healthcare organization, and they had an elderly woman filling out a survey, an online survey. The way our conversational feedback works is that it understands when to show empathy. The individual was sharing what had been a poor experience, and the conversational feedback came back with empathy. She actually called it out in one of her responses: wow, you are so empathetic, you're listening to me, you're hearing me. This is the kind of thing that creates connection and loyalty with customers. It was remarkable.
[01:05:37] Jeremy Utley: That's cool. You know what? You're a wonderful podcast guest, and I know an even better grandfather, and we're grateful for your time with us today.
It's so fun to catch up with you. I learn something new every single time I talk to you, Brad. Thank you.
[01:05:51] Brad Anderson: You're kind. I feel the same way.
[01:05:54] Henrik Werdelin: Jeremy, it's time for the wrap up. Let me prompt you like my favorite large language model.
[01:06:03] Jeremy Utley: I am your favorite large language model. What are you talking about?
[01:06:07] Henrik Werdelin: Can you please put in a few bullets the most important concepts, frameworks, or ways of using AI that were mentioned in this conversation?
[01:06:19] Jeremy Utley: Oh, wait, you literally prompted me like an AI. Thank you so much. Folks, if you could see how happy Henrik is with himself right now, you'd be blushing,
because he's quite happy with himself. Okay. Speaking as a NotebookLM here for a moment: of course, I resonated with the recess, the idea of giving folks two days to experiment in the playground. I love the fact that they had so many demos, with senior leaders looking at them, assessing them, and understanding them.
I think that's tremendous. I love the conversation about how we think about driving behavior change and measuring it. And of course, I'm on the Brad Anderson side of the spectrum: I think KPIs should reflect usage. Increasingly, it's got to be not okay to not take action.
Right now, in far too many organizations, the status quo is that inaction is acceptable. I think inaction must be a punishable offense with consequences. That's the only way to drive folks off the sidelines. And importantly, Henrik, I did not say success has to be the thing that's rewarded.
I think it's action, experimentation, trying. Fail a hundred ways and I'll give you a promotion. If there's nothing you have tried, we need to put you on a performance improvement plan. We have to make that shift: inaction is the costly activity.
[01:07:47] Henrik Werdelin: The thing I really thought a lot about while he was talking was how much more value we can get out of stuff that is already being generated. He had two examples where that became very clear. One is when he does an offsite and takes all the notes, all the recordings, all the conversations, and just drops them in a notebook. The other is when you do surveys and have the ability to say: hey, I'm sorry you didn't like it,
you gave us a score of five, can you tell us a little bit more about that? Because the system can be conversational, and because you can now compute a vast amount of input and distill it in a very efficient way, you can get so much more fidelity out of all these things. So what I was racking my brain on while he was talking is: what other sources of output are already being produced that I can use today? Like your small example, which is just your email.
But I do think it's fascinating to keep brainstorming, in our own lives, about what big chunks of output are already being generated that we can extract more value from today.
[01:09:18] Jeremy Utley: The other thing, as I keep looking at my notes while you're talking. Sorry, I wasn't listening. Kidding, joking,
I was. The other thing that stood out to me was the first thing he said when I asked him about AI culture: everyone in every function must be using AI every day. To me, that's an unambiguous objective. Everyone, every function, every day.
And as a manager, whether I'm in middle management or a senior leader, that is something I can actually assess relatively quickly. And I love that. I love it when you have a criterion that can easily be demonstrated to be yes or no. That's one where I go: oh, got it.
Everyone, every function, every day.
[01:10:11] Henrik Werdelin: I think that was good. What was that?
[01:10:16] Jeremy Utley: It's obviously the end of the day here, folks, you hear what happens. And here's the thing: drop this conversation into an AI, and you're probably going to get as good a synthesis as we can give you. We're trying to do this on the fly, but for crying out loud, you can open Gemini, put the YouTube link in, and ask it what Henrik and Jeremy would say if they were smarter, if they had more gas in the tank, and it'll probably give you a better answer. Okay, folks. If you enjoyed this conversation with Brad Anderson, amazing career, amazing leader, and wish we talked to more people like him, would you please invite us to your conferences, where we can meet people like Brad at dinner?
I met Brad at a dinner and we hit it off. We got a man crush, and you know what I want? I don't know about Henrik, but I want more conference invitations so I can meet more people like Brad. We'll get them on the show, and you'll know what Henrik and I learn in a private dinner conversation within two months. Two months ago, Brad was telling me this stuff in private, and now the whole world knows it.
If you want that kind of experience with whatever big-timer is in your life, just invite me and Henrik to a conference. For crying out loud.
[01:11:25] Henrik Werdelin: And with that, thank you so much for listening.