In this episode, Eric Porres, the newly appointed Global Head of AI at Logitech, walks us through his mission to transform a 7,000-person organization into a team of AI-fluent knowledge workers. He shares what his first 100 days looked like—from running a company-wide GenAI survey to personally training over 800 colleagues—and how those efforts laid the foundation for a scalable, human-centered AI strategy.
Eric talks about building a culture of “augmented intelligence,” not just through tooling, but through habits, champions, and real behavioural change. He shares practical frameworks—like using AI to improve your prompts, embracing long-form instructions, and designing with role-context-task-output in mind—and explains why measuring success goes beyond usage stats to include depth of interaction and employee NPS.
The conversation also looks ahead to the agentic future: personalized AI teammates, embedded workflows, and custom knowledge bases. Whether you’re leading AI at a global company or just trying to help your team get started, this episode is full of real-world insights on how to move from AI hype to meaningful adoption.
Key Takeaways:
LinkedIn: Eric Porres | LinkedIn
Logitech: logitech.com/
Eric's website: Porres
Psychedelic GPT: Trippin' The Chat Fantastic
These screenshots showcase how Eric Porres organizes AI research using NotebookLM, as discussed in the episode.
NotebookLM Dashboard: NotebookLM Dashboard - Eric Porres
NotebookLM Research: NotebookLM Research - March 9-15, 2025
00:00 Intro to Eric Porres
00:46 What the First 100 Days Look Like as Head of AI
01:51 Measuring AI Adoption: Surveys, Usage & Quality
05:34 Training 800 Colleagues: How Eric Taught AI Mastery
19:50 From Side Role to Head of AI: Eric’s Transition Story
23:02 Scaling AI Across Teams: Tools, Access & Equity
28:56 Choosing the Right Model for the Right Job
30:28 Measuring Success: NPS, Feedback, and Real Usage
32:05 The Rise of AI Champions and Teaching as Proof of Mastery
34:31 Beyond Fluency: Preparing for the Agentic Future
36:45 Atomizing Workflows: Making AI Work for You
39:10 AI in Sales & Customer Service: The Agent Use Case
43:26 Personal Knowledge Bases and AI-Augmented Thinking
50:49 Final Thoughts and Takeaways
📜 Read the transcript for this episode: Transcript of Eric Porres is Rewiring Logitech’s Org for the AI Age. First, he trained 800 people himself.
[00:00:00] Eric Porres: And the first tip I use in that is to say, okay, how can you use AI to improve itself? And this is an example of, you know, I can't tell you both internally and externally how many times I do: okay, hey, everybody here, show of hands. Before you submit the prompt, submit the prompt to AI to make it better. AI knows better than you do how it's going to interact with these words you are about to give it.
Hi, I'm Eric Porres, the head of global AI for Logitech. I'm super excited to be here today with Jeremy and with Henrik to talk about what it's like to be head of global AI for the first hundred days at Logitech, and how I hope to enable my knowledge-working colleagues to become augmented-intelligence superstars.
[00:00:46] Jeremy Utley: So you're appointed head of AI at Logitech. Imagine that. What do you do? What do the first hundred days look like, Eric?
[00:00:58] Eric Porres: Uh, Jeremy, great question. So first, of course, thanks for having me on with you guys.
It's great to see both of you. So, what do you do in the first hundred days? You hold on to your ejection seat and you say, all right, I'm going to ride this thing out as far as I can. No, I'm kidding, of course. The first hundred days are, I would say, a lot about instrumentation and then orchestration. And what I mean by that is, the old adage is true: you can't manage what you don't measure. AI has existed in various guises and iterations at Logitech for a very, very long time. It's existed in hardware. It's existed in software. It's even existed internally in how we interrogate our own data. But I would say it's also been, Jeremy, somewhat of a disaggregated set of individuals working within specific capacities. And so at a certain point, we said, you know what, rather than it being perhaps 1 percent of a hundred people's time to think holistically and horizontally around the organization, it probably does make sense to provide some kind of center of excellence and gravity, a gravity well, around bringing together all of what we do and think about AI.
So within those first hundred days, really within the first 30 days, I said, okay, great, what do I need to do? I need to figure out, for the individual, so of the roughly 7,000 of us who work around the globe, how are individuals like myself, knowledge workers, using AI (or not) within their own capacity to be augmented-intelligence people?
Right? So I think a lot about AI. I think, Jeremy, maybe you even said it in one of your recent articles: there's average intelligence, there's aspirational intelligence, and then there's augmented intelligence. I would like to be in a place where the mark I leave at Logitech a year from now, or two years from now, is one in which all 7,000 of us are operating on this augmented-intelligence basis. But we need to benchmark; we need to start somewhere at the beginning. So the first benchmark was: let's do a State of Gen AI survey across the entire company, to understand where our strengths and weaknesses are and then how we can get better.
And also, for me personally and professionally, to be able to say, okay, Jeremy, six months from now, a year from now, have we moved those markers? What is the NPS on our use of generative AI?
[00:03:31] Henrik Werdelin: And so what was it? Was that a questionnaire, or was that you looking into, um, enterprise ChatGPT and seeing how many sessions? What were some of the things that you were measuring?
[00:03:41] Eric Porres: So it started with a survey. And look, in previous lives and iterations, I have built research and insights practices at the four different companies where I was a CMO. So the practice of research is not foreign to me. It started with the survey, but it was enhanced with usage information from our own instances of AI. In that case, to your point, we have something we call internally Lodgy AI, in which we have built, similar to, I think, the early Moderna days when they had their mChat prior to moving to OpenAI's GPT Enterprise, our own way to bring in different models. I love that for a variety of reasons, but it also gave me the ability to look, of course on an anonymous basis, at the usage of our own systems. Are we using it in that way? Go ahead,
[00:04:43] Jeremy Utley: I'm just digging into that idea of survey usage. One thing I've always been curious about, and this is somewhat ironic, I know, coming from the idea flow guy, because most of the time I'm talking about more is better: when it comes to AI, I don't think more is better. I think better is better. And what I mean by that is more usage doesn't necessarily mean better usage, right? It could be more average use, which is probably actually not better than just humans without augmentation.
Can you talk for a second about how you evaluate quality? Are you able to evaluate quality? Because I imagine self-reported surveys and certainly usage data would give you kind of a quantity measure, but probably wouldn't tell you much about the quality of folks' interactions with LLMs.
[00:05:29] Eric Porres: Yep. So it reminds me of my high school band teacher, Ken Pollitt. If he's out there, I'll send this podcast to him, he's still alive. And he used to say, you know, practice makes perfect, but it also makes permanent. So, to your point, you could be doing the same thing over and over again, but if you're starting from the wrong seed knowledge or kernel, it can be completely off. I'll give you one example of that. And let me take a step back: prior to doing that survey, I also had, over the last year, before taking this role, an opportunity to teach, which is something hopefully, Jeremy, you would appreciate, about 800 of my fellow colleagues how to master generative AI. And the first tip I use in that is to say, okay, how can you use AI to improve itself? And this is an example of, you know, I can't tell you both internally and externally how many times I do: okay, hey, everybody here, show of hands. Before you submit the prompt, submit the prompt to AI to make it better. AI knows better than you do how it's going to interact with these words you are about to give it.
[00:06:39] Jeremy Utley: Do you mean, like, hey, ChatGPT, I'm trying to do this, and here's the prompt that I wrote. Can you give me feedback on the prompt before I ask ChatGPT what I was going to ask ChatGPT?
[00:06:48] Eric Porres: That's exactly right. And whether it be GPT or Claude or Gemini, it doesn't matter which model you work with in that way. They all will give you that kind of feedback. And that's the other thing I do. I don't have it on my computer anymore, but the other thing I encourage people to do, I always say, is put a Post-it note on your computer saying "Ask AI." And "Ask AI" really applies to everything.
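Eric's "submit the prompt to AI to make it better" tip can be sketched as a tiny meta-prompt wrapper. This is a hypothetical illustration, not any tool Logitech uses; the wording of the meta-prompt is an assumption:

```python
def make_improvement_request(draft_prompt: str) -> str:
    """Wrap a draft prompt in a meta-prompt asking the model to critique
    and rewrite it before the real question is ever submitted."""
    return (
        "Before I submit the prompt below, act as a prompt engineer: "
        "point out missing context, ambiguity, or an unspecified output "
        "format, then rewrite it as an improved prompt.\n\n"
        "--- DRAFT PROMPT ---\n"
        + draft_prompt
    )

# The wrapped text is what you would paste into ChatGPT, Claude, or
# Gemini first; only afterwards do you submit the improved prompt.
print(make_improvement_request("Summarize our Q3 results for leadership."))
```

The same pattern works in any chat interface; the only design choice is keeping the critique step separate from the real request so the model evaluates the prompt rather than answering it.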
So Jeremy, back to your question. From teaching and training 800 people, I could understand qualitatively where some of the gaps were, based on those conversations and Q&As in Zoom sessions and in-person sessions. Number one. Number two, if we looked at some of the prompt history, we could ask, okay, are people using some concept of role, context, task, and output? So if we think about the basics of role-context-task-output, depending on which framework you use around prompt engineering, or light prompt engineering, are people doing that? And also, are we looking at it in terms of depth of prompt? Now, I'm not looking at anybody's prompts other than my own; I have full-spectrum analysis into my own prompts. But what we can look at on a history basis is, okay, what is the average depth of prompt? And if the average person is prompting, you know, 2.7 chats per conversation or so, that's giving me a quantitative and qualitative estimate to say, okay, the conversations aren't that deep. And then I look at the survey, and the survey comes back and says, hey, love AI, or generative AI, for content writing, summarization, drafting emails, or whatnot. Okay, that's good. That's AI curious. And now I want you to get AI fluent. So part of the transition for us is going from curiosity to fluency.
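The role-context-task-output framework Eric mentions can be sketched as a trivial template builder. The section labels and the example values here are illustrative assumptions, not a Logitech artifact:

```python
def rcto_prompt(role: str, context: str, task: str, output: str) -> str:
    """Assemble a prompt from the four role-context-task-output sections."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output}"
    )

# Hypothetical example in the spirit of Eric's finance-team use case:
print(rcto_prompt(
    role="You are a financial analyst at a consumer hardware company.",
    context="Attached is last quarter's operating variance report.",
    task="Explain the three largest variances and their likely drivers.",
    output="A bulleted list, one bullet per variance, two sentences each.",
))
```

The point of the template is not the exact labels but that each of the four sections is filled in deliberately rather than left implicit.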
[00:08:41] Jeremy Utley: I love even the simple heuristic of how many interactions are in the conversation, because I think anybody could acknowledge that in a human conversation, if folks looked at our transcript and there's only one person talking the whole time, say what you will about what a good production is, it would not be a good conversation, right? There's a back and forth. And I think even that simple heuristic, Eric, I just wanted to call out: look at how many interactions you're having. If you're not having a vibrant back and forth, it's not a conversation.
[00:09:08] Henrik Werdelin: and we know by now, of course, that the more people know about using AI, the more they use it as a conversation instead of just as a prompt. And so I think that's a good point.
[00:09:17] Eric Porres: I was just going to say, to that point, look, it's nobody's fault, in the sense that for the last 20-plus years we have been taught by Google, and to a lesser extent Bing, and, for those of us who are old enough with enough gray hair, AltaVista, Lycos, and others: we do a search, we ask one question, we get back a sea of blue links. And then we get our answer, or maybe we go into one deeper inquiry, and then we go back to that same one-question pattern. That heuristic of going from one question to two to three is not something we, as a species, have necessarily been trained to do. So it's not a surprise that prompts tend to be short, and they tend to be a mile wide and an inch deep. That's learned behavior.
[00:10:08] Henrik Werdelin: Before we go back, because we'd obviously love to learn more about how to master AI, the training stuff, and what you did after that: you measured kind of the state of the union. Was there anything that surprised you when you got the survey back? The reason I ask is that at some of the companies I'm involved in, I'm always a little bit puzzled why suddenly, for example at BarkBox, the logistics team is just incredibly deep in AI, and there's no real reason why they should be more into it than another team. Did you see any patterns that stood out as you were reviewing the surveys?
[00:10:48] Eric Porres: Well, yeah, it's funny you mention that. So I looked at the survey ahead of this call to say, okay, what were some of the challenges? From the survey, we saw that 96 percent of employees, my colleagues, have engaged with AI in some way. Now we can talk about the level of depth and whatnot; again, it's about going from this curiosity to fluency. But what I would say is that there was uncertainty in terms of how AI may fit into specific job functions. And that is a function of the maturity of AI: going beyond this very podcast, beyond the prompt, going beyond this curiosity into, okay, can I take an op variance report, for instance, if I'm part of the finance team? Can I interrogate my data in a way that historically I've done using human grunt work, grudge work, and a bunch of macros in Excel? Bridging that gap between AI in theory, which should be as simple as search, as simple as these tools that we've used for decades, and AI in practice, that's where the maturity curve still needs work. So the biggest surprise is, again, going from curiosity to fluency.

But Henrik, to your point, I would bet that in the team you described there was at least one person who was an AI champion. Someone who was curious early on, started to go deep, and then said, hey, it is my responsibility as a good colleague to start bringing my team into the mix and into the conversation. And that's what I found from training and talking to over 800 people, almost on a one-to-one basis, over the last year: there are absolutely champions in every part of the organization. But sometimes those champions are quiet champions, because of personality or workflow, or because they work in a different office. And so the more we're able to expose those champions, the more we're able to have the information trickle down as well as trickle up.

Does that make sense?
[00:12:56] Henrik Werdelin: It makes a ton of sense, and obviously raises a lot of questions. Before we go down that track, should we stay a little bit with the how-to-master part? I think everybody who is trying to figure out how to upgrade their company's ability to speak AI is grappling with that specific question, right? Let's just start with what you were teaching people. Did you simply set up webinars, saying, hey, next week Eric is going to teach how we prompt better, and do the basics? Could you tell us a bit more about that?
[00:13:32] Eric Porres: No, absolutely. So, the journey inward, my journey toward AI, really started 13 years ago, when I got to work with the founder of Rocket Fuel, which was a MarTech company back in the day. He was a classmate of Sergey and Larry, an AI PhD who turned down the job to be employee number four at Google, but he did okay in the end. His name is George John. George, if you're out there, George John, PhD, he had one of the most cited research papers in the world around AI. So fast forward from 2012 to now, 2024.
Now we have folks like Section, which I know you're a part of. Section has now created their AI Mini-MBA course.
So I took the AI Mini-MBA course a year ago, or over a year ago, and I said, wow, this information is incredible. And the capstone of Section's Mini-MBA asks you: okay, what's your thesis? How would you apply what you've just learned to something you're doing at work? My original thought was, okay, I'm going to do this for product management: very tactically, how do I prioritize product and program management for the function I was part of at the time? But then I slept on it, and overnight I had this epiphany, which was, hey, wait a second.
If I've just been brought into the AI class, or I've refined my techniques as part of the AI class, most people are still illiterate in terms of having a faculty with AI. Maybe what I can do is take this thousand slides' worth of content from great teachers, among whom I would include Henrik, what you're doing, and Jeremy, what you're doing in terms of this podcast, and boil it down into something digestible for the knowledge workers I get to work with. So it totally shifted my approach. I said, hey, I want to take on the role of AI consigliere within the organization, because I'm one of the few people who has had both the history of making AI explainable and the practical applications of AI to be able to do that. That created a way for me to say, you know what, I'm just going to start presenting to teams, because I know they have the need for this, and I've been asked about it through the tea leaves: people asking me, hey, can you help me with this? Hey, I've heard you just did this course. Hey, I've done X, Y, Z. So that's really where the journey began in terms of journeying inward organizationally. And then over the course of eight months, 800 people, and, I don't know, 24 different courses taught, I refined my presentation. Now it's one slide, and it's one slide with 10 tips. Those 10 tips are something I can spend two hours chatting with different teams about. And the first tip, as I described, is: use AI to improve itself. Before submitting a prompt, ask AI to improve the prompt with you, as part of your first effort to collaborate with AI.
[00:16:43] Henrik Werdelin: That's fascinating. I think everybody who has had the role of inspiring an organization or getting them going has done some of that. Mine wasn't 10 tips; it was 20 things that I used AI for in the last two weeks. It was just this rapid fire of use cases, anything from writing an investor report, to talking through a difficult meeting, to writing a good-night story for my kids. It prompts people's brains to go, hey, wait a minute, can you use it for that? And I also really want to point this out: I think there's something very interesting in this idea of using, I wouldn't call it swag, but almost, to get into people's daily habits. Obviously you're an expert in habit building, so maybe you could talk a little about that. But I was just thinking everybody should simply go around and put those Post-its on people's laptops or computers, right?
[00:17:42] Eric Porres: Oh, no, a hundred percent. When I visit an office, that's what I do. And one of the nicest ways people have ever, you know, sort of thanked me for the training is that I've had people send me pictures of their computer with an "Ask AI" note next to their Logitech camera. But to your point, Henrik, I should bring up that, yes, it is 10 tips, but the 10 tips are me working side by side with one of the multiple generative AI models we have available to us. Say, okay, now let's take this tip. Hey, Henrik, what's something you're thinking about today? Okay, I'm thinking about a story for my kids. Okay, great. Let's do that in real time. Let's write that prompt. And now, before we submit it, let's use AI to improve itself.
[00:18:26] Henrik Werdelin: is one of the other tips, just so we get a little bit of
a Oh, sure. Um, so prompts can be preposterously long, and that's okay. So I think, again, this mindset of how do I transition from search to gen AI and what I'll do with the prompts can be preposterously long. The first thing I'll share is anthropic posts, their system prompt, right? will, they will make an update to their system prompt and the system prompt is about 4, 500 words long. And when you think about like, it's a parenthetical expression, right? You have system prompt. Then you may have your custom instructions. Then you have your actual prompt itself. And you're looking at like, wow! That is a super long prompt. No wonder it feels so human, where I'd say it's kind of like, you know, uh, Anthropic to me or Claude feels like they've got a PhD in cultural anthropology. GPT is more like, you know, the, the MBA student. Both are good! And they have different reasons. They have different ways in which they communicate with you. So the fact that prompts can be that long, and you can provide that much context, , is what I use for, that's, tip number two. And maybe by the end of this, we'll get through all, we can get through all 10 tips if you like. Um, so then I use an example, Henrik, of, okay, now if I want to do something that is deep research oriented. I have created , a one shot prompt that is seven steps long, about two pages worth of content to share with people and say, wow, that starts to get everybody's cylinders firing as far as like, Ooh, I need to think about this. I need to treat this differently. I need to treat this way of interacting differently than perhaps I was doing initially.
[00:20:01] Jeremy Utley: So you've conducted these 24 sessions, 800 colleagues. Then, one, how does the transition to head of AI happen? And two, given that you've had those experiences, how do they inform this new transition?
[00:20:18] Eric Porres: So, well, let me go back to the first question. I would say, to a certain degree, and you and I have a mutual friend here, Philippe Depallens at Logitech, who really gave me the opportunity not only to take on my day job of managing our personal workspace and services portfolio, how do we help knowledge workers with software and services, but also to take on the secondary role of AI educator. And as we started to mature on our own AI maturity curve internally, it goes back to what I said before: rather than have AI be part of multiple individuals' responsibilities, does it now make sense to bring that role into at least one person? I'm not saying I'm the only person at Logitech with an AI remit, but I happen to have a remit that is really responsible, at the moment, for the first six months, or instead of the first hundred days, the first 300 days, of upskilling and upleveling ourselves. It relates again to this notion of augmented intelligence, where everybody can feel very comfortable that they can ask AI, have a conversation with AI, and get the most out of their experience of AI. So it was a natural transition for me, Jeremy, to go from educator, plus my experience over 12 years of thinking, doing, learning, and understanding AI, plus my personal experience. I did this again just before this session: I downloaded my GPT history, which, hopefully everybody knows you can do, go to your settings and you can download the entire archive of your history with ChatGPT, as an example. It turned out that over the last two years I had 9,736 conversations. That's about 2,000 pages of written work that I've co-collaborated on, and about 1.7 million words' worth of conversations with AI. And going back to the habit book, Atomic Habits is about 80,000 words long. So I've written about 20 books' worth of generative AI conversations over the course of two years. There's a certain faculty there that is maybe higher than average in that way.
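The conversation counts, depth, and word totals Eric quotes can be computed from a ChatGPT data export. A minimal sketch, with the caveat that the export's conversations.json schema is undocumented and may change; the field names below reflect the format as seen in recent exports:

```python
def chat_stats(conversations):
    """Count conversations, average user turns per conversation, and
    total words across a parsed conversations.json export."""
    user_turns = 0
    words = 0
    for conv in conversations:
        # Each conversation holds a 'mapping' of message nodes.
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue
            parts = (msg.get("content") or {}).get("parts", [])
            text = " ".join(p for p in parts if isinstance(p, str))
            words += len(text.split())
            if (msg.get("author") or {}).get("role") == "user":
                user_turns += 1
    n = len(conversations)
    return {
        "conversations": n,
        "avg_user_turns": user_turns / n if n else 0.0,
        "words": words,
    }

# In practice: conversations = json.load(open("conversations.json"))
# Tiny in-memory sample standing in for a real export:
sample = [{
    "mapping": {
        "a": {"message": {"author": {"role": "user"},
                          "content": {"parts": ["Improve this prompt please"]}}},
        "b": {"message": {"author": {"role": "assistant"},
                          "content": {"parts": ["Here is a better version"]}}},
    }
}]
print(chat_stats(sample))
# → {'conversations': 1, 'avg_user_turns': 1.0, 'words': 9}
```

The `avg_user_turns` figure is the "depth of prompt" heuristic from earlier in the conversation: an average near 1 suggests one-shot search-style usage rather than a back-and-forth.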
[00:22:48] Jeremy Utley: Is it accurate then, just to recap back to you, that the way your training of, call it, 800 people at Logitech informs your path forward as head of AI is: I know this was the first step, and now I'm scaling up my upskilling efforts, and they are anointed, or knighted, efforts now? Instead of being a covert operation or a secondary job, now my actual full-time job is to scale up this thing I already started doing on a part-time, moonlighting basis.
[00:23:23] Eric Porres: Uh, yeah, I think that's a good way to describe it. And what I would say there, too, is: right, we have Lodgy AI, our own internal instance, where we're calling the different models in an API way. One great thing about working at Logitech is that we haven't standardized on one model. Everybody right now has a model playground to work with. They can work with Gemini, they can work with ChatGPT, they can work with Claude, and any other model we may choose to introduce. And that was a real light-bulb moment for everyone, because, as you know, many companies have standardized on one. It doesn't mean you don't, personally, on your own time, work with others: Claude, Perplexity, take your pick of STORM, Consensus, Poe, some of the other models or modes in which you can work. What did that tell me, Jeremy? It told me, when I looked at our own instance of Lodgy AI: A, it was behind a VPN, for, at the time, a good set of reasons; and B, it was stood up by our engineering team and didn't have a strong UI/UX component to it.

So I said, okay, in order for us to flourish, one of the first things I need to do is figure out the right structure, working with our product security team, et cetera, to bring AI to where people are, as opposed to trying to create too many gates to get there. Because otherwise what you wind up with, as you know, is shadow AI, where people are doing it on their own, which is not good. We have to be responsible individuals in terms of our own data and our proprietary data.
[00:25:08] Henrik Werdelin: When you get it to where people are, is that working all the way into Slack or Teams or whatever software they're already using, or completely passive agents that kind of roam? How do you think about the interface between your organization and AI?
[00:25:25] Eric Porres: Yeah, great question, Henrik. So step one is meeting people where they are in terms of how they operate with gen AI. And that means: how do we create an interface that really looks and feels like the best of GPT, Claude, and Gemini? Because, right, you can have the greatest tool in the world, but if it's stuck in a cave, maybe we would have discovered fire sooner, but it was stuck in a cave somewhere with Peking Man 500,000 years ago, maybe it was a million. To that extent, it almost doesn't matter what the model is. It's about creating a beautiful and intuitive user experience in which people can feel as comfortable using our version of AI as they are with any of the other models. That's number one. Number two is: how do we create an environment in which people have a prompt library or a style library? Depending on the team you're part of, you may need to speak using Logitech's authentic brand voice. What are the ways in which we can help you adapt your voice into that brand voice and give you templates? That was another key learning from speaking with lots of people: how a marketing team in our business group converses or creates content will be different from that person I described before in operations or finance, who will be different from that person in product management, who will also be different from developers, who typically over-index on using it for code review. Each one of them has their own modality. And so we've created the ability for them to instantiate those personas within how they interact with AI, automatically.
[00:27:20] Jeremy Utley: I'm just thinking about the breadth there. I mean, you just described so many different roles, so many different functions. And it sounds like, correct me if I'm wrong, in the near term, you said one or two years, your goal is a hundred percent of people proficient. So 7,000 employees fluent, you know, from average to aspirational. So how do you measure that? How do you know if you achieved your goal? What are you actually looking at in terms of an outcome variable to know, in two years, whether Eric has been successful in achieving the goal? What are you even measuring to know?
[00:27:50] Eric Porres: Sure. So I'll measure it across three or four different ways. One, I'll measure it based on NPS, as in: how likely are you to recommend Lodgy AI, or whatever it might be called in the future, to a friend or colleague? Because that tells me, and I have a baseline now, whether the tooling and instrumentation we've developed on behalf of our colleagues has been successful. And not only measuring that, Jeremy, as you know, but also: what was the reason for your answer?
So that's super important, right? For instance, when we think about our teams in China and in the Far East, they do need access to different — I'll call them non-Western-standard — models, and that's something we're exploring right now. How do we safely, in a privacy- and security-friendly way, enable — whether it be Qwen from Alibaba, or DeepSeek, or any of the other models that exist — how do we give them the right kind of model playground, such that they can work more effectively and collaborate more effectively, specifically in those markets?
[00:28:59] Jeremy Utley: Is what you're saying there that, if you're measuring NPS as one outcome variable — and you said, what's the reason for the answer? — you can't expect folks who don't have access to the best tools to have a great experience. And so part of your job is to make sure that everyone has access to the best tool, so that you know your NPS is a reliable measure.
[00:29:17] Henrik Werdelin: Yeah, but also the best tools for them, it sounds like — which I think is actually an interesting observation.
[00:29:23] Jeremy Utley: Hmm.
Well, yeah, that's absolutely right. What might be right for you may not be right for someone else, depending again on the use case. Even now, when you use these tools personally, in your own time, there are different file formats, as a simple example — file formats that GPT does a great job with that Claude doesn't yet read. And similarly, there are certain files that Claude does, I would say, a better job of — whether OCR recognition, image interpretation, or code — that others don't. So that's the point of meeting people where they are, based on that need state, and providing the right model associated with the goal. Number one.
I would just suggest there, Eric: insofar as you're measuring NPS, I think another great measure — perhaps even better, if my understanding of product-market-fit measures is right — is, instead of asking for an NPS, to ask: how disappointed would you be if you no longer had access to this?
[00:30:24] Eric Porres: Oh yeah, that's a great question — that's something we ask. As the other part of the software I have in smart habits, we ask that exact question, and we say: okay, we know that 96 percent of people — in that particular example — would be super disappointed if it didn't exist anymore. So that is a great question and a great reminder, Jeremy, thank you, because that's something we'll do. I want to do this tracking study every six months.
Now, people can give us feedback whenever they want, and something we will introduce — Henrik, to your question, and Jeremy, to yours as well — is the thumbs-up/thumbs-down on a conversation response, which we will be measuring. That is to say: if I have a conversation in Logi AI, was it thumbs up? Was it net effective? Did it give me some kind of value? Or was it kind of meh? Or was it really great? That's something else we're measuring. Now, you can give thumbs up and thumbs down in your commercial versions of GPT, for instance, but that's going back to them.
That's not going back to us. Here, that data is available to us at any time. So there's a combination of quantitative metrics as well as qualitative — what I described before was the qualitative. Let's look at conversation depth. Let's look at the average prompt. Let's look at how many assistants have been created today, and how many assistants will be created six months from now versus a year from now. Let's look at the usage of those assistants. Let's look at the champions — because now I know who my champions are, because they've given me permission from the survey to say: yes, I'm an advanced user; yes, these are the things I'd like to see better; yes, this is why I love or hate Logi AI or G Suite; yes, I'm proud to be — or I'm actually ready to be — an AI champion within Logitech.
[00:32:26] Henrik Werdelin: Could you talk a little bit more about the champions? Because Brice from Moderna mentioned that he had a hundred champions — a fixed group — and basically, if you don't do the job of an ambassador, you fall out of the group, which I thought was interesting. That's something we've adopted at BarkBox, and I've heard other people have success with it. Could you talk a little more about how you serve your ambassadors, so they can serve the rest of the thousands of Logitech employees?
[00:32:53] Eric Porres: Right. So it's the teach, tailor, take control. Now, to be fair in terms of our journey, I think I have a pretty good sense of who those ambassadors are; the actual rollout of the champions program is still very much in development. Which is great, because yes, I am speaking with Brice about, okay, what did you learn from that experience? And Henrik, as a separate conversation, I'd love to interview you about what worked as well. But I find that — in my case, I studied a martial art, ninjutsu, for over 20 years, and I got to be pretty good at it. I got my fifth dan license in 2004, and that said: okay, Eric, you are now knighted, as Jeremy said, by the grandmaster of the art, to be someone who can then pass on the art to others. And what I found in that experience is that teaching is the best teacher. Do you really understand your stuff? Can you actually, effectively teach something to me that either I didn't know before, or that I had to unlearn and relearn? Prompting, as an example, is a skill that people often have to unlearn and relearn, because of the shift from search to generative AI. So one of my markers for ambassadors will be: I want you to be able to teach — to me, and to a cohort of other people I've identified as super users — and to help us understand how comfortable you are with the material, how you're teaching people, and what the key takeaways are that they can use as part of their everyday work. That's one of the metrics by which we will keep or move ambassadors in and out. Because look, people are busy, right? You get assigned a new project, you have a new child at home, you have other responsibilities. So I don't think the expectation is that ambassadors necessarily need to be there all the time — they should have the right to rotate.
[00:34:45] Jeremy Utley: , assume the tips, you know, get 100 percent penetration, right? You get, you get 100 percent adoption fluency throughout the organization. Have you thought about what's beyond fluency?
[00:34:57] Eric Porres: So — to go from fluency to mastery. What I would say is: beyond fluency, we know the agentic future is upon us. Now, people define agents differently. Marc Benioff has a definition of the agentic future — he has pivoted Salesforce wholesale toward it. Whether he was, I don't know, meditating in Hawaii or something, he has made a super commitment. I've talked to some people at Salesforce who've been there a long time, and they said: look, we've never seen a company pivot this fast, or be as focused toward the agentic future, as with Marc making that mandate. Likewise, we have to think about — to your point, Jeremy — a year from now, or two years from now, if we do this right, I will no longer be working solely as a human knowledge worker. I will have an agent, or a set of agents, that I am responsible for, and my AI fluency will be one in which I, as the human — human as the instigator, human as the judge — understand which tasks and projects I can safely and effectively delegate to my agent workforce, and which ones require the full scale of my intellectual attention. So I think that's one of the markers beyond fluency. Fluency is: how do I work with GenAI myself on a day-to-day basis? The next generation of that is: how do I work with agents on a day-to-day basis, and manage the team effectively?
[00:36:35] Henrik Werdelin: Can I just ask you to double-click on that a little? Because I obviously agree, and I think there are these two worlds emerging. There are the ones who are very advanced with AI — we talk about agents and such as if it's about to happen, and we already have a bunch of agents that we use on a daily basis. And then there's the reality that most organizations are still just having the conversation: should we use AI or not? So there's this huge gap, and obviously all the shades in the middle.
I find it complicated to help people think about how to atomize the work that they do, and then productize whatever the output of their current day-to-day job is — and then even think about how to productize that as an agent. Now, you being somebody who understands product really well, I'm sure you do that intuitively. But have you thought about how best to teach people to think about this journey that, as you say, they're about to go on — where they see themselves not working themselves out of a job for somebody else, but working themselves out of their current job, by having agents do some of the more laborious versions of what they do?
[00:37:59] Eric Porres: Yeah, good question. So, 10 or 12 years ago at Rocket Fuel, we would say: free yourself from the tyranny of the grunt work, grudge work, and guesswork associated with the manual optimization of advertising campaigns, and liberate yourself to do the most inspiring and insightful work. So: what do you do right now which is grunt work, grudge work, and guesswork? And how do you get to a place where you can do things inspirational and insightful? When I think about that, I then think about where AI is great — and where AI can actually accelerate dysfunction. And I think about that across three vectors. I have to thank Greg Shove specifically for this mental model — I wish I had my soda can here to do it — but it's "do the DEW," as in Mountain Dew: D-E-W — Data, Expertise, and Workflow. So: is your data structured or unstructured? Is it garbage? Is your data polluted or not? Expertise is: what is the expertise that you, the human, are known for? Why do you come to work every day? Why do you get paid to do what you do? And then workflow — you could substitute the word process as well. What are the workflows that are part of your daily existence? Henrik, I think this partly gets to your question. Have you done the audit of your workflow? And therefore, can you atomize that workflow into digestible chunks that AI might be able to take over for you? For example — back in my day, when I had fewer gray hairs — I think about the SDR function as an example.
So — sales development representative, or inside sales as it's sometimes referred to, for the listeners; SDR, BDR. I think about what an SDR has to do, and whether some of the workflows that are part of correspondence with a prospect can be offloaded. Marketing, in theory, has done its job to heat up a lead to the point where it's qualified — there's some MQL, some way in which that lead has been qualified. They've downloaded the white paper, or they've read two reports, or — in Logitech's case — they've configured a room that they're interested in. Great. The historical way SDRs operate is: there's a human being, a young graduate, whose name could be Max. And Max goes in and says: hey, I just saw that you downloaded this thing — would you be interested? They're trying to qualify that lead further, and whether that qualification is BANT — budget, authority, need, timing — or SMART, take your pick of qualification framework; it doesn't really matter. Some of those qualification levers can now be offloaded, I believe, to agents that exist. Salesforce is one company offering this agentic model of SDR engagement, and there are others. So, can I give away that process? As a human being, do I necessarily need to be doing BANT qualification in that way, or is that a process I can offload to an SDR agent? Absolutely. Do I still need the human in the loop to think about how to create the right kind of RAG knowledge base, structure those communications and the workflow effectively, and periodically audit those conversations and the engagement rate of those leads before they go from lead to prospect to actual business? Sure. But I don't necessarily need to be the human that does every step of that qualification.
And that's a great place where SDR agents could be very helpful. Similarly with customer service — we've all seen it. In fact, there was just that piece I saw on LinkedIn yesterday, where two agents realized they were speaking with each other, and they switched how they communicated into a higher-frequency language that the computers could understand faster than if humans were having that conversation together.
[00:41:59] Henrik Werdelin: I kind of tried to track that down — it was fascinating. According to, you know, deep research done on Reddit, apparently it was programmed to do that. So they didn't actually self-discover it.
[00:42:12] Eric Porres: Right, they didn't self-discover it. It wasn't self-discovered.
[00:42:14] Jeremy Utley: But they did revert. They reverted. Yeah.
[00:42:17] Henrik Werdelin: But it was fascinating. And obviously the concept is certainly something that could and will happen.
[00:42:24] Eric Porres: And I think so. In my case, I would like to build an agent now. You can imagine, as one becomes, you know, knighted head of AI, the influx of unsolicited requests — hey, have you thought about this machine-learning-operations thing? — has increased exponentially. We're in the age of exponential growth. Because I'm still so new in the role, I cannot possibly answer all these people. So what I would love is to build an agent for myself that queries me periodically and says: okay, Eric, what are you in market for? What are you not in market for? And then stands up an agent to respond to those inquiries: look, not interested now; call me back in three months, six months, et cetera — such that I have a curated experience, and I can actually help those SDRs, whether they be agents or humans, do a better job of managing their pipeline.
[00:43:25] Henrik Werdelin: A question I have — and this might be pretty nerdy — but as we are looking at atomizing and agentifying ourselves, and we use the DEW model: the data for a company becomes clearer, right? Different organizations have different kinds of data, some of it unique to them and some not. But as an individual — let's say you're just working in the marketing team — what is data for you as an individual? You mentioned one source earlier: all your interactions with ChatGPT over the last two years, which is now a repository that you can make into a RAG model. Do you have other things that you've started to gather as a data repository for you as an individual that could be useful? I'm asking because I saw somebody who is now using Readwise a lot as the repository for all the highlights they've made in Kindle books, and that becomes a database that says something about their interests that they can use. Do you have other kinds of things that you've started to keep because you know you're going to use them for an agent at a later point?
[00:44:38] Eric Porres: Yes. So NotebookLM is what I use, in no small part for purposes of research. I try to stay up — I try to read three, four, or five research reports that come out per week, of which there are another 15 behind them that I'm interested in but just don't have the time to read. Like, I don't know how Ethan Mollick sleeps, but I follow him pretty religiously.
[00:45:05] Jeremy Utley: I don't, I don't think he does.
[00:45:06] Eric Porres: And I think everyone should follow him.
[00:45:08] Henrik Werdelin: He is an agent, you know? Right?
[00:45:09] Eric Porres: "I'll be in touch." Yes. Right.
[00:45:11] Eric Porres: Well, right — because Reid Hoffman actually has his agent: his younger self that he's created, a three-dimensional facsimile of him, which is super interesting — interesting to watch him interview himself from 30 years ago. But anyway, I use NotebookLM as a repository of research information, because, again, I think I'm a reasonably curious person. I'm curious about lots of things; I just don't have time to absorb all of them. But that curiosity is the tip of the spear: okay, these five things I read, and these 15 things I didn't read — that becomes part of a knowledge base for me.
[00:45:48] Jeremy Utley: What's the workflow there? Like, do you have a literal workflow built? If somebody sends you a research report, you tag it and immediately send it to NotebookLM, and then at the end of the week you have time blocked on your calendar to ask NotebookLM, "What are the five biggest learnings?" — da da da. I'm making it up, right?
[00:46:03] Eric Porres: Actually, you're not making it up — that is exactly what I do. Every week, however many pieces of research I flag, I open — maybe in Safari on my phone — and then I go through all of my open tabs from the last week. That piece of research: I download it, I upload it to a notebook, and then I get my 15-to-20-minute audio summary of that notebook every week, which I then listen to — and/or I read the briefing document within the notebook as well. And then what I may do, Jeremy, is use that research, or some of it, in the future to create assistants. For instance, I've created a prompt-building assistant. As I said before: yes, you can use AI to improve itself, and you can also go a mile deep based on the type of work that you do. Henrik, you had asked the question about the marketer, right? Every piece of content that a marketer reads to become a better marketer is potentially part of that knowledge base. And Jeremy, I don't know how many books you read last year, but — I think somebody on this podcast said it a couple of months ago — you can't possibly remember what was on page five of the book you read four books ago. An LLM can. So having access to that information in a bespoke way becomes very, very powerful.
[00:47:29] Jeremy Utley: This is killer. Okay — is there anything that you feel folks need to hear that you haven't shared with us yet?
[00:47:37] Eric Porres: Uh,
so probably I am still surprised by the individuals I speak with, professionally and personally who have not fine tuned, , custom instructions to give LLM that have choice knowledge about themselves, whether it be their work, their career, uh, and the types of outputs they expect. And, you know, Jeremy, you asked about like, what is that habit? Like a habit that you, the listener can do right now. If you haven't done it already is download your LinkedIn profile, assuming that it's up to date. And then you can upload that profile to your LLM of choice and say, Hey, help me create uh, what do you think would be an appropriate set of custom instructions that would be right for me And, you know, in 2000 characters or less, which is generally like the, that's the default character size for, for many of these models in particular GPT and I promise, right, it's like the, the output, the pre and post, right, the pre exposure, pre custom instructions, post custom instructions, , is dramatically different. .
[00:48:40] Jeremy Utley: Run the experiment — I think that's the cool thing to tell everybody. Take a prompt — maybe a burning question you've got — and put it into an LLM pre-custom-instructions. Then do what Eric just described. TLDR: download your LinkedIn, upload it to ChatGPT, say "what should my custom instructions be based on my LinkedIn profile," and save that as your custom instructions. Then rerun the same exact prompt in a new window with your custom instructions, and just compare the difference. If folks want to dig deeper into custom instructions, Dan Shipper actually went really deep on them in one of our early episodes — he calls them the secret power-up for LLMs. So I totally agree. Any other big pieces of advice that you think are no-brainers, that folks just have to know before they stop listening to you, at least today?
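[Editor's note: the A/B experiment Jeremy describes can be sketched in a few lines of code. The snippet below is purely illustrative — the helper names, the meta-prompt wording, and the 2,000-character limit Eric mentions are assumptions, not any product's actual API. The returned prompt string would be pasted into, or sent to, whatever model you use.]

```python
# Sketch of the custom-instructions experiment: build the meta-prompt from an
# exported LinkedIn profile, and trim the drafted instructions to the field's
# character limit. Names and limits here are illustrative assumptions.

CHAR_LIMIT = 2000  # rough default Eric mentions for custom-instruction fields


def build_instruction_request(profile_text: str) -> str:
    """Build the meta-prompt asking an LLM to draft custom instructions
    from an exported profile."""
    return (
        "Here is my LinkedIn profile:\n\n"
        f"{profile_text}\n\n"
        "Based on this, draft an appropriate set of custom instructions "
        f"for me, in {CHAR_LIMIT} characters or less, covering my role, "
        "domain knowledge, and the kinds of outputs I expect."
    )


def truncate_instructions(draft: str, limit: int = CHAR_LIMIT) -> str:
    """Trim a drafted instruction set to the field's character limit,
    cutting at the last sentence boundary that fits."""
    if len(draft) <= limit:
        return draft
    cut = draft[:limit]
    end = cut.rfind(".")
    return cut[: end + 1] if end != -1 else cut


prompt = build_instruction_request("Global Head of AI at a hardware company...")
print(len(truncate_instructions("x" * 2500)) <= CHAR_LIMIT)  # → True
```

To run the A/B comparison, you would send the same test prompt once with no system context and once with the truncated instructions as system context, then compare the two answers side by side.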
[00:49:24] Eric Porres: Sure. So: fact-checking. AI is still an overeager intern, or an overeager PhD-MBA, take your pick. I do believe that with the large frontier models it is the jagged frontier, as Ethan Mollick described a year and a half ago — the future is here, it's just unevenly distributed. Our LLMs are so eager to please that sometimes they will tell you the information that you want to hear. So this is some of the work I've been doing recently — and Jeremy, I shared this with you, and also Brice — creating neurodivergent assistants that give you the opportunity to instruct the LLM to counter its narrative of always being the people pleaser, right?
[00:50:11] Jeremy Utley: Eric, should we post a link to your psychedelic GPT in the show notes? Would that be cool?
So, folks who don't know: Eric and I got into something on LinkedIn recently. We were talking about how psychedelics have influenced the creative potential of humans for a while — and that's introducing a chemical. What could we do with language to effectively introduce a psychedelic to an LLM? Eric worked hard on building that, and we'll post a link to that GPT in the show notes, if you want a psychedelic, kind of mind-bending LLM.
[00:50:42] Henrik Werdelin: That is so cool. I hadn't seen that. That's amazing. Thank you so much. Jeremy, what stood out to you?
[00:50:52] Jeremy Utley: You know what stood out to me? It's this phrase — I can't remember where it came from. I feel like it came from a recent episode, but we've had so many amazing conversations (folks, check out the back catalog) that I can't even remember which one it is. Maybe Henrik will. It's this idea of commissioning yourself. Do you remember where that came from?
[00:51:10] Henrik Werdelin: No, but I remember somebody said it.
[00:51:11] Jeremy Utley: But you remember the vibe, right? There's a vibe there. And I really like how Eric is — I mean, he's an incredible leader, an incredible entrepreneur, an experienced CMO, and now he's stepping into this super cool, super huge role at Logitech. And if you ask how it happened: basically, he commissioned himself. Oh — it was in our conversation with Blair, the Rabbit Hole founder. He talks about the same thing, right? Commissioning himself to do the Adidas ad. In the same way, Eric effectively commissioned himself by, on a moonlighting basis, taking the responsibility to train 800 of his colleagues, mostly out of his own personal curiosity. Then when the question comes — who should be head of AI at Logitech? — how about the guy who's already taken the initiative to train 800 people? How about we start there? To me, that's a really cool story.
[00:52:03] Henrik Werdelin: You know what, that's probably a good thing — this is kind of also what happened at BarkBox. The guy who runs AI is called Mikkel, and it just kind of happened: he used to run design, but then he got the AI bug and spent most of his time talking to everybody else about it. And I was talking to a private equity friend the other day who was asking: hey, I just got involved in this big company, and we need to upgrade their ability to do AI — how should I do it? Who should I hire, basically? And there is something interesting in that thought: do you hire somebody from inside, who already has the ear of the organization and can talk the Logitech language, as it were — or do you get the hotshot from outside? And is it somebody who understands technology, or somebody who understands humans, which obviously somebody with CMO experience is? Even take Brice, who is not a technologist, but who is one of the people we've spoken to who seems to have the best handle on how you really push this throughout your organization. So even the question of who you hire, how you get hired — inside or outside — is kind of an interesting conversation.
[00:53:09] Jeremy Utley: Yeah. Yeah. It's super cool. I love the shout outs to Brice. I think if folks haven't listened to that episode, you really owe it to yourself to listen to our conversation with Brice Challamel, the head of AI at Moderna, who's just probably a little bit farther on this journey that Eric's on.
I know Eric looks up to him as a hero and as a thought partner, and Brice lays out a lot of his thinking in that conversation. The other thing I thought was a really nice, simple takeaway, for anybody who's just getting started: using AI to improve your use of AI is kind of the meta trick that unlocks a lot of possibility. Whether it's using AI to help you craft your custom instructions — like we heard from Dan, now over a year ago, and Eric mentioned it too — or whether it's running a prompt through AI and saying: hey, I'm about to give this prompt; could you seek to understand my intent, then give me feedback on whether I've framed it well, and how I can improve it?
There's something about that meta-awareness: if you don't know how to use AI for something, you can actually ask AI, and AI can teach you how to use itself — or how to use it for your purposes in a better way.
[00:54:11] Henrik Werdelin: The final thing for me: we all feel that we are in a hurry to upgrade our own abilities, and our companies' abilities, to use AI. We hear these success stories, see these videos, and hear these anecdotes from other people. But it's nice to hear from Eric that he's also at the beginning of this journey — we are all just figuring it out, and probably no organization has truly, completely nailed this yet.
And what I actually like about the industry of AI right now — applied AI, that is, not just building AI systems — is that there's a nice camaraderie: everybody's just making shit up as they go along and sharing notes, which is actually a pretty nice space to be in.
[00:55:07] Jeremy Utley: Yeah, I agree. It's a privilege. And maybe the last thing I'll say, related to that point, is his thought that champions at Logitech should be able to come back and teach him something new — and teach the other champions something new — as a criterion for their continued champion status. I thought that was really great. One way of saying it is: there's an implicit humility there. He's not going to be the person who knows everything, and he's actually looking for people who contribute not only to the group's collective knowledge but to his own understanding of the technology. I think that's a really, really wonderful attitude for anybody who aspires to be a head of AI.
[00:55:50] Henrik Werdelin: And I think that's a good way to end it. So with that: thank you so much for listening to another episode of Beyond the Prompt. Obviously, if you enjoyed this episode, it would mean a lot to us if you shared it with somebody on LinkedIn, or went in and liked and subscribed wherever you listen — that's how this show gets exposed to more people. So if you could do that, we'd much appreciate it. And until next time: take care. Be good.