Beyond The Prompt - How to use AI in your company

Special Episode: Paige Costello on AI in Product Management – a Co-Production from Crafted.FM and Beyond the Prompt

Episode Summary

In this special episode, we start with an insightful interview from Crafted, where Dan Blumberg and Paige Costello, Head of AI at Asana, dive into how AI-driven tools like status reports and bug triaging are transforming work by removing tedious tasks and sparking creativity. They also explore how AI is reshaping team structures and the evolving role of product managers. Afterward, Jeremy and Henrik continue with Paige to take a closer look at the hands-on development of AI, discussing the unpredictable nature of user interactions and why clear evaluation processes are essential. Paige offers behind-the-scenes insights on user testing, model selection, and finding the right balance between control and flexibility in deploying AI. Real-life examples, like Asana’s use of AI-powered status reports, customizable workflows, and transparency practices, showcase how these tools can boost productivity, streamline workflows, and fuel innovation. Tune in for practical takeaways on harnessing AI to reshape how we work, and a look at how these advances could redefine roles and team dynamics in the future. For more episodes from Crafted, visit crafted.fm and be sure to subscribe to Crafted on your favorite podcast app for more insights on product innovation.

Episode Notes

In this special episode, we start with an insightful interview from Crafted, where Dan Blumberg and Paige Costello, Head of AI at Asana, dive into how AI-driven tools like status reports and bug triaging are transforming work by removing tedious tasks and sparking creativity. They also explore how AI is reshaping team structures and the evolving role of product managers. 
Afterward, Jeremy and Henrik continue with Paige to take a closer look at the hands-on development of AI, discussing the unpredictable nature of user interactions and why clear evaluation processes are essential. Paige offers behind-the-scenes insights on user testing, model selection, and finding the right balance between control and flexibility in deploying AI. Real-life examples, like Asana’s use of AI-powered status reports, customizable workflows, and transparency practices, showcase how these tools can boost productivity, streamline workflows, and fuel innovation. 
Tune in for practical takeaways on harnessing AI to reshape how we work, and a look at how these advances could redefine roles and team dynamics in the future.

For more episodes from Crafted, visit crafted.fm and be sure to subscribe to Crafted on your favorite podcast app for more insights on product innovation.

00:00 Special episode intro
00:01 Crafted: Paige Costello, Head of AI at Asana
00:36 AI in Action at Asana
01:52 Challenges in AI implementation
05:01 Structuring teams for AI
08:55 AI's role in product management
14:15 Prototyping and scaling AI
15:56 Internal AI uses at Asana
18:06 AI's impact on workflows
20:32 Future skills and job roles
22:52 Customer interaction importance
25:22 PageBot and AI assistants
26:56 AI in organizational challenges
27:43 Unexpected skills in AI
29:17 Beyond the Prompt: Paige Costello
29:29 Transparency and AI in teams
34:33 Building AI products
39:10 Legal and organizational aspects
44:47 AI boosting productivity
50:23 AI's impact across industries

📜 Read the transcript for this episode: Transcript of Special Episode: Paige Costello on AI in Product Management – a Co-Production from Crafted.FM and Beyond the Prompt

Episode Transcription

[00:00:00] Henrik Werdelin: Welcome to Beyond the Prompt. Today's episode is a little bit different. We're bringing in Paige Costello, who's the head of AI and co-head of product at Asana. She'll be joining us soon. But first, here she is on Crafted, a podcast hosted by Dan Blumberg, who is a friend of ours, and where she's talking about how Asana is using AI to change the way they work.

[00:00:21] Paige Costello: We're dogfooding stuff that is quite profound in terms of its implications for moving work forward together.

[00:00:31] Dan Blumberg: That's Paige Costello, the head of AI and co-head of product at Asana. And on this episode, we're exploring what it means to ship products with AI, how AI will change the way work gets done, and how to organize your teams for success in this brave new world.

[00:00:47] Paige Costello: Because it's not deterministic, and you could ask the same question ten times and get ten different answers, that means that what you're shipping is going to be different every time someone uses it.

[00:01:00] Dan Blumberg: Asana is a work management platform used by thousands of large and very large enterprises. And Paige will tell us how Asana ships and measures the success of AI-powered features that aim to take the drudgery out of work.

[00:01:13] Paige Costello: So we'll say like 80 percent of this was AI and I added my 20 percent just so people realize.

[00:01:19] Dan Blumberg: Plus Paige's thoughts on the future of product management.

[00:01:22] Paige Costello: Now it will be more about like thinking, creativity, customers.

[00:01:26] Dan Blumberg: And her hope that these new tools enable us to get out of the building more.

[00:01:30] Paige Costello: It's so amazing what you can witness when you're in someone's space.

[00:01:34] Dan Blumberg: Welcome to Crafted, a show about great products and the people who make them. I'm Dan Blumberg. I'm a product and growth leader. And on Crafted, I'm here to bring you stories of founders, makers, and innovators that reveal how they build game changing products and how you can too.

So I want to talk about AI in sort of two vectors. One is how you ship with AI, and also I'm interested in what the customer-facing AI experiences are. Maybe we could start with that. Could you walk us through a recent launch of an AI-powered tool?

[00:02:06] Paige Costello: Yeah, absolutely. So something we launched is status reports that can be written with the help of AI, and status not just on your projects, but also on portfolios of projects and also on goals.

And what's important here is knowing that, like, people care a lot about sharing a good, high-quality status update with their stakeholders, and Asana is used for visibility into progress. So this is at the core heart of what people are trying to achieve with Asana. And so effectively, we created a feature where you can just draft the status update with AI, and in that, it's very clear about the progress that's made, what the blockers are, what the next steps are; it links to the milestones, it cites work and progress that individuals have made, you can go review the citations, and it's really astonishing how high quality the output is.

Then we evolved this at the portfolio level, because, like, it's one thing to do it at a project, where you're looking at the tasks and the comments and the back and forth there. Then to look across many projects in, like, a whole division or department, to do a status update on that is another level of complexity.

Internally, we use AI status reports all the time, and we write at the bottom what percent of our update was done with AI. So we'll say, like, 80 percent of this was AI, and I added my 20%, just so people realize that these are being generated with the help of AI, because even internally, people are still astonished by the quality and the output of, of this work.

[00:03:43] Dan Blumberg: If this feature is, is widely used, it presumes that all the data that is necessary is in Asana, right? Because I, I know the feeling of, like, you know, some executive sees that your project is, like, in yellow, you know, or God forbid it's in red, right? But it might be because, you know, you're blocked by some third party that you have no control over, and if that data is not in the thing that's making the summary, that could cause a problem.

[00:04:06] Paige Costello: Yeah. That's actually related to the fact that we don't detect the color. So we could predict, like, is this yellow, is this green, or is this red?

But that is one of the things that we've kept our fingers off of and said: you, as the owner of this body of work, know best. Like, maybe all the milestones are actually green, but you're going to mark it yellow because you're still feeling nervous about the launch, because you think that scope might change.

Right. And so like, just because everything's green doesn't mean the whole body of work is green. And so that, that's a nuance that I think gets, gets lost if we put too much in the hands of, of what we can automate. And so that's very, very top of mind for us.

[00:04:48] Dan Blumberg: Yeah. Yeah. Yeah. When I was at LinkedIn, I was on a team that was trying to get out of yellow.

And for a little while we called ourselves in chartreuse as a nice hybrid there of yellow and green.

[00:04:57] Paige Costello: Lime green.

[00:04:58] Dan Blumberg: Yeah, exactly. We're making progress. We're going to get there. Um, at a few points in my career, I've been a part of either new product groups, uh, or I've been part of the core product group while there's this new product group over there and we get a little jealous of them because they get to play with the fun stuff.

And I'm curious how Asana has organized itself, and how you recommend, or how you're observing your clients, how they're organizing around AI. Is it something that you recommend being diffuse, and every team is using AI in lots of ways and the prototypes and, you know, a thousand flowers are blooming? Or is it best to have a central group that's doing it?

I'm sure the answer is it depends, but I'm curious how you think about this, this question.

[00:05:38] Paige Costello: I'd say it's pretty hard to take a scaled R&D function and just tell them everyone's doing it now, because there's what you can do to become good at using LLMs in building a feature, or, like, playing with it and then understanding the capabilities.

And that's a whole nother thing, to, to ship it to customers at scale and really maintain it and think about which models are you using, how are you evaluating quality, et cetera, et cetera. At Asana, we decided that, like, that zero-to-one gear shift was too much to demand of all of our teams at once. And we could go further faster by selecting a group of teams, telling them that they're now the AI organization, and they need to figure out the foundations and fundamentals for how we use AI to ship Asana and use LLMs to make our customers more successful. And that meant that we were able to have clear focus, outline our strategy, be very intentional, and then pull in other teams with very clear asks.

And then pull in other teams over time where they're more set up for success, because they know, like, when I'm shipping, what do I need to ask myself, instead of everyone going to legal and saying, can I do this? What about this? We're like, whoa, whoa, whoa, don't go to legal. Here's the things you need to know.

If you have anything that's not on this list, talk to this person, and all requests funnel through them. It just makes everyone faster and smarter together. And, um, we're really happy with that choice, but it does mean that it feels a little bit like, Oh, what's the AI org doing? And so it takes a lot more work to do enablement and, um, kind of connect the dots on our strategy, because our product strategy for Asana and our AI strategy are not different.

And so that's a little tricky for people to understand because it could be really easy to assume that there are two different strategies. There's one company strategy and we're, we're just assisting the organization to deliver on that.

[00:07:35] Dan Blumberg: Yeah. I've been a part of a couple of different teams where the mission of the team was to put ourselves out of business.

So when I first joined The New York Times, I was on the mobile product team. Right. And like we needed to have a team who only thought about mobile, and that team doesn't exist anymore at the Times, or probably most places, right. There are a few, you know, there's, there's definitely specialists in Android or iOS, but you don't have a mobile strategy that's different from your web strategy.

Like you have a strategy. And so I imagine AI will be the same thing. I'm curious what markers you'd look for to say, you know what? This is no longer a special thing. This now can be diffuse, uh, among all teams.

[00:08:08] Paige Costello: Yeah. I think it's just, uh, a team sees the opportunity to apply LLM capabilities to what they're trying to do to solve a customer's problem.

And then you can just figure it out. Like, they're like, oh, I just use this and that. And like, we do some prototyping on prompts and I really like this output. And now I know how to roll it out. And I have confidence that I'm doing it within the expectations of how we ship enterprise software. And that there just are, um, I was picturing, like, a, a bowling alley with the guardrails.

Right. It's like everyone, just no matter what the skill is, can be successful. Because that is, is what we need to have where people can initiate and execute to completion successfully.

[00:08:55] Dan Blumberg: Where do you think AI is going to change the role of, let's start with product folks, but it could be knowledge workers more generally, where is it going to change their jobs the most?

[00:09:05] Paige Costello: Yeah, I think AI is going to take a lot of the tedious tasks out of work. I do believe that the work PMs will do will need to continue to focus on need finding and really doing, like, hands-on user research, because as much as you can do lit-review-style research much more quickly than you ever could before, the quality of the insight will vary.

And so PMs are still going to be responsible for, like, sniffing out what the best opportunities are, and, like, being surprised by being really close to customers and watching them do their work, use the tools, etc. Um, I think also that PMs are going to have to figure out how to get comfortable with the stochastic outcomes of, uh, AI-powered features, because what will happen is all of our software solutions are going to be much more personalized. And so reading an A/B test isn't going to be so straightforward, because everyone had a different experience.

And so figuring out how to define quality, how to evaluate what you're shipping, um, is going to be a really new and interesting challenge as all products become more underpinned with LLM technology.

[00:10:22] Dan Blumberg: Can you expand on that some more, what you mean by getting used to the stochastic nature of AI? Because I know, I know that's, like, it's, some people see it as a bug.

Some people see that as a feature. And I'd love for you to expand on why that's something we have to understand to use it well.

[00:10:36] Paige Costello: Yeah, absolutely. So because it's not deterministic and you could ask the same question 10 times and get 10 different answers, that means that from a product perspective, what you're shipping is going to be different every time someone uses it.

And your ability to evaluate the success or quality of that will be even more complex than it is today. I think it's, it's hard actually to, like, read between the lines today. We can know if something wins or something loses. We can know if the results are neutral. But the why is missing. So the quant versus qual is how we are trained to think about our releases.

But when you think about the future of what LLMs do to the technology we're putting in customers' hands, we need to get more creative about, like, articulating what success looks like, and that actually is going to be built into how we ship. So at a framework level, you can expect teams to start building with different quality suites than we built with before, where PMs are going to be responsible for articulating the success criteria, for really checking that the prompt is working as you'd expect most of the time.

And so getting a lot closer to the actual implementation and the performance at that level, as opposed to, you know, the highest-level outlining or specking of the work and then reviewing results weeks later.

[00:12:01] Dan Blumberg: Is there a specific way that that played out with the launch of, say, the smart status that we were just talking about?

[00:12:05] Paige Costello: Yeah, absolutely. At the very beginning, our PMs were truly going through spreadsheets and, and runners and saying, like, Oh, what were the answers, and how would we say that that was successful, and grading them on, like, answer quality, answer accuracy. And each feature had different things that we would be looking for in the output; like, what we needed from a status update is different from what we needed from a task summary. Say if you had a task with 50 comments and you said, tell me, what's the back and forth?

What's this all about? The result on that is going to be different intentionally than the result on like a project summary or a status on a goal. And so each of those required like very careful thinking on the part of the PM about like what does good look like here and how do we build that into the system.

And then we eventually... like, there are now burgeoning startups trying to do this. I believe that the model companies are probably going to try to build their own, probably fit-for-purpose evaluators for each model. But if you're at a company like Asana, where we're using models from Anthropic, models from OpenAI, and we want to maintain flexibility about what our customers need and want to work with, and the quality of those outputs, that requires a lot more thinking about our frameworks and, and our layer of how we engage with that, to make sure that what we're putting in customers' hands is really the best we can put forward.

[00:13:36] Dan Blumberg: This gets a little bit meta, but are you experimenting with one model checking another model? And all the time, you, you mentioned, like, the human qualities of the PM looking at the output of, is this a good status report or not. Can you also then, or maybe you already have, build a model that takes that product sense, for lack of a better term for it, and looks at the model output that gave the status update?

[00:13:57] Paige Costello: Yes, we've created numeric scoring and categories and then built that into how we ship so that a PM doesn't have to use a spreadsheet. Yeah. Yeah, and that's, that's changed almost every three months since we started working more heavily here.
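
[Editor's note: for readers who want a concrete picture of the "numeric scoring and categories" Paige mentions here, below is a minimal sketch of what such an eval harness might look like. The criteria names, weights, pass threshold, and the judge callable are illustrative assumptions, not Asana's actual implementation.]

from dataclasses import dataclass

# Illustrative criteria and weights; real suites would differ per feature,
# e.g. a status update vs. a 50-comment task summary.
CRITERIA = {
    "answer_quality": 0.4,     # is the draft coherent and well written?
    "answer_accuracy": 0.4,    # does it match the underlying tasks and comments?
    "citation_coverage": 0.2,  # are claims linked back to source work?
}

@dataclass
class EvalResult:
    scores: dict           # per-criterion scores in [0.0, 1.0]
    weighted_total: float
    passed: bool

def grade(output: str, reference: dict, judge) -> EvalResult:
    # `judge` is any callable (a human grader or an LLM-as-judge prompt)
    # that returns a 0.0-1.0 score for one criterion.
    scores = {c: judge(c, output, reference) for c in CRITERIA}
    total = sum(scores[c] * w for c, w in CRITERIA.items())
    return EvalResult(scores, total, passed=total >= 0.8)  # threshold assumed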

[00:14:15] Dan Blumberg: I hosted a panel recently, an episode of Crafted; the title was "From Prototype to Production."

And it was all about ways, you know, engineering leaders, product leaders are, are prototyping with AI. And then some of the questions that come up as you decide, like, can we scale this? And those questions might be costs. They might be legal. They might be, does it work? Is it predictable enough, keeping in mind that it's never going to be truly predictable.

And I'm just curious if you could give some of the consideration set that you have, questions you ask when deciding: does this generative AI-powered prototype, you know, is it ready to ramp? Is it safe to ramp? Is it not going to cost us an arm and a leg if people use it at production scale, et cetera?

[00:14:55] Paige Costello: I think a good portion of the value of a prototype is just doing the prototype.

And you can very quickly find that you get to 70 percent way faster than you would with traditional product development. But taking something from 70 percent to 100, or production ready, takes way longer than you expect. And so it's a bit counterintuitive and challenging to get into the flow of working with AI and bringing that to customers.

I would say that the biggest way that we find we have confidence in that is just rolling out to ourselves, dogfooding, rolling it out to an alpha group, rolling it to a beta group, and then rolling it to production and having a really systematic feedback loop for evaluating quality from the customer's perspective.

And so we do have a thumbs up, thumbs down. We, uh, um, do quite a bit to evaluate whether the feature is at a certain threshold for the value it's creating before we ship.
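
[Editor's note: a minimal sketch of the staged rollout Paige describes, dogfood to alpha to beta to production, gated on explicit thumbs-up/thumbs-down feedback. The stage names match her description, but the sample-size floor and the 85 percent bar are assumptions for illustration, not Asana's real thresholds.]

STAGES = ["dogfood", "alpha", "beta", "production"]
MIN_RATINGS = 50            # don't promote on a tiny sample
MIN_THUMBS_UP_RATE = 0.85   # assumed quality bar

def ready_to_promote(thumbs_up: int, thumbs_down: int) -> bool:
    total = thumbs_up + thumbs_down
    return total >= MIN_RATINGS and thumbs_up / total >= MIN_THUMBS_UP_RATE

def next_stage(current: str, thumbs_up: int, thumbs_down: int) -> str:
    i = STAGES.index(current)
    if i < len(STAGES) - 1 and ready_to_promote(thumbs_up, thumbs_down):
        return STAGES[i + 1]
    return current  # keep collecting feedback at the current stage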

[00:15:57] Dan Blumberg: I'd love to dig into more of how you're using AI internally, whether you're launching any kind of feature, just what are some of the ways that it's really accelerating product squads.

[00:16:05] Paige Costello: So Asana employees use AI in a few different ways. One is they're using AI through Asana, because we very heavily dogfood our product for all work. We don't use email. We exclusively use our product. The next zone where I would say we use it is, like, people are using ChatGPT or Claude for work; like, they're definitely going straight to the model consoles.

We also have a lot of internal projects that are using AI and embedding it in Asana that we're very excited about. If you think about the way work moves forward, it's not linear. You might work on a project and then get feedback on it, and then you need to consolidate that feedback and do a rev on it.

We're really playing with, internally, all sorts of workflows that use AI to identify what's missing or what could happen next. Like, that moving through the system and preference for how work happens is really core to how Asana is built and how it works today. And so, while our existing AI features are primarily, um, at the individual work level, right now we're exploring, um, and building quite a bit around, like, how scaled work management happens with AI at its center. And we're dogfooding stuff that is quite profound in terms of its implications for moving work forward together between humans using AI in the context of Asana, and just reducing the amount of tedious work that happens around, um, like, bug triaging, for example, or preparing for a phone call with a customer.

These are the sorts of things that, with AI, uh, you can do much more efficiently and with much more confidence. And Asana gives a place where that can be done in a way that's structured, reviewable. Um, and that's, that's really exciting.

[00:18:06] Dan Blumberg: You said some of this will lead to really profound changes; you used the word "profound."

I love it. Could you unpack a little bit of what you have in mind when you say that?

[00:18:14] Paige Costello: Yeah. This goes a little bit to "the experience for everyone can be different." But when you think about custom workflows, every organization has probably figured out their own, like, bug process, right? That same process for, like, work intake for a brand campaign or a marketing campaign; every company has things that are similar but different.

And so doing these custom workflows across groups, across teams, today happens with a lot of elbow grease. Like, people write scripts and they, like, connect systems together, and they're hard to change and they're hard to evaluate, and they require people to say, like, Oh, this bug doesn't have enough information.

Can you, like, help us reproduce, and give us an image about it? Like, it's really manual. Um, and so what's exciting is when you take automation or workflow processing and you put AI within it, you're able to make that whole process make more sense. You're able to move things through that system more quickly.

Um, you're able to even review what's not working about the system. So we're going to be in a position to say like, Okay, this step is a bottleneck, only half of what's coming in here is ready to be actioned on, like you should change the form that's upstream of this so that you get better results and you're not slowing your teams down.

Yeah. And that's the exciting part: I think no one wants to be doing this flavor of work; it is what's required to do the fun stuff about work. And, um, creating really custom AI-powered business processes that move work forward in complex organizations faster is going to create so much value for so many customers around the world.

And I mean our customers, but I also mean their customers. Sure. And so that, that feels really exciting.

[00:20:16] Dan Blumberg: That also sort of gets to the sort of scary part. You mentioned a lot of this takes a lot of elbow grease. You know, humans have elbows, and if an automation is just doing it... right. So the question here, though, and that's, that's a big issue, right?

That automation for, you know, centuries has displaced people's jobs. What are some of the new skills, new jobs that people should be preparing themselves for now?

[00:20:36] Paige Costello: Yeah, I believe that we'll still work. Like, there will be a good portion of people that still work just as, as many hours in the day. They'll be just doing different things during those hours.

Um, because the human appetite to make an impact and do good work is, I think, core to many people who have jobs in product. And so when I reflect on, like, what new jobs will exist or what will people need to do, it's more about using AI to be a higher-impact individual, and to work smarter and to work faster and work more creatively, and unlock things that you would have had to work through three people to do, and it would have taken a week. To be able to do that in 10 minutes is not really a job change. It's an acceleration of the type of work that you hoped you had signed up for at the beginning. Uh, I do think there will be new jobs, like AI ethicists, and, like, people who are responsible for how AI is used and rolled out in companies. But from a, like, core product role, I think that PMs will continue to be valuable, but maybe spend less time on parts of the job that were more about writing in detail, and now it will be more about, like, thinking, creativity, customers, and spending time on the quality of the output.

[00:21:59] Dan Blumberg: Yeah, I interviewed Janna Bastow recently, the founder of Mind the Product and of ProdPad. We were talking about this exact same topic and joking AI is going to do everything and, like, we can put our feet up, and, like, that's never proven to be the case with any new technology. Humans, like, find a way to work more.

But she also said AI is going to take a lot of the repetitive work away from product folks, and it's going to enable them to get out of the building more. At least that's her hope. That's her dream. She's like, or they might just make more excuses for not getting out of the building more and do something else.

And so I'm curious if, uh, if you think that's truly the unlock that gets product folks to even more so, you know, talk to customers, do the things that, you know, humans are truly outstanding at, uh, or, or do, do we find new excuses to not do so?

[00:22:44] Paige Costello: I can only hope. That's when the job is at its most fun and most impactful.

So I can only hope so.

[00:22:52] Dan Blumberg: I don't know if there's a customer interview or an experience you had where, cause you said that's the most fun part. I would love if you could share why is that the most fun part? What, what is it that like got you to feel that way?

[00:23:02] Paige Costello: Yeah, absolutely. One customer interview I had, uh, at Intuit, I was looking at how our invoicing worked and invoice design and customization and how people were billing their customers for the work that they were doing.

And I went to a, an auto body shop, and I watched them invoice, and I looked at their paperwork, the way they were printing invoices, what they were trying to achieve. And there's nothing quite like standing next to someone and seeing. It's like, Oh, is that thing on the wall actually how you're clocking in and how employees are managing time?

What are you doing here on, on the desktop? What, what time of day are you doing this? Like, Oh, what's this thing on your desk? How are you, like, are you doing that as well? Why? And so there's so much that's outside of what you can record in a, like, user testing session. I think a lot of people are like, oh, if I just, like, have someone play with this flow and I record their reactions to it, that's good enough.

And I think that's so, uh, limiting. It's just such a narrow view into how people are interacting with your tool in the context of their ecosystem of daily work. And so that, that's a memory I have that really stands out. I think also what people say and what people do is so different and you can't tell that very well, uh, when you're doing a video recording of, of them engaging.

[00:24:29] Dan Blumberg: Totally. It's funny you mentioned, uh, an auto mechanic. I was at my auto mechanic last week, and he uses sticky notes to organize his work. And he wrote a bunch of things down, and then he, I forgot how it came up, but he showed me his system, which is, he has, like, you know, sticky notes for cars that just came in, cars that are in progress, cars that are waiting for a customer phone call, cars that are done.

And he, and he literally picks up the sticky notes and moves it over and then moves it over. And I was like, you have a Kanban board. And he had no idea what that is. And I was like, it's actually a car thing. It came out of Toyota like years ago. It's actually, it's actually like we use it in technology now.

And he was curious about it. I don't know if he's your target customer with Asana, but he's using a very similar process. And I loved, I love seeing it just laid out right in front of me there.

[00:25:11] Paige Costello: It's so amazing what you can witness when you're in someone's space. And so I couldn't agree more that I hope our PMs get out of the building.

[00:25:22] Dan Blumberg: When you appeared on Lenny's podcast, you mentioned you were building a PageBot. And I'm curious how PageBot is doing. What can she do? What can't she do? And what do you really wish PageBot could do?

[00:25:36] Paige Costello: Oh, man. Um, I have an admission on that. Mostly because it was mostly for playing, just to see what is possible, what, what context you can give an LLM, what you can ask it, what, what output you can get out.

I'm finding that I'm writing a ton of docs and working with my teams about our product direction, and that is taking so much headspace. That is more at the intersection of, like, where is the market, what are Asana's unique capabilities, and what can we create for our customers that is beyond most people's wildest dreams. And it's just taking so much headspace.

I would say I'm not playing with my own bot at the moment.

[00:26:26] Dan Blumberg: If you were, what do you wish PageBot could do, or if not PageBot per se, but what do you wish AI could do for you that it's not doing for you today, but you think it could in the future?

[00:26:37] Paige Costello: Yeah. It's, it's a bit meta, because Asana works on work management, and enterprise work management, and solving the problems that knowledge workers have around cross-functional work and the plan, and having shared purpose, and being able to visualize progress.

And a lot of those problems are problems I myself want AI to solve. There are so many problems that are going to get solved really quickly around, um, having AI be like a great EA or chief of staff and support you in, like, your meeting rescheduling, and, like, preparing for a meeting, and knowing what's on, um, on deck that could be moved, or, like, relative priority across people and groups.

But I, I think some of the biggest problems are really like what's most important, um, to be doing and how to work across people and, and how to stay organized and on top of what's happening. And so that's, that's really what I want AI to solve.

[00:27:43] Dan Blumberg: Last question. I love stories of, I did X and I never thought I would apply it to my work and you know, in the world of product or AI.

And I'm curious if there's a, an experience from, you know, whatever number of years ago that is, is, is helping you be better at your job that you never would have predicted would, uh, would do so.

[00:28:01] Paige Costello: I have a liberal arts degree, and I think having the critical analytical thinking as your foundation is just something that scales to all sorts of interactions around thinking about problems, around prioritization, around trade offs, around, um, what's happening in the world.

And it's, it's been so useful, uh, to me in my career and just in my daily life.

[00:28:27] Dan Blumberg: So I'm going to bring this all full circle here. I'm a fellow liberal arts, uh, college major. Uh, with AI, we're not going to be like, Ooh, you don't have an engineering degree. That'll be, that'll be less of a thing going forward, I think. You agree with that?

[00:28:40] Paige Costello: I do.

[00:28:41] Dan Blumberg: I mean, it's self-serving for us to say that, but like, but, but like the critical skills you talked about are, are even more important now. Uh, whereas, you know, like, learn to code is probably still important, but a little less so than it was a couple of years ago, I think.

[00:28:55] Paige Costello: Yeah. And those become more accessible over your entire career journey, whereas that, like, foundational ability to challenge and think and be creative is, is going to become more, more of an asset, because you're going to be able to do more things that you would have needed a degree to engage in at even a cursory level before.

[00:29:17] Henrik Werdelin: That was Paige on Crafted, showing us how AI is transforming work at Asana. You can catch more episodes, including one with me, at crafted.fm or wherever you listen to podcasts. Now let's dive deeper with Paige here on Beyond the Prompt to see how her team is putting AI into action every day.

[00:29:38] Jeremy Utley: So one of the things that I really loved, uh, was when you talked about how, at the bottom of your AI-generated updates, you specify the percentage of the update that AI generated. Um, and you said something to Dan; you said even internal folks are still astonished at the quality of the reports, and that you're doing that to tell them. Will you tell us more about that practice?

[00:30:02] Paige Costello: Yes. I think so much of our partnership with AI is behind closed doors. So individuals tend to explore AI by themselves and try to figure out, like, how do I become a better employee using AI? Like, what can I do to accelerate my work and improve the quality of my work? And, um, when you're building a tool for organizations and teams, you're starting to think about, like, how do I not just improve the quality of life for this one person, but the quality of life for this team.

And in doing so, you're thinking about, procedurally, how does AI engage, and how do you structure the way AI assists in moving the work forward, both, like, running work through a process, but also executing work. So there's this new question about how do you create transparency about what AI is doing to assist, where that's happening. And then that serves two purposes. One, an audit trail: like, sometimes you want to say, how did it make that decision? How can I then reverse that? Um, and other times, just transparency about the benefits that someone else is getting from AI and the opportunity they had to save time or make other changes to the quality of their work.

[00:31:25] Jeremy Utley: Yeah. Yeah, that's so cool. It reminds me of something that we heard from Diarra Bousso, who's the founder of a fashion brand, whose episode we actually just released this week, so audience members, it'll be fresh for them.

But one of the things that she does at her company is she records Loom videos of herself and shows them to her team. And when we asked her, why do you do that, she said: because if I'm going to be asking them to level up, I don't want my leveling up to be in private. I want them to see how I'm leveling up, to give permission and then also ratchet up expectations.

It kind of does both. It says, Hey, the only way I'm able to achieve this outcome is by working in this way. And also, I expect you to start achieving this outcome too. Which is, it's normalizing a different, I love that phrase you used, partnership with AI.

[00:32:11] Paige Costello: Yeah, that completely tracks. We did some research with Anthropic about the state of AI, and we're learning quite a bit about AI councils and AI boards at organizations, where they're trying to figure out not only, like, which tools should we use.

But also what should our expectations be of employees? Like what sort of training do they need? How should we make clear to them what our expectations are? Because like I said, people don't know where they stand and how they should work with AI and organizations can do a much better job being clear about that.

[00:32:45] Henrik Werdelin: When you write at the end of the updates how much was AI, do you ever catch yourself realizing that it's creating a bit of laziness? Because it is so much more convenient just to go, like, yes, than it is to kind of, like, rewrite it.

[00:33:05] Paige Costello: It's interesting you ask that. I would say for us, it's a celebration, the higher percent, because we're still trying to reduce the work about work, the stuff that isn't the strategic, skillful work, but is all of the work around work.

And so whether that's a status meeting, or writing a status report, or finding the data you need in order to answer a question. For us, like, we want that to be a hundred percent. So when it's 80%, we're all bummed.

[00:33:35] Jeremy Utley: That's a cool, that's actually a really cool, um, reframe, right? You actually want the percentage to be higher.

And I wonder, have you ever had moments, either for yourself, where you were affected by the fact that maybe the percentage is too low? And how did that change your workflow, either that time or the next time? Or have you heard stories? I don't know.

[00:33:54] Paige Costello: Yes, absolutely. Um, well we have two variants of that feature.

One, you can just go and be like, help me write the status update, and there's a default, and it will write it. Or you can add additional guidance on top of our guidance to it. And that's a place where, if I run it once and I don't like the output, I question, what would I change about this? And then the next time I'm writing a status update, I have changed that, like, starting guidance so that it's structured the way I want the results.

And I have clarified the length and target audience, and then I get a better result. So absolutely. It's an iterative process.
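
[Editor's note: a minimal sketch of the two layers of guidance Paige describes, a product default plus optional, reusable guidance from the author about structure, length, and audience. All prompt text and names here are invented for illustration, not Asana's actual prompts.]

DEFAULT_GUIDANCE = (
    "Draft a status update covering progress made, blockers, and next steps. "
    "Cite the tasks and milestones you relied on."
)

def build_status_prompt(project_context: str, user_guidance: str = "") -> str:
    parts = [DEFAULT_GUIDANCE]
    if user_guidance:  # e.g. "Keep it under 200 words, written for executives."
        parts.append("Additional guidance from the author: " + user_guidance)
    parts.append("Project data:\n" + project_context)
    return "\n\n".join(parts)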

[00:34:32] Jeremy Utley: That's super cool.

[00:34:33] Henrik Werdelin: As a product person, I was curious about another statement that you had in the podcast with Dan, where you talked about this new, evolving issue, which is, as you're building products, they are now becoming kind of n-of-1 products, because you're putting these foundational models into them, and you don't really know how they're going to react, because you don't know what the user is going to say.

How do you think about building a product without, like, being able to, you know, completely understand how it's going to perform when it gets to the customer?

[00:35:10] Paige Costello: Well, a few things happen that are completely different. One is structured evaluations, and being more clear about the use cases we expect people to use, so that we can test those before we ship. So even though there's variation in what people might say or do with it, uh, being able to have a best guess.

And some of that is informed by our intentions with the feature, and some of that's informed by us doing user testing and finding, like, okay, here's what people tend to do with this, whether we wanted them to or not. Um, and so being able to both, um, have a push-pull plan in place for how people will use the product, and then setting up the evaluation so that, um, when, for example, you put in a new model, you switch out a model, you can say: is this going to be net equal, net better? What's going to happen to how this delivers on the core use cases we're trying to deploy here for this particular feature?
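
[Editor's note: a minimal sketch of the "net equal, net better" check Paige describes when switching out a model: rerun the same eval suite against both models and refuse the swap if any core use case regresses. Function names and the small noise tolerance are assumptions for illustration.]

def safe_to_swap(eval_suite, run_eval, old_model: str, new_model: str) -> bool:
    # eval_suite: test cases covering the feature's core use cases.
    # run_eval(model, case) -> score in [0.0, 1.0].
    for case in eval_suite:
        old_score = run_eval(old_model, case)
        new_score = run_eval(new_model, case)
        if new_score < old_score - 0.02:  # tolerate small noise, flag real drops
            return False                  # a core use case regressed
    return True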

[00:36:19] Henrik Werdelin: I mean, I guess, as a product person, for me one of the most kind of interesting things now is when the foundation models upgrade, and suddenly all your prompts don't really seem to function the same way as they did before.

And you have to kind of, like, figure out, like, well, you know, where do I change? How much of the product is now completely changing with it? But I would imagine it'd be the same for you.

[00:36:41] Paige Costello: It's a very interesting problem that we have been playing with. Some of our features work with certain models exclusively, and we have leveled those up as new models come out, but, like, with a particular model family. And then other features are interchangeable.

And we actually, in June, uh, launched a beta that we'll be continuing to, um, make more available. And in it, we allow customers to select which models they want to use. And for that experience, that felt very scary and completely new, to put that amount of power in customers' hands and to try to give them guidance about which model to select for which jobs.

And to do so in a way that is balanced, uh, and helps people understand costs and the quality of writing and the quality of reasoning, and how to make selections here. And this is one where I feel like the jury is out on how heavy-handed we should be, um, versus how much we should let our customers choose.

[00:37:45] Jeremy Utley: Do you ever do anything like, um, show a side-by-side comparison? You know, sometimes ChatGPT, if it gives you a response, will actually ask which one's better. Do you ever, if a customer gives you a query or an interaction, say, Hey, here's what Sonnet 3.5 will do, here's what o1 will do, here's what 4o will do?

Do you show some of that to help customers? Maybe, because ostensibly most customers actually don't understand the implications of choosing the underlying model, and you may have a role in educating the customer.

[00:38:13] Paige Costello: It's a really interesting question. For most of our features, we make those choices behind the scenes, and we only have one feature where customers have that opportunity to choose. In that feature, we haven't done kind of a side-by-side comparison, but we are exploring ways of showcasing kind of what it would do, so that you can feed it your work and then say, how does this perform? Okay, now switch it. Okay, how does that perform? So that before anyone deploys their, you know, new smart workflows or their process, they have better insight into how it will perform on their data. Because we can test as much as we want, but really applying it to how they have structured their work, kind of what their projects and tasks look like, how their assignees and metadata work, um, really changes the results.

[00:39:10] Jeremy Utley: One thing I was curious to, uh, to ask you about was you mentioned how you've propped up a team that has been doing a lot of the AI exploration, and so they've become somewhat of the subject matter experts. This is where Dan was asking you about, is it democratized access to the technology? And you had said, actually, we've got a group of folks who are exploring.

And then you said something that I love, and I found myself giggling; I just wanted to follow up. You said someone might have an idea, and ordinarily they'd go to legal, and you say, whoa, whoa, whoa, whoa, whoa, don't go to legal. Can you talk about that: one, that instinct to go to legal, and two, how you are rewiring the organizational permissions and understanding of what's possible?

'Cause I think a lot of organizations, their default instinct is, well, we've got to go to legal first. And you've clearly learned something that I think everybody could benefit from.

[00:40:01] Paige Costello: Absolutely. Well, this is a case of: once you know what the questions are and what the concerns are, you can fast-track certain things and say yes, no, yes, no.

It bogs down teams in shipping innovative work if they ask the same question over and over again. And so we can shortcut that by saying, like, Hey, for example, any team in our organization can ship an AI-powered feature, and in order to do so, they just fill out an Asana form and they say, here's what I'm thinking.

Here's how it works. And, like, is it already a prototype? Is it just an idea? Are you asking for shipping approval? And then, based on where it is in the funnel, we assess it ourselves and do the first pass. And then, um, if it needs further assessment, we do that. But it makes me laugh that, uh, you had that response, because absolutely, teams follow the well-worn path, and they tend to have their habits and rituals. And when you're working with AI, it's just a completely different beast. And we have a specific person in legal who we have onboarded, and who stays ahead of the technology and the changing trends and regulations, to make sure that we can ship enterprise-grade AI that our customers can use and trust.

And we really want to accelerate how quickly we can make an impact. And if people try to follow the quote unquote old way, it just really slows things down.

[00:41:40] Jeremy Utley: That, I, that really resonates. And I think that this is an area where a lack of clarity or ambiguity really halts progress. I've seen a number of organizations where folks have ideas, and I go, well, what's happening?

They go, we're waiting on legal. And I mean, my dad's a lawyer. I have deep respect for the profession. Truly, I don't say that sarcastically, I do. And so it's not the path to speed and the path for experimentation. And yet whenever there's ambiguity, asking legal seems like it's a good idea. How did you get clear on what the, call it, pathways of approval were?

'Cause ultimately, in a sense, this is going to sound meta, but you've got to get legal's approval to not involve legal, so to speak.

[00:42:25] Paige Costello: Yeah. Effectively the team went broad at the very beginning and found different types of feature problems to solve. And so for example, one of those was around embeddings and, um, really looking at how our data was structured and retrievable, another was around summarization, another was around, um, more generative AI use cases.

And so we found multiple different types of features and applications and uses. And so we were able to understand, like, when is it, um, the customer pulling the AI feature? When are we pushing an AI feature? When is something proactive and not? And we effectively worked through, here are the different classes of AI features.

And here's the different levels of approval that we want our customers to have, and to have visibility and transparency into, by class. And then we gave them control to make the selections they wanted to make around which third-party, uh, AI data processors they wanted on or off, whether they wanted proactive AI or not.

And we were able to come up with enough of a broad plan to make sure our customers were satisfied and enthusiastic and confident about the level of control that they had. And then we are operating within bounds from a legal standpoint, but also from a technology standpoint. So we built a ton of AI frameworks around our evals and how we push and everything else, which sets us up to really be more rigorous in what we're shipping, and quality there.

[00:44:13] Jeremy Utley: That's great.

[00:44:15] Henrik Werdelin: I realize we don't have you for too long. And so, since you work in productivity and, you know, do all the AI stuff, what's one of the latest kind of, like, things you tried that, kind of, like, you really felt, okay, this is really working for me?

I'm really getting increased productivity out of this. Because I also created the HenrikBot, like the thing that you did, and have also kind of, like, been trying to figure out what role that plays. And so, um, I was curious, after hearing you mention that, if you've done other small experiments, or, as you were saying in the podcast with Dan, whether you're just so consumed in your headspace with the Asana problems that... yeah.

[00:44:56] Paige Costello: Well, here's one that, um, we deployed recently on my team, and you'll hear me say, like, team versus individual, 'cause I'm really focused on, like, what are the things that don't just enhance my personal productivity, but really the impact my team can have. And this one is a shared AI, like, assistant effect, or not an assistant, but it gives assistance in context.

And so this scenario happened just yesterday, where we, um, added a smart workflow to our project, where, when a task is added, it creates kind of a, a shadow subtask of the recommended next steps. And so this is more of a private workflow, so that you can think about the work differently. And at Asana internally, we don't use email. We only use Asana.

And so all our work, like all the conversation and communication, is structured around the, the output or the outcome. So Asana has goals, it has tasks, it has, um, resourcing and people. And so effectively, what this means is you might write a haphazard email to me and say, Hey, what's this choice we need to make with this team?

Should we, A, move, move this team over here, or, B, do this other thing? Like, what are the hang-ups? What's next? And then in the background, AI assists and says, here's the request, here are the next steps. And it's way more structured, way more clear, way more direct. It recommends setting up time if the first two things are not done.

And that's a great example of, um, it really happening in the context of the work itself, because so much of AI is a decision: do I feel like I have the extra minute to save myself five minutes? Am I going to go over here and combine these three tools and think about it? Versus not making any choice at all and having, you know, the smart context right within your work advising you on how to move forward.

And so that was something that, yesterday, was a part of our workflow. And I was so delighted, because I was able to take really rough, haphazard notes, and then the variant that we were able to move forward with was just so much sharper.
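
[Editor's note: a minimal sketch of the "shadow subtask" smart workflow Paige describes: when a task is added, an LLM drafts the restated request and recommended next steps as a private subtask. The llm and create_subtask callables are invented placeholders, not the real Asana or model SDK.]

def on_task_added(task, llm, create_subtask):
    prompt = (
        "Restate this request, list concrete next steps, and recommend "
        "setting up time if the open questions are not resolved:\n\n"
        + task.description
    )
    recommendation = llm(prompt)  # placeholder for a model call
    create_subtask(
        parent_id=task.id,
        name="Recommended next steps (AI draft)",
        notes=recommendation,
        private=True,  # a shadow subtask the owner reviews before sharing
    )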

[00:47:18] Jeremy Utley: That's cool. Yeah, it's a great example. To me, what it speaks to is, uh, making subtext explicit.

'Cause there's so much there, you know, in a casual note, that you force your collaborator to read between the lines, rather than saying, no, no, no, we need to follow up here. So I can see enormous value in kind of a, it's, in a way, it's a human-to-human translator.

[00:47:40] Paige Costello: Absolutely. And you're able to move forward with your trust in your relationship intact.

And let the robot do the heavy lifting around the brass tacks. So, I think it's, uh, it's really sweet to be able to, like I said, think about AI like a teammate, but have AI kind of in the context and flow of your work, so you don't need to go to it or ask it follow-ups. Um, so that's, it's just really astonishing how we have seen the pieces come together with transparency, because we already have a user experience layer. You can put it at different levels of abstraction or altitude. You could have it comment, but you could also have it kind of running a subtask behind the scenes, so that you can check in with it if you want. And so that's definitely great.

[00:48:35] Jeremy Utley: It speaks to kind of the multifaceted value prop, because one thing that we hear from a lot of folks is, I can have a way more personal conversation with AI, 'cause it's not going to judge me. Right? So there's, like, the non-judgmental side. But then, to your point, I love what you said: the robot can do the heavy lifting. My kind of human translation of that is, let the robot sound like a jerk, and the robot can be like, dude, I don't get this.

We need another 15 minutes. Whereas you can preserve, like, this, whether it's a facade or, like, real, whatever, you can preserve this, you know, plausible deniability, right? Because it's the robot that suggested we meet, right? That's kind of cool.

[00:49:11] Paige Costello: Yes. And you can move quickly. You can write your bulleted notes.

And the digital exhaust from you doing your work can be sharpened and clarified and made into a more useful artifact for the next person who sees it. And so we all want to move quickly and do good work, and balancing, like, how much time do we spend refining and editing and trying to be perfect, versus just letting someone else help.

[00:49:44] Jeremy Utley: That is so cool.

[00:49:47] Paige Costello: I'm really excited about the future of work.

[00:49:51] Jeremy Utley: So maybe the last question on my mind, I don't know, Henrik, if you have anything else. Um, one thing that you mentioned when you were talking with Dan is you're dogfooding a bunch of stuff that can have profound implications on the way internal teams move projects forward.

You used that word "profound." You reminded me of it when you were talking about the robot that turns your digital exhaust into something useful. Is that what you're thinking about? Or, I just thought it'd be kind of fun to get a little bit of a where-are-they-now moment, because there have been a few months in between Dan's conversation and ours. What are the profound implications you've seen from some of the stuff you were dogfooding in the intervening couple of months?

[00:50:28] Paige Costello: Just trying to decide which use case to start with, because what's been interesting is working with global security companies and enormous marketing agencies and, um, banks, and seeing how differently they have been applying this. And so I think one of the most recent things I heard this week was on a request and, like, prioritization queue workflow, where typically there was a two-week lag time from the person submitting a request to work starting on that request, or, like, a formal answer.

Like, here's the commitment, here's who's doing it, let's move. That dropped from two weeks to one day. And if you multiply that by the thousands of requests they're getting, and what that means week after week after week, that means a better experience for all the employees making requests, and a better experience for the employees trying to do the work, who were chasing tails and trying to get incomplete information solved.

And that's just an example where I'm feeling so excited about people being able to do more of the work that they want to be doing, and it feeling more joined up and more like one team.

[00:51:51] Henrik Werdelin: That's super cool. Hey, Paige, thank you so much for taking time to chat with, I guess, us again, although people listening to it have just listened to your conversation with Dan. We really, really appreciate it. And I hope to, uh, to hear more from you, because it's definitely fascinating to hear somebody as optimistic and as knowledgeable about it as you are.

[00:52:11] Paige Costello: Thank you, Henrik. Thank you, Jeremy.

[00:52:13] Henrik Werdelin: As always, thank you so much for tuning into this special episode of Beyond the Prompt, co-created with Crafted. We hope you found our conversation with Paige both inspiring and exciting. If you liked it, don't forget to subscribe, share, and let us know who we should talk to next. Until next time, take care.