Beyond The Prompt - How to use AI in your company

You Can’t Vibe Code a 100-Ton Truck: Inside Applied Intuition’s Approach to Safety-Critical AI

Episode Summary

Most AI today lives on screens. But what happens when it runs a 100-ton truck, a fighter jet, or a fleet of defense vehicles? Applied Intuition’s co-founders Qasar Younis and Peter Ludwig join Jeremy and Henrik to unpack what it really takes to bring AI into the physical world. From safety-critical systems and autonomous fleets to the graveyard of companies that moved too fast, they reveal why building real-world AI requires a mindset rooted in precision, patience, and pragmatism.

Episode Notes

Applied Intuition builds the kind of AI you don’t see, but can’t live without. Co-founders Qasar Younis and Peter Ludwig share how their $15 billion company powers vehicle intelligence across cars, trucks, tanks, mining equipment, and defense systems operating in some of the most demanding conditions on earth.

They explain why combining AI with safety-critical systems raises the stakes, how a single mistake can destroy an entire company, and why so many autonomy startups ended up in the “graveyard.” The conversation explores the slow, methodical path to real autonomy, the hidden complexity of machines that run nonstop, and why consumer AI metaphors break down once software meets the physical world.

Qasar and Peter also reflect on how Applied uses AI internally, how their principle of “radical pragmatism” keeps innovation grounded, and what it takes to move fast without breaking things when lives and livelihoods are on the line. From six-figure labor shortages in remote mines to the future of defense and logistics, this episode reveals how AI is quietly transforming the physical world — one carefully coded system at a time.

Key Takeaways:

Applied Intuition: http://applied.co/
LinkedIn: linkedin.com/Applied
X: https://x.com/Applied

00:00 Intro: Safety Critical Systems
00:33 Meet the Founders of Applied Intuition
01:09 Understanding Applied Intuition's Unique Approach
03:02 The Human-Machine Teaming Concept
07:26 Challenges in Autonomous Driving
16:39 AI in Industrial Applications
28:27 Future of Fighter Jets and AI
29:50 AI in Applied: Coding Tools and Beyond
33:16 Radical Pragmatism and AI Integration
36:03 Challenges of AI Adoption in Large Organizations
39:56 Human and Technical Challenges in AI
42:02 Innovation and Organizational Structure
48:38 Reflections on AI and Future Prospects

📜 Read the transcript for this episode: Transcript of You Can’t Vibe Code a 100-Ton Truck: Inside Applied Intuition’s Approach to Safety-Critical AI

Episode Transcription

[00:00:00] Peter Ludwig: What it comes down to really is safety-critical systems. When you have a mixture of AI and safety-critical systems, the problem is so much harder than if you're talking about a customer support system.

There's a whole graveyard of autonomy companies that have basically made mistakes that hurt someone or killed someone. And that effectively destroyed an enormous opportunity for those companies.

And so, uh, you have to take safety very seriously. And, frankly, think about how many times ChatGPT gives you an answer that's not correct. Right? You can't have that in a safety-critical application.

[00:00:33] Qasar Younis: Hi, my name is Qasar. Uh, this is Peter. We're the co-founders of Applied Intuition. We're a $15 billion company based in Mountain View, California, which takes AI and puts it into lots of different machines: cars, uh, trucks, tanks, you name it. Broadly, it falls under the category of vehicle intelligence. And we're looking forward to talking about both the bull case and the bear case of AI, uh, and where it's at today.

[00:00:59] Jeremy Utley: Maybe just by way of getting started, just for folks who may not be familiar with Applied Intuition, tell us a little bit about the company and why they should be interested in this conversation today.

[00:01:09] Qasar Younis: So, uh, Applied Intuition is an AI and software company, and in some ways it's a very boring company, right? It's just an AI company based in Mountain View, California. Uh, but the way that it's very unique or different from every other AI company you've heard of is, uh, typically AI is focused on screens.

We're talking about ChatGPT; typically that interaction is through a phone or a laptop. Uh, we very much work in the world of vehicles, so cars, trucks, tanks, jets, stuff like that. Uh, construction, mining equipment. So we take that AI, literally the modern architectures that we talk about with AI and LLMs, and we put that kind of technology into these physical vehicles.

So some of those experiences would be self-driving. Some of those would be intelligent cabins. So if you think about a warfighter today, they're on the battlefield. The traditional tank-human interaction is actually quite limited. If you're ever wondering what it's like to be in a tank, you can go on YouTube and there's some great, interesting videos, but, uh, the punchline is it's scary, because it's not... there's like, yeah, yeah.

You're like, oh my God, this is, like, uh, you know, a complex machine that's fairly mechanical. And, uh, so we've built technology to make all of that stuff smarter.

[00:02:30] Jeremy Utley: You know, I can't tell you how many times a guest on the show says, you know, "the traditional human-tank interaction, dot dot dot." It's just, it's like every day we're hearing...

[00:02:40] Qasar Younis: at this point.

[00:02:42] Henrik Werdelin: yeah, like we really, I was like, we're having this conversation again.

[00:02:45] Qasar Younis: Yeah, I saw, I saw Henrik's eyes glazing over. It's okay. It's.

[00:02:50] Jeremy Utley: This is so far beyond the prompt. We actually need a new title for the podcast. This is, that's amazing.

[00:02:56] Qasar Younis: I mean, it's similar though. In terms of the prompt concept, it's great, because the machine doesn't run alone. It's human-machine teaming. And I think a lot of the, you know, the AI fear that emerges, like somehow these things are gonna, you know, just... Like, a ChatGPT

doesn't just do anything. You have to ask it to do anything. And it's similar with these machines. I dunno, Peter, if you have a different view.

[00:03:17] Peter Ludwig: The only thing I would add is we use the term vehicle intelligence, right? So we are making all types of vehicles intelligent and then trying to create real value, whether that's for consumers in, let's say, ADAS, which is self-driving cars, uh, or value to logistics and trucking and autonomous trucking. And then obviously value in the form of deterrence with the lethal systems as well in defense.

[00:03:40] Jeremy Utley: Okay. But, but okay. You said again: lethal. I can't tell you how many times we hear that word on the show, which is, I think, never. But before we go there, you just said something. You said it doesn't run alone. And it actually reminds me, I heard Sam talking with Tucker Carlson the other day, and Tucker was kind of asking, hey, don't these things seem sentient?

And, you know, um, Sam said, hey, I get why you might feel that way, but it doesn't do anything on its own. And I'm kind of, yeah, I'm a professional nerd, I'm a creativity expert, and so I'm always thinking about how do we optimize for this, you know, human collaboration. And Sam's comment there really sparked something for me, which is: taking the initiative still matters.

Um, being the prompt, so to speak, still really matters. And I'm curious if you have any thoughts on whether that will continue to be the case. I think right now, you made this statement, I wrote it down: it doesn't just do, you have to ask. Do you envision a world where it does just do? Because I agree right now that's true.

The human initiative is critical. Will that continue to be the case and for how long will it be?

[00:04:49] Qasar Younis: Yeah, I think, you know, uh, anything in software and AI that, at the current rate of change, goes beyond two years can roughly be called, you know, speculation. Uh, so we don't know the true, correct answer (that's the engineer in me trying to be precise), but I would say the MBA in me who wants to create an entertaining podcast would say, uh, you know, I don't think you're gonna have to change the podcast name anytime soon. The point really being: the context and the desired outcome are so important to the actual response of the system that I think removing that is a pretty herculean task. Let me, let's use another example.

In the past, if you look at the late nineties and early two thousands, at what was expected out of software versus what actually happened with software: which is basically, like, we still have problems with WiFi. We still have problems with video conferencing. I mean, it happens every single day. Not sometimes; it happens every day.

I still have bugs in, uh, you know, maps when I'm coming to work. So there's a fantasy and then there's the reality, and the reality is these are still quite, I would say, simple systems. Um, and even at the rate of change that we're seeing, lemme give you the bear case on AI, rather than how everyone so commonly gives the bull case of it'll

[00:06:11] Jeremy Utley: I love it. I love it.

[00:06:13] Qasar Younis: change everything. The bear case is, you know, uh, ChatGPT (roughly; no one else is really at that level in terms of consumer adoption) basically becomes the new Google. It's an information retrieval system that's better, and ultimately it integrates a bunch of different, you know, interesting things, just like Google did shopping and things like that.

But it's an information retrieval system fundamentally. And then all the other systems we're talking about, cars, your assistant, all this stuff, they're all just better versions of what they are right now. They don't become these magical experiences.

You get autonomy, and autonomy becomes general. Now, even that, honestly, if we did that in the next 10 years, would be a pretty exceptional outcome. Um, so in your question there's this loaded assumption, which is that the rate of change will continue to be extremely high, and the value (not only the change, but the value to the end user) will continue to exponentially increase.

And I think the reason I use the old software examples of the late nineties and two thousands is, you know, we still have basic problems, and I think you'll still have basic problems that don't get solved in AI for a long, long, long time.

[00:07:26] Henrik Werdelin: But on autonomous driving: I mean, I have an 11-year-old, and so like everybody else we've been talking about whether my kid will ever get a driving license or not, right? And I remember that conversation with some of my friends like six years ago, and they were like, no, my kid will never. And then obviously now the kids are driving us around with their driving licenses.

Why is it that that specific problem seems to be taking so long?

[00:07:53] Qasar Younis: It's really hard. It's, it's not the people... yeah, yeah. Dude, it's not like the engineers are, like, twiddling their...

[00:08:03] Henrik Werdelin: See, that was my assumption that they were just kinda like they had all these soda machines, like in their well-funded kitchens, and so

that's

[00:08:10] Peter Ludwig: Yeah, yeah, yeah. What it comes down to really is, uh, safety-critical systems. When you have a mixture of AI and safety-critical systems, the problem is so much harder than if you're talking about a customer support system. Four orders of...

[00:08:24] Henrik Werdelin: You're not vibe coding something that goes 200 miles an hour. Is that the thing?

[00:08:28] Peter Ludwig: So, but the reality is, right, there's a whole graveyard of autonomy companies that have basically made mistakes that hurt someone or killed someone. And that, uh, effectively destroyed an enormous opportunity for those companies. And so, uh, you have to take safety very seriously and, frankly, like, think about how many times ChatGPT gives you an answer that's not correct.

Right. You, you can't have that in a safety-critical application.

[00:08:55] Qasar Younis: Yeah.

[00:08:55] Jeremy Utley: I don't wanna make the bull case here, but just to rebut, or to respond: uh, Qasar, to your bear case, you mentioned that in the bear case chat just becomes Google, it's information retrieval. But the truth is, it's already beyond that now. You know, I think anybody who treats chat like it's Google is missing it. It's malpractice, right?

And I often joke with people: look at your chat history. Is it a chat history or is it a search history? If it's a search history, you haven't even scratched the surface. Chat is defined in conversational terms, right? This wouldn't be a chat if one of us were monologuing the whole time. So is that bear case

even a possibility? Because I think even now, chat is maybe, to the poorest-performing users, uh, information retrieval. But to anybody who's collaborating with, you know, an in-your-hand coach and expert and creative partner and advisor (and, you know, imagine the persona), it's hard to really make a good case that this is simply information retrieval, even just right now.

Right.

[00:10:03] Qasar Younis: Yeah, but you're talking about, uh, you know, the power user, the extreme user. Um, if you look at my mom's use of chat (Google, not ChatGPT, I should say), uh, it's gonna be very different than your use of Google. It's gonna be much, much simpler, and maybe, you know, the things that you take for granted in Google, she doesn't even know, because it's just not the way she interacts with it.

So, so yeah, it's like any tool: there will be power users who get the most out of it. But all of this is under the previous assumption. And by the way, I'm doing this as much for theatrics and entertainment as for real.

[00:10:42] Henrik Werdelin: Obviously ChatGPT, I think you're right, even now makes, like, many mistakes, and we kind of apologize for it, 'cause, you know, it can make a cat rap, so therefore it's cool.

Right? If we look at some of the stuff that you are working on, is there a way where you can kind of ladder yourself into this awesome new world? Or is it binary? Because, to your point, you kind of can't drive "almost safe," right? You have to drive safe. But are there elements of autonomous driving or autonomous vehicles that you guys see coming in soon that will be kind of the equivalent of the aha moment we had with DALL-E, or whatever kind of leaps we've had?

[00:11:17] Qasar Younis: I mean, that's already happening. Yeah, absolutely. And, uh, the version of that is assisted driving, right? So if anyone's used, say, Tesla FSD, or anything like that, that is not a full self-driving system, even though it's called FSD. A legal debate, currently literally a legal debate with, uh, billions on the line.

So, so there are those steps, and we're very much in those steps. So the version of that earlier question, can the AI basically work without a prompt, in our universe is: you basically sit in the vehicle and it really does everything. It knows your calendar, it knows where you're going.

It basically backs outta your driveway, takes you to your destination, and then, while you're in the meeting or you're in the coffee shop, it charges itself and comes back. That will happen. That will happen. The question... you know, before this company I was at a place called Y Combinator, and, uh, Y Combinator is most famously known in this universe as, you know, also where OpenAI was started. I mean, Sam was the president, I was the COO, and it was as part of YC Research that that project really kind of kicked off. Um, so, you know, I've been looking at this area for a long, long time. The punchline is, I think all of these things will take a long time to actually arrive in the way that we, uh, you know, kind of expect. But that doesn't necessarily mean,

you know, that it's a wasteful endeavor and we should be waiting until, like, the perfect thing comes. And I think our strategy at Applied has always been exactly that incremental way, the steps all the way there, um, until we get to this, let's say, future, which has...

[00:12:54] Peter Ludwig: And it's important to note as well, right: all of the core innovations and research that are powering things like ChatGPT and Stable Diffusion and, like, Sora and all of these. We at Applied benefit from, and contribute to, all of that same research. So the same core technical elements of those systems, we are using all throughout our products at this point.

'Cause the manifestation of those is different, right? When you're controlling, let's say, a hauler for mining, yes, you can have transformers and a lot of advanced technology inside of that, but it's not the same as, uh, ChatGPT.

[00:13:28] Henrik Werdelin: Do you think, uh, that you might see leaps that are bigger, or at least more noticeable to people who don't understand the industry, in a mining machine or in a boat or in a drone or something else, because of all the issues that are tied to cars specifically?

[00:13:46] Qasar Younis: I mean, um, if you think about when you, as a consumer, started using ChatGPT, you didn't need, like, uh, someone to sit next to you and say, look how great this is. I actually remember the first time I used Google Search, and similarly before that, you know, when you had Ask Jeeves and some of the other products, it was just notably different. And I remember... All right. Good one. Yeah. You know what,

[00:14:09] Jeremy Utley: He's the one. He is the

[00:14:10] Qasar Younis: Well, well, well. You know, I wanna say, I was thinking, when you said your kid is gonna have a driver's license in the future: that would be really cool. Not only will you have a driver's license, but, you know, you'll also drive manual.

Like that means with your hands rather than, rather than like, actually with a stick shift. No one's gonna drive with this

stick. We're the

[00:14:27] Jeremy Utley: skills. Real skills. Yeah.

[00:14:29] Qasar Younis: So yeah, I think we benefit from all of these things as consumers, and it's very obvious when you go to a mine or you go to a commercial trucking company that's building these things; we don't have to spend a lot of time explaining.

I think we showcase our work and that connection happens. I mean, like the internet impacting, let's say, Salesforce, in that universe of the cloud enterprise companies of the two thousands, right? From 2000 to, let's say, 2015, which, you know, that's where enterprise software really becomes enterprise software the way we know it today.

Um, the same thing is happening in enterprises with AI. On the consumer side, you experience this really great product, and then you look at your work applications and they're not very good. Most of us think about work applications as being like Slack or Workday or Rippling.

But for people who build machines, work applications are these giant, you know, products, and they think, well, these actually should be a lot more intelligent. And I don't know if there are many companies actually like us on the planet, uh, you know, certainly not at the scale that we're at. Uh, I mean, we're working across basically all geographies globally.

We're working across basically all form factors of vehicles. So then that gives confidence, and the company is a viable business in the sense that it's a profitable, cash-generating business, which is also very uncommon in the AI universe. So when our partners interact with us, all of that stuff really has an impact.

They're like, wow, this tech is interesting. Oh, I can see how you work in all these different geographies and different, you know, types of vehicles. Oh, and you're in it for the long haul, because unlike a consumer experience, if a Komatsu... we just announced this, you know, long-term

[00:16:10] Jeremy Utley: I was, I was gonna ask you about them. They're a long-term partner of mine as well. Yeah, I was

[00:16:14] Qasar Younis: Yeah. Yeah. And, and like they want to make sure you're gonna be there. I mean, they're not just gonna take a kind of a swing at a, you know, a 10 person company. And we know that because we used to be a 10 person company and we had to kind of really, you know, fight our way up to where we're at today.

[00:16:30] Jeremy Utley: Tell us about... so Komatsu, for folks who don't know, is a Japanese company that makes, you know, mining equipment, bulldozers, big heavy industrial equipment. Help folks who maybe can't really imagine: what's the application of AI in an industrial space like that? I don't know whether Komatsu is a public example you wanna share, but just help kind of blow people's minds, or open people's imaginations, to what the applications are there.

You know, I'm working with ChatGPT on my mobile device. How is a mining company incorporating AI into their operations and their vehicles?

[00:17:04] Peter Ludwig: In the general realm of mining and construction, there's this phrase that's used: dull, dirty, and dangerous. The jobs are generally dull, dirty, and dangerous. Um, much like podcasting... they, they understand, you

[00:17:18] Qasar Younis: don't, come on, let's, let's get more sophisticated.

[00:17:20] Jeremy Utley: We are putting our lives on the line here.

[00:17:22] Qasar Younis: Yeah.

Yeah. And he got two paper cuts last week. If,

[00:17:26] Peter Ludwig: If you've never been to a mine and seen this stuff up close, honestly, it's hard to, like, get the feeling of what it's really like. Um, this equipment is just enormous. I mean, I'm almost six feet tall, and my head is, like, halfway up the tire on one of these machines.

Like, these are just enormous. And these machines can be operating, like, hours away from the nearest airport, many hours away from the nearest airport. And so you have a very limited number of people that are willing to do that kind of job. It's, well, it's sort of miserable and lonely, and it's dirty and it's dangerous,

all of these things. And so labor is a constant issue. And related to that, there are many things that just don't happen, because you don't necessarily have people that are willing to do those kinds of work. And then some of the conditions are such that what you're trying to do is fundamentally going to be so dangerous.

Uh, and there's many examples of this in construction and mining where you're sort of nervous to actually put people in that kind of risky environment. And all of these things are extremely high-value applications for bringing AI into those industries.

[00:18:31] Qasar Younis: Yeah. The actual like examples of, you know,

how you use AI here are: a human walks up to, uh, a dirt mover, one of these large haulers that Komatsu builds. Um, and, you know, it knows who they are. And as the human sits in the vehicle, the hauler understands the state of the human, how tired they are. I mean, these mines, especially the large sites, will run 24 hours a day, seven days a week,

for a decade. I mean, this is the backbone of everything. The phrase in mining is: if it's not grown, it's mined. Which basically means everything that we're touching, seeing, and interacting with ultimately has its roots in something that was mined out of the earth.

And mines, again, this is for people who are not in this space, the amount of tonnage of material, and how big the biggest mines are, is beyond comprehension. They're like cities: thousands of people moving tonnages of dirt every hour, every minute. And so you become tired. You're exhausted, and you are sometimes even much further than a couple of hours from an airport. So it's kind of like, you know, deep-sea drilling. It's a hazardous environment, and it's almost through intention: you're not gonna be digging massive amounts of dirt where people live.

It's one of the reasons why, like for the Saudis and the Middle East, the oil is so valuable: because they drill it in the middle of a desert, and

[00:19:57] Henrik Werdelin: When you then create features for that, is it increasingly, like, almost agent by agent? There's one system that sees if the person is tired; there's one that sees if there are, you know, objects around that they could drive into, or...

[00:20:13] Jeremy Utley: Or what are all the things? Like, Henrik's imagination is stronger than mine, 'cause I'm still trying to think about what AI is doing there.

[00:20:22] Henrik Werdelin: Well, there's one if they're tired, right? Which is the one... So what are, like, a few other ones that could get into this, uh,

[00:20:28] Qasar Younis: Maybe, maybe more specifically: what are the dangerous situations you could get into in an environment like this? The dirt under you moves, the dirt around you moves, and there are people around you. 'Cause these machines are so big that the machine can, you know, accidentally run into people and run into other machines. And then there's visibility.

There's

[00:20:49] Peter Ludwig: a lot of dust, so you can't see very well.

[00:20:50] Qasar Younis: Yeah. You think about it as this, like, very clean, you know, environment, and then, again, when you're doing it for six months in a row, that's when the protocols that maybe should have been followed maybe are not followed.

When you go to a mine or you go to places like this, safety is the beginning, middle, and end of everything that's happening. And so then we talk about, let's say, the non-safety stuff. Typically, you know, it's almost like an investment: you kind of figure out how much of, let's say, copper is in this specific area, and you can figure out how much tonnage you need to remove to get X amount of copper.

And then it's just a formula. Uh, then it's: how many people do you need, how many trucks and haulers do you need? And then you just run that formula for like 20 years, 30 years. I mean, it's a long period of time.

[00:21:36] Jeremy Utley: Are they revisiting those assumptions annually or 'cause it feels, yeah. Wow, that's

[00:21:40] Qasar Younis: Exactly. So then suddenly you're like, oh, we can run these machines more.

We don't have a labor shortage here, or we can run them because we're not having to do shifts around people's exhaustion and things like that. The other thing that we haven't talked about: it's kind of like commercial trucking, which in the autonomy space is often talked about. It's a

pretty terrible job. Uh, if you're a long-haul trucker, every health metric for you is worse than the average person's. And today there's a kind of untold story. You sometimes see these headlines, like, people don't wanna work at McDonald's. You'll see this kind of thing.

The question is, where do those jobs actually go? Well, they're going to places like DoorDash or Uber. And the reason DoorDash and Uber are better is you can turn it on and off when you want. You can be with your family. If you have 16 hours you wanna work, and you can do it safely, you'll do that.

I mean, I talk to drivers all the time at Uber, and I always ask them, what's your typical shift? And consistently they say, I'll try to do 10 to 12, but I'll break it up, because I have kids to pick up and drop off. If they're working in a long-haul truck or they're working in a commercial mine... Literally, the last Uber driver I talked to was a long-haul trucker, and I said, how do you juxtapose that job versus Uber?

And he goes, you know, they're both hard, they're physically demanding, but this is way better. I sleep in my own bed and I see my family every single day, and if I don't feel good, I can go easy. I don't have to tell anybody. It's my car. So those underlying dynamics are impacting mining, and so the automation of mining becomes a much higher priority, because suddenly

nobody wants to go. Even in Australia, as an example, which is, you know, kind of the mining mecca of the world, the compensation for fairly, I would say, semi-skilled labor (it's not unskilled, but it doesn't require a PhD) can be deep in the six figures, and they still can't recruit somebody to go, because you have to go for three months at a time, or

[00:23:36] Jeremy Utley: It's like joining the military or something, right? It's like being a part of the Roman Army, right?

[00:23:42] Peter Ludwig: Yeah. Uh, it can be much worse, because even in the military, unless you're in a war, right, you're not actually subject to conditions like those mines. But I also wanted to talk for a minute about our actual technology, to provide a little more depth into what we actually do.

So there are three areas, right? Engineering tools is really the foundation of a lot of what we've done. It allows our engineering teams, and our customers who purchase our tools, to just build amazing systems, uh, using AI and autonomy technology for vehicles.

[00:24:10] Qasar Younis: We have a, uh... just on that: a lot of times, for your listeners, they think about AI as this consumer application you interact with. The real question is, let's say there's five different companies working on AI,

what are they actually doing? All of the stuff that you need to do: like, you know, ingest data, figure out which of that data is gonna be good for this model, and, uh, make sure you evaluate the quality of that data. Then you train that model, and then you actually, you know, deploy it. All of those are engineering tools, and an AI company is as much a tooling company as it is anything else.
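
To make that tooling loop a little more concrete for readers, here is a minimal, purely illustrative Python sketch of the stages Qasar lists: ingest data, filter for quality, train a model, deploy it. The function names, data shapes, and quality threshold are hypothetical placeholders, not Applied Intuition's actual tools or pipeline.

# Illustrative sketch of the tooling loop described above:
# ingest -> evaluate/filter quality -> train -> deploy.
# All names and thresholds here are hypothetical, not Applied's products.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    features: List[float]
    label: float
    quality: float  # e.g., sensor confidence or annotation agreement

def ingest() -> List[Sample]:
    # Stand-in for reading logs from vehicles or simulation runs.
    return [Sample([0.1, 0.2], 1.0, 0.9), Sample([0.4, 0.5], 0.0, 0.3)]

def evaluate_quality(samples: List[Sample], threshold: float = 0.5) -> List[Sample]:
    # Keep only samples whose quality score clears the bar.
    return [s for s in samples if s.quality >= threshold]

def train(samples: List[Sample]) -> List[float]:
    # Toy "model": average the feature vectors (real training would go here).
    n = len(samples) or 1
    dims = len(samples[0].features) if samples else 0
    return [sum(s.features[i] for s in samples) / n for i in range(dims)]

def deploy(model: List[float]) -> None:
    # Stand-in for packaging the model for an on-vehicle runtime.
    print(f"deploying model weights: {model}")

if __name__ == "__main__":
    deploy(train(evaluate_quality(ingest())))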

[00:24:42] Peter Ludwig: And in our case, those tools are tailored to vehicles and vehicle technology. And that's actually quite a bit different than, like, chatbot stuff, because of all these safety implications.

[00:24:52] Qasar Younis: So there's a couple dozen tools we sell.

[00:24:55] Peter Ludwig: Exactly. And then we also use them ourselves. So that's engineering tools.

We also make a vehicle operating system. This is a bunch of embedded components that actually run on the compute in vehicles. And so, uh, earlier you were asking, well, what are, like, the different models that run vehicles? Well, fundamentally we have a compute box, a high-powered AI compute box, that goes onto these vehicles, and then our operating system runs on that.

And then, using our engineering tools, we can deploy our applications onto the operating system. Those applications are our various autonomy software, but also a number of other applications that run on these vehicles.
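
As a rough mental model of that layering, here is a small, hypothetical Python sketch: applications get deployed onto an operating system that runs on the vehicle's compute box. The class names, the TOPS figure, and the deploy step are illustrative assumptions only, not Applied Intuition's actual Vehicle OS APIs.

# Hypothetical sketch of the layering described above: applications are
# deployed onto an operating system that runs on the vehicle's compute box.
# Class and method names are illustrative, not Applied's actual APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComputeBox:
    name: str
    tops: int  # rough AI compute budget, in TOPS (illustrative)

@dataclass
class VehicleOS:
    hardware: ComputeBox
    apps: List[str] = field(default_factory=list)

    def deploy(self, app: str) -> None:
        # A real system would validate resource budgets, safety requirements,
        # and signatures before installing the application.
        self.apps.append(app)
        print(f"{app} deployed on {self.hardware.name} ({self.hardware.tops} TOPS)")

if __name__ == "__main__":
    vehicle_os = VehicleOS(ComputeBox("haul-truck-ecu", tops=200))
    for app in ["perception", "autonomy-stack", "fleet-telemetry"]:
        vehicle_os.deploy(app)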

[00:25:28] Qasar Younis: Yeah. And so, well, one way to think about it: there is a company that actually did this model in the previous generation.

It was Microsoft. Microsoft started, from 1975 to '82, as a tooling company. Then they got into operating systems, and ultimately got into applications. And for us, rather than being on PCs (this is the Microsoft of, let's say, the seventies, eighties, and early nineties), instead of being on PCs, we're on cars. And, uh, you know, today we've talked a lot about mining just 'cause of the Komatsu thing. Uh, we've talked about defense. But, you know, our bread and butter, and kind of where Peter and I come from (I went to the General Motors Institute), is automotive. And automotive is truly a consumer application, in the sense that every single person on the planet, to some degree, no matter where they're at, is gonna interact with cars almost on a daily basis.

And

[00:26:14] Henrik Werdelin: On the car stuff, for example, if we're just shifting gears a little bit. Got that pun in, right? Um,

[00:26:21] Qasar Younis: you're the

first Henrik. We've

[00:26:23] Jeremy Utley: this pun, sadly

[00:26:24] Henrik Werdelin: never heard that before, right? I'm sure. Um, it's like when people talk about my company BarkBox and try to, like, outdo me in dog puns. I'm like, I'll raise you a wolf.

Um, the UI, for example, is changing on the computer because of AI, right? Because now voice-to-text is something that actually kind of works. As we're kind of seeing how your system is being built: is there still a dashboard, you know, is there still a wheel? Like, are we talking to it?

As you're kind of exploring what the most efficient UI is, what is that gonna be in this kind of autonomous world?

[00:27:02] Qasar Younis: Yeah. You said two things that kind of mix together, but they're actually quite different: the interaction with the machine, and self-driving. So if you are in a situation where it's, you know, what they call an L2++ system, you're still expected to be in the loop of the drive, which means if the car disengages, you have to take over,

i.e., you have to be at the wheel. That interface is gonna be different from one where you don't have a steering wheel and you don't have pedals. And I do think we're seeing the early rays of light of that future emerging, a future where there will be vehicles that won't have steering wheels and pedals.

I, I don't think it's anytime in the next couple of years,

[00:27:42] Henrik Werdelin: But

I'm just being more, you know, practical, in like: is that actually the best UI? Like, I would imagine, going into a dumper, you might want to press a button once in a while, just because it's gonna be weird to say, like, hey, dumper, drive. Right? You know, like... So, as I'm sure you do a lot of, kind of, evaluation of what the most effective interface is, where does that take you?

[00:28:04] Qasar Younis: It's a mix of voice and actual interaction, what you'd consider, uh, you know, traditional human-computer interaction, HCI. I think, over a longer period of time, it'll be almost all voice. The main reason is the example you just used, like turn left or turn right.

The vehicle is gonna know that already. So it's, it's much more of a context kind of question. Like, there, there, there's a faster way or there's a smoother

[00:28:29] Henrik Werdelin: I'm fascinated by that. Do you think even, like, in a fighter plane 30 years from now, they're gonna be like, "shoot the bad guy"?

[00:28:36] Qasar Younis: So that's very different. Yeah, that's very different. We just made a huge context change. Uh, yeah. So, in those types of situations, I think the human... because now, using the example you just used, it's like going from using ChatGPT to being somebody who's writing software.

Those are two different things, so the interfaces are gonna be different. Um, I don't think they'll merge anytime soon.

[00:29:02] Peter Ludwig: And also, 30 years from now, frankly, it's quite unlikely that there will be any people in fighter jets. Right.

[00:29:08] Qasar Younis: Yeah. That's, that that'll, I think that'll be sooner than 30 years.

I think we're looking at the last true fighter jet being developed right now. Uh, I think afterwards it, it will, yeah...

[00:29:17] Peter Ludwig: It'll, it'll be all, all drones. Autonomous,

[00:29:19] Qasar Younis: autonomous craft. Yeah.

[00:29:20] Jeremy Utley: Yeah, my daughter wants to join the Air Force. You're telling me that I might need to redirect her, career-wise? Is that what you're saying?

[00:29:26] Qasar Younis: I mean, there will be people in the Air Force; they just might not be pilots. So I think that, as a military branch, it might even become more important than it ever has. And Space Force, I think, will become more important than it has, just because of the nature of conflict.

[00:29:42] Peter Ludwig: Yeah, no matter what. I mean, the Air Force is going to be the primary procurer of, um, drones and drone-warfare systems.

[00:29:49] Qasar Younis: Yeah, exactly.

[00:29:50] Jeremy Utley: So we've been talking about how AI can kind of revolutionize your deployment of AI. Can we talk about your use of AI at Applied? What are the coolest, most interesting ways you've seen your team, or yourselves, deploy AI to kind of get that 10x human outcome?

[00:30:11] Peter Ludwig: Yeah, I mean, the most obvious and apparent, of course, is the use of AI coding tools, which are super interesting and valuable. I think it's actually probably the killer enterprise use case for LLMs, and obviously Anthropic is seeing, I think, good success from that, and them choosing to focus on that area with, uh, with Claude was a good strategic move, because every software engineer in the world can get a productivity boost from using these models. And it just takes care of a lot of the boilerplate. I think you do end up seeing there is a certain complexity ceiling that you hit with those tools, and that ceiling keeps rising, which is great. Um, but when you're working on really hard problems, the LLMs become less useful.

Uh, still, again, there's a lot of boilerplate, which the tools handle, so it's still a productivity boost no matter what. Yeah.

[00:30:58] Qasar Younis: And just to keep it entertaining, rather than saying what everybody says: the bear case on, uh, on code-completion tools is, you know, you don't have to write a lot of software to know.

More software doesn't necessarily mean better software, especially in systems like ours. We build heavily optimized, performant systems, because you don't have endless compute available when you're on the vehicle, when it's actually doing inference in the real world. Now you've gotta pay for those chips, and

[00:31:26] Peter Ludwig: every line of code has a lifetime maintenance cost. And so more code does mean more maintenance.

[00:31:31] Jeremy Utley: technical debt, right? Technical debt's a real

[00:31:33] Henrik Werdelin: Do you have to make your own LLMs? I would imagine, for a lot of the systems that you're using, you have to write your own models, right? Because they have to run on hardware.

[00:31:43] Qasar Younis: Oh yeah, we, we, train

[00:31:44] Peter Ludwig: a lot of models. Uh, the, the models that we train though, they are. More specific to the vehicle technology aspect. Um, for, for generic things like, like coding, , the off the shelf models are, pretty great and you can do a lot of, uh, uh, prompting, prompt engineering to those models to sort of get what you want at the end of the day.

[00:32:00] Jeremy Utley: Is code gen the primary use? Are there other cool uses? I mean, here, I'll project for a moment, and you guys correct me if I'm wrong: a radical AI autonomy company in Mountain View, California (where I've lived for, you know, 12-plus years myself, by the way), a radical, futuristic company, and you're telling me the coolest use case of AI among your workforce is code gen?

[00:32:26] Peter Ludwig: The reason I say that is because that's sort of, like, the infinity use case, right? If you can do code gen, you can do anything, and so this allows

[00:32:34] Jeremy Utley: That's a Sam Altman answer. That's a Sam Altman answer. If we just get it to write better AI, then that solves everything. There's not, like, performance-review lists? There's not, like... I mean, Peter, right before, uh, Qasar joined, Peter was selling us out: all of a sudden you're professionalizing your recruiting process, and the brand is out there in a way that it wasn't before.

Aren't there fascinating ways that those teams are getting a huge augmentation? You've gotta spill.

[00:33:03] Qasar Younis: Yeah. Yeah. So, yeah, I guess we're, like, two Detroit guys who, you know... I think we lean into being boring. But let's give you some exciting answers.

[00:33:13] Jeremy Utley: Come on, no more bears. No more bears. I want

[00:33:15] Qasar Younis: Yeah, exactly. Uh, you know, by the way, all of our values, we boil them down into two words, which is radical pragmatism.

And

[00:33:21] Jeremy Utley: I, I was gonna ask you about that. I have that in my notes. You see my

[00:33:24] Qasar Younis: Oh, nice, nice, nice, Uh,

[00:33:26] Jeremy Utley: going to ask you about those words.

[00:33:28] Qasar Younis: So, any company... we're getting to scale as a company. You know, we're over a thousand engineers, we do hundreds of millions in revenue. Uh, and so we're already, let's just say, becoming a stodgy company, for lack of a better word, right?

You're already starting to ossify. And so as this revolution hits us, the way that we've done it on the non-technical side (now, remember, 82% of our company is software engineering, so it is a very technical company; Google is 50-50, to give you some context here) is we've told all the commercial departments, legal, design, people operations, et cetera, that, uh, they have to... the first request,

the gentle request (again, even Peter and I are gentle leaders), our first gentle request was, you know, use AI in your workflows. And, you know, what happened is, especially with experienced folks, they're like, actually, this thing that we do works pretty damn well and we've already, you know, made it as efficient as possible.

And so our kind of second wave and third wave is doing things like: we have, you know, weekly live all-hands, and we have departments go up and actually showcase, hey, this is how we're using AI in our function, in the legal function within the constraints of what a legal team has to do, or within, you know, the design function, et cetera.

That creates an environment where everybody thinks: for every problem I interact with, my knee-jerk reaction should be, is there a new tool that solves this rather than an old tool?

And

[00:34:54] Jeremy Utley: You do? Every week, at your all-hands, you have one of the teams get up and share how they use

[00:34:58] Qasar Younis: Exactly. And so all it is is keeping it top of mind. And at some point that lift just started happening, where the knee-jerk reaction... And then the thing that's the most, I would say, exciting, or that we typically don't talk about but is the most practical implementation, is:

well, a long time ago we started an in-house software team that I personally lead. The shorthand is: it's the software that runs the company. And the software that runs the company obviously shouldn't be just a web app with org charts.

And so we're making that more and more intelligent; it really is becoming the brain. We dump, you know, our own... it's becoming like the knowledge center. Like, literally last night, the person who runs that product, Dina, sent me a voice message and said, hey, I think we're ready to roll out essentially, like, a brain version of this based on these inputs.

So I think it's still early days, but if you saw it, it's interesting. I think we keep compounding, keep working on that software; our enterprise will literally be a more advanced enterprise. And we can do that because we just use AI a lot in the company. We develop it, we make it, so then we point some of it towards the enterprise.

[00:36:03] Jeremy Utley: Talk about "gentle," because I want to come back to this. Or push on it, pull on it, I don't know. How do you wrestle with the question of gentle? 'Cause that's a word you emphasized, and you

know, you've got CEOs like Tobi Lütke, you know, saying, hey, you cannot hire unless you can demonstrate that AI can't do the job.

Right. You've got the Fiverr CEO, you've got the Duolingo CEO making very kind of bold pronouncements. How do you think about leading the, they call it organizational transformation? Like, if you look five years in the future, you can't imagine everyone not being AI-augmented. Do you think it just happens naturally? Or how, how do you...

[00:36:39] Qasar Younis: Yeah, I, I, I said that, uh, I said that gentle word sarcastically.

[00:36:42] Jeremy Utley: Okay. Oh

[00:36:43] Qasar Younis: I don't think anybody, yeah, yeah, anybody who works with Applied or at Applied would ever... you know, I think the most common word they use is intense.

Uh,

[00:36:52] Jeremy Utley: So you aren't. But, but I think it's actually good, because listeners don't have the context you're describing. You're so clearly not gentle that to say the word gentle is actually a joke. So when you say you gently recommend (I'm just looking back at my notes, right) that all functional leaders use AI in their workflows, here's where I wanna get very specific: what's the consequence of not doing it?

[00:37:15] Qasar Younis: I mean, um, so, like, the Duolingo set of examples that you used, I've caught little bits. Another kind of controversial thing is me and Peter are not on social media. We tend not to... for a thing to reach us, it has to hit a certain frequency for, you know, our ears to hear it. My assessment (you know, I worked at YC for many years, and I was very deep in the startup ecosystem) is that half of that stuff is entertainment. So let's leave the entertainment and the bombast to the side. The practical reality of leading an organization is, if somebody's a great head of design or somebody's a phenomenal general counsel,

you're not gonna fire them because they didn't try some startup that says it can read contracts faster. So I wanna have that direct conversation of: what are the actual limitations of these products that are out in the market now? The leaders should be able to articulate that. If they, like, shrug and say,

"don't know, boss," then we have bigger problems. So I think it's not these, like, you know, simple "use this or get fired" things. Also, our company (I mean, no disrespect to, like, a Duolingo) is probably an order of magnitude more complex, if not multiple orders of magnitude. We work on AI safety systems that can deploy to machines that humans use. It is the most complex; we are as technical as a technical company can be. Like, we are at the level of an Anthropic or an OpenAI in terms of raw technical... A double-digit

[00:38:39] Peter Ludwig: percentage of our company have technical PhDs. Uh, the majority of the company has graduate degrees. Uh, so, like,

the caliber is very high. And then, because of that, in each of these fields there's just a lot of adoption. Like, an example is in IT: you can automate 80% of the IT tickets that come in for basic stuff. In project management, you can actually get a better organizational structure of deliverables using it. And in compensation analysis, right?

We can use that to figure out if there are outliers in the leveling and compensation scheme. So there are all of these things that sort of just fall into place, since we have, I would say, the right leaders in place in different...

[00:39:13] Qasar Younis: Yeah, for other leaders who are listening to this, I think the point is: you can use AI adoption as a proxy

for how good your leads are, but it should not be the sole, like, decision-maker, right? Because then you're also gonna get, you know... when your organization gets to any level of size, 50 people, a hundred people, your employees, human nature, they're gonna start filtering and saying things to you in certain ways. And so if you do something simplistic like "use AI or get fired," guess what?

You're gonna have a crappy lawyer who's gonna say the right things to you, and suddenly they're the GC because they're quote-unquote adopting AI. And the actual lawyer who's really good at their job, who hasn't, gets fired, and then you actually put your company at big risk.

[00:39:56] Henrik Werdelin: Can I ask... to go back to the human side: you know, obviously a lot of the stuff that everybody's building, including you, is stuff that at the end of it is in service of a human, right? Like, we build autonomous driving so that it can drive us around, and then we build something that probably works, and then lawmakers change the law, and so the software update comes in and suddenly it doesn't drive automatically, because, you know, there was a bug somewhere that killed a person, which is understandable. But it would suggest to me that a lot of the stuff that you do is actually solving the technical problem, and I would imagine that you guys have just as many challenges understanding the human problem. Why is it that the person sitting in the big dumper doesn't do X, and all those things? How do you actually, from an organization point of view, start to internalize that into an organization that is so technical? Like, how do the human understanding and the technical understanding overlap?

[00:40:55] Qasar Younis: I mean, what's also different about our company being an enterprise company is, for all of these technologies, we're partnering with the manufacturer who makes these machines and knows these markets well. We don't know anything about mining. We don't know anything about commercial trucking. But when you work with literally the top commercial trucking companies on the planet... As an example, in commercial trucking, the real problem, as much as the cabin, is actually that they're small businesses. A lot of commercial trucks are owned by, you know, an LLC that owns maybe one to three trucks. And so the software and AI problem, as much as it's in-cabin and fleet, it's the fleet management stuff, and it's the maintenance. And for a lot of people, that's their biggest, most expensive asset

as a family, that truck or those two trucks. Yeah, exactly. And so our partners are the ones who say, point your AI technology towards this set of problems. And then we take on those sets of problems, and it's all under the vehicle intelligence umbrella. But, um, that's how we have been able to work in lots and lots of different fields.

It's not because we know all these fields really well. We actually are just fundamentally an AI company, but instead of our AI focus being LLMs, it's within vehicles. And then we partner with manufacturers, and this

[00:42:02] Jeremy Utley: Can I ask an, um, innovation-level kind of question? Henrik and I both have kind of a shared passion and curiosity (and morbid interest, you could say) in the challenges organizations have with innovating. I'd be curious to hear from you. You mentioned earlier that you studied at the GM Institute. Why isn't Applied Intuition coming out of GM?

Can you just talk for a second about the challenges of innovation and your observations?

[00:42:29] Qasar Younis: That's interesting when you say that. So, uh, I mean, in some ways it did. I'm an alum of General Motors as well, not only GMI but also the company. Uh, so in some ways it is the case. Even earlier, when we were talking about all these self-driving companies that ultimately didn't make it: you know, the way they do make it is that a lot of those alums

who learned all that stuff are in other companies, and they're pushing Tesla and Waymo and Applied Intuition to production. So I think it's incorrect to say that, you know, if somebody works at General Motors and General Motors doesn't create an AI, somehow General Motors didn't contribute to it.

It did, but not in a direct way. That's kind of how the

[00:43:04] Jeremy Utley: It's not affecting, say, their market.

[00:43:06] Qasar Younis: Market. Exactly. So that question is actually a separate question, which is: you could actually have the talent inside the company, but why can't the company almost extract that talent? And that's actually not a General Motors or a manufacturer problem. That is a large-organization problem, because a better example than General Motors is Google. Google invented this technology; OpenAI is a little group of people who've actually monetized it. Why did that happen? Because that's actually a way more complex thing, 'cause it's literally Google's job to make that technology and make it great, and they didn't.

Uh, and so I think the reason Silicon Valley is successful is there's something about small teams that are incentivized through equity that just do really well. And it's been 75 years of proving that case again and again. When I was at YC, the question would always come up:

will there ever be an era where there won't be startups? No, I actually think there's more likely to be an era where there won't be really large tech companies before there's an era without startups, just because of the incentive structure. And the reality is, when you work as a group of five people... why is it that five people don't need to do sync meetings, and they don't need to do, like, you know, the Monday morning "what happened last week"?

'Cause you're just... literally, when we were in a living room with five of us, our all-hands would be, we would twirl the chairs and we'd say, okay, uh, let's talk about what's going on. Often in those all-hands we were so in line with each other, there wasn't much to talk about. And so then we'd be like, is there anything we haven't discussed?

So now the question is, why doesn't it happen in a large company? It's because of the structure of communication, once you start adding layers of managers and you have to have central control of it. So the hypothesis would be, if you had tons of small groups that were loosely affiliated, that could be a, a

[00:44:55] Jeremy Utley: A better

[00:44:56] Qasar Younis: sort of style of enterprise. Yeah. Now, companies have tried this. I think Steam had this one version of, like, a managerless office and pods and stuff. We tried actually going without an org chart when the company was, you know, sub-75 people. Extremely difficult. There's a reason for these org charts. It's kind of like when you look at human societies around the globe:

the concept of a nuclear family with parents educating their children, school systems, hospitals. Across a huge number of cultures, they all look shockingly similar, right? They all have jobs. Rarely in societies today do you not have some version of an occupation that you specialize in. You don't just stay at home.

So why does that happen? It's somehow the environment that gets created. So I think the, you know, trillion-dollar question is: how can you make large organizations operate like small organizations? And that's like, how do you make a tall person operate like a small person? It's almost paradoxical, because the incentives are not correct and the communication structures are not ideal for that reality.

[00:45:54] Jeremy Utley: As a fellow kind of student of this topic, if you're curious, one book I'd recommend is Safi Bahcall's, uh, Loonshots. I dunno if you've read that, but he kind of talks about this.

[00:46:03] Henrik Werdelin: I thought you were about to pitch my old book, Jeremy, the Acorn

[00:46:07] Jeremy Utley: No, No, I wasn't. But there, this, the podcast would not be complete without, without us sharing at least one book that we've written.

Um, no, but Safi Bahcall's book, uh, Loonshots, is quite interesting. One thing that he gets to, Qasar, which is just to your point: I think it's Dunbar's number, this idea that about 150 people is kind of the max before things start to break down. Um, and anyway, uh, you were reminding me of that as you were talking.

[00:46:35] Qasar Younis: One more thing on Google, by the way: we're both ex-Googlers, and a huge part of the company is from Google. At some point, when the company was a few hundred people, I think up to 400 people, it was still majority Googlers. So we have a lot of Google people in the company.

Uh, and so, you know, we know the company quite well. If Google had really, in the era of, let's say, 2010, when the company was at scale, making like a billion in cash flow a month... what the company did at that time was start the moonshots, you know, which ultimately became Waymo and that organization, X, right?

Yeah. That came outta there. And then Alphabet kind of emerged another five years later. If you could go back in time and, let's say, Google instead did something different: they believed large organizations are never gonna be effective, and we're basically gonna be like a holding company, a venture capital fund.

If you look at YouTube, YouTube is a great case study. It was left alone and it grew into this thing. If YouTube had become Google Video, YouTube would not be what YouTube is today. And I think everybody, you know, you don't have to have a PhD in, uh, tech strategy in Silicon Valley to get that.

And so if Google could have replicated that across hundreds of companies, there's a version where Google is actually just, almost like, a guild of companies. And one of the reasons that large organizations are more efficient is the sharing of learnings; you share recruiting learnings across the company. We have 30-some products. All the products learn. When one product learns something about marketing, all the products benefit from that. So if you have this guild of organizations where you share this information, and an incentive to share, maybe that could work. But that's not our business. Our business is vehicle intelligence.

So.

[00:48:10] Henrik Werdelin: I think that might be... I know everybody has a hard stop coming up. So I will, um, be the downer that kind of like throws

[00:48:16] Qasar Younis: For Henrik, that's a funny way of saying bored. Uh, but yeah, let's, uh...

[00:48:20] Henrik Werdelin: I mean, I just couldn't keep my eyes open anymore. So it was that, or,

[00:48:23] Jeremy Utley: We often, in these conversations, are at like the most interesting point when everybody's like, finally, you know, and we're like, oh, join us again for another interesting one. It's, it's Seinfeld, right? You gotta leave on a high note.

[00:48:34] Henrik Werdelin: That's awesome.

[00:48:36] Jeremy Utley: You guys were amazing.

[00:48:37] Henrik Werdelin: just Make.

Mr. Utley,, It was nice to talk about something that wasn't kind of like, uh, another thing that came up, like comes up in a text prompt, right? This is something that is

[00:48:49] Jeremy Utley: You know, I mean, uh, I thought it was hysterical the number of times we talked about human-tank interaction. It's just, it's hysterical. You know, one thing I find myself kind of wondering about, as Qasar painted the bear case, so to speak, is really, you know, um, the question of where the average will shake out. It's hard for me, and maybe I'm optimistic or Pollyanna about this, because becoming a good collaborator with AI is so, uh, accessible.

It requires no advanced degree. It requires minimal advanced training. It just requires some intention and maybe a little bit of practice, but everyone can be a top 1% collaborator, or get top 1% outputs out of AI, if they give it a little bit of attention. And so the bear case, you know, is saying that for most people it's effectively just gonna remain information retrieval.

That makes me sad to think about because it just means the vast majority of people will hardly scratch the surface of possibility, hardly scratch the surface of capability. That's something I'm thinking about. I don't, I don't know if you have any thoughts on that from where you sit.

[00:50:01] Henrik Werdelin: No. I mean, I do think he just pushed this as the bear case, right? You know. Where my head kind of went a lot was: I think when we talk specifically about generative AI, a lot of us are just very anchored in the fact that it can generate images and text and code. And as I was kind of reading up for this conversation, I was trying to understand a little bit more, like, how is AI being used in all the places that we don't normally talk about? And I stumbled onto this case, which I'm not sure how real it is, but basically one of the issues in creating fusion energy, like unlimited energy, uh, is to keep the plasma stable, and people are now using AI to really help with that, you know.

I don't know how that actually works, but, like, you know, keeping the plasma stable so that when they have these two particles kind of colliding, they can keep, it's not quite a force field, but like the field stable. And so one of the kind of unsung benefits of AI is that it now kind of works as a participating actor in some of these things that could obviously revolutionize the whole world, 'cause if we have unlimited, pollution-free energy, that will be wild. And so what I think this conversation really made me think of is like, ah, there's probably this whole universe outside the stuff that we normally talk about, with autonomous cars being just that first step removed.

But as we keep taking that step, removing ourselves from the core of just generating text, code, image, and sound... then, uh, yeah, like, my head just normally doesn't go there, and so it was nice to kind of explore that.

[00:51:46] Jeremy Utley: Yeah, I think it was wild to me. I mean, it just shows kind of what maybe sheltered lives we live. I mean, I work with Komatsu as an example, so I'm familiar with them as a business, certainly. Um, and yet, the thought that there are machines running 24/7 at sites in the middle of nowhere, for decades.

Decades. And people who are spending months of their lives at a time there. It's such a different world, where visibility is limited because of dust, where humans are at risk because the tires alone are 12 feet, you know, in diameter. That is such a different world from our day-to-day that in some ways it's really exciting and invigorating to realize there are people like Qasar and Peter who have dedicated themselves to improving the safety, and you could say the humaneness, of those conditions. I mean, it's really cool. It was a conversation that made me feel more optimistic, actually, about the impact of technology.

[00:52:54] Henrik Werdelin: The other thing that I was thinking about is, I mean, like, I like to hate on San Francisco because I don't live there. Um, but it is, it is

[00:53:03] Jeremy Utley: You always hate on it publicly, but folks privately, Henrik's always texting me going, you're so lucky that you live so close.

[00:53:10] Henrik Werdelin: But it is incredible, right? Like, you have people like these two folks who are, you know, clearly very smart, and then they're ambitious, right? You know, in full transparency, I had never heard about their company before. And then one of my friends joined, and then we heard about them, and it's, I think, a $15 billion company.

They do incredible work. They have very big ambitions. They're doing something that's very complicated, and, you know, you have to just admire the ambition level and the ability to think really big of a large group of people, uh, specifically kind of on the West Coast of the US. And so, yeah, this is another one where I was like, ah, I want to hate on it,

[00:53:54] Jeremy Utley: Well, speaking of, you know, speaking of large organizations: as you know, and probably is the case for you, the end of the conversation is always my favorite part. We were talking about what it looks like to enable innovation in an organization, and Qasar's comments on the size of the organization

mattering. The fact that, I mean, they're a startup, right? I mean, they've raised, whatever, 500 million bucks, so they're not nothing, but they've got a thousand people, and he said, quote, we are already stodgy. You know, I think anybody who's got a 10,000-person or a hundred-thousand-person organization goes, wow, what I wouldn't give to only have a thousand.

And yet he's noticing how many of their functions are starting to be, what did he say? Sclerotic, right. Just calcified. And to me it's really interesting. I mean, they are still able to do something, which I think is a great technique: at the weekly all-hands, do a showcase of how folks are working with AI to keep it top of mind.

But it is an interesting challenge: what is the ideal organizational configuration to continue to invent the future and to future-proof yourself? I liked his kind of alternate-realities, kind of Black Mirror version of Google, which is a guild of small teams. How many more YouTubes might there be? I don't know.

What would the cost to Google Search be? I don't know. But it's an interesting kind of alternate reality to kind of entertain. And I think that's actually the perfect note to end on. Could we have the secret code word be "alternate reality"? Or "Black Mirror," I think, is a perfect code

[00:55:27] Henrik Werdelin: That's actually two words, just pointing it out.

[00:55:27] Jeremy Utley: Two code words to end on. It's two sets of two words. It's deeply confusing. If you're confused and you only want one code word, just say "confused."

[00:55:36] Henrik Werdelin: But whatever you do, please like and subscribe and share with your friends and put it on LinkedIn and email us, uh, if you have any good ideas or questions, 'cause that

[00:55:46] Jeremy Utley: Tell us who we should interview next.

[00:55:48] Henrik Werdelin: Yes.

And with that, bye.