
June 10, 2025  |  55 mins

A new era is emerging where engineering drives AI—and AI transforms engineering

This week Matt Kirchner is joined by Dr. Pramod Khargonekar—Vice Chancellor for Research at UC Irvine and lead author of the ERVA report AI Engineering: A Strategic Research Framework to Benefit Society. Dr. Khargonekar unpacks the emerging discipline of AI Engineering, where engineering principles make AI better, and AI makes engineered systems better.

From robotics and energy systems to engineering education and data sharing, this episode dives into the flywheel effect of AI and engineering co-evolving. Pramod explains the real-world impact, the challenges ahead, and why this moment represents a generational opportunity for U.S. leadership in both innovation and education.

Listen to learn:

  • How AI is changing every branch of engineering—from mechanical to civil to industrial and beyond.
  • Why manufacturing, energy, and transportation are ground zero for “physical AI”
  • What the 14 Grand Challenges of AI Engineering reveal about the future of innovation
  • Why systems thinking is the key to building AI products that actually work
  • How colleges must rethink engineering education—and what industry can do to help

3 Big Takeaways from this Episode:

1. AI is transforming every branch of engineering—from design and simulation to manufacturing and operations. Pramod explains how fields like robotics, fluid mechanics, and materials science are being reshaped by tools such as reinforcement learning and foundation models. This shift isn’t just about efficiency—it’s enabling engineers to solve problems they couldn’t approach before.

2. Engineering will play a critical role in advancing the next generation of AI. Pramod highlights how engineering disciplines contribute essential elements like safety, reliability, power systems, and chip design to AI development. These contributions are vital to scaling AI into real-world, physical systems—what he calls “physical AI.”

3. To lead in AI Engineering, higher education must integrate AI into every engineering discipline. Dr. Khargonekar outlines how universities can start with shared foundational courses, then build field-specific AI applications into majors like mechanical or electrical engineering. He also emphasizes the importance of short courses, professional development, and industry partnerships to support lifelong learning.

Resources in this Episode:

Connect with ERVA on Social Media:

X  |  LinkedIn  |  Facebook




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Matt Kirchner (00:00):
Matt, welcome into The TechEd Podcast. I am

(00:10):
your host, Matt Kirchner. Every week we do the important work here on the podcast of securing the American Dream for the next generation of STEM and workforce talent. I'm also a huge believer, by the way, that securing that dream for the next generation is not going to look the way that it has for past generations. We are in a world of advancing technology, of so

(00:31):
many great innovations taking place in the world of education, across the entire economy. How we secure that dream in the future is going to look different. That is a big part of what we are going to talk about on this episode of the podcast with a really, really distinguished guest. I'm going to take just a moment to talk about this individual's background. I should note that his name is Dr. Pramod Khargonekar,

(00:55):
and we are really, really fired up to have Pramod on this episode of the podcast. Let's take a look for just a moment at his background. He is a distinguished professor of electrical engineering and computer science and the Vice Chancellor for Research at the University of California, Irvine, called in short, of course, UC Irvine. He's the former Assistant Director for Engineering at the National

(01:18):
Science Foundation. Think about that: Assistant Director for Engineering at the NSF. Leading national strategy on engineering research and workforce development is what he did there. Currently, he is guiding what we call ERVA, the Engineering Research Visioning Alliance. That organization brings academia, industry and government together

(01:41):
as we shape the future of engineering in America. Finally, he is the lead author of the 2024 ERVA report, AI Engineering: A Strategic Research Framework to Benefit Society. Now I will tell you, having read his background and his bio, that we are just scratching the surface on our guest's experience. I'm sure we're going to get into a lot of

(02:01):
details over the course of our discussion, but for now, let me welcome to the studio of The TechEd Podcast Dr. Pramod Khargonekar. Pramod, it's so awesome to have you here. Thanks for coming in.

Pramod Khargonekar (02:12):
Well, Matt, I'm so excited to be on your podcast. Thank you for inviting me. Great to be here.

Matt Kirchner (02:18):
It's so awesome to have you. We're going to have an awesome conversation just thinking about all the work you're doing. And as our audience knows, and you will soon know if you don't already, I, like, totally geek out on artificial intelligence, and anybody who's doing really important work in AI, either in industry or education, those are some of my absolute favorite conversations. So I know we're gonna have a lot of fun on this episode. Let's start with this.

(02:39):
You have had really a front-row seat to the evolution of engineering here in the United States of America. We talked a little bit about your leadership at the National Science Foundation, your current work at UC Irvine, and then, of course, the Engineering Research Visioning Alliance. What has prepared you to lead this incredible conversation around artificial intelligence and engineering? And why do you

(03:01):
think, if you do think, it's such a pivotal moment, both in engineering education and in technology? Why do you feel that way?

Pramod Khargonekar (03:08):
Matt, I think you mentioned my background in your introduction. I've been a faculty member in engineering for almost 45 years now.

Matt Kirchner (03:17):
Wow, did you start when you were five? You're aging really well, by the way.

Pramod Khargonekar (03:21):
Well, thank you. I was dean of engineering at the University of Florida. I was chair of the EECS department at the University of Michigan. You already mentioned my role at the National Science Foundation, which was at the really senior leadership of the premier organization that sponsors research and education across the entire United States. So

(03:43):
that's sort of my background from my own field of work. I'm a control systems person, so it's not an AI field, but it's in what I call an AI-adjacent field, so it's totally close to AI. I personally have done a little bit of work, so I don't consider myself an AI expert, but I've done some work more recently on AI and control. So that's sort of the disciplinary background I

(04:05):
came from. But I think for this particular conversation, it was really my role as co-principal investigator on the NSF-funded ERVA project. We call it ERVA, the Engineering Research Visioning Alliance. Got it. Okay, ERVA. Yep, got it. Our mission is to identify and highlight and articulate high-impact,

(04:26):
high-potential areas for the future of engineering. So that's really what we focus on. And the idea for the AI Engineering event actually came within weeks of the ChatGPT release. So you remember very well, in October '22 ChatGPT came out, and then within weeks of that, I realized that we in engineering needed to work on

(04:50):
this tremendous opportunity that was in front of us. And so we started to work on that idea, and we held the event in October of '23. Okay, so that's how I got into this, and that's sort of the background that prepared me to lead that event.

Matt Kirchner (05:07):
Got it. And just a quick question, you know, out of my own curiosity: when we talk about controls engineering, I'm by background a manufacturing guy, right? So I worked around control systems, in my experience, at least starting out, primarily in the manufacturing space. Was that your kind of area of expertise and interest, or was it another kind of controls that you were involved with?

Pramod Khargonekar (05:26):
A great question, Matt. So I started out as a theorist. I mean, I'm still a theorist, so I did sort of the mathematical theory of control. In fact, my advisor was somebody whose name you might know, Rudolf Kalman, okay, of Kalman filter fame. Got it. Yeah, sure. I was one of his last PhD students. So that's the background I came from. But especially when I was at the University of Michigan, I

(05:46):
got heavily involved in control of semiconductor manufacturing. Okay? And we, in fact, had an NSF Engineering Research Center on what we called reconfigurable manufacturing systems, RMS. So I was involved in that ERC, so I have had quite a bit of exposure to control as it related to manufacturing, but then I've also done control for power grids and smart grid and

(06:09):
renewable integration, so many different applications of control.

Matt Kirchner (06:17):
Understood. Manufacturing and energy and so on. And I mean, that whole field, we could probably do a whole podcast just on controls technology and engineering and the evolution there, but we'll stick to the AI topic for now. We've heard this category called AI engineering. So talk about: what is AI engineering, and how would it differ from maybe a traditional AI or computer science field or

(06:37):
focus?

Pramod Khargonekar (06:40):
Right. So first, let me get one thing that is potentially confusing out of the way. Okay, so this term AI engineering sometimes is used to refer to a job category of sort of computer systems software engineers who, you know, build and deploy AI products. And so it's a job

(07:01):
category; people specialize in that. But our definition of AI engineering was much broader, okay, and it refers to this convergence and bi-directional interplay between AI and engineering. So the idea behind AI engineering came out of that event that I mentioned, that was done under ERVA sponsorship,

(07:22):
and what we realized is that there is a virtuous cycle where engineering helps AI and AI helps engineering. And this virtuous cycle, Matt, becomes very real when you think about physical AI. So I don't know if you have seen Jensen Huang of Nvidia talk about, you know, his vision for the future. Certainly. So he says we are right now in the agentic AI era. Next is physical AI.

(07:45):
So when you think of physical AI, this AI and engineering convergence, in my opinion, is going to be absolutely crucial. And so the fundamental idea of AI engineering that we articulated was AI helping engineering and engineering helping AI, writ large. So engineering in all different fields benefiting from AI, and different engineering fields allowing us to build more useful, safer, reliable,

(08:09):
productive AI systems, particularly physical AI systems.

Matt Kirchner (08:18):
So let's talk about, I mean, I think most of our audience, whether they're familiar with the term agentic AI or not, knows we're creating AI agents, some version of software that is using artificial intelligence and leveraging a large language model to solve one problem or another. And there's now tens of thousands, if not more, of AI agents that people have developed. So certainly you think about ChatGPT as just one of

(08:40):
thousands of examples. When you talk about physical AI, or when Jensen Huang talks about physical AI, give us a definition to help us understand what you mean by that.

Pramod Khargonekar (08:49):
So my take on physical AI, which is something that is happening and is going to happen over time, is AI integrated into physical systems. Okay, so think manufacturing, which is your background: AI getting integrated, improving all aspects of manufacturing. You can say the same thing about power systems and grids. You can

(09:11):
say the same thing about transportation, whether that's self-driving cars or the transportation system that controls my life in Southern California. Yeah, right. It's like, okay, two-hour traffic jams. So when I think of physical AI, I think of AI integrating into the physical world in which we live and operate. It also includes AI

(09:32):
into healthcare systems. So that's what physical AI means to me. The other piece that I think we in AI engineering talked about is how engineering is crucial to advancing AI. A great example is semiconductors: you cannot imagine doing today's AI without state-of-the-art graphics chips. That's all

(09:54):
engineering. But it's not just that. Like, the whole issue of power and energy for AI data centers. Another aspect of this physical AI is that AI is not just information processing; it is sort of powered by actual physical energy. There is the whole issue of data that's going to drive development of physical

(10:15):
AI. Where is the data going to come from? What has happened is, you know, we have used all the internet data and things like that to train AI until now, but when it goes to the physical world, and we can talk more about that, we are going to need data from the physical world. And so that's going to be another piece of this AI engineering that we see ahead of us.

Matt Kirchner (10:37):
You know, I knew I was going to love this conversation when we were fortunate enough to have you join us, Pramod. But I will tell you that some of what you just said, and our audience will recognize as well my fascination, some might even say an infatuation, with the whole idea of the edge-to-cloud continuum, and kind of what you're talking about there, where we've got, you know, we've

(10:59):
been talking about in manufacturing, smart sensors and smart devices on the edge, communicating with each other, making decisions on the edge, not necessarily having to communicate, in my case, with a programmable logic controller or a control system and/or a computer network in manufacturing. That's allowing the proliferation of data acquisition out on the edge and all kinds of really cool decision making and advancing manufacturing. And then we talk

(11:21):
about that data going from there to a network, to the fog, to a regional, more local data center, to the cloud, and using the data, and then AI, for all kinds of predictive analytics and so on. I mean, that's the world we're living in today, and super advanced manufacturing companies are leveraging that technology today. And then we say, look, we can have that same continuum, whether it's a smartphone, whether it is in

(11:43):
healthcare, whether it's in energy, whether it's in defense, hospitality, retail, it doesn't matter, that same continuum is happening. And what I'm hearing, and you can tell me if I'm getting this right, when we talk about physical AI, and then the role of engineering: you know, traditionally, in a lot of engineering disciplines, we think about them working with the physical, whether it's a manufacturing operation, whether

(12:04):
it's a smart grid, whether it's a power plant, what have you, working on the theoretical side, but then applying that theory to physical assets and to design and to upkeep and maintenance and all these other aspects of whatever that market space is. What we're really saying, if I'm understanding you right, is, look, in the age of artificial intelligence, and you think about maybe somebody's most familiar with something like

(12:26):
ChatGPT, we're going to have as much focus on how that integrates with physical assets and physical items in our economy and in the world in which we live. And our engineers, whether they are electrical, mechanical, industrial, aerospace, biomedical, whatever, now they've got to have this really tight relationship with artificial intelligence, because, to your point, AI is

(12:47):
going to influence engineering. Engineering is going to influence AI. We're going to create this flywheel, which can be really, really exciting, but only if we've got engineers in that loop that are understanding how AI is going to change our physical world and vice versa. And so now we've got a whole new way of thinking about engineering. That was a long soliloquy on my part to get to

(13:08):
the point. But is that the way we're thinking about this? And am I getting that right?

Pramod Khargonekar (13:12):
Matt, you got it perfectly. And, in fact, you don't need me. You can explain all these things, because you understand it absolutely perfectly, Matt.

Matt Kirchner (13:21):
Well, somebody's got to be doing all the incredible work, research, and promulgating all this incredible information around the concept. And you know, I would say I get to ask the questions and make some comments, but brains much more advanced than mine are the ones that are really doing the cool work, and I would include you front and center in that work, Pramod. So let's talk about the report that I mentioned early on in the

(13:41):
conversation as we're introducing you to our audience. And it describes AI engineering as two things: bi-directional and reciprocal. So engineering makes AI better; AI makes engineered systems better. You mentioned a little bit about that in your last answer, but let's go a little bit deeper. How are you thinking about that? And if I'm an engineer in the field, or I'm a student in an

(14:04):
undergraduate or graduate or PhD engineering program, how do I need to be thinking about that? That flywheel and connection?

Pramod Khargonekar (14:10):
Yeah, so it's bi-directional. AI improves engineering; engineering improves AI. So let's start with AI improves engineering. We see AI improving all aspects of engineering, all the way from fundamental things that you learn in your undergraduate engineering classes, all the way to analysis of products and services, to design, to manufacturing, to

(14:33):
operating these products in the real world. You know, a lot of our engineered systems are operating in the real world, whether that's your car or that's your power grid, operating them, and even end of life, recycling and all of that. So we think that AI can help improve every aspect of this and can lead to greater

(14:53):
efficiency, reduced cost, customizability, time to market. I mean, so it has implications into all aspects of what we as engineers do in our different domains. So that's sort of this one direction, and ultimately, I think it can improve our products, reduce their cost and the overall

(15:13):
sustainability of the whole engineered world. Sure. On the reverse side, we think that engineering fields have something really substantial to bring to help improve AI. So one of the things that we made a point of, and this is not to disrespect people from computer science and from AI fields, one thing that we engineers take extremely seriously is safety. I

(15:34):
mean, it's sort of built into our ethic, right? Like, we never field a product that's unsafe. We take that as a professional ethical responsibility. But it's beyond just that commitment to safety. It's also, you know, we have intellectual traditions like information theory, control theory, signal processing, communications, all of these

(15:57):
things are really important for next-generation AI. And so, you know, take a field like reinforcement learning, which really is a lot of stochastic optimal control, something that I learned as a graduate student, right? And so we think there is a whole bunch of fields in engineering that improve AI. I already talked about the whole chip design area, right? It's

(16:20):
like, today, semiconductor design is changing because of the use of machine learning and AI algorithms. So you use advanced machine learning and AI algorithms to design the next-generation chips, which in turn enable the next generation of AI. So there is that virtuous cycle in semiconductors. We talked about power: right now, having power supply to these big AI data

(16:44):
centers is a huge bottleneck, so we think that we are going to need advanced power systems engineering to enable the next-generation AI data center. So that's that direction of flow: engineering improving AI, enabling AI. And I already talked about AI enabling engineering. There is one example I want to use, Matt, to show another example of a

(17:07):
virtuous cycle, and I think that's around data. So as we know, AI systems require large amounts of data to train the models and use them, et cetera, et cetera. We think that for physical integration of AI into the physical world, we are going to need data, and that data is going to come from engineered systems. So whether that's a manufacturing system, in which

(17:29):
you have deep background, like, if you're going to incorporate AI into a manufacturing system, in any part of it, you're going to need data, and that data is going to come from the physical manufacturing system. So there is that virtuous cycle again, that AI can enable data collection, and the data collection enables advanced AI.

Matt Kirchner (17:49):
Absolutely. Well, and it's really, really exciting for anybody in the field of engineering. And I think about, first of all, you mentioned the term reinforcement learning, which, I mean, I just think is like the coolest thing, and folks, when they hear about it, that, of course, reinforcement learning being a subtopic, or a subset, of machine learning, which is a subset of artificial intelligence, and the whole idea that we're giving an AI a little reward, so as it gets closer and closer to the result that we

(18:11):
want to see from it, it's getting that positive reinforcement, almost like a child who's getting positive reinforcement from a parent, so that the AI learns over time and gets closer and closer to our ultimate goal, whether that's optimizing a power plant, whether that's perfecting or optimizing material flow through a manufacturing environment, whether it's optimizing the activity of an autonomous

(18:32):
vehicle. Reinforcement learning is really, really important in that aspect of it. And then thinking that that is just one subset of artificial intelligence, and then thinking about the incredible amount of data that, for instance, a machine learning algorithm, in the case of reinforcement learning, is going to need, and then how do we get all that data? We think about the example I always use. I'm sure you've been in a Waymo car.

(18:55):
Have you? Have you driven in, or ridden in, a self-driving vehicle yet? You've done that, right? Not yet. Oh, so I was in Phoenix a year and a half ago, and I was running late for a meeting, and I jumped on my Uber app and I ordered up a car, and it said, do you accept a ride from an autonomous vehicle? I didn't have time to, like, argue with the Uber app, so I was like, fine, you know, I'll take the ride. And this self-driving car shows up and picks me up along

(19:18):
the curb and drives me three miles to my next meeting, and there's no driver, and we're driving down the city streets of Phoenix, Arizona, with no driver in the car. And you start to think about why that works, and all the data that that car is collecting as it's driving around, about what's a mailbox, what's a person, what's the weather doing, what are the traffic signals doing, sending all that information to probably

(19:38):
a local data center and then ultimately to the cloud. But all the cars are doing that all at the same time, gathering all that data. All the cars are as smart as every one of the individual cars is at any given point in time. That's how we leverage AI. That is a little bit of, you know, what we're talking about with gathering data in the physical world and then having an AI learn from the data that we're gathering. So we

(19:59):
put an engineer in the middle of that entire conversation or that entire process. How do we think about, so if I'm an engineer and I'm in mechanical, I'm in civil, I'm in electrical, what have you: how is that evolution, both in terms of data, in terms of artificial intelligence and so on, changing my role, and what do I need to be doing differently as we look

(20:20):
to the future than maybe I was doing five or 10 years ago?

Pramod Khargonekar (20:24):
So, Matt, as we see it, almost all fields of engineering are going to change. They are already changing, and are going to change even more because of the AI and machine learning advances. So let's just take some examples. I would say robotics is like a poster child for what we are thinking. So robotics, you know, mechanical engineering, electrical engineering, computer engineering,

(20:45):
you know, all of these fields are crucial to robotics. But if you look at, say, what's happening in robotics right now, things like reinforcement learning and, more recently, foundation models are being used to make these robots much more dexterous, being able to, out of the box, do things that they have never been trained to do. Just absolutely amazing advances are happening

(21:09):
in robotics. So if I was, say, a mechanical engineer specializing in robotics, I think the first thing I would need to do is to make sure I'm at the cutting edge of RL or foundation models, like what Gemini Robotics just released a couple of months ago as a design suite for next-generation robots. I think that

(21:30):
reality is coming. So to me, that's an example of how a sort of mainstream engineering topic like robotics is transforming in front of our eyes. But let's take autonomous cars; you mentioned Waymo and all of that. I mean, if you were a mechanical engineer inspired by cars, like a lot of people in Michigan were, right, like they all grew up

(21:53):
around Ford and Chrysler, and they all wanted to become auto engineers. Well, you know, if you're going to be an automotive engineer for tomorrow, you need to be thinking about AI and machine learning. Same thing with aerospace, right? I mean, all aspects of aerospace are changing, like uncrewed drones and space travel. I mean, all of that is changing. Let's go even

(22:13):
beyond kind of these clear examples. Take a classical field, like fluid mechanics, okay, like something that we teach, you know, the Navier-Stokes equations and how you solve all these things with numerical computations. I mean, there are conferences and workshops happening as I speak to you on how we can bring in AI and machine learning tools and

(22:34):
data-driven approaches to advancing fields like fluid mechanics or thermodynamics or, just name it, and it's all potentially advancing now. It could be many things. It could be that we do the things that we have been doing, but do them more efficiently, faster, cheaper. But it could also be that we solve

(22:54):
problems that we haven't been able to solve before. So I think both are possible. If I look to the next five or 10 years, I would think that we are going to see really great examples of advances of both kinds, even in fundamental fields. And so I think any field of engineering is changing, both in terms of its sort of practice in the real world, as well as the

(23:17):
fundamentals that we teach our students and that graduate students do their PhDs in. All of these have the potential of being changed because of AI.

Matt Kirchner (23:29):
You know, I think you make a really interesting point. And when I think about it, if I were to have just kind of thought through this conversation, all of my thinking would have been on the physical side. And we've talked about energy. We've talked about the huge demand for energy that our data centers are going to require. We've talked about autonomous vehicles, space travel, drones. I mean, first of all, imagine 20 years ago thinking that 20 years from then we'd be having a

(23:50):
conversation with words like that as if they were commonplace, and now here they are. If nothing else, it's an indication of just the incredible times that we're living in. But I would have thought all about those physical assets and how they relate to artificial intelligence. What probably wouldn't have come to mind, which I think is really fascinating, is the theoretical side of engineering. So whether it's fluid mechanics, whether it's, you know, and that

(24:11):
can be everything from water flow to hydraulics to pneumatics and so on, you think about the theory on the side of whether it's temperature control, or it's physics, or, I mean, any of these aspects of engineering that are kind of, like, when we think about the traditional engineering pathway, things that engineers just learn, right? I mean, they've got their pathway, their course sequence, in the different

(24:32):
aspects of engineering, in the physical world and so on, and the science that they need to learn in order to be effective and good engineers. There's a whole AI side of that as well that I hadn't even thought about. You think about, have you read, by the way, Kissinger's Genesis yet? Yeah, yeah. And there's some really, really good books on AI, and I've read most of them, or at least a lot of them. That one especially, for somebody who, like myself,

(24:52):
isn't, you know, I'm certainly not a PhD engineer, I'm just a guy that's interested in AI, really brings a lot of those concepts down to earth. But talking about the future of material science, and how AI is going to be involved in the development of materials that are going to be lighter weight, more sustainable, with more applications that are just going to happen because AIs are working on this, obviously with

(25:13):
an engineer in the loop somewhere, but all of those aspects of engineering are going to change. So am I right in understanding it's not just the physical world, but how we deliver learning in engineering, how we prepare our engineers for the theoretical side, is going to be, do you think it's equally as important? More important? How do you feel about that?

Pramod Khargonekar (25:31):
I think it is certainly equally important. Of course, the future will tell to what extent it changes the fundamentals of what we teach. I think one direction seems very clear: the role of data is becoming greater and greater. So, like, you know, when I got my training, you had to get closed-form solutions to problems in order for them to

(25:54):
be considered solved, right? And then we changed from those closed-form solutions to, okay, if you can give me an algorithm to compute the answer, then it's okay, and I will run this in my, you know, IBM computer, or the desktop computer. So we changed our definition of what it means to solve a problem. I think, if I'm sort of open to all possibilities, this new revolution that is upon us will

(26:15):
also alter how and what we mean by having solved a problem. And so I think we have to be open that fundamentals could change quite significantly. Of course, the equations won't change, right? What we do with those equations and how we deal with them, and how that allows us to deal with the world in better

(26:37):
ways, I think that has every potential to change. I mean, you know, an example is like reuse of knowledge. Let's think about design, right? We would all love to do more reuse of design that we have already done, because if you can reuse it, then you short-circuit so much, right? You don't have to develop a new process, right? You don't have to do a new manufacturing thing. And I think

(26:59):
it is less than perfect, right? I mean, our reuse of existing design, you know, it's constrained by, like, how the data is stored and how the data is reused. I think all of that could potentially change. So, yeah, I think the fundamentals, the fundamental physics, is not about to change, but how we deal with it is going to change. To me, an inspiring example is AlphaFold. So protein folding was this huge problem. I heard about it back in the '90s. I

(27:19):
never worked on it myself, but I heard about it, and a lot of very, very smart mathematicians, computer scientists, biologists worked on it. Not too much progress.

Matt Kirchner (27:37):
Set that up for our audience, if you will, Pramod. So when you reference that today, help them understand what it is that you're referencing in layman's terms, and then go on.

Pramod Khargonekar (27:44):
Okay, so protein folding is this problem: if I give you the linear structure of the molecules in a protein, well, a protein is a three-dimensional structure. So that linear structure kind of folds in a really exquisite three-dimensional pattern driven by the molecular forces of attraction and repulsion, because it's all electrically charged, right? And so this thing will fold into this

(28:07):
beautiful, three-dimensional shape that controls our biology. So that's the fundamental challenge in protein folding: if I gave you the structure of the protein as a linear array of molecules laid out on, say, a table, sure, and you just let it go, how will it fold into a three-dimensional

(28:28):
pattern? Can you predict this? Because if you can predict it, it affects your drug discovery and drug targeting and all of that. So that was, like, the fundamental problem. You can call it a math problem or a chemistry problem. For 30, 40 years, people worked on it, but it really wasn't solved. It didn't really happen. And then Demis Hassabis, who won the Nobel Prize for it, along with a

(28:48):
colleague at DeepMind, applied modern RL-type algorithms, the kind they used for AlphaGo, for beating the world champion in the game of Go; they used similar ideas for solving protein folding with AlphaFold. So that's an inspirational example. I mean, I think those are stories we want to repeat in other parts of

(29:08):
engineering.

Matt Kirchner (29:11):
Yeah, absolutely. And I mean, to me, that is just so, so exciting. And you think about just the human benefit to being able to target drugs. I have people who are really close to me in my life that have benefited tremendously from those advancements, in many cases advancements that never would have been possible without AI and without tremendous research, and that, you know, we wouldn't have even been talking about 10 or 15 years ago. So, and we're just

(29:31):
scratching the surface. I mean, we're just getting started on that aspect of the applications for artificial intelligence, which could be an episode in and of itself as well, and also something to be really excited about. None of this happens on autopilot. None of it happens easily. I know in the report that you authored, you identified 14 of what you call Grand Challenges in AI Engineering. So let's talk a

(29:54):
little bit. If we had an hour, I would love to sit down with a piece of paper and actually see how many of them I could write down on my own, but tell us, what are some of those grand challenges? Give us a couple examples.

Pramod Khargonekar (30:06):
So, like my kids, I love both of them equally. So I love all 14 equally, but you asked me to pick a couple. So you know, to me, if I had to pick just one out of 14, I would say manufacturing, and we have talked enough about it, so I won't go into manufacturing, because, to me, the potential scale there is just incredible impact. A really unusual one, Matt, had to do with

(30:30):
rare events. So we know that in engineering, we worry about very rare but catastrophic possibilities, like a bridge failing. So we talk about how, in these AI engineering systems, we are going to worry about rare events, because machine learning is sort of data-driven, and for rare events, we are not going to

(30:53):
have lots of data, right? So that, to us, is sort of an intellectual Grand Challenge. It's like, how are we going to deal with the rarity of failures in a data-driven, machine learning, AI-driven world? Sure. We have some initial ideas on how to do this, but we think that's like a fundamental Grand Challenge, an intellectual Grand Challenge.

Matt Kirchner (31:12):
Let me just ask: is that because it's an anomaly and it doesn't fit a normal pattern in data? With a rare event, what aspect of it makes it a challenge?

Pramod Khargonekar (31:21):
Just that it won't be in your training set, right? Got it. So if it's not in your training set, well, AI is going to have a little bit harder time dealing with those things which are not in your training set. So that's the fundamental problem, is that they are rare, and so they don't show up in the data, however you want to deal with it. Sure.

Matt Kirchner (31:39):
So let me take that just one step further, because this is interesting to me. Is it fair for me to assume that it's not going to show up in your data, so it's that much harder to predict? On the other hand, once it does happen, and if we do have the data that informed or influenced that rare event, we build that into our LLM, and we're that much more likely, the next time at least, to be able to predict where that may be a risk. Is

(32:01):
that part of the thought process? Yeah, absolutely.

Pramod Khargonekar (32:04):
You know, those are the kinds of thoughts that go into what we articulate in that grand challenge: that, yeah, it is a big problem, and here are some initial ideas, and we reference some papers that are coming out dealing with this. Got it. So, yeah. But you are on the right track. I mean, that would be one approach to doing it, but you can also imagine the use of simulations and simulating extreme conditions, and using, like,

(32:27):
a combination of real data and simulated data. So there are other possibilities for dealing with this. Again, I don't think we have the answer, which is why it's a grand challenge, exactly, but we think it's an important one. Got it. Another one that I want to highlight, a third Grand Challenge, is data. So you know, unlike other fields of science, and astronomy

(32:47):
comes to mind, chemistry comes to mind, there is a tradition of shared data. In fact, the only data they have is the shared data. Right? In engineering, sharing of data sets is very uncommon, and there are many reasons, including proprietary concerns; sure, business value and profit don't allow for data sharing, or it's not easy to do data sharing. But we think that

(33:09):
having shared data to advance the engineering fields is a crucial bottleneck, and it's going to require leadership from government, from industry organizations to solve it. And we kind of put a finger on shared data being a bottleneck Grand Challenge.

Matt Kirchner (33:27):
Interesting. And, you know, that's one of the topics that I put a fair amount of thought into. In fact, I just had a meeting earlier this week with a group of healthcare providers, talking about, you know, AI and applications of AI in healthcare, where obviously there's just huge, huge use cases for AI, and many of them are already being executed on. But that is a challenge, right, in terms of, how do you share

(33:48):
healthcare data? Especially, you know, it's one thing if you're a hospital system that's, you know, generating $20 billion a year in revenue or whatever, and you've got all this infrastructure built into your organization. A lot of healthcare providers, whether it's a, you know, a nursing home, a mental health clinic, or whatever, much, much smaller by orders of magnitude in terms of their business model, probably could benefit the most from applications of

(34:10):
AI, but are also most at risk in how they share their data, how they protect it, how they create a data set that allows us to leverage the learning without, you know, disclosing an individual's private healthcare data, as an example. Are you hearing any examples of how organizations are working around that, or what might be a potential solution to that grand challenge?

Pramod Khargonekar (34:32):
We thought that materials science is like one of the first ones where this problem has been talked about. So, you know, when I was at NSF, we started this Materials Genome Initiative, sort of, you know, inspired by the Human Genome Initiative. And the idea was, can we get different investigators to share data? So we put together tools and

(34:55):
formats for people to share data out of their labs. It's a hard problem, because experimental conditions are different across labs. It's a difficult problem. It's not unsolvable, sure, but it requires collaboration across different kinds of institutions, funding agencies, et cetera. So I think that's one example that we thought of where some amount of

(35:15):
progress has been made in terms of sharing of data. We think industry organizations, professional organizations, have a role to play, which are kind of neutral convening bodies. So, you know, in your world of manufacturing, right, the Society of Manufacturing Engineers: I mean, could they play a role in anonymized or de-identified data for

(35:37):
manufacturing systems, so that no single company can solve the problem, but taken together, maybe we can really create very advanced tools? Government can play a role when they sponsor research projects to sort of encourage data sharing across companies and academia. So those are some of the ideas that we thought of. But this, it's not going to happen bottoms-up. It's

(36:00):
going to require leadership from the top.

Matt Kirchner (36:04):
Absolutely. And that's one of the, you know, you think about here in the United States of America, and as much as we're focused on securing the American Dream for the next generation of STEM and workforce talent, and our whole system of, you know, at least almost a conversation about capitalism, about how we innovate within private companies, which in other parts of the world sometimes folks don't have an appreciation for. How, democratized is probably the wrong word for it, maybe the

(36:26):
completely wrong word, but how open some societies are in terms of sharing information and not protecting IP, there almost might be some benefits in terms of creating larger data sets more quickly, as opposed to here in the United States, where we so value innovation, engineering, ownership of ideas, ownership of technology and improvements, and being able to

(36:46):
leverage that into, you know, financial and other benefit for individuals and companies. It's just going to be a really fascinating thing to watch over the course of the next, you know, probably the next several decades, as we really try to find more and more ways to leverage data while still protecting what makes our economy here in the United States work the way that it does and makes us such an innovative nation. I know one of the things that's going to be

(37:09):
required, certainly in the past and even more so in the future, from an innovation standpoint, Pramod, is this whole idea of systems thinking, right? I'm a big believer in figuring out and making sure that people understand how what they do fits into a larger system, into a larger process. We're not just working on individual components. There's a goal, there's a system, there's a loop. It's a loaded question,

(37:31):
because I know you're a huge believer in systems thinking, but talk to us about the importance of systems thinking, especially as we move into this new age of the flywheel between AI and engineering.

Pramod Khargonekar (37:43):
Well, I think systems thinking is absolutely crucial. And I would even argue that the reason we don't have a killer app for AI is because we haven't thought in terms of systems. So, you know, ChatGPT is a consumer product. It has its own sort of dynamic, so I won't go there, but anything else is going to be AI inside a system, AI alongside other technologies, to build a system that's useful for human beings,

(38:07):
right? So I would say systems thinking is absolutely crucial to create those really high-value products and services that people are going to benefit from. And there, I think it goes beyond the engineered system or engineered product. We really have to be thinking about the product in the wild, working with humans. So this sort of human-technology interface, as to

(38:28):
how humans experience a product and an engineered product. And you can think of the self-driving car as an example of that human-technology interface. I think it's going to be absolutely crucial.

Matt Kirchner (38:39):
For sure. So the whole idea of the human in the loop, that we've got an individual kind of in this system that is interacting with physical systems, interacting, you know, behind the scenes, with artificial intelligence, and then being part of that overall experience. Am I getting that right?

Pramod Khargonekar (38:55):
Absolutely, and that's the kind of thinking we need, and that is only going to come from systems-level thinking.

Matt Kirchner (39:00):
And it's going to come from all these different disciplines of engineering as well, you know. And you think about, with my time in engineering programs at universities, my time working primarily with electrical, systems, controls and industrial engineers, manufacturing engineers, quality engineers; and in manufacturing we've built up, in some ways, some silos in certain cases. Some organizations are better

(39:21):
than others between those different engineering disciplines. Talk to us about how we're going to see maybe a convergence of some of those disciplines, and then how, you know, partnerships and research between individual engineering disciplines, between organizations and educational institutions, and between private employers. You know, are we going to see more interaction and reliance and overlap between

(39:44):
all those different disciplines and organizations?

Pramod Khargonekar (39:47):
I think absolutely yes. In fact, you know, we talked about systems-level thinking. Well, that's going to require interdisciplinary collaborations. And you know, in both AI for engineering and engineering for AI, we talked about how fields like controls or signal processing or information theory are all going to have to integrate with machine

(40:08):
learning and AI fields, but we talked about thermal and fluid sciences too. So I think you're going to need interdisciplinary collaboration at the level of fields and individual researchers, but I think we are also going to require collaboration at organizational levels. So whether that's, like, departments in a college of engineering or in a university writ large, between university and private sector, between

(40:31):
government, university, private industry. So we are going to need collaborations across people and fields and across organizational boundaries, across our entire society, and I think that's the only way we are going to realize the biggest benefits of this new technology, enabling the next generation of products

(40:51):
and services for our people.

Matt Kirchner (40:54):
There's no question. And you see the convergence. We see it again, not to keep going back to manufacturing examples, but I have so many of them, you know, in the convergence between IT and OT, between information technology and operations technology, in manufacturing. And those used to have huge walls between them. And you know, IT never wanted to have anything to do with the manufacturing floor. The manufacturing floor wanted nothing to do with what was

(41:15):
going on in the computer servers. Never the two shall meet. And now we're seeing tremendous overlap there. We're going to start to see more overlap between disciplines in engineering, especially on the education side, but across engineering disciplines, both private and public and educational. Now we start to think about, how do we prepare the next generation of engineering talent for this new

(41:36):
world? You and I can talk all day about how important it is, or how this is coming, and how we have to prepare for it, but what does it mean to actually prepare for it? I spend a lot of time around engineering programs. You see every different version, right? We see some universities at this point that are kind of creating separate disciplines around artificial intelligence, separate engineering programs. Maybe it's connected to data science or computer science

(41:59):
or something in that realm, and that's fine, but I know you and I agree that that's not going to be enough, that we really have to drive AI technology and AI thinking into every discipline. How do we do that in a university where we're still teaching traditional electrical or mechanical engineering, and we all know in many cases that academia can be, especially at the four-year and above level, a little bit slow to change? How

(42:21):
do we do that? How do we drive that transformation in undergraduate engineering?

Pramod Khargonekar (42:25):
I think that's really, really crucial. And what we think is, at a minimum, every engineering undergraduate needs to have at least one exposure course to AI, and that would be the full gamut: analytic, predictive, generative AI. So cover the whole thing in at least one course, and in many, many fields,

(42:46):
you will need more than one course to expose all the students to the fundamentals of AI and how the field is evolving and how it is impacting traditional engineering disciplines. Now, I can imagine doing this through a common course for all engineering students. But you can also do it through, like, for some majors a deeper course, for some other

(43:08):
majors, you know, a different offering to expose them to AI. And I think we have done similar things in computing or statistics, you know, how we meet the statistics requirement or computing requirement. I think we are going to have to take creative approaches to exposing students to AI fields depending on their major. But beyond that minimum requirement,

(43:30):
this is absolutely, like, a literacy-type requirement, we are going to need deeper exposure depending on the field. So if you're a robotics major, you do, like, a collection of courses, and if you're a power systems person, you do, you know, a different one. So I think we're going to have to do some mix and match through a collection of courses.

(43:50):
And I think here, in terms of the reality of an engineering school, we have the limit on the number of credits, so we have to get around that to free up space to have AI courses. It's a huge barrier, so we're going to have to work on that with the accreditation, ABET type of constraint. But also I think, I really think, that collaboration across engineering departments

(44:13):
and computer science departments is going to be crucial to creating this kind of flexible curricula, so that students can come up to speed for their professional future. And I also think there is real opportunity here for industry to play a very big role. Industry can offer data sets, can offer

(44:36):
real-world examples of how AI is changing that industry. Almost every engineering college has local industry around it, so I think this is like a really big opportunity for deeper connection between industry and university to educate the next generation of engineers who will create the products and services

(44:56):
of the future.

Matt Kirchner (44:57):
You know, I think about a manufacturer, for example, or an employer that's partnering with a university, and a lot of times we might think about sitting on an advisory board. We might think about some of the manufacturing engineering programs, donating materials, maybe donating a piece of equipment that the students can learn on. You know, it occurs to me, something you just said, and I have a couple other follow-ups to your last answer,

(45:19):
but we're almost in an age where we have to think about donating data in the same way. And that, you know, as important as it is to donate equipment, to make financial gifts to a university, to endow a scholarship, I mean, all those things, none of it's going away, it's still going to be really, really important, but making data available, that's something that I hadn't really thought about in that way. I really like that aspect of it. I

(45:40):
also think that, you know, what you said, and again, as someone who, I love anybody who is disrupting the traditional model of education, for all kinds of reasons, I think it's just absolutely necessary. And I do get, especially on engineering programs, two objections. And the first, and you mentioned both of them, it's just interesting. Number one is, well, ABET says we can't. And if you ask ABET, they will say, we

(46:01):
actually never tell anybody they can't do it. We just say, you have to show us how you're meeting requirements for accreditation, and we'll work on that together. So that one, we've talked to the folks at the highest levels of ABET about that, and so they're a little more flexible than a lot of people give them credit for, although it certainly can be a hurdle. We also think about, well, I've already got 120 credits, what do you want me to take away? And it's a real problem, right? I

(46:22):
mean, something's gonna have to, you can't just add content and add learning to a program. Something's gotta be squeezed out. I heard a little bit about, hey, maybe we just need to rethink how we're doing statistics and computer science as part of some of those programs. Is that one way, I guess, just rethinking the current courses that are already in a degree program, and how we deliver some of the same learning outcomes, but do it with a bent toward AI?

Pramod Khargonekar (46:45):
Yeah, I mean, I think that definitely is one path to follow, but also kind of rethink what is a must-have in terms of an engineering degree curriculum, and kind of free up a little bit there. Because one thing we know: we cannot teach students everything that they need to know, right? We can only teach them how to learn. That's been my principle. You don't teach them everything. You just teach them to learn on their

(47:05):
own. But if you take that point of view, AI actually becomes a big enabler, right? If you become really good at it, you can learn on your own. So I think there are degrees of freedom that we haven't thought of.

Matt Kirchner (47:16):
There's no question. And I think opening our minds to what could be, it's almost like zero-based budgeting, right? I mean, rather than saying, here's what we have, what do we change? It's like, let's start with a blank piece of paper and say, all right, what does the engineer of the next five or 10 years need to be really good at, need to understand, need to know? And let's build a program around that. That starts to answer some of the questions about the evolution in higher

(47:37):
education. Let's say that I'm an engineer now that's 20 years in. I graduated from UC Irvine in, you know, whatever, the year 2004, and now here it is, 2025. I love my job. I feel like I'm doing really important work, but I also recognize that my world is going to change fundamentally in the next 10 years. What do we need to do around lifelong learning for established

(47:57):
engineers?

Pramod Khargonekar (47:59):
Hugely important question. Again, a really big opportunity for industry and university to collaborate on. But, like, you know, thinking of optimized short courses, evening programs, certificate programs for working professionals: absolutely crucial. And if you actually fail to do this, it will have a huge impact on jobs and work for working professionals. So I can't emphasize enough how

(48:20):
important what you just asked is.

Matt Kirchner (48:25):
And I think that's a message for two groups of people. The first one is the engineers themselves, who, I think, yeah, I talk to a lot of them, and I have a lot of them in my, you know, in my friend group as well, who are already asking these kinds of questions, like, what is, you know, what is my role in the world of artificial intelligence? So a message for them of, look, you know, they've got a lot of these lifelong learning opportunities. Whether it is, you know, it's not, you're not going back,

(48:46):
necessarily, for another baccalaureate degree or moving on to do, you know, graduate work. It's about, how are we going to just create these opportunities for upskilling through certificate programs, through shorter bits of learning around certain disciplines that prepare engineers for what's coming. And I think, not to undersell the fact that, you know, if you're a 20- or 30-year tenured engineer in

(49:09):
manufacturing, you've got an incredible amount of knowledge that a graduate who may be coming out of a program with a little bit more on the data science and AI side doesn't have. Use that to your advantage and layer the AI side of it on top. But also a message to anybody in higher education, or education in general, about the opportunities to cater to these individual learners, that it's not always about the

(49:32):
18-year-old college freshmen that we need to recruit into an engineering program or into a university, but it's these opportunities to serve this vast number of people that are already in the workforce. You mentioned the fact that the stakes are pretty high if we don't find a way to educate that next generation, certainly for the individuals in that particular situation. What are the stakes for the United States in general, if we don't wrap our arms around the future of

(49:54):
artificial intelligence and really understand how it's transforming our economy, and make sure we're creating that next generation of talent that's ready for it?

Pramod Khargonekar (50:02):
Yeah. So to us, it's a generational opportunity that's going to impact economic competitiveness, economic growth, national security, and all aspects of our society. So we just can't miss it. The stakes are just too high in this globally competitive world, and I think we have every asset that we need to make it happen. So I just can't over

(50:25):
emphasize how important this generational opportunity is.

Matt Kirchner (50:30):
So to steal a line from Apollo 13: failure is not an option. Exactly. We have to find a way to do this. Many, many great minds, yours included, are working on incredible ways to do it. Super, super excited for the future. Super excited to ask you two more questions, Pramod, as we wrap up our time on this episode of the podcast. They're questions we love hearing from all of our guests, because everybody's had their own journey. Everybody has their own background. I'm going to ask you

(50:53):
first, especially given all the time that you spend in the world of higher education: what is one thing that you feel about education, about how we educate people here in the United States and around the globe? What's an opinion or a belief that you have that would surprise our audience a little bit?

Pramod Khargonekar (51:08):
Well, I don't know if it will surprise or not, but I think AI is fundamentally altering the foundations of education. So I'll just go with the assessment piece of it, right? So how do we assess students when they've been educated? We give them a bunch of questions, they give their answers, and we grade them on their answers, right? That whole model is completely obsolete. Okay, thank you. I couldn't agree more.

(51:29):
Makes no sense anymore. I mean, nobody knows the answer, but my own answer is: let's grade them on the quality of the questions they ask. So let's focus our attention on teaching them how to ask good questions. It completely changes the way we think about education.

Matt Kirchner (51:45):
Absolutely. I, you know, I talk quite often on the podcast, it seems like it's almost every episode now, about my belief that in the old world of education, school is where we went to learn, and home was where we went to practice. Right? We'd go to school to listen to someone deliver a lecture, we did some reading, what have you, and then we would go home to do our homework. That's where we would practice. And that's totally flipping now: we can learn anywhere, and school is

(52:06):
going to be where we go to practice. And part of that practice is asking really, really good questions. I think that's a really, really deep answer to that particular question, Pramod, and I appreciate the way that you're thinking about that. But I'd also appreciate your thoughts on our final question, which is, and here again, it's a question we love hearing from every single one of our guests: if you could go back in time all these years, you know, the 45 years that

(52:29):
you've been doing the work that you're doing, obviously tremendously decorated and well respected and an incredible amount of success that you've had, let's go back before that. Let's go back before what happened, even before that, to that 15 year old Pramod, maybe, you know, here in the US, a sophomore in high school, kind of that age. What advice would you give to that young man if you had the opportunity?

Pramod Khargonekar (52:51):
So it was 1971 when I was 15, and I was in India in high school, just like you said. Love it. And, you know, do you remember this movie called The Graduate? Oh, sure, Dustin Hoffman, you bet. And he gets his advice: plastics.
Yeah. If I go to 1971 and me as a 15 year old, the advice: computing. Pay attention to computing. But I think the more important

(53:13):
aspect of your question, Matt, is: what should a 15 year old today pay attention to? Okay, is it AI? Is it bio? Is it energy? Is it...? And I don't think the answer is obvious, but it's a fantastic question. And you know, that 15 year old today, my age, add 54 to it. Okay, right? Yeah. Okay, so that's like, you know, 2079. Okay, what's that world in

(53:35):
2079 for a 15 year old? Outstanding question, Matt, outstanding question.

Matt Kirchner (53:43):
Well, an outstanding answer as well. And whether it's thinking about all those years ago, whether it's plastics or computing, I almost wish we could pick our own bumper music. We'd be playing "Mrs. Robinson" here on the way out of the podcast, which, of course, came from that same movie, The Graduate. Just an incredible answer. And then, to the 15 year old of today: if it was plastics or computing back in the early

(54:04):
1970s, what is the answer to that question? It probably has something related to artificial intelligence and machine learning, but maybe in a specific discipline that interests that young person. Without regard to what their answer would be, certainly they're going to learn a tremendous amount, Pramod, from what we talked about today. Such a great conversation walking through artificial intelligence,

(54:24):
your incredible research, the report that you wrote, the 14 Grand Challenges that we touched on. By the way, we will link that report up, and many other resources, in the show notes to make sure that our audience has an opportunity to take a look at that information. I know they'll be fascinated. But thank you so much for taking some time for us, Dr. Pramod Khargonekar, with, of course, UC Irvine. We talked about his incredible pedigree

(54:48):
and his role as we started the podcast. But Pramod, thank you so much for taking some time with us today.

Pramod Khargonekar (54:53):
Thank you, Matt. Thank you for having me.
It was a marvelous discussion.

Matt Kirchner (54:58):
Marvelous indeed, and a marvelous time for our audience as well. Thank you so much for joining us on this episode of the podcast. So many cool resources we talked about; of course, as we do every week, we will link those up in the show notes. Let's put those at TechEdpodcast.com/pramod, that is

(55:23):
TechEdpodcast.com/P-R-A-M-O-D. That is where you will find the show notes, the best in the business, as you know. And please check us out on social media. We are all over social media. We are on LinkedIn. We are on Facebook. We are on TikTok, as the kids are saying these days. We are on the 'Gram. You will find us anywhere you go to consume your social media. When you're there, reach out, say hello. We would love to

(55:43):
hear from you. And until next week, I am Matt Kirchner, and this is The TechEd Podcast.