
January 25, 2023 47 mins

Missy Cummings, one of the country's first female fighter pilots and director of George Mason University's Autonomy and Robotics Center, calls herself a tech futurist, charged with making tech work better and safer. In a conversation with Mason President Gregory Washington, Cummings is unflinching in her critique of AI's strengths, weaknesses, and shortcomings, as well as those of humans. There is a lot to like about AI, Cummings says, but she calls out bad tech where she sees it, including in the vision systems of self-driving cars and Tesla's Autopilot. There's also a lot to like, Cummings says, about Mason's new Fuse building on its Mason Square campus. When it opens in 2025, the building will house R&D labs, corporate innovation centers, incubators, and accelerators that will help advance the digital innovation goals of university, industry, and community innovators.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
Trailblazers in research, innovators in technology,
and those who simply have a good story:
all make up the fabric that is George Mason University,
where taking on the grand challenges that face our students, graduates,
and higher education is our mission and our passion.
Hosted by Mason President Gregory Washington,
this is the Access to Excellence podcast.

(00:26):
Missy Cummings is from a small town in Tennessee,
where, as she said, few people finish college or even leave home.
Her biggest challenge, she said,
was finding the courage to go out into the unknown.
All I can say is mission accomplished. Cummings,
a professor in George Mason University's Departments of Mechanical Engineering,

(00:49):
Electrical and Computer Engineering,
and Computer Science, helped blaze a trail for women's equality in
America's armed forces as a naval officer and as one of
the Navy's first female fighter pilots. That's right:
first female fighter pilots.
That distinction came despite her facing discrimination and resentment from

(01:11):
her male colleagues.
She chronicled those events in her book "Hornet's Nest." Now
the director of Mason's Center for Robotics, Autonomous Systems, and
Translational AI,
Cummings' research interests include the application of artificial intelligence
in safety-critical systems, human-systems engineering,
and the social impact of technology.

(01:34):
One of her first challenges at Mason is to create a new educational
program in the design and development of artificial intelligence.
Cummings has asked the hard questions about the fundamentals of autonomous
transportation while taking some jabs at bad technology,
including Elon Musk's Tesla Autopilot, which we will discuss--

(01:56):
I'm an owner, and so we can have a lot to discuss on this issue.
She has been a guest on 60 Minutes, The Colbert Report,
and The Daily Show with Jon Stewart.
She also has a goal of hiking the entire
Appalachian Trail. Dr. Cummings,
welcome to the show and welcome to George Mason University with the

(02:18):
start of the spring 2023 semester.
It is so good to be at both places. Thank you.
For those of you who may not know,
Dr. Cummings and I have a long and storied history.
I've actually tried to hire her on multiple occasions, and I was ecstatic when,
with the help of Dean Ken Ball and others, we were able to get her here

(02:40):
at George Mason. So how far have you gotten on that Appalachian Trail?
Oh my goodness. Well, I've only been doing it 20 years, and you know,
I have an attention span problem, so I can only go out for a few days at a time.
So, you know, I'm about halfway. I still have a lot in Maine and
New Hampshire up north. I'm pretty much done with the south, though.

(03:01):
Nice. Well, you know, I am a hiker kind of guy. In Virginia,
there are some significant parts of that trail.
I hear that there is the rollercoaster section.
Have you done that section of the trail?
Yeah, I knocked out the rollercoaster section a long time ago,
but I will tell you, it was just, um, over COVID when, uh,
down near Lynchburg,

(03:22):
I went up in the wintertime, and I took my Jeep, because I feel like I'm a
fighter pilot, I can do anything, and there's nothing I can't do.
And I almost drove off the side of the mountain in my Jeep 'cause I hit a spot
of ice that was under the road surface and you couldn't see it.
And I almost died. And, uh, whew.
So Virginia is probably not the rollercoaster section that almost killed me,

(03:43):
but over down by Lynchburg, that was dangerous.
There's just so much we could talk about, and I want to jump into a whole lot of
it.
So I wanna start with your time in the military, because that was so defining for
you in terms of your path now.
Absolutely.
So you were in the military from 1988 through '99.
That's correct.
So when did you start flying fighter jets?

(04:05):
Well, I went through flight school--the first couple of years when you're a baby
pilot, you fly propellers, and then you fly a couple of different kinds of jets.
So I didn't become a full-fledged jet pilot until 1990,
and it's at that time that I forward-deployed to the Philippines.
Nice.
And I was flying A-4 Echoes back then.
Oh, you're flying A-4s?
Mm-hmm. Mm-hmm. I'm a real man.

(04:27):
So are you qualified to fly any other aircraft?
Well, I flew F-18s as well.
When I became an official fighter pilot, when I was in the Philippines,
I was an aggressor pilot. So if you ever saw Top Gun 1,
that's sort of what we did in the Philippines.
We were pretending to be bad guys. And because I was a woman, and at that time
women couldn't fly in combat,
we just trained the men who were coming over to deploy

(04:51):
in Iraq. And so we got to train the men to try to defend themselves.
And it wasn't until a few years later that the combat exclusion law was
repealed. And that's when, because I had already been doing the mission,
I was one of the most qualified women to become a fighter pilot.
That's when I rolled over to the F-18.
You know, I bring this up--I literally, two weeks ago,

(05:13):
just saw Top Gun 2 for the first time. I plan on seeing it again.
Anything realistic about those Top Gun movies?
The flying scenes are amazing, but it's not at all realistic. Right. I mean,
the bottom line is these planes are so expensive that if you're getting that
close to them, it's just too dangerous.
You can't afford to lose the hundred-million-dollar copy of the aircraft.
And that's not even including weapons.

(05:34):
And so there's a lot that goes on in the movies, in the flying scenes, that I
think is not that realistic.
Oh, so that whole one where he flies through the middle of the formation of the
two aircraft--that probably wouldn't happen, is that what you're saying?
I mean,
a lot of what happens in Top Gun, you'd get kicked out for if you actually did
that in real life.
But they do a good job of capturing the spirit of what it means to be a fighter

(05:56):
pilot.
No, I understand. I understand.
You did a presentation a while back where you spoke about how,
despite being an ultra-trained and sophisticated pilot, on takeoffs and
landings in the F-18,
you and all pilots are pretty much taken out of the equation as the plane flies
itself at that point.
You also wrote an extensive paper on the use of autonomous and automated

(06:18):
weapons. Can you talk a little bit about that from an autonomy perspective?
How much does the person actually do, and how much is in the actual
technology of the plane itself?
Yeah, I think these are really great questions,
especially as autonomy and artificial intelligence continue to advance.
So when I was flying, when I was transitioning from the A-4 to the F-18,

(06:39):
one of the things that I found amazing was that when you were launched off the
front of the aircraft carrier in an F-18,
you had to show everyone on the carrier that you were not touching anything.
You were not allowed to fly the plane. It was a computer program;
it could fly itself off the front of the carrier just fine.
The problem was that if you touched anything,

(06:59):
you were likely gonna set up some pilot-induced oscillations that were not
gonna be recoverable. And in fact, lots of people died this way. So this is why,
you know,
I felt that was really unnerving, and that was really the beginning of me
starting to wonder whether or not I should be doing something else,
like going into academia.
When I looked around the aircraft carrier--and we weren't allowed to touch

(07:19):
anything on takeoff because we would only screw it up. The planes always,
always, always landed better than we did. Humans, we loved it--
Always?
By that time, yes. I mean, in the early days of automated landings,
it was a bit sketchy. But by the late nineties and 2000s, I mean,
computers can respond so much faster than we can as humans.
We just can't process information as quickly on those kinds of jobs as the

(07:44):
computer on the plane can.
Okay, hold on now.
You are sitting in hundreds of millions of dollars
worth of aircraft, and you're on the aircraft carrier, and they tell you,
don't touch nothing on takeoff.
Yep.
The computer will handle everything.
Yep.
And you're okay with that?
Well, I didn't say I was okay with it, but it's what you do to stay alive.

(08:04):
Right? I mean, it's very unnerving. It is unnerving.
And it's unnerving to watch the plane land itself, always better than you can.
You know,
most of the really bad accidents on an aircraft carrier happen at 3:00 AM, after
a pilot's been out doing a mission. You're exhausted. You know, it's night,
it's hard to see. And so that's a--
But if the plane is landing, why would you worry?

(08:26):
Well, that's why you want it. Right?
Because we have a lot fewer accidents now that the automation is at least
assisting, if not doing it outright.
I do think one of the things that people don't really get is: don't be aghast at
what I'm telling you about what's happening on a military jet.
It's happening to you every day when you fly commercial.
Pilots only touch the stick for about three to seven

(08:48):
minutes out of any flight, and it's on takeoff. Most of the time when you're
landing, you are being landed by a computer.
And one of the reasons why the airlines locked it--
So, even during the actual trajectory of the flight?
Oh yeah. It's all automated.
They're not touching the stick.
They're babysitting.
Well, let me ask you this. So you're flying, you hit turbulence,
you bounce around for a few minutes, and then the ominous voice of the

(09:10):
captain comes on, and he says, or she says,
we hit a little rough spot here and we're gonna glide down to smoother air.
Is that the pilot taking over then?
You know, it's kind of a hybrid. So the pilot's talking to air traffic control,
getting a better spot. And then what they're doing is,
like you're programming in your GPS, they're saying, okay,
descend and maintain flight level 3-1-0 at an airspeed of blahdi-blah. Right.

(09:33):
So they're just programming it in.
Okay. And then the airplane just...
And the airplane does it so much smoother. And indeed,
one of the things that we've realized on landings, and just flight in general, is,
first of all, if you let the computer fly,
we save an amazing amount of fuel, because pilots are just rougher on
the controls. And so it's much smoother when you let the automation do it.

(09:56):
And it even saves on the tires. If you let the automation land the plane,
they don't have to change out the tires as often.
That is so amazing. Wow.
But let me tell you this:
I would never get into a self-driving car that any of my students programmed.
I'd fly in an aircraft they programmed, but I wouldn't get into a car.
We're gonna talk about that.
I have never done any programming for aircraft relative to autonomy,

(10:21):
but I have done a fair bit for automobiles, including,
just a couple of years ago, a project that we just completed on an autonomous
dragster. And even going straight at high speeds is non-trivial.
So I got a whole bunch of questions for you in that regard,
but you mentioned that it was stressful,
this whole idea of letting the plane land itself.

(10:43):
I guess the human part of you wants to just take the stick and guide it down.
And I assume at some point you have to take into account the fact that,
you know, maybe you have a malfunction in a chip, or the computer is not working
properly, and you have to land the aircraft.
So I assume that at some point in time in training,
you physically have to land on a carrier just so that you've got the confidence

(11:06):
that you can do it. Right?
Well, and indeed, airline pilots have to do the same thing.
They have to land so often to stay qualified,
basically to stay up to date with your skillset. But indeed,
you've hit on probably one of the biggest problems that we're facing in aviation
right now:
how much skill do you lose for the length of time that you let automation do it?

(11:27):
And then how do we make sure that people keep their skillsets up, even in the
face of increasing automation? And the Asiana Air crash
several years ago in San Francisco--
there was a crash where there was a whole cockpit full of pilots and off-duty
pilots. And all of them missed the fact that
the automation wasn't working that day. They, you know,

(11:48):
I think there were like five or more pilots in this cockpit, and it still crashed
and killed a lot of people in San Francisco.
And that's because their skillset had eroded to the point that they didn't
really even know how to fly the aircraft in the good old-fashioned way.
So I think there is a push and pull--
and there's a lot of parallels to driving--
like either the airplane can do it all the time with very high

(12:12):
reliabilities,
or you need to make sure that the human stays in the loop every so often.
And now the FAA mandates that people get in and land every so often to keep
that skillset up.
So talk to me about warfare.
What is that like in an age of semi-autonomous systems?
Is it closer to a video game?
Oh, yeah.
Or is it closer to what we saw in Top Gun?

(12:35):
I think it's kind of a mix. The reality is
that there's a lot of automation that's finding its way into the cockpit.
And one of the favorite stories I like to tell, and I told this in my book,
is about a guy whose call sign was Spider. That's not really his call sign;
I changed it to protect the not-so-innocent.
But when you're practicing missiles and you get the radar going,
you actually would maneuver the airplane into an envelope.

(12:57):
And if you got everything right--the envelope was right, the distance was right,
the speed was right--you would get these gigantic letters in your HUD: SHOOT,
SHOOT. You know, you'd pull the trigger, and if you're on the test range,
a missile would come off the rails and it would hit a static target.
No problem.
But there was a case where there was a squadron, and they were deployed live
over, you know,

(13:18):
somewhere in the Middle East, and Spider was coming back with his commanding
officer. So that'd be like me and you flying.
And then they decided they had a little extra gas. And so they were gonna do a
little one-v-one Top Gun thing. They were gonna practice,
pretend-fight each other. But because they were coming back from a live area,
they both had weapons on their plane.
So they had real weapons.
Real weapons. But you can put the plane in simulate mode.

(13:41):
So even if you have weapons and you pull the trigger, nothing happens,
or you can leave it in live mode. And so when they went feet wet,
which is when you go from the land to the water,
they were supposed to go into simulate mode,
but Spider got distracted and he forgot to push that button.
Oh.
And so then they...
I know where this is going.
So then they take a split, they come at each other, and Spider's young--

(14:02):
I mean, the young guys usually have better reaction time.
So he was able to get a bite onto his commanding officer,
which means that he got to a good shot position first,
and he lines up and he gets that amazing, compelling SHOOT, SHOOT, SHOOT.
And he shoots, and a missile comes off the rail. And then the planes tattletale
on you.
That's how you can't even lie anymore, because as soon as a weapon leaves the

(14:22):
plane, the video camera turns on.
So it's like the police body cam is gonna turn on and make sure that it records
everything bad you did. And so you can actually see in this video
the missile go after his commanding officer. The commanding officer,
because it was a heat-seeking missile, didn't have any of his systems on.
He didn't even know this thing was in the air.
And so you can see the missile get literally like inches from his tailpipe

(14:44):
and then it just falls out of the air.
It just didn't have enough juice to blow his commanding officer up.
So the next thing you know, they have to come back to the carrier.
Wait, wait. So why did that happen? Was it just luck?
Just luck. Just luck.
Just dumb luck.
He took the shot on the very edge of the envelope, and the missile just did not
have enough gas to get there.
Oh my goodness.
It's burning like a...
and then, just kind of like a Bugs Bunny cartoon, it falls outta the sky.

(15:05):
But then they have to come back and land on the carrier.
And so when you see a fighter jet, and one missile's on one side of the plane and
there's a missing one on the other, it's not like you could say, uh,
I don't know what happened. So he had to fess up.
What happened to him?
In the old days, he probably would've been kicked out.
But I think that they realized--that was the big time when the military was like,
wait a minute,
maybe we shouldn't have these shoot cues that are so compelling, because it's

(15:29):
making people respond in a video game-likeenvironment instead of taking the
time to actually think about, is thissomething that I need to be doing?
And so things have changed since then,
but it's a good story to indicatehumans under stress and battle even,
that wasn't even a real battle. Right.
He's just excited and he was gonnabe able to quote unquote, you know,
fake kill his commandingofficer. And he almost did.

(15:50):
He almost real-killed him.
That's right.
So we hear about these Patriot missile systems now in the public.
They're hearing a lot about these autonomous aerial vehicles. But the reality is,
in a non-military sense,
the deployment of UAVs is probably a thousand,
ten thousand to one relative to what we're seeing in the military. I mean,

(16:10):
these things are being used all over the place.
Are you doing any work, or working on any applications,
that are non-military?
Oh, I haven't done dedicated military drone work for a long time.
There was a point in history where you could see the tide turn
from anti-drone sentiments, because of the military,
to pro-drone sentiments. That year was 2013.

(16:35):
And I had been working really hard to try to socialize the idea that drones
would not be just a military platform, that they had a lot of good uses.
And I was trying to socialize in America the idea that these would be cargo
planes one day. And so I got invited to go on The Daily Show with Jon Stewart.
I recommend everyone go look at this clip 'cause it's hilarious,

(16:55):
'cause he is going after me for basically being part of the war
machine.
And I'm trying to explain to him that these are going to be delivery aircraft in
the future. You know?
And he and I had a good repartee, going back and forth about whether these things
were really killer robots or whether there was some good to this. And that was
2013. And what's amazing is, 10 years later, it's a done deal. Right, right.

(17:18):
And then many years--
They're everywhere.
That's right. Everywhere. And I've been to Timbo,
Africa, using drones to follow elephants around.
They have a really hard time keeping track of their elephants and making sure
the poachers aren't getting them. So we could use drones for those applications.
And then recently I finished a project sponsored by the National Science
Foundation, looking at how we defend against drones in prisons, right?

(17:42):
Because now one of the problems that we have is that drones are putting
contraband into prison yards.
And so now we were trying to come up, using some artificial intelligence, with
ways to defend against the drones. And I think what was interesting--
There's a way. It's called a shotgun.
Yes. It turns out--do you know you're not allowed to do that?

(18:03):
The FAA says don't do it. Right.
So you can't shoot.
You can't. Even if there's one hovering over your house,
technically it's illegal for you to shoot a drone hovering over your house.
Really? I guess that's right. Because the property above your home, is it...?
That's correct.
25 feet and higher?
No, it's like one inch.
But that is not your property, actually. It's...

(18:23):
Not, it's not your property.
And the FAA doesn't want you shooting things, because they don't know where the
drone is gonna go.
Oh, well, not only that:
you shoot, and then if you miss, the projectile lands somewhere.
It does come back down.
So my advice is, if you're worried about that,
just get a big old light and put it on top of your house and point it up--
that'll totally screw the system.
So there's lots of passive ways that we can defend against these things.

(18:45):
So you would come up with the technical way to defend against them?
Oh yeah. Oh yeah.
As opposed to, you know, the good old American way: shoot it down.
Shoot it down.
Okay. Well, there's part of me that really wants to do that.
Like, I didn't say you couldn't use a slingshot.
Exactly. Okay, I hear you. So let's switch gears a little bit.
There's a lot of talk these days about Elon Musk, with the whole Twitter issue
There's a lot of talk these days aboutElon Musk with the whole Twitter issue

(19:07):
and him purchasing Twitter, butthat has spilled over to Tesla.
I'm a both a Tesla ownerand a Tesla stock owner,
have been for quite some timeand have seen, uh,
the value of my Tesla shares decreasedramatically over the last year.
So talk to me about yourchallenges with Elon Musk.

(19:29):
So, I mean,
it's hard for me to say that I have a war going with one of the richest men on
the planet, right? Because it's only his perception that that's the case.
I'm a big tech futurist.
Right.
That's my job: to try to make tech work. It's not to stop tech;
it's to help it get better. And I've been a big fan of SpaceX for a long time.

(19:51):
As far as Teslas, I think they're great cars.
I think that certainly they're very crashworthy.
After you saw that Tesla go down that cliff and everybody survived, I'm like,
you know, that thing has a good cage. That is a solid car.
Yeah. Because it's unibody construction.
That's right. So I am not anti-Tesla. But I will tell you,
and there's many people in the drone world that know this,

(20:13):
and in the driving world, and in the AR/VR world--augmented reality,
virtual reality:

(20:17):
I just really hate bad tech.
And if you've got some bad tech that's really dangerous,
I'm gonna call you out on it, because that is my job: to make safe, good tech.
And the problem--and I hope that you're listening to me--is...
Oh, actually I hope he's not listening, but keep going.
Do not drive your Teslas on Autopilot or Full Self-Driving

(20:41):
without paying full and absolute attention and keeping your hands on the wheel.
So two things I will highlight for you: I can't help it, I'm an engineer,
right? I put mine in auto-drive mode all the time.
And I can tell you the pluses and minuses to it.
The technology is not quite yet ready for primetime, without question. You know,

(21:01):
and sometimes small artifacts, some of which I don't even see,
cause the Autopilot to stop working.
What happens in almost every single case is the vehicle just abruptly slows
down. And it's a scary thing when you're driving 70,
75 miles an hour on the highway and the thing just hits the brakes and it slows

(21:21):
down dramatically. Maybe it saw a shadow; you know, you don't know what it saw,
but it saw something that triggered a response. And I tell you,
it's probably happened to me a dozen times. That being said,
it is a remarkable technology to use when we're doing the kinds of things we're
doing in our cars. You know--oh boy, I'm giving a whole... it's gonna be funny--

(21:45):
you know, but you're in the car and you're driving, and you're like, oh man,
I gotta blow my nose. Okay: engage Autopilot, thing's driving on its own.
I can reach down into my glove box or into the center console,
pull out a tissue, blow my nose, and put it back, and it's all cool.
I do that, no problem, right, and feel very, very comfortable doing that.

(22:06):
Or if I'm coming home after a long day at work,
a little tired, and need that extra hand--it's not so I fall asleep;
I'm still driving, got my hands on the steering wheel, 10 and 2,
so I'm still there--but I turn it on just so that I won't drift.
That actually works quite well for me. So I do think there are uses,
right now, as an assistant--

(22:27):
I agree.
To the actual driver, right?
We're not at the point where we can totally turn it over to the computer.
And this is the thing that's amazing to me. You won't turn the car over,
but you'd put a hundred-million-dollar bird out of the sky onto
a strip of concrete. But you feel me here.

(22:50):
Oh yeah.
That's because I know how the sausage is made, and I helped make that sausage.
And so I see the mistakes that are made, and I see the problems in the system.
I actually had a number of conversations, not just with engineers from Tesla,
but also Zoox, which is a company that was built to build
autonomous vehicles. Saying, hey,

(23:12):
well, what's the problem? You guys are working on this technology every day.
What are you struggling with? Why don't we have it, and have it now?
What are your thoughts? What do you think?
So, you know, you hit on one of the issues.
I just finished about a year and a quarter with the National Highway Traffic
Safety Administration as the senior safety advisor. And for the last year or so,
I've been looking at all the accident reports of any car,

(23:34):
including Teslas, that had a crash while they were on automation.
And so this phantom-braking issue that you described,
where the car sees something and then decides to dramatically
decelerate--that is not just a Tesla problem.
We see it in many other kinds of autonomous vehicles, including ADAS-equipped
cars--that's the driver-assist systems--and also just the self-driving systems.
(23:57):
So we have not yet gotten to thepoint where computer vision systems,
they're just not reliable enoughto be able to "see the world in the
way that we do." And we don'tknow, is it shadows? You know,
we've done some testing withTeslas in my own lab where we have,
we can see a statistical correlationwith the sun going behind clouds for,
even that is enough potentiallyto trigger a problem with the

(24:22):
vision system. So these systems are still really brittle.
And I'm not saying we'll never get there, but we're still working out some very,
very basic problems. That's just one of many problems.
And that's the tech problem.
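The cloud-cover correlation she mentions can be illustrated with a toy conditional-rate comparison. The per-frame logs below are entirely made up for illustration; her lab's actual data and methods are not described here:

```python
# Made-up per-frame observations: was the sun behind clouds, and did the
# vision system log a fault on that frame? (illustrative data only)
sun_occluded = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
vision_fault = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]

def fault_rate(occluded):
    """Fraction of frames with a vision fault, given the cloud condition."""
    faults = [f for o, f in zip(sun_occluded, vision_fault) if o == occluded]
    return sum(faults) / len(faults)

# A large gap between the two rates is the kind of statistical signal
# that would suggest the brittleness she describes.
print(fault_rate(1), fault_rate(0))
```

In this toy data every fault happens while the sun is occluded, so the conditional rates differ sharply; real logs would need far more frames and a significance test.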
But I loved you describing your reaching over to the glove box, because I
am here to tell you, you guys heard it from me first,
that if President Washington ends up in a Tesla accident,

(24:44):
it's gonna be because of the accidental steering nudge bump.
So one of the things that we've seen people do, accident after accident,
in these cars--and it's not just Teslas,
there's also Blue Cruise and Super Cruise--is that
people are so confident in the systems that they
drop something on the floor,
they need to reach around the back of the seat and pick something up in the back

(25:04):
seat.
Or they need just to get something outta the glove box, and they reach across,
and their shoulder...
Bumps the steering wheel.
Just bumps the steering wheel. And that, sometimes, depending on the car
and depending on the speed that you're going... but
lemme tell you something else I found.
But the Tesla will give you an audible signal, boom boom.
And then that'll let you know that it's disengaging Autopilot.
And you do have some time to adjust sometimes, you know.

(25:27):
But sometimes you don't. Sometimes you do it at the exact wrong time.
No, I get it. I get it. And I get that that could be a problem.
Here's the deal. You mentioned something earlier that I thought was really,
really interesting. You said that in fighter aircraft,
as automation became more and more prevalent and the
technology became better and better,

(25:48):
you actually started to see accidents decline.
I believe you're gonna see a similar thing now.
You're seeing accidents go up now,
but you gotta correlate that with the fact that there are more and more of these
vehicles in the market.
I think a comprehensive study may show that the whole host of
technologies that are in vehicles now, right--the lane-departure warning,

(26:09):
the auto-steer that pulls you back--
we're probably seeing an overall decrease in the number of
minor accidents that would've occurred 'cause you sideswiped somebody, or
you're coming up to a traffic stop and they would be looking at a text message
and run right into the back of the car. I got rear-ended that way. Nowadays,

(26:30):
vehicles do keep you from doing that.
They will stop the vehicle, or at least let you know with blaring signals
that an accident is imminent if you don't do something.
And so, as computers become more and more prevalent in how we drive,
we should start to see the number of accidents going down,
which is gonna have a dramatic effect on insurance companies, because they make

(26:53):
their money when there are accidents.
Yeah. Right. Well, I tell you, they're not too worried right now. Automation--
basically, there's two different kinds in cars. There's the safety automation:
auto emergency braking, the frontal collision warning. Right?
These kinds of safety devices.
But those gotta be working.
And they are working.
And indeed, we can see that decrease that you're describing; that is happening.

(27:14):
But the Teslas, Super Cruise, Blue Cruise--
these are convenience features that do lateral and longitudinal control for you.
Right? They're doing the acceleration for you and they're doing the steering.
Right? So the jury is very much out.
And there are a lot of scientists--myself,
and some other people at George Mason--looking at this. That jury is out.

(27:34):
And I will tell you that, having come from NHTSA,
I did the analysis myself on all this crash data we have.
And I will tell you that if you are in an accident in a car with
these convenience features, right,
you are statistically more likely to be seriously injured or killed.
Really?
And there's one reason--one really big reason;

(27:56):
there's a lot of little reasons...
'Cause you're overdependent?
That is probably part of it, but there's actually one clear, measurable problem.
What's that?
You're speeding. So this is...
And that's 'cause of overdependence.
Well, right.
So this is the kind of interaction that we're seeing: people become so
reliant, and they love their vehicles, and they... President Washington is loving
and trusting his vehicle so much that, you know,

(28:17):
I'm just gonna go nine miles over the speed limit.
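Her "statistically more likely to be seriously injured or killed" claim is the kind of statement an odds ratio captures. A sketch with invented 2x2 crash counts; these numbers are illustrative only, not the actual NHTSA figures her analysis used:

```python
# Invented crash-outcome counts (illustrative only, not NHTSA data):
with_feature = {"serious": 30, "minor": 70}   # convenience feature engaged
without_feature = {"serious": 15, "minor": 85}   # manual driving

def odds(counts):
    """Odds of a serious-or-fatal outcome versus a minor one."""
    return counts["serious"] / counts["minor"]

# An odds ratio above 1 means higher odds of a serious outcome
# when the convenience feature was engaged.
odds_ratio = odds(with_feature) / odds(without_feature)
print(round(odds_ratio, 2))  # → 2.43
```

An odds ratio alone does not establish cause; the speeding she identifies is exactly the kind of confounder a fuller analysis would have to adjust for.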
Yeah, yeah. No, look, first of all,
without incriminating myself too much,
you hit the nail exactly on the head.
My speed goes up because I got that technology with me,
without question. So now you're making me rethink.
Maybe I need to tone the speed down.
You do need to tone the speed down.
And so I will, I will definitely do that.

(28:38):
But that brings me to another question, because there's a YouTube video out
there, and I know you've seen it, with the title
"Missy Cummings wants to destroy Tesla." True statement, overstatement,
or what?
No, of course it's an overstatement. I want Tesla to survive. I mean,
my favorite thing about Tesla is the fact that it doesn't have a dealer model.
(28:58):
Like, go with any woman to try to buy a car and you will realize how much women
hate the dealer models. Right? So we would love...
It ain't just women. Hello. People that look like me, too.
You know what I'm saying? Hey, look, I've been there. And, um,
you're absolutely right. It was so easy.
You can pull up your computer right now and within 15 minutes I can buy a Tesla.

(29:22):
For a lot cheaper these days.
Yeah, that's exactly right.
Because of the "Missy Cummings wants to destroy Tesla" video. But...
I wish I had that kind of power. But I will tell you, look,
the car itself is great. The model behind the car in terms of the deal,
the no-dealer model.
Oh, that's fantastic.
There are so many good things to love about Tesla,

(29:43):
but I think that Tesla,
they were first out to try to do something brave and innovative. And I get that.
But, you know, your mom always said:
if you see your friends jump off a cliff, are you gonna go jump off a cliff too?
So Tesla had some questionable design decisions about letting people be
hands free, but now all the other car companies are modeling after Tesla.

(30:06):
Right. I hear you.
And I do not think we should allow that. I think no car, not Tesla,
not Ford, not GM, no car with any driver assist
should allow you to be hands-free. And that is a very unpopular opinion.
But unfortunately the Teslarati want to try to blow that up into something like
"Missy Cummings is coming for your autopilot."
Yeah, no, I get it. Now look,

(30:28):
if there could be wholesale adoption of the manner in which you buy and sell
Teslas, that I tell you would be a game changer.
The reality is it was so easy for me to buy my car.
It literally took me about 15 minutes.
Yeah. I'll do free advertising for them for that.
Like, I love that feature of their car.

(30:48):
But I think the whole Teslarati thinking that I'm out to get them,
it kind of points to the bigger problem. There's two problems it points to:
number one, women in tech, women who assert themselves in tech. You know,
it's funny, we talked about the fighter pilot thing:
was I discriminated against as a fighter pilot? Yeah.
But I'll tell you what's shocking to me is the fact that I was a fighter pilot,

(31:09):
have a PhD, have been a tenured MIT professor,
have done all of these things, and the Teslarati and other tech
bros hate the fact that I'm asserting myself and that I'm a broad.
I'm a pushy broad,
trying to push my opinion that is not favorable to their stock price. Right.
So that's one issue.
That's what you think it is.
Yeah, but I think also it points to the divisive nature of this country.

(31:32):
Like I have a very, I think, balanced view of Tesla.
It's a great car except for this bad Autopilot that,
when you have your hands free, basically lulls you into complacency.
So I can like the car, but not like a feature.
But that ability to have a balanced view towards really any person,

(31:53):
politically, or to a technology, is gone. It's like you're either with me or
against me, you know. That's how people are.
It's all, you're either all in or you're all out.
That's right. I get it. I get it. A hundred percent. This is interesting.
In the last few minutes I have,
I wanna steer us closer to your research and what you're doing or
what you will be doing here at Mason.

(32:14):
The real advance in all of this is intelligence, right?
We are bringing more and more intelligence to the systems, right?
Whether it is classic neural networks with backpropagation
or Kohonen networks and the like, or deep learning,
or it's just the idea of bringing expert modeling and systems

(32:36):
into code, right?
Where you take into account hundreds and thousands of variables in terms of
decision making. The reality is
that systems are getting more intelligent, and you stand at the forefront of
this.
And so talk to us a little bit about the degree program you're putting in place
and how do you see that fitting into everything that you've learned up to this

(32:57):
point?
Yeah, these are great questions. You say that intelligence is advancing.
And I will tell you, an approximation of intelligence is advancing.
Understood.
So artificial intelligence is artificial and not intelligent.
And you've heard about GPT, the large language models.
Yeah, yeah. ChatGPT.
These things are dangerous because they're good enough to approximate

(33:18):
language. But if you actually pay attention,
you can see very quickly how wrong and dangerous disinformation coming from
something like ChatGPT could be. But I've spent a lot of time,
obviously, in the aviation world, now in the surface transportation world.
I've spent some time in the medical world, looking at these large language
models.
And the one common theme across all of these is that intelligence technologies are

(33:42):
advancing so rapidly.
What we're not doing is keeping up, allowing people to get educated
in how to think about the design frameworks behind these systems: When should you have these
systems? Why should you have these systems?
What requirements are they really meeting?
And then, how should I test these systems to make sure that they're sufficient?
And this whole idea of the design life cycle around AI,

(34:05):
it's new thinking. Like, people think, oh yeah, we know how to design systems.
We've got agile system development.
Well, it turns out for safety-critical technologies,
maybe your testing framework needs to be a little different.
Maybe you need to do different kinds of component testing. And guess what?
Digital twinning.
Like, I'm so sick of hearing about digital twins, because you can digital twin AI all

(34:26):
you want, but garbage in, garbage out. The only way
you're ever gonna know if your Tesla is actually going to not hit children,
and this is a big debate going on right now in the Tesla community,
is you do have to put it on the road and you do have to put it in various tests.
Real tests, not fake tests. Not FSD full self-driving tests.
Really principled tests that are answering a research question. And,

(34:49):
and I think companies are reluctant to do this because it's expensive.
It takes time and effort that maybe they wanna spend on other things. But...
But the other thing is that they could fail. And when you fail...
It's more development cost.
Well, not just that. The results are catastrophic. Right?
I remember looking at the Tesla stock price when the first Tesla fire

(35:12):
hit the news and you just watched the share price drop.
That's somebody's livelihood. And the reality is, these are complex systems.
Complex systems will fail, right? You have 'em in automobiles,
you have 'em in rockets, you have 'em in airplanes. You have 'em in fighter aircraft,
right? There's been failures. Failure is a part of the process.

(35:33):
You hope that you can put it in a context where there's not loss of life.
The reality is that these things do happen.
Yes. And I agree with Henry Petroski,
the famous Duke professor who says that
to fail is just a core component of engineering. I'm all about that, right?
But I think with artificial intelligence,
one of the problems that we're seeing is that there just really aren't testing

(35:54):
paradigms to try to at least figure out how to mitigate risk.
No, no, I get it. I think it's a little more,
and I don't want to use the word nefarious, 'cause I don't think people are
trying to do harm.
I think the challenge is a little different, in that nowadays we don't know when
we're interacting with AI technology and when we're not. Right?

(36:14):
It's not ubiquitous yet,
but it is far more intrusive in our everyday lives than we actually
realize. And so you can be interacting with your vehicle, not a Tesla,
we own a BMW as another vehicle, right?
You could be interacting with that vehicle and there could be aspects of
artificial intelligence handling some systems and you actually have no idea,

(36:35):
right? People dealing with ChatGPT
are being told that they're dealing with artificial intelligence,
but they're dealing with a whole host of technologies on their computers as they
go to websites and as they frequent the internet on a day-to-day basis, where
they're not told, and they're interacting with something
thinking that they might be interacting with a human and they're actually

(36:57):
interacting with a bot, right?
You would handle things differently if you knew it was a bot relative to a
human. And so we need guardrails.
And that's exactly what we're gonna teach you at Mason.
Outstanding.
We're gonna teach you how to build them, how to set systems up, to design them,
how to interpret them, how to recognize when you need guardrails.

(37:20):
So this is one of the things: I think that Mason just has such a rich field to
pull from. There's many, many government agencies here.
There's lots of top-talent faculty here.
Lots of really motivated students who are gonna work in all aspects of industry.
We've got healthcare, we've got DOD, DHS, the transportation industry.

(37:40):
So I'm really looking to build a strong cohort of people who can
recognize: Do I need guardrails? What kind of guardrails?
And how do I maintain those guardrails over time?
Mason is constructing the Fuse building on its Mason Square campus in
Arlington. As you know, the building will house research labs,
corporate innovation centers, incubators and accelerators.

(38:04):
How does that interdisciplinary model fit into your research?
Well, I'm hoping personally to teach classes in that building and actually have an
offshoot of my lab out there.
Because with all this work that we're doing with government agencies on safe,
secure, trustworthy AI,
we anticipate offering research and lab-based classes out

(38:26):
there. So it's critical to my research and critical to the overall
interdisciplinary nature of AI in general.
Well look, I am looking forward to what you're gonna do in Fuse.
I think it's going to be fantastic.
Just talk a little bit about how academia can be the agent that
educates industry and government employees to actually ask the right

(38:48):
questions about AI's performance, its weaknesses, its strengths,
and its shortcomings.
Yeah. So these are great questions.
I think first and foremost we have to start looking at the
assumptions in the design and construction of AI.
So I think what a lot of people miss is they think that AI is this great
computational tool: one plus one is two.

(39:11):
And so how can you argue with the math that's coming out of AI, for example?
Well, it turns out that there's a lot of subjective work.
And I personally have done research, and I'm continuing to do this research, that
looks at the assumptions that modelers make. So when your engineer,
your computer scientist, develops an algorithm, for example,
they actually make a lot of guesses about how to initialize

(39:34):
certain parameters inside the algorithm. How do I set some hyperparameters?
And they don't really understand that the way that they set up the problem can
actually cause the model to have very different outcomes as opposed to maybe
another engineer who sets up the problem.
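As a toy illustration of that point (a sketch of mine, not anything from the conversation): two hypothetical engineers fit the same simple model to the same data, differing only in initialization scale, learning rate, and step count, and end up with measurably different fitted models.

```python
import numpy as np

# Toy sketch (illustrative only): the same logistic-regression problem,
# solved by two hypothetical engineers who choose different initialization
# scales, learning rates, and step counts.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.5, 0.0])
y = (X @ true_w + rng.normal(scale=2.0, size=200) > 0).astype(float)

def train_logreg(X, y, init_scale, lr, steps, seed):
    """Plain gradient descent on the logistic loss."""
    w = np.random.default_rng(seed).normal(scale=init_scale, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient step
    return w

# Engineer A: small init, many steps. Engineer B: large init, few steps.
w_a = train_logreg(X, y, init_scale=0.01, lr=0.5, steps=2000, seed=1)
w_b = train_logreg(X, y, init_scale=5.0, lr=0.01, steps=50, seed=2)

# Same data, same model family: noticeably different fitted models.
print("distance between the two fitted weight vectors:",
      round(float(np.linalg.norm(w_a - w_b)), 2))
```

Nothing here is "wrong" in either run; the two setups simply encode different modeler assumptions, which is exactly why those assumptions deserve scrutiny.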
So one of the research projects that I'm working on now, that's gonna continue
probably out at the Arlington campus, is looking at data labeling.

(39:58):
So it turns out, after spending some time with Amazon,
I learned just how much data labeling is happening in offshore places like
India and around the world. And lots of companies are using them.
And then the question is,
if you have people labeling images for eight hours a day,
is that labeling just as good in their eighth hour as it is in the first hour?

(40:19):
And one of the things that we're looking at in my research right now is how does
sloppy labeling, not wrong labeling, so it's not wrong,
people weren't circling the wrong component of the image,
but they were very sloppy.
And then when you run that through a convolutional neural net,
how much of the sloppiness in the data labeling shows up in the quality

(40:42):
of the outcomes? Turns out it's pretty significant. And so...
That's part of the data set that's gonna be part of the model.
That's right. So I really want to help people focus on knowing when,
where, why,
and how to ask those questions about the underpinnings of AI.
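The label-quality effect described above can be sketched in a few lines. This is a toy illustration only: random label flips stand in for "sloppy" labeling, and a deliberately simple 1-nearest-neighbour classifier stands in for the convolutional net; none of this is the actual study.

```python
import numpy as np

# Toy sketch: how label quality in the training data shows up in the
# quality of the outcomes. Random label flips stand in for "sloppy"
# labeling; a 1-nearest-neighbour classifier stands in for the model.
rng = np.random.default_rng(42)

# Two well-separated classes in 2-D.
X = np.vstack([rng.normal(-2.0, 1.0, (500, 2)), rng.normal(2.0, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

def knn1_accuracy(X_tr, y_tr, X_te, y_te):
    """Predict each test point's label from its closest training point."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    return float((y_tr[d.argmin(axis=1)] == y_te).mean())

# Hold out a clean test set; only the *training* labels get sloppy.
idx = rng.permutation(len(y))
train, test = idx[:800], idx[800:]
y_sloppy = y[train].copy()
flip = rng.random(train.size) < 0.30        # roughly 30% of labels mangled
y_sloppy[flip] = 1 - y_sloppy[flip]

acc_clean = knn1_accuracy(X[train], y[train], X[test], y[test])
acc_sloppy = knn1_accuracy(X[train], y_sloppy, X[test], y[test])
print(f"accuracy with clean labels:  {acc_clean:.2f}")
print(f"accuracy with sloppy labels: {acc_sloppy:.2f}")
```

Even this crude setup shows the degradation: the model is unchanged, the test set is unchanged, and only the training-label quality moves the outcome.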
Is there an assumption that was made in the development of this AI that could

(41:04):
have a downstream effect?
And not that you then shouldn't use the AI with that downstream problem,
but at least you know that there is potentially a problem
on the downstream side, and that you have to maybe not trust the outcomes as much as
you would if you had better quality data going into it.
Outstanding.
I'm so looking forward to what we're gonna be able to give to the community,

(41:26):
especially as this field continues to grow and as it continues to have an impact
on taxpayer-supported dollars. Right.
The investment that the country is making in these technologies,
you need to have an understanding of when to use 'em, when not to use 'em,
and when to be cautious about their use.

(41:46):
Right. And when people make big claims,
I would like to give people tool sets to be able to evaluate those claims for
themselves.
Outstanding. So I get to ask you a controversial question.
Oh, what? We haven't been?
Really, a really controversial one.
Okay.
How long, in your opinion, before we actually see full self-driving vehicles?
Well.

(42:06):
I'm just gonna need a definition from you first. Do you mean like...
I mean, you get in the vehicle and there's a steering wheel you can take
over. So the whole concept from Zoox where there's no steering wheel,

you just get in and ride: I'm not talking about that.

(42:18):
But vehicles that are full self-driving, where you have the option to
push a button and the car just takes over.
And you get in the backseat and go to sleep if you want and it'll take you to
Vegas.
Well, I'm not talking about that.
But you actually have the ability to do that:
The technology will be sophisticated enough that you could indeed

(42:41):
go in the backseat and go to sleep. When do you think that'll happen? Now,
I'm not saying that we will ever get to a point where the community allows that
to happen, but when is the technology mature enough for that to happen?
We're not even close.
We're not even close? So how many...
You know, I'm gonna pull a typical academic response: oh, 10 to 15 years.
'Cause that's the secret academic-speak for "we don't know, we,

(43:02):
we have no idea." Right.
It's far enough away that you can't get called on it.
But yeah. Right. So I think that we will see, in the short term,
there's been a lot of success.
I mean, you see it on George Mason's campus with the little Starship grocery
delivery robots.
So companies like Nuro, who have the bigger purpose-built vehicles,
are on the road. Yeah. I think that they--

(43:22):
Those are working!
Those are working. And
I think that there's real legitimate profit there.
What is it, TuSimple? They have the trucks out there.
So I think that small last-mile delivery is probably
where we'll see that first happen. You know,
it's just, like, Waymo is struggling still.
Cruise is under investigation by the National Highway Traffic Safety

(43:45):
Administration. I mean, all signs are like they're making incremental progress.
But if you're asking me,
should I go ahead and start investing in self-driving cars because they're gonna
start turning a profit next year? I don't know when that year is gonna be.
Here's the last controversial question.
What do you say to people who worry that automation

(44:05):
will be taking away even more jobs from people?
'Cause we know they have taken away some jobs; we know where this is going.
Talk about that a little bit.
Yeah, I don't think we know where it's going.
I think we think we know where it's going because we hear, you know,
the media's job is to kind of get you to click more.
So the headlines are always bad on this front,
but I've been predicting this correctly for a long time. Look, it's true.

(44:27):
Elevator men, you know, there was always a man in the elevator,
maybe occasionally a woman, who pushed the buttons for the elevator.
Are those people out of a job because of automation? Yes. Yes.
But they probably needed to be out of a job.
That job was dull and tedious and it didn't need to be there.
Now, are we at a place where we might lose a few jobs here or there to

(44:48):
automation? I would say yes. Particularly in factories and manufacturing. Again,
these are jobs that destroy the dignity of humans.
I would love to help the people who are having to pack my Amazon boxes.
This is really, really boring work for them.
It causes repetitive stress injuries.
I would love to get that job automated as quickly as possible.

(45:10):
It turns out it's very hard to automate. The human hand is a thing of genius.
Eventually we will start to see more and more jobs automated as we figure it
out. But every time we automate one job, it opens the door to 10 more.
And I think that that's what people don't realize: we can't get enough
manufacturing workers right now. We are in such a labor shortage;

(45:33):
pilots, like, you wanna be a pilot?
Go sign up, because we don't have enough pilots right now!
So I think that people tend to hear the worst when they
hear that robots are coming. I will tell you, taxi drivers:
you do not have anything to worry about. Truck drivers:
you do not have anything to worry about.
So you got a long time before the vehicles start driving themselves and take

(45:55):
your jobs.
Oh, we are so far away from that.
And what's happening is we are seeing the creation of so many more jobs.
And I'll tell you something else for the audience:
if you're looking for a good stock tip,
start your own robot maintenance company, because we can't keep them all working.
Of the manufacturing robots that are out there,
a lot of them have to sit in a closet because they don't have enough people to

(46:18):
come and fix them when they inevitably break down.
Outstanding.
Well, this has been fantastic and I cannot wait to see the results of your
research.
And that will do it for this episode of Access to Excellence.
I'd like to thank my guest, Professor Missy Cummings,
who directs George Mason University's Center for Robotics,

(46:40):
Autonomous Systems,
and Translational AI, for taking the time to speak with me.
I am Mason President Gregory Washington saying, until next time,
be safe, Mason Nation.
And don't speed on your Autopilot.
Alright. Of course.
If you like what you heard on this podcast,
go to podcast.gmu.edu for more of Gregory Washington's

(47:04):
conversations with the thought leaders, experts,
and educators who take on the grand challenges facing our students, graduates,
and higher education. That's podcast.gmu.edu.