Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to the Practical
AI podcast, where we break down
the real world applications ofartificial intelligence and how
it's shaping the way we live,work, and create. Our goal is to
help make AI technologypractical, productive, and
accessible to everyone. Whetheryou're a developer, business
leader, or just curious aboutthe tech behind the buzz, you're
(00:24):
in the right place. Be sure toconnect with us on LinkedIn, X,
or Blue Sky to stay up to datewith episode drops, behind the
scenes content, and AI insights.You can learn more at
practicalai.fm.
Now, onto the show.
Daniel (00:48):
Welcome to another fully
connected episode of the
Practical AI podcast. I'm DanielWightnack. I'm CEO at
Prediction Guard and joined asalways by my cohost Chris
Benson, who is a principal AIresearch engineer at Lockheed
Martin. And in these episodeswhere it's just the two of us,
we we try to analyze some of thethings that have come out in
(01:11):
recent AI news and maybe touchon a few practical elements
that, will will help you levelup your machine learning and AI
game. And, really excited today,Chris, to talk about something
that is very intriguing,somewhat maybe confusing for
some people, exciting for somepeople, maybe producing a little
(01:36):
bit of uncertainty, some Just alot of mixed feelings maybe
around around this.
We're talking about America's AIaction plan, which was just
released by by the White Househere in The US. So yeah, this
just came out not too long ago.Obviously, we're just kind of,
(01:59):
reading through the initialrelease and some reactions, but
I say there's been some mixedfeelings that it's kind of
generally rolled back somethings that have maybe been
happening in The US around AIand regulation. Maybe introduce
some other things that there areelements of the action plan that
I find really inspiring andencouraging. So yeah, it's maybe
(02:23):
an interesting mixed bag formany folks.
But yeah, I'm sure it's come upin your circles and in
discussions, maybe even withfamily or around town that
America has an AI action plan.What are your initial thoughts?
Chris (02:41):
Yeah, since the White
House released it, I've
definitely had people askingwhat I thought of it and stuff.
And I know both you and I hadread it a while back as it was
released, and we're just gettingto a point where we had a chance
to to talk about on the show.And leading into it, I'm, of
course, going to do my usualdisclaimer Of course. That
because we're talking aboutstuff that can tangentially
(03:02):
touch government affairs andmilitary, I am only speaking for
myself as always and never everspeaking for any other
organization and most especiallynot my employer Lockheed Martin.
So I want to be very clear aboutthat because I I do want to keep
my job.
So this is this is just ChrisBenson on the podcast. So like
(03:24):
Good to know. Yeah. So this isit's an interesting mixture of a
lot of things as you're alludingto. And like you, there are some
things I was inspired about.
There's a fair number of thingsin there that are inspiring. But
there's also a lot of stuff thateither rolls back other aspects
(03:44):
that we probably that I think alot of people would agree in a
nonpartisan context would beadvisable, some of the
protections in certain areas.And it is, I think, you know, my
my single biggest gripe is it isa starkly partisan document in
many respects. It feels like itwas written by some folks that
(04:11):
were that had some good ideas inthere, but were being very
careful to make sure it wouldfit in with the current
political climate. And that as Iread it, that was if I felt like
I kept bumping into that filter,that the authors, would have had
on it.
That that's my first impression,a highly filtered document with
(04:31):
good and bad in it.
Daniel (04:33):
Yeah. Yeah. Well, I'm
interested, I mean, kind of to
set the stage a little bit. Nowwe've talked about other things
that have rolled out from thegovernment directly from the
White House on the show. I'mjust looking back at 2019, if
you remember those days.
(04:54):
2019, there was a White Houseexecutive order on AI. This was
during the Biden administration.And we talked about that on the
show. It's interesting in thatcase, so if we kind of roll back
the clock a little bit, forquite a while, has been this
effort at NIST, the NationalInstitute for Standards and
(05:16):
Technology, around AI and AIrisk management framework. That
has some hints of regulation.
It's a government thing, butit's not official on any kind of
regulatory front. Then there wasthis White House executive
order, which I think I mighthave said Biden, but I think
(05:38):
this one that I referenced from2019 was a Trump executive
order. Now I'm getting all thesequence confused, but that's
episode, what, '34. It's back awhile. It's back a while.
Think I was thinking, because Ithink we did talk about another
executive order. We did. Anyway,there's been this kind of
(05:59):
rolling series of executiveorders back and forth on both
sides, like you say, of thepartisan line. What, from your
perspective, maybe before wedive into the details, some of
the executive orders that havecome into place do have some
type of regulatory implicationin the sense that I remember one
(06:26):
of them that we talked about,for example, around model
builders and the requirement ofcertain sizes of models to be
regulated in a certain way. Interms of this AI action plan,
what is your understanding, ifyou do have one or don't have
one, what does it mean that thisis an AI action plan?
(06:48):
It's not an executive order.Partially in my understanding,
it's meant to not regulate. Andso maybe it's just a document.
But really, guess, what is theimplication of this more
broadly, do you think, in termsof its impact?
Chris (07:05):
Yeah. I mean, it
definitely reflects this current
administration's approach wellbeyond AI to to a larger
viewpoint. This one definitelyis all about remove removing
regulation, which I think is Ithink that there are places
where where regulation could bebeneficial in their places where
(07:28):
regulation might get in the way.I think you have to pick
carefully where you want to putthat. I think and one of the
other things that I've noticedhere is that, you know, there's
a lot, you know, there's a lotof verbiage in it that kind of
on the surface, almost anyonewould say, yeah, I'm for that.
But we also know if you look atat the politics outside of this
(07:50):
specific document and andspecific AI field, what free
speech is depends, you know, interms of interpretation, depends
on who you're talking to. If youput a democrat and a republican
in the room and talk about whatfree speech is, you may get some
different answers on what thatis and what's implied into that.
And so it's one of thosesituations where we are we are
(08:15):
left to interpret what theintent of some of these words
means along the way. And it'salmost impossible to do that
without kind of taking in thelarger political environment,
which sadly, I think detractsfrom the document in a lot of
ways, I think it could bebetter. I'll also note that it
doesn't spit it there's a wholebunch of recommendations in it.
(08:37):
And there's no funding for anyof those recommendations that is
noted. And so it's interestingabout in the sense of where do
you go from here, whether it's agood idea or a bad idea, from my
perspective, it's still leftwith well, great. Maybe I like
that idea. How are we gonna dothat now?
Daniel (08:56):
Yeah. I think as opposed
to some other maybe executive
orders that had some directimplications right away, this
is, again, action plan. I guessthat's what I was meaning in
terms of impact. Like whatimpact is this going to have
other than being a document? Ithink if people are out there
(09:16):
and maybe one implication thatcould be thought of is
universities applying forgrants, companies applying for
SBIRs or other types ofgovernment interactions.
I think in the current climateof those things getting
approved, if it's AI related, itprobably needs to kind of tie
(09:38):
into this action plan for it tohave a greater chance of
success. Tying into that actionplan kind of has this trickle
down effect to all of thesegrants and innovation that
happen across university andacross small business and
etcetera, etcetera. So there ismaybe this trickle down effect,
but it's not a sort of direct,we're going to fund this or that
(10:02):
and there's this program, butmaybe that is more the impact in
terms of the climate of what isfunded and how that affects
businesses and universities orresearch institutions that get
government funding.
Chris (10:17):
Yep, I totally agree.
It's kind of funny. We've talked
about some of the ambiguity aswe've gone forward into this and
you know, about you know, andthe good and the bad of removing
regulation a little bit, theambiguity of the free speech
component of it. One of thethings that I'll say that I I
like the intent of is thepromotion of open source and
(10:38):
open weight AI, which are bothexplicitly, you know, pointed
out in several points in thedocument. And I I know that we
have long talked about that onthe show.
And so I think that's one ofthose those, inspirational
moments, though. There's not alot of how we're gonna get
there, I must say.
Daniel (10:57):
Yeah. Actually and just
to give maybe We've talked about
a little bit of the impact.We've set it some stage. I love
this kind of point that you'rebringing up about the open
access models, which of courseI'm very passionate about. But
maybe it would be helpful alsofor our listeners.
There's a bit of a structure tothis document. If you're
(11:21):
listening to this in your carand you don't have it in front
of you, of course, we'll link itin the show notes. But there's
these sections and they're splitup into pillars of the AI action
plan. So kind of these keypillars. The first pillar is
accelerate AI innovation.
The second pillar is buildAmerican AI infrastructure. The
(11:44):
third pillar is lead ininternational AI diplomacy and
security. So there's multiplethings under this first pillar.
So the first pillar beingaccelerate AI adoption. And
you've highlighted a couple ofthose things that might be more
on the kind of controversialside related to removing
(12:07):
regulations, specifically maybeeven rolling back some of those
things related to the NIST AIrisk management framework,
protecting free speech in AImodels.
Looking at this AI systems mustbe built from the ground up with
freedom of speech and expressionin mind and be free of top down
(12:27):
ideological bias. There's thatdiscussion of the free speech
piece under the innovation. Thenthey go to this discussion, like
you talk about, about promotingopen source and open weight AI,
which of course is one of thoseelements that I'm really excited
about. Some of the recommendedpolicy actions they talk about
(12:52):
is ensuring access to largescale computing for startups and
academics, partnering withleading technology companies to
increase research and fosteringthis sort of environment that
creates a supportive environmentfor open models because they
obviously see this as kind of apart of the future, which I
(13:13):
would of course agree with. WithOpenAI becoming open again and
releasing open models for thefirst time a couple weeks ago, I
think that certainly is a trendthat we're seeing.
But yeah, I love that piece. Ofcourse, again, it's this mixed
bag of things that as you gothrough each of these pillars,
(13:34):
you kind of have to parsethrough. What else stands out to
you, Chris, in this kind ofaccelerate AI innovation pillar?
There's a lot of interestingthings there about world class
data sets, the science of AI,interpretability, adoption in
government, adoption within theDepartment of Defense, lots of
(13:59):
things in this innovationpillar. What else stands out to
you?
Chris (14:04):
I think one of the first
things that I noticed was So
once again, on the surface,addressing dataset quality and
stuff is great at that at thatoutermost layer of the onion.
But as soon as you dig into itin the document and, you know,
people listening to us may be onboth sides of this politically,
(14:25):
but they immediately go for apolitical objective directly
under that which is basicallykind of removal of of DEI
concerns. And for those of youwho may not be familiar with the
DEI acronym, it's diversity,equity and inclusion. And that
that is a a big focus of policyacross all fields within the
(14:47):
Trump administration. And sothat had we not been inserting
some of the politics into it, Iwould have been encouraged to
see that under it, know, but itit that's with each of these
things, you kinda have toinspect it and see how much
politics is in it.
And not in in fairness, notevery point that they're making
under either this first pillarof Accelerate AI innovation or
(15:09):
the subsequent pillars thatwe'll talk about include
explicit politics. But I know inmy first past, that was the
first thing was trying to filterthrough some of that issues and
actually get to the meat of itand then think about a many of
the things that are suggestedare already being done. So it's
not it's not new. But for thosethings that are not being done,
(15:31):
how do you get there? And what'sthe funding to make things
happen that aren't alreadyhappening?
Sponsors (15:52):
Well friends, when
you're building and shipping AI
products at scale, there's oneconstant, complexity. Yes.
You're bringing the models, datapipelines, deployment
infrastructure, and then someonesays, let's turn this into a
business. Cue the chaos. That'swhere Shopify steps in whether
you're spinning up a storefrontfor your AI powered app or
(16:13):
launching a brand around thetools you built.
Shopify is the commerce platformtrusted by millions of
businesses and 10% of all USecommerce from names like
Mattel, Gymshark, to foundersjust like you. With literally
hundreds of ready to usetemplates, powerful built in
marketing tools, and AI thatwrites product descriptions for
(16:33):
you, headlines, even polishesyour photography. Shopify
doesn't just get you selling, itmakes you look good doing it.
And we love it. We use it hereat Changelog.
Check us outmerch.changelog.com. That's our
storefront, and it handles theheavy lifting too. Payments,
inventory, returns, shipping,even global logistics. It's like
(16:55):
having an ops team built intoyour stack to help you sell. So
if you're ready to sell, you areready for Shopify.
Sign up now for your $1 permonth trial and start selling
today atshopify.com/practicalai. Again,
that is shopify.com/practicalai.
Daniel (17:20):
Well, Chris, just kind
of tying up this first pillar of
the AI action plan, accelerateAI innovation. I think a couple
potential implications thatpeople could have, we're
obviously We can't read thewhole thing here on podcast, but
we've just hit a few of thehighlights of this first pillar.
(17:40):
But I think there are going tobe these sorts of debates over
what defines bias in an AImodel. Because there's
certainly, there is And again,I'm just kind of coming at it
from a, I guess, technologystandpoint. There could be bias
as related to maybe DEI types ofthings in one way or another,
(18:07):
for example, gender or whateverthat might be.
But there's also biascorresponding to very real world
implications around technologyand security. For example, the
bias of one model to be moresusceptible to prompt injections
than another model. That is abias, but it has nothing to do
(18:31):
with these other categories. Ithink there is going to be this
bias, in a sense, as a term ofart in our world. In another
sense, it's a politicized thingin our country.
I think there's going to be alittle bit of tension there. I
think also with the kind ofrollback of some of this
(18:53):
regulation, it's going to putmore pressure, I think, on
businesses to strike the balancebetween innovation and the
safeguards that they need to putin place. Because without
explicitly being forced to,you're going to see companies, I
think, at a very high profilelevel make critical mistakes in
(19:16):
their AI applications and suffersome pretty hard brand
consequences because of this.That in itself, just the
commercial pressure, is going toforce companies to think about
their self regulation. Companiesare going to have to figure out
this balance for themselves, Ithink, as it's not flowing down
(19:39):
guidance wise from standardsbodies in the government.
Chris (19:42):
Agreed.
Daniel (19:43):
Yeah. Well, that gets to
pillar one, accelerate AI
adoption. Pillar two, buildAmerican AI infrastructure.
Sounds exciting. What stood outfor you here?
Chris (19:56):
It does. One of the
things, and I almost forgot to
mention it in pillar one and itapplies to pillar two is one of
the the subtleties I'm noticinghere when they're talking about
building out AI forinfrastructure is, you know, we
there is already quite a bit ofAI infrastructure in The US. We
have a lot of major players withbig investments in the
(20:17):
commercial sector that all havegovernment and military and
intelligence links and thingslike that. And so this applies
across the board. It applies tocommercial.
It applies to to to government,etcetera. One of the things that
really stood out to me in someof these suggestions was that
there's a subtlety of pickingwinners and losers here. In most
(20:38):
of these suggestions, there isthere is already something in
effect that does these differentthings. It may not be optimized,
and that's that's open fordebate. It may be a different
organization from how they'reenvisioning it in the thing.
But there is a subtle picking ofwinners and losers through the
entire document in terms ofapproach and who is in in
(21:01):
sometimes explicitly who'sresponsible, and in some cases,
who they're likely laying itwith. And so that's one of the
the is a is an overview thatparticularly applies to the
infrastructure section pillartwo would be that I can almost
envision the lobbying fromcertain large companies that
(21:22):
contributed to theinfrastructure section as I was
reading through the documents.I'll leave those companies
unnamed, probably our listenerscan can come to some of those
conclusions themselves. So itwas it definitely I I think that
there's some interestinginfluences that are that are at
play here behind the scenes interms of the specific choices
(21:43):
and verbiage being used.
Daniel (21:45):
Yeah. And just to give
people an idea of some of the
things that are mentioned in theBuild American AI Infrastructure
pillar of the plan, there's sortof creating streamlined
permitting for data centers,semiconductor manufacturing,
etcetera, development of thegrid, restoring semiconductor
(22:08):
manufacturing, kind of onshoringthat along with a number of
security related things. Sobolstering the cybersecurity of
critical infrastructure,creating secure by design AI
technologies and applications, amature federal capacity for AI
incident response. So veryinteresting. I find this
(22:31):
interesting at the same timethere's a rollback of regulation
in the public sector aroundthings like maybe what would
have flowed into regulation fromeither the NIST AI risk
management framework or previousexecutive orders or that sort of
(22:51):
thing.
There is this pressure from thegovernment side where they're
talking about making sure andtrying to force somehow I don't
know exactly how that's kind ofvague in the document, But to
ensure that AI systems that thegovernment relies on,
particularly for nationalsecurity, are protected against
(23:12):
spurious or malicious inputs,some of the language from the
document, promoting resilientand secure AI development,
having the DoD lead incollaboration with NIST, this
sort of refining of generativeAI frameworks, roadmaps,
toolkits. It sounds like a lotof what has happened in the NIST
(23:34):
AI risk management framework,but maybe led from a different
direction. So there's this, I'mseeing two things at once, this
sort of what was rolling more tothe commercial side, but now is
being geared more towardsgovernment implementation of AI,
(23:55):
but also being led from adifferent direction, being more
led from the DOD kind ofnational security perspective
versus like a NIST?
Chris (24:03):
Yeah, there is a there is
a lot of kind of, you know,
whether kind of government ish,you know, in all the things
military and otherwise that arethat are prevalent through it.
And, know, once again, as we'relooking through the section, the
subtlety of the outcomes thatare possible here are pretty
(24:24):
big. And I'll give you oneexample of that. You know, they
mentioned, even though the thirdpillar refers to AI diplomacy
and security and the secondpillar, there's a couple of
points where they talk aboutsemiconductor manufacturing, in
terms of streamlining thepermitting of that and to and
there's one that's calledrestore American semiconductor
manufacturing. And, you know,that that has profound
(24:47):
implications on the commercialspace and on only on our foreign
policy, but on our nationalsecurity that listeners of this
who may not be very familiarwith foreign policy and military
concerns, You know, a bigstrategic reason why The United
States is kind of the is kind ofthe the leader in defense of
(25:10):
Taiwan concerns.
Taiwan's responsible for its owndefense, but we have some
obligations there that oftencome out in the news that people
would have seen and a lot ofthat has to with the fact that
the global semiconductormanufacturing industry is very
well entrenched there and it'svery complicated to replicate
that elsewhere. That that is nota trivial thing to do. And so
(25:33):
whether or not we are successfulaccording to this document and
doing what it's advocating, itwould have implications either
way in terms of things outsideof AI altogether. And to your
point, kinda leading into that,it definitely has a very, you
know, different in, you know,shifting. There's some stuff
that's assigned to NIST, butthere's also stuff that might
(25:55):
have traditionally been withNIST or other similar
organizations that are oftenmore commercially focused and
moving those into into more of aa DOD sphere of influence as
opposed to that.
So, yeah, you there's definitelya policy, an overall policy
shift to be felt with thedocument going forward.
Daniel (26:14):
Yeah. I'm kind of
fascinated a little bit in this,
AI infrastructure pillar, thisidea. The last thing they
mentioned is this sort offederal capacity for AI incident
response. And this is, I don'tknow, I was just trying to parse
through a little bit of this,like what is an AI incident? And
(26:37):
some of this terminology I'msure is intentionally vague in
certain places, but at least interms of this AI incident, they
say prudent planning is requiredto ensure that if systems fail,
the impacts to critical servicesor infrastructure are minimized
and response is imminent.
And they wanna promote this kindof incorporation of AI incident
(27:01):
response actions into incidentresponse doctrine and best
practices. So actually this wasanother kind of point, other
than the fact that I was alittle unclear what they mean by
AI incident response. I did likethat this was another element in
this pillar that I kind of likedfrom a certain perspective in
(27:24):
the sense that I really believethat there's a lot of people
developing AI applications andautomations and that sort of
thing without a lot of bestpractice cybersecurity
knowledge. Then over here,there's the cybersecurity folks
who aren't sleeping at nightbecause they know all this is
going on, but it's sort ofwhatever it is. The executives
(27:48):
or the board are pushing this AIstuff forward.
They don't want to bring thecybersecurity folks in because
that's going to slow things downand the naysayers or whatever.
But I do really think thatthere's a benefit to have this
kind of more cross functionalcollaboration between
cybersecurity and AI developmentand actually bake in some of the
(28:09):
known AI threats intocybersecurity playbooks and some
of the cybersecurity bestpractices into AI application
development. I think if you lookback, you could make the
parallel that when data sciencewas big and hyped, there were
data scientists building thingsthat DevOps and infrastructure
(28:30):
people couldn't support andthere wasn't a lot of cross
functional collaboration onthose things. We kind of worked
through a lot of that. I thinkthere's a similar thing
happening right now in terms ofAI development and the
cybersecurity industry.
I do like that kind of crossoverin a lot of ways that they talk
(28:51):
about under this AI incidentresponse piece.
Chris (28:54):
It is. And that while
we're talking about that
particular point, you know, andwe as we were discussing kind of
the the power shift that thatyou're seeing, I I note that in
their third recommendationpolicy note, but they shift who
is in charge of that by going tothey say led by DOD, is the
(29:15):
Department of Defense, DHS,which is Department of Homeland
Security, and the ODNI which isthe Office of the Director of
National Intelligence incoordination with OSTP which is
the Office of Science andTechnology Policy, NSC which is
the National Security Council,OMB, which is the Office of
Management Budget and the Officeof National Cyber Director. And
(29:37):
so that's the they're they'reclearly putting defense and
intelligence at the front end ofthat and kind of and kind of
backloading it with moretechnology orientation on that.
And that's very different fromhow we've seen it in the past.
They have essentially flipflopped how that has
(29:59):
traditionally been addressed toyour point, which which there's
potentially good and bad to thatapproach.
There it will definitely createa different set of policies when
you have a different set of oflead agencies with their
priorities addressing that. Andso how that how that ends up
rolling out in terms of thediscrete, you know, future
(30:21):
executive orders and policystatements that come out and how
that funding is allocated willbe interesting to see in the
years ahead. I will also say itwill be interesting to see how
this rolls out into a nextadministration down the road,
whether that administration bedemocratic, a democrat or
(30:41):
republican administration. So
Daniel (30:43):
Alright, Chris. We are
on to the third pillar. The
third pillar is lead ininternational AI diplomacy and
security. That is the thirdpillar of the America's AI
action plan from the WhiteHouse. So lead in international
AI diplomacy and security.
(31:04):
There are a few things underthis pillar here. There's
discussion of exporting AmericanAI to allies and partners.
There's an explicit call out ofcountering Chinese influence and
strengthening AI compute exportcontrol enforcement, which I
(31:27):
think maybe is geared towardssome of that Yeah. How does
NVIDIA stuff get over to Indeed.That's right.
Yeah. Actually, there's two. Sothere's strengthen AI compute
export control enforcement,which was what I was talking
about. Then there's anotherpoint, which it's a double
(31:49):
click, plug loopholes inexisting semiconductor
manufacturing export controlsand align protection measures
globally, ensure that the USgovernment is at the forefront,
invest in biosecurity, whichmaybe we can talk about that one
here in a sec, why might that beinteresting? But yeah, any sort
(32:11):
of first reactions to this setof things?
Chris (32:15):
Well, have certainly seen
current domestic politics in the
previous pillars, this is,without a doubt, the most
explicitly political pillar,with turning to foreign policy,
that we have seen in thedocument. And pretty much as you
read through each of the variouspoints it makes and the
(32:36):
underlying suggested,recommendations, you you
definitely find it reflecting,policy. So, you know, once
again, this this is very much apolitical and very much a a
turning toward national securityconcerns in this. It really
(32:57):
doubles down on currentadministration policies toward
China. And, you know, and as,know, in that world, China is
considered to be a peer or nearpeer adversary and different
administrations have addressedthat with different sets of
policies.
And and right now, it's thecurrent administration policy is
(33:20):
fairly aggressive in that way.And and we're seeing that if you
look at the news right now. Youmentioned the NVIDIA, the
concern in terms of exportcontrol. And China recently, in
the last few days, hasinstructed its AI companies,
know, I paraphrase, thou shaltseek GPU technologies
(33:42):
domestically rather than goingto The United States. So we're
seeing it turning around likethat.
So as they tighten controls,whether and you can make pros
and cons in terms of argumentsthere. We're certainly seeing
China turn around with its ownset of policies and approaches
on that. So only history willtell us whether or not either
(34:04):
side is making the best policychoices currently. But, yeah, it
this document definitely doublesdown on on current American
foreign policy approach.
Daniel (34:15):
It's interesting. I
don't know if when we would have
started this podcast, if I wouldhave thought that GPUs would be
kind of this thing that would beused as a kind of export and
support for political allies,which is definitely kind of what
(34:38):
it Even before this AI actionplan, right? This is sort of, in
a sense, a kind of new armsrace. Absolutely. Kind of have,
in a sense, whole market, almostlike an arms market, but with
semiconductors and GPU cards,which is very interesting.
(35:01):
Again, something we probablycouldn't have expected some
number of years ago, but is kindof a reality.
Chris (35:09):
I mean, it it and I've
heard many people, you know,
from across many companies andacross many industries talk
about this as kind of a a sortof like a cold war esque
approach, you know, where in themodern age, it's all around AI
and the technologies that areassociated with and support that
capability. Regardless of ofwhich side of the aisle someone
(35:34):
is on, AI is a is is dependingon who you're talking to often
referred to as the mostimportant concern within the
American military establishment.And that's not surprising. We're
seeing that across others aswell. So it really comes down
to, you know, how how you'regoing to affect your national
(35:55):
intent, if you will, on, on, onaddressing that.
But yes, GPUs models, all thesedifferent concerns have become
pawns us, you know, kind ofglobal economic and military
pawns for national policy, orinternational policy around the
world now.
Daniel (36:15):
Yeah. Yeah. It's it's
interesting. And, yeah, I know,
I'm in Lafayette, Indiana. We'vegot a big semiconductor plant,
SK Hynix, I believe it is,coming in north of town.
And, certainly you've seen, alot of what Intel has done over
(36:38):
the past years with plants outin Ohio. It will be interesting
to see, but that's all yeah. Itwill be interesting to see
because as you mentioned, thissort of onshoring of
semiconductor manufacturing isreally a complicated problem to
solve. I can't speak to it withall expertise, but any supply
(37:05):
chain, especially at that level,extremely complicated. And so
just kind of this idea ofonshoring is definitely
difficult.
I think that's partially alsowhy they mention working with
allies and that sort of thing,because some sort of solution to
(37:25):
onshoring and protecting thekind of supply chain of GPUs,
for example, is going tonecessarily involve multiple
nations. And so, yeah, it'll beinteresting to see how that
plays out. I do see there's kindof certain themes as well that
are kind of cross cutting acrossa lot of the pillars. One maybe
(37:47):
to call out, which is not reallyyou've mentioned a couple of
these that are maybe morepoliticized. One cross cutting
thing that's interesting thatthey talk about is AI improving
the lives of Americans bycomplementing their work, not
replacing it, which again isvague but is certainly a theme
(38:09):
that we have promoted on thisshow.
So that's kind of encouraging.How it's interpreted might be
everyone might interpret thatdifferent and how compliment
what does it mean to compliment?But that did seem I I was just
gonna call out that as one kindof cross cutting thing.
Chris (38:28):
There's an irony there in
that, you know, what we're
actually seeing in the jobmarket right now is it's really
tough to come into the jobmarket, especially if you are a
junior worker in the space. Andso because, know, the models
that are out there, both Gen AIand others are in a lot of cases
(38:49):
replacing junior positionsaltogether and not being
backfilled on that. And that'smade the job market really tough
for junior level people inparticular, but but that extends
across the entire space. And soif you're going to, know,
there's you're there's a littlebit of, of which way you're
gonna go. I mean, you're, youknow, if a company, out there
(39:11):
who's just focused on the bottomline can avoid paying a bunch of
compensation by putting somemodels in place.
We are seeing that that ishappening in real life. And yet,
if you're going to say we don'twant AI to replace workers, we
want it to supplement them, thenyou're going to have to provide
some form of incentive to makethat happen. And that incentive
(39:34):
could take a number of differentforms, but likely there would be
some sort of regulation in termsof what can occur in that. And
yet, this administration is veryfocused on rolling back
regulation for AI adoption. Sothere's a place there where you
have to find some sort ofbalance and that isn't even
(39:54):
recognized in this document.
Each of the points is standingalone. They're not recognizing
that there's conflict within thedocument and definitely not
addressing how they mightapproach that conflict to yield
real life outcomes that arebetter across the board.
Daniel (40:11):
Yeah. Well, I think kind
of on that note, just getting
kind of to the end here andsummarizing, maybe just there's
some questions that I thinkanyone that sees this document
is left to wrestle with thatwe're not able to completely
(40:31):
answer here. But I think forlisteners, these are really
valid questions that I think youcan be thinking about in your
own context. And the first ofthose I would highlight is just
how are you going to balanceinnovation with the safety
element in your context, in yourorganization? Because ultimately
(40:52):
that kind of is being, evenright now, it's driven by your
own choices to wrestle with thatand the implications of that not
coming down from regulation.
It seems like that won't becoming. So that is something
that you're going to have tocontinue to balance and think
about is that kind of innovationand safety element. I think
(41:15):
other pieces of this areobviously related to kind of AI
dominance and geopolitical riskand all of those things. You
might enjoy thinking aboutthose. But the major thing that
I'm left with here is it's on uspractitioners and leaders in
companies and organizations toreally think about this balance
(41:36):
of innovation and safety.
We can lead out with that ingood ways that are consistent
with this AI action plan andwill carry through with benefit,
whether it's this administrationor another administration, in
terms of best practices. Soyeah, that's my main thought
(41:56):
leaving this. Any closingthoughts from your end, Chris?
Chris (42:00):
I subscribe to what you
just said. I think you I think
that's very well said. Thatrepresents where I come from as
well. I'm hoping that that overtime, regardless of the specific
policies of a givenadministration, that maybe we
can arrive at some policies thatare maybe a little bit less
(42:21):
political and and kind ofsomething that everybody on both
sides of the aisle could feelgood about. I think it may take
a little time to get there, butI I would be remiss if I didn't
express that hope for foreventuality.
So, yes, we'll see what comesnext, and we'll see how this all
(42:42):
plays out in real life with realbudgets and real policies to
come.
Daniel (42:46):
Well said, Chris. And
and quoting from America's AI
action plan to leave ourlisteners, simply put, we need
to build, baby, build. So, youknow, practitioners out there,
we've got we've got some work todo. Thanks for thanks for
chatting today, Chris. It's beenfun.
Good. Yep. Absolutely. Thanks alot, Daniel.
Jerod (43:14):
All right. That's our
show for this week. If you
haven't checked out our website,head to practicalai.fm, and be
sure to connect with us onLinkedIn, X, or Blue Sky. You'll
see us posting insights relatedto the latest AI developments,
and we would love for you tojoin the conversation. Thanks to
our partner Prediction Guard forproviding operational support
for the show.
Check them out atpredictionguard dot com. Also,
(43:37):
thanks to Breakmaster Cylinderfor the beats and to you for
listening. That's all for now,but you'll hear from us again
next week.