Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're
(00:24):
in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Now, onto the show.
Daniel (00:49):
Welcome to another episode of the Practical AI podcast. In this Fully Connected episode, where it's just Chris and I, we'll dig a little bit into the latest news related to AI, and hopefully share a few of our opinions and maybe some
(01:11):
little nuggets that will help you level up your machine learning or AI game. I'm Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris (01:23):
I am doing splendid
today, Daniel. How's it going
with you?
Daniel (01:26):
It's going great. It's been a productive week. We've had a customer visit and lots of interesting development stuff. And last week I did a workshop. So lots of hearing all the different kinds of archetypal use cases that people are working on.
Lots of interesting questions. I think we're also in a hiring
(01:51):
phase right now, which means I am thinking a lot about what positions to hire in, especially in light of all the AI stuff that is happening and everyone saying, oh, maybe you should consider whether AI could do this task before you
(02:13):
hire someone in to do that. Are those discussions happening in your circles as well?
Chris (02:18):
Oh, they are. And a lot of people are asking me questions like this, increasingly. And it's funny that you bring that up. I was doing another podcast as a guest, and some of those questions came up there, asking about how we saw that.
I think things are moving so fast. You know, we've gone through
(02:41):
several years of the big LLM experience, and now the agentic world, followed rapidly by the physical one. And people are really paying attention to how their lives are starting to be impacted by this, with jobs being right at the top of that list, and, you know, concerns about what it is, and what should I do, and will I be replaced? And it's a mixture of
(03:02):
excitement and fear out there.
Daniel (03:03):
I know that you can't necessarily always share things from your immediate, actual day-to-day job. But let's say, Chris Benson, in a fictitious world (well, I guess you also have the nonprofit side of what you're doing), as you would be thinking about it in this
(03:24):
context, in the world we live in now, knowing not the promise of what AI might do next year, but just based on capabilities now and the tools that you're seeing out there, and you are building maybe a small team for a new company: what do you think might
(03:48):
be handed to AI that may have traditionally been a full-time or a part-time hire? What is your thought?
Chris (03:57):
Yeah, the sense I'm getting from my interactions with people and what I'm observing is that companies are still using AI mostly as a tool to empower people, though that might mean that there are fewer people in a given function than there might have been before. There are some really obvious examples that we've all seen, like
(04:19):
marketing material and using AI to generate that. So I'm not seeing AI completely replace lots of positions right now. There's probably some here and there.
But I think companies are looking for efficiencies. And if a marketing department could be
(04:39):
two people with AI tools instead of five people in a company, I think that's a really common approach that you're seeing right now, today and over the last couple of years. And I think there are many, many instances of that: small efficiencies, small functions, task-level things that are being offloaded, but which by doing that reduces total headcount to
(05:03):
some degree.
Daniel (05:04):
And I guess there are things that maybe people can do faster. So to your point, there are maybe fewer of a particular type of position that is needed, whether that's marketing or even development or other things. I also find it
(05:27):
interesting that there may be things that people in a certain position can now do that they couldn't do before, where they might have had to hire an outside agency or services organization to do it, which is an intriguing part of this. So just thinking of advertising, PR, market research, those of course
(05:53):
are things that you mentioned, but there's also prototyping new software, you know, a proof of concept that maybe you would have had to engage an external consulting firm or software dev shop to do before, that maybe you, even
(06:17):
with fewer years of development experience or less exposure, could knock out internally.
What's your thought on that interplay with service providers, and how this world may impact the service provider side of things?
Chris (06:34):
So yeah, with service providers, I think it's an interesting story. I don't bring it up very often in our episodes, but before I was in the defense world, I spent a dozen years in the digital marketing world, as a CTO and in other technical positions. And I think that it's really tough with these new AI tools to
(06:56):
be in that industry at this point, because there's so much capability. We've commented many times that it used to be, once upon a time, we would say AI would take care of all the grunge work, leaving humans for the creative work, but we've actually kind of seen the opposite in some cases. I think of the creative aspect of some service industries like that, compared to AI tools,
(07:19):
which may or may not be able to do it at the same level, but they might be able to do it good enough. You know, in software development, we often talk about good enough.
And you mentioned prototyping a moment ago. And I think, with the notion of vibe coding, making at least simple coding efforts, for things like prototyping, accessible to people who historically would not
(07:42):
have been able to do that is also a bit of a game changer. And that means that's another area where hiring externally, or hiring that function into your organization, is also reducing, because you don't need that necessarily just to get a prototype. It may be different for writing production systems. And even then, AI tools, as we know, have had
(08:05):
an impact.
So I think the impact on that can be debated on whether it's positive or negative, depending on who you talk to. But these tools are definitely changing many industries right now. They may not be completely knocking out an industry, but they are impacting it in terms of its viability from a revenue standpoint. So yeah, it's a concern. And I think most people, well, most people that are
(08:29):
AI-cognizant on a day-to-day basis, start off thinking: what can I do now with these new AI services, and the models that we might host ourselves at our company, that maybe I couldn't have done before, and how can that change my cost structure?
Daniel (08:50):
Yeah. I'm really intrigued by this service provider side of things, actually, because on the one side, there are maybe traditional things that people would have gone out and hired an agency to do. So maybe that's a new website for your business, right? Or a simple website for your business, or some type of marketing assets, or
(09:16):
maybe a prototype of a project, like a SaaS product, or, I don't know, a bunch of different things. Those things seem to be good targets for either vibe coding or wrapper-type platforms around certain AI systems.
So, you know, for the very first one of
(09:40):
our websites that we made for Prediction Guard, I just used one of those sites where it's like, create your website with a prompt. You just go in and prompt the thing: I want a website for this thing. And it kind of generates it, and you move around a couple of things, and there you go. So I think for those types of things, the sort of non-AI things that are
(10:02):
generated by AI, maybe there's less of a need to go out and hire an agency or a services company. At the same time, I do think it's interesting that if you look at building AI things for your company, so let's say you are a mid- or large-size company and you want to create
(10:27):
an AI tool for your company, right?
The talent is so scarce for that sort of thing. I mean, if you look at things as a whole, OpenAI or companies like that are definitely not the companies making the most money
(10:47):
off of AI. OpenAI is losing a lot of money. The companies that are making a lot of money off of AI are Deloitte, Accenture, McKinsey. Yes. Right?
Those are the companies absolutely raking it in off of AI, right? Because there are all of these companies that say,
(11:08):
well, we want to transform our company with AI, so you need sort of change management, and you need to create these prototypes, and maybe these AI tools that you're going to deploy across your company. You need to take care of the security and privacy concerns. You need to have a strategy around it, a roadmap, all of this stuff. And that is all very much
(11:30):
tailored to what these services organizations do really well, right?
And I think even smaller MSP-type companies that have a book of business in healthcare or other industries where the impact of AI is clear, they have a big win to be had in this
(11:53):
area too. So I find it really interesting on the services side, where some things maybe are getting a little bit eaten up by AI, but the actual AI stuff, the AI transformation, is actually very ripe for services organizations. It's interesting.
Chris (12:12):
It is. I mean, I guess people are looking for guidance. They don't understand how, and there is of course this whole industry of consultants who are happy to tell you they have it all figured out.
Daniel (12:26):
So, and I'm sure that in much of that they are very much helping people, but they are well positioned to do that. And that is their business model, right?
Chris (12:36):
Indeed it is. I did a
brief stint at Accenture as
well. I can attest to that.
Daniel (12:44):
Yeah, yeah. So maybe it is a bit of a mixed picture, where if you're an MSP out there, maybe you're thinking that some things are going away, but there is maybe a big opportunity for you to go into this sort of set of offerings around AI services.
Sponsor (13:09):
Well, friends, if you want to build the future of multi-agent software, check out AGNTCY. That's A-G-N-T-C-Y. It's an open source collective building the Internet of Agents. It is a global collaboration layer where AI agents can discover, connect,
(13:30):
and work across frameworks. That means better tools for you: standardized agent discovery, seamless inter-agent communication, and modular components to scale your multi-agent workflows.
And they're teaming up with CrewAI, LangChain, Cisco, and many more, dropping real code, specs, and services with no strings attached. Start building alongside engineers who care about high-quality multi-agent software. Learn more at
(13:52):
agntcy.org. Again, that's A-G-N-T-C-Y dot org. That's agntcy.org.
Daniel (14:03):
Well, Chris, we started talking a little bit about the influence of agents and that sort of thing on the workforce. There was a very interesting story that came out that intrigued me related to this, because basically everyone's saying, you know, 60% or 80%, or whatever percentage they're giving,
(14:30):
of development work is now going to go to AI systems. And it was very interesting because there was a story that came out. I mean, it was reported by multiple folks, I think. But this one I'm looking at is from Fortune, and the title is, "An AI-powered coding tool wiped out a software
(14:52):
company's database, then apologized for a 'catastrophic failure on my part.'"
And I believe it was Replit that is the culprit here, which is a really amazing tool. I think there have been apologies, and, of course, they're handling this in one way or another. But
(15:14):
I'm sure this is not the only instance of this across these kinds of tools. It's also maybe a bit of a wake-up call for some people, to understand that maybe these tools are moving at a little bit of a faster pace than what might
(15:35):
be reasonable for companies to support, just operationally and permissions-wise and at scale, etcetera, etcetera. Any thoughts?
Chris (15:49):
My first thought was, you know, lack of guardrails, lack of consideration on that. And I don't know the details any more than you do, other than what was in the article. And maybe part of my perspective is biased, because the industry I'm in is defense. We're really, really cognizant of safety measures and
(16:12):
guardrails and such in our industry, because obviously it can turn into a problem if you don't have that. But I think other industries sometimes may need to stop and think. And I get that the marketplace is moving really fast, and there are competitors that will eat your lunch quickly, and you want to get to market with the best thing.
But at this point, you know, we're still in fairly early
(16:35):
stages of agentic AI, and a lot of frameworks are still under development. And so I think it's a good thing to ask yourself what happens when things don't work out per your expectations. What are the guardrails for when things really go bad? And maybe think,
(16:56):
a little bit old-fashioned, in terms of what you can do for data redundancy and backup, so that if your AI experiment or effort does go off the guardrails, you have a guarantee that you don't have real damage done in a
(17:18):
large sense. After all, this tool apparently deleted something that it arguably should have had no permissions to touch. So there are probably some mitigating factors there that I'm not aware of.
So I'm not trying to lay into the company, but I would just urge people to think about worst-case scenarios, and maybe spend a little time architecting for that. And then go do some cool stuff, but make sure you can
(17:39):
fall back without blowing your company up.
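To make that last point concrete, here is a minimal sketch (our illustration, not anything the company in the story actually did), using Python's built-in sqlite3 as a hypothetical stand-in for a database an agent touches: snapshot first, then hand the agent a least-privilege, read-only connection.

```python
import sqlite3
from datetime import datetime, timezone

DB_PATH = "app.db"  # hypothetical database an AI agent will work against

def snapshot(db_path: str) -> str:
    """Back up the database before the agent runs, so a mishap is recoverable."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    backup_path = f"{db_path}.{stamp}.bak"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)  # SQLite's online backup API: a consistent copy
    src.close()
    dst.close()
    return backup_path

def agent_connection(db_path: str) -> sqlite3.Connection:
    """Least privilege: read-only connection, so DROP/DELETE from the agent fails."""
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

backup_file = snapshot(DB_PATH)
conn = agent_connection(DB_PATH)
# Any write the agent attempts on `conn` now raises sqlite3.OperationalError
# instead of destroying production data.
```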
Daniel (17:42):
Good point. One of the things I've been wondering about around this is that there are all of these tools coming out: vibe coding tools, vibe marketing tools, vibe design tools, or even HR, hiring, recruiting, etcetera, name any
(18:05):
function. There's a way to do it with prompts or these sorts of tools now. How do you think this influences the type of both education and professional development that people need to have coming into a job? Because obviously, and
(18:29):
again, we only know what we know about this one instance of the database being dropped.
But I'm guessing, and people can correct us on social media if I'm saying something wrong here, that in a
(18:49):
scenario like this, regardless, there's probably some of this that's on the company using the tool, and, like you say, the trust and the guardrails that are put in place around how things are pushed to production and how they're iterating on their platform or code or that sort of thing. And
(19:11):
there's part of it that's probably on the platform itself, and how it's architected, and failure modes and safeguards and that sort of thing. But part of it, I think, is: should we train our new workforce such that they're ready to team with AI systems in any role, from technical
(19:37):
to non-technical roles? And how should people in existing roles that maybe haven't spent their career using these tools professionally develop themselves to be proficient in this new space?
Any thoughts?
Chris (19:56):
Yeah, it's something I think about quite a lot, on several fronts. There's my own career, where we're working through things and things are changing constantly. And I also have a 13-year-old daughter who's going into eighth grade; she'll be in high school soon and then into college. And, you know, we are living in an age of experimentation
(20:18):
right now, because these technologies are coming so fast that the nature of what it means to use them from one year to the next is very different. You know?
2023 to 2024 to this year have all been distinctive in terms of how these tools are able to affect our jobs. So I think the first thing I would say is: accept it, accept the change.
(20:42):
And I see a lot of people who are highly technical, developers and things like that, who probably should know better, kind of resisting all that. And so I think the first thing is to say, this is happening, and it's not going away. And then the second thing, on individual roles, on individual jobs, is to try to imagine where things might go,
(21:06):
given the tools that you're seeing today.
What do you think might change incrementally tomorrow? And what kind of trends are you seeing in your particular industry? Go ahead and get ahead of that curve. And I think that's really valuable. I think it's hard to do.
So it's very easy for me to go make a recommendation like that. But seeing the future is not easy.
(21:27):
So I think my recommendation is: given that this technology is only going to continue to grow and change things, accept that your job is changing, accept that the vision that you had yesterday is not going to be what happens tomorrow. And so try to track what you're doing, embrace the tools, and figure out what's changing in your job. How
(21:49):
can you make it a net positive instead of a net negative?
And those are very generic, but I think it's a very job-specific set of tasks that you have to do in terms of that. And I think that starts with education. It's one thing to be doing it on the job, like we are in the middle of a career. But how do you train today's youth to do that?
(22:11):
They're really good with these technologies.
But they're not the ones setting the curriculum. It's us older people that are setting the curriculum. And I don't think we're doing them any favors right now. School curricula are definitely not set up for success yet.
Daniel (22:26):
How far behind are we in general, do you think, on that front? Let's say college grads coming out into the workforce, in any field: accounting, marketing, communications, business,
(22:47):
whatever it is. How far off do you think we are in terms of the realities of what day-to-day work might look like with these tools versus how that's being represented in the classroom? Any insights?
Chris (23:03):
Yeah, I think look to how people are doing it in a hobby sense, because people who are interested in adopting these technologies, we've seen this for, I mean, this is not new, this is for many, many decades: companies tend to adopt what their developers go home at night and play with and are interested in, because it's the
(23:23):
cool thing to do. And I think you could expand that across a lot of different fields and industries. What do people do when they're doing it on their own time, because there's this new thing out there?
And if companies were smart, they'd pay attention to that. If universities were smart, they'd pay attention to that, and track fairly early what the maker space or the hobby
(23:45):
space is doing, and build curriculum around that. Because right now, as we sit here today, at most schools, the professors are trying to find ways to keep students from using models that are publicly available to do their homework and stuff. And every time I hear an academic say something like, how do I keep my
(24:08):
students from using ChatGPT to cheat on their homework, I'm like, well, you're teaching the wrong thing, then.
Because that model is never going away. So if you're teaching something where that's causing you a problem, then your whole curriculum is off track. And I realize that may not be within a specific teacher's ability, to change the overall curriculum, but it's certainly university and
(24:30):
school leadership, school system leadership, that should be doing that. So there's a big correction that needs to happen in education, and companies would be very smart if they could also track that very early and go ahead and make adjustments earlier rather than later.
Daniel (24:47):
Yeah. And maybe part of the struggle here, and I may just be playing devil's advocate a little bit: as everyone would recognize, there's only so fast that education can move, just because of bureaucracy and other things. But putting those things aside,
(25:08):
this is also true probably for companies, where you look out around the landscape of AI tools that can affect my job, and you look at the snapshot today, and then you look at the snapshot next week, and it almost seems like it's totally different. There are so many things coming out, and I can put
(25:30):
in the investment to learn this thing, and then next week it seems totally useless, or could seem totally useless.
How does someone out there that's maybe listening to this navigate that kind of whirlwind of releases?
Chris (25:52):
So I don't plant an anchor in any one thing as a long-term bet. I think to win these days, you have to embrace that learning mindset. And it's very cliche to say that, but I think you have to do it in your day-to-day actions. You know, I'm at an age, I'm 54
(26:12):
right now, closing in on 55.
And a lot of my peers in terms of age have already just kind of given up. They've already decided what they think their worldview is, and they've stopped and planted right there. And I think that's exactly the wrong thing to do. I think the world is changing fast, and to continue to be successful and happy in such a world, you have to roll with the world on a day-to-day
(26:35):
basis. And you may invest in something today based on what you think tomorrow looks like, and you may realize in a few months that that's not the way things are going, and you should be ready to pivot again. And so there is a never-ending series of pivots or adjustments that you need to be making on an
(26:57):
ongoing basis for the rest of your life, all the way till your dying day.
And if you ever quit doing that, you're putting yourself in a worse position. So that learning mindset is an action-based activity. It's something that you're literally doing all the time. And I think people who can do that will tend to be more successful and happier in what they do than people who
(27:18):
just resist and resist and plant flags. So that would be, at 54, that's what I think I've learned.
Daniel (27:25):
Well, I definitely agree. And there's a whole element of this that really has nothing to do with AI, which is also kind of based in neuroscience. I've read a lot about dementia and these sorts of things, as that has occurred in my immediate
(27:47):
family. And one of the things that a lot of neuroscientists talk about is that the act of always trying to learn a new thing is one of the best things that you can do. Not that it's a guarantee, but it's definitely positively correlated with good outcomes long term, in
(28:11):
terms of preventing dementia or other sorts of brain conditions. So maybe that's a good sign, regardless of whether it's AI or not.
Chris (28:23):
That's right. I mean, there's a mental health benefit to doing it, and there's a career benefit. So just love learning, and pick lots of different stuff to try out. Some of it's little things like puzzles, and some of it is learning a whole new thing that you've never touched before.
Daniel (28:40):
Well, Chris, we talked a little bit, I guess, more generally about the impact of AI on the workforce, and maybe professional development and education. There's an interesting side of this too, which is the actual workforce that is developing the AI things. I think we've
(29:06):
kind of started to adopt this term "AI engineer," thanks in a lot of ways to Latent Space and our friends over there, swyx and others, from the AI Engineer Summit and those types of things that have put this term
(29:28):
out there. But whether it's AI engineer or data scientist or machine learning ops or whatever it is, this sort of workforce that's actually supporting the build-out of AI, both the infrastructure and the application layer, that is driving a lot of these transformative tools. I think we would all recognize,
(29:48):
well, maybe our listeners would recognize, and I know this directly because I talk to our customers, many of which are trying to hire this sort of talent into their companies, that hiring for it is very difficult right now. There are a lot of positions available.
(30:09):
It almost reminds me a lot of back in the day when people were trying to find the unicorn data scientists, right, that sort of didn't exist, maybe back in the 2011 to 2015 time period. Now it's expanded, because the impact of this technology is just so much more,
(30:32):
but everybody's looking for AI engineers and talent related to this wave of generative AI technologies, and having trouble finding them. Yeah, I don't know. Is this also a trend that you're observing?
Chris (30:49):
Yeah, I think so. And not only is it a trend, but it kind of points to the inequality in the larger space, in terms of what compensation would be for that, the value in different industries. And so AI engineers, and specifically
(31:12):
research scientists that are developing new model architectures and things like that, can get some awfully big compensation. And yet there are many fields out there, if you're not able to do those kinds of skills, that are, as we talked about at the beginning of the conversation, decreasing. And I think that's another thing
(31:35):
that people really need to consider: do you expect the track you're on to be in a good position two, five years out from now, based on how you see things today? And that's another reason, even in a late career stage, to maybe consider making changes. Maybe it's time to make
(31:58):
adjustments, because, yeah, AI engineers are able to basically write their own paychecks in a lot of cases, while a lot of other industries are seeing decreasing compensation for those particular jobs.
Daniel (32:10):
Yeah. One data point on this, which is kind of interesting, is news that came out a little bit earlier in July, where one of Apple's top AI folks, Ruoming Pang, was poached by Meta. And I don't know how accurate this is, I haven't done the digging, but the articles are saying a
(32:33):
total compensation package over X number of years of, like, 200,000,000 or something. I guess the way you could think about this is that these top AI people are being swapped around almost like top athletes in a professional sport with big contracts, or the best F1 drivers going from this team
(32:56):
to that team and signing a big contract or whatever.
It almost seems like we're living in that world. Yeah. I mean
Chris (33:06):
And we are.
Daniel (33:07):
Yeah. Yeah. I'm very happy to be building what we're building with my team, but it also makes me think in the back of my mind, whoa, we've got some great talent here. I hope no one offers them
(33:28):
200,000,000, because I can't match it. There's a market where you can just... I also wonder about retention of some of these folks, where you finally find an AI engineer that's working well for you, but they could go out and demand twice as much somewhere else.
Chris (33:48):
They could. So there's two points there. One is, this story doesn't really stand alone. There have been a number of very similar stories with Meta. Meta has been hiring...
Daniel (34:00):
I think Netflix too, and
some other companies. Yeah.
Chris (34:04):
Yeah. But Meta in particular has been known, the last couple of months, for pulling people from OpenAI, Anthropic, Google, and now Apple. And the sports analogy you made was right on. They're weaponizing that hiring process to try to get the right talent, and they're gambling a lot of money that they're going to be able to
(34:27):
put together a superstar team that will outpace the market. But that's also at that extreme high end where we're seeing that.
In the short term, I think we're seeing compensation go up for these skills, because we're still in very early stages of people setting up
(34:47):
truly mature AI infrastructure and stuff. But I think over time, just like we've seen in software development over the decades, and in other industries with technology, because there's more compensation there, more people will gravitate to those skills. You will have a bell curve of capability in terms of the candidates that are
(35:09):
out there. And generally, compensation will fall over time, because those skills will become much more commonly available, because that's where the money is.
Daniel (35:19):
And then people will get
a lot of money for being AGI
engineers?
Chris (35:23):
Yeah, you know, that's coming. We're gonna see, by 2026, 2027, we're gonna be talking about AGI research engineers and stuff like that. That'll be the new title coming. Folks, you heard it here first. Right out of Daniel's mouth.
Daniel (35:38):
I'm gonna start an AGI Engineer Summit and just start the trend.
Chris (35:45):
I mean, you'll notice that that's all Sam Altman and OpenAI talk about all the time. And so does Mark Zuckerberg with Meta. They're just talking AGI, AGI, AGI, but no one's doing that. Yeah. Like, they may be doing research, but they haven't gotten there yet.
But it's great marketing, and it's driving the compensation spectrum. But that will mature, and the AI
(36:07):
tools themselves will become better suited for creating infrastructure and setting up stuff, just as we were talking about earlier with other areas. So we're in a bit of a bubble in that area. It will eventually pop.
Daniel (36:22):
One interesting thing: we had a previous guest, one of my friends from the Intel Ignite program, Ramin, on. He also has a job as an AI engineer, but also teaches at Northeastern. And I was just talking to him, just as a sort of interesting data point. We need to have him back on the
(36:44):
show to talk more about this, but he was mentioning that he teaches a couple of different courses. One of them is more the theory behind generative AI, more of, I guess, the transformer architecture, just the bones and underpinnings of all of those things.
(37:07):
And then a different one that's like a machine learning ops, or AI ops, type of course. And he's clearly seen a shift over the last few times that he's taught these, where in the beginning, everyone wanted to be in the generative
(37:32):
AI theory course. And that's basically flipped to everyone wanting to be in the ops course. And at least his quick reflection on that was that people sort of assume now that, well, you don't really need to know the theoretical underpinnings of any of these things. It's all about how you operationalize the technology, and put together all the
(37:53):
plumbing, and integrate the data, and scale it up, and that sort of thing. So I'm not saying that's the correct perception necessarily, but I think it represents an interesting shift in how people are thinking about what is an AI engineer, or what is an AI position, in maybe an AI or non-AI company.
(38:19):
There may be a trend, and that's only one data point, so I can't speak more generally, but there may be a trend where, like when we say AI engineer, kind of like when we said data scientist, there's sort of an assumption that you know statistics at some level, you know some of the
(38:39):
underpinnings of these models, maybe how to do evaluations, this sort of thing. The more technical underpinnings of that, whereas maybe some data scientists were just really good at using gradient boosting machines and using all the tools. So there may be this interesting dynamic now where
(39:04):
there is such good tooling around AI that it's almost like it's the engineering of the tools, not the engineering of the AI, that people are interested in. So not the actual model, or the underpinnings, or the theoretical understanding, but, in a sense, how to engineer at this more abstract layer and
(39:27):
connect up all the plumbing and ops around it. It's very interesting.
Chris (39:31):
Yeah. I can understand that. And there is a bell curve of skill and capability that people will bring to such positions. But I would encourage people to do both. Don't do one or the other.
Do both.
Daniel (39:44):
Take both of Ramin's
courses, please.
Chris (39:47):
There you go. It'll make you better. Never stop learning. So yeah, when I hear people skipping over all the theoretical stuff, I get that you might have a practical intent in mind, but there's a point where a certain amount of theoretical knowledge helps you do a practical job better than you could otherwise do it.
So yeah, yeah.
Daniel (40:08):
Yeah, I always like this idea of having a mental model, or having at least intuition, around how these things work under the hood. We've talked about this multiple times on this show. I think there are a lot of people who use these tools who don't understand that
(40:29):
when text is streamed onto a screen, or you're getting a streamed response back from an LLM, every token is an operation of the model. The model produces that output, and then it cycles around and you run the model again. And that actually has a number of implications that help you understand a lot of things. Well, that makes sense why
(40:52):
closed model providers charge by the token output, because that's connected to compute, right?
Or, if you host your own model, it means that throughput, or streaming, or the time that you need for a response, is tied to how much text you're outputting. And that has implications for how you, quote, "AI engineer" your solution and
(41:13):
put it together, especially if you're concerned about latency or user experience or other things. So there are just so many trickle-down things, even just from that one example of how mechanically a model works.
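As a purely illustrative aside (a toy sketch, not any real model's API): the loop Daniel describes looks roughly like this, with one model call per streamed token, which is why output length maps directly to compute, billing, and latency.

```python
import random

EOS = 0  # toy end-of-sequence token id

def toy_model_step(tokens: list[int]) -> int:
    """Stand-in for one full forward pass; in a real LLM this is where the compute goes."""
    random.seed(sum(tokens))       # deterministic toy behavior for the demo
    return random.randint(0, 99)   # pretend this is argmax over the vocabulary

def generate(prompt_tokens: list[int], max_new_tokens: int = 8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = toy_model_step(tokens)  # one model call per output token
        tokens.append(next_token)
        yield next_token                     # this is what "streaming" emits
        if next_token == EOS:
            break

# Each yielded token cost one model call, so providers billing per output
# token are billing roughly per forward pass.
print(list(generate([42, 7, 13])))
```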
Chris (41:28):
I think you're spot on right there. I think that was well said.
Daniel (41:33):
Yeah. Well, I guess we've talked about workforce, we've talked about education, we've talked about hiring and how jobs are shifting. As we close out here, how do you feel about your own position, and what are you thinking about, Chris, as
(41:53):
you go to the next phase of what you're involved with?
Chris (41:57):
So I am trying very hard to follow my own advice from a few minutes ago, which is, I try not to let a single day go by where I don't do a little bit of self-analysis, a little bit of introspection, look at what I'm being asked to do and how I might change that. Almost every month, I gain a little
(42:18):
insight into what I need to do to adjust. Agentic AI here in 2025 is changing how I'm putting solutions together versus how I did it in 2024. And it will be different again in 2026, and I already know that. And so I think you can't ever stop evaluating what your present and future
(42:43):
look like, and never put a stake in the ground on how you're doing things that might hold you up tomorrow.
And I think that's general enough to say that it could be applied across almost any industry. So don't find yourself stuck in the mud. Don't be the old codger that I am always trying to keep myself from
(43:05):
being. Stay young in your mind and stay agile in your thinking. That's what I would think.
Daniel (43:11):
That's awesome. I think that's a great way to end this discussion. Thanks for digging in with me, Chris. Enjoy the evening and whatever you're gonna learn tonight.
Chris (43:20):
Sounds good. You too,
Daniel. Take care.
Jerod (43:28):
Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
(43:49):
Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.