
September 15, 2025 · 42 mins

The technological landscape is evolving at breakneck speed, and AI stands at the forefront of this transformation. But how can HR professionals and business leaders navigate this new terrain effectively?

Dr. Keith Duggar, CTO of XRAI Glass and co-host of Machine Learning Street Talk, brings clarity to this complex topic through a lens shaped by his fascinating career trajectory. From his roots in chemical engineering to high-frequency trading on Wall Street, then to Microsoft, and now leading an innovative AI startup, Dr. Duggar offers practical wisdom for organizations grappling with AI adoption.

His company, XRAI Glass, emerged from a deeply human need – creating augmented reality subtitles for those with hearing impairments. This mission of "subtitling life" exemplifies how AI can enhance human connection rather than diminish it. Through this work, Dr. Duggar has developed invaluable mental models for understanding large language models that cut through the hype and confusion.

"They're order of magnitude more efficient search engines," Dr. Duggar explains, while cautioning about their limitations, particularly hallucinations – convincingly wrong information that can appear authoritative. His advice? Approach AI as an interactive dialogue, start simple, refine iteratively, and always verify critical information through traditional sources.

Looking ahead, Dr. Duggar envisions a shift toward "constellations of narrow intelligences" rather than ever-larger general models, with specialized AI tools working in concert to solve complex problems. For organizations seeking to harness AI's potential, he recommends practical approaches like hackathons and workshops alongside robust governance frameworks addressing privacy and misinformation risks.

Whether you're an AI skeptic or enthusiast, this conversation offers a balanced perspective on embracing innovation while mitigating risk. Subscribe to the HR Chat Show for more insights on navigating the evolving workplace, and follow Dr. Duggar on LinkedIn or through the Machine Learning Street Talk Discord to continue exploring the frontiers of AI.


Feature Your Brand on the HRchat Podcast

The HRchat Show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, talent execs, and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority, and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts and business leaders.
For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.

Speaker 2 (00:31):
Hello and welcome to the HR Chat Podcast.
I'm Pauline James, founder and CEO of Anchor HR and associate editor of the HR Gazette.
It's my pleasure to be your host.
Along with David Creelman, CEO of Creelman Research, we're partnering with the HR Chat Podcast on a series to help HR professionals and leaders navigate AI's impact on organizations, jobs and people.

Speaker 3 (00:47):
In this episode we speak with Dr Keith Duggar, CTO of XRAI Glass and co-host of Machine Learning Street Talk, one of the world's top AI podcasts.
Keith brings a unique perspective, shaped by his work at Microsoft, on Wall Street, and now at the forefront of AI-powered augmented reality.
Dr Duggar shares how AI is transforming human interaction, what it means to subtitle life, and how organizations can

(01:10):
practically and safely leverage generative AI tools.
He walks us through helpful mental models, the risks of hallucinations and why a thoughtful, hands-on approach is key.

Speaker 2 (01:20):
Dr Keith Duggar, we're so pleased to have this time with you today, and we welcome this conversation to support our community in learning more about AI: how they can leverage it in their day-to-day practices, and also as they look to inform how they scale their approach within their organizations and for themselves personally.

(01:42):
Could you take a few minutes to tell us about your background and your current work?
I understand that you worked in finance.

Speaker 4 (01:51):
Yeah. Well, first, thanks for having me. I appreciate the opportunity to, let's say, extol some of the positive virtues of AI and get people interested in embracing it.
So yeah, I was educated as an engineer, actually as a chemical engineer, but I also minored in computer science and, as chance would have it, all the work I was getting was for

(02:14):
software.
You know, software engineering and applied math and that sort of thing.
So I tried out academia for a couple of years.
I postdoc'd.
It just wasn't really a fit for me, and so, after a bit of soul searching, I had a friend who worked in finance on Wall Street and he said, well, why don't you send your resume to a recruiter?

(02:34):
And he gave me a name.
I sent the resume and a few weeks later I had a job, because there's high demand for that skill set there.
So I think I spent about eight years doing trading. So equity trading, all types of equity and equity derivatives, did that mainly for market making and high frequency, and so, you know, I was one of those evil, you know

(02:55):
quant rocket scientists or whatever that got scapegoated for, you know, the crash in 2008 and all that. Truth be told, it's not our fault; it's the same old story.
The people with all the power, like, you know, the big people, right, the banks, the politicians, all that, they needed a scapegoat.
So that was us. And, um, at some point I just realized that I was

(03:15):
kind of, um, just moving other people's money around and, every time I moved it, taking a little bit, and I just felt like it wasn't really contributing to any tangible, you know, concrete benefit I could perceive.
So I got out of that business and went to Microsoft, actually doing technology strategy.
But the cool thing was it was for Microsoft's manufacturing

(03:38):
customers, so these were our largest, you know, manufacturing customers, which really brought me back to my engineering roots. Loved it, had a great time there.
But then, and I met Tim there, by the way, so my co-host on Machine Learning Street Talk, we met at Microsoft and, by way of him and meeting his brother, we came up with this idea for XRAI Glass, which is the startup that I am now the CTO for. It's

(04:05):
xrai.glass, if anybody wants to check it out.
So we started that up and I went full time there, and that's where I'm at right now.

Speaker 3 (04:08):
And, by the way, XRAI Glass is an AI company; what does it do?

Speaker 4 (04:12):
Yeah. So it's XRAI Glass, which we call, you know, 'X-Ray', but it stands for extended reality, artificial intelligence.
And so the mission started with, actually, Tim's grandfather, who's 90-something years old and has lost his hearing, but he's cognitively fine. And at Christmas one year,

(04:35):
they were seeing what an isolating experience that is, you know, because he doesn't know sign language, he doesn't know how to read lips. And they thought, well, hang on a second, he watches TV all the time with subtitles.
Why can't we subtitle life?
So that was the original mission: can we just subtitle life?
Can we, through these, you know, AR glasses that were just starting to become a thing, can we display in real time

(04:58):
subtitles of what people are saying around you in these glasses, so you can hold your head high, you can look at the people you're talking to, you can engage, you know, but just have this augmentation that makes up for the hearing loss.
That was the original vision, and it's since expanded significantly, really based on, you know, just demand from

(05:20):
customers and companies and enterprises into a lot of other things.
But in essence it's software that in real time transcribes and translates and applies artificial intelligence to speech-to-text and text-to-speech for all kinds of applications.

Speaker 2 (05:39):
What an interesting journey.
You speak to wanting to really add value with the work that you're doing, and how you've been able to do that with the startup.
Can you also tell us a bit about the mission of Machine Learning Street Talk?

Speaker 4 (05:53):
Yeah, so that was a brainchild of Tim.
So when we were at Microsoft, he put together an internal set of paper review calls where folks at Microsoft would get together and go over the latest machine learning and AI papers, and the mission at that time inside Microsoft was let's

(06:13):
just explore and learn what's going on as a team, a small group of people at Microsoft.
He was posting those on the YouTube channel.
They started to gain traction.
He and I met early on and just kind of hit it off.
We're very complementary to each other intellectually, and so it was a really fun and good fit for us, and the mission of

(06:35):
Machine Learning Street Talk really became to explore and talk with people in the field, in the trenches, you know, but in a way that's friendly to a wide audience.
So, it's a difficult balance: we want to both have technical depth but try, if we could, to present it in ways that were

(06:56):
understandable both to deep technical folks as well as hobbyists, enthusiasts, you know, business executives, really anyone.
So that's what we try to do: we try to communicate what's happening in AI and machine learning to a broader audience, but with technical depth.

Speaker 3 (07:16):
And I want to sort of underline, particularly for the HR managers out there: this is a wonderful example of peer learning.
You just get people together and they collectively help drive forward their learning in an area.

Speaker 4 (07:24):
Yeah, yeah.
I mean, so, I think my two kind of, let's say, intellectual passions in life are learning and problem solving, and so, for me, the partnership with Tim and Machine Learning

(07:44):
Street Talk is just, yeah, it's been a godsend.
It's really transformed my life because it makes it so fun to learn, and I get this privilege, right, of talking with leaders in the field and practitioners in the field and everybody, and it's fun.

Speaker 3 (08:00):
Now, the main thing on managers' minds these days is the large language models like ChatGPT, and they need some kind of mental model: if I'm going to have this tool, what is it likely to be good at?
What might it be able to do if I put some effort into fine-tuning the prompts?
What's a waste of time because it just cannot do that?

(08:22):
Do you have a mental model, or can you talk about the different mental models people have to make sense of what this tool is?

Speaker 4 (08:31):
Yeah, I think you probably need a couple of mental models, because they are, in a sense, very general, so you need to think about them in a couple different ways.
The first way to think about them is they are literally language models, and what this means in effect is that they can communicate with you, and you can communicate with them using

(08:54):
natural language, and so they have a good ability to process and digest and produce well-formed natural language, you know, in a variety of languages.
So, whereas, you know, let's say before them, the primary interface to computers would either be, say, a GUI, you know,

(09:17):
some graphical interface, or some type of programming language or domain-specific language or HTML, all this type of stuff which you would have to learn as a person, because you wouldn't start out knowing those.
They're not your first language, they're not your natural language.
So the first thing is a transformation of the interface.
Now, I do want to say up front, and we'll probably get

(09:39):
into more detail about this later, but there are trade-offs, like anything in life.
There's no free lunch.
So by communicating in natural language, you lose the precision and the specificity of kind of those machine languages and programming languages, and so it introduces ambiguity, flexibility and things like that.

(10:01):
So there are trade-offs, there are pros and cons, but as a first pass, it's this language interface, to and from.
The second thing is that, almost, I wouldn't say by accident, but in order to learn language, what they do is they train it on pretty much any language that's available in digital form, so

(10:25):
this is like the entirety of the internet and any other library sources and things like that.
And so, kind of, you know, along the way of learning language they've also ingested just a massive quantity of information from all this language that they've learned, and neural networks just kind of

(10:48):
naturally compress and form structures and representations of this knowledge.
So there's a sense in which they're kind of a massive repository, or compression, if you will, of all the knowledge that they were fed when they were learning language, and so

(11:10):
that allows them to act as really excellent search engines, and not just search engines, but search engines that can first of all understand what you're asking in natural language and then produce back, let's say, results and examples and things like that that are tailor-made to your question,

(11:31):
because it's like they're sort of ad-libbing and putting together all the pieces, you know, to make it exactly what you asked for.
And just an example of how transformative that is: you know, before LLMs, for example, suppose I wanted to learn about, you know, and I'll use programming just because that's what I do from day to day, but, as an example, suppose I wanted to learn how to, you

(11:53):
know, display a dialog box on Android or something like that.
What would I do?
Well, I'd have to go to a search engine like, say, Google, type in some of those keywords, scroll through the results to find an article.
Suppose it's a Medium article.
I'd go read this Medium article.

(12:13):
I'd have to slog through like four paragraphs of the person telling me why this is cool and why I would want to do it, which is unnecessary because I've already decided I want to do it, and then eventually slog through more material to finally get to an example that maybe wasn't exactly what I wanted, it was sort of slightly different, and then I'd have to mentally

(12:33):
transform that myself.
Large language models completely streamline all that into a single query and result, and so they're just an order of magnitude more efficient search engines.
So I'm going to pause here.
There are other mental models, but let's pause on these two and then maybe talk about them a little bit.

Speaker 3 (12:52):
The one thing that I would dig in on a bit is the fact that language is going to be ambiguous.
Sometimes it gives you exactly what you want, and sometimes people get frustrated because, they say, that isn't really what I wanted, and so you have to go down some kind of path of rephrasing the questions. And I've seen people have very long

(13:14):
prompts sometimes, Tim maybe being one of them. But what is your thought about how you interact with it when it doesn't give you what you want?

Speaker 4 (13:25):
So you just hit the key word there, which is interact.
This is an interactive process.
You should always start a, let's call it a dialogue.
You should always start a dialogue with an LLM as an interactive process.
And, you know, keep it simple.
You ask a question, and there are sort of

(13:46):
tricks that you'll learn over time, and I'm hoping one day, you know, there's training on how to do this, but you'll kind of learn tricks of how to phrase things, just like you had to learn how to phrase Google searches to give it a good shot at kind of getting to where you want to go, like, initially.
But keep it simple, you know, keep it concise.
You fire off something.

(14:07):
It gives you an answer.
If it's a bit off, or even far off from what you were looking for, then you iterate this process.
So just keep going with it. Like, say, you know, thanks, but that's not really, uh, what I was looking for.
I'm really more asking about this, right? And so you do this kind of back and forth with it and you can, let's say, triangulate, you know, to where you're trying to

(14:29):
get to.
That's the process you should follow initially.
Now, like you mentioned, Tim with his massive prompts: those come from after you go through this process.
You go through this kind of iterative triangulation.
You get to where you want to go.
You'll learn kind of a prompt that you could have given it at the beginning, almost like, let's say, a composite of all

(14:51):
the prompts that got you to where you wanted to go.
You'll have an idea of a prompt that you could have given it to really get directly to this answer.
And what you'll want to do, if you think you're ever going to use this again, is kind of copy those prompts somewhere, clean them up a bit, or even ask the LLM itself to do that compiled

(15:12):
prompt, and you put it in there, and then you're starting farther ahead in the pathway, the journey, if you will, than you

(15:35):
would have been otherwise.
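
To make that iterative triangulation concrete, here is a minimal Python sketch of the dialogue loop Dr Duggar describes, assuming the OpenAI Python client; the model name, the example prompts, and the final "compile a reusable prompt" step are illustrative assumptions rather than anything specified in the episode.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = []      # the running dialogue; each refinement sees the history

def ask(prompt):
    # Send the next (possibly corrective) prompt with the full history,
    # so the model can triangulate toward what you actually want.
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

ask("Show a minimal example of displaying a dialog box on Android.")
ask("Thanks, but that's not quite what I was looking for: I meant Kotlin, not Java.")
# Once you have triangulated, distill the journey into a composite prompt
# you can reuse to start farther ahead next time:
print(ask("Combine everything I asked for above into one prompt I could "
          "reuse next time to get that final answer directly."))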

Speaker 3 (15:36):
By the way, I really like that idea: gosh, I'm struggling with these prompts and I want to put them all together, and I think, well, how am I going to do that? Well, I just ask the LLM.
In the early days, people, and you still hear this, people were talking about, well, it's just a stochastic parrot, or it's just super-powered autocomplete.

(15:57):
Do you want to explain what those mental models are and what you think of those ones?

Speaker 4 (16:04):
Sure. So, the stochastic parrot.
The idea there is that, as we said, it's been trained on this massive corpus of kind of all, or, you know, most of the available, say, hopefully non-copyrighted, but who knows, material from the web.
It's been trained on this huge corpus and, of course, even though these models have hundreds of billions of

(16:27):
parameters, right, that's still not enough to store petabytes of information.
So what they're doing is kind of finding patterns, compressions, projections.
You know, they're really distilling and digesting all that information, and that, necessarily, is a lossy process.

(16:49):
So they've kind of got this all compressed down, and you can sort of think about the parrot part of it, which is, if anybody's had parrots, sure, they can repeat some of what you say, but there's some loss there.
It's not quite exactly right, it sounds really close, but they can repeat it, and it's usually parts.
It's parts of what you said.

(17:10):
It's, you know, sub-sentences and phrases that they've heard, you know, many times over.
Like, 'Polly is a pretty bird', right.
You know, that kind of stuff. Right, curse words usually show up pretty often in these things.
And so that's the process with the LLM: it's digesting that information, it's breaking it up.
It's creating compressions and billions of artifacts, little

(17:33):
Lego blocks that it can reassemble.
And then we get to the second part of the stochastic parrot.
So we've got the parroting there, and then the stochastic part is: it's got all these pieces and now it needs to reassemble them.
Right, but to a degree that information has been lost.
It's been converted into, kind of, let's say, you know,

(17:55):
networks of probability, right? And so it starts putting out these parts: it puts out a part, and then it looks at what it's put out so far, and probabilistically, it sort of rolls some dice, flips some coins, and decides what the next part would be.
That's the stochastic part of it.
So this is what people mean when they say it's a stochastic

(18:15):
parrot.
It's digested everything into these parts, and then it rolls some dice and kind of strings together the parts, like a parrot would, to form a response. Okay, and go ahead.
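
As a toy illustration of that "roll some dice" step, this is roughly what sampling the next token from a probability distribution looks like in Python; a real model scores tens of thousands of candidate tokens with a neural network rather than a hand-written table, so every value below is invented for the example.

import numpy as np

# Hypothetical raw scores ("logits") the model assigns to candidate
# next tokens after the context "Polly is a pretty":
tokens = ["bird", "girl", "parrot", "house"]
logits = np.array([3.2, 1.1, 2.5, -0.4])

def sample_next(logits, temperature=1.0):
    # Softmax turns scores into probabilities; temperature controls how
    # adventurous the dice roll is (lower = more deterministic).
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

print(tokens[sample_next(logits)])  # usually "bird", but not always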

Speaker 3 (18:29):
And I suppose what's good about that is that, if you're interested technically in what it's doing, without getting really into how neural networks are designed, that gives you a pretty good understanding of what's going on under the hood, and it ensures that you don't think, well, I'm dealing with a human behind the screen here.
So it highlights some of the limitations.

(18:49):
In general, though, I don't feel it's a particularly useful mental model, because I think it maybe undermines... it makes you think that it's less than it is.

Speaker 4 (19:02):
Well, I mean, yes and no.
So there certainly are limitations to this process.
And, I mean, for example, hallucinations are, you know, a well-known problem with large language models. Like, you know, you ask it for a reference. Like, hey, you know, LLM, I heard that David wrote a paper recently, a paper about XYZ.

(19:28):
It doesn't, in a sense, have that information anymore, because it's been compressed and chopped up and stripped away, and so it's going to give you an answer.
It'll string together: oh yes, David wrote a paper entitled 'Transphysical Appropriation of Such and Such', and it'll string it together and give you something that looks

(19:49):
really convincing.
It's like, wow, that sounds like a great paper.
Heck, you could probably even ask it to generate an abstract for you, and it would seem great.
But it's all fantasy.
It never existed, because once you've kind of chopped up the world into this probability space, that space is much larger

(20:09):
than the actual world.
So I think it's good to know that those limitations are possible: you have to be aware that maybe what you're getting is a stochastic combination of parts and not something that was really there.
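
A toy illustration of why that recombination can be so convincing: stitch plausible fragments together and you get a citation that was never written. Every name, title, and year below is fabricated for the example.

import random

authors = ["David", "Smith et al.", "Chen and Gupta"]
titles = ["Transphysical Appropriation", "Latent Compression", "Stochastic Routing"]
suffixes = ["of Such and Such", "in Large Language Models", "for HR Analytics"]

# Each piece is statistically plausible on its own; the combination is
# fluent, authoritative-looking, and entirely invented.
fake_reference = (f"{random.choice(authors)}, "
                  f"\"{random.choice(titles)} {random.choice(suffixes)}\" (2023)")
print(fake_reference)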

Speaker 2 (20:24):
Along those lines, what I'm hearing and understanding is that it's definitely not autocomplete, that it's actually making a prediction around what the best response would be, which explains both hallucinations and also, as you mentioned earlier, that we can get much more sophisticated in our prompts and how we leverage them. But the exact same prompt in the same

(20:46):
system can actually elicit a different response, which can be confusing for individuals as well.

Speaker 4 (20:54):
I mean, you're right about that.
And that was the other mental model, which is some form of autocomplete: autocomplete on steroids, or whatever the phrase is. And the way in which it differs is, well, I mean, the traditional autocomplete would

(21:14):
really only be looking at a very tiny context, first of all.
Right, it would be looking at the last couple of words that you typed and trying to find the next, you know, the next few words.
Okay. The context in these large language models is orders of magnitude larger than that, and it's also much more sophisticated and more complex.
It doesn't just look at, say, the last part of what came

(21:39):
before.
It doesn't even look uniformly.
It can be intelligent about how it kind of looks around in there to find the next matches, and it's just a far more sophisticated and much larger model.
So it's not a fair comparison in terms of sophistication. It's a fair comparison at the lowest level, which is that, yes, at

(22:02):
the lowest level, this machine with hundreds of billions of parameters is looking at the context and it's deciding the next part, the next token, if you will, and then it kind of moves forward, and moves forward, and moves forward, you know, much like an autocomplete would.
But that's about the end of the comparison.
And so you're right that in these very low-level
(22:26):
level kinds of senses, I thinkthat in these very low-level
kinds of senses it's correct.
But they do distract from, and sometimes are meant to diminish, the capability that's been added on top, just the massive amount of capability difference.
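
To see how tiny the context of traditional autocomplete really is, here is a classic bigram autocomplete in a few lines of Python. It conditions on exactly one preceding word, whereas an LLM conditions on thousands of tokens at once; the training sentence is a made-up example.

from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count which word follows which: the entire "model" is this table.
    table = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def autocomplete(table, word):
    # Context is just the last word typed; nothing else is considered.
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

table = train_bigrams("the cat sat on the mat and the cat slept")
print(autocomplete(table, "the"))  # -> "cat"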

Speaker 3 (22:45):
Now, if we look at different kinds of AI: people are familiar with, as I say, the large language models, the ChatGPTs of the world, but they may also remember that recommendation engines were the classic example of an AI, as well as, more recently, something like AlphaGeometry, which is doing extremely well on

(23:08):
difficult geometry questions.
So are there different categories of AI that we should be thinking about, where this category is quite different from that category?

Speaker 4 (23:19):
Absolutely. I mean, so, in technical parlance, if you will, we refer to things that are called narrow intelligences.
So these are artificial intelligences that are tasked to do something very specific: folding proteins,

(23:41):
you know, performing mathematical integration, recommending videos, et cetera.
And so the advantage of narrow intelligences like that is that, when you train them, all your resources, all your parameters, all your computation, all the energy you're using, all the money you're spending are focused on that one task.

(24:04):
Right.
And so instead of being a jack of all trades but master of none, it's a master of one thing.
And large language models are, in a sense, kind of both narrow and general.
They're narrow in that, at least, you know, out of the box, the common ones really, up until recently, only understood, you know, text, right? They only understood language.

(24:27):
They really couldn't make much use of, say, audio data or, you know, video data.
Now they're starting to.
So people are starting to train, you know, what are called multimodal models.
This means it has multiple modes of input data that it understands.
And then, on the other hand, they're general in the sense that, because natural

(24:49):
language is itself flexible, you can ask all kinds of questions in natural language and pose all sorts of problems, and you might get an answer from the LLM, right? But it turns out that the answers really come from what you can imagine is kind of a Swiss cheese.
It's like this block of Swiss cheese.

(25:10):
You know, it's taken all the knowledge that was in its training corpus and it's compressed it and chopped it up and it can kind of reassemble it, but there's a bunch of holes in there.
And so for some questions you ask, you'll get great answers.
For some questions you ask, you'll get bad answers.
For some, you'll get answers that really seem right, but they're subtly wrong, and sometimes in dangerous ways, if

(25:33):
you just apply them without domain expertise.

Speaker 3 (25:36):
Yeah, and that's, I think, another key lesson for managers when they're educating their employees, as well as using it themselves: this danger of answers that are just wrong in a subtle way.
They're very convincing, but you do have to apply domain expertise.
Yeah, absolutely. Absolutely.

Speaker 2 (25:55):
Keith, can I ask if there are any developments in the world of Gen AI that have surprised you, or surprised you lately?

Speaker 4 (26:03):
Well, honestly, believe it or not, the things that are surprising me lately are some of the geopolitical and social phenomena happening in this sphere.
So, for example, I was just really quite shocked that DeepSeek was as open as they were with their methodologies

(26:25):
and techniques and code.
I didn't expect that.
Because of the geopolitical tension between the United States and China in particular, I wasn't expecting a Chinese company to be so open, and it just happened again with this

(26:46):
sort of G1 robotics release, open-sourcing the models and methods of that, and kudos to them.
I mean, this is really amazing, and I think it's good, actually, for the world.
I think that the best path forward for humanity is widespread, distributed, open, diffuse,

(27:08):
you know, development by everyone, you know, by all countries, all corporations, all hobbyists, all enthusiasts.
So things like that are surprising me. In terms of technological developments, I don't want to act like I want to say no, but it's not because I don't think AI has achieved great things.

(27:31):
I say no because I had great expectations for AI, and I didn't mean to say AGI; I've had great expectations for AI.
So I think this is kind of expected in the sense that, yeah, progress is tremendous and very cool, but otherwise I think

(27:52):
it's not so surprising that we've made it this far, and even that we've made it this quickly.

Speaker 3 (27:59):
And now everyone's talking about agents as being the next big thing, where the LLM will take a more active role in actually doing things in the world and controlling your computer, just as if you were controlling it.
What's your thought about that technology?

Speaker 4 (28:15):
Yeah.
So I'm a little bit concerned about, you know, adding agency, essentially adding in these control loops, right, where AIs can directly control more and more, and the reason why is because we don't know how to really engineer these things well enough to be reliable enough for all use cases.

(28:38):
So for some use cases, sure. Like an AI that changes your desktop background with cool AI-generated images composed from all the latest news feeds and X posts and everything like that, that's fine.
There's no real chance of harm there.
But I worry about hooking up agentic systems to things that

(29:02):
have the potential to do harm, and I just don't think we're at a level of sophistication of AI engineering yet to do that.
So I think it's a good goal, but it really, you know, needs to be pursued with caution and with transparency as well.

(29:23):
I mean, I think people should know if an AI is going to start taking control of certain activities, or if the content that they're consuming was generated by AIs, that sort of thing.

Speaker 3 (29:34):
Yeah. And even if, people like to use a travel agent example, where the damage wouldn't be that great, except that you were hoping to go to Australia and you end up in Sydney, Nova Scotia, right, which can happen with humans as well. But nonetheless, to hook it up to actually making financial decisions of any kind, buying things on your behalf, as you

(29:56):
say, the engineering may not be there to do it reliably.

Speaker 4 (29:59):
Or even, for example, in the travel agent thing, you know, maybe, for example, it books a sequence of legs, you know, flight legs, that take you through a certain airport where you don't, like, you don't have what you need to actually pass through the airport. Like, there's a visa requirement that you haven't completed or aren't able to complete, or vaccinations that you do or

(30:21):
don't have.
I mean, those are the types of kind of very detailed engineering, um, uh, type of things that can slip through, in particular, LLMs, but also other types of AI systems that have a degree of probabilistic, stochastic activity, right?

Speaker 2 (30:42):
Just to shift gears.
Can you share with us how you use Gen AI for yourself within your own work?

Speaker 4 (30:48):
Sure, yeah.
So in the first place, I use it as a really great search engine.
So, I mean, I still use traditional search engines, but a lot of my, let's say, exploration activities have shifted over to large language models.
The way I'm using it today is I kind of start with a large

(31:10):
language model.
I ask it some questions.
It gives me some information.
I'll usually go and try to verify. Like, certain things will kind of, you know, maybe smell iffy to me, like, I'm not sure about that, and I'll paste it into Google and see if I can get that confirmed, you know, by somewhere.
Or I'll look at the links, if I'm using one of the LLMs that provides reference materials, and try and go confirm things.

(31:34):
So that's one thing: this exploration and searching.
I've found it very useful for generating tailored boilerplate code.
So if I need a function that does something that I just don't feel like writing, like, I really don't feel like reading through the docs and digesting them and transforming them into

(31:55):
code, you know, I can get that initial prototype from the LLM.
Now, it typically takes quite a bit of further work on my part to, you know, make sure it's not got any bugs and that it does what I want it to do, and then it's updated to the latest, you know, interfaces, and to kind of expand it in ways

(32:16):
that would have been difficult for me to describe in natural language anyhow. Like, it's just easier for me to kind of code it myself. But it saves me a tremendous amount of time in that initial prototype construction.
And then I use it for fun too.
So I have fun trying to generate images in particular.
That's what I've played around with the most.
So yeah, things like that.

Speaker 2 (32:39):
Thank you. And how much does this set you back financially?
What systems do you invest in?
What's the financial investment that's required? Just to provide a bit of an anchor for ourselves, for our audience.

Speaker 4 (32:52):
Yes. First of all, I don't pay 200-plus bucks a month for XYZ Awesome LLM; the ones that are free and/or the lower-tier pay-as-you-go type things are fine for me.
So, first of all, I do have an OpenAI account.

(33:16):
I've had an OpenAI account, you know, for some time, and I probably spend, like, you know, maybe 20 bucks a month or less on the activities that I do there.
I use Gemini, in its various forms, free, right.
I use ChatGPT.
Even though I have an OpenAI account, I sometimes use the

(33:38):
free version of it, or log in.
I haven't played around much with DeepSeek.
I just didn't really feel the need, because I was getting what I needed for my work, at least, from the other ones.
I play around with it when we do puzzles and problems and things like that, to compare, and Grok as well.
But primarily, I would say, I use Gemini and OpenAI.

Speaker 3 (34:02):
And if we look ahead two or three years, how will the capability of AI likely be different from today? So, sort of, end of 2026 or into 2027?

Speaker 4 (34:14):
Well, it's really hard to predict the future, but what I think will start to happen is that we'll have more and more, let's say, narrow intelligences trained and fit for purpose, and so people should be using a variety of narrowly intelligent, you know, systems for different

(34:37):
purposes and then kind of integrating the results together.
And I hope that people start providing kind of advanced user experiences to make that easier.
You know, to kind of have, like, let's suppose you put in a query that contains, you know, some natural language and some

(35:05):
equations and maybe some images, and something that analyzes it into its different modes and sends different combinations of those modalities to narrow intelligences, pulls the information back and kind of reassembles it.
Because we're already seeing that people are finding results, and this is no surprise, I mean, it shouldn't have been a surprise, but maybe it's a surprise to some people, that

(35:27):
it's better to have a smaller model, narrowly trained on your goal, than to use a massive generalist model, right?
So I think that's where we're going to head: towards more, let's say, constellations and systems of narrow intelligences rather than ever-bigger general models.
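
A rough sketch of what such a "constellation" front end might look like: a router sends each query to a fit-for-purpose specialist. The keyword matching and the print-stub specialists below are hypothetical stand-ins; a real system would use a learned classifier and actual narrow models, and would reassemble multiple partial answers.

import re

def math_specialist(query):
    # Stand-in for a narrow model trained on symbolic math.
    return f"[math model] {query}"

def vision_specialist(query):
    # Stand-in for a narrow model trained on images.
    return f"[vision model] {query}"

def general_llm(query):
    # Stand-in for a general-purpose language model.
    return f"[general model] {query}"

def route(query):
    # Crude modality detection; a real router might be a learned classifier.
    if re.search(r"[=^+*/]|\bintegrate\b|\bsolve\b", query):
        return math_specialist(query)
    if re.search(r"\bimage\b|\bphoto\b|\bdiagram\b", query):
        return vision_specialist(query)
    return general_llm(query)

print(route("integrate x^2 from 0 to 1"))
print(route("summarize this paragraph for an HR newsletter"))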

Speaker 2 (35:45):
Thank you. And your thoughts on the capability of robots and where that's headed?

Speaker 4 (35:51):
Yeah, so that's one thing that surprised me. Well, it's because you asked what surprised me.
So what's surprising me is how much advancement and resources and focus are going into humanoid robots.
I always figured, yeah, you know, we're gonna create drones and that's gonna be it.

(36:11):
Like, people are just gonna make more and more drones of all different kinds.
You know, things that don't resemble humans at all.
You know, whether it's, you know, quadcopters, or things on wheels, or things on, you know, four legs that have wheels, or all kinds of crazy stuff.
But there's been really significant advancement in humanoid robots.
Uh, you know, I was a bit confused by that until I had a

(36:35):
chat, you know, with a friend of mine, and she helped me see kind of the obvious, which is: hey, the world is already designed around the humanoid form.
So it makes total sense that, uh, you would have humanoid robots, because they can fit into that environment, and especially if robots are interacting with people or

(36:55):
helping people do the tasks that people would ordinarily do.
So I've been really quite surprised at the growing advancement and investment put into robots, and it's super exciting.
They're amazing.
I mean, I've seen just absolutely amazing videos.
Maybe other people have seen those too online.

(37:17):
And this gets back to the agentic, you know, uh, question, which is: we need to be a bit cautious here, like, even forgetting about AGI or ASI doom scenarios.
You know, when you start unleashing robots, they can just cause damage, like, accidentally.
I do want this to proceed with caution, but it's super exciting.

Speaker 2 (37:40):
You've worked in big organizations.
With that, do you have advice for organizations that are wanting to respond to opportunities and to address the risks?

Speaker 4 (37:51):
Well, so there are two parts to that question: the opportunities and the risks, you know.
For the opportunities, what I would do is I would encourage organizations to hold, you know, hackathons and workshops to really give people a hands-on experience with these.
I mean, of course, the software providers like Microsoft and

(38:13):
whatnot, are incorporating AI, you know, both narrow AI as well as, let's say, LLM technology, into their products.
But just getting it into the hands of people and letting them try it out, and sort of coach sessions, right, where somebody who's good at these things is kind of walking them through what they can be used for, tailored for their daily

(38:36):
experience, I think is what I would highly encourage, because I've run into a number of people who just haven't tried it, either because they weren't quite sure how to use it or they

(39:03):
really hadn't imagined the possibilities.
And when I walked them through it, in kind of even just a couple of hours, you know, they're like, wow.
And then for the risks: I think the way in which we mitigate the catastrophes that people are worried about is that we think now, in very detailed and thorough ways, about reducing harm,

(39:24):
you know, that's caused by AI. Like, AI is already causing harm today.
We already know that. Like, we've seen the stories of the harm that, you know, algorithms and whatnot inflict through social media and many other avenues, right?
So if we focus on reducing the harms... So for an organization, what's a big harm?
Well, data privacy, data leakage.

(39:46):
You know, that's one angle.
So, making sure that you've really got experts in privacy and data isolation, and creating data firewalls, and making sure that any AI systems you're using internally are governed correctly and robustly, right?

(40:06):
And then, on the output side as well, making sure that you have in place the necessary human curation, oversight, surveillance, and I don't mean surveillance in, like, a negative way, but just keeping an eye on the content that's being produced, to keep an eye out for concerning things, I mean, like,

(40:28):
misinformation or hallucinations or things like that.

Speaker 2 (40:32):
Thank you so much.
I really appreciate the thoughtful insight and discussion today.
This has been a shorter discussion; I think David and I could talk to you for the rest of the day and keep learning.
For those who are interested in learning more about your work, how should they follow you?

Speaker 4 (40:50):
So I am on Twitter.
You can find me there.
It's, uh, Dr Duggar on Twitter, but I'm not that active. Sorry, X. I'm not that active on X, um, yet. I'm becoming more active, and you're mostly going to find a couple of posts and some poems and, you know, whatnot.
But I think, um, I will be more active there in the future.

(41:13):
So you could do that if you wanted to.
You could also join our Machine Learning Street Talk Discord channel and check out our podcast and YouTube there.
I'm pretty active in that Discord community.
So if you join the MLST Discord, hang out in there, I'm around, we can chat in there.

(41:34):
Otherwise, you can check out my company, so that's XRAI Glass, G-L-A-S-S.
Check it out, see what we're up to.
Try the software.
It's in the Android store and the Apple store.
Those would probably be the best options right now.

Speaker 2 (41:51):
Very good. Are you on LinkedIn?

Speaker 4 (41:53):
Yeah, I'm on LinkedIn.
Let me just check here.
Oh, it's Dr Keith Duggar, d-r-k-e-i-t-h-d-u-g-g-a-r, at LinkedIn.

Speaker 1 (42:06):
Thanks for listening to the HR Chat Show.
If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.