
September 24, 2024 • 47 mins


Discover the untold truths and common missteps of AI implementation in business with our special guest, Matt Martinez, a seasoned cloud engineer at QFlow and founder of DragonOps. Alongside Dr. Andrew Hutson, Matt uncovers the frequent pitfalls and misconceptions that businesses face when jumping on the AI bandwagon without clear objectives or quality data. We delve into the role of FOMO and competitive pressures that often drive companies to prematurely adopt AI, leading to suboptimal results and wasted resources.

Join us as we shed light on the real-world use cases of generative AI in software engineering and beyond. Matt shares eye-opening anecdotes about the misconceptions non-technical managers have regarding tools like ChatGPT, emphasizing the irreplaceable value of human expertise. We discuss the practical challenges of using Gen AI for coding, including the necessity for continuous adjustments and the inherent limitations of these models, stressing how human oversight is crucial in ensuring AI-assisted solutions are effective.

Lastly, we navigate the complexities of integrating AI tools with business processes to generate meaningful insights. From the challenges of data preparation to the importance of well-structured and interconnected data, we provide a comprehensive overview of how businesses can effectively harness AI. With a focus on transcribing and summarizing meetings to improve efficiency and communication, we explore the future potential of general artificial intelligence.

To learn more about how DragonOps can help your business, visit https://www.dragonops.io/

About "The Interconnectedness of Things"
Welcome to "The Interconnectedness of Things," where hosts Dr. Andrew Hutson and Emily Nava explore the ever-evolving landscape of technology, innovation, and how these forces shape our world. Each episode dives deep into the critical topics of enterprise solutions, AI, document management, and more, offering insights and practical advice for businesses and tech enthusiasts alike.

Brought to you by QFlow Systems
QFlow helps manage your documents in a secure and organized way. It works with your existing software to make it easy for you to find all your documents in one place. Discover how QFlow can transform your organization at qflow.com

Follow Us!
Andrew Hutson - LinkedIn
Emily Nava - LinkedIn

Intro and Outro music provided by Marser.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Emily (00:08):
Welcome back to The Interconnectedness of Things, a podcast brought to you by QFlow, where we dive deep into the technology shaping our world today.
I'm your host, Emily Nava, and whether you're curious about AI, digital transformation or how these innovations impact industries like government and healthcare, you're in the right place.
Today we have a special guest joining us, Matt Martinez, a

(00:32):
cloud engineer here at QFlow and the owner of DragonOps, a tech firm with a strong focus on AI and software development.
Matt brings a wealth of knowledge in AI implementation and how businesses can, and sometimes shouldn't, leverage AI to get ahead.
Matt, it's great to have you on the show.

Matt (00:53):
Thanks, Emily.
I am super pumped to be a part of the show and to discuss AI with you and Dr. Hutson today.

Emily (01:00):
As are we.
Yes, Dr. Andrew Hutson is here as my co-host.
He will be interjecting with his own nuggets of knowledge here and there.
Want to say hi, Hutson?

Dr. Hutson (01:15):
Hey, how's it going?
I am ready with nuggets.

Emily (01:20):
All right.
So today we're going to explore a topic that's super important for anyone looking to stay competitive in the digital age: how people are misusing AI, or missing out on huge opportunities, when it comes to AI for business use cases.
So the age-old question: how can I use AI efficiently?
So let's start by talking about some of the most common ways

(01:46):
businesses misuse AI.
What's your take on that, Matt?

Matt (01:52):
Sure.
So, you know, AI is all over the place, and with all the hype there is out there right now, it's really hard to blame anyone for wanting to just dive in without much direction and without much of a plan.
The problem with that is AI isn't magic.
It needs a clearly defined objective, a problem and high-quality data.
It needs to be tied into your core business objectives, and if

(02:14):
you fail to do any of those things, then you're not necessarily going to get the insight or the results out of it that you want.
So yeah, it's just very easy to jump into AI without being aware of the right and the wrong way to harness things like that.

Dr. Hutson (02:30):
That's a good point, Matt, and it almost feels like we need to step back in time a little bit, right?
You talked about the hype of AI, and I used to work with someone who would call everything a hype train, and apparently it was a good thing, and I would get all upset, like: well, no, that's hyperbolic.
Why would you want to be on a hype train?

Emily (02:52):
Because it's fun.

Dr. Hutson (02:54):
Because it's fun.
AI seems to have caught the imagination of a large majority of people because of the great demo-ability of generative AI, and so, people being able to interact with this, they start

(03:15):
to jump to conclusions on what it is actually capable of doing without truly understanding it.
And I think, if we take this back in time a little bit, the past 20 years or so, the effort of building data centers and getting data organized and getting to a source of truth and that gold standard, this seems

(03:38):
to be the logical progression of things that technically were already AI before ChatGPT ever came on the scene.
It just has a novel probability matrix that it's using to say: this is the next word part that should probably come next, but

(03:59):
that's it.
It's not intelligent, it doesn't think.
It's based off all the training data that it has: this is probably the next right thing to say.
But then, unfortunately, you also get hallucinations.

Emily (04:16):
Yeah, unreliable answers.

Dr. Hutson (04:18):
Yeah, and you get degraded performance over time too.
So I think the wrong way to use it is to use it as a replacement, or for companies to use it out of context of their work, and so then the question becomes: how do you get the context into the system?

(04:40):
And I think that brings up questions like security and privacy, which, Matt, I know you know everything about: security and privacy and locking it down.

Matt (04:52):
Oh sure, yeah, no, that's a whole other topic for conversation, and I completely agree with you.
Right now, a lot of these businesses are prone to just dive in without defining exactly how they need to be using it or training it, and it's just a mess from there.
I also love that you said that we need to kind of go back in time and hold the brakes on this a little bit.

(05:13):
I completely agree with that as well.
You know, I like to think that a lot of these businesses jumping into AI without thinking about the right approach is kind of like me buying a new kitchen tool off Amazon every three days.
It's all about the hype.
I'm hyped, I want to make a sous vide this or a slow-roasted that, and three days later I bought the tool.

(05:35):
I don't know how to use it, I didn't prepare for it, I'm not trained in it, so now I just have an expensive tool and no results, and companies are ending up with the same thing.
They're overpaying and underplanning, and ending up with a disproportionately low amount of results for their effort and their investment.

Dr. Hutson (05:52):
Matt, why do you think that is?
Why do you think folks do this... is it FOMO?

Matt (06:01):
Well, you know, I bet it's a few different things.
FOMO is absolutely a part of it.
We are hardcore riding the AI hype wave right now.
I bet it's also a fear of your competitors.
Everyone is using AI, and I think the implicit understanding there is: if you're not also using AI, and if you're not using it right now, that's a missed opportunity in business.

(06:22):
You're not going to streamline your business.
You're not going to generate that additional revenue or keep up with your competitors.
So you need to jump on it now, whether or not it's the exact right time or the exact right approach, and I think that's the misconception that's biting a lot of people in the butt right now.

Dr. Hutson (06:38):
Yeah, I think there's a lot of merit there.
Sorry, Emily, were you going to say something?

Emily (06:43):
I was just going to say that companies feel like they need to be using it, and so they just slap it onto a process, or slap it on somebody's work plate, and say: figure this out, we need to be using this, you figure out how we use it.
And that is a whole other issue

(07:03):
of making the knowledge worker figure out how to use AI.

Dr. Hutson (07:10):
It's a catch-22, right?
I mean, there's the: well, I want to try to figure out how to use it, but I have to have the use case in order to justify bringing it in.
Just this past week at the AI World Conference in San Diego, I got to hear from different folks around governance for AI

(07:31):
and how it's seen as a black box, and, without naming company names, just talking to general counsels for large companies and CIOs.
They're getting confronted to bring in generative AI, specifically ChatGPT, into areas like coding or information

(07:51):
governance, but everybody's afraid to do it, because they don't really know how it works, and they're very concerned around, I guess, the assumption that it has sentience and therefore would have some legal ramifications of liability,

(08:12):
which I had never really considered before.

Emily (08:16):
What do you mean by that?

Dr. Hutson (08:19):
Sentience is like it's a thinking, feeling entity.
It's a non-human thing that can think, feel and do, and that's based off of your ability to type something into the chat and ask it if it's alive, if it thinks it can do something.
And I think, because it's trained on human language and

(08:45):
human interactions to train these large language models, it would be very likely to have a very human-like response, because that's what it was trained on: to seem self-legislating, self-autonomous, that it is

(09:07):
making these decisions without direction from a programmer.
What do you guys think?

Emily (09:17):
I think that calls into question some quality issues that people are running into when they're using these models that have been trained on unreliable sources.

Dr. Hutson (09:33):
Yeah, go ahead, Matt. No, you go ahead.

Matt (09:35):
Sure, I was just going to say, you know, I couldn't agree with you more, Dr. Hutson.
There's this prevailing opinion that these Gen AI machines are self-legislating and, you know, even nefariously self-aware, et cetera.
As someone who works with this stuff all the time and deploys this on a daily basis, I can say that's very far from the truth right now.

(09:56):
It also sounds like we're talking about two different problems, two different sides of the same coin.
We're talking about overuse due to hype and underuse due to fear.
I see both of those, and you're absolutely right, Dr. Hutson.
You also mentioned overuse of ChatGPT in software engineering, and that's something I see all the time.

(10:17):
I wish I had a nickel for every time I heard a non-technical project manager say something like: all of you developers need to be using ChatGPT to develop your code.
That's a big misconception.
It's a result of the hype, and it's not really how it works.
When you use Gen AI like that, it has to be tied to objective-based business goals,

(10:42):
and it has to be with human intervention.
There have to be human checks, and it has to be this pairing.
It's not going to be a silver bullet or a catch-all; it's a balancing act.

Dr. Hutson (10:53):
Yeah, I mean, I have my own personal anecdotes about using it for coding.
Do you have any yourself?

Matt (11:02):
Yes, I think using ChatGPT as a source of code in software engineering is kind of like the unpaid intern in a kitchen or something like that.
Like, yes, you need them, you're training them, they're helping you, but it needs far less responsibility than I think

(11:22):
most people are giving it right now.
You know, 100% of the output that's given to you by Gen AI still needs to be pored over by a human.
I mean, obviously not at scale, when you're talking about millions or billions of datasets, but for things like software engineering, where the problems that you're asking Gen AI to solve can be represented by code blocks of 100, 200, 300

(11:43):
lines of code?
Yeah, you need to be supervising it very carefully, because this code is going to go into production and be used for all kinds of business use cases.
What about you?

Dr. Hutson (11:53):
Yeah, man.
For me specifically, I can see the merits of using something like that as a starting place.
If I know nothing and I'm trying to figure out a new concept, whatever that might be (could be coding, could be anything else), it could be a place to help me get started.

(12:17):
But I have found... first of all, I started using this back before there was ChatGPT, so I was using it from the command line, GPT-3, using Python on my local M1 Mac.

(12:39):
Okay, the amount of tweaking that had to happen even to get something intelligible back was difficult then, and I would think the height of its capability was when it was first released in November, December 2021.
And ever since then, I've seen diminishing returns in the

(13:04):
quality of the solutions returned, and I've gotten more specific with my prompts and more targeted than I was before, and getting worse results, and really having to go through lots of iterations, to the point that I just stopped, and I'm like: it's not doing it for me.
I need to go someplace else to figure out how this actually

(13:26):
works so I can get it done.

Matt (13:29):
That's very interesting.
I'd be very curious to learn more about the use cases.
You know, it's faster for me to ask a Gen AI assistant for 500

(13:50):
lines of code and then correct the 50 that are glaringly or obviously wrong than it is to write those 500 lines myself.
And fortunately, at least for me, who spends a lot of time writing infrastructure code and backend automation code for the cloud and DevOps, that appears to be the case: Gen AI can usually get around 90% accuracy for what

(14:11):
I need, and I end up actually saving a considerable amount of time just correcting the pieces that I can pretty quickly tell aren't correct, rather than writing everything from scratch.
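
To make that workflow concrete, here's a minimal sketch of the "generate a draft, then gate it behind a human" loop Matt describes. The `openai` package, the model name and the Terraform prompt are illustrative assumptions, not anything named in the episode:

```python
# Ask a model for a first draft of infrastructure code, then force a
# human review step before anything is applied.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever you have access to
    messages=[
        {"role": "system", "content": "You write Terraform. Output code only."},
        {"role": "user", "content": "An S3 bucket with versioning and "
                                    "server-side encryption enabled."},
    ],
)
draft = resp.choices[0].message.content

# The human check Matt insists on: write the draft to disk for review and
# a `terraform plan`; never apply generated infrastructure code unread.
with open("draft_bucket.tf", "w") as f:
    f.write(draft)
print("Draft written to draft_bucket.tf -- review before applying.")
```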

Dr. Hutson (14:21):
I think that's the key difference.
So when using these models, if I come at it as an expert already, and I need something to enhance or speed up what I already know how to do, then yes, I think it's a novel enhancement to my workflow, all day.

(14:41):
To think that it can replace someone that doesn't have those knowledge, skills and abilities, I think, is not even close to the realm of possibility, and I have speculation for why that is, and I think it mostly comes down to its source training set.

(15:03):
How often have you and I come across folks that have wacky ideas about coding, logic and best practices? And people will publish anything out on the web without verifying it.
How many different articles and forums have you been on, before we had ChatGPT, that would tell you a solution, and it didn't actually work, and you had to find it like 50 lines down in

(15:26):
some comment to even get close to something that could actually operate?

Matt (15:32):
That is spot on.
I couldn't agree more.
You and I see this all the time, having worked together for years now.
In fact, I actually worked for a client who ran into this specific problem a few years back.
They were using machine learning for smart agriculture solutions out in California, for things like yield prediction, water management, stuff like that.
They were faced with two paths.

(15:53):
Path one was: hire a senior engineer to help maintain and write better code.
Path two was: keep their current junior engineer and implement ChatGPT to write the majority of their code.
They chose that option, and it ended up going very badly for them.
To this day, three years later, they're not able to push a ton of new code.

(16:14):
They're forced to consistently deal with tech debt, because they've pushed code that, you know, their junior engineer, just at that point in his career, didn't understand.

Dr. Hutson (16:23):
So yeah, I completely agree with you, and I think that little anecdote there goes across many other places.
So Copilot is like the new thing that's built into Visual Studio Code that everyone wants to talk about.
But then, you know, there's other systems that can be trained up, like Mistral.

(16:44):
And then, well, there's the classic story of ChatGPT: a bunch of lawyers going and asking it to give evidence for their legal argument, and it made up court cases.
So that's a cool way to check your work there, guys.

Matt (16:59):
Classic lawyer joke, that one, and it was absolutely delightful, yep.

Dr. Hutson (17:07):
And so that's how I think about AI in the workplace and where folks go wrong with it.
This was a very long answer to the very first question. Emily, do you feel like we covered it well?

Emily (17:20):
I think you more than covered it, but I really loved the anecdotes and the personal experiences with it.
I tend to use ChatGPT as kind of like that intern.
I like to say they're a paid intern, though, because unpaid internships are just... they're not it.
But I totally agree with

(17:43):
everything that you both have said.
So I want to shift the conversation a little bit.
So we've talked about how not to use AI in business.
We don't want to over-automate, we don't want to rely on it too much, because there's the quality issue.
But there are some opportunities that businesses are maybe missing out

(18:06):
on.
So what do you think those areas are?

Matt (18:16):
Sure.
So, you know, it's really hard.
I think businesses can miss out in one of two main ways: either, as a whole, as an industry, we haven't made the advancements we need, or some companies in an industry are using AI for specific, targeted use cases and some aren't, and those in that case are missing out on those use cases.
But, you know,

(18:38):
it spans industries.
I mean, you can talk about demand forecasting in a multitude of industries, doing things like analyzing historical data and market trends to reduce overstocking or understocking.
You can talk about using demand forecasting to help scaling and cost optimization in the cloud and in software, to help with

(19:00):
staffing levels in any number of industries.
You can talk about education.
So my wife is a former teacher, and we're still very much figuring out how to improve things like intervention for at-risk students, or curriculum development, or personalized learning with AI.
I think the big takeaway here is there are many of these cases

(19:22):
where it's not necessarily a missed opportunity so much as it is: we've started to harness the power of AI, but we're not done harnessing the power.
We're not done revolutionizing how to use these tools in every industry.

Dr. Hutson (19:36):
Yeah, I agree with that bet on "we're not done," and I also wonder if we're making some assumptions that we aren't being explicit about.
So one of the things I like to do is level-set on a definition of AI, and I kind of like IBM's, which they put out recently.

(19:58):
Maybe it wasn't that recent.
Their definition of AI is a series of steps that would normally require human

(20:19):
intelligence or intervention.
And so, with a definition like that, we have to be honest with each other that AI is not new, and it's been used.
If anybody has a smartphone, welcome: you've been using artificial intelligence, right?
Anybody ever used the GPS and a map?
There you go.

(20:42):
Where to go, turn left and right, it's helping you out.
That's a classic example of something that we're already using.
Now, where it gets interesting, and where there's been a huge barrier, are these predictive things that you're talking about, Matt, and that's been difficult because, while we have gotten very good at storing data, we are very bad at relying

(21:06):
on it, verifying it and getting it back in a way that we can make a decision on.
It's not unheard of to ask a company how many terabytes of data they have.
20 years ago, a terabyte probably wasn't in most people's lexicon.

(21:26):
Now we're at petabytes and zettabytes.
The amount that we can store is enormous, and it's created this new problem of: we need to be able to sift through it, and that's becoming harder and harder and harder.
When I think about how we're going to get into the next phase

(21:47):
of using artificial intelligence tools, I think it is smart to focus in on: how are we organizing our information to aid a computer to sift through the copious amounts of data with

(22:07):
the same quality that we would train a human to do it?
It doesn't mean we're creating sentience, I want to be clear on that, but we have to be able to explain our decision-making as humans, organize the information in a way that can replicate those decisions, and then we have to be able to programmatically put that into an algorithm, for example, to

(22:34):
have a predictable decision that we can rely on.
All those things are really difficult to do, and have nothing to do with ChatGPT and everything to do with how a company is able to externalize the corporate knowledge that it has, how it's able to organize and store it, and then how it's able to use it, either within its current

(22:57):
staff or within an AI model.
And if you can't even get your staff up to speed, because everything is tribal or locked in somebody's head or full of rock stars, you're never going to be able to get it from a computer.
In fact, the only people I know that have truly gotten value

(23:18):
from data are called quants, quantitatives, in the financial sector, working for hedge funds.
They're able to do things like: if the temperature drops in Nebraska by three to four degrees next to the oil pipeline, that would slow down production of oil, which would increase the price of oil for the duration of three to six weeks; therefore, I should make this move on oil barrels so I can

(23:41):
make X number of dollars.
That's the level of correlation of the most minute data to make a decision, which, by the way, they will never tell anybody how to do, because they are advantaged in being the only ones who know how, because they can manipulate markets and pricing as a result.
The rest of us have to realize that that's what it takes to

(24:05):
actually get human-level decisions that we would rely on.
Now, if you want bad human decisions, welcome, we already have those.

Matt (24:16):
But I don't think that's what we want to use AI to do.

Dr. Hutson (24:19):
I don't think we're like: hey, let me use ChatGPT to make a wrong call.
Yeah, can I have some broken code, please?

Emily (24:28):
But I think a lot of people don't realize that ChatGPT can be wrong.
They just think, like you said, it's just this all-knowing, sentient being that has all the knowledge in the world.
But humans have flaws, and AI was created by humans, so it's

(24:49):
going to be inherently flawed.

Dr. Hutson (24:53):
So the only way to make that better is to give your best knowledge to the training and sift out the false positives and the false negatives to the extent possible, so that you can have higher confidence and reliability in whatever outputs those algorithms will produce.

(25:16):
Not an easy thing.

Emily (25:22):
All right.
So we've talked a lot about the misuses, the missed opportunities, how businesses should be relying on AI.
I kind of want to talk about integrating AI into your existing workflows or software.
What's the best way to go about that?

(25:44):
I'll throw that one to you, Matt.

Matt (25:47):
Sure.
So, you know, something Dr. Hutson covered a moment ago is that one of the most common pitfalls we're seeing right now is this eagerness to jump into AI without proper preparation around your data.
We talked about data siloing, which is absolutely a big thing.
Companies will adopt these AI products that rely on

(26:08):
multitudes of information, but these companies are large: they have different departments, with different data sets that are different structures, were raised by different teams, and are disparate and don't work well toward a cohesive, single AI-powered goal.

(26:29):
We see that all the time.
You know, many businesses aren't failing to at least try to integrate their tools and their processes with AI.
The failure is often in the integration.
So, even if the data isn't siloed, you're running into structural issues, which is another thing Dr. Hutson mentioned a moment ago.
So you have to get all these things right.

(26:50):
You have to have unified data, with purpose and meaning, that's clean, and then you have to tie it to core business objectives.
For example, you can use all the expensive AI analytics software tools out there, but if you haven't spent enough time planning how you're going to use these tools to forward business objective X and business objective Y, then your insights

(27:12):
won't be as helpful as they could have been, and you're going to end up back at the drawing board.
Or worse, you'll have harmful insights that can actively lead you in the wrong direction.

Emily (27:22):
And so, do you think people are expecting the AI to know how to sort through their data sets and clean their data for them?
I would, yeah. Hutson's nodding.
So, yeah, that can be very dangerous.

Matt (27:43):
I bet it's a good mix.
I bet you'll find plenty of people out there who think that AI has the ability to draw insight from incomplete or bad data sets, when in fact it doesn't, and you'll have people who have no idea what it's capable of and are just using it, just to use it, and they're going to end up with the

(28:03):
same result.

Dr. Hutson (28:08):
I think there was an old Einstein quote about that, right? Keep doing the same thing over and over again, expecting a different outcome.

Matt (28:15):
There we go.
Yeah, Einstein loved to wax about ChatGPT.
It was a big thing for him.

Dr. Hutson (28:22):
Yeah, I agree. Between that and quantum mechanics, those two favorites.
Oh, I wanted to build on some of the stuff that you had said there.
So let's bring it to a specific use case that I have seen get rapidly adopted and improve the work of everyone at

(28:44):
our company, and that's a simple thing of taking the transcript from a recorded meeting: using AI to convert the audio into text, and then the text into a chat, so that you got an initial summary

(29:06):
of what happened in that meeting, and you were able to interrogate it for more detail.
The other neat thing that it does is, when you arrive at the meeting late, rather than stopping everyone and saying "get me up to speed," you can keep your mouth shut and read the

(29:29):
summary of what had happened while you weren't there, so that the meeting can continue and you can be informed.
These are small, subtle injections of using a large language model to help solve common gripes.

(29:49):
I'm like: what was in that meeting? Who said what? What was I supposed to do? All that stuff.
And it's been to such an extent that people can't imagine, or it becomes jarring, to join a different kind of meeting that doesn't offer that, because they become so accustomed to it in such a short amount of time.
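
The meeting workflow Dr. Hutson describes maps to a short pipeline: audio to transcript, transcript to summary, with follow-up questions as extra chat turns. The episode doesn't name a vendor, so the `openai` client, the model names and the file path below are illustrative assumptions:

```python
# Step 1: speech to text. Step 2: summarize, keeping the transcript so you
# can "interrogate it for more detail" with follow-up questions.
# Assumes: `pip install openai`, OPENAI_API_KEY set, meeting.mp3 on disk.
from openai import OpenAI

client = OpenAI()

with open("meeting.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # assumed speech-to-text model
        file=audio,
    )

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed chat model
    messages=[
        {"role": "system", "content": "Summarize this meeting transcript: "
                                      "key decisions, action items, owners."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)

# A follow-up question ("interrogating" the meeting) is just another chat
# turn that includes the same transcript text as context.
```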

Matt (30:12):
I couldn't agree more.
One of the things I've been focusing on so far is the pitfalls that happen when you don't integrate AI with your existing processes the correct way, but I love what you're talking about, and that's been absolutely wonderful.
In fact, for meetings that weren't recorded

(30:39):
by this tool, I've seen leadership turn on recording for the last 90 seconds of a meeting just to be able to dictate what was spoken about, so that they can have the power of this tool at their fingertips.
You know, I have to guess that half of the Confluence articles and Jira tickets that we've made in the last several months were powered by this tool.

Dr. Hutson (30:59):
I wouldn't doubt it.
I've certainly used it for summaries of reviews.
We've made Jira tickets from those meetings.
Like you said, what that highlights, in this one or series of use cases, is that when you have a specific purpose, and the

(31:20):
tool that you are using is targeted at that narrow use case, it can be really effective, without anybody getting mucked up in "am I using AI or not?"

Emily (31:33):
Because no one says AI, but that's what they're using, whether they know it or not, and it doesn't really matter.
And I think that's the mark of the correct way to use AI: you don't know you're using it.

Dr. Hutson (31:47):
Correct. And I think, if we take that nugget (I know, I said I was going to bring nuggets today), if we extrapolate that out to knowledge work in general, then we can start to come up with some principles or guidelines: that, if we are intending to get value from a large language

(32:10):
model, we should first start with simple use cases, and we should reinforce it with our own learning and data, which creates a new problem.
Why would a company share its proprietary data, its knowledge,

(32:32):
skills and abilities that it's externalized into some system?
Why would they share that with a third party like Meta or OpenAI or Anthropic?
Why would they? I wouldn't.

Emily (32:51):
They wouldn't, and their lawyers would not want them to do that under any circumstances.

Dr. Hutson (32:55):
You're exactly right.
I mean, I sat through those panel discussions of lawyers saying: it's a black box, you can't pass compliance if you're sending your stuff over there, you can't rely on those answers, don't do it.
And when you start to hear

(33:20):
that, you start to think: okay, now we're just getting into an era of how, not if.
How can we effectively do this while upholding compliance standards, while protecting privacy and ensuring security?
The only answer that I have for that, and I'm sure there are

(33:40):
others, is you have to local-host it, you have to do it yourself.
Now, thankfully, many of these models that are out there to be used are open source, and there isn't a cost associated with

(34:01):
using a model that's been pre-trained.
The question just then becomes: how do I get what Emily knows, what Matt knows and what Hutson knows into that model, in the right way, so that when I need something from it, when I ask it a question, or I ask it to do an action that normally one of us

(34:24):
would do, it would do it with enough reliability that we could enhance our work?
I always say this: we need another Emily.
Well, we can't have another Emily; she's one of a kind.
Well, we've still got to multiply Emily by a billion.

(34:45):
How do we do that?
And I think, the more we combine how we externalize what we know with the models and tools that have been published, the higher the likelihood that we would get reliable value consistently from these tools.

(35:05):
What do you guys think?
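
A minimal sketch of the local-hosting idea Dr. Hutson lands on: run an open-weights model on hardware you control, so prompts and data never leave the walled garden. Mistral is the model family mentioned earlier in the episode; the exact checkpoint and the `transformers` setup are assumptions:

```python
# Run an open-weights LLM entirely on your own machine.
# Assumes: `pip install transformers torch accelerate` and enough memory
# for a 7B-parameter model (a GPU helps but isn't strictly required).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",  # assumed open-weights checkpoint
    device_map="auto",  # place the model on GPU(s) if available
)

prompt = "[INST] Summarize our Q3 incident postmortem in three bullets. [/INST]"
out = generator(prompt, max_new_tokens=200, do_sample=False)

# Nothing in this call touches a third-party API: the weights were
# downloaded once, and inference happens locally.
print(out[0]["generated_text"])
```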

Matt (35:09):
Well, I, for one, couldn't agree more.
What you're describing is a very delightful, super magical trend that we're seeing more and more of: companies that are using self-hosted large language models to get that kind of insight and that kind of value out of data that they wouldn't otherwise export or share with large companies.

(35:41):
Like you described, this is an incredible tool and a really cool trend to watch people adopt, and it's encouraging.
People are often intimidated by what it takes to self-host, but places like the public cloud are becoming, if not more affordable, at least more scalable and

(36:04):
workable, in a way where you can scale this up and down and make it fit within your budget.
That's something we do at DragonOps right now, actually: a lot of LLM-powered data and insight management, and it's great.
It's one of my favorite things that I see people do with LLMs right now.

Dr. Hutson (36:23):
Well, that's awesome.
I want to know more about that.
Like, you're using LLMs, or other AI models and approaches, within your own tool to help people leverage the cloud?

Matt (36:36):
Oh sure, yeah. And you know, you said it: one of the biggest problems is that people want this level of insight, but they can't, or don't want to, or are afraid to, share their data to get that level of insight.
So, one of the things we do at DragonOps: everything that we deploy as a platform services company for people is in their own private AWS accounts, and then what we can

(36:58):
do is deploy cost-optimized and scalable versions of LLMs with an easy interface, so that people can grab the kind of insights you're talking about without having to export anything or do anything hard, or risk their data or their data governance or anything like that.

Dr. Hutson (37:20):
Yeah, I mean, that's so perfectly positioned for the moment, because right now so many people want to leverage these things, but as we get serious about it with these companies, they can't really get it past governance or compliance.
So making it incredibly easy for them to spin these models up, without the worry of it going outside of their, I guess maybe

(37:43):
the right term is their walled garden, right? Like, it's in their stuff.
That's kind of the magic sauce that your company brings into the world, which is awesome.

Matt (37:53):
Oh sure, yeah. We're trying to solve two of the main problems around AI right now, which is that people don't know how to use it, and people are afraid to tie their data to it.
So we're trying to help with both of those things.
So I couldn't have said it any better than you did.
You're awesome, man.

Emily (38:09):
Yeah, that's amazing.
I had a question about, like, building your own LLM just using your company's data.
Can you explain more of what kind of data you would need to build your own LLM?
Is it your emails, or is it your notes?

(38:32):
Can you tell me more about that?

Dr. Hutson (38:38):
I would love to.
This is something we've given considerable thought to as a company, and even, I'll say, as an individual thinking about knowledge work for the past two decades.
The number one thing that people have to do in order to

(38:59):
leverage these models is to try to understand the relationship and context of their information.
Okay, the challenge then is: how can I do that at scale?
Because there's really four growing domains of information

(39:20):
that we have to try to sift through as humans.
The first is very, very personal.
So you mentioned email and notes.
You know, you and I like Apple products, so let's take Apple Notes as the place where I'm going to put all of my notes.
Matt, I know, is a big fan of Logseq; it's another great place to put notes.
And all three of us have

(39:44):
to use email.
We'll just talk personal, not professional, yet.
All of us use email.
Just taking that corpus of data and getting it organized is monumental.
The habits and the discipline that have to go along with

(40:07):
getting those few things organized, for the individual, is a lot.
Now, if we grow out to the next tier, that's the groups that we are a part of.
Could be our family, could be a church community, could be a sport that we do, could be a board gaming convention, could be Anthrocon in Philadelphia, who

(40:27):
knows?
But these would be the groups that we interact with, where there's some information that we need to collect and sift through and connect with our personal information that we're interoperating with every day.
And then, if we go further out still, beyond those groups, and think about the organizations to which we belong, we can think about the companies that we belong to, or even

(40:50):
location-based, like our city, our state, our country, all these things that are happening at an org level that are not incredibly intimate but still affect us in some way.
And then the final area is the world.
So Apple did a really good job simplifying this down to two

(41:13):
groups.
I use four, because I think it's better than two.
The two groups that Apple talks about are yours and the world's.
So your information is held private to use Apple Intelligence, because apparently AI now means Apple Intelligence if you work at Apple, and then

(41:34):
anything that's not Apple, which would be world, so you would have consent and approval to send your information out when you choose.
It isn't done automatically, but that's still sending your data

(41:55):
out to an LLM that's hosted by another company to get something back, rather than keeping it within your walled garden.
As we think about the future and knowledge workers, these four domains need to be managed in a scalable way for anyone to actually get value from them, and that's been the really big barrier for meaningful progress on any of

(42:16):
this.
For any one person, there's tools that hyper-focus on one of these areas, but not all four, and, as a result, it's difficult to get the right context, the right diagnosis, the right prescription, the right prediction from any of the AI models, because these things

(42:38):
aren't interconnected.
And as the interconnectedness of things grows in importance (and yes, I am punny all day long, because I'm talking not only about this podcast but about the actual interconnectedness of things), not just the internet of things, that's right.

Emily (42:58):
The interconnectedness of things.

Dr. Hutson (43:01):
All of this has to come together in a way that is unobtrusive, that doesn't overwhelm, that becomes second nature.
And once we're able to achieve that, that's when we'll really get some value from machine learning, predictive analytics, prescriptive analytics and even narrow

(43:28):
artificial intelligence.
Or, if people are correct in their assumptions, we're going to achieve general artificial intelligence in the next decade, which, based on what I'm seeing so far, I think is an optimistic target.

Emily (43:45):
Matt, do you have something to add?

Matt (43:48):
Oh sure, I mean, a couple of things.

(44:14):
One, totally with you about general artificial intelligence.
Anyone who's afraid of that becoming a thing in the next decade: go ask ChatGPT how many R's are in the word strawberry.
And then, two, I'm with Dr. Hutson about making sure that, if you want to get the most possible value out of LLMs right now, then you're going to make sure your data is accessible and strategically well thought out and easily consumable and readable.
The data folks at QFlow are amazing at putting so much forethought into the structure and the relationships of their

(44:35):
data, such that, you know, if anyone's going to get a huge amount of success out of LLM usage, it's going to be the guys at QFlow.
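
Matt's strawberry test is easy to run yourself, and the contrast explains the failure: ordinary code counts characters, while a chat model predicts over tokens and never "sees" individual letters. In Python:

```python
# Deterministic letter counting: three R's in "strawberry", every time.
print("strawberry".count("r"))  # -> 3
```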

Dr. Hutson (44:44):
Oh, that's sweet of you to say.
That's definitely a big thing we're trying to help everyone solve, and, as we think about a way forward, and we start helping others frame the problem and giving them that path and that solution, we hope to make it better.
But that's probably another episode, if we want to jump into retrieval-augmented generation

(45:08):
and knowledge graphs.

Emily (45:11):
Listeners out there, if you would like to hear more about that topic, let us know.
We'd love to hear some feedback from any of our listeners out there, but I feel like that's a perfect place for us to end today's episode.
So I wanted to thank you, Matt, so much for being on the show today, and I wanted to give you a formal place to talk about

(45:34):
your work at DragonOps and the amazing company that you've built.

Matt (45:39):
Sure, yeah.
Well, first of all, you're very welcome, and thank you as well, Emily, and you, Dr. Hutson.
It's been an absolute blast to talk about this stuff with you and to go on this journey with you.
Yeah, so, DragonOps: you know, we're aware of these problems

(46:05):
that we've been talking about, where people want to jump on the AI bandwagon but they don't know how, or they're afraid of misusing it.
We're there to hit that, you know, sweet spot, and to give you that guidance and to help you use these tools the right way, so that you can get all the benefits that other companies in your industry are gaining, without all of the trouble and hassle that comes with it.

Emily (46:26):
Totally, and where can people find you?

Matt (46:29):
Yeah, so we're right at dragonops.io.
Right there, hit us up and we'll get you back.

Emily (46:37):
Well, there you have it.
So thank you all for joining me on this podcast.
Thank you, Matt. Thank you, Dr. Hutson.
Be sure to tune in to our next episode, where we'll dive into more cutting-edge topics.
And if you enjoyed today's discussion, don't forget to subscribe to our podcast and share it with your network.
Thanks for listening.