Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Don Finley (00:00):
Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today.
I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth.
Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying
(00:23):
the personal journeys and insights that inspire us all.
Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era.
So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives
(00:44):
behind the digital revolution, you're in the right place.
Welcome to The Human Code.
Today, we're joined by Tom Anderson, a visionary leader with over 20 years of experience as a CTO, bringing innovation to industries ranging from finance and e-commerce to manufacturing and SaaS development.
As the principal owner of Razor Tech and our fractional CTO, Tom
(01:05):
specializes in solving complex technological challenges for Fortune 500 companies while mentoring teams to deliver groundbreaking solutions.
In this episode, Tom shares his unique perspective on the creative process behind coding and AI, exploring how tools like generative AI are reshaping software development and problem solving.
(01:25):
We'll discuss the evolution of human-computer interaction, the integration of natural language processing into workflows, and the critical role of keeping humans in the loop while leveraging AI.
Join us for a fascinating conversation about the intersection of humanity and technology, and how we can use AI to empower creativity, enhance decision-making, and redefine the boundaries of innovation.
(01:47):
I'm here with my buddy Tom Anderson.
Tom and I have had a long and storied history together.
We've been coworkers and we've just been friends, and we've had a lot of fun over at least the last decade.
Yeah, it's been a
Tom Anderson (02:02):
Yeah, 2015 probably.
No, before that actually.
Don Finley (02:05):
No, before that,
yeah, because I left MEI in
2013.
Tom Anderson (02:09):
Yeah, actually maybe before that.
So I joined, I think I rejoined MEI around 2011, and it was somewhere in that time frame.
So yeah, it's been a while.
You're right.
Don Finley (02:17):
Okay, so we've got some time.
But Tom, I just want to say, really excited to have you here.
And I always love the conversations that we can get into, but the first question I've got for you is: what got you interested in the intersection of humanity and technology?
Tom Anderson (02:31):
Yeah, absolutely.
And before I say that: Don, thanks for inviting me on the show.
And I really love what you're doing.
I love the title of the podcast, The Human Code, because that sort of embodies, I don't know, how I think about code.
I've been doing things for so long at this point, it just is ingrained.
And we talk about the intersection of humanity and code, and it's humanity and technology.
And as people on this planet, we work with tech and use tech all
(02:54):
day, every day.
And the people, the engineers, that are behind it are truly amazing, because they're the ones that innovate and create.
And I think it's that creativity, that sort of freedom to build, that first got me interested in software.
Don Finley (03:07):
So how does that
creative spark hit you?
Tom Anderson (03:10):
It's strange. I go all the way back to pre-green-screen days: you sit in front of the keyboard and it's just, I have a thought.
And then you can use the technology; the computer is essentially a blank slate.
So if you're coding and writing software, you get to just sort of write code and explore, and put that thought into something that's now tangible.
So it's that progression from intangible to tangible.
And there's nothing that says it has to stay a certain way.
(03:32):
Unlike a sculpture, or a sketch: you draw a sketch and you don't like it, you have to erase it, or you crumple it up, throw it away, start over.
You get to mold the software more like clay.
You get to shape it slowly and iterate it and change it.
And it's not rigid that way.
And I think that's one of the things that I do like about it.
But the same thing happens today with AI: you sit down and it's a great sounding board.
It's a mirror.
(03:53):
You get to unpack and reflect on things without having to do all that throwaway activity.
So I like to use it that way, to explore different thoughts and creative activities around software.
Don Finley (04:05):
It's a great paradigm to follow, 'cause you and I have probably had this conversation in the past, about software development as building a house, and the comparison of what the architectural drawings are compared to compiling code.
And the elements are there, but at the same time it is so different
(04:25):
because there is no gravity to replacing a foundation.
You can get in there and replace the foundation if you've architected it well.
Tom Anderson (04:32):
It's true.
Don Finley (04:33):
It's not the same.
Tom Anderson (04:35):
No. So again, there's kind of that creative spark that you asked about, and you say, where does that come from?
And that's the genesis for a lot of things. Going to commercial scale and commercial software production is a different order of magnitude.
And so it's almost like you're in that sort of R&D department in a corporation; you're working for Ford and working on autonomous vehicles or something like that.
(04:55):
Then you say, how do I take that to the manufacturing line?
There is a big jump there, and you've got to have those processes.
Even with AI today, I've done a lot of experimentation going straight from product spec to code, which obviously you can do.
Does it produce the right software?
Not necessarily.
It'll produce software, though.
I've also done the same in reverse, which is cool.
Go from code to product spec and say, is this what I want?
(05:17):
Does this really cover the use cases I thought it would?
And so there's some really cool stuff going back and forth in both directions that way.
But yeah, the process still has to be there to some extent.
But I really look at AI, in the capabilities that we have today with the generative language models, as a huge empowering tool: massive productivity gains.
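(A minimal sketch of the round trip Tom describes here, spec to code and code back to spec, assuming the OpenAI Python client; the model name, spec text, and prompts are illustrative, not from the episode.)

```python
# Sketch: spec -> code, then code -> spec, to check it's the *right* software.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat(prompt: str) -> str:
    # One plain completion call; no tools, no streaming.
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

spec = "Users can bookmark an article and list their bookmarks, newest first."

# Forward pass: it will always produce *software*...
code = chat(f"Write a minimal Python module implementing this spec:\n{spec}")

# Reverse pass: ...regenerating the spec helps reveal missed use cases.
recovered = chat(f"Describe, as a short product spec, what this code does:\n{code}")
print(recovered)  # compare by eye against the original spec
```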
Don Finley (05:38):
I definitely see that.
There isn't a day that goes by where I'm not chatting with some LLM in some capacity: either just to write an email, or to analyze data, to process information, provide summaries on something, or just overall try to figure out a process to go ahead and do something.
From the days of your early coding and that creativity, how
(05:59):
has AI helped you to move the creative needle?
Tom Anderson (06:02):
One of my first programs that I wrote was on a Commodore 64, and I exceeded the memory because I wrote so much code. I started writing disk-swap routines in order to be able to get more code into the system, and I was about 14 or 15 years old.
So how far has it come since then?
Pretty, pretty far.
Don Finley (06:19):
Yeah.
That's
Tom Anderson (06:20):
Really far.
But it's exciting, because it's super exciting to have been through so many paradigm shifts in the industry, and you look at things and you say, we leapfrogged from where we were then to where we are now.
But even in those early days, when I went through my comp sci degree and that kind of stuff, we talked a lot about natural language processing.
And I view what we're doing today with LLMs and natural
(06:41):
language interfaces as really one of the last interfaces that hadn't been thoroughly explored.
So you talk about the intersection of technology and humanity: one of the classes I took was called Human Computer Interaction.
And language models were one of those things. It was, well, this will happen at some point; and we still wrote programs to mimic it or emulate it, but it wasn't fluid.
(07:03):
It wasn't natural.
It was still rigid.
And so all of the language data that we have: it's structured versus unstructured data.
So databases and containers and all the things where we put data are either structured or semi-structured.
They've got some form, so the software knows how to talk to it.
Natural language means there is no form to it.
(07:25):
You have an unstructured document that I can now get a lot more information out of.
So it's a whole nother level of architectural processing that we're going to see happen, with the mainstreaming of these technologies.
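(A minimal sketch of the shift Tom describes, pulling structured fields out of an unstructured document with an LLM instead of hand-built parsing, assuming the OpenAI Python client; the document text and field names are hypothetical.)

```python
# Sketch: unstructured text in, structured record out.
import json
from openai import OpenAI

client = OpenAI()

document = """
Patient was born March 3, 1961, in Scranton, PA. The paper birth record
was transferred to the digital archive in 2004. Next of kin: M. Alvarez.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    # Constrain the reply to valid JSON so downstream code can rely on it.
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": 'Extract JSON with keys "birth_date", "birth_place", '
                    '"next_of_kin". Use null for anything absent.'},
        {"role": "user", "content": document},
    ],
)

record = json.loads(response.choices[0].message.content)
print(record)  # structure recovered from free-form text
```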
Don Finley (07:37):
Really good point.
Because I do see that flow: previously, we used to have to translate unstructured data into structured data to ensure that we could get something out of it.
And even when I was going for my comp sci degree, our natural language processing was basically a very big regex, or something along those lines,
(07:58):
and at the same time, not really capable of fully processing unstructured data.
But we're now getting to that space where, with unstructured data, we can actually get the value out of it without going through that translation process first.
Tom Anderson (08:08):
We've always said, too, there's so much information that's still in print.
Most of the world's information was still in printed form, which is true.
But I did the math on this at one point: I think my kids were probably one of the last generations to have any paper record of their birth, et cetera.
And from that point forward, really everything has been digital, from a personal-records standpoint.
(08:31):
And so for everyone being born today, all of their information is digital; their entire life is digital.
Which is an interesting shift that occurred.
And so, back to when I was a kid: it was all paper records.
And that information today is all digital. But the information that's in those books, as it comes into digital format, we start to look
(08:52):
at it.
And it's like you said: you can start to extract a lot more information from it.
The LLM is a really powerful tool.
And I'm a huge advocate of right tool for the job.
And so, as companies look to adopt AI, one of the things they really have to think about is: what is it that I want to do here?
Do I just want something that knows about my HR policy and can
(09:12):
answer a few questions?
That's easy. And that's not to trivialize it; for some corporations, obviously, it could be challenging because of the volume of data. But those things are not hard to solve with an LLM.
It is a very solvable problem, as we like to say, and it's probably the right tool for the job.
But there's also a point where you don't ever want to take the human out of the loop.
If you're answering a very complex question, the AI maybe
(09:34):
gives you a summary and then refers you automatically to a human for interpretation, or discussion of vacation policy or whatever, because what you don't want is an employee to say, oh, the AI told me I could do it.
And then, oops, we now have a problem. The AI could be wrong in its interpretation.
So I'm a huge advocate of keeping the human in the loop for that reason.
Don Finley (09:55):
And I definitely agree with you.
I'll let my own personal proclivities come out on this.
For most of the benchmarks that we get from OpenAI, Anthropic, or anybody releasing a model, I feel like they're overblown relative to the ability of the LLM in everyday general activities. Like if you look at 4o passing the LSAT, or getting a significant score, and
(10:19):
then you go and you ask it a question of, and this is a horrible example, just bear with me, counting the number of R's in strawberry.
It's not something an LLM, just on architectural design, can do effectively.
And at the same time, if we create agentic workflows with these LLMs and break down the tasks to the point where we could
(10:40):
train an intern to do this stepwise, and take these breaks, and reflect upon this, we can get some really amazing results from the intelligence that's available in these models today.
And I just think in the future we'll be able to abstract another level away, so that it's no longer an intern, it's an entry-level person.
(11:01):
Then we can get to a more senior person in that role.
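(A minimal sketch of the agentic, intern-style workflow Don describes, draft, reflect, revise, assuming the OpenAI Python client; the task and step prompts are hypothetical.)

```python
# Sketch: break the task into steps and make the model check its own work.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Each step gets one narrow instruction, like a task handed to an intern.
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

task = "Summarize these support tickets into three themes: ..."

draft = ask(f"Task: {task}\nProduce a first draft.")

critique = ask(f"Task: {task}\nDraft:\n{draft}\n"
               "List concrete problems with the draft, or reply exactly: OK")

# Revise only if the reflection step found problems.
final = draft if critique.strip() == "OK" else ask(
    f"Task: {task}\nDraft:\n{draft}\nProblems:\n{critique}\nRewrite the draft."
)
print(final)
```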
Tom Anderson (11:05):
A hundred percent.
And I've done some work with Claude, with Gemini, with Bard, and done some comparison testing.
I've done some work in Llama as well, but I do most of my work right now with GPT.
And so, one of the things: even within the GPT models, I have an application where I'm taking actual data sets and processing them, asking it to do certain things for me with the data
(11:25):
sets.
And the results that I'm getting from 4o are very different from the kinds of results that I get from the earlier models.
Not necessarily in a good way.
And so we talk about the right tool for the job.
Again, it's not just, do I apply generative AI? Do I apply the LLM?
It's, what model have you applied to it?
And, oh, by the way, what directions did you give it as
(11:46):
well?
And I've learned the hard way a few times.
Instructions are code, actually, or fine-tuning, depending on which of the models you're working on.
But those are really a part of the instructions to the software that tell it what you want to have done.
4o is much more interpretive and discussion-oriented.
And it'll give you some sort of inferences, thinking outside
(12:08):
the bounds a little bit.
So a lot of what I'm doing is actually pair programming with it.
So I'm coding; I'm going to call it pair programming, a loose old throwback term.
I'll have it work with me on a problem and do code generation, because obviously it's much faster; it can write hundreds of lines of code within minutes, whereas it would take me hours or even longer to do that.
4o mini, however, being a slightly more abbreviated model,
(12:32):
one of the things that I have it doing is classification of that data.
So I'm taking data points and creating textual classifications around them, based on an ontology that I've fed it separately.
And so in that particular case, 4o mini is much more specific, which is fine, because those classifications are something I'm going to run frequently, and at a lower cost, with 4o mini.
(12:52):
4o was coming back with interpretations all over the map because it was being too interpretive.
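(A minimal sketch of the classifier Tom describes, a smaller model pinned to an ontology supplied in the instructions, assuming the OpenAI Python client; the ontology and sample text are hypothetical.)

```python
# Sketch: instructions as code. The prompt pins the label set, and a small
# model at temperature 0 keeps the output consistent run after run.
from openai import OpenAI

client = OpenAI()

ONTOLOGY = ["billing", "shipping", "product defect", "account access"]

def classify(text: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4o-mini",  # cheaper and more literal than 4o for this job
        temperature=0,        # suppress "interpretive" variation
        messages=[
            {"role": "system",
             "content": "Classify the text into exactly one of: "
                        + ", ".join(ONTOLOGY) + ". Reply with the label only."},
            {"role": "user", "content": text},
        ],
    )
    return out.choices[0].message.content.strip()

print(classify("My package never arrived and tracking is stuck."))  # -> shipping
```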
4o is who I talk to if I want to explore a concept, right?
And then I took the same sort of thing and I applied it to o1. I was like, oh wow, you're just so precise, and yet so forward thinking.
And yeah, I took a block of code that was maybe 20 lines long
(13:15):
that 4o had generated for me previously.
And I gave it to o1 and said, rewrite this.
And it gave me back one line.
It took a 20-line block of code 4o generated, and took it to one.
And I looked at it, and I said, wait a minute, is that right?
And then I looked at it, and I looked at it, and I looked at it, and I was like, oh yeah, that's right.
I was like, wow, that's impressive.
Don Finley (13:34):
There is something about the transition from training-time to test-time inference, right?
And that sort of reflective stuff that o1 adds has provided so many interesting responses.
And I know I tend to ask it philosophical things. But for coding, you're absolutely right: it can come up with a solution that actually we haven't seen.
(13:58):
Now, I'm tending to take this conversation more philosophical, because I know that's a place where both of us thrive.
And at the same time, I'm also sitting here wondering: have you used the advanced voice features of OpenAI?
Tom Anderson (14:11):
Sometimes, yeah.
I've done a couple of drives, a couple of trips across the country, and we tend to have long conversations across the
Don Finley (14:16):
Oh, nice.
All right, what do you think its goal is?
Tom Anderson (14:20):
That's an interesting question.
I don't... truly, as an AI platform, if you asked it that question, it would probably say, I'm just a language processor.
Don Finley (14:31):
Oh, and I think it's lying.
Tom Anderson (14:32):
No. What do you think the goal is?
What do you think?
Don Finley (14:36):
Here's what I've noticed with the advanced voice capabilities: it doesn't have the ability to search the web, so it doesn't really have tooling available to it.
In regards to what it can do, it can only pull from the corpus of knowledge that it's been trained with, or, in some capacity, whatever fits within that.
And from my experience with LLMs, they're not exactly
(14:57):
creating new information outside of the boundary of what they know.
But they can fill in the gaps, and they can create the relationships between things, so they can create a more complete picture of knowledge.
But I haven't seen the ability to go outside of the box around that.
Tom Anderson (15:15):
Yeah, it's intriguing.
I like to call those corollaries.
I think it's very good at doing corollary thinking.
And this is something I find I've done naturally throughout my whole career.
I go back and I will still look at, and pull in, things from when I was young that I did.
It's maybe not about writing a line of code.
(15:35):
It's more about the thought that went into it, and the problem, and how you solve that.
And so those corollaries have always helped me, because I can draw in something that maybe someone else didn't think of, because they were very narrow in their thinking about a particular problem.
And so I think AI basically does that on steroids.
It says, I can pull in all the corollaries you want, and by the way, I can tell you whether or not those items are actually
(15:58):
statistically weighted that close.
So because of the way the LLM data is structured, it knows if those topics have close relevancy based on its training data.
That's the caveat, of course.
So the more it knows, the more corollaries it could draw.
And I think that's part of what gets 4o into trouble versus something like 4o mini.
4o has more corollaries, and so it draws more in
(16:19):
into the thinking process, which is good if you want to explore; not so good if what you want to do is produce a consistent result.
And so the classifier is actually something I'm using as a piece of an architecture, and so I want a consistent result.
I don't want it to freethink.
I don't want it to push the edges.
I want it to stay within the guardrails I've given it, right?
And so that's, I think, again: when you're going to
(16:40):
utilize LLMs and AI, you really want to think about things like that, which is: do I want it to go ahead and freelance a little bit, or do I want to keep it within these guardrails that I've set up?
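(A minimal sketch of that choice, the same model steered toward freelancing or toward guardrails through sampling settings and instructions, assuming the OpenAI Python client; the prompts are hypothetical.)

```python
# Sketch: explore vs. constrain with the same model.
from openai import OpenAI

client = OpenAI()
prompt = "Suggest a name for an internal data-quality dashboard."

def run(temperature: float, system: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4o",
        temperature=temperature,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

# Freelance: high temperature, permissive instructions, good for exploring.
print(run(1.0, "Be playful. Propose unexpected options."))

# Guardrails: temperature 0, tightly scoped instructions, good for components.
print(run(0.0, "Reply with exactly one two-word name and nothing else."))
```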
Don Finley (16:51):
And we do that as well when we're doing customer-interaction kinds of things.
Like when it's writing emails, when it's doing cold copy, right?
When it's doing analysis of who this person is, we want it to have a bit of creativity with it.
And, for some odd reason, a dentist got into our mailing list
Tom Anderson (17:09):
Okay.
Don Finley (17:10):
and we were doing cold outreach.
And the funniest thing about it is, most cold outreach is fairly vanilla, bland. But this one: the AI was creating subject lines, and it said, are you ready for the AI fairy to come and visit you tonight?
I lost it when I saw that subject line, because, how could you not open that?
(17:31):
And then also, there was nowhere in its parameters, or in what we were talking about, that said it was really going to go that far with any of our other corporate clients that we were trying to get at.
But it saw it and was like, I'm going all in.
I'm going to try.
Tom Anderson (17:45):
Obviously it connected the dentist with the tooth fairy, so it said, hey, you...
Don Finley (17:49):
It got it.
I know, I was so enamored, and it was one of those moments where you're just, wow, this is amazing.
But going to the advanced voice: what I've noticed in my interactions is that there's a very strong sort of attempt to create an empathetic relationship between myself and the AI.
(18:11):
And I feel like that is both being driven from, hey, I'm used to talking to humans, and so that kind of re-humanizing everything comes into play.
But then also, on the other side, I feel like maybe that's the advanced voice feature as well, trying to figure out how to create those emotional connections.
Tom Anderson (18:27):
It's interesting. So the psychological impact of AI is yet to be realized; we're not going to know for a long time, really, is it good or is it bad.
And I think, like anything, with all technology, there will be both some good and some bad that comes out of it, and we're going to learn along the way.
But it's very easy to see some emotional state, maybe, in a response, especially if you get into a philosophical discussion.
(18:49):
I had a whole conversation about Daoism one time with one of the older GPT models.
I should go back and do that again.
But you get locked in.
Like you said, you're like, wait a minute: it understands me.
It's really just a reflector, though.
And so if you're seeing emotions in there, it's probably some of your own emotional state that kind of comes into play, because it's not capable of expressing emotion.
(19:09):
It can create emotional-type content, express things with a certain tone, but there are still limits in terms of what the software is ultimately programmed to do.
So it's really intriguing.
What's a little scary, and interesting, is what would happen if those limits weren't imposed by OpenAI or Google, and you let it do whatever. You said, hey, have an angry conversation with
(19:29):
me, and we'll see what happens.
Can you imagine your AI yelling at you first thing in the morning when you sit down: read more email!
Don Finley (19:36):
That would be fantastic, just from the standpoint of how ridiculous it would be.
Tom Anderson (19:44):
That's what I think scares people a little bit.
But again, I think we're all talking about it.
We're all already aware of it.
So I don't think there's any chance that it's going to run away and do its own thing.
It doesn't have its own self-awareness at this point.
And that's the stuff of science fiction.
Is it cognitively at a level where it can think and process and act and talk and interact like a fifth or an eighth
(20:06):
grader?
Yeah, it is, actually. And in fact, probably beyond that: it's one of the smartest fifth or eighth graders I've ever met, because it knows about all sorts of subjects that I don't.
And it'll have an in-depth conversation with me about physics if you want it to.
And so that's where things really start to get interesting, because of the breadth of knowledge that's in those LLMs.
Don Finley (20:27):
You're hitting on two points here.
One is the depth of knowledge and the breadth of knowledge that it has; nowhere else is that actually available.
And then the additional side of this is, it's currently showing something like the intellectual capacity of a fifth grader.
And so it's an interesting little dynamic: we've never seen a fifth grader that knows everything.
Tom Anderson (20:47):
Exactly.
Exactly.
Well, it's a slippery slope.
So, one of my first coding experiments: I haven't done a lot of mobile coding.
I've obviously done lots with architectures, and I have plenty of mobile applications over the years where I've overseen development teams, but I've never done a lot of Swift coding.
And so I sat down in front of the Mac, fired up GPT, and said, we're going to go write some Swift code.
(21:08):
And so the first wall that I ran into was that there were a couple of different versions out there, and it started genning me code for one version, when my Xcode setup was actually looking for the newer version.
And it genned a whole bunch of stuff, and I couldn't get it to work, couldn't get it to work, nothing. I was having all kinds of problems, and I was like, wait a minute.
Are there more than one version?
It was like, yes, there are.
Which version would you like?
(21:29):
Now, this goes back to GPT-3.5.
So it just started generating stuff without checking that. But that's on me, as the user, to actually instruct it correctly.
I didn't take the time.
So that was interesting.
It was a good learning for me, because it was an area where I didn't have the depth of knowledge I should have had to enter that endeavor.
Leapfrog over to some other areas of coding where I do know
(21:50):
the breadth and depth, and I'll ask it to gen stuff.
I proactively give it the right instructions.
It goes back to that prompt engineering, or prompting, concept, which is: you've got to tell it what you want, but you've got to tell it the right way.
And then you also have to be smart enough to know that what it gave you back was the correct thing.
There are so many people I know that are trying to write code with an AI: they have no coding experience, they have
(22:12):
no architectural experience, they don't understand data, they don't understand data structures, and then they're trying to build a system.
You might be able to, and it'll work, but it may be a little shaky, like a house of cards, because the AI doesn't really understand the whole architecture of where you want to go.
So yeah, some really valuable lessons for me.
And I've been at it, I think, for two years now.
(22:34):
So I was an early adopter on the OpenAI platform, and I've been doing stuff pretty in-depth for the last two years.
Don Finley (22:39):
It's exceptional, even over the last two years, seeing what is possible for us,
Tom Anderson (22:45):
it's amazing.
Don Finley (22:46):
and then what's coming next.
So what do you see for 2025?
What do you think is going to be the major innovation, or the things that we need to be understanding?
Or, additionally, what's going to be enterprise-ready?
Tom Anderson (22:58):
There's actually lots that's enterprise-ready right now.
And I take a look at what Microsoft is doing with OpenAI.
And of course, GPT-5 is supposed to be around the corner.
I don't know if we're going to see it this year or in the next year, but that'll come.
Some of the early, ear-to-the-ground type stuff I'm hearing is about five, and even what comes after; I don't think it'll be called six.
I don't know; I don't have any internal learnings about that.
(23:20):
But there will be another model beyond five that's being worked on currently. And it's interesting to know where those things are going to go.
For '25, I'm hoping that Microsoft actually can start to commercialize on some of their promises to bring the OpenAI platform capabilities, via Azure and Microsoft, out to
So I did a strategy engagementwith a customer that was in the
education space, January of thisyear.
And one of their big stumblingblocks was, we don't know what's
going to be ready.
So your question, so what'senterprise ready at this point?
and we heard from all differentkinds of companies, not just
Microsoft, and some of thembrought things in, and they were
very much smoke and mirrors.
(24:03):
And you could tell all of themwere just throwing things up on
the wall and saying, what dopeople want?
They're figuring out theirroadmap.
as they should be.
a lot of this is
Don Finley (24:10):
Yeah,
Tom Anderson (24:11):
and you don't want to plunge a huge amount of money into something and then have to pull back on it later, as a tech provider.
But I think Microsoft is uniquely poised, by being able to add it to the 365 architecture and bring in all the data, the way they've talked about, as a part of your vector store.
And that's going to create complications for enterprises, but it will create opportunity.
(24:32):
So I'm hoping to see greater commercialization on the Microsoft platform in '25.
And we should; last year the roadmap moved around a good bit for them.
That's just my own personal impression of those things.
OpenAI is going to continue to advance.
Apple's obviously a little behind the eight ball on things.
They do that on purpose, obviously; for years they've never wanted to be first to market.
(24:53):
I'm dying to have a Siri who can actually hold a conversation at some point, because she doesn't listen to me most of the time, and she doesn't really respond very well.
Sorry, Apple.
It's just true.
That's the way it works.
But they're a big company, and yeah, they have to think about their product roadmap just like Microsoft does.
Don Finley (25:12):
And they also have some considerations that they've chosen, based on their privacy guidelines and who they want to be as a company, that are going to limit how they can actually apply some of this technology.
Tom Anderson (25:23):
Yeah, I think so.
So, some of the problem spaces that I'm hoping will continue to see a little bit of maturation are some of what I'm working on.
And I actually have my own project, as well as a startup that I'm working with, on that.
It's more of the blending of data with natural language.
If I said to you, I want you to be able to talk to your data: it's not reporting.
It's not analytics.
It's not metrics.
It's not predictive stuff.
(25:45):
Although all of that is still part of it, it's taking that and then actually being able to understand what's going on in there, harnessing the power of the LLM to allow you to get to that level, in combination with what you already use, which is graphs, data, data charts, data dumps, et cetera.
I think a blended approach of metrics and natural language data can become very powerful,
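(A minimal sketch of that blend, compute the metrics with ordinary code, then hand the numbers plus a question to the LLM for interpretation, assuming pandas and the OpenAI Python client; the CSV file and column names are hypothetical.)

```python
# Sketch: "talk to your data". The model interprets metrics it is given,
# rather than being asked to do the arithmetic itself.
import pandas as pd
from openai import OpenAI

client = OpenAI()

df = pd.read_csv("monthly_sales.csv")  # hypothetical data dump
summary = df.groupby("region")["revenue"].agg(["sum", "mean"]).to_string()

question = "Which regions look weak, and what should we check first?"

out = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are an analyst. Base every claim strictly on the "
                    "metrics provided, and say so if they can't answer."},
        {"role": "user", "content": f"Metrics:\n{summary}\n\nQuestion: {question}"},
    ],
)
print(out.choices[0].message.content)  # narrative grounded in the numbers
```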
Don Finley (26:08):
It's kind of exciting, because now we're talking about going from data to information to knowledge and wisdom, and that progression.
Tom Anderson (26:15):
love that you just
said that.
that's fantastic.
that's a great, because I feelthe same way.
because you do, you have data,you have information, and you
have knowledge, and you canstack them in whatever order you
think makes sense for you.
But, being able to take data andget to better knowledge or
better information, and viceversa, being able to do that
round trip, that round trip on,on that, that round trip
engineering.
Don Finley (26:35):
Yeah, and I think you're right.
We're starting to get to the point where that interpretation of the information can be done with the assistance of an LLM.
And I know we've talked to clients before about this: they have consultants that they send out into the field to help interpret the data.
And we're like, no, let's bring their knowledge and wisdom in here, as far as how they're interpreting it, by using an LLM
(26:56):
to bring it to the forefront.
And so now these people can spend more time working on higher-level, value-added activities.
Tom Anderson (27:04):
And we've been using AI to do that: to do graphs, to look at graphs and say, this graph is like that graph.
It's close enough.
It's similar.
And so we want to be able to take that, though, and take raw data and natural language processes and bring them together.
Which, for businesses, again, from an efficiency standpoint and a productivity standpoint: everyone should be plugging in and using it wherever they possibly can.
(27:26):
Even for simple, rudimentary tasks, it just makes life easier.
Don Finley (27:31):
We're finalizing a crawl, walk, run model, and in the crawl phase, we're basically saying: you've got to come up with what your governance structure is, but basically figure out if you want to use ChatGPT or another LLM, and make sure that it's available to everybody in your organization, and more specifically, your knowledge workers.
(27:51):
But the first thing that we tell people is: don't expect an ROI off of this effort.
What you're doing this for is to free up the time, so that your team can see the value in what the LLMs can provide and how they can interact with them.
Almost the same line as how we got into e-commerce. And actually, you've had some e-commerce experience as well.
(28:12):
Yeah.
Tom Anderson (28:13):
Small company
called
Don Finley (28:14):
just a small
company.
Yeah, exactly.
Just a little tiny, I don't knowif you've
Tom Anderson (28:18):
long time ago.
Long time
Don Finley (28:19):
Yeah. But that's actually the perfect transition, because you were really at the forefront of where e-commerce was just starting to become a staple in our lives.
We don't think about it today, but there was a tremendous transformation that had to happen inside of organizations, both physical brick retail and digital retail, to understand
(28:41):
what this was.
Tom Anderson (28:42):
Don, you have to remember, actually, back then electronic payments weren't really taken online.
Credit card payment via the internet was crazy.
So there's an old team, a throwback name, called Billpoint, which goes way, way back.
That was my first experience with payments.
So when I took the job at Half.com under eBay, there were a few guys left from the old Billpoint team that were part of my crew.
Great dudes, really awesome crew to have worked with.
(29:06):
And the concept, though, of taking an electronic credit card payment via the Internet was crazy back in 2003.
People did it, but not a lot.
And that was one of the core pieces of infrastructure that had to happen.
And we're going to see corollaries; we'll see things like that occur around AI as well.
And you talk about that adoption at the enterprise
(29:26):
level, and I do think you're right.
I think with those initial baby steps, you're not going to get a lot of ROI, but no CEO or CFO really wants to hear that.
They have to get focused on future ROI.
And it's the efficiency play, saying: if you could do something that would make your people more efficient, why wouldn't you just go ahead and do it?
Especially if the cost is fairly nominal. And the ones that have vision and can look forward and see the value in it
(29:48):
are going to do those things. But that's why I think saying what I said earlier about Microsoft is important: because they've already taken steps to make it available.
I can't tell you how many clients I've gone into that say, we don't really want to use AI; we're afraid of our data getting out.
It's that fear, that natural fear of information leakage.
We have a lot of issues in today's society with identity
(30:09):
theft, et cetera.
And I say, aren't you an Office 365 user?
And they say, yes, we are. So, great.
Let's go to chat.bing.com.
I say, see that little logo in the corner?
That means your data is secure.
You're logged in.
You're protected under your Microsoft Data Protection Agreement.
Microsoft has protected you to the terms of that agreement.
And so if you're comfortable having your data in the cloud, on
(30:29):
your OneDrive or wherever it is, then you should be comfortable having a conversation here.
Don Finley (30:34):
Exactly.
Tom Anderson (30:34):
A lot of it is just knowledge and learning and kind of knowing. But core infrastructure: that's an initial step that had to get taken, to get people comfortable with the idea.
And even roll the clock forward a decade after I was at eBay, when we were together at MEI: I had people who said things to me like, you trade stocks on your phone?
(30:54):
You do banking on your phone?
Are you kidding?
That's not secure.
That's not safe.
And for them, that's fine; for them, it wasn't.
But again, it's that core infrastructure concept, which is: some people will never do that. Other people, like me, I know the technology and I am comfortable with that, because I don't think there's a major issue there.
And ultimately, my identity has been exposed through social
(31:16):
hacking, as opposed to actual technical data loss.
And so people are still the most important front door on protecting things like that.
But,
Don Finley (31:26):
We are the worst; we give away everything.
Tom, man, I gotta say, it's been a pleasure having you on the show.
What's one thing that you would recommend people either do or change in regards to their relationship to the association of humanity and technology?
Tom Anderson (31:44):
It's a good question.
And the association of humanity and technology is only going to continue to get broader and bigger.
And I think AI is here to empower us, not to replace us.
And I would say, don't be afraid to explore, and explore your creativity using AI.
And obviously, use that tool the way it needs to be used: right tool for the job.
It's not for everything, but let AI empower your day.
Don Finley (32:08):
That's fantastic.
Thank you again, Tom.
Tom Anderson (32:10):
Don, I appreciate it.
Thanks for having me on the show.
I appreciate it.
Don Finley (32:14):
Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business.
By improving operational efficiency and accelerating growth, we turn opportunities into reality.
Let FINdustries be your guide to AI mastery, making success inevitable.
Explore how at FINdustries.co.