Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Welcome to Tech Travels, hosted by the seasoned tech enthusiast and industry expert, Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your seasoned guide through the ever-evolving world of innovation.
Join us as we embark on an insightful journey, exploring
(00:27):
the past, present and future of tech under Steve's expert guidance.
Speaker 2 (00:32):
Welcome back, fellow travelers, to another exciting episode of Tech Travels. Today we're excited to journey into the heart of technology and innovation with one of the most brilliant minds in the industry today, Ian Harris. Having shaped product and technology strategies for leading global firms, Ian's insights turn complex concepts
(00:53):
into accessible knowledge, and his work is pivotal as we navigate AI, content creation and the business frontier. Ian, it's fantastic to have you here. Could you give us a glimpse into your journey and what tech innovations you find the most thrilling today?
Speaker 3 (01:11):
Steve, great to be on the program. I really appreciate you having me along. I've spent many years helping companies build platforms, helping them build technology that delivers the services of their business to their customers. As you can imagine, a lot of that is in translating what
(01:31):
businesses need and turning it into tech-speak so that engineers can build the thing that the businesses require, and really understanding, from a business perspective, what their requirements are, so that when we actually deliver something, it gives them the real benefits that they need, rather than what they think they need.
And so I think the thing I've gathered over the years is that
(01:52):
there's often a disconnect between what technology can bring and what businesses really need, and I think we're at a very interesting crossroads right now, especially with the emergence of these new AI technologies, as we're all trying to work out what it means in terms of these capabilities and what it means for business. What could we use it for?
(02:13):
How is it going to affect jobs? What will it change in the economy? I think that if your listeners have enjoyed a few of your previous podcasts, they'll be up to date on what this AI thing is all about, but I think it'd be great to explore today what this means for us in business and even in our personal lives and our jobs. How is this going to affect us on a day-to-day basis?
Speaker 2 (02:35):
Yeah, absolutely, and I think the intersection of business needs and technological capabilities is a pressing topic, especially with AI rapidly advancing. There's a lot of debate around its impact on the workforce. Just recently I saw a report on this: the Stanford Social Innovation Review, for instance, said that AI could significantly reshape the job market.
(02:58):
So let's delve into this and really demystify what this means around AI for businesses and for individuals in practical terms, and how it's shaping jobs, the economy and our daily lives.
Speaker 3 (03:12):
Yeah, it's a good question. So let's just take a step back for a moment. When we talk about this AI revolution, I'm seeing a lot of comparisons between this and the Industrial Revolution. So let's just be clear about what we're talking about here. What do we mean by the Industrial Revolution? What do we mean by the AI revolution?
(03:33):
If we go back a few hundred years, so kind of mid-1700s to mid-1800s, we had a fundamental change in the way we as a society decided how we wanted to work, and the transformation there was really from a bunch of people providing services, agriculture, and broadly providing food and small
(03:55):
amounts of services to local people. So you had farmers that were providing food for locals, and you had not much more than handiwork, really, done at home: making clothes, making shoes, making things on a one-by-one basis for people that, I expect, most of these artisans would
(04:17):
know, the people that they were selling their products to.
So that's where we were.
And then came the Industrial Revolution, which was brought about by some fundamental technology changes, and that's where I think the kind of link comes in. Well, we went from making a small amount of something for a small number of people to making a large amount of something for a large number of people, and so we went from broadly a
(04:40):
distributed workforce across an entire country to moving a lot of people into cities, because, despite the technology, we needed a lot more people in factories making these things, and then being able to distribute these things to more people in more places. So a much more global audience rather than just the local people that you knew.
So that's our Industrial Revolution.
(05:01):
So technology changed, but it was a fundamental shift in how we as humans on this planet operated, and it took place over probably 60, 70, 80 years. So it took a long time from beginning to end; that's what we consider the Industrial Revolution. When we're looking at the AI revolution, it's only been a few months, actually maybe a year, that we've had this
(05:23):
capability, although we've been talking about AI for many years, and it's certainly been helping us for some time. It's helping us with translations, it's helping us with search, it's helping us with some fundamental tasks that, broadly, humans are not very good at.
It's very difficult to do translations. It's very difficult to do search. It's very difficult to gather lots of information from
(05:46):
lots of different places. But now we're at a point where AI is starting to do things that seem a lot more human-like, that seem to be able to do things that humans actually think they're pretty good at on average, and it can do them at a level where, actually, that's not too bad.
And so there are two areas that are probably worth talking about
(06:09):
there. The first is large language models and their ability to manipulate and generate text, so generate actual words that make sense in a sentence on particular topics. The second is graphics, so being able to create new, novel graphics, almost photo-like realism, from a textual
(06:30):
description. And now we're starting to see actual videos being made, so stringing those individual frames together and creating a video as well.
So we've got this situation now where we're starting to get technology that feels much more like a human doing human stuff. I mean, even a few years ago, if you said that you could type
(06:51):
a few words in and get something to generate a poem, or generate a couple of blocks of an essay or a news article, or summarize text, you'd be hard pressed to find something that could do it, do it well, and could fool a human into thinking some other human had actually done it. But now we're at that point.
(07:12):
So what does that mean in terms of a revolution? What does that mean in terms of our workplace? Well, if we are someone that generates text for a living, like a content writer, and we now have the ability to type a few words in and get, in a few seconds, pretty much an equivalent sort of text that's good enough for many purposes? Yeah, absolutely.
(07:33):
That's going to change the way that I do my work, because me, as a content writer, I'm directly threatened by something that can do pretty much what I do, or at least some people think it's a broad equivalent.
Okay, so that's interesting. As a graphic designer, if I have a company that's building logos or pictures, or generally puts out ads and needs
(07:54):
some images, instead of hiring a photographer or employing a graphic designer, maybe I can just type in a few words and get a bunch of images that are, yeah, actually that's pretty good, that's close enough for what I need. And the difference there is that we can now do that at a cost that's a tenth, a hundredth of what it would take to get a human to do the same thing.
(08:14):
So I think that's where people are getting the sense that there are some threats to jobs, some sort of revolution happening. Now, the equivalent to the Industrial Revolution, which took place over many, many years, over a long period of time, and fundamentally changed our society? I don't think it's at that level, in terms of a
(08:35):
fundamental shift in what we do everywhere, and I think there are probably a few different analogies that are better at describing what we're doing with AI. Steve, do you know the fairy story of the elves and the shoemaker? Do you know that story?
Speaker 2 (08:51):
No, no, I don't,
please, please tell me.
Speaker 3 (08:53):
So if you find a kid's book with fairy stories in it, in this particular story the shoemaker and his wife are down to their last piece of leather; they're kind of done, they're pretty poor. They put it out one night and, for reasons unknown, a couple of elves rock up and make a beautiful pair
(09:17):
of shoes, perfectly stitched, because the elves are tiny. The shoemaker and his wife wake up the next morning, find this beautiful pair of shoes and put it out in the window, and someone walks by and goes, oh my goodness, what a beautiful pair of shoes, and pays a lot of money for it. Then they buy two pieces of leather, leave it out the next night, end up with two pairs of shoes and sell another pair of
(09:37):
beautiful shoes, and over a few nights or weeks, it's not really clear in the story, the shoemaker and his wife go from being poor and destitute to doing very well for themselves. And at some point they go, hang on a second, let's work out what is going on every night, so they stay up and watch what happens and realize that these elves have been making these shoes for them.
(09:58):
And so they burst in and want to say thank you. So they make some clothes for the elves, who don't generally get clothes made for them. They're very thankful and they go off on their way. And so you've got this situation where the shoemaker is using a magical force in order to produce better goods than they could on their own.
(10:19):
And so, if you look at it as a kind of analogy of the Industrial Revolution, we're going from just being able to do things with your hands to using some magical force, industrial tools and mechanisms, to be able to do something more productive or faster, or make more of them.
For sure.
But it's actually a better analogy for the AI revolution
(10:39):
that we're seeing at the moment, and that is, I think, what AI gives us. A good example is computer programming: it's a kind of force multiplier, so whatever energy you put into it, you can get even more out of it than you could before, because something is giving you a boost.
And so when AI is used for computer programming now,
(11:01):
instead of having to sit there and type every character that the programmer needs to put in to convey to the computer what it needs to do, or indeed copying a chunk of code off the internet and then customizing it to your requirements, which is a bit more efficient, you can now tell it: I need a piece of programming that does this particular task for me, using this particular
(11:24):
language, and here are the things you need to know, and then it goes and types it up for you. Then I can grab that chunk of code that, basically, is completely customized to my requirements and move on to something else. And now I'm dealing with the whole program at a much higher level, without having to worry about individual variables and lines of code or even the syntax.
(11:45):
I don't have to worry about it anymore because that's been taken care of for me, and so it's much more like a force multiplier in that respect. So it gives me much more power. I'm still in control of the program, I still have to make sure it works, I still have to pull it all together and make sure it makes sense, and I still have to understand the business requirements. But now I can get so much more done than I could in the same
(12:07):
period of time, because I have this magical force behind me to do something that wasn't there before, and so I think that's an interesting comparison with what we're talking about here in terms of the AI revolution.
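To make that force-multiplier idea concrete, here is a minimal sketch of the workflow described above: asking a large language model to draft a routine from a plain-English description, which the programmer then reviews, tests and integrates. It assumes the OpenAI Python client and an illustrative model name; any chat-capable model would serve the same role.

```python
# Minimal sketch: an LLM as a coding force multiplier.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

task = (
    "Write a Python function dedupe_emails(rows) that takes a list of dicts "
    "with 'name' and 'email' keys and returns the list with duplicate emails "
    "removed, keeping the first occurrence. Include a short docstring."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful senior Python developer."},
        {"role": "user", "content": task},
    ],
)

draft = response.choices[0].message.content
print(draft)

# The human stays in control: the draft still has to be reviewed, tested and
# wired into the rest of the program, as described above.
```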
Speaker 2 (12:21):
Yeah, absolutely, and I think the potential of AI to streamline our own workflows and enhance productivity is really incredible. I know in my own role I've found AI to be a very powerful tool to automate complex tasks without needing a large team of developers by my side. And I think it's really about harnessing AI to achieve a
(12:42):
greater sense of velocity in our work. Yet I think, as we integrate these capabilities, it keeps coming back into my mind that the ethical considerations really continue to come to the forefront. How do we balance efficiency gains with the potential impact on jobs, particularly in creative fields? If you were to have a critical dialogue that
(13:06):
intersects the technical with the human aspect of AI, where do you see this conversation heading, especially in terms of ethical use and its impact on the workforce overall?
Speaker 3 (13:21):
Yeah, that's a good question. So, as we're kind of barreling down this course of trying to work out what we do with this technology, I think there have been a few interesting articles written about how we handle this as a society. What does this mean for us in terms of the choices that we make with this tool that we've been given,
(13:44):
that gives us these magical capabilities? Now, how do we, as a society, make decisions that are fair, that are beneficial? And again, it comes down to how we make decisions about our society. But there have been a couple of good examples where I think it can be used in a great way and we can get many benefits as
(14:07):
a society. I don't think anyone really wants to go back to having farms and making clothes on an individual basis. I don't think we're going there. So are we going to step back from AI and all the benefits that it gives us?
Well, you know, jobs are going to change, or at least tasks are going to change within a job. So you might still be a computer programmer, but how you
(14:29):
go about doing that will change; there's no way that you would go about doing it much more slowly. Any company is going to say, well, if I can use an AI-powered programmer, and by that I mean a human who's got the benefit of AI, and I can get maybe 30, 40, 50 percent more done for the same amount of expenditure, you'd
(14:51):
be crazy not to take advantage of that particular capability.
For sure, it becomes a little more nuanced, I think, when it comes to art and creativity and what that means, and the recent negotiations in Hollywood in respect to AI and its use in films are a kind of starting point for
(15:13):
how we decide to treat that. But there's always going to be a push to try and get the most efficient way of doing something with the tools you have available.
And so if we keep that in mind, if we can get more done with less, fundamentally, then what does it mean? What does that free people up to do that is not what they're doing right now? And I think that we
(15:33):
have to break apart the two problems there. One is: I love what I'm doing and I want to keep doing it. That's probably going to change; you're probably not going to have much choice about that. But what possibilities does it open up for new things that I could be doing that were not possible before? And so I think it'll be exciting to see what artists are doing with AI as, again, a
(15:54):
force multiplier. What can I do as an artist that I couldn't do previously, but now I have these technology capabilities that enable me to do new and exciting things that weren't available before?
But as we look at the legislation that's coming into place in the EU, for example, in terms of how we deal with data now in Europe, they're very passionate about the privacy of personal data and how that's used, and
(16:15):
particularly in terms of copyright. So there are probably some areas on the edge of AI at the moment where we need to, not rein it in, but control a bit more carefully what data is used for teaching AI, and do that in an ethical way.
(16:35):
I think it's very important that if you're producing work and it's being used for a business purpose by someone else, you should be compensated for the thing that you're producing. That sounds very fair, very reasonable. You shouldn't just be able to take stuff and do whatever you like with it without compensating the person, if they hadn't made it available publicly, for example.
(16:55):
So I think that's an important part.
Speaker 2 (16:59):
Yeah, Ian, I think you've raised compelling points around AI as a force multiplier, especially in creative fields, and as things like AI-generated art become more prevalent, distinguishing between human-created and AI-generated work really becomes crucial.
(17:20):
So your thoughts on digital signatures are intriguing. Could this be the key to preserving individual identity, and possibly the ownership of a person's identity, in the digital realm? How do you see this playing out, particularly with the increased discussions around data ethics and copyright in the
(17:42):
AI space?
Speaker 3 (17:43):
Yes, it's a very good point, and I think we're seeing a little more emphasis on that now. So companies like Midjourney and, to some extent, DALL-E 3 are actually embedding watermarks into the images so that we can detect whether an image is AI-generated or whether it's been done by
(18:07):
a human. And so Facebook (Meta), for example, is now implementing a process by which they are going to analyze images as they're uploaded and then be able to put a tag on them if they've been generated by an AI source. So that's good, and I think we need to be honest as well. If it's generated by AI and it's not something that is actually real, there's a fine line between a
(18:32):
nice, pretty picture of an elephant that's eating a banana and the next step: some known person doing something that they wouldn't want to be seen doing, but there's a picture of it. There's a line there where you really need something to tell you whether the photo of the famous person doing something they don't want seen is genuine or not. That's very important.
(18:53):
If it's a fantasy picture that looks interesting and is pretty, it's probably not so important, but there is a line there, especially in terms of our society: our politics, our decision-making is based on the information that we're given, and so knowing that it's genuine is, I think, the important point there.
So you're absolutely right, we need a way of being able to
(19:14):
determine whether an image is genuine or not, and companies are starting to take that into account now. And we're also seeing it on the other side as well: camera manufacturers are now starting to put watermarks into their photos. So it's the other way around, so that you can tell it's an actual photo that was taken through a lens onto a sensor and
(19:36):
recorded onto a disk, and we know that it actually came from a camera, of a real thing that happened out there in the world. And so we're seeing it from both sides now, both in terms of being able to analyze AI images and tell that they are AI, but also in terms of: this was a genuine photo, taken in the real world, of something that was in front of the camera.
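As a concrete sketch of the provenance checks described here, the snippet below shows the general shape of what a platform might do on upload: look for metadata that labels an image as AI-generated (for example, the IPTC digital source type value some generators write) or as camera-originated, and tag it accordingly. It uses Pillow; the specific markers are illustrative, and real systems like the ones mentioned rely on C2PA Content Credentials and invisible watermarks rather than this simplified metadata check.

```python
# Simplified sketch of tagging uploads by provenance metadata.
# Uses Pillow (pip install Pillow). The markers checked are illustrative;
# production systems use C2PA manifests and robust invisible watermarks.
from PIL import Image

AI_MARKERS = {
    "trainedAlgorithmicMedia",                # IPTC value for AI-generated media
    "compositeWithTrainedAlgorithmicMedia",   # AI-assisted composites
}

def classify_upload(path: str) -> str:
    """Return a coarse provenance label for an uploaded image."""
    img = Image.open(path)

    # Text metadata (e.g. PNG text chunks) where some generators leave hints.
    text_meta = " ".join(str(v) for v in img.info.values())

    # EXIF, where cameras record capture details.
    exif = img.getexif()

    if any(marker in text_meta for marker in AI_MARKERS):
        return "labelled-ai-generated"
    if exif and exif.get(271):  # tag 271 = Make: a camera manufacturer was recorded
        return "camera-metadata-present"
    return "unknown-provenance"

if __name__ == "__main__":
    print(classify_upload("upload.png"))
```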
Speaker 2 (19:57):
It's interesting. It's like the more we crave artificially generated things, the flip side is that we're still searching for something real and genuine and actually created by a person.
Speaker 3 (20:10):
Yeah, and this is the kind of funny thing about humans. I mean, we have such a great imagination and we love that kind of fantasy of being able to create anything, but we also crave that connection to reality, and is this something that I can trust? In that sense it's also very important to us as well, and I
(20:32):
think you bring up some very good points about where the subtleties and the dangers, in fact, in respect to AI are going to be as we try and work out what is real. What does it mean? Where are we placing our trust in terms of the images that we're seeing, or even the text that we're seeing? If I can produce a persuasive text that tells
(20:55):
me about a topic that I'm interested in and I'm convinced by it, but it came from an AI, is that different from a human-generated text that persuaded me of the same thing? There are some very interesting subtleties there in terms of how we get information and where we place our trust as well. That's a very good point.
Speaker 2 (21:12):
Talking about trust, I really have to bring this up: I saw a news article that Google was going to start funding their own kind of private laboratory, so they can use this lab to look at and evaluate and apply frameworks and standards around their AI models.
(21:34):
And I'm all for regulation, I'm all for frameworks, I'm all for prescriptive guidance, but I almost think it seems a little bit like there's not going to be enough transparency there in terms of what type of framework is really being applied. And I think, across cloud platforms, across
(21:57):
the different entities that can create AI, there are so many different models out there, and everyone's got an ethical AI framework or some sort of AI framework you see in the industry, and there are so many of them out there. You can always kind of pick and choose which framework you want to apply, but it really seems like: is there a way for us to take a look at the industry as a whole, across a broad spectrum, or is there a way to
(22:19):
put some sort of governance and framework around AI, so there's more transparency, more trust that we can build into it? Not just, hey, well, we give everything to Google, so we have to trust them.
Speaker 3 (22:31):
Yeah, exactly. Well, it's a good point. I mean, it's fine for Google to find out about their own models themselves, but it's not clear exactly what they're exposing about what they've found, and the challenge is manifold there. So, for a large language model, if you ask it a question, it will give you an answer, but if you ask it the same question
(22:52):
again, because it starts off with a random number, it's going to give you a different answer, and so there's no cut-and-dried, direct response where you can say, well, it said this and so that's the answer. You kind of have to ask it a lot of times. And add to that complexity: what can you ask it?
(23:13):
Are we interested in politics, in facts, in a legal argument? There are so many areas that you could delve into. It's going to be really difficult to work out a kind of subtle framework that works in all cases.
But I think, as technologists, as a society, one of the important things we need to do is work out what the implications of the things that we're doing are, and work back from
(23:34):
there. So if I'm now creating text that is going to be influencing people, then what are the principles that I would normally apply in, say, journalism? So I'd want a couple of sources. I'd want to be able to back it up. I'd want to have some sense, at least for myself, that
(23:55):
it was true.
How can we build that in, to ensure that the things that AI large language models are generating have some sense of believability and trust in them? Or, if it's the case that we can't trust them at all and we just have to treat them like a fiction writer, that's fine too, but we kind of need to decide where that line is in terms of where we place our trust in
(24:18):
the results from these particular engines. So I think it's great that we're doing experiments on our own products. That's fantastic, and it's good that we know what we need to do to fix them. That's very important. But I think you're right, it needs to be a transparent result.
What do we find from doing these tests?
(24:38):
OK, so we found that 87% of the time it told the truth and 13% of the time it made things up, and the difficulty then, of course, is: OK, so what does that mean? I mean, again, we also seem to have different expectations of computer systems than we do of humans. So if I have an accounting system, I expect the numbers to
(25:01):
be correct; I expect them to be right every time. There's no doubt about my expectations there. But we all know that humans make mistakes, and we're influenced, and we have devious, often duplicitous, methods of achieving our aims. We are not entirely trustworthy ourselves, let's be honest. So how do we translate that expectation we have of humans, where,
(25:27):
again, we expect humans to have opinions and feelings and make mistakes? In some ways we're actually setting the bar higher for these artificial systems. We want them to be true all the time, or at least to know when they might not be true. But we kind of need some methodology, some system that is similar in some ways to the way we treat humans.
(25:48):
So, Steve, I've listened to many of your podcasts. I think you speak with interesting guests. You obviously are well respected in the industry, and so I put you in a position of trust in terms of what you say. If you said something new to me, I'd be like, OK, well, the other stuff I heard from him made perfect sense, so I expect what he says now is true as well. Yeah, we kind of build trust over time, so we need a similar
(26:11):
system, in a transparent way, for these artificial systems to build that trust over time as well.
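A small sketch of the two points above: the same prompt can yield different answers under default sampling, so any trust or accuracy evaluation has to query repeatedly and report a rate (like the hypothetical "87% told the truth") rather than rely on a single reply. The client and model name are the same illustrative assumptions as in the earlier sketch.

```python
# Sketch: the same question, asked repeatedly, can produce different answers,
# so evaluations report a distribution or accuracy rate, not one response.
# Assumes the OpenAI Python client; the model name is illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()
QUESTION = "In one word, what is the capital of Australia?"

def ask_once() -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative
        temperature=1.0,       # default-style sampling, not deterministic
        messages=[{"role": "user", "content": QUESTION}],
    )
    return resp.choices[0].message.content.strip()

answers = Counter(ask_once() for _ in range(20))
total = sum(answers.values())
for answer, count in answers.most_common():
    print(f"{answer!r}: {count}/{total} ({100 * count / total:.0f}%)")
# An evaluator would score each answer against a reference and publish the
# resulting accuracy rate transparently, as discussed above.
```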
Speaker 2 (26:19):
And it's interesting. I really want to start to transition into this segment around AGI, and I want to talk about what some of the technological challenges and breakthroughs are that we really need to achieve AGI, and what the timeline is that we foresee for this happening.
Speaker 3 (26:37):
Yeah. So by AGI we mean an all-knowing, all-perfect intelligence that can answer anything and give us everything we need all the time. It's the kind of ultimate AI goal, and the challenge is that the companies that are building these models have kind of run
(27:01):
out of data. These companies came about because humans as a whole have basically taken everything they know and shoved it onto the internet. So everything we know now is all out there, it's all publicly accessible, and so effectively they were able to slurp this all up and shove it into a very large computer and get out a database that enabled them to generate text.
(27:22):
That's a really simple way of describing what happened with large language models. Now the challenge is that they've slurped up all the text there is. There is not much left, other than private collections of stuff, that is available for these large language models to learn from.
So what do we do next? Well, we can get large language models to generate text, so
(27:47):
maybe we can learn from that, and so we're starting to see artificial text being used to train AIs. And again, of course, it's not as good quality, because it's not as good as humans yet, but now that's starting to have an impact. But I think the pace of change is going to keep up, because now we're seeing companies that are doing high-level
(28:11):
negotiations with organizations that have more data. So we've just seen an agreement with Reddit, for example. We're not sure who it's with just yet, but they've agreed to sell the Reddit data to an AI company so that they can, again, slurp up all the stuff that's in Reddit, which is generally regarded as quality because it's real humans answering
(28:34):
real questions.
So that's an example of these AI companies now trying to find every other corner of data that they can get access to, and I think what we'll find is that for other companies that hold large quantities of data and information and text, it will be hard not to negotiate a good deal with
(28:54):
one of these companies to get hold of all the rest of the data available to humans.
And then we need more computing power. As every year goes by, we get more computing resources for less cost, and it's because we're at this point in 2024 that we can run these massive models at all. We still need more computing power to be able
(29:15):
to get to that point. So I think we're at the point now where the best language models that we've got are doing a very credible job, but they're not quite there yet. There are some areas where they're better than humans; there are many areas where they're not. But I think within the next two to three years we'll be at a point where they will be indistinguishable from
(29:36):
different styles of humans.
Speaker 2 (29:38):
Incredible. What are your predictions for the next five years in terms of how people can look at AI in their normal everyday life? What can they expect to see, and what would you advise businesses, from a business perspective, on their outlook and on how they're going to adopt AI into their platforms?
Speaker 3 (30:00):
Yeah, I think the big advantage we're going to see from AI is that things that were really expensive before and very, very hard to do will become a lot easier for a lot more people. And that kind of segues nicely into a project that I'm working on called Pulse Podcasts, where we are creating podcasts for
(30:21):
companies that would not normally be able to afford one. I mean, Steve, you know what it's like to make a podcast. You've got to go record it, you've got to edit it, you've got to have a script, or you have guests and you've got to spend time recording, then you've got to get the whole thing together, package it up and write it all up. There's a lot of work involved in producing a podcast.
(30:42):
And the process that we're going through now enables us to take existing content that companies are creating for their marketing purposes, like newsletters or blogs, and we create scripts using large language models and then we use some of the best voice-over AI engines to create the actual voices. And now we're at a point where we can create a podcast
(31:03):
from an existing set of marketing content for about a tenth of the cost it would normally take to make the equivalent podcast.
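As a rough illustration of that kind of content-to-podcast pipeline (a sketch only, not a description of the actual Pulse Podcasts system), the snippet below turns a blog post into a short narrated audio file in two steps: a large language model drafts a conversational script, then a text-to-speech engine voices it. It assumes the OpenAI Python client; the model and voice names are illustrative.

```python
# Illustrative two-step pipeline: marketing blog text -> script -> audio.
# Not the actual Pulse Podcasts implementation; assumes the OpenAI Python
# client, with illustrative model and voice names.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def blog_to_script(blog_text: str) -> str:
    """Ask an LLM to rewrite marketing copy as a short spoken-word script."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Rewrite the provided blog post as a two-minute, "
                        "single-host podcast script in a warm, conversational tone."},
            {"role": "user", "content": blog_text},
        ],
    )
    return resp.choices[0].message.content

def script_to_audio(script: str, out_path: str = "episode.mp3") -> str:
    """Voice the script with a text-to-speech model and save it to disk."""
    speech = client.audio.speech.create(
        model="tts-1",    # illustrative TTS model
        voice="alloy",    # illustrative voice
        input=script,
    )
    Path(out_path).write_bytes(speech.content)
    return out_path

if __name__ == "__main__":
    blog = Path("newsletter.md").read_text()
    print(script_to_audio(blog_to_script(blog)))
```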
That's a good example of where we're going from something that would just not have been economically feasible previously to, okay, well, now this is within a cost framework that works for me, for my business. And so, whereas
(31:24):
previously some companies would not be able to afford a content writer to write content for their ads, or be able to create beautiful images, or be able to afford a top-end photographer to take pictures, now they can do that at a much lower price point.
And so we're going to see the democratization of content and art and words, and the great thing there is that that'll make
(31:46):
it much more accessible to more companies in more different ways. And so, instead of having to try and reduce costs by outsourcing call centers to different countries, we'll be able to customize our chat models to understand the business deeply, to make mistakes less often than humans, and to be more responsive.
(32:08):
And I'm sure you've been on the other side of a chat with some company where you're trying to return something or get some feedback about a flight or some kind of interaction, and you kind of know that the person on the other end is clearly dealing with a dozen different chats at the same time, and they're cycling between them and trying to give answers, and you give them a response to something and they come back two minutes later.
(32:28):
We're going to see a lot more service-level improvements in terms of those interactions with systems.
So companies are always looking to reduce costs, they're always looking to make the business more efficient, and AI gives them a chance to take costs out of their business in one area. And then, again, as a society, things will change.
(32:50):
There'll be new jobs, there'll be new opportunities, and we'll have a change from spending time and money on labor to spending time and money on making efficient systems, improving performance and being able to grow the business by improving the quality of services being delivered to customers.
So I think that's the important thing to consider.
(33:11):
In terms of AI, it does enable us to reduce costs. It will, in many cases, change the way people are doing their jobs, but that frees up money to spend in other areas where we can actually grow a business and improve it and take new ideas and turn them into a reality. So I think, Steve, we're at a very exciting time, and as all the companies are looking at how they can use AI and what it
(33:32):
means for their business, the key takeaways, I think, are: it's a great force multiplier for things that you're already doing, and there are areas that you can get into that might have been previously way too expensive for you, but now you can do that at a much lower price point, and it enables you to conduct business in new and exciting ways.
Speaker 2 (33:53):
Yeah, totally agree. I couldn't agree more. Wow, Ian, that was absolutely incredible. That's amazing. And again, you're a fountain of knowledge and I could go all day on this. It's been an absolute pleasure having you on Tech Travels today. Your insights and experiences have really shed a lot of light on AI's pivotal role in shaping our future,
(34:13):
especially in the content creation and business strategy aspects, and I just want to say thank you for taking the time to share your wisdom with us and our listeners. We're all looking forward to seeing how this is going to impact all of us in the next few years. So your insight is greatly appreciated. Thank you so much for coming on the show.
Speaker 3 (34:29):
Thanks, Steve.
It's been a pleasure to be here.
Thank you.