Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Hannah Clayton-Langton (00:05):
Hello world, and welcome to the Tech Overflow podcast. As always, I'm Hannah Clayton-Langton.
Hugh Williams (00:10):
And I'm Hugh
Williams.
Hannah Clayton-Langton (00:11):
And we're the podcast that explains technical concepts to smart people. Speaking of smart people, Hugh, how's it going?
Hugh Williams (00:17):
Uh, not too smart, Hannah, but I am going really, really well. I'm actually about to jump on a plane to head to the US, but I think you're probably leaving the US.
Hannah Clayton-Langton (00:28):
Yeah. There's some sort of missed opportunity there, because just as you arrive, I'm going, and then we'll be back to Australia-UK time zones, which is probably the biggest challenge of the podcast that I didn't see coming.
Hugh Williams (00:40):
Yeah, me too. But I'm looking forward to getting together with you in October and actually recording a couple of episodes again in a podcast studio. That'll be awesome.
Hannah Clayton-Langton (00:48):
Exactly. And if I play my cards right, maybe I'll get to come down to Australia to do the same in our UK winter.
Hugh Williams (00:54):
Yeah, series two, maybe, if we hit our OKRs.
Hannah Clayton-Langton (00:57):
That's right, listeners. Please like, subscribe, and share. Share with your group chats, because if we get enough downloads, then I get to go to Australia.
Hugh Williams (01:05):
So, what are we
talking about today, Hannah?
Hannah Clayton-Langton (01:07):
So today is a big one. Today we're going to be talking about the topic of AI: artificial intelligence. And it's such a big topic, very technically complex and very culturally significant. I think it's fair to say that we're already planning on this being a two-part episode. So today will be part one of two on AI.
Hugh Williams (01:25):
Awesome.
Why don't we get started?
Hannah Clayton-Langton (01:28):
Let's do it. So, as I just mentioned, this is quite a technically complex topic. And I think the best way to get into those topics is by telling some stories or giving some examples. So, why don't you walk us through an example of AI in action, just to sort of situate us in the topic?
Hugh Williams (01:46):
You're probably not going to believe this, Hannah, but when I first joined eBay back in 2009, the search engine was actually run by the business team. And what I mean by that is that if a buyer came onto eBay and searched for something, there was actually a text file, maintained by the business team, that controlled what search did. And so the whole of the search ranking, if you like, was driven off a text file that was managed by business people, which is quite something. Not exactly how I was used to working at Microsoft when I worked on search there, and certainly not how companies like Google went about it. So a really, really old-school way of running search. And you might ask, well, what was in the text file? It was things like: if this item has free shipping, then boost its ranking by 20%. So, you know, just human-written rules.
Hannah Clayton-Langton (02:38):
So was it like an Excel file that was applying those rules as formulas? Or how was it pulling it in?
Hugh Williams (02:45):
Uh, it was a text file. So it looked a little bit like code, if you like. It said: if this, then that. The ifs were things like 'if free shipping', and the thens were things like 'multiply the score of the item by 120%'. Things like that. So really, really basic stuff. Actually, I remember walking into my boss's office. Um, my boss, Mark Carges, used to be the CTO of eBay.
Hannah Clayton-Langton (03:10):
Shout out to Mark, if you're listening. Hopefully you are.
Hugh Williams (03:13):
And Mark had just searched for an iPod (so they were a thing in 2009). He searched for an iPod, and the first result was a Jaguar car.
Hannah Clayton-Langton (03:25):
Uh-oh.
Hugh Williams (03:26):
Yeah, not good. And the reason it was there is because of two things. One, the description said that the car had an iPod adapter in the glove box. And the second thing was that it was expensive. And so this simple little text file had said: well, if it's got the keywords, then it's a match. And then, if it's expensive, put it at the top.
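To make that concrete, here's a minimal sketch of the kind of hand-written, if-this-then-that scoring a rule file like that might have encoded. The rules, boost values, and item fields are hypothetical, purely for illustration:

```python
# A hypothetical sketch of a hand-written, rule-based ranking function
# of the kind Hugh describes. All rules and boost values are made up.

def rule_based_score(item: dict, query_words: set) -> float:
    description_words = set(item["description"].lower().split())

    # Rule 1: any keyword match in the description counts as a match.
    if not query_words & description_words:
        return 0.0  # no match at all

    score = 1.0
    # Rule 2: free shipping boosts the ranking by 20%.
    if item.get("free_shipping"):
        score *= 1.2
    # Rule 3: more expensive items score higher.
    score *= item["price"]
    return score

# Why the Jaguar beat the iPods: its description mentions an iPod
# adapter, so it matches the keyword, and its high price dominates.
items = [
    {"description": "Apple iPod nano 8GB", "price": 120.0, "free_shipping": True},
    {"description": "Jaguar XJ with iPod adapter in the glove box", "price": 25000.0},
]
ranked = sorted(items, key=lambda i: rule_based_score(i, {"ipod"}), reverse=True)
print(ranked[0]["description"])  # the Jaguar comes first
```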
Hannah Clayton-Langton (03:45):
So this tells me a few things I didn't know. One being that people were selling Jaguar cars on eBay in 2009, but that is not the target topic of this episode. But basically, you're saying it was pretty basic and it wasn't really working as it should have?
Hugh Williams (03:58):
No. And of course, you know, if you think about a marketplace (and eBay's the original marketplace, really), it's all about connecting buyers to sellers. And so search is incredibly important in a marketplace, because that's how buyers go about finding things that they want to buy. And, you know, search was just basically broken.
Hannah Clayton-Langton (04:16):
Okay, so how does AI fit into the story? Because I have a feeling it's going to present us with a better answer.
Hugh Williams (04:21):
Yeah, absolutely. So the first thing I did when I got to eBay was hire this wonderful guy, Mike Matheson, who is still a friend of mine. He's up at Amazon these days. And Mike started what we called the search science team. And the search science team did what today we call AI, but back then we called machine learning. We'll unpack these terms a little bit later on. But basically, what Mike and his team did was replace this text file that was managed by the business team with an algorithm. And the algorithm was something that was generated off very, very large amounts of data. So imagine you have lots and lots of examples of buyers successfully buying things on eBay, and you have lots of examples of buyers failing to get what they want. With lots and lots of examples like that, you can use this thing called machine learning to basically learn a function (the maths, if you like), a better version of this text file, that combines all of the information you might need to do a better job of ranking items in response to buyers' queries. So this whole thing was learnt, if you like, off huge amounts of data. And basically we ended up with a piece of AI, if you like, that replaced the text file. And it doesn't matter how smart you are as a human: you can't write down a set of rules in a text file that covers all the possible cases and build a search engine that does a great job. It's just not possible to consider everything that you could consider, right? If you're going to build a great search ranking function, you probably need to think about the buyer who's buying things, the seller who's selling things, the description, the user's behaviour through the site, and the images that are there. There are lots and lots of things you could think about to return a great result. And as humans, we can't keep all of that in our brains at once and write down a function, if you like, that works for all occasions. And so this thing just basically didn't work. There's no way smart humans can write down, in a text file, a set of rules that perfectly drive the search of eBay.
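As a toy illustration of the difference, here's a sketch of learning a ranking function from examples instead of hand-writing rules, using scikit-learn. The features, the synthetic purchase labels, and the thresholds are all invented for the example:

```python
# A toy sketch of learned ranking: fit a model on examples of good and
# bad outcomes instead of hand-writing rules. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is one (query, item) example:
# (keyword_match_score, has_free_shipping, price_in_dollars).
X = rng.random((1000, 3)) * np.array([1.0, 1.0, 500.0])
# Pretend buyers mostly bought well-matched, cheaper items,
# or items with free shipping: our 'successful purchase' labels.
y = (((X[:, 0] > 0.5) & (X[:, 2] < 250.0)) | (X[:, 1] > 0.8)).astype(int)

model = LogisticRegression().fit(X, y)

# At query time, score candidate items and rank by predicted purchase odds.
candidates = np.array([
    [0.9, 1.0, 120.0],    # well-matched, free-shipping iPod
    [0.2, 0.0, 25000.0],  # Jaguar with an iPod adapter buried in the text
])
print(model.predict_proba(candidates)[:, 1])  # higher score ranks higher
```

The point isn't the specific model; it's that the function is learnt from examples of what buyers actually did, rather than written down by hand.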
Hannah Clayton-Langton (06:24):
Okay. I have so many questions. I want to know a bit more about what AI is; we've touched on it. And then I think there's something around the use cases of AI inside the tech companies whose products users are using all the time. Because I think people think about AI basically as ChatGPT, and I don't think people realize that if they're searching on a retailer online, they are actually using AI. So: you've been in technology for quite a long time. When did AI sort of land on the scene, and how have you seen it evolve during that time?
Hugh Williams (06:55):
Yeah. So I guess the first place to start is that AI stands for artificial intelligence, right? And if you just sort of pause and think about that for a second, what that's about is really building machines that are intelligent in the sense that we understand it. So they display intelligence, but in an artificial way. It's not human intelligence, it's machine intelligence. So that's the broad field. It's been around since the 1950s, you might be surprised to know. So AI was sort of conceived as an idea way, way back in the 1950s. I was forced, when I was at university in the late 1980s, to do an AI course in the third year of my undergraduate degree. And I think at that point in time it was kind of seen as a field that was aiming for pie in the sky and was really, really going nowhere. But if you jump forward into the 2000s, the field started to really, really take off. And I think it's really come into the public consciousness over the last few years, as you say, with the emergence of large language models, LLMs, and the advent of ChatGPT. So it's now something we're all talking about. But this, you know, dates back to the 1950s.
Hannah Clayton-Langton (08:01):
Okay, and artificial intelligence is a fairly broad term. So is it right that LLMs (that's large language models) and machine learning (ML) are subsets of artificial intelligence, rather than additional concepts?
Hugh Williams (08:17):
Yeah, yeah, that's right. So artificial intelligence, if you think about the broad field, includes a lot of things. And machine learning, which we'll unpack, I'm sure, in a second, is certainly part of that. And large language models, if you like, are a part of machine learning. So: big field, artificial intelligence; machine learning is part of it; large language models are part of machine learning. But there are also other things that constitute artificial intelligence. So if we were trying to build a human-like system, you'd probably think of more than what we're thinking of today as ChatGPT. You'd start to think about things like robotics, right? You'd say, oh, that would clearly need to be part of AI if we were going to build something that was truly a human-like piece of intelligence. There are other things like search algorithms, things called expert systems, and a field called symbolic reasoning. These are all parts of the broad field of artificial intelligence. But in practice today, what's artificial intelligence? Well, it's probably 70% machine learning, which these days largely means large language models, and then these other fields sort of sit off to the side.
Hannah Clayton-Langton (09:24):
Yeah, exactly. And, not a topic we'll look to get into on this podcast, but the ethics of AI are a hot topic at the minute. And again, that's largely talking about LLMs. We're all using AI, I think, on a much more micro level when we search for something on Google or on our phones. There are a lot more micro use cases, and I think if people say that they're opposed to AI, or that they have ethical challenges with how AI currently works, they really mean LLMs. And I think you just gave a useful stat: 70% of the field of AI in this day and age is probably LLMs, but there are these additional, possibly more esoteric, use cases that exist alongside and that are absolutely artificial intelligence.
Hugh Williams (10:08):
Absolutely. I should say, too, that it's sounding like I'm talking down LLMs and saying, well, they're not really AI. I do think there have been some massive breakthroughs, and I guess we'll talk about more of those in our second episode. But LLMs really do exhibit some of the characteristics of this idea of artificial intelligence, right? So with ChatGPT and its brethren, we're able to engage in really coherent conversations, answer questions, and talk about a diverse range of topics. You can see reasoning and logic and sort of inference coming from these tools, which combine information to synthesize new information. So you can ask a question that pulls together two broad fields and synthesizes an answer, and they can do lots and lots of different tasks, even tasks that they weren't designed for. So we're certainly seeing some aspects of what you'd think of as artificial intelligence coming from these LLMs, but they aren't the complete story of artificial intelligence.
Hannah Clayton-Langton (11:06):
Okay, so maybe a very different example: a Tesla, like a self-driving car. Is that using those additional classical examples of AI that you mentioned earlier?
Hugh Williams (11:17):
Yeah, that's a good example. So certainly when your Tesla today is in its full self-driving mode, out on a highway or a freeway, there are a lot of things going on, one of which is LLM-like technology. So the Tesla car has about eight cameras. Those cameras are bringing pixels back to your Tesla, those pixels are being understood by an LLM, and that's making decisions about how to drive the car. But at the moment, when you're out on the highway, if the car needs to brake, then the actual braking logic is not driven by the LLM. So let's imagine the LLM says: oh, you're going to crash into a tree. That then triggers some symbolic reasoning, another part of AI. And this symbolic reasoning basically says: hey, if you're travelling at a certain speed, and the distance to the tree is this distance, then you should apply this amount of braking. And that, if you like, is really a hand-written set of rules, written by humans. So, you know, you're travelling at 100 kilometres an hour, 60 miles an hour, however you want to think about it; the tree is now 100 yards away, 100 metres away, whatever it is; you should apply maximum braking force. So your Tesla today, when it's out on the freeway, is a combination, if you like, of what we're thinking of as modern artificial intelligence and some hand-written rules. A nice combination of both of those things. I think Tesla, though, behind the scenes, is heading towards the whole thing being driven by a large language model. So I think they might be trialling this when you're not on the highway: the whole thing is hands-off, if you like, in that the system's making the decisions about when to brake and how hard to brake, and there are no hand-written rules. It's all driven by large language models, if you like.
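As a toy illustration of that symbolic layer, here's roughly what such a hand-written braking rule might look like in code. The thresholds and braking levels are invented for the example, not Tesla's actual logic:

```python
# A toy sketch of a hand-written (symbolic) braking rule of the kind
# Hugh describes. Thresholds and forces are made up, not Tesla's logic.

def braking_force(speed_kmh: float, distance_to_obstacle_m: float) -> float:
    """Return braking force as a fraction of maximum (0.0 to 1.0)."""
    speed_ms = speed_kmh / 3.6  # convert km/h to metres per second

    # Rule: brake according to time-to-impact with the obstacle.
    time_to_impact_s = distance_to_obstacle_m / max(speed_ms, 0.1)
    if time_to_impact_s < 2.0:
        return 1.0  # maximum braking force
    if time_to_impact_s < 4.0:
        return 0.5  # firm braking
    return 0.0      # no braking needed

# 100 km/h with a tree 100 metres ahead: about 3.6 seconds to impact.
print(braking_force(100.0, 100.0))  # 0.5 under these made-up rules
```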
Hannah Clayton-Langton (13:04):
Okay. And is this a good time to ask you about training data? Because my understanding is that those models, which translate a whole bunch of examples in data form into your optimized output, are fed by something called training data. Is that right?
Hugh Williams (13:23):
That's right. So if we go back to the eBay example, the way we were able to replace the hand-written human rules with machine-written rules, if you like, using this machine learning idea, which, again, today would be called AI, was to train a system. And to train that system, you needed good examples and you needed bad examples. So you needed examples of what were the right answers and examples of what were the wrong answers. And this whole field dates back, if you like, to what Google and Microsoft were doing in the early 2000s, when they were building new search engines. So, you know, when I was at Microsoft, we had a huge labeling effort there, working on search, where we would employ people all over the world, and we'd ask them, for a given query: is this a good answer? And they'd rate that answer on a scale, from being a perfect answer through to being an answer that would break the user's trust in the search engine. So they'd rate the answers, and then we'd collate those back, and then we'd have this training data, if you like, on a massive scale, that we could use to learn the ranking algorithm that was then deployed out to our customers. So that whole field was kind of born in the early 2000s, if you like, and Google and Microsoft in particular were labeling data at huge scale, and that continues unabated to today. So behind the scenes of all of these companies that are using machine learning is a lot of human-labeled data. And it can be one of two things. It can be labeling the data that actually goes into training the algorithm. Or, once you've got the algorithm, it can be sort of helping the algorithm refine itself, pointing it in the right direction by giving it feedback on when it's doing a great job and when it's not doing a great job. So there's a sort of manual adjustment later on. But certainly data and human labeling are the fuel that's really driven this AI revolution.
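To sketch what collating those judgements into training data might look like, here's a minimal example. The rating scale and the median-based aggregation are assumptions for illustration:

```python
# A minimal sketch of collating human relevance judgements into training
# labels. The rating scale and aggregation rule are assumptions.
from collections import defaultdict
from statistics import median

# (query, result, judge_rating), where ratings run from
# 0 = "breaks the user's trust" up to 4 = "perfect answer".
judgements = [
    ("ipod", "example.com/ipod-nano", 4),
    ("ipod", "example.com/ipod-nano", 3),
    ("ipod", "example.com/jaguar-xj", 0),
    ("ipod", "example.com/jaguar-xj", 1),
]

# Collate multiple judges per (query, result) pair into a single label.
by_pair = defaultdict(list)
for query, result, rating in judgements:
    by_pair[(query, result)].append(rating)

training_labels = {pair: median(ratings) for pair, ratings in by_pair.items()}
print(training_labels)
# {('ipod', 'example.com/ipod-nano'): 3.5, ('ipod', 'example.com/jaguar-xj'): 0.5}
```

Labels like these, at massive scale, are what the ranking algorithm is then trained against.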
Hannah Clayton-Langton (15:16):
Okay, so I have two follow-on questions. One I think is fairly self-evident, but just to call it out: the output of these models, of this AI, is really only as good as the training data that goes into it, right? Which is why Google and Microsoft were, in the early days, investing in actual people being involved. Is that right?
Hugh Williams (15:36):
Actually, yeah, you're totally right, Hannah. I mean, there are really two things that are important, right? So the first thing is you've got to decide what you're optimizing for with the algorithm. You're asking the algorithm to do something, you're trying to learn something that achieves the task, so you've got to be very clear about what the task is that you want to achieve. That's the first thing. And the second thing is you need a huge number of examples of what's a good outcome and what's a bad outcome, and perhaps some shades of gray in between those two things, that allow you to learn the function that achieves the goal.
Hannah Clayton-Langton (16:05):
And is that the blessing and the curse, and some of the moral conundrum, of things like ChatGPT: that they're scouring the entire internet as their training data? Have I understood that correctly?
Hugh Williams (16:19):
Yeah. I mean, these companies, and again, it's a lot like Microsoft and Google back in the day, have a voracious appetite for data, right? So the large language models at places like OpenAI are only as good as the data that they've got. The more data they have, the better the models get. And so these companies are doing a couple of things. One is they're out, as you say, scouring the web, crawling the web, getting every possible website that they could possibly get. So more and more examples of text help them learn from the web. And they're also doing deals, right? So they're going to companies and saying: can we buy your data? Can you be a data source for us? And again, the more data you have, the better the model gets. And then they're also, in some cases, employing people to actually create data that gives them better examples of particular things that they need, so that they get better at those kinds of tasks. But basically, they want the world's information. And if they have the world's information, then it's possible to build a better LLM that achieves more generalized tasks in a better way.
Hannah Clayton-Langton (17:22):
I mean, it's kind of stating the obvious, but it's how a child learns growing up, right? Like, they are taking a feed of everything they see around them, everything they read, and everything they're taught, and they're turning that into an optimized output based on the situation that they find themselves in. Obviously, you remove anything subjective and it comes down to hard and fast rules. But I guess that's the callback to the intelligence point, right? It's just learning.
Hugh Williams (17:47):
Yeah, exactly. And, you know, I guess when our kids are young, it's why we read to them, right? So we read to them so that they learn how to read and they gain knowledge, and then, as they get older, we encourage them to read books, listen to the news, and listen to great podcasts like ours, whatever it is, so that they become better humans who are capable of reasoning about more things. And these systems are exactly the same.
Hannah Clayton-Langton (18:09):
I was gonna make a cheesy joke about this podcast being someone's training data, but you did it for me. Okay, so data is key. And does that mean that if I, or someone, am trying to set up an AI company, or trying to get an AI development right inside a company, that's really where they have to start?
Hugh Williams (18:25):
Yeah, that's
that's the advice I'd give to
our our listeners is if you wantto use AI effectively within
your company, then absolutely itall starts with data.
And there's lots of propertiesyou want of that data, but you
want the data to be fresh, youwant that data to be
comprehensive, you want thatdata to not be erroneous.
So we, you know, we'd say wewant that data to be clean and
we want that data to beorganized and available.
(18:47):
And and maybe it's worth justsort of pulling apart an
example, Hannah.
So we we talked about eBay atthe top of the show.
I mean, let's imagine that youand I work at a commerce
website, um, and we let'ssuppose we want our users to
land on a fantastic personalizedhomepage whenever they come to
visit our website.
The more data that we have, thebetter we will be able to build
(19:08):
uh an AI algorithm, if youlike, to solve that task.
And so that data needs to beaccurate and correct.
We need that data to be all inone place, it needs to be as
real time as possible.
We want to know just what thecustomers were doing right now,
not just this customer, but allof our customers, so that we can
really deliver a greatreal-time homepage.
It needs to come from all theright sources.
So we need data about how usersbehave on our website, we need
(19:32):
data about how users behave whenwe send them an email or a
notification, we need to knowwhat the users are doing in the
apps.
So we need all of this data inone place to give us the best
opportunity to build the bestalgorithm to deliver the best
personalized homepage, you know.
So incredibly important that weget the data right if we want
to get the AI right.
Hannah Clayton-Langton (19:52):
Yeah, that makes total sense. Another good example that I've heard talked about at work in the last six months or so is: can we leverage our internal knowledge database, where we store things like information about certain teams or policies, and have an AI chat, which I guess would be an LLM, answer employee queries? And the answer is: yes, of course you can do that. But then you need to make sure that there are no old policies stored on the system, and you have to make sure that everything's being refreshed, because the answer will only be as good as the data that's going into it. And I think probably the less sexy truth is that a lot of places aren't getting those basics right, particularly if it's something like an internal knowledge database, right? That can be an afterthought.
Hugh Williams (20:36):
I guess if I was summing up my advice, I'd say: garbage in, garbage out. If you've got garbage data going in, then you will get garbage coming out of your AI system, right? So the first thing you absolutely have to do is get your data house in order.
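As a rough sketch of what getting your data house in order can mean in practice, here are the kinds of simple checks a team might run over event data before training on it. The field names, sources, and thresholds are hypothetical:

```python
# A rough sketch of pre-training data-quality checks. The event fields,
# expected sources, and freshness threshold here are all hypothetical.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"user_id", "event_type", "timestamp", "source"}
EXPECTED_SOURCES = {"web", "email", "app"}   # all the right sources
MAX_STALENESS = timedelta(hours=24)          # "fresh" means under a day old

def check_event(event: dict) -> list:
    """Return a list of data-quality problems found in one event record."""
    problems = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")       # clean
    if event.get("source") not in EXPECTED_SOURCES:
        problems.append(f"unknown source: {event.get('source')}")   # comprehensive
    ts = event.get("timestamp")
    if ts and datetime.now(timezone.utc) - ts > MAX_STALENESS:
        problems.append("stale event")                              # fresh
    return problems

event = {"user_id": "u1", "event_type": "view", "source": "legacy",
         "timestamp": datetime.now(timezone.utc) - timedelta(days=3)}
print(check_event(event))  # ['unknown source: legacy', 'stale event']
```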
Hannah Clayton-Langton (20:52):
Okay, so it sounds like clean, well-structured data is foundational; ideally, lots of it. So then, what's your model? Is it just like a super smart algorithm? How does that layer into it?
Hugh Williams (21:04):
Great question, Hannah. What exactly is a model? Well, the good news, I think, for most companies and for most of our listeners, is that you don't have to invent one. There are lots of different choices out there of models that you can use to learn a function, or an algorithm, that solves a problem. And so what most teams today are doing is selecting, almost off the shelf, if you like, a model that they want to use to actually go and learn from the data how to solve a particular problem. And you basically just grab one of those off the shelf. So there are very few companies today actually innovating in the models themselves. They're really things that you just sort of grab off the shelf and start to use.
Hannah Clayton-Langton (21:44):
Okay, wait a minute. My mind is being blown in real time. So literally, how many models are there? You say 'off the shelf': do you buy them? What are these? Who's selling them? Where do they come from?
Hugh Williams (21:56):
Yeah, so mostly they're published research. So they're generally invented by people who work in academia. Some have been invented in large companies like Google, and then Google has gone and published papers and put those things in the public domain. But by and large, these things are in the public domain. So you can buy implementations, sure, but you can also get what are called open source implementations. So you can actually go and find open versions of the code, download those pieces of code, and use them. So the models themselves aren't generally terribly secretive, if you like. They're really things that you can grab from the shelf.
Hannah Clayton-Langton (22:31):
Okay, so they're kind of like hardcore academic thought and research.
Hugh Williams (22:37):
Yeah. Some of our listeners will have heard of things like support vector machines, gradient boosted decision trees, neural nets, these kinds of things. They're all examples of types of approaches to learning from data how to actually do things.
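To make 'grabbing a model off the shelf' concrete, here's a sketch using scikit-learn's open-source implementations of two of the model families mentioned, on an invented toy dataset:

```python
# A sketch of using off-the-shelf, open-source model implementations.
# Nobody here invents a support vector machine or a gradient boosted
# decision tree; we just pick implementations and use them on toy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (SVC(), GradientBoostingClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```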
Hannah Clayton-Langton (22:55):
Okay, so you need a wealth of clean, well-structured data. You need to pick the right model, and I'm sure there's an art in that. And then you sort of need to apply it to a use case that is unique and useful for users, and then you have a tech product like ChatGPT that can take the world by storm.
Hugh Williams (23:16):
Yeah, absolutely. And I think, you know, another way to think about this: we talked about carpenters and analogies in one of our earlier episodes. If you think of yourself as a carpenter or something, then it's really about choosing the right tools, right? So you sort of say: do I need a saw? Do I need a drill? What's the tool that I need to solve this task? And I think as a carpenter, you have some intuition about what tool to go and use. And I think as a data scientist, which is what the people who work on these kinds of problems are called, you have an intuition as to what kinds of models you might use to solve particular problems. But you're not inventing a saw, you're not inventing a drill; you're not working for a tool manufacturer and innovating in drills or saws. You're somebody who, by and large, would typically go and grab one of these and make use of it.
Hannah Clayton-Langton (24:01):
So, what was the big revolution that came about with ChatGPT and large language models, which, for me as a layperson, burst onto the scene, I want to say, two or three years ago? A lot of the concepts that you've talked about don't feel fundamentally new or different. So what was the revolution that occurred there?
Hugh Williams (24:19):
Yeah. So LLMs are certainly a genuine technical breakthrough, that's for sure, and I want to talk about that in our second episode. But there are a couple of things that were really exciting breakthroughs that led to LLMs. So there's this thing called deep learning; we'll talk about that in our next episode. Then there are these things called transformers; we'll talk about those in our next episode as well. That was a huge breakthrough. And then, of course, there was a lot of hard work that went on behind the scenes at OpenAI to build ChatGPT, and there are some really interesting things that happened there. So yes, they're definitely a technical breakthrough. But I think they're also an incredible user experience, a real user experience revolution, right? AI is something that folks like me have been doing for, you know, 20, 25 years behind the scenes in large tech companies. So this whole idea isn't very new to me, and to lots of people like me, but now the whole world can use AI in a consumer kind of way. You can download an app, you can get it on your phone, you can pull it out of your pocket, and you can actually kind of talk to it and reason with it. And that's a massive breakthrough. I think it's a little bit like the iPhone, if you like. So, back in the day, computers were big things that were stored in rooms, and then some people had one at home that was on a desk, and then, boom, this revolution happened, and now everybody has one in their pocket. I think that's exactly what's happened here: AI has now become a product that's used by consumers, and it's obviously swept the world. But no, there are definitely some technical breakthroughs, and I'd love to talk about those, but there's also been a huge user experience revolution.
Hannah Clayton-Langton (25:57):
So it's basically about relevance and accessibility. Suddenly, a person who may not work in a tech company, and may not consider themselves an early adopter of technology, is hearing about this thing because everyone's using it. It's super easy to use. They can pull it up on their phone and literally enter text, almost like a text message, except it's a machine that's responding back.
Hugh Williams (26:20):
Yeah, and that's massive, right? And I think it's taking a lot of traffic away from Google. We're now answering what we call informational queries, the ones where we want to synthesize information or get information about something, largely using tools like ChatGPT. So Google's being marginalized, if you like, to doing other tasks. So this thing's taken the world by storm. I mean, OpenAI is one of the fastest growing companies in history. I wish I was an early investor.
Hannah Clayton-Langton (26:48):
I like what you did, likening it to the advent of the iPhone, because you're right: computers existed in rooms for a lot longer than most people realize. And it sounds like that's a strong metaphor for the AI and machine learning that's existed in the technical space since the '50s. That, to me, is a big revelation that I don't think I'd quite grasped until we got into this topic.
Hugh Williams (27:10):
Yeah. And, you know, the web was a revolution, mobile was a revolution, cloud computing was a revolution; it allowed three people in a garage to actually go and build a startup. And this is certainly a major revolution. I guess history will tell us which of those things is most important, but they've all been massive, massive breakthroughs. And of course, the PC on a desk was a breakthrough. There have been a bunch of things in our lifetime that have been breakthroughs, but this is, yeah, this is a little bit like the advent of the iPhone.
Hannah Clayton-Langton (27:39):
Okay, and just before we wrap up: as someone who's been involved in the tech world for a good number of years, through some of these revolutions, are there any other cool anecdotes or experiences from your career that you think bring this topic to life?
Hugh Williams (27:53):
Yeah, look, maybe this is ending the episode on a bit of a flat note, if you like. But I'd say to our listeners: look, AI isn't the solution to everything, and you shouldn't always use AI. Right? If you've got a human, and they know how to do the task and do the task well, then why risk a machine messing it up? I'll tell you a quick story, if you like. Are you up for a story?

Hannah Clayton-Langton:
100%, always.
Hugh Williams (28:13):
So when I was at Microsoft, we had this brainwave. This is back in, let's call it, 2007 or 2008. We had this brainwave: hey, in search, wouldn't it be cool if, when a customer types a query, somewhere on the results page they could see the phone number they should call if they want to call that company? So, I don't know, let's imagine we're in the UK and we want to call Tesco. You type Tesco into your search engine, you press enter, and somewhere on the page it says 'Tesco customer service' and the phone number you can call. It's like: great, now I don't have to go to the Tesco website and trawl around, desperately trying to find the contact number that they've probably tried really hard to hide. So I turn to the team and say: hey, this is a great idea, we should go do this. So they say: okay, AI is the way (we were calling it machine learning at the time, but AI is the way). Let's go and learn a model that can go to any website and extract the phone numbers that are useful to customers. So we'll learn this. Hands off: lots of training data, examples of good phone numbers, examples of the wrong phone numbers, examples of websites, whatever it is. We'll go and learn this thing. Lots of engineers, lots of hard work. This thing goes on for a while, and it's pretty good. It's about 99% right. So 99% of the time or so, this thing can find the right phone number and show it to the customer. Pretty good, but not good enough.
Hannah Clayton-Langton (29:34):
What happened?
Hugh Williams (29:36):
Well, I think it was Southwest Airlines. We put their fax number up as their customer service number.
Hannah Clayton-Langton (29:42):
Oh wow. A fax! Now, I bet there are listeners who don't even know what a fax machine is. Google it. Or ask ChatGPT.
Hugh Williams (29:50):
An old-school way of communicating. Um, they're still around; I had to fax something to my bank the other day. Anyway, so customers, you know, they type Southwest Airlines into the search engine and get back this phone number that's claiming to be the customer service number. They call it and, you know, they're like: oh, helpless. Of course, Southwest Airlines is now really, really upset, because their fax machine is now completely unable to be used; it's been called by hundreds, if not thousands, of people who want to talk to customer service. So these people aren't happy, and Southwest Airlines is unhappy. They give us a call and they say: what have you guys done? What am I missing? And so it turns out 99% isn't good enough, right? You need this thing to be 100% right. So we got rid of the machine learning. We employed a small team of people, and their job was to go to websites, find the relevant phone number, and type it into a spreadsheet. We had more than one person doing the same task. So we'd get more than one human to go to Southwest Airlines, try to find the phone number, and type it into a spreadsheet. And then, once the humans agreed enough, we'd say: okay, that's obviously correct. And we could do this for tens of thousands of websites. We could get these people to go back every month and check them. And it was 100% right. So, you know, no machine learning, no mistakes, humans getting it right. And probably the cost was less, too: rather than having expensive software engineers doing this, you could employ people at near the minimum wage to go and do this task and do it well, and give them a job. And it was 100% right.
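As a sketch of that 'humans must agree before we trust it' idea, here's roughly what the agreement check might look like. The threshold and the phone numbers are hypothetical:

```python
# A sketch of the "multiple humans must agree" check Hugh describes.
# The agreement threshold and the entries themselves are hypothetical.
from collections import Counter

def agreed_phone_number(entries, min_agreement=2):
    """Return a phone number once enough humans typed the same one."""
    if not entries:
        return None
    number, count = Counter(entries).most_common(1)[0]
    return number if count >= min_agreement else None

# Three people looked up the same airline's customer service number.
print(agreed_phone_number(["555-0100", "555-0100", "555-0199"]))  # '555-0100'
print(agreed_phone_number(["555-0100", "555-0199"]))              # None
```

No model, no 1% error rate: a number only ships once independent humans agree on it.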
Hannah Clayton-Langton (31:15):
Okay, so that blows my mind. But what year was that? Like, that can't seriously still be happening today.
Hugh Williams (31:20):
Oh, it's still happening. Yeah, look, that was 2008, but it's absolutely still happening. And look, two reasons. Reason one is that AI is in the consciousness, right? So every board is talking about AI, every CEO is talking about AI, and there's pressure to do everything using AI. So there's enormous pressure to solve tasks with AI, and in many cases those tasks probably shouldn't be solved with AI. So there's that. And also, if you turn to an engineering team, one that's proficient in these technologies, and you say, hey, I'd love to solve a problem, you know, guess what they're gonna do? They're gonna go and use AI.
Hannah Clayton-Langton (31:52):
Well,
that sounds fun, right?
Hugh Williams (31:54):
Well, it's fun, yeah. And it's what they're qualified to do. And, you know, if you've got a hammer, everything looks like a nail, right? So they're just gonna go for it. And so I think as a leader, as somebody who's experienced, and I give this advice to our listeners: stop and think, is AI the right solution to this, or should we be doing this another way? And of course, while you're thinking about that, ask: how good is good enough? Because 99% sounds great, but in this case it wasn't good enough. And in some cases, 51% is going to be good enough. So you have to really decide what is good enough before you decide what the solution can be. And remember that AI will always make mistakes.
Hannah Clayton-Langton (32:31):
That is a really refreshing place to end. I know we've got part two, but someone with as much experience in the tech companies that you've worked in coming back around in 2025 and saying 'AI isn't always the answer' isn't what I was expecting from this episode. So let's call it there. Uh, should we tease what we're gonna be talking about in the second episode?
Hugh Williams (32:54):
Yeah, absolutely. In our next episode, we are going to be talking about large language models. We're gonna talk about how ChatGPT works and how it was built, and we're gonna talk a little bit about the future of AI and what we can expect next. So that'll be a fun second topic, Hannah.
Hannah Clayton-Langton (33:09):
Awesome. Okay, well, thank you so much for listening. This has been the Tech Overflow Podcast. As always, I'm Hannah Clayton-Langton.
Hugh Williams (33:16):
And I'm Hugh
Williams.
And if you'd like to learn more about our show, you can visit TechOverflowpodcast.com. We're also available on LinkedIn, X, and Instagram.
Hannah Clayton-Langton (33:25):
Please like, subscribe, and share with your friends. And we'll see you next time for our second instalment on AI. All right, thanks, Hugh, safe travels.
Hugh Williams (33:34):
Thanks, Hannah.
See you soon.
Bye.