Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Nick Pelly (00:03):
To me, this is just one of the most interesting engineering challenges of our time.
Hannah Clayton-Langton (00:08):
LLMs are the first time that AI isn't like hidden away under layers of computer.
Hugh Williams (00:12):
Somebody said to me the other day, how does it summarize a document? And I said, well, nobody knows.
Hannah Clayton-Langton (00:18):
Hello world, and welcome to the Tech Overflow podcast. I'm Hannah Clayton-Langton, and today I'm without my co-host Hugh Williams, but that is for good reason. Those of you who tuned in to the early episodes of season one might remember that my whole motivation and inspiration for starting this podcast was that I wanted to learn about technology. I work for a tech company and I'm a technology user, I'm super
(00:41):
interested, but I'm not an engineer. And when I reached for the resources to teach myself about tech, how it works, and to be better at my job, I couldn't find anything. And that's when I called up Hugh. So as Hugh's not here, before we get into it, let me just remind everyone of his pretty impressive CV in the tech space. Hugh started off his career in academia, but was quickly whisked off to Silicon Valley and worked for the likes
(01:04):
of Microsoft, Tinder, Google, and eBay, just to name a few. And so he's pretty well placed to tell us some industry secrets. This episode is all about the highlights of season one. And I'm gonna take you through some of my favourite moments, our best insider stories, technical insights, and special guests from season one.
So if you're enjoying the Tech Overflow podcast, please do like
(01:27):
and subscribe wherever you get your podcasts. Share with your friends, family, colleagues, and you can find us on LinkedIn, Instagram, and X. Season two is coming to you on the 3rd of March. So mark your calendars, and we'll see you there. And now for the good stuff.
So I've put together some segments in this episode to share some of the best bits and my favorite moments from season
(01:49):
one. I hope you enjoy. Moment one was when we deep dived into the CrowdStrike disaster. So that was episode four. Hugh and I talk about what actually happened, what went wrong, and why this was the biggest outage in the history of tech at the time of recording.
Hugh Williams (02:04):
So it opened up the file, it found it had 21 fields, the software is expecting 20, and all sorts of bad things started to happen.
Hannah Clayton-Langton (02:12):
And so because it's so deeply embedded, it wasn't just like error, please restart, or error, couldn't read file type. You ended up sort of tripping the whole system. And blue screen of death means can't use your computer. Like that's it.
Hugh Williams (02:26):
Exactly. Because if something like Word had a problem like this, Word would crash. Yeah. And you'd say, huh, Word's crashed, I'll try starting it again. Huh, it keeps crashing. Maybe I'll try downloading a new version or I'll wait till tomorrow until Microsoft updates it. But because this Falcon software runs deep inside the operating system, this error actually took down the operating system. And so all these blue screens of death started happening.
(02:48):
So CrowdStrike folks deploy this file and they basically shut down every Windows machine that this software is installed on. They all get the blue screen of death. Of course, what happens after the blue screen of death is a lot of folks will try and reboot the machine. Yeah. So they say, oh, you know, reboot. But the problem was when it booted back up, the same thing happened again.
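The field-count mismatch Hugh describes can be sketched in a few lines. This is a hypothetical, simplified stand-in: the real Falcon channel files are binary and proprietary, and the driver is kernel-mode C, so a comma-separated Python example is illustrative only. It shows the defensive check that was missing, validating the layout before trusting it:

```python
# Hypothetical sketch of the failure mode: a driver assumes a fixed
# 20-field layout, then receives a config update carrying 21 fields.
# The real Falcon channel file format is binary and proprietary; this
# comma-separated stand-in is illustrative only.

EXPECTED_FIELDS = 20

def parse_update(line: str) -> dict:
    """Parse one config update, validating the field count first."""
    values = line.split(",")
    # Defensive check: reject a malformed update instead of trusting
    # the layout. In kernel-mode C, walking a mismatched layout means
    # reading memory that isn't there, and a crash at that level takes
    # down the whole operating system, not just one application.
    if len(values) != EXPECTED_FIELDS:
        raise ValueError(
            f"expected {EXPECTED_FIELDS} fields, got {len(values)}"
        )
    return dict(enumerate(values))

# A bad update with 21 fields, like the one in the incident:
bad_update = ",".join(f"v{i}" for i in range(21))
try:
    parse_update(bad_update)
except ValueError as err:
    print(f"update rejected: {err}")  # the machine keeps running
```

With a check like this, a malformed update is rejected and logged rather than crashing the host, which is why input validation at trust boundaries matters so much in kernel-level code.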
Hannah Clayton-Langton (03:07):
And every Windows system across the world that had Falcon installed basically went black or went blue.
Hugh Williams (03:16):
And was unusable and would not boot up again.
Hannah Clayton-Langton (03:19):
Moment two was a real interesting one as a British shopper. So this was all about Marks and Spencer's, or M&S. They had an absolutely massive hack through 2025. It took them out for well over a month. And the most interesting bit was that Hugh's take slightly differed from what is on public record.
(03:39):
So have a listen to what we think likely actually happened. So I think M&S issued some official communication that they caught this whole hack pretty early days. But I think that's been somewhat debunked.
Hugh Williams (03:51):
Yeah, I don't think that's true. I think what is true is that once the ransomware was executed, and we'll talk about ransomware and what it does a little bit later on, once that was executed, M&S were very quick to explain that that's what happened and begin a path that took a long time towards rectification. But I don't think they were quick to detect that the hackers
(04:13):
were inside their systems. And some folks are saying they're probably in there at least a couple of months.
Hannah Clayton-Langton (04:18):
We can talk later about whether or not that's unusual. I think the answer is no. Not unusual. No. Okay.
Hannah Clayton-Langton (04:23):
So hackers, they get in, they exploit this vulnerability, and then they do like a few really key things that sort of set them up for success in taking things down.
Hugh Williams (04:31):
Yeah, that's right. I think there's a couple of parts of the story that we'll never know the real details of, but what's certainly happened is they got in as fairly low-level employees, right, or contractors. They've now got access to some system. It's certainly not going to be the absolute core of M&S, but they're in. They're in the edges. They've made it into some of the outbuildings if you wanted
(04:52):
to use an analogy. Somewhere along this track over the next couple of months, they've managed to what we call escalate their privileges. So they've managed to figure out how to get more access to, you know, more of the buildings, to continue the analogy.
Hannah Clayton-Langton (05:06):
And our last little snippet, I think, is just the perfect example of good product management in action. So when Hugh ran Google Maps from I think it was 2015 to 2017, he talks about some competitor research he used to do on his drive to work.
Hugh Williams (05:19):
I was lucky enough at the end of my executive career in the US to run Google Maps. So I looked after the product and engineering teams for Google Maps, and I can tell you that I quite often used to drive to work using Apple Maps.
Hannah Clayton-Langton (05:31):
But you were doing that just to know they hadn't changed their game or come up with anything.
Hugh Williams (05:36):
Yeah, and I had a lot of respect for them. You know, I knew they were trying really, really hard. They had a catastrophic launch in 2012, and this is more like 2015 when I was using it. So they'd really got off their knees and kept on going, and they were beginning to do some pretty useful things. Like I'll give you a couple of examples actually. They were the first ones to have a parking feature. So when you parked your car, they'd put a pin on the map that
(05:56):
showed where you parked your car. Fantastic feature. So I remember going into Google and saying, you know, hey, Apple's got this awesome feature. One of the product managers said, oh, yeah, we've got that kind of queued up in our list. And I said, well, probably time we put that towards the front and got on with it. And, you know, ultimately Google released that, I think, probably six or seven months later than Apple. But I thought it was a great feature.
Hannah Clayton-Langton (06:15):
Okay, so now I'm gonna run you through our top AI moments from season one. The AI insights that I took away from season one are probably the things that have most shifted my mindset about how technology works. And AI is gonna be a huge focus of season two at listener request. So we're super excited to get into more detail there.
(06:36):
Just as a little reminder, Hugh worked in AI from the early 2000s at the likes of Microsoft, eBay, and Google. So he helped with some of their AI tech, which, if you haven't listened to our episode in more detail, you'll learn that AI's been around a little bit longer than people might think. And in this next segment, Hugh and I talk about large language models, or LLMs. So that's the likes of ChatGPT and how ubiquitous they are in
(07:00):
terms of daily users globally.
Hugh Williams (07:03):
AI is something that folks like me have been doing for, you know, 20, 25 years behind the scenes in large tech companies. So this whole idea isn't very new to me and to lots of people like me, but now the whole world can use AI in a consumer kind of way. So, you know, you can download an app, you can get it
(07:23):
on your phone, you can pull it out of your pocket, and you can actually kind of talk to it and reason with it. And that's a massive breakthrough. I think that's a little bit like the iPhone, if you like. So, you know, back in the day, computers were big things that were stored in rooms, and then some people had one at home that was on a desk, and then boom, this revolution happened, and now everybody has one in their pocket.
(07:44):
I think that's exactly what's happened here is AI has now become a product that's used by consumers, and then it's obviously swept the world.
Hannah Clayton-Langton (07:54):
I think my favorite episode of the whole first series was episode seven, actually, and that's where Hugh and I discuss how LLMs are trained, and we talk about the massive scale of building them, and there's some pretty cool stats that bring that to life.
Hugh Williams (08:08):
The scale of LLMs is just so incredibly, incredibly different. I mean, I'll use the word token a little bit later on, but you can just think words for now. These LLMs are trained on hundreds of billions or maybe even trillions of words to be able to generate the text that they generate. I've heard estimates that say that OpenAI's latest GPT-4, so
(08:31):
the ChatGPT that you're using today, was trained on about 13 trillion tokens, about nine trillion words.
Hannah Clayton-Langton (08:41):
And probably the most compelling thing about ChatGPT and its peers, as we know, is how they produce output that feels cognizant and human-like. In episode seven, Hugh shares that no one actually knows how this works, and it's part of their magic, and I don't use that word lightly. They're basically built with a technology that learns from so
(09:02):
much data that it can output plausible text based on any input you give them. And in this clip from episode seven, Hugh talks about how LLMs summarize text.
Hugh Williams (09:12):
Somebody said to me the other day, how does it summarize a document? And I said, well, nobody knows, really. So when you say, please shorten this document or summarize this document or turn this document into bullet points, it's just seen enough examples of that in the vast amount of text that it's seen that it's able to carry out that
(09:32):
task, right? So it's seen examples of a long document shortened to a shorter document, it's seen an example of an essay turned into PowerPoint slides, whatever it is. It's seen enough examples of that in the trillions of words that it's seen that it's able to do that. So you give it a simple instruction like summarize or shorten, and it can take the following content and know what
(09:53):
to do with it.
Hannah Clayton-Langton (09:54):
And the final piece of the LLM puzzle, which I particularly loved, was all about how they're trained and how they use GPUs, which is those chips that everyone's talking about that companies like NVIDIA make. Now, I've heard a lot about chips, GPUs, and NVIDIA, but I hadn't quite understood how they fit into the overall LLM story. So you have a bunch of GPUs which cost like 40 grand or
(10:15):
something.
Hugh Williams (10:15):
Yeah, of that order.
Hannah Clayton-Langton (10:17):
In USD.
Hugh Williams (10:18):
Yeah, absolutely. So, you know, 20 to 40,000 bucks per GPU card, and, you know, these data centers are absolutely full of them for this training process.
Hannah Clayton-Langton (10:26):
Just for the training process.
Hugh Williams (10:27):
Yeah, they're used in the evaluation as well. So when you want to buy your holiday to Florence or whatever it is that you're asking about, or help me cook a recipe, or whatever those kinds of things, the GPUs are used in that as well. But you need vastly more infrastructure for the training than you do for the inference, which is what we call the question-asking piece of this.
Hannah Clayton-Langton (10:45):
Okay, because there is a whole topic of conversation around like the compute power required by LLMs and like the environmental cost of it and the financial cost of it, but the main consumption of that compute happens in the training phase.
Hugh Williams (10:58):
Correct.
Hannah Clayton-Langton (10:59):
Okay.
Hugh Williams (11:00):
So training takes weeks or months, probably costs 50 to 100 million dollars to do. When you type a question, like, you know, help me plan my itinerary to Florence, that probably costs a very small fraction of a cent.
Hannah Clayton-Langton (11:13):
So we loved our interviews in season one, and we know you guys did too. We had two special guests, and we're having even more joining us in season two that you'll love, but no spoilers. So, first up, we had Jonathan Badeen, who is the co-founder of Tinder and the inventor of the famous swipe right, which changed the landscape of modern dating.
Jonathan opens by telling us about Hatch Labs and shares how
(11:34):
the app actually got the name Tinder.
Jonathan Badeen (11:36):
So the original Matchbox was to be an app, it wasn't, it never really existed. Because Hatch Labs' focus was actually on creating disruptive mobile apps, so native mobile apps and all. And so that was the focus. And then, of course, Tinder, when we were making Tinder, we were gonna call it Matchbox, but because of issues that
(11:58):
could arise with match.com and all, we started looking for different ways of, uh, different names and all. And Tinder was one of those, kind of went along with the fire theme that we were kind of going with. And so we ended up ultimately landing on Tinder.
Hannah Clayton-Langton (12:10):
Tinder was also famously the first dating app. And here Jonathan talks about some of the decision making behind the scenes that was really smartphone-centric for the first time and how that led to a truly revolutionary user experience.
Jonathan Badeen (12:23):
But there were a lot of decisions that got made because of the platform. And obviously, your swipe couldn't exist without gesture technology, which comes from a touch screen. We originally had sort of a quick and easy way to log in or to create an account, because you don't want to be typing a profile for an hour on a small little, uh, so Facebook login, which was more of a thing then than it is now, but
(12:44):
it allowed us to create these profiles real easy. We crafted the communication part of it more off of texting, as opposed to the previous things, which were more email-centric. You know, you've got this small screen, you're gonna put less information about the person right up front. And so the very first version had a photo, although not quite as large as it ultimately ended up becoming,
(13:06):
first name, age, and it had a number of shared friends and shared interests. And then you tap into the profile and you could get all of that plus a little written blurb.
But, you know, I think a lot of people think, oh, it's only photos. It never was only photos. However, it does turn out to be actually one of the most important things.
Hannah Clayton-Langton (13:23):
We had to ask Jonathan about how he invented the swipe right, and he shared a really insightful story about how flashcards were the real inspiration behind the way that he built the swipe right on our smartphone screens.
Jonathan Badeen (13:36):
I woke up one morning with this epiphany, just like literally woke up and just got really excited about how I thought you would make the perfect flashcards happen. And that was you would swipe in one direction for flipping the card over, and you'd make an actually good swipe, but then you'd swipe in the other directions for saying I got the card right or I got the card wrong.
(13:58):
And I kind of came up with this with an idea of, like, how I would use real flashcards, like in real life, not going to class, but like if I was sitting there, I'd start out with a stack of cards, and I would take that card and I'd put it into one pile if I got it right, and those are the cards I don't need to study anymore. You know, oh, I got this one wrong. I'm gonna put it over here into this other pile.
(14:19):
It's the one that I need to study more. And so now you got three piles of cards until you whittle it down to two. And so I envisioned those two stacks of cards, the right and the wrong ones, right off the screen of the iPhone, because your screen's small, right? And basically, that's where the gesture comes from, is dragging that card to the wrong or right stack that's just off screen.
(14:40):
So ultimately, when we ended up making Tinder, we had already landed on this sort of one at a time sort of card.
Hannah Clayton-Langton (14:46):
Our second interviewee was Nick Pelly, director of engineering at Waymo. We were really lucky to have him join what was our most listened-to episode from season one. Nick shares why users and riders love Waymo, why they're actually safer than human-driven cars, and how they're expanding across the globe. They're actually coming to London, and I am really waiting
(15:08):
to jump in my first Waymo. So, in this next clip, Nick explains to us, in his own words, exactly what Waymo is.
Nick Pelly (15:15):
Let me describe what Waymo is. It's a robotaxi experience where, you know, you pull out your phone, hail a vehicle, you know, much like you would do with Uber. In fact, in some places we partner with Uber, and, you know, the vehicle comes to pick you up. But in this case, the vehicle is empty. It's gonna pull over on the side of the road with no one inside. You get a private vehicle, a private experience, and it's
(15:36):
going to fully autonomously drive you to your destination. So this is a ride hailing service. It's what in the industry we describe as level four autonomy. You know, if you'd like, later we can get into some of those different levels. But level four meaning, you know, there is no human in the vehicle that is ready to take control of the steering wheel. There's no one in the vehicle who needs a driver's license.
Hannah Clayton-Langton (15:58):
And if, like most people, you're concerned about the safety of robotaxis, let's join Nick in discussing what the data shows about the safety of autonomous vehicles versus human-driven cars and how that debunks the myth that they might be in any way less safe.
Nick Pelly (16:14):
That's the most noticeable for the user, is you get a private experience, and that's a really big win. But what we think a lot about is the 40,000 road deaths, just in the US, 40,000 road deaths per year that are completely avoidable. And the safety benefits that an autonomous vehicle brings are quite dramatic.
(16:35):
We've now driven over a hundred million fully autonomous miles. And, you know, we've looked back at that data, and we have between five and 10x fewer accidents than human drivers who are driving in the same geographies. The safety benefit is really quite stark.
Hannah Clayton-Langton (16:54):
One thing that I took away from this episode is that Nick really loves his job and the challenges it brings to him as problems to solve. So there's a lot of hardware involved, and I learned some new words in this episode
(17:03):
radar, LiDAR, and other sorts of sensors on the car. But here's Nick talking about the breadth of the software challenges that Waymo have to solve as well.
Nick Pelly (17:12):
This is just one of the most interesting engineering challenges of our time, because of the, well, the complexity, but the breadth of engineering involved. And I've touched on some of the hardware side of things, but, you know, also on the software side, we have these real-time safety-critical systems on the vehicle, as well as, you know, rider experience and, you know, user interface systems in the
(17:34):
vehicle.
Then we have the mobile app, and we have a lot happening in the cloud. There's both the sort of ride hailing system, that you can think about matching demand and supply and having an efficient marketplace there, you know, much like other ride hailing companies do, but then also the simulation systems and the log replay and the ability to visualize and play back, you
(17:56):
know, what's happened in the field. There's such a rich ecosystem of tooling and infrastructure off the vehicle as well. I don't know if I've ever worked on a project with such a span of different software systems. It kind of brings every single software discipline together, as well as many different hardware disciplines.
Hannah Clayton-Langton (18:13):
So clearly the world is changing as autonomous vehicles are becoming more and more common. And here is Nick sharing a little bit more about what the future could look like in a world of autonomous vehicles.
Nick Pelly (18:23):
This is going to take a little longer for all the impacts to play out, because we're talking about city design, we're talking about manufacturing of much larger objects. We're talking about a safety-critical system that, you know, has a lot of engagement with regulators as well. But this is the direction. It won't just be ride hailing, it'll be personal car ownership, it'll be trucking, it'll be all forms of transportation over
(18:46):
time. You could imagine some quite high percentage of cities right now is dedicated to vehicles and especially to parking. I believe it's in the 30 to 40 percent range, if you look at a city by real estate, that's dedicated to parking.
Hannah Clayton-Langton (19:00):
Wow.
Nick Pelly (19:01):
And, you know, with autonomous vehicles, better utilization of the vehicles, so there's less parking. And when you do need to park them, you can easily move them outside of the city. So what this will mean for how cities are laid out and real estate is quite dramatic, and this will be significant, also to people's lives. I think this will make the world feel smaller, you know,
(19:23):
much like the jet plane did or the automobile did originally. It'll become easier to get from A to B because you can use that time much more productively, and you can know you're getting there much more safely. And I'm sure over time we'll see a sort of abundance of options. You could imagine autonomous vehicles that have beds that you can sleep in. I don't know what timeline that is on, but that's
(19:45):
clearly, you know, where we're going: that you can work, you could play, you could sleep, you know, in these cities, and just make the world feel smaller.
Hannah Clayton-Langton (19:53):
And last but not least, listener questions and the controversial tech trivia. So in our last episode of the season, we took some listener questions. Thank you to all of you who sent those in. And here is Hugh getting into some acronyms.
Hugh Williams (20:08):
LAN is local area network. So, you know, if you're in a building, you've got Wi-Fi, you've got some cables running around the building, could be your house, could be where you work, that would be called a LAN. So it's basically the network that is within your building. And then a wide area network is, you know, the bigger version of that, right? So that's something that a company might use to connect two campuses together. Or, you know, you've got some infrastructure out
(20:31):
in the field somewhere and you want that connected back to the head office, then that would be a wide area network that you'd be using to connect all of your infrastructure together. So folks like Google, Amazon, you know, those kinds of folks have very, very large, sophisticated WANs that connect together all their infrastructure, including all of their data centers, warehouses, offices. All these kinds of things are all connected together on a giant network that
(20:52):
you'd call a WAN.
Hannah Clayton-Langton (20:53):
One thing I learned was that Hugh has spent a lot of time in massive data centers, which I have to say is not something I've thought about much until they've hit the news a fair bit quite recently.
Hugh Williams (21:03):
I might share a couple of pictures on socials of me walking around data centers with some big data center infrastructure from the old days. You know, lots of blinking lights, cables, really cold, fluorescent lights. And they're gigantic pieces of infrastructure. I mean, they're some of the biggest warehouses you will ever see, just completely filled with computers. It's very, very cool.
Hannah Clayton-Langton (21:20):
Interesting. And they're like in random places where there's like loads of land available, right?
Hugh Williams (21:24):
Yeah, yeah. And you might put it next to a hydropower station or somewhere where there's a lot of solar available, or perhaps, you know, nuclear energy or whatever it is, because they do use a lot of power, basically to run the infrastructure and keep the infrastructure cool, because, you know, all of these CPUs and GPUs get very, very hot, and so you need a lot of cooling. And so, you know, put them next to rivers and pump cold water
(21:45):
through them, all sorts of interesting things. So they're in very interesting locations, often hard to get to.
Hannah Clayton-Langton (21:49):
And our last moment takes us back to our very first episode. Hugh and I were super nervous trying to figure out how to be podcasters, and we learned something really interesting about how engineers that know some of the super old coding languages can actually make some pretty good money helping companies out.
Hugh Williams (22:05):
So, for example, there's a coding language called COBOL. It was very popular in the 1970s. It's one of the most verbose coding languages, so it takes lots and lots of lines of code to get anything done. And you can get paid an enormous amount if you are a COBOL programmer today, because lots of big banks, insurance companies, these kinds of folks, telecommunications companies are still running COBOL systems.
(22:26):
As I said, I was in Thailand last week, I was talking to a bank executive, and most of their systems are still COBOL systems running on giant mainframe computers that they maintain themselves. And so if you're capable of writing code in COBOL, you can get paid very, very well. That's probably not true for Voyager 1 and Voyager 2, because they're probably really exciting jobs to have. They probably don't have to overpay for those.
(22:47):
But knowing historic programming languages turns out to be valuable.
Hannah Clayton-Langton (22:50):
So that's a wrap on the best bits of season one. Hugh and I had a ton of fun putting it together for you. And season two is coming soon, with our first episode out the 3rd of March. So we'll have a trailer out for season two in a couple of weeks, and I am really excited about some of the guests we have lined up, but I won't share any more just now. Please do like, follow, review the podcast, share it with your
(23:13):
friends, colleagues, family, and Hugh and I will be back soon. We can't wait to see you. Bye.