Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Oh, there's no clicking, it's all keyboard. It's all keyboard, and it's really powerful. And I would say, for those of us with a little gray right here...
Run it with a larger data set up on the cluster. We're going to give you, if you want to grab about a hundred cores, a couple of terabytes of memory. Keep the time below 60 minutes.
Speaker 2 (00:21):
Welcome back, fellow travelers, to another exciting episode of Tech Travels. In today's episode, we're going to dive into the topic of high-performance computing, also known as HPC, and understand a little bit about why this is a critically important element of building artificial intelligence and machine learning models. Now, this is a very complex topic, even for the most
(00:44):
experienced technologist.
So I thought, to help us understand this topic better, who better to break this down into more digestible, funny and witty pieces than Brooks Sehorne, a true pioneer in the tech landscape? And, Brooks, it's great to have you back on. Welcome back to the Tech Travels podcast.
Speaker 1 (01:04):
Thanks, man, I'm glad to be back on. The first one was a hoot. I got a bunch of pushback on some things I said, so I'm excited to find out what happens with this one. Literally, the collective "hold my beer" after that episode, so let's see what happens with this one.
Speaker 2 (01:25):
So I want to dive into this topic around high-performance computing, but I also want to just kind of help set the stage for some of our listeners here. You know, if I was a third grader, how would you explain high-performance computing to me in a way that I can easily understand it? You know?
Speaker 1 (01:35):
It's interesting when you talk about it in those terms, because, you know, trying to really break it down like that, I kind of get to this point where I say this: imagine if you could just snap your fingers, get a really big computer with just the right features you need, with just the right amount of memory, with just the right amount of CPUs and everything, do a job on it, like figure something out, like your math homework, and then get the answer back, and then you
(01:58):
release it.
It's what we've talked about for so long in the cloud. You know, cloud was always, you only pay for what you consume, so you can build the biggest thing you want, and you can immediately get rid of it. HPC is that way as well. Us, as you know, wanting to build machine learning models, wanting to build artificial intelligence, things like that. Those are huge jobs that require huge computers. So HPC gives us a path into being able to say, give me that
(02:22):
huge computer. Literally thousands of CPUs, terabytes upon terabytes of memory. Run the job, give me back the answer.
Speaker 2 (02:31):
So this is just simply a way for us to aggregate computing power that delivers more, you know, more of a high throughput for an outcome, right?
Speaker 1 (02:42):
Exactly, exactly. That's exactly the idea. You know, you've got some gigantic ETL (extract, transform, load) job that you need to do. That's a great example of where you could do massive ETL jobs relatively quickly, whereas normally, if you're just on your computer, maybe a really good desktop computer, it could take days, maybe weeks, to complete the operation.
(03:02):
With HPC, with that spread of your resources, being able to get huge resources, that's where that comes in. And let me put it in context for everybody, what I'm talking about. Think of computers like Frontier at Oak Ridge National Laboratory in Tennessee. This thing has like 9,400 CPUs inside of it. This thing is huge, and so that would give us the opportunity
(03:25):
within the HPC space, if you were doing like an extract, transform, load, to say, look, I need a couple of hundred CPUs, because the transform is going to be massive and it's going to be a ton of work. But the thing is that I only need it for this job, and once the job is done, I want to release those resources to other users. That's the basic concept of HPC, and that's why it's important to understand it. And the interesting thing is, this is
(03:47):
not a...
Speaker 2 (03:47):
This is not new, right? This has been something that's been out there since the 1950s and 1960s, I think. I remember back, you know, even, you know, people talking about the IBM Stretch supercomputer. That was back in the 1960s.
Speaker 1 (04:00):
So it seems like.
Speaker 2 (04:01):
I think it used to use transistors. I think I studied in college around vacuum tubes and, you know, really just thinking around how supercomputers at that time were really defined. So this is kind of, almost, I would say, like with artificial intelligence, with everything that's been happening in the AI space, there seems to be more of a renaissance, to get back to the hardware, right? You remember a couple of
(04:24):
years ago, it was, we moved off hardware and we moved to the cloud, right? Now everything is moving back to the actual hardware, the CPU, the GPU, and then programming it at that layer. And I want to understand a little bit more about this resurgence. What's caused this resurgence? I guess maybe it's just Chat
(04:47):
GPT, it's AI, it's the evolution, it's the revolution.
Speaker 1 (04:52):
It is. And the thing is, I liked the way you said that, because it is almost like a revolution. It's like we're going back to, okay, I'm not going to use a GUI, I'm at the console. Oh, I need to think about how much memory I'm actually using. Oh, I need to think about releasing resources, consuming resources, things like that. Whereas, you know, so often when we talk about technology, we're on our phone, some gee-whiz
(05:12):
interface, you know, being able to talk to ChatGPT, stuff like that. This is very basic stuff; this is command line, what kind of CPU you've got, having the hardware there. That's usually the case, and this is what we're seeing with a lot of really large companies. They've got these giant clusters sitting there that they're taking advantage of, and it's getting back to the core of
(05:34):
it. This is why so often people are surprised, moving into AI and ML, that the folks running these things are using these low-level Linux systems and these low-level Linux tools, all this really low-level stuff. It's like, where do I click? Oh, there's no clicking, it's all keyboard. It's all keyboard, and it's really powerful. And I would say, for those of us with a little gray right here,
(05:58):
it's cool because it's like, yeah, that's the way we used to do it, and just to see the power of it come back, to where we've got the power back, is just so exciting.
Speaker 2 (06:10):
It is incredible because, even though the technology has been around for some period of time, roughly about 50, 60 years, we're seeing this resurgence back into it again. There seems to be a real kind of skills gap, with people who've just never seen hardware like this before, right, just coming into this, and I feel like there definitely
(06:30):
needs to be a way for us to upskill. Talk a little bit about what you're seeing in terms of the evolution of the learner, the advocacy around AI development and learning, not just the language model but getting down to the actual hardware element itself. Talk me through some of that, you know, when you start
(06:51):
talking about AI and ML.
Speaker 1 (06:53):
A lot of us, and you and I are both the same way, we started learning about ML, we started playing with it, we started doing things with it, and it was great. But it was always within this context of, hey, isn't that fun? And we were doing it right here on our laptop. Maybe we had like a Jupyter notebook or something like that, or we ran something in the background, and it wasn't too heavy, it wasn't too intensive, and it allowed us to kind of
(07:15):
play with some of the tricks of it. For those of you out there who currently have a tear rolling down your face: that DeepLens, the Amazon or AWS DeepLens, has gone away. I still have mine. I wish it worked. Mine is a complete brick. But when did we go to that, Steve? The event that we had, it was in Florida. Do you remember that one?
Speaker 2 (07:36):
Remember that one? Yeah, it was... was it a tech kickoff? 2017, 2018, I think it was.
Speaker 1 (07:42):
I think it was 2018. Yeah, you could go to that one class and walk away with a DeepLens camera, so everybody was trying to get into it. But the thing was, that model was so simple. You could crunch it right there on your machine. You could push it onto that device, have the model up in the cloud. That's fantastic. Here's the problem.
(08:02):
If you've learned that way and, let's say, you get hired by, and I don't want to name any companies, just think big company that may be using AI and ML in some way. When you get there, what you're liable to hear is something along the lines of, hey, that's a great model, here's what we want you to do. Run it with a larger data set up on the cluster. We're going to give you, if you want to grab about 100 cores, a
(08:24):
couple of terabytes of memory. Keep the time below 60 minutes. We do have H100s out there, so be sure to bring those in as resources that you can use. And your head's just starting to come off your neck, because you're like, what are they talking about? That's the skill gap: understanding what HPC is and how you can put your job up there. You don't have to necessarily know all the architecture and
(08:45):
how things are actually working, but knowing it's there, and knowing you can use it, or should be using it, for really big jobs.
Steve, there are ML jobs that I know of out there that have runtimes of months, wow, in order to complete. And these things are consuming, you know, a thousand CPUs, several hundred terabytes of memory, some of the biggest GPUs
(09:11):
that you can think of. To run that over a couple of months on a machine like, you know, a Mac here or a Wintel box? Impossible. Can never be done. And so you've got to make that jump. How do I do that? And as far as the advocacy goes, I will say this to just about anyone: start looking into the user tools of HPC environments. I don't want to do any salesmanship here. I do work for a company that supports one of the biggest open-
(09:32):
source HPC software packages out there. It's called Slurm. You can go check it out. It's completely open source, which to me is pretty amazing, that it is in that state, and you can just go grab it, run it, run it locally and do stuff. Well, you'd have to do some work to get it to run, but the idea is that, with that, you would start to understand, how do
(09:53):
I actually put a job on a cluster? How do I actually do those things? And, to be absolutely honest with you, without even doing that, there's a lot of documentation out there that could get you going in the right direction. So if you're really serious about AI and ML, and you don't have that set of skills that can answer the question, how do I put this on a cluster? Start looking into it, get yourself educated on it. It's open source, open documentation.
(10:15):
You can find these things, and I highly encourage anyone, go out there and check it out, because it's going to be a big stumbling block for you, a big one, when you suddenly get that one day when they show you the giant cluster. Like, I've seen these clusters before, Steve. There was one I was at a couple of weeks ago. They were showing it off to us, and I was like, wow, look at all this stuff. I mean, cages I've never seen
(10:39):
before. Several of the cage doors have water flowing through the door. Normally it's like on the side; oh no, it's in the door. And there was an H100. For anybody out there who doesn't know what the H100 is, how much would that cost us, Steve, to pick up an H100?
Speaker 2 (10:55):
I think they're right around half a million dollars now, aren't they? A quarter million dollars?
Speaker 1 (10:59):
The one that I saw was half a million, and I was just like, oh my, when I saw it. Knowing how to put a job onto that thing and use it is critical, and so that's where that stuff comes in, and that skill gap that can bite you shows up. You know what it is? It's tantamount to this, Steve. Remember, we would talk to people about cloud, and we would get to networking, and we would
(11:20):
say, okay, 192.168.0.0/24. Who doesn't know what this means? And the room was just crickets, because nobody knew IP addressing. They knew cloud, but not IP addressing. If you're in AI and ML and you don't know how to use HPC, you need to get educated on it. But that's just the first part. I'll come to the second part later.
Speaker 2 (11:42):
I remember that we would do that exercise for students and learners, right. And you think about the explosion at which we're hitting artificial intelligence with large language models. Now, with Microsoft's Phi-2 and Phi-3, you're getting into smaller language models. Now you're looking at what Apple is doing with their ELMs,
(12:03):
right. There's just a rapid pace of innovation. And I think back to when we would talk a lot about Moore's law, and I love this concept, right, around Moore's law, and it was basically 10X every five years, 100 times every 10 years. But it seems like, and I'm just going to throw it out there, it seems like NVIDIA, they've gone a thousand X over the last eight, and there's two
(12:27):
more years to go. But, with that being said, the pace at which innovation is happening is just so fast. I feel like the moment of watching Spaceballs again, when we go to ludicrous speed. Ludicrous speed, go!
Speaker 1 (12:44):
Exactly. You know, the thing about it is, yeah, it is kind of ludicrous. What flips me out about the thing, though, is that there are still so many people out there saying, okay, you don't have a great business case yet. I don't know. I've seen some real good business cases where I've used tools like that to get jobs done really, really fast. Now, correct, no, it wasn't 100%, but I'm telling you what, it
(13:06):
sure cut down the work time on it. So, yeah, it's out there. And it starts making you wonder, moving at the speed that NVIDIA is innovating. Keep in mind, they put the Grace Hopper chips out there. Is it the Blackwell that are coming out next, in the fall? Oh my goodness, and these things are even faster than that. Once you start getting into those speeds, and everybody, this is the point Steve and I are making here: once you get into
(13:27):
that kind of performance, those big LLMs, you can run them a lot faster, you can make them a lot smaller. Suddenly they're sitting on your watch, trying to help you out, just in general. So my thing has always been kind of like, I don't expect it to be some big business driver. I really don't. I'm almost seeing it as this incredible assistant that's
(13:47):
going to live in and around our lives to really make things better. Now, of course, we do have the challenge of, you know, I want AI to wash the dishes and fold the clothes. I don't want it the other way around. You know, I don't want to be folding the clothes and doing the dishes while AI is over there actually doing my job for me. But we're kind of seeing a little bit of that. So, at that speed of innovation, and, by the way, Intel's CEO
(14:11):
came out this morning on that very point, talking about Moore's law, talking about the speed of innovation, talking about NVIDIA, because what those GPUs are doing has got them all spooked over at Intel. Going back to HPC: knowing, for example, what a GPU is, how it works, how to take advantage of it, like the different types of
(14:33):
mathematics that you can take advantage of on a GPU, that run much faster, knowing how to call those out in a job and take advantage of them. You've got to know how to do it, everyone. You've got to know how to do it, because once you get to that space, yeah, you can figure it out, yeah, they may show you. But show up knowing what you're doing. Just show up knowing what you can do and you'd be a lot better off.
Speaker 2 (14:53):
Good gravy. So, fun anecdote, right? And this is from a great source, ChatGPT. I ran this experiment through it, and I wanted to give it... remember, going back to our first podcast, we were talking about the lack of humor in artificial intelligence. I hope it was listening to our podcast, because here we go, all right. So the fun anecdote is this: the comparison humor between kind
(15:16):
of where NVIDIA is heading with the Blackwell architecture is, using traditional computing for AI tasks is like trying to tow a jumbo jet with a bicycle. With high-performance computing and Blackwell, it feels like you've swapped the bike for a rocket engine.
Speaker 1 (15:33):
Mm-hmm, mm-hmm,
exactly, exactly.
Speaker 2 (15:38):
Not far from the truth here.
Speaker 1 (15:40):
Yeah, and the thing is, talking about our last podcast, this is what came back to bite me, that whole hold-my-beer moment. Do you remember I popped off about how AI couldn't get the muddy sound in music that I can get when I pull down one of these guys back here? I had somebody within a week, I didn't tell you about this, Steve, within a week, send me a muddy-sounding bass line. And to make
(16:03):
it muddy, he trained the model to take some of the strings, and what will happen is, when it vibrates too big, everyone, it'll hit the fret, the metal fret, and you'll get that ringing sound. He injected that into it. And then, on top of that, it was almost like he figured out, how do I get a grainy AM-radio sound and put that on it, put it into the model, to the point of now, you
(16:26):
just put the note in, you send the notes, and you've got this muddy-sounding bass guitar. And again, it's that speed of innovation. When you have chips like what we're talking about, they can crunch down that data in a big hurry and get it out there. Now you're in a situation, I think, Steve, where not only is it, oh, you've come up with a great idea, but how fast can you
(16:47):
get that model created and pushed to the market? Remember that whole thing about how fast can you get your feature into production? How fast can you get that LLM squashed down and ready to roll on that device?
Speaker 2 (16:58):
Yeah, that's a really good question. I think that's the biggest thing. I think IDC did a study where they said 55% of organizations are still trying to figure out what their position on AI is, right, what their foundation is, and trying to understand what it is that they're trying to solve for. Almost every single day, I hear a lot of requests for,
(17:21):
you know, hey, talk about AI, talk about some prototyping, some sort of MVP. And the question always goes back to: what exactly is the problem that you're trying to solve for? What is the desired business outcome? And a lot of it is vague, it's very ambiguous, and I don't think that they're really truly able to define exactly what it
(17:42):
is that they're trying to do with it. I feel like we're still at very early stages, the early adopters. And you're right: those who are going to figure out how to rapidly prototype something, to get it either customer-facing or as much into their environment as possible, that's going to have the best total cost of ownership, the most return on
(18:06):
investment. You know, I think that's going to be kind of a winner, and I think most are probably still scratching their heads on it, right? Yeah, what is this widget?
Speaker 1 (18:17):
Yeah, what is this thing? What is this tool? I mean, it literally is like walking into a mechanic shop and seeing a tool laying there. You go, wow, that's a powerful tool, I wonder what I could use it for. And it's in a couple of places already being used, but you get this idea that it could be used in an even bigger space. And I think that's one of the things that gets me, kind of like that, when I think about
(18:38):
people getting into AI and ML and not knowing HPC, because of that thing right there. When you finally think of that, oh, wouldn't it be great to... yeah, that would be great. Now, how are you going to get that model done as quickly as possible, to get ahead of your competitor? Because I think, Steve, maybe you've seen this in your life.
(18:58):
Have you ever done something where you've thought of this great idea and then, two months later, somebody was doing it, and you're like, hey, I thought of that first, that was my idea? You didn't tell anybody. It's like there's this collective idea that descends on the planet and somebody does something with it. Understanding HPC, having access to HPC, that's going to be, to me, a little bit of a...
(19:18):
Am I going to say the right word? Democratization? Yes, democratization. I don't know if that's the right word, but you remember we used to preach about that. Like in the cloud, you can use anything if you can pay for it, and it's a lot cheaper. If you have access, if you're at an organization that has a giant HPC system,
(19:39):
you're going to be able to feed it all that data to get that really good model. This can become that incredible tool, so that when you finally do think of it, I know what I can do with this, you're actually able to spit out that model and make it work.
Speaker 2 (19:55):
You mentioned an interesting point, you know, working for an organization that has all the equipment, that has the capital to be able to invest in the equipment, right. Do I need to be in an organization that has HPC running in order for me to learn? Can I learn on my own? Is there a way for me to simulate running on an HPC, or high-performance computing cluster, without actually having to pay for it? It seems like there might be a little bit of a hook there.
Speaker 1 (20:20):
Yeah, we've gone all the way back around to here again. We're all the way back to trying to get our CCNAs in the early 2000s, and nobody owns a router. So it's like, how are we going to learn how to do this when I can't afford one of those things? There are actually some simulators out there. There's actually one we make, called DSO,
(20:41):
Docker Scale Out. It'll allow you to basically simulate a 10-node environment with GPUs. It's very small; it runs inside a virtual machine. We have one, and there are a few others out there that you can find that will allow you to do some simulation. If you're crafty, and I dare not show exactly how, I'm looking at
(21:04):
it right there, you can actually install Slurm, the product that we kind of watch over, open source, remember, on Raspberry Pi 5s. You can do it. You can build a small cluster at home for a few hundred bucks and actually do those sorts of things. So is it absolutely free? No, it's not free. Our DSO is free, running it inside a virtual machine.
(21:26):
You have to do a little bit of work. You could do it on a Raspberry Pi 5 as well, and actually the amazing thing about that, Steve, is if you do it that way, you would have to do it exactly the way it would be done with a giant cluster, because you're going to treat each one of those things like a node, and put in all the special other machines that you want. So you can do it. It takes a little more work.
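For anyone tempted by the Raspberry Pi route, the heart of such a home cluster is a shared Slurm configuration file describing the nodes. A minimal sketch is below; the hostnames, core counts, and memory figures are all made up and would need to match your actual Pis:

```shell
# Write a minimal slurm.conf for a hypothetical four-Pi home cluster.
# Every value here is illustrative; adjust to your own hardware.
cat > slurm.conf <<'EOF'
ClusterName=picluster
SlurmctldHost=pi-head
# Four compute Pis, 4 cores each, roughly 7.5 GB usable RAM on an 8 GB Pi 5
NodeName=pi[1-4] CPUs=4 RealMemory=7500 State=UNKNOWN
PartitionName=main Nodes=pi[1-4] Default=YES MaxTime=60 State=UP
EOF
echo "wrote slurm.conf"
```

The payoff of building it this way is exactly the point made above: you declare every machine as a node, so submitting and scheduling work on the Pi cluster goes through the same motions as on a giant one.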
(21:46):
Nobody has like a nice, you know...
Speaker 2 (21:51):
Go to this website, begin clicking? And if they do, I wouldn't trust it.
Speaker 1 (21:55):
Just log in with your credit card. That's all we need. Just log it, yeah, and it's going to work just fine.
Speaker 2 (22:03):
So there are simulators out there that can give me the capability. I can build my own kind of quasi-quantum computer with a couple of...
Speaker 1 (22:11):
Well, not quantum. Quantum and HPC... well, HPC.
Speaker 2 (22:13):
I was going to kind of... sorry, I was moving in that direction here. Quantum seems to be kind of that potential future trend of HPC, right? From your perspective, help me understand a little bit of the context around separating exactly the difference between high-
(22:34):
performance computing and quantum. Just, you know, broad spectrum.
Speaker 1 (22:39):
Yeah, here's the thing about it, and this is what I've heard from a lot of engineers in the space: quantum, to a lot of systems, will simply become another resource in the HPC environment. It'll just become what we call a generic resource that you can reach out and use. And so at that point, and again, this is understanding how to use HPC, when you submit your job, you can make a request in that
(23:01):
command line, yeah, you command-line this thing, and you make a request for when it becomes available, like time on a quantum box.
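Purely as a thought experiment: if a quantum device were wired in as a generic resource the way GPUs are today, the request might look like any other GRES line. Everything in this sketch is hypothetical; "qpu" is a made-up resource name, not a real Slurm one:

```shell
# Speculative sketch: requesting time on a quantum box as a generic
# resource, the same way you'd request a GPU today. "qpu" is invented.
cat > quantum_job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=hybrid-quantum
#SBATCH --ntasks=4
#SBATCH --gres=qpu:1          # hypothetical quantum resource
#SBATCH --time=15
srun ./hybrid_solver          # classical driver that calls out to the QPU
EOF
echo "wrote quantum_job.sbatch"
```

The scheduler would then simply hold the job until the quantum resource is free, which is the "request time on it when it becomes available" idea in the conversation.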
Now, there are a couple more things that we have to talk about. For example, understanding application layout, understanding you're not going to do this in a language that's not supported in that environment. So we're back to things like C, C++, those types of languages.
(23:24):
Python does work well in HPC. I'm a big Rust guy. What do they call us, Rustaceans? Rust does have some support in HPC. So knowing how to write code in those particular languages obviously is going to be a big part of making sure it works. But that's really the end point of it, Steve: when a lot of these quantum machines become available, if you're in an organization that can afford one
(23:47):
or can access one, it could become part of a cluster, and then it would become a resource that you could take advantage of when you actually push out there. So that's another big power of HPC. I would expect that, once they become available, our fun friends at name-your-favorite-cloud-provider will have them out there that you could spin up, and they have different HPC
(24:08):
solutions as well. They're not going to be free. For example, ParallelCluster in AWS: you could spin up a cluster right there and experiment with it. Make sure you set your billing alarm, kids, and you could set it up. And then, once they have quantum available, you could theoretically spin that up, add it to your cluster, and make it a consumable resource. So once quantum becomes available, I really see it as
(24:30):
becoming a quick-access resource for anybody who needs to be able to use it in an HPC environment.
Speaker 2 (24:36):
Quantum's going to break AES-256. True or not? Debatable.
Speaker 1 (24:45):
You hear the hesitation? Because I spoke to somebody not too long ago, who I cannot name, who I absolutely trust, and he was like, yeah. And I was like, really, dude, you think so? He was like, look. His point was well taken: any key whatsoever that we use for cryptography is not about encryption, it's about time of encryption.
(25:05):
How long do you need to protect that data set from being decrypted? Great story about this guy. He was going to a football game, won't name it because I don't want to place them anywhere in the world. It was American football, everybody, you know, the game where we only use our hands. Anyway, he was going in, and this was a long time ago.
(25:26):
This was, when he told the story, about 10, 12 years ago. They were using these portable scanners to, like, swipe the credit card, print the ticket, give it to people, and you can go in; so you could buy it right there. Well, he was watching what they were doing. He was telling me they were changing the batteries out pretty constantly. So what this guy did was, he literally went the next day, because he used to be a member of, or he owned, a consulting
(25:47):
group before he became a C-level officer at a company, and he said, I think you're over-encrypting. I'm pretty sure you're over-encrypting. They were using like 256-bit encryption on these things, and he said to them, you don't need to do that. You need to protect these tickets for about 10 hours. Once the game is over, let anybody hack it. Great, you printed a ticket for a game that's already happened.
(26:08):
You're a knucklehead.
So I think, based on what he told me, yeah, he fully expects it. Now I kind of expect it, and it's just a matter of time, because encryption, more than anything, is protecting data for a certain amount of time, and quantum can do it. If the math all works out, then, yeah, it's going to crack it real fast. That's why you're seeing all these people coming forth and saying, we've got a quantum-proof encryption protocol, and I'm
(26:31):
like, really? You don't even know what quantum is yet. And, okay, great, go ahead and do that.
Speaker 2 (26:40):
But at least it's a step in the right direction, potentially, yeah. Let's hope that it doesn't happen; a lot of Microsoft cryptographic keys are in AES-256. Swerving back into HPC again, you know, what I find really interesting is when people talk about high-performance computing, HPC, with edge computing. And then I
(27:01):
think about things like bringing the HPC capabilities closer to data sources, like IoT devices, reducing latency, increasing processing speed. I mean, that to me is really, really cool. I can't wait for that to happen.
Speaker 1 (27:16):
We have a customer who has a cluster on a boat. No, no, really, like a, you know, seafaring vessel. And what they're doing is, they're running data crunching on sea temperature; they're dropping probes and stuff like that, and they're doing that sort of work right there on the ship, and then, once it's all
(27:39):
crunched down, satelliting it up. So absolutely, you can do that sort of stuff much closer and take advantage of it. The idea that, okay, I've got a machine here with 10 cores, it could go real fast; wouldn't it be neat if I had 10 of these machines, and via HPC I could have this virtual 100-core machine that would run the job much faster?
(27:59):
You can stick that on a boat and get that faster. Now, the other side of it that I always kind of go back to is, would it have been better just to siphon that data all the way back to the cluster sitting somewhere? You're going to have to decide; you're going to have to figure that one out yourself. But I will say this: you've got the option, and in some cases,
(28:26):
like, let's say you're at the North Pole or the South Pole, you may have to go with, we've got to do it on site. And, by the way, heat's not a concern there.
Speaker 2 (28:29):
So run those things like crazy, yeah. Yeah, I mean, you're probably going to have to get into something like low Earth orbit to be able to send the data through some sort of satellite communication. You talk about being in remote, distant places. You talk about having a boat that's actually testing water temperatures. You're probably way out there in deep waters, far away from cell phone communications and
(28:52):
towers. So probably doing something with low Earth orbits, or LEOs. Close enough to probably see space aliens, just to get close enough to see them.
Speaker 1 (29:01):
They're coming down
there, like, you know, you're doing it wrong here, we'll give you something. I wish that was true.
But here's the other thing about it, though, that I was really surprised by, talking to a lot of researchers who are creating these types of software applications. In a lot of cases, as their model's building, the steering that has to happen in order to end up with a really good model, you can't send the data up and get that right.
(29:24):
It's got to be local, going to your idea: I've got to keep it closer. So, as these models are building and it's pulling those water temperatures up, it's kind of steering the way the application's going, in terms of the logic, to actually create that language model. So you can't have it sitting back somewhere in Colorado. It's got to be on that boat in the middle of the Pacific.
So once it's done, low Earth orbit, get the data over,
(29:47):
everybody's happy.
But you're going to have to get it closer to really be able to take advantage of what you've got going on, in terms of building that model in a reasonable amount of time.
Speaker 2 (29:55):
I would think that
cruise companies would be all
over something like this, or already have the capabilities, where they're doing cruise ships, right? They're out there everywhere. They're all over the international seas. They're going to different parts of the ocean.
Speaker 1 (30:07):
For anybody who's watching, if you're seeing my face kind of warp out, it's because Steve knows good and well I know about this, and I know a particular cruise line that is absolutely doing this. And what they're doing is fantastic, because the example I got to see from them was the question of: is the passenger okay? That was the question they were trying to answer, and it literally,
(30:29):
to me,
it knocked me over, because it was like next-level customer service. Their point was coming not from a negative but from a positive.
If we have a guest aboard one of our ships, and I hope I'm saying that right, because I got in a lot of trouble there, because I can't remember if I was supposed to say ship or boat. One of them, you're not supposed to say. I think it was ship. So anyway, they want to be able to say, oh, we have a customer
(30:54):
there who's potentially drunk, we need to go help them. Like, we need to send a crew member up there to help them get back. How were they doing it? They were doing it with ML. Where was the ML running? On board the ship. Because, you know, if you send that data up, wait to get an answer and come back, they may have fallen overboard. So, yes, that idea of getting it close like that. And
(31:14):
it's not even just, you know, we're talking about that remote stuff. But there's also cases where there is, like, see if I can walk up on this one carefully, a great park to take your family to for fun, and that's all I'm going to say. They're using that type of stuff too, from a security standpoint and to enhance the experience of the people who are actually there. For example, let's say there's a character that your
(31:38):
kid is just in love with. So what you do is, you give them money and they give you the enhanced experience. They can literally find you in the crowd, make sure that's who they are to talk to so-and-so, and use the modeling to actually make sure that when the person steps out to say hi to your kid, they're in an area
(32:01):
of the park where there are fewer people, fewer chances for other people to come up, to increase the possibility of that one-on-one experience.
So, everybody, if you think this stuff has not got some amazing capabilities, you're wrong. And this, to me, is where it really gets me, Steve. In some cases, it's almost getting to the point that this stuff is going to make things look magical. Remember that old quote about how, if a society has sufficiently
(32:24):
advanced technology, it will appear like magic?
That's where this stuff is taking us with AI, ML, and being able to crush these big jobs using HPC to quickly get it out into the space. I'm so excited.
Speaker 2 (32:37):
I really am.
And you're right, I think a lot of people are excited. I know, me personally, I've really been excited over the last six months to see the trajectory of where things have been the last three to five years, and then all of a sudden, probably in the last year, 18 months or whatever, you've seen this huge explosion, a massive amount of concentrated
(33:11):
energy into a particular topic, specifically around AI, where there's more now than ever before. You have more access to things such as training, learning, education. Everything's online. It's very low to no cost in terms of being able to learn, adapt and pick up a new skill, which is also kind of why I love it, because there's something new coming out all the time.
Speaker 1 (33:31):
Exactly.
There was something I was looking at the other day, and I ground my teeth a little bit, because it looked like, oh great, somebody has a new acronym for AI. Super duper. It was like gear-reversed low language modeling with inherent redundancy. Okay, just stop right there. This sounds like jibber-jabber. But the point is that, yeah, there's so many people working in the space coming up with
(33:54):
these ideas, and I think some of them want to make it sound like it's something special. But in a lot of cases, when you really look at it, it's those old concepts, those old principles, injected into the space, because some of those principles stay the same. And I think, more than anything, that's why we're seeing the explosion. We've been there, we've done that, we've learned the principles.
Now let's put these principles in place with AI and ML, and we
(34:16):
don't have to relearn it, and we can go even faster.
What I'm curious about, though, is what is the new stuff that's going to be coming out of this at some point. It's not going to be like, oh, a new song, or somebody's done something funny, you know, Will Smith eating spaghetti. It's not going to be that. It's going to be something totally weird, something totally different. And I think the challenge for a lot of us is going to be
(34:39):
trying to get over that thing of, is this evil, is this bad, and go, no, no, no. It's just technology. It's the application of it where we're going to have to ask ourselves where exactly we're at with this sort of thing.
Speaker 2 (34:51):
You know, I
listened to a lot of interviews
with CEOs. I listened to a lot of people who are very concerned around the social impact on society of artificial intelligence. And there's a lot of skeptics, in terms of saying, listen, it's great for the 10 million people who are already currently
(35:12):
working in it right now. What are we doing with the other 8 billion people on the planet that don't have access to this advanced technology? Right? You know, it's kind of, how do we shift the paradigm to get everybody involved? And I think it's going back to kind of like Maslow's hierarchy, which is, you know, when people don't have to worry about fighting for food and struggling and things like that, they can then focus on things such as learning
(35:33):
and training, education. Anyway, but I digress. No, that's a great point.
Speaker 1 (35:37):
That is.
That is a huge deal.
Speaker 2 (35:39):
Yeah, but I think
it's interesting, because again, you see AI in almost every aspect of our life, every aspect of our society. Everyone is going to be interacting with some sort of AI entity in the next 12 to 14 months, if not already right now. I guarantee you, if you're booking a travel ticket anywhere right now, you're probably going to be
(36:01):
chatting with some sort of virtual agent.
Speaker 1 (36:03):
That's probably the
chatbot, right?
Speaker 2 (36:07):
I think that I'm very
excited about the landscape, to
see what happens. I know I need to continue to keep educating myself and keep up the pace. Right, and I think again, it's that thing where you have to run as fast as possible just to keep up.
Speaker 1 (36:20):
Yeah, yeah, you do.
And I will also say, this is something I've told a lot of people about technology, and there's no nice way to say it. Oh my gosh, when did it start for me? It was probably about 1989, 1988, working, trying to get my
(36:41):
degree in organic chemistry, starting to work with computers and stuff like that. The bug bit. It has not let go of me since then. I absolutely still love technology. My goodness, I'm such a nerd. If you don't have that passion about it, this can be kind of a tough business, because you've got to learn your whole life. Yeah, and that's part of the fun for us, learning how to do
(37:02):
these things. Not to mention exactly what you said: I'm back to the command line again. Yeah, that's awesome. So where's the mouse? Don't need it, just put it away. Put it away, it's going to slow you down. Just put it away.
And it is that passion about it, and the fun and the excitement. So I would say to a lot of people, and this is what I hope a
(37:23):
lot of people get from this: when it comes to technology, let's make sure the door is properly marked. It's real big. There's a big neon sign over the top that says everybody is welcome. Let's make sure that is there. But at the same time, let's not be ignorant and start walking into the crowd and shoving people towards the door, because they may not want to do this.
(37:44):
It may not be something they want to do. But I will guarantee you, there's probably somebody in that crowd out there that you never thought about, who's got a real heart for this stuff, who's looking at it going, I'll never be a part of that. We need to make sure that they know: HPC, AI, ML, the door's open. You are welcome. We need all the help we can get.
Speaker 2 (38:12):
Yeah, somewhere out there there's another Jeff Bezos or Jensen Huang, or basically the next Mark Zuckerberg. Right, there's the next innovator out there. Like, you've got these people with a huge capacity to learn, and, you know, they just need to get in touch with the technology.
Speaker 1 (38:32):
That's what I really
love about it.
Yeah, I mean, it's incredible, and the applications of it, it's just ridiculous. I mean, I was thinking the other day, just going through the grocery store, just looking around, like, there are so many applications for this stuff sitting here. There are so many applications. I want an application where I can show the thing the watermelon. Everybody does this: they'll pick up a watermelon, they'll thump it. Yeah, this one sounds good. You don't know what you're doing. Or my wife has got this thing where, if it's got a nice yellow bottom and the bands are real wide,
(38:53):
that's a good one. No, that doesn't mean anything. But if I could train an ML model to pick out a great watermelon, and it could be integrated into my glasses so I could just go, there's a good one? Let me tell you what, a 99-cent download, we'd be rich in no time whatsoever.
There's so many applications for this stuff, and I think
(39:18):
sometimes we get ahead of ourselves thinking it's going to be something huge, like it's going to be able to fly the plane. No, back up, don't need that. Watermelon picking. That could be fun and interesting.
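Brooks's watermelon picker is a classic supervised-classification setup: turn each melon into a feature vector, label some examples, and let a model score new ones. A toy sketch with a hand-rolled nearest-neighbor classifier; the three features (thump pitch, ground-spot yellowness, stripe width) and all the labeled data are entirely made up for illustration:

```python
import math

# Hypothetical features per melon: (thump_pitch_hz, spot_yellowness, stripe_width_cm)
LABELED = [
    ((110.0, 0.9, 3.5), "ripe"),
    ((115.0, 0.8, 3.2), "ripe"),
    ((180.0, 0.2, 1.5), "unripe"),
    ((170.0, 0.3, 1.8), "unripe"),
]

def classify(melon):
    # 1-nearest-neighbor: the label of the closest labeled example wins.
    nearest = min(LABELED, key=lambda ex: math.dist(ex[0], melon))
    return nearest[1]

print(classify((112.0, 0.85, 3.4)))  # close to the ripe examples, so: ripe
```

A real version would learn from far more examples and fancier features, but the shape of the problem, features in, label out, is exactly this.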
Speaker 2 (39:26):
I like the idea of
that. We'll stay away from the no-airline-pilot flights for now.
Speaker 1 (39:34):
Well, you know, we
say that, we say that.
I don't know if you heard the story. One of our country's veterans, I believe what was happening was, he thought he had indigestion. Turns out, when he got to the hospital, he was having a heart attack or something like that. He realized he couldn't drive himself. The man had a Tesla, got in it, and it autopiloted him to the
(39:58):
emergency room. He got out, told it, go park yourself, basically. That is amazing. That is incredible. When that stuff starts happening, that's what I'm talking about. Yeah, I still want my big truck that burns gas and that I can go mudding in, but knowing that that thing is there and can do that for us, wow. And all we've got to do is start learning about AI, ML, how to
(40:21):
use HPC to run these big jobs, and we could change the world. Exactly. There's some kid out there somewhere who can make this happen, and I cannot wait for them to do it.
Speaker 2 (40:29):
Yeah, absolutely,
Brooks.
I could go on about this all day. I can't begin to thank you. So again, thank you so very much for joining us today. Thank you for sharing your insights on this really important topic, high-performance computing. Again, it's very complex.
Speaker 1 (40:47):
I hope we did it
justice, to be able to kind of
break it down for our listeners. We scratched the surface of just a bit of this monstrous thing, but I'm hoping that by scratching that surface we have enough people go, oh wait, I need to go look.
Speaker 2 (40:59):
Yes, you do. Yes, you
do. Yeah, and Brooks, thanks
again for coming on. Yes, thank you for having me. Oh, it's a pleasure. And to all our listeners out there, thank you for tuning into this episode. I really hope that you found this informative and very fascinating, just as much as Brooks and I did. Again, we really appreciate your support on these journeys. Don't forget to subscribe to the Tech Travels
(41:21):
podcast on your favorite podcast platform. Stay tuned for the next episode. Until next time, stay curious but, most importantly, stay informed, and happy travels.
Speaker 1 (41:33):
Absolutely. Bye,
everybody.