
August 21, 2023 52 mins

The Audit - Episode 24

In Part 1 of the Tech Lessons Series, prepare to be transported into the future of computing resources with our fascinating guest, Bill Harris from IT Audit Labs. We're opening up the world of processor design and specialized workloads, discussing the intricacies of chip fabrication, the genius behind improving processor speeds, and the art of creating modern processors. Get ready to discover a realm of substrates, lithographies, and elements that form the backbone of future processors.

Ever wondered about the application of Moore's Law in real life, or what's really behind processor clock speeds? This episode answers all that and more, bringing in exciting insights into the clever tactics used to amplify modern computation. Dive into the mechanics of how modern processors are assembled and learn about advanced technologies such as 3D NAND, chiplets, and SSL acceleration that are revolutionizing the field.

As we look forward to the future of computing and the exciting investment opportunities it presents, we delve into the potential of semiconductors, the massive CERN particle collider and the intricate challenges of breaking into the semiconductor industry. Don't miss out on our spirited conversation on the potential of DNA and crystalline molecular storage, and the role of quantum computing in enhancing processor speeds. And remember, amidst all this tech talk, the importance of security, risk and compliance controls to safeguard our clients’ data remains paramount. So, buckle up and come along on this exhilarating journey into the future of computing!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Eric Brown (00:02):
You're listening to The Audit, presented by IT Audit Labs. Thanks for joining us and welcome to today's episode of The Audit, where we will be discussing classical computing and what will be shaking up the industry beyond 2030. Join us on a journey with IT Audit Labs' own Bill Harris to

(00:25):
learn more about the future of semiconductors, advances in lithographies and what comes after silicon. You're not going to want to miss a moment of this information-packed episode, so make sure to listen until the end. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials and subscribing on

(00:47):
Apple, Spotify or wherever you source your podcasts. More information can be found at ITAuditLabs.com. Well, we are here today to talk about the future of computing resources or, I guess, the future of compute. We've got Bill Harris joining us, so thanks, Bill, for coming

(01:09):
on and putting together this presentation. I've had the chance to hear it one time before and it's really awesome. Thank you for doing that. This is part of a three-part series where you're talking about quantum and then the future of storage as well in other podcasts. We've got Alan Green with us, so welcome, Alan. Alan, you are from where?

Alan Green (01:32):
I'm from a company called Fair Isaac Corporation, more lovingly referred to these days as FICO. We are known for your FICO credit score in the market, right, but we do more than that. We also sell software, and my role is I'm Senior Director of

(01:52):
Infrastructure for the software division, and so I own that responsibility globally. Awesome.

Eric Brown (02:01):
So as an employee there, do you have the ability
to adjust your score a little bit?
How does that work?

Alan Green (02:07):
I wish I could, but they keep me far away from the data. Gotcha.

Eric Brown (02:13):
And then we've got Nick Mellem, a regular here hosting the podcast with us.
So thanks, nick.

Nick Mellem (02:20):
Yes, sir, happy to be here.

Eric Brown (02:22):
We're going to learn some stuff today, and Nick, Bill, Alan and I go back a little ways from a previous organization that we all used to collectively work at named Thomson Reuters. And Alan and Bill, did you guys spend much time working together? Were you in different divisions?

Alan Green (02:43):
Well, we were in different divisions. Bill had the sweet spot in the architecture space and I was more so on the operations side, but we absolutely interacted.

Bill Harris (02:54):
A little bit afterwards too, yeah.

Alan Green (02:57):
Yeah, admittedly, yes.

Eric Brown (02:59):
And you guys share a common hobby. And, Alan, I saw you drinking from a red Solo cup. What was in that, Alan?

Alan Green (03:09):
Iced tea, my friend. Slow iced tea, to keep the pipes well lubricated for the conversation today.

Eric Brown (03:17):
Otherwise known as bourbon. And Bill, you're an aficionado as well, are you not?

Bill Harris (03:26):
I am, and two nights ago I had one of the worst Manhattans that I've had, in Manhattan. So if you go to a show on Broadway and you pony up to the bar, don't trust what they're going to put in that drink. They're not well trained, they really aren't.

Eric Brown (03:45):
Was it something off the rail? What was it?

Bill Harris (03:47):
No, they put in too many bitters. So, you usually watch the guy, you know, put in like a dash or two of bitters; I watched this guy dumping in, like, you know, eight to 10 dashes, so it didn't come out right. And, Nick, are you a bourbon guy also?

Nick Mellem (03:59):
I am, and I would say I'm pretty strict about only drinking old fashioneds, so I'm understanding what Bill is saying with the bitters, and be careful with that.

Eric Brown (04:10):
Awesome. Well, we'll leave Alan to his tea and let's jump into this presentation. Bill, you want to give us a little backstory on this, how you got started, how you put this together, and then take us through it. It's pretty exciting.

Bill Harris (04:24):
Sure. So my field, one of the things that I focus on, is futures, technology futures. I've got a pedigree in storage and compute, so I found it pretty easy to latch onto these things and focus on what's coming up in the future. And I don't really care too much about the software aspect of it, so really I'm more concerned about the

(04:45):
hardware and the physics behind it. So, consequently, I've had opportunities to connect with some of the advanced labs people from IBM, from EMC, from HP, some of the biggest companies, and I've been able to visit their advanced labs and talk with the scientists behind this stuff and pick their brains a little bit from a non-product

(05:08):
perspective about what's coming up and how it's going to shake the industry. For me, that's really interesting, and that's why I'm here today, to share it with everyone else. So for today's agenda, I'm going to talk about the future of compute, from current day up until around the mid-2030s. Our agenda will be in five parts. First, I'm going to introduce everyone to compute foundries,

(05:30):
where these chips are fabricated, what the challenges are today and what we're doing about those challenges. It's important to understand where the processors are made, and from that we'll get a better understanding as to what's coming up. We'll talk a bit also about what makes a processor better, so getting beyond those foundries. What are the scientists doing today to improve processor

(05:52):
speeds, processor yields and keep things competitive in the decade ahead? That will necessarily lead us into a conversation around processor design and some of the constraints that we have to deal with, as well as how we're overcoming some of those constraints, and we'll talk about some of the innovative things that are getting introduced to processor design

(06:13):
today and in the near future. We'll also talk about the physics and the chemistry that go into the processors. So it's not just a matter of how these things are assembled, but right down at the atomic level, what are the physics of today's modern processors? We'll talk about substrates, talk about lithographies, talk about

(06:36):
the elements that are employed to build processors now and in the future, alternative elements that they're looking at, and then we'll wrap it all up and we'll talk about really kind of what this means for the industry and what we might expect to see in terms of some of those benefits and some of the challenges that lie ahead.
So, first off, we'll discuss where the processors are

(06:59):
made, and it's important to start here as we look at the five big foundries. Now, 90% of the world's processors are made in these five foundries. First up is TSMC. That is the Taiwan Semiconductor Manufacturing Company and, as the name implies, it's based in Taiwan, and they produce 54% of the world's

(07:24):
microprocessors today, from AMD, Apple, NVIDIA, other types that are in there. Number two is Samsung, at about 17%. This is a South Korean company and they develop processors for NVIDIA and for IBM, most notably. UMC is number three, also a Taiwanese company, producing

(07:48):
processors for Texas Instruments and Realtek. UMC is about 7% and they're tied with GlobalFoundries, which was founded in 2008 or 2009 by AMD and then subsequently spun off. AMD wanted the capital and they also wanted to focus really on being a fabless company, so AMD used that capital to really

(08:11):
great effect. But now GlobalFoundries exists independently and they're the only major foundry based here in the United States. They produce processors still for AMD and for Qualcomm. And then, rounding out the top five, is SMIC. This is the first Chinese entry and they produce processors for Qualcomm, Texas Instruments and Broadcom, among others.

(08:37):
So you might see the problem here, but as you look at these five foundries, the issue is that four of them, and approximately 83% or so, are based in Asia. So that creates a problem, because any type of political or financial issue in Asia will affect worldwide supply, and

(09:02):
that has national security implications, it has implications for the world stock markets and so on. It's just a lot of risk to have in one place, especially when you consider the strife that's happening between China and Taiwan right now. So, in an effort to ameliorate some of that, the Biden administration introduced in 2022 the CHIPS Act, which

(09:24):
encourages companies to build fabrication plants here in the United States, and it will subsidize their efforts to do that. So, consequently, a number of companies have stepped up on that. TSMC is building a fab in Arizona, Intel in Ohio and Micron is looking at New York.

(09:45):
So building some of those processor factories here in the US will, I think, help stabilize some of the risk that would otherwise be present.

Eric Brown (09:59):
Bill, in the top five there, I don't see Intel. I know you say they're building one out in Ohio, but where are those manufactured today?

Bill Harris (10:08):
So, if I'm not mistaken, I think Intel has one in New York, but Intel's not in the top five, right? So they are working with other foundries here to produce.

Eric Brown (10:20):
And when you say chips, it's any of the processors that we might see on a motherboard or in a phone or in a car, or just the hundreds of thousands of areas that these chips could be in. And, as I understand it, you could have a device that has chips from multiple different manufacturers.

(10:41):
That's kind of the IoT problem, right? So you could have chips from all of these different foundries in one device.

Bill Harris (10:48):
Yeah, absolutely, yeah. This term chips here really covers the whole gamut, because when we think about processors today, we might think about the stuff that's in our computer, and that's actually, in today's world, a minority of the processors that are out there. You correctly named all the ones in your phones and the one in your refrigerator and the one you have in your doorbell and

(11:12):
so on. It really is just a ton of stuff out there, and so this presentation will really kind of talk about all of them, but it'll be focusing more on sort of the higher-end processors, the types that you see in enterprise spaces for the most part.

Alan Green (11:29):
So, Bill, I have been led to believe that the reason most of these chip manufacturers were in that region was due to two factors: the product required to build the chips was abundant there and labor was inexpensive. Can you comment on whether or not that's factual?

Bill Harris (11:52):
Yeah, I think it is factual. I think the labor certainly is inexpensive. That'll probably be your number one reason. Number two, to an extent, as far as the product being readily available there: yes, the product is available there. It's also, when you talk about fabrication labs, a very sticky thing too.

(12:12):
So, once you have the infrastructure in those places, that's going to be kind of where you tend to continue to put it. With that said, though, and I'm going to get to this in just a moment, we're going to talk about silicon and just how abundant silicon is. So you're going to be able to find silicon, which is really the primary ingredient in these processors, being enormously

(12:32):
abundant across the world. But, yeah, I would say that what you're saying there is generally true, but that doesn't rule out putting it any other place.

Alan Green (12:43):
Okay, thank you.

Bill Harris (12:46):
So I want to talk a bit about how scientists today are improving CPUs over time. Now, when I talk about a CPU, literally speaking the central processing unit, the main processor that gets things done, I'm including any sub-processors, your ARM chips, your FPGAs, etc.

(13:07):
It really kind of includes all of this. First and foremost, they're trying to make them smaller, and this is done for a number of reasons. First of all, smaller CPUs really just cost less in terms of materials. They also generally draw less power. You need to feed a smaller CPU usually a little bit less

(13:28):
voltage than you do for a larger one. Because you're trying to push the electrons a shorter distance, they tend to produce a little bit less heat, and they're certainly going to result in a lot less latency, again because of those speed-of-light issues as you're firing electrons down the path.
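To put that speed-of-light point in rough numbers, here is a small illustrative Python sketch (not from the episode; the figure of on-chip signals moving at about half the speed of light is an assumed ballpark, not a measured value):

```python
# Illustrative only: how far a signal can travel during one clock cycle,
# assuming on-chip signals propagate at roughly half the speed of light
# (an assumption; the real figure depends on the interconnect).
C = 299_792_458          # speed of light, m/s
SIGNAL_SPEED = 0.5 * C   # assumed on-chip propagation speed, m/s

for clock_ghz in (1, 3, 5):
    period_s = 1 / (clock_ghz * 1e9)          # duration of one clock cycle
    reach_mm = SIGNAL_SPEED * period_s * 1e3  # distance covered in that time
    print(f"{clock_ghz} GHz: a signal covers roughly {reach_mm:.0f} mm per cycle")
```

At 5 GHz that is only about 30 mm of wire per cycle, which is why shorter paths matter so much as clocks rise.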
Important to say here that when we're talking about smaller

(13:49):
lithographies, you may see some of the manufacturers talk about, well, we've got a 7nm process or a 5nm process. These are not comparable among manufacturers directly, so these have really become marketing terms. So, for example, Intel's 10nm process fits about 100 million transistors into one square millimeter. TSMC's 7nm process does about the same thing. So there's just a difference in how they're building those lithographies that results in that sort of nuance in the way that they name them.
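For a rough sense of those figures, here is a small Python sketch (illustrative only; the densities are the approximate numbers quoted above, and the projection simply applies the Moore's Law doubling discussed next):

```python
# Illustrative only: approximate densities from the conversation, plus a
# naive Moore's Law projection (doubling every two years).
process_density = {
    "Intel 10nm-class": 100,  # million transistors per mm^2, approximate
    "TSMC 7nm-class": 100,    # roughly the same, despite the different name
}
for name, mtr_per_mm2 in process_density.items():
    print(f"{name}: ~{mtr_per_mm2}M transistors/mm^2")

density = 100
for year in range(2023, 2031, 2):
    print(f"{year}: ~{density}M transistors/mm^2 if the two-year doubling holds")
    density *= 2
```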
Now, looking beyond just the reduction in lithography and

(14:34):
making things smaller, I also want to point out that Moore's Law, which has become pretty famous at this point, about how every two years the number of transistors on a microchip will double, still lives. Its death has been touted for a while now, for several years I think, but it kind of keeps pushing on and kind of going past the next level.

(14:55):
I think it's got another couple of years ahead of it yet before we have to reevaluate that, for reasons that will become apparent. In addition to just reducing the processor to get the benefits that I just talked about, one of which is to get less latency out of it, I also want to point out that, by and large, processor clock speeds have plateaued. And this happened like 15, 18 years ago at this point, where it started to kind of creep up around that 5 GHz mark and has roughly stabilized around that area. And that is just because, as things have become a lot smaller,

(15:38):
pushing things much over that has just produced a lot of heat and a lot of other problems that we'll talk about later, about the way that electrons flow through the silicon. It's become problematic as it runs a whole lot faster. Interestingly, the speed crown for today belongs to an Intel

(15:59):
Core i9-13900K. On liquid nitrogen, they've got that to run at about 9 GHz, not particularly practical in the real world. Most residential applications are not going to be cooling with liquid nitrogen, and you'll find it in some enterprise applications, but there are risks associated with liquid

(16:19):
nitrogen, including leaks. It's expensive, it's kind of dangerous, so it's not really a practical solution. It's just an interesting thing, but you generally won't see things go much higher than, say, 6 GHz or so via conventional cooling methods. And finally, I want to call out, in addition to the densities

(16:42):
and the clock speeds and the latency, one of the other things that they've done to push processor technology forward is to play clever tricks in it. So one of the things they can do is to improve the number of instructions per clock. Most modern processors today can do multiple instructions per clock, so it's not just one.
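A quick back-of-the-envelope sketch, with made-up numbers, of why instructions per clock matters once clock speeds stop climbing:

```python
# Made-up numbers, purely to show the arithmetic: with the clock pinned near
# the same frequency, more cores and more instructions per clock (IPC) are
# what raise peak throughput.
def peak_instructions_per_second(cores: int, clock_ghz: float, ipc: float) -> float:
    """Peak throughput = cores x clock rate x instructions retired per clock."""
    return cores * clock_ghz * 1e9 * ipc

older = peak_instructions_per_second(cores=4, clock_ghz=4.0, ipc=1.0)
newer = peak_instructions_per_second(cores=8, clock_ghz=4.0, ipc=4.0)
print(f"Same 4 GHz clock, roughly {newer / older:.0f}x the peak instruction throughput")
```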

(17:03):
Instruction sets also matter a lot, and you'll see these in today's processors in the form of SSL acceleration, in the form of AES-256 acceleration, in which you have a specific instruction set on that processor to calculate the cryptographic math necessary to encrypt and decrypt something

(17:29):
like AES. Or, similarly, you'll see instruction sets on modern processors that are really focused on virtualization, so they can hook into some of the Hyper-V calls or some of the VMware calls and accelerate those functions.

Eric Brown (17:44):
Where would be a real world example of that AES
type of encryption or decryption?

Bill Harris (17:50):
Well, it's in pretty much all the modern processors today. So AES is pervasive, to say the least, today. So nearly every modern proc today can handle that. If you have an enterprise array or even a hard drive at home that you want to encrypt with, say, BitLocker or some other

(18:15):
method, you'll find that the speed at which that processor can handle that encryption is a whole lot faster than, say, some other types of encryption that it may not have an instruction set for. So there's a lot of use cases for it, I think, as people become increasingly aware of security

(18:37):
and privacy concerns around their data and they're beginning to encrypt it more and more.
talk about that on this verynext slide, where I'll get into
how they build these processorsin innovative ways that really
start to kind of stretch thelimits of what we've seen.
So in terms of assembly, it usedto be that they would build

(18:59):
like scientists would focus onthe processor from end to end.
There would be an X and a Yaxis, and that was difficult
enough.
When we're talking aboutcircuits at the microscopic
level and there are billions ofthem that's kind of a difficult
feat, but they've gotten so goodat this and their fabrication
techniques have become soprecise that they are now able
to build up the chip more andmore.

(19:22):
We're seeing this in thingslike 3D NAND.
A lot has been talked aboutwith 3D NAND, and the reason
that 3D NAND has become so denseis because they're now taking
that into that third dimensionand they're building up the chip
and they're stacking these NANDlayers on top of one another.
So now, instead of getting, say, maybe, like a 16 gigabyte NAND

(19:47):
chip, you're able to get upinto terabytes of size.

Eric Brown (19:50):
What's NAND?

Bill Harris (19:52):
Oh, this is the flash. This is the type of storage that you use in your flash drives. Okay, interestingly, most NAND is also produced in Asia, so that'll be another conversation. But the way this applies now to the processor design, and I'll give you a good use case on this one, is AMD has introduced in

(20:13):
its previous generation, and now into its current generation, what they call the 3DX chip or the X3D chip, in which they're putting cache, a significant amount of cache, right on top of the main processor, and this has enormous, enormous benefits in

(20:33):
workloads that can really use cache. So this is for workloads that tend to recall the same information over and over and over again, so it doesn't have to go all the way back out to main system memory; it just grabs it from the cache that is now bountiful, sitting on top of the proc. So it works more with gaming than it does with, say, like a completely random relational database.

(20:55):
But gamers have really latched onto this because they can get a whole lot more frames per second in their games, and that's now driven, I think, more investment in the gaming industry with the likes of NVIDIA and ATI. So that's really been a boon to the performance of these chips.
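Here is a toy Python model, not AMD's design, of why a bigger stacked cache pays off for workloads that reuse data; the sizes and access patterns are invented purely for illustration:

```python
# Toy model: an LRU cache and two access patterns, one game-like (a small
# hot set reused constantly) and one scanning a large dataset with no reuse.
import random
from collections import OrderedDict

def hit_rate(accesses, cache_lines):
    cache = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # refresh as most recently used
        else:
            cache[addr] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return hits / len(accesses)

random.seed(0)
hot_workload = [random.randrange(1_000) for _ in range(100_000)]     # heavy reuse
random_scan = [random.randrange(1_000_000) for _ in range(100_000)]  # almost no reuse

for lines in (512, 4_096):  # "regular cache" vs "bigger stacked cache"
    print(f"{lines:>5} cache lines: hot workload {hit_rate(hot_workload, lines):5.1%} hits, "
          f"random scan {hit_rate(random_scan, lines):5.1%} hits")
```

The extra capacity helps the workload with a hot working set enormously and barely moves the random scan, which is the gaming-versus-random-database contrast described above.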
Another trick that engineers are putting into their processors

(21:18):
is around chiplets. A chiplet is a very specific chip or a wafer that is designed to perform a specific function, and you can generally produce these at greater scale because they do one thing and they do one thing really well. Think of it as just a very specialized chip, and you can

(21:40):
put chiplets together to form bigger processors, and so you'll find chiplets now in a lot of the world's leading processors, and they kind of come together to do something bigger. Chiplets also give you a better yield because they are simple. The manufacturers end up throwing fewer and fewer of them away because they tend to come out right the first time.
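The yield argument can be sketched with a standard textbook approximation, a simple Poisson defect model with made-up numbers rather than vendor data:

```python
# Illustrative yield sketch: with a Poisson defect model, the chance a die
# has no defects is exp(-defect_density * area). A defect in a big monolithic
# die scraps the whole thing; a defect in a chiplet scraps only that small
# piece, so far less silicon is thrown away. Numbers below are invented.
import math

DEFECT_DENSITY = 0.2   # assumed defects per cm^2
BIG_DIE_CM2 = 6.0      # one large monolithic die
CHIPLET_CM2 = 1.5      # the same design split into four chiplets

big_yield = math.exp(-DEFECT_DENSITY * BIG_DIE_CM2)
chiplet_yield = math.exp(-DEFECT_DENSITY * CHIPLET_CM2)

print(f"Monolithic die yield:     {big_yield:.0%} (scrap {1 - big_yield:.0%} of those dies)")
print(f"Individual chiplet yield: {chiplet_yield:.0%} (scrap {1 - chiplet_yield:.0%} of those chiplets)")
```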

(22:02):
And then finally, it's a little bit of a tweak on some of the specific instruction sets I talked about, but building processors for accelerated workloads. And one of those that's getting a lot of press recently is artificial intelligence. So, as an example of this, IBM introduced their Telum chip on

(22:25):
the z16 mainframe. The Telum chip is an expert at AI inferencing and it does that workload exceptionally well. In most use cases it will rival or it will beat GPU computing for artificial intelligence. It's purpose-built to do that, so it's really, really fast.

(22:50):
We're actually seeing a lot of these types of innovations on big mainframes and supercomputers before they leak down into the smaller microcomputer segments. It would be a mistake to think that mainframes are old or somehow dinosaurs, because that's not the case at all. We're seeing a lot of performance there and a lot of

(23:13):
really cool things happening in that space, so it's a very innovative platform.

Eric Brown (23:17):
Bill, I know we're going a little bit down the
rabbit hole here on the chipsand the design, but maybe we
could just back up for a secondand talk about these specialized
workloads.
So you had mentioned GPU, whichwould be the graphical
processors that we see on gamingchips, and one of those

(23:37):
companies that's in the newsrecently is NVIDIA for making
those graphic cards.
It sounds like there's an AIplay using those cards, which is
why the stock has taken off alittle bit recently.
But how do those specializedworkloads?
Sometimes, when we think of achip, kind of looks the same,

(24:00):
but how is the chip designed tohandle one type of workload
versus another?

Bill Harris (24:09):
So all of the chips work with a system underneath them. It's not just a series of meaningless circuits banded together. So it works with the motherboard and it works with the chipset underneath it to employ a language, and the

(24:30):
processor and the chipset come together to build that language that can be customized to those specific workloads. And it's the machine language in there that has instruction sets built for very specific workloads. Some of the instructions can then short-circuit, in a good way,

(24:53):
short-circuit some of the components in the chip. So, for example, instead of one of the routines having to go to the chip and ask for a memory fetch, and the chip has to go to its registers and figure out where that data is, one of the

(25:14):
instructions can be, well, for this particular workload, you don't have to go through the whole process. You can just go directly to memory, because I know where that data resides.

Nick Mellem (25:25):
You can use it in, like, IoT? Like Alexa could have something cached, or Siri. Absolutely, yep, absolutely.

Bill Harris (25:33):
It has great uses for things like IoT. So what I just described there was direct memory access, so it's those types of workloads that allow some routines to bypass a complicated mess of circuitry to accelerate what it needs to have done.

Eric Brown (25:53):
It used to be, going back a couple of years when the Bitcoin mining or coin mining started, you'd have your spare PC at home and you could dedicate that to mining bitcoins, for example, which essentially is solving a complicated math problem and trying to do it faster than

(26:16):
other computers are doing it that are connected on the internet. And then I think somewhere it pivoted, maybe eight or nine years ago, to GPU-based mining, because that was faster than the typical CPU of a computer to mine those coins.

(26:38):
And now I think there's even more of a specialized miner that you can buy that people put into mining farms. I think one of them is a DragonMint, that's the name of the unit. But it seems that's the specialized workload that you're talking about, in that that computer wouldn't be good for word

(26:58):
processing or even gaming, because it's so focused or specialized on just coin mining.
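For anyone curious what that "complicated math problem" actually is, here is a bare-bones Python sketch of proof-of-work hashing; real Bitcoin miners double-SHA-256 a specific block header, and this toy version only illustrates why purpose-built hashing hardware wins at this one task:

```python
# Toy proof-of-work: keep hashing with different nonces until the double
# SHA-256 hash falls below a target (i.e., starts with enough zero bits).
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Return the first nonce whose double-SHA-256 hash meets the difficulty."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~1 million hash attempts on average at 20 bits; real networks need vastly more.
print("Found nonce:", mine(b"toy block header", difficulty_bits=20))
```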

Bill Harris (27:05):
That's right. This is a good example. Other examples that I'll get into in a future conversation will be quantum. Quantum is good at extremely specific things and nothing more. Quantum is a lousy solution for general-purpose compute, but you give it a particular algorithm and it can churn through that one algorithm more quickly than anything else

(27:25):
in the world can. So, yeah, those are all good examples. So now we get to the semiconductor. So, really, what sits at the foundation of our processors today is silicon. Silicon is the second most abundant element on the Earth's surface; the first one is oxygen. And, being so abundant, it is very

(27:48):
cheap to gather. It's one of the non-metal elements, and it can act as either a conductor or an insulator at room temperature. So that's what makes it a semiconductor: you apply voltage and you can make it a conductor or an insulator. Now, we're going to continue to see silicon through this decade

(28:10):
easily and into 2030, but I'll talk a bit about what happens at the end of this decade. The sheer mass of silicon and how much we are dependent on it today means it's going to be around for a while, and we really have perfected it. However, some problems appear on the horizon. Silicon is really approaching the minimum size that we can

(28:32):
deal with within a processor type of construct. So a silicon atom itself is 0.2 nanometers, or two angstroms. A few years ago, 7 nanometers was approaching what a lot of folks thought would be the limit, but then came extreme

(28:52):
ultraviolet lithography, and they were able to build silicon at even smaller lithographies and make it even more and more dense. So that will continue, and we're going to see this getting shrunk down farther and farther. Silicon will probably be deployed at 1 nanometer in the

(29:17):
next 2 to 3 years, and at that size, you see that the silicon atom itself is just 0.2 nanometers, so it's really kind of getting down to the smallest that it could possibly get, since it's at a 1 nanometer deployment.

(29:39):
Now the problem is, at that size silicon becomes susceptible to quantum tunneling, and quantum tunneling is a phenomenon in which a particle with insufficient energy can still pass through material that it shouldn't be able to pass

(30:01):
through by the laws of classical physics. Of course, that's a major issue for something that is supposed to be highly predictable and that has to be 100% correct. So they will have to overcome that problem, I think, before they can really shrink it down much further than 1 nanometer.
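The scale of that problem can be illustrated with the standard rectangular-barrier tunneling approximation, a generic physics formula with made-up barrier values rather than a model of any real transistor:

```python
# Illustrative physics sketch: the probability that an electron tunnels
# through a barrier falls off roughly as exp(-2 * kappa * width), so it grows
# dramatically as the barrier shrinks toward a nanometer. Barrier height and
# electron energy below are made-up round numbers.
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
M_E = 9.109_383_7015e-31   # electron mass, kg
EV = 1.602_176_634e-19     # joules per electron volt

def tunneling_probability(barrier_ev: float, energy_ev: float, width_nm: float) -> float:
    """Approximate transmission T ~ exp(-2 * kappa * L) through a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

for width in (5.0, 2.0, 1.0, 0.5):  # barrier width in nanometers
    p = tunneling_probability(barrier_ev=3.0, energy_ev=1.0, width_nm=width)
    print(f"{width} nm barrier: tunneling probability ~ {p:.2e}")
```

Even in this crude model, shrinking the barrier from a few nanometers to a fraction of a nanometer raises the leakage probability by many orders of magnitude, which is the predictability problem described above.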

Eric Brown (30:26):
Nick, we're getting pretty deep in here.

Nick Mellem (30:28):
Yeah, some of this stuff is... The one thing that was hitting me too was the marketing from a couple slides ago, like how Apple keeps talking about, oh, now our chips are 7 nanometers, I think it was the A13 or whatever. So there's so much stuff here that I was not aware of that we can unpack. But yeah, we're getting really deep.

Eric Brown (30:47):
Bill, I certainly can understand shrinking the chip down to use less energy, make it more efficient, less heat, and I could certainly appreciate that for memory, making cache bigger, which would be on-chip memory, or, instead of like a hard drive,

(31:10):
which is maybe a different form of having... it becomes denser so we can store more data on it. Right, I get that. But when you said, I think it was, was it maybe eight or nine years ago, when we reached the maximum speed of like four gigahertz on a chip?

(31:31):
And we think about our common business use case application of a PC or a laptop or a MacBook. How much smaller does it really need to get? Like, does a, you know, does a seven nanometer or a 10 nanometer chip really make that much of a difference?

(31:53):
When, you know, I'm running iOS or Windows or, you know, macOS or whatever it is, like, you know, we're just doing Word and surfing the internet, how important really is that dense chip?

Nick Mellem (32:09):
Doesn't it just become a heat thing at that point, Bill? Right, the smaller chip just gives off less heat, right, so it can be fanless, maybe.

Bill Harris (32:16):
Yeah, yeah. So a lot of that, all of that, plays into it. I would argue that if you're just surfing the internet and you're looking up recipes and sharing cat pictures on Facebook, you don't need the most modern processor, you don't need miniaturization. You could actually run just a typical processor with

(32:42):
barely a fan. The reason that we're seeing miniaturization continue in processor design is they want to squeeze more transistors into that die, because more transistors mean more execution units, and that means they can just shove more work

(33:03):
through that processor. So that's Moore's Law, and so you'll continue to see that miniaturization drive those big workloads that we talked about earlier: artificial intelligence, deep learning, machine learning, you know, data analysis. All of that's very important for those

(33:24):
things, but you won't need that. Most people won't need that for their home PC. You're not really going to need that so much for IoT, unless you're trying to miniaturize the device itself, and then you need everything inside of it, including the proc, to be miniaturized. So a lot of things go into that.

Nick Mellem (33:43):
So what you're saying, Bill, is if you need it,
you know you need it.
If you think you need it, you probably don't. You're probably good with the normal chip.

Bill Harris (33:52):
Yeah, I think so, absolutely.
I think that's very true.
If you really need it, you'll know. So we talked about what happens when silicon reaches that 1 nanometer level and we start to run into some problems with classical physics and we start to see some limitations there.

(34:12):
So what comes next after silicon? How are engineers, how are scientists, looking to overcome those barriers? They have introduced into the labs a bunch of solutions that absolutely work, which includes new semiconductors.

(34:33):
There's a whole list, and this is a partial list here, but a lot of this really focuses around different forms of elements and doping one element with another element. So silicon carbide is silicon that's doped with carbon. There are other carbon semiconductors listed here, diamond, graphene, graphyne, all carbon-based, and these, like I

(34:54):
said, these exist in the lab today, but not in a method that is able to be mass-produced in any type of an economical fashion. So the race is on, the race is on to find the replacement for silicon in a commercially viable, mass-reproducible way.

(35:15):
It's important to note that not all of these semiconductors are built of elements that are smaller than silicon. However, almost all of them are more conductive than silicon, which means less energy is lost to heat.

(35:41):
They also consume less power, which is going to be huge as we go into a more environmentally conscious world. The slide after this will talk more about what's happening in superconductivity, but this is what we're looking at right now in terms of semiconductor activity, and this will probably

(36:01):
start to take place towards the end of this decade, but I don't think you're going to see any of that really get perfected until the early to mid-2030s.

Eric Brown (36:13):
Bill, could you maybe talk a little bit about where copper might come in? Because I'm picturing a circuit board, and I'm picturing copper interweaved within the green board itself, and then chips and resistors and transistors and whatnot on the board. But where is that relationship between copper, and then

(36:37):
where does silicon come in?

Bill Harris (36:39):
So copper is a conductor. It is a great conductor of electricity, and so that's what it's going to do: it's always going to conduct electricity. So you'll see copper used for lines that are supposed to be always carrying electricity, always carrying a signal of some kind, whereas silicon will be switching off and on between

(36:59):
being an insulator and a conductor.

Eric Brown (37:02):
It's like, I wish I'd paid more attention in that eighth grade electronics class now, Nick.

Nick Mellem (37:09):
I've been thinking that the whole presentation.

Bill Harris (37:12):
So this is where things get really interesting, right after semiconductor activity. Really, I think the holy grail of where computing is going for the foreseeable future is superconductivity. So the graphic that I'm showing you here in the upper right of the screen is, of course, the very recognizable one: the Large

(37:33):
Hadron Collider in Switzerland.

Eric Brown (37:36):
Hold on a second, Bill. Nick, did you know what that was?

Nick Mellem (37:40):
Well, I was coming off mute to ask. Never seen this. Oh, okay.

Eric Brown (37:44):
Okay, I like how Bill's casual. He's like, yeah, you know, everybody recognizes this, right? We should just start showing Bill pictures of different stuff.

Bill Harris (37:55):
I've clearly had my head in this for way too long,
then.
All right.

Eric Brown (37:58):
So we'll have to see if you can recognize the difference between two different breeds of cats that Nick has.

Bill Harris (38:05):
So this is at CERN, in Switzerland, and it's just a giant machine. It's miles and miles long, and they've been using this to test superconductivity and quantum mechanics. So in superconductivity, all energy passes without resistance and nothing gets lost to heat, which is huge.

(38:29):
We've talked about heat a lot. We've talked about the loss of energy. If you can make all that go away, then you've got yourself something really special. Now, superconductivity is nothing new. The Dutch discovered superconductivity in 1911. They were experimenting with mercury and liquid helium, again,

(38:51):
two things you should not find readily available in anyone's home. But we've known about it for a long time, and in recent decades we've begun to sort of envision some of the commercial applications for this. The problem with superconductivity, though, as you can maybe guess

(39:13):
from the liquid helium element up there, is that in order for it to work you need one of two things: you need either very high pressure or you need to be very, very cold. And by cold we're talking about absolute zero cold, or as near to it as you can get. We can do this today and, as I'll talk about later on, we've done

(39:35):
it with quantum computing. A quantum computer is a superconductor. It does all of its operations at approximately the temperature of space, but it takes a lot of energy to get those types of temperatures or that type of pressure, and so it becomes a

(39:57):
little bit of a catch-22. You can get superconductivity, which loses nothing to heat, but you have spent a lot of energy just to get it cold enough to even do that. So the scientists are searching for solutions at normal

(40:17):
atmospheric pressure or something at room temperature. Needless to say, that is going to be very difficult to do, and if they find it, good luck reproducing it. We're probably a couple of decades, a few decades, maybe four or five decades away from really being able to do that. I think you'll probably see some breakthroughs on it, but,

(40:39):
much like the recent breakthrough we saw in cold fusion, it's going to take quite some time for that to really come around and mean anything. However, when it does, when it happens, it will absolutely revolutionize electronics, and here's how it's going to do that.

(41:00):
We talked about some of the consumer-grade processors and enterprise-grade processors today, which is a chip. It's a three-inch square, a few millimeters high, and around that chip sits all kinds of cooling apparatus. You might have big fans, you might have liquid cooling with

(41:21):
radiators, anywhere from two to five pounds of cooling material to cool this little tiny device. And all those fans and all that liquid cooling draw power, and it takes up space and it costs money to produce. With superconductivity, if we could get there, that goes away.

(41:44):
Now all you have is the chip. No longer do you require all that extraneous stuff around it, so yields can increase. You can use less material. You're not producing heat. You don't need as much air conditioning in your data centers. You can run your data centers, like, almost like a normal room.

(42:05):
You don't need giant chillers, big water pumps flowing through, so it would truly shake the industry when this ever comes about.

Eric Brown (42:19):
You could get that PUE right at one, couldn't you?

Bill Harris (42:23):
Yes, to use the data center terms, absolutely, your efficiency would be at one, yeah.
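For reference, PUE is just total facility power divided by the power delivered to the IT equipment, so removing the cooling overhead pushes it toward the ideal of 1.0. A tiny sketch with illustrative numbers:

```python
# Illustrative PUE arithmetic: PUE = total facility power / IT power.
# Cooling is usually the biggest overhead, so eliminating it drives PUE
# toward 1.0. All figures below are made up for scale.
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    return (it_kw + cooling_kw + other_kw) / it_kw

print(f"Typical data center:        PUE ~ {pue(it_kw=1000, cooling_kw=500, other_kw=100):.2f}")
print(f"No cooling overhead needed: PUE ~ {pue(it_kw=1000, cooling_kw=0, other_kw=100):.2f}")
```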

Eric Brown (42:28):
Now, if we continue to nerd out a little bit here with Bill, Nick, I kind of think everybody should have their own favorite physicist, and mine's Brian Cox. So Brian Cox is a physicist; he works at CERN in Switzerland

(42:48):
and he's got a couple of YouTube videos that I think are particularly cool. One of them is called Why We Need the Explorers, and he did a TED Talk. It's on YouTube now, but it was a TED Talk, and then he did another one on CERN's Super Collider. So if you want to see some of the stuff in action, Bill, I

(43:10):
would surmise it's going to be on this particular TED Talk. But the Why We Need the Explorers one is about the spending that countries have on science education, and Brian talks about the Voyager mission, where Voyager is now reaching the

(43:36):
edges of our solar system and had been sending pictures back along the way. And one of those pictures shows Earth as just a tiny speck of dust, and I think it's a pretty elegant passage that he reads about that particular picture. But I was on a long flight, I think it was to London, one time

(43:59):
and I was watching these TED Talk videos and came across Brian, and subsequently watched him and Dr. Michio Kaku, who's also a physicist and talks about different technologies and things like that that are pretty interesting.

(44:19):
But anyway, I went down even more of a rabbit hole, so I'm going to turn it back to you, Bill.

Bill Harris (44:26):
Those are all good recs.
Yeah, I know those two physicists. They're a lot of fun to watch. So, to start to wrap all this up, what does it all mean? I want to be clear that silicon isn't headed out anytime soon. It's going to be around a long time. It's sticky stuff. I think it'll be around until 2030 and it'll be around after

(44:52):
that, but around that time I think we're probably going to see some more innovative elements come to the forefront. I also think we're going to see some really cool things in processor design. We're seeing, I think, some especially innovative things, mostly from IBM and NVIDIA. Intel's lost a little bit of their shine recently, I think,

(45:14):
but they've got it in them, and I think they still introduce innovative things, and I think they're going to come back and continue to do that. Overall, though, I think in order to shrink semiconductors past their current state into the 2030s, you're going to need to have new stuff. We're not going to be able to get that much more out of

(45:35):
silicon. The physics for it, it's just not there. We're going to reach other barriers when some of the gate sizes within those processors reach a single atom. We can't do anything with it at that point. We can't shrink that any further than one atom. So it's going to become really interesting at that point, which

(45:59):
will probably be in the mid-2030s or pushing into the 2040s. I also want to be clear, too, that although quantum computing is getting a lot of press, as it should be, it's not going to address any of this, because quantum computing is designed for very specific workloads. It is not a replacement for a general-purpose computing device. It's not a replacement for IoT.

(46:19):
It's none of that. It's just something else, which I'll get into at another time. But classical computing will continue to drive our information age, and we will have to find new substances to take it into the next era.

Eric Brown (46:38):
So how do we make money on this, Bill? What's the stock market play with this stuff?

Bill Harris (46:44):
Well, if you have yourself a few billion, you can hire the engineers and some plants to build some of this stuff. This is one of those areas where it's really hard to break into. This isn't like software, where you can write clever software

(47:04):
and you can do it from your garage. Yeah, this is stuff that huge organizations will have to drive. I think the barrier to entry here is really high indeed.

Nick Mellem (47:22):
Bill, can you say, I think it was through Switzerland, how big is this? I think you mentioned it.

Bill Harris (47:24):
Oh it's miles long, miles and miles long, yeah.

Nick Mellem (47:27):
And how big is the top and bottom here? Do we know? It just looks giant. Oh yes, yes, it is.

Bill Harris (47:34):
I want to say, and I might be wrong with this, but I want to say it's probably around 30 to 50 feet across, or something to that effect. Yeah, yeah. I'm not sure if you can see my mouse come across this screen here, but you can kind of get a feel for the scale from this platform work to the left.

Nick Mellem (47:56):
Oh yeah, I didn't notice that before.
Yeah.

Eric Brown (47:59):
I believe you can tour it too, if you happen to be in Switzerland at the time. They're allowing tours, which I don't think is all of the time, but I know that it is open.

Nick Mellem (48:11):
I wasn't aware of this.
It's pretty cool.

Bill Harris (48:16):
This superconducting particle collider got a lot of press when it was opened. What was it, maybe 15, 20 years ago, I think it was? My history is a little rusty on this one, but it got a lot of attention because there was a fear that it would create a black hole that would destroy the Earth and everything around

(48:38):
it, and I think at the time something like 98 or 99% of the world's physicists said no, that's definitely not going to happen. But it's always that holdout. It's always like that one dentist who doesn't like the same toothpaste as all the other dentists: what if that one's right? And so there was always this fear when this thing opened that

(49:00):
it would just result in our complete obliteration.

Eric Brown (49:03):
I thought they were paid by the toothpaste companies
to like certain toothpaste.

Bill Harris (49:10):
Probably.
I'm not sure who's paying the scientists to speak out against this.

Nick Mellem (49:17):
Oh yeah, there's certainly a lot of information here. I think we spend so much time trying to secure software and tools and help end users be better, but it's really cool to see how the light bulb is created and what we're actually securing, from a holistic level. I've never been this deep into what we're talking about, so it's really cool to learn all these different items that go

(49:39):
into it.

Eric Brown (49:41):
Can we ask you a couple other questions, Bill?

Bill Harris (49:43):
You can. I will warn you, I am not a physicist, I'm not a chemist, so we'll see how I do. I have my limits, but go for it.

Eric Brown (49:53):
Is the Earth flat?

Bill Harris (49:57):
That's right. Yeah, that's one of those things where, if you have to ask, you're not going to like my answer.

Eric Brown (50:09):
So that's not a yes or a no.
Nick, I didn't get that out of him.

Nick Mellem (50:14):
I don't think we have enough time to unpack that
one.

Eric Brown (50:17):
And then the last question is how long have you
been a Brony?

Bill Harris (50:21):
What is a Brony?

Nick Mellem (50:24):
We just learned about this last week, I think.

Eric Brown (50:27):
But a Brony is a person who collects My Little Ponies.

Bill Harris (50:35):
Really, really, that's the same.

Nick Mellem (50:39):
True.

Bill Harris (50:40):
Well, five and a half years then.
Yeah, I'm going on my sixth year anniversary.

Eric Brown (50:46):
Cool.
Well, Bill, thank you. This awesome presentation, I learned a ton, and it sounds like you've got more coming. What's up next? Is it quantum or is it storage? Quantum? Quantum's next, then storage, and in the storage one we're going to get into things like using DNA or crystals for

(51:07):
storage, something like that.

Bill Harris (51:09):
Right. We'll talk about crystalline molecular storage, DNA storage, all of that, all of which exists today in the lab; it's all proven technology.

Eric Brown (51:21):
I feel like I'm going to do some reading before
I'm even ready for that presentation.

Bill Harris (51:26):
Don't come too prepared, please. I'm sorry, what was that?

Nick Mellem (51:31):
I've got to brush up on my favorite physicist.

Bill Harris (51:33):
Oh yeah.

Eric Brown (51:34):
Absolutely.
Yeah, that's right.
You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing security, risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security

(51:58):
control assessments rank the level of maturity relative to the size of your organization.