
September 30, 2020 54 mins

In September 2020, PC gamers went bonkers for new graphics cards called the RTX 3080 and the 3090. What do these cards do, and why was the launch a bit of a headache?

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio.
Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I am an executive producer for iHeartRadio, and I love all things tech. And I'm just back after a week off on vacation, and I nearly forgot

(00:24):
how my intro goes. So that shows how my brain works.
But enough of that. So I follow a lot of
gamers and streamers, and there's some folks I just, you know,
find really entertaining, and when their love of games comes
through on top of them being entertaining, I know I've
really hit something that appeals to me. And in mid

(00:48):
September 2020, it seemed like every single person I followed was chatting about something called the RTX 3090, or in some cases, the 3080. The gamers out there already know exactly where I'm headed on this one. But while I love video games, I'm not exactly dialed into the heart of hardcore competitive gaming. And

(01:10):
so I had no clue what the heck this thing was.
I mean, I had an inkling, but I had to
look it up. And it's a new high performance graphics
card with a graphics processing unit or GPU. So today
I thought we would talk a little bit about GPUs
in general, where they originated, why they're important for modern games,

(01:32):
and why they can be so hard to get hold
of as well as so expensive. And here's another interesting tidbit,
the main reason they're hard to get hold of has
nothing to do with video games. Will also cover why
the r t X thirty eight and the thirty nine
cards have had a well, let's call it a troubled launch. Now,

(01:55):
despite the fact that graphics cards have been around for more than two decades, they're still something that I have only had limited experience with. And here's where the grumpy old man Jonathan comes out to, you know, shake his fist at a passing cloud. See, I come from a time when your CPU, the amount of RAM your computer had,

(02:18):
and the operating system you were running were really the
only things that mattered when it came to which games
you could actually play on your machine or how well
those games would perform on your PC. Heck, I remember
when games first started requiring that your PC run on Windows,
and I was a Windows holdout. I preferred the fast

(02:39):
responsiveness and lighter framework of DOS. The DOS user interface consisted of command prompts. You would actually type stuff in at a command line to change directories and navigate to where a file was, and then type out the name of the executable file to really get it started.
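To picture that workflow, here's roughly what a DOS session looked like; the directory and program names here are invented for illustration:

```
C:\> CD \GAMES\MYGAME
C:\GAMES\MYGAME> DIR *.EXE
C:\GAMES\MYGAME> MYGAME.EXE
```

Now, it wasn't actually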

(03:00):
hard to do, but it also was not intuitive at all,
and it stood as a barrier for the average person
to you know, embrace computers. Windows made stuff easier to understand.
You just, you know, move your cursor to the picture
that represents whatever it is you want to do, and
then you click on it. That was super simple. But

(03:21):
Windows also required more processing power from the PC, and
so I was of the snooty opinion that I would
rather set aside that power for the stuff I was
running on the computer, apart from the operating system. That's
how old I am. Also, eventually it didn't matter. Eventually
games started requiring Windows and I had to give in.

(03:44):
Over time, game developers began building out games that required
more oomph from the PCs that were running the games,
and sometimes that meant you just had to have, you know,
a pretty recent CPU to run the game, which meant
that if you were relying on a computer that was
a year old or older, you might be out of
luck unless you could upgrade your machine, or you know,

(04:07):
in really severe cases, you'd have to go out and
buy a whole new one. But one thing that PCs
had that really opened up some opportunities were expansion slots
built into the motherboard. Now, these are standardized slots that are built into that motherboard. There have been a couple of different standards over the years, but PCI

(04:29):
Express is the current one. The motherboard is the main
circuit board of a PC. That's where you'll find components like the CPU, and it's what other components like memory connect to; the power supply also connects to the motherboard to deliver power to all the components. So motherboard manufacturers would

(04:50):
frequently include slots that would allow for additional cards to
plug into the circuit board, thus expanding the capabilities of
your PC. All the wiring, all the circuitry was there
to work with the other parts of the motherboard. So
if a card manufacturer, you know, a company that makes

(05:10):
expansion cards, as long as they adhered to the standard, then you could buy the card, you could open up your computer case, you could plug the card into one of those PCI Express slots, these days, on the motherboard. You reassemble the case, you know, make sure everything's lined
up properly with the backplate of your case, and voila,
you've got added functionality to your PC without having to

(05:33):
replace the whole darn thing. And manufacturers made all sorts
of cards, and I think I first really became aware
of this upon the release of various sound cards, which
would allow PCs to produce all sorts of wondrous sounds, music, and sound effects, that kind of thing. The early
PCs could essentially just beep. I mean, even R two

(05:56):
D two had a more extensive vocabulary, but sound cards
allowed for virtual orchestras to play on your machine. By
the way, if you seek out videos of early sound
cards playing computer music, you're probably gonna laugh at my
description because it definitely sounds primitive compared to what a

(06:17):
PC out of the box can do these days. Now.
Graphics cards followed close behind sound cards. The first card to be described as having a graphics processing unit was the GeForce 256 from Nvidia. Nvidia is the same company that's behind the recent RTX

(06:37):
3080 cards, by the way. More on that later. But what the heck does a graphics card actually do? Well, it's first good to remember what a CPU, or central processing unit, does. It's the CPU's job to execute instructions

(06:57):
upon data. The data flows into the CPU from input devices like a keyboard or a touch screen, as well as from stored locations like the computer's memory or hard drives, and the instructions come from programs or input devices. And instructions are mathematical operations. So it might be something as

(07:17):
simple as add this one really big number to that
other really really big number, and then compare the result
to this other number, whereupon a specific outcome will follow
based on that comparison. Really, everything your computer does is
a result of processes like this. You could think of it as a Choose Your Own Adventure book, which I

(07:40):
guess also kind of dates me. But by that I
mean you can think of a path that branches into
lots of other potential pathways, and the results of a
math problem determine which of those potential pathways you actually
go down. Now, we describe the speed of a CPU in terms of clock speed. That refers to the number

(08:02):
of pulses the CPU generates every second, and these regular
pulses synchronize operations on the computer, and they determine the
speed at which the CPU can carry out instructions on data.
So generally speaking, the higher the clock speed or clock
rate of a computer, the more instructions the CPU can carry

(08:24):
out per second, and the quote unquote faster the processor is. We express this in terms of hertz, H-E-R-T-Z. That refers to cycles per second. One pulse would be one cycle. So if you have a computer with a three point two gigahertz processor, that processor

(08:46):
is pulsing three point two billion times every second.
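To put a number like that in context, here's a quick back-of-the-envelope sketch in Python; it assumes, unrealistically, one instruction per cycle, purely to show the scale involved:

```python
# Rough scale of a 3.2 GHz clock: cycles available per frame of a game.
# Assumes (unrealistically) one instruction per cycle, purely for scale.
clock_hz = 3.2e9            # 3.2 billion pulses per second
frames_per_second = 60      # a common gaming target

cycles_per_frame = clock_hz / frames_per_second
print(f"{cycles_per_frame:,.0f} cycles available per frame")
# -> 53,333,333 cycles available per frame
```

As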
game developers began making more sophisticated games, particularly as the
era of three D graphics dawned, meaning you know, graphics
that appeared to be three dimensional rather than a two
dimensional representation, you know, the more cardboard-cutout-looking stuff

(09:09):
from before the 3D graphics era. That's when CPUs were starting to hit a choke point. The CPU has to
handle pretty much all the processing, though in some cases
you might have what's called a coprocessor to tackle specific
subsets of mathematical problems. Graphics cards would become another type

(09:30):
of coprocessor. They would shoulder the work of processing the information specifically relating to presenting graphics on a display, and remove that responsibility from the CPU, freeing it up so it
could continue to work on you know, other stuff. Together,
the CPU and the GPU could handle all the processing

(09:50):
that the game required and create a really cool experience
for the player, you know, for a price. It was
right around this time, in the late nineties when I
got out of PC games for a pretty long time. See,
I had grown frustrated with the need to update my
machine on a regular basis if I wanted to play

(10:12):
the latest games. I hated the idea of having to
buy an expensive graphics card every so often and then
also upgrading my entire computer, or at least replacing the
CPU every couple of years. On top of that, I mean,
come on, these components are expensive. Buying a new
computer is even more expensive. And in the late nineties,

(10:34):
I was what we like to call poor, or at
least I wasn't making enough money to be able to
keep up with that cycle of upgrading if I wanted
to play the latest games. So I fell off of
PC games for a really long time, and instead I
saved up my money and made a switch over to
consoles like the Nintendo 64, that kind of stuff,

(10:57):
because one thing you can depend upon with consoles, at
least until more recent generations have proven otherwise, is that
a game that's released on launch day of a console,
the day the console comes out, and a game that
is released at the very end of a console's life
cycle should both run just fine on that console. Now,

(11:21):
the later games should be better as developers learn how
to optimize for a console's hardware, but both games should
run just fine. You don't have to worry about
your console not having the capacity to run the game.
Consoles aren't designed to be upgraded generally, and so game
developers have to work within those limitations and optimize their

(11:43):
games to run on standardized hardware. PCs are totally different.
PCs can come in an entire spectrum of capacities and capabilities,
and generally speaking, game developers want to make the coolest
stuff out there, so they're taking aim at the heavier
hitting end of the PC market. Usually there are ways

(12:06):
to reduce settings so that you can at least play
more advanced games on more modest hardware, but at some
point you just feel like you're no longer getting the experience you want and you feel obligated to upgrade. Now, even though I got out of the whole PC gaming
thing for a long time, it turns out that the

(12:27):
PC game industry was going strong without me, which I
personally find very insulting. Developers were making increasingly impressive games,
and GPU companies like Nvidia followed suit by creating more capable graphics cards, and that was really a necessity that ties into a wry observation about computing power. Okay,

(12:51):
so a lot of folks have heard about Moore's law,
which we usually use in reference to how computer processing
speeds improve over time. The original observation Gordon Moore made
decades ago was that, due to market factors, silicon chip
manufacturers were cramming about twice as many components onto a

(13:13):
single square inch of a silicon wafer as they had
two years previously, and they do this by shrinking down
those individual components so they're about half the size as
they had been, and that as long as the market
continued to place this kind of demand on an increase
in processing power, that trend would likely continue until it

(13:36):
would become physically impossible to achieve because you just could
not reduce the components in size any further due to
the limitations of physics. Now these days we dumb all
that down to say essentially that computers double in processing
power about every two years. So a typical computer in

(13:57):
2020 has about twice the processing capability of a typical computer from 2018. However, there's another observation called Wirth's law, and it's named after a Swiss computer scientist named Niklaus Wirth. Wirth himself credited another computer programmer named Martin Reiser

(14:18):
with the idea. Wirth's law states that the demands of
software grow faster than the increase in capability of hardware.
So while processing speed was doubling every two years, the
demands of software were such that this otherwise incredible increase
in capability was hard to detect because the software of

(14:42):
the time that people were writing was growing more demanding.
This also would feed into the perception that a computer
would become obsolete super fast. Like, you know, the joke
was that by the time you got a computer home
from the store and you got it out of the
box and you plugged it in, it would be outclassed

(15:03):
by a brand new PC unveiled at the very same
store where you bought yours from. And while that was
an exaggeration, it often felt like it was pretty close
to the truth. The software bloat was forcing people to
either rely on older programs that could still run on
their PCs, or else cough up the cold hard cash

(15:24):
to buy a new computer or upgrade their current machine. Now,
this cycle was felt throughout the entire PC community, but
gamers felt it particularly acutely. Wirth's observation, or, you know, Reiser's if you prefer, though truthfully, people were already kind

(15:44):
of becoming aware of this general trend around that same time. Anyway, that observation was published in 1995, and the first graphics card to have what Nvidia called a GPU would debut a few years later. The GPU was in many ways a response to the problem presented by Wirth's law.

(16:05):
Game developers were coming up with lots of new tools
that allowed them to build more spectacular games, but that
in turn placed increasingly heavy demands on computers. Graphics cards
were a necessity to meet those demands that the games
were placing on the computer systems, and that in turn
helped perpetuate this cycle. I'll explain in more detail how

(16:28):
graphics cards help out, but generally speaking, it's a pretty
simple concept. The graphics card has its own microprocessor, similar
in many ways to a CPU. But a CPU is a general-purpose device. That means it needs to
be able to handle a wide spectrum of different tasks,

(16:50):
and processors are a lot like people in this way.
If you dedicate yourself to learning how to do one thing,
like really focus on just one thing, then eventually you're
likely to get super good at that one thing. You've
blocked everything else out. If, however, you decide you want
to be a jack of all trades, you want to

(17:12):
learn lots of things, chances are you will not reach
the same level of expertise with any single task as
you would if you had just focused on that specific task.
You can do all of them, and maybe you can
even do them well, but not at the same level
as if you had specialized. Well, the GPU is like

(17:34):
a specialist. It doesn't have to handle all the other
tasks that a CPU has to perform. It can focus
on more specific types of operations, which means chip designers
can create a more efficient architecture to carry out those
specific processes. Specializing allows the GPU to perform a subset

(17:55):
of tasks far more efficiently than a typical CPU could.
When we come back, I'll go into this a little
bit more, but first let's take a quick break. Before
the break, I talked about the motherboard. You know, the

(18:18):
primary circuit board in a computer. The motherboard has the
circuitry that connects the CPU to the different components in
the system, like memory and stuff. Well, a graphics card
is at its heart a printed circuit board that in
many ways is similar to a motherboard. It's smaller and
it's designed to connect to the motherboard itself, but

(18:40):
it's got a lot of the same stuff you will
find on a typical PC's motherboard. At the heart of
the printed circuit board in the graphics card is the
graphics processing unit itself, or the GPU, but you'd also
have RAM dedicated to the GPU, just as the motherboard has its own RAM dedicated to the CPU. So

(19:02):
I guess I should give you guys a quick
reminder of what RAM is. RAM stands for random access memory,
and it's a type of temporary computer storage. The purpose
of RAM is to hold information that the CPU, or
in the case of the graphics card, the GPU needs
to reference frequently. So RAM acts as a kind of short-term

(19:25):
memory, a quick reference for these processors, and RAM helps reduce the wait time for a program to complete an operation.
So when your computer is running a program, it will
load some information into RAM. This is the stuff that
the processor is going to need most frequently to do
whatever the program needs it to do. Now, RAM has

(19:47):
a limited capacity, with most PC manufacturers including, you know, some RAM, but they don't max it out. They leave it up to the end consumer, who can choose to purchase more RAM and then install it on the motherboard, typically. Not all motherboards allow you to do this. Uh,
some companies are less open to you adding more memory

(20:09):
to their systems. Cough Apple cough. The motherboard itself will
have limitations to how much memory it can support. There
is a top cap. You can't just keep adding RAM
chip after RAM chip. You will eventually cap out. And that
also means that eventually you have to do a more
extensive upgrade to keep up with evolving technology, as you

(20:32):
will eventually encounter components that the old motherboard just can't support,
so you'll have to you know, go up a step.
You could, I guess, keep pulling parts out of your PC and replacing them bit by bit, but sometimes it
just gets to a point where it's better to go
ahead and build a whole new machine. By loading information
into RAM, the computer limits how frequently the processor has

(20:56):
to send a command to retrieve information from the longer-term storage like a hard disk drive, and that process
takes a little longer, actually much longer in computer terms,
than accessing information that's stored in RAM. So you've likely
heard that one way to speed up your computer is
to add more memory. Now the computer itself isn't actually

(21:19):
operating faster. Rather, it can load more information into that
temporary memory that RAM, and thus reduce the need to
go hunting for the information in long term storage. That
cuts down on delays and lags. So the processor isn't going faster just because you added RAM to it. It

(21:39):
just doesn't need to send as many retrieval requests for data that's stored on a hard drive, for example.
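As a rough illustration of that gap, here's a small Python experiment; the exact numbers vary wildly by machine, and the file reads include operating system overhead on top of raw drive latency, but the orders-of-magnitude difference is the point:

```python
import time

# Compare grabbing a value already in memory with fetching it from a file.
# Illustrative only: timings vary by machine, and the file name is made up.
with open("value.txt", "w") as f:
    f.write("42")

in_memory = {"answer": "42"}

t0 = time.perf_counter()
for _ in range(10_000):
    _ = in_memory["answer"]          # memory lookup
t1 = time.perf_counter()
for _ in range(10_000):
    with open("value.txt") as f:     # storage lookup (plus OS overhead)
        _ = f.read()
t2 = time.perf_counter()

print(f"memory: {t1 - t0:.4f}s   file: {t2 - t1:.4f}s")
```

Graphics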
cards typically have a decent amount of RAM on them,
sometimes beyond decent. Some of those graphics cards
have way more memory on them than my current PC
has in it, and that's just on the graphics card.

(22:00):
But that allows the GPU the same sort of benefits
that the CPU enjoys with the RAM that's on the
PCs motherboard. Another important component is the connections between the
processor and the memory. This is what we call a bus.
A bus is sort of like a data pathway. The
capacity of the bus and the actual distance between the

(22:23):
processor and the memory can have an effect on how
quickly information can move from one component in the system
to the other. And really, when you start looking at
computer speeds and you're looking at, you know, the edge of computing, like the cutting edge, it really becomes a game of finding where the bottleneck is. Is

(22:44):
the bottleneck the processor? Well, then you need something that has a higher clock rate. Or is it a limitation in the system's memory? Then you need more RAM. Or is it the actual connection between the components? Then you
might even need an upgraded motherboard with a more robust
bus between processor and memory. So it all comes down

(23:04):
to figuring out where's the slow point, where's the weak
link in this chain. The RAM on a graphics card
tends to have a dual port design, meaning the system
can both read and write to RAM simultaneously. Now, in
the simplest design, you could do one or the other,
but you couldn't do both at the same time. With

(23:26):
older graphics cards, the RAM also connects to a component called the digital-to-analog converter, or DAC, D-A-C, and then together you would sometimes find both of these terms smushed together. You would have RAM and DAC together to make RAMDAC. The purpose of that component is
to take digital information, which at its heart is binary,

(23:47):
you know, in the form of zeros and ones, and
then convert that into an analog signal, which is a continuous and changing signal that is capable of sending information to, like, a CRT monitor. However, today we have plenty of digital displays and digital cables, stuff like HDMI, that carry digital signals, and that makes the converter

(24:10):
component less critical. It's not really something that you would
necessarily hear much about with graphics cards these days because
it's just not necessary. The hardware people are buying doesn't
require the converter. Modern graphics cards typically support multiple displays.
You know, chances are a lot of you out there

(24:32):
have systems where you have at least two displays. I've
got two in front of me right now. The PCI Express connector on modern motherboards allows for support for up to four monitors, though not all graphics cards can actually do that; not all of them have four connections for displays. The RTX 3090, the monster

(24:54):
card that kind of prompted this whole episode, that one can support up to four monitors, and it has a maximum display resolution of 7,680 by 4,320 pixels, which we tend to just, you know, say is an 8K resolution, in other words.
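To ground that claim, here's the quick arithmetic on what 8K means in raw pixels, compared with ordinary 1080p:

```python
# Total pixel counts: 8K versus 1080p (full HD).
pixels_8k = 7680 * 4320       # 33,177,600 pixels, about 33 megapixels
pixels_hd = 1920 * 1080       #  2,073,600 pixels, about 2 megapixels
print(pixels_8k / pixels_hd)  # -> 16.0, so every frame has 16x the pixels
```

And just in case you need a refresher,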

(25:15):
resolution refers to the pixel density on a screen. Pixels
are points of light, so generally the more points of
light you have per square inch to to make an image,
the smoother the image will be. I often talk about this using an analogy: think about wooden blocks that a kid plays with,

(25:35):
and think of them in different colors, like just nice
primary colors. If you were to try and make a
picture out of those blocks, it would be very blocky.
You would see the edges of each block as they
were up against each other, and it wouldn't be a
very smooth image. You might be able to make something
people could recognize, but it wouldn't look very smooth. If
you reduce the block size by half and you increase

(25:57):
the number of blocks, you can make a slightly less
blocky looking image. You keep doing that over and over,
reducing the pixel size and cramming more pixels in, and
you create smoother images, up to a point, right? Uh, you get to a point of diminishing returns where it can

(26:18):
be tricky to detect a meaningful difference when you're getting
to ultra high resolution displays. For instance, I remember looking
at 2K, 4K, and 8K displays at CES and not being able to really tell the difference unless the screens were truly enormous, like huge displays. Uh,

(26:40):
and if I had the benefit of, you know, holding
a magnifying glass so I could look at the pixels
up close. But hey, at eight K resolution, you could
take a tiny section of a screen shot, you could
blow that tiny section up to a full screen size
and it would probably still look pretty good. Anyway, let's get back to the more general discussion of graphics cards.

(27:01):
The early graphics cards were really dedicated to creating three
dimensional images out of binary data, and that involved building
out a wire frame for the image with straight lines
that would end in little points, you know, connecting to
other straight lines. And the more straight lines you use, the smoother you can make the edges, very much like

(27:22):
the resolution of displays. And then on top of that,
you would fill in all the pixels that would exist
between those lines. You would add in effects like color, texture,
and lighting, and you would have to do that many
times per second, which is made more complicated by the
fact that these images are not still images. They are
changing over time, and in the case of video games,

(27:43):
you might have a ton of things happening within the
field of view simultaneously. You also get into a pair
of terms that are easy to get mixed up, refresh
rate and frame rate. The refresh rate is how frequently
a computer display will refresh an image on screen. So,

(28:03):
for example, the Razer Raptor 27 gaming monitor. By the way, none of this is part of, like, sponsored content or anything. I'm just using specific versions of things to kind of have concrete examples. Anyway, this particular gaming monitor has a refresh rate of 144 hertz, and that means that the pixels on that

(28:25):
display refresh 144 times per second. Now, on
top of that, you've got the demands of how smoothly
the video game runs and you can think of the
action of a video game being kind of analogous to
film or just playing video. And you may know that
movie film consists of a strip of film onto which

(28:47):
you have a sequence of still images. Standard film playback
speed is twenty four frames or images per second, meaning
that for every second of movie, you are looking at
a sequence of twenty four pictures. And the speed of
this playback is sufficient to fool our dumb, meaty brains

(29:07):
into thinking that we're watching stuff that's actually moving. It's
the illusion of movement. Well, video games create the same
sort of thing, and that you're watching a series of
very quick instances of pixels that represent something going on, like, I don't know, Pac-Man fleeing from a ghost or something.
Though I'm told by the besties that we've come a

(29:29):
long way since Pac-Man. But we describe this as the frame rate of a video game: how frequently the graphics
card generates the frames that are shown on the display
in terms of frames per second. So while the two
terms refresh rate and frame rate both deal with graphics,
they are separate concerns. Generally speaking, you want more frames

(29:53):
per second to create a smoother experience, though again, once you get above a certain amount, you start to encounter
diminishing returns, meaning that you get to a point where
if you increase the frame rate you really can't tell
the difference, but at lower levels we definitely can spot
the difference. The same is true for resolution. A game that's

(30:14):
running at twenty frames per second or less is going
to appear choppy. It's probably unplayable because gamers are going
to miss key information and thus be incapable of reacting properly.
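One way to make these numbers concrete is to flip them into frame times, the budget the graphics card has to finish each image; here's a quick sketch:

```python
# Frame time: how long the hardware has to produce each frame.
for fps in (20, 30, 60, 144):
    print(f"{fps:>3} fps -> {1000 / fps:6.2f} ms per frame")
# 20 fps -> 50.00 ms   30 fps -> 33.33 ms
# 60 fps -> 16.67 ms  144 fps ->  6.94 ms
```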
Games running at around thirty to forty five frames per
second are pretty good. I mean, they're okay, though elite
gamers who have you know, crazy refined skills would probably

(30:37):
find it insufficient. And these are people who can see
faster than I can based on my observations of viewing
their live streams. Though again, to be fair, I'm viewing
a live stream which includes the compression of their video
image before it gets to me, so what I see
is not exactly the same thing as what they see. Anyway. However,

(30:58):
most serious gamers really want a frames-per-second rate of at least forty five, and preferably sixty or even more.
But as I said, once you get above sixty, it
becomes harder to tell the difference, and it means that
the graphics card has to work super hard to keep up.
So it might make use of a frame buffer, which
is sort of like a holding space in memory that

(31:20):
can serve up a corresponding image when it's needed, but
it still has to work super hard. Because you can
just imagine how much work it is for a processor
to generate the information necessary to create a high resolution
image complete with complex textures and lighting effects, and to
do so at least sixty times per second, perhaps for

(31:41):
sessions that can last for hours. You realize that graphics
cards need a good amount of power. As they got beefier,
the power requirements of the graphics card exceeded what the
cards could draw using the motherboard connection through that pc
I Express slot I was talking about. Those are rated to provide up to 75 watts. Graphics cards frequently need

(32:05):
in excess of 250, maybe 300 watts of power,
so that necessitated the inclusion of a separate power port
that would plug directly into the PC's power supply itself.
That also meant that gamers often needed to upgrade their
power supply on their PCs to supply the juice that
the graphics card needs. As the GPU does all this

(32:28):
work with this much power, it generates a lot of heat.
And that's because no machine that we create is perfect.
Every machine we humans make experiences some conversion of energy
from one useful form into another form, like heat, which
we typically think of as lost energy, because remember, energy

(32:49):
can neither be created nor destroyed. You can convert it
from one form into another, and if the energy converts
into heat, that heat tends to just dissipate into other
parts of the system, or out of the system into
the bigger system around it. That energy is effectively gone.
You have lost it. But on top of that, heat

(33:11):
and electronics don't get on very well. Overheated electronics can
cause lots of failures, and for that reason, high performing
graphics cards have heat mitigation and management systems built into them.
One common component is the heat sink, which is kind
of what it sounds like. It's an object that disperses

(33:31):
heat away from the heat generating object. A common heat
sink is a series of fins made of a thin
thermal conductor. So the fins provide a larger surface area for heat to move across. It moves out from the processor and starts to go through these fins, and it dissipates more easily. But GPUs, and often CPUs, generate way

(33:56):
too much heat for fins to handle without a little
extra help. Usually that help comes in the form of
a fan, which circulates air across the fins and pulls
heat away from them. High-performing graphics cards are truly beasts these days, large cards that have their own fans built into the housing of the card

(34:19):
itself in order to help pull heat away from the heat sink. More advanced forms of heat control
include things like water cooling systems, in which tubes of
water move underneath various components and absorb heat from those
components and carry the heat away from the processor to

(34:39):
go through a heat exchanger, essentially a radiator also made
out of fins. So these fins take the heat from
the water, cooling the water down so that it can
continue to circulate through the system, pull more heat from
the processor, take it to the fins, et cetera. Typically,
the fins are also cooled by a fan, so there's
like multiple elements to this particular system. There's lots of

(35:02):
points of failure too. So these things, not only are they expensive and complicated, they have more points of failure. That doesn't mean that they're less reliable, it just means
that there's more opportunities for stuff to go wrong. However,
it might be necessary if you're really running some of
these graphics components at their highest capabilities, and you can

(35:23):
kind of think of this as the circle of life,
or at least the circle of a heat exchange system.
I've got some more to say about GPUs,
but before I get to that, let's take another quick break.
So one thing I haven't really touched on in this

(35:46):
episode yet is the practice of over clocking. So remember
when I said that we measure processor performance speed in part by talking about the clock rate? Well, processor manufacturers typically set an upper limit on a processor's clock rate.
Usually this is to make sure that the processor is
going to perform reliably under what's considered to be normal

(36:09):
operating conditions, and sometimes it can get a little more
icky than that. There have been some processor companies that
have used the exact same chip with different limiting factors
on the clock rate in order to offer up a
range of products at a range of prices. So you
can have an entry level chip and then maybe a

(36:30):
moderate chip, and then maybe a premium chip, each with
a different clock rate. But it turns out all three are the exact same chip. It's just that the manufacturer
has put a kind of artificial limit on how fast
the chip can run. That doesn't happen all the time,
but it has happened before, and I personally find that
kind of weird because the capability was there for all three.

(36:53):
It's not like the cost of the premium chip for the manufacturer was greater than the entry-level chip. It's the same chip; it cost the same amount to make. Anyway.
That's another topic. So the fact is most processors can
operate at a higher clock rate than what manufacturers rate

(37:14):
them for, and with a little tweaking, you can make
those processors operate at that faster rate. That is, you
can if the motherboard and processors that you have are
the right models. Some systems put really hard limits on
that kind of stuff and prevent you from changing the
clock rate on a processor to any real degree. But

(37:35):
if your system allows for overclocking, you would make the
changes in the computer's BIOS. Maybe you're using some special software to do it, to make it, you know, easier to manage, and you would essentially be increasing the clock
rate and probably also boosting the voltage that is going
to the processor. Essentially, if you push more voltage, more

(37:57):
pressure, through to the processor, it will work faster. That's the kind of loosey-goosey way to explain it. Overclocked processors
can lead to better results when it comes to stuff
like you know, rendering graphics at a high frame rate,
but it can also cause stability problems with a PC,
and it also generates way more heat. Serious gamers who

(38:19):
overclock their systems really should look into water cooling systems.
In the competitive overclocking scene, I mean, like, these people are pushing the limits of what overclocking can do, it's not unusual to see competitors use extreme cooling solutions like liquid nitrogen. Liquid nitrogen has a boiling point of minus

(38:40):
320 degrees Fahrenheit, or minus 196 degrees Celsius. That means
that at that temperature, nitrogen would boil off into a gas.
So you have to keep it colder than that to
keep it liquid, or you have to keep it under compression.
But that's neither here nor there. Anyway, that's pretty dang frosty.

(39:01):
It's also, by the way, not recommended for practical everyday use,
even for hardcore gamers. Now, if you do wish to
experiment with overclocking, there are a lot of useful resources
online for you to follow, and it's important to look
stuff up for your particular hardware because the process is
not uniform across all pieces of hardware. The one bit

(39:23):
of advice I would give anyone who wants to overclock
their system is to do so in very small increments
and run tests frequently to check to see what the
heat levels are and to check your computer's stability. And then
you can gradually bump up the overclocking rate bit by
bit as a test. And then once you start to

(39:44):
see a dip in performance or you see temperatures going above a certain threshold, you can then back off a little bit and say, okay, this is my new peak for where my processor can work. And that applies both to the CPU and the GPU.
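To make that test-and-back-off advice concrete, here's the general shape of the procedure as a Python-flavored sketch; the helper functions are hypothetical stand-ins, since real overclocking happens in the BIOS or a vendor utility, not in a script like this:

```python
# Illustrative pseudocode for incremental overclocking. The functions
# set_clock_offset, run_stress_test, and read_peak_temp are hypothetical
# stand-ins for steps you'd perform in the BIOS or a vendor tool.
MAX_SAFE_TEMP_C = 85      # example threshold; check your own hardware
STEP_MHZ = 25             # small increments, as suggested above

offset = 0
while True:
    set_clock_offset(offset + STEP_MHZ)    # nudge the clock up a little
    stable = run_stress_test(minutes=10)   # then test under load
    if not stable or read_peak_temp() > MAX_SAFE_TEMP_C:
        set_clock_offset(offset)           # back off to the last good value
        break                              # and call that your new peak
    offset += STEP_MHZ                     # it held up, so keep this step
```

Now,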
I mentioned earlier that the first generation of dedicated graphics

(40:05):
cards were really about handling some of the heavy lifting
when it comes to 3D graphics. These days, there's
a lot more to it than that, and you've got
speed and detail and color representation all being a big deal.
But perhaps the most buzzy of buzzworthy terms to emerge
in the graphics scene lately is ray tracing. Ray tracing

(40:27):
ultimately is about how a computer system handles the display
of light, like how does it portray light on the display?
Not how does it get the image to your eyeballs.
But when you are playing a game where there are
you know, light is playing across the scene, how does
it handle that. The goal of ray tracing is to

(40:48):
create graphics systems in which light in the virtual world
behaves the same way it does out here in the
real world, complete with how light bounces off of objects,
how shadows are created, what reflections look like, and more. So
imagine that you're walking through a real world forest and

(41:10):
sunlight is occasionally breaking through the forest canopy overhead in
some places. In person, this kind of experience would have
a lot of really subtle details in light that older
graphics cards just couldn't really replicate. So with those games
where you might be like in a jungle or in
a forest, you would typically have a more uniform approach

(41:31):
to how light was presenting itself. Within the game, you
might have some areas that are darker than others or
brighter than others, but the graphics cards weren't really able
to get super subtle and detailed about it. Now a
card that supports ray tracing might be able to do
a better job of that and other stuff as well. So,

(41:52):
for example, a rain-soaked street might reflect a neon sign back at you in a really realistic way, and as you move around, the light behaves just as
it would in real life. This is actually a really
tricky thing to do. It requires a good deal of horsepower.
It also requires support from the software side. The game
has to include ray tracing for this to be a

(42:13):
thing, after all. But the latest graphics cards often tout ray tracing as a big feature. Now, a few years ago,
the big buzzworthy term was HDR or high dynamic range.
HDR refers to the spectrum of luminosity that a display
can provide, which deals with both the range of colors

(42:35):
that the display can create as well as the range
of brightness per pixel. So it's a combination of color and brightness and the variety that the display can create,
and a system that supports HDR can typically create really
spectacular images. And this also reflects the fact that, you know, image resolution is not the end-all be-all. For

(42:57):
a long time, especially with camera manufacturers, the use of megapixels was the way to really push a camera. More megapixels equals more good. That's not necessarily the case.
There's a lot of other factors that play a part,
like contrast and color representation. Anyway, if you hear about
ray tracing, that's really what it comes down to, trying

(43:18):
to simulate within a virtual world the way light behaves
in the real world.
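As a toy illustration of the math involved, here's the single most basic ray tracing operation, testing whether a ray fired from the camera hits a sphere; a real renderer repeats tests like this, plus shading and bounce calculations, millions of times per frame:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    oc = [o - c for o, c in zip(origin, center)]
    a = dot(direction, direction)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # the ray misses the sphere
    return (-b - math.sqrt(disc)) / (2 * a)  # distance to the nearest hit

# One ray fired straight ahead at a sphere five units away.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print("hit at distance", t)                  # -> hit at distance 4.0
```

From the hit point, a renderer would then trace further rays toward light sources and reflective surfaces to decide how bright, and what color, that pixel should be, which is where all the horsepower goes. Now, I mentioned earlier that graphics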
cards, the top-of-the-line ones, can be hard to find, and why is that? Well, not only
are they sought after by, you know, real elite gamers,
but they also are often used by people who want

(43:40):
to do a lot of parallel processing with a networked
system of computers, typically to do something like bitcoin mining
or sometimes even breaking encryption. So let me explain. And
I've talked about parallel processing in previous episodes, including some
fairly recent ones, which all involved using two or

(44:01):
more processors or two or more processor cores to divide
up tasks so that it takes less time to complete
the overall task. You're breaking it down into parts, and
it's faster to solve the parts than it is to solve
the thing as a whole. Not all computational problems can
break into a parallel approach, but for the ones that can,

(44:22):
parallel processing can speed things up considerably. One application of
parallel processing involves working out the potential answers to difficult
math problems, which happens to be the way cryptocurrencies like
bitcoin verify transactions and subsequently reward the system that solves

(44:42):
the problem with some cryptocurrency. So, in other words, people use bitcoins to make a transaction, right? They pay for something in bitcoin. The record of that transaction goes into a block of data, and when that block is full, when it's hit as many transactions as it can hold,
it has to be verified before it can join the

(45:03):
chain of previous blocks, the blockchain. The bitcoin system
devises a difficult math problem that will verify the transactions
and thus make the block the most recent in the
chain of transaction blocks. The first computer system to provide
the correct solution to this hard math problem gets some

(45:25):
bitcoins in return. And as long as the value of the bitcoin reward is greater than what it costs to get to that reward, there's an incentive to build out faster computer systems to try and solve the problems before anyone else does.
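Here's a toy version of that kind of puzzle in Python; real Bitcoin mining uses double SHA-256 over block headers against a far harder target, but the guess-and-check structure is the same, and it shows why the work parallelizes so well, since different machines can simply try different nonce ranges:

```python
import hashlib

# Toy proof-of-work: find a nonce so the block's hash starts with N zeros.
def mine(block_data: str, difficulty: int) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # this guess "solves" the block
        nonce += 1                # otherwise, guess again

print(mine("alice pays bob 2 BTC", difficulty=5))
# Each extra leading zero multiplies the expected work by 16, which is
# why miners race to throw as much parallel hardware at it as they can.
```

Now, these high-end graphics cards aren't cheap.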

(45:47):
The Founders Edition of the RTX 3090, that is, the version of the card that's actually built by Nvidia, would set you back about 1,500 US dollars, if you could find one. But as I record this, the value of a single bitcoin is more than 10,700 US dollars, and if

(46:11):
you solved a block, you would actually net 12.5 bitcoins, so that means one solution is worth more than 125,000 dollars. And new blocks
joined the blockchain every ten minutes. So if you have
the fastest system trying to solve these bitcoin problems and

(46:34):
you're able to solve a significant number of them for
whatever span of time you're looking at, you're looking at
a fortune, which means there is a huge incentive for
bitcoin miners to sweep up powerful processors that could give
them the edge when it comes to solving those problems
and netting a ridiculous amount of money, virtual money, but money.

(46:57):
So they really want those processors. They could buy a hundred of these Nvidia cards and they could pay it off by solving two blocks. Not that this is particularly easy, but you get the point. There's the incentive there, y'all, and that means that actual gamers are competing not just
against each other to get hold of these graphics cards,

(47:20):
but against bitcoin miners. And on the positive side, it
means that if you aren't absolutely determined to have the
state of the art hardware in your machine, you can
probably settle for a card that comes from the previous
generation or maybe two generations back, because bitcoin miners really

(47:40):
have no option but to embrace the fastest hardware, because
if they don't, the odds of them having a system
capable of solving a Bitcoin problem first reduce down to
near zero. One of the interesting things about bitcoin is
that the complexity of the math problem is actually dependent
upon the amount of processing power being dedicated to solving

(48:02):
the problem. So if the Bitcoin system detects that computers are solving the problems too quickly, it will automatically increase the difficulty of the math problem for the next generation of transaction solutions. Now, we'll likely see
this whole cycle continue until it becomes more expensive to

(48:26):
scoop up the graphics cards than you would make in solving the blockchain problem. So, every four years or so,
the number of bitcoins that are released per solution reduces
by half. When bitcoins first appeared, you would get fifty
of them when you solved a blockchain transaction problem. These

(48:48):
days it's twelve point five.
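That halving schedule is also what caps the total supply; here's a quick sketch of the arithmetic, using the fact that halvings actually happen every 210,000 blocks, which works out to roughly four years:

```python
# Summing Bitcoin's halving block rewards: 50, 25, 12.5, ... per block,
# with each reward paid for 210,000 blocks before the next halving.
reward = 50.0
total = 0.0
while reward >= 1e-8:          # rewards below one satoshi round to zero
    total += reward * 210_000
    reward /= 2
print(f"about {total:,.0f} BTC ever")   # -> about 21,000,000 BTC ever
```

There is a finite number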
of bitcoins that will ever exist, So eventually we're gonna
reach a point where the reward you get for solving
a blockchain problem will be relatively low, and it won't
justify hoarding and operating a suite of GPU cards in

(49:10):
various computer cases that are all networked together. It would be more expensive to do that than you would make
from solving blockchain problems. Now, you could still do it
if you wanted to, but you would lose money in
the process, So it doesn't make sense. But for the
time being it is incredibly frustrating. Building on that frustration

(49:30):
are some recent problems with those RTX 3080 and 3090 cards. Now, I mentioned a
Founders card earlier, and that is a card that's made by Nvidia itself. But Nvidia also licenses out the design, the specs, of the graphics cards to other manufacturers,
essentially saying, here are the components you need to put
together to make one of these cards, and then these

(49:54):
other manufacturers, it's up to them to actually follow the instructions, essentially, and make their own versions of the 3080 and the 3090 cards. Some of these companies will end
up putting their own little spin on the card designs,
and unfortunately that can sometimes result in cards that have
poor reliability or other performance issues. And that's one of

(50:16):
the things that seems to have happened with the RTX 3080 and 3090 cards. It didn't
take long for people to report that they were having
some problems while running games on systems that had these
new graphics cards in them. Sometimes they would get kicked
out of a game and back to the operating system.
Sometimes the whole system would crash. Sometimes they would get
weird artifacts and lines that would show up on screen. Now,

(50:39):
this would all be unacceptable for just a modest graphics card,
but it's really hard to forgive for a high-end model like, say, the 3090. And while it's early days and it's difficult for me to point a finger at any one specific problem or cause of this, what
appears to be the issue is that some of these

(51:00):
companies that are manufacturing this kind of graphics card have
taken some liberties with the design that ultimately have hurt
the stability of the card's performance. In particular, the Founders
version of the card has a series of small capacitors
that some card manufacturers have replaced with a single, cheaper capacitor,

(51:20):
and that in turn seems to create some electrical interference
issues that create an unstable environment. And it also makes
talking about specific graphics cards more confusing because while Nvidia
is responsible for the card design as well as the
manufacture of the Founders version of the card, other companies

(51:40):
are making the same type of card, but potentially with
tweaks to that design or with less expensive components. That's
why you can actually find a range of prices for
the same type of graphics card. Some companies are using
more premium components, which in turn drives the price of
the finished card up. Other companies are using lower

(52:01):
cost components in an effort to bring the price down
enough so they can sell a high performing graphics card
but at a lower price than their competitors are offering. However,
the danger of that is that the lower price components
may not be as reliable as the ones that come
stock with the Founders edition. Of course, some companies might
even go the other way. They might include even more

(52:22):
expensive components than the Founders version does, and then those
cards will be more expensive. But if the manufacturers can
sell consumers on the benefits of those more expensive components,
it can pay off in the long run. It becomes
a real game of deciding what is going to be
most important and most profitable. In the end, these cards

(52:44):
are necessary if you want to get the most out
of a gaming experience, and it also is necessary to revitalize your machine every so often. Um, I know some people who update their machines maybe twice a year, which to me is incredible. I'm still of

(53:05):
the mindset that that's way too much money to be
spending on a single device over and over and over again. Um.
But I'm also not a pro gamer and I'm not
a streamer, so there's that. I would be more likely
to buy a more modest graphics card and hope that
it gets me through the next couple of years and

(53:26):
then upgrade from there. But then again, I'm not doing
it for a living, so I'm a different consumer anyway.
But I hope that this helps illustrate what graphics processing units do, what graphics cards are meant for, why it's
hard to find them, and you know what's going on

(53:46):
with the current craze with the graphics cards that are
on the market today. I am going to sign off now.
We're gonna wrap this one up, but if you guys
have suggestions for future episodes of TechStuff, send me a message. The best way to do it is on Twitter. We use the handle TechStuffHSW, and I'll talk to you again really soon. TechStuff

(54:14):
is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
