
March 31, 2021 45 mins

What exactly is edge computing? Why is it important? We learn about edge computing, its relationship with cloud computing and the Internet of Things, and what we can expect in the future.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio.
Hey there, and welcome to TechStuff. I'm your host,
Jonathan Strickland. I'm an executive producer with iHeartRadio,
and I love all things tech. And over the history
of this podcast, I have covered a lot of topics
that were, at least at one time, little more

(00:27):
than buzzwords to me, cloud computing, machine learning, Internet of things.
A lot of these topics were either just not widely
spoken about in mainstream society or they were even just
emerging within the tech world itself. This is where I
have to remind all of you that I came into
TechStuff as someone who loved technology, but who is not,

(00:51):
in actuality a technologist or an engineer or anything like that.
And one of the terms that I started running into
around twenty fourteen or so, so it was after I had
started this podcast. One of those terms was edge computing,
or sometimes computing at the edge. And if you listen

(01:11):
to yesterday's Smart Talks episode with Malcolm Gladwell, you heard
a little bit about that in there, and now we're
gonna dive in whole hog. So today I thought we'd
talk a bit about what edge computing actually means and
contrast it with other types of computing models, and we'll
talk about how all of this is meant to work

(01:32):
together in different ways, and why it's even a thing
in the first place, and where edge computing is today
and what we might think of it in the future.
I thought that a good way to approach edge computing
is really to start with some basic facts. There are
certain things that we, as smarty pants human types, can
do to speed up computer systems, to speed up the

(01:56):
time from when we put input into a system
to when we get output from that system. We can
create more efficient and powerful processors. For example, you know,
we can lean on parallel processing for some types of
computer problems. Doesn't work for everything. Quantum computing, similarly, will
speed up certain types of computer problems exponentially. We can

(02:21):
design better software, and we can try to avoid Wirth's law.
That's the law that states that software tends to
get slower at a rate that's faster than improvements in hardware.
If we do combinations of all these sort of strategies,
then the machines of tomorrow will in theory run software
better than the machines of today. Again, assuming we don't

(02:43):
just make even more bloated software that ends up canceling
out our sick hardware gains. But there's one thing that
we just cannot get around, and that is the top
speed at which information can travel from one point to another.
Even using a fiber optic cable and a clever optical

(03:04):
system to deliver information, we are limited by the speed
of light itself. Now that speed is, um, let me
check my notes here... says "wicked fast." When traveling through
a vacuum, light zips along at two hundred ninety nine million, seven
hundred ninety two thousand, four hundred fifty eight meters per second. Nothing

(03:29):
goes faster, thanks a lot, Einstein. What this means for
us is that distance matters. Of course, until fairly recently,
this wasn't the biggest concern for us because the way
we did computing was pretty much always right up close
and personal. So let's talk about that for a second, alright. So,
to begin with, we have computers that we physically access

(03:51):
in some way in order to carry out a program.
So in the very early days, people programmed computers by
physically plugging in different cables into different sockets and throwing
switches and I don't know, waiting for lightning to strike
or something. Okay, ignore that last bit. The process was tedious,

(04:12):
it was complicated, it was easy to gum up. The
speed of the computer was limited both by its own
method of processing information as well as the speed at
which human operators could operate it. But once the computer
actually finished the calculations, delivering them was pretty fast. I mean,

(04:32):
sometimes delivering meant printing out a sheet of paper or
making little lights on a panel light up a specific way,
and then the humans would have to consult a guide
to figure out what that all meant. But if the
computer had had a display, it would be able to
throw up the answer as soon as it got there
with zero delay. Flash forward a few decades, and we

(04:53):
then had computers that had simplified input and output devices.
You had your keyboards, you had your monitors and what not. Now,
programming a computer was far less convoluted, though as programming
languages evolved, it can still be fairly easy for
a human to gum things up with a careless error
or a miscalculation or typo. Again, most of the time

(05:15):
we were talking about people being right up on
the computer, and so there was really no delay between
the machine arriving at the endpoint of a computational process
and then delivering the result to the person who was
using the computer. The information wasn't really traveling anywhere, it
was just there. Then we get ARPA, the Advanced Research

(05:37):
Projects Agency. It's the predecessor to DARPA. One of the
projects ARPA tackled was to find a way to network
different types of computers together. This was a big challenge
for several reasons, but one of them was that different
computers ran on very different processes and what we'll call
operating systems, but that's really being a bit generous.

(06:00):
But the point is that the computers from different manufacturers,
effectively they spoke different languages and they could not directly
communicate with one another. And to boil it down the easy way,
the computers spoke math, but it was kind of like
wildly different dialects of math. So there needed to be
some sort of common language that all computers would be

(06:21):
able to convert their native speech into and then translate
incoming speech back into their native language. Now that's a
super oversimplified way to say that very smart people built
out the protocols that would allow for the transfer of
information across networks. We would eventually get stuff like
TCP/IP that would facilitate these connections. Now

(06:46):
computers could connect into their network and they would be
able to send information to and receive information from other
computers on that same network. And as other networks took shape,
they could interconnect with that first network, and then we
got the network of networks, or the Internet. So I'm
giving a really fast rundown of the history of the

(07:07):
Internet here. This is like from space levels of high
level view of the history of the Internet. All right,
now we're gonna skip way ahead. Let's get up to
the nineteen nineties, where the general public became aware that
there was this thing called the Internet, and the development
and deployment of the Worldwide Web really helped that along considerably.

(07:30):
Now we had this information super highway, or if you prefer,
we had a series of pipes that let information flow
from one source to another. Using this vast network of machines,
we could send messages to friends on the other side
of the planet, or check in on a particular coffee
pot at the computer lab in the University of Cambridge

(07:51):
in England, even if we happened to be in Athens, Georgia,
at the time. This, by the way, was actually a
thing that once existed. And yes I did use a
computer in the University of Georgia Computer Lab to check
and see whether or not there was coffee in a
coffee pot across the pond. Normally there wasn't because I
was often in there in the afternoon and it was

(08:13):
several hours later over in the UK. But you get
the point now. The way the information actually travels across
the Internet is pretty fascinating. First, information travels in bunches
of data called packets, and this was a brilliant move
early on in the development of networking technology. When computers

(08:33):
send data across the Internet, they chop that data up
into packets, and a packet has two different kinds of
data in it. There's data that's all about control information.
So in other words, this is the data that explains
where the packet came from, where it's going to, how
many other packets the receiving computer should get in order

(08:54):
to make up the entire file, and where within that
sequence this particular packet should go. So you can think
of that information as sort of like being the information
on the outside of an envelope that you would send
through the mail. Then you've got the payload or the
data that relates to the file itself, the thing you
are sending. The packets are kind of like puzzle pieces

(09:15):
that get reassembled on the other side. Actually, think of
it a lot like the sequence with Mike Teevee in
Willy Wonka and the Chocolate Factory, the Gene Wilder version,
that is. The description they give for how Mike gets
broken up into millions of tiny pieces and then flies
through the air and gets reassembled on the other side
is sort of like what happens with data packets. Sort of.
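
To make the idea a little more concrete, here is a minimal sketch in Python of the two kinds of data a packet carries and how sequence numbers let the receiver put the pieces back together. The field names are illustrative only, not the layout of a real IP or TCP header.

```python
from dataclasses import dataclass
import random

@dataclass
class Packet:
    # Control information: the stuff written on the outside of the envelope.
    source: str        # where the packet came from
    destination: str   # where it is headed
    sequence: int      # where this piece fits within the whole file
    total: int         # how many packets make up the whole file
    # Payload: the slice of the actual file being sent.
    payload: bytes

def split_into_packets(data: bytes, chunk_size: int = 1024) -> list[Packet]:
    """Chop a file into small payloads, each wrapped with control information."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [Packet("my-laptop", "far-away-server", seq, len(chunks), chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    """Packets may arrive out of order; sequence numbers put them back together."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.sequence))

if __name__ == "__main__":
    original = b"Hello from the other side of the Internet!" * 200
    packets = split_into_packets(original)
    random.shuffle(packets)  # simulate packets taking different routes and arriving out of order
    assert reassemble(packets) == original
```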

(09:39):
Well, those packets, which tend to be pretty small,
can all travel different pathways in order
to get to their intended destination. If you think of
the Internet as just a huge, interconnected series of roads
that allow you to get from point A to point B,
but you can choose literally thousands of different routes in

(10:01):
order to do so. That's why, I mean, these packets
can all travel different routes. This actually helps make information
transfers more robust. I mean that makes sense, right, because
if you were to send a really big file that
could not be broken up, well, first of all, you'd
have to have a connection that could handle that much
data being sent all at once, and that connection would

(10:22):
need the same throughput all the way to the destination,
or you would have to divide up the information in
some way. And if you did do that, if you
broke it up into packets, but all those packets had
to zip down the exact same pathway, and then something
happened halfway through the transfer, uh, like maybe a connection broke,
maybe a server went offline, whatever it might be, the

(10:45):
incoming file would be incomplete on the receiving end. So
packet switching helps ensure that communication between two computers across
different networks can actually happen. But as you can imagine,
if the user's computer is in one part of
the world and the destination computer is on the other

(11:06):
end of a communication channel on the other side of
the world, then there might be a bit of a
delay between sending something and getting something back because that's
a lot of distance to travel, a lot of hops
to make on the way. We'll talk about hops a
bit later, and so you can encounter some latency. Now
let's move ahead a little bit more and we get

(11:27):
to the early days of cloud computing. The most basic
definition I've ever heard about cloud computing is that it's computing,
but it's happening on someone else's computer, which is pretty accurate,
and there are a lot of subcategories we can talk about,
like cloud storage that doesn't necessarily require any computing on
the server side, but it does mean you're saving stuff

(11:48):
to someone else's machine, or more likely saving stuff to
someone else's several other machines for the sake of redundancy.
Cloud computing is really useful because you're no longer worried
about the device that the end user is relying upon
to do heavy lifting. One of the really big frustrations
of owning a computing device is that, well, Moore's Law is

(12:11):
a thing. Basically, we interpret Moore's law to mean that
every two years or so, the new computer processors that
are rolling out of clean rooms are about twice as
powerful as the ones that came out two years earlier,
so we see processor capabilities effectively doubling every two years.
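
As quick arithmetic, that rule of thumb compounds fast. A tiny sketch:

```python
def moores_law_factor(years: float) -> float:
    """Rough rule of thumb only: capability doubles about every two years."""
    return 2 ** (years / 2)

print(moores_law_factor(10))  # about 32x after a decade, by this rule of thumb
```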
It's actually a little bit more nuanced than that interpretation,

(12:33):
but I've done full episodes about Moore's law in the past,
so we'll just go with the overly simplified but widely
accepted definition. This tendency means that computer capabilities are on
a pretty incredible trajectory. But the flip side of that
coin is that any computer you buy today has a
limited shelf life, at least as far as running the

(12:54):
most current software goes. Even in the early days of
personal computers, the joke was that by the time you
got your computer home from the store, it was already obsolete.
And that joke wasn't that funny because it felt all
too true. Well, cloud computing offloads the computational lift

(13:15):
from the end user device and puts it on a
server farm somewhere in the world. If the company providing
the service is a really big one, or it's piggybacking
off of a really big company like Amazon, then it
might distribute those servers in different geographic regions. Otherwise it
could be in a centralized data center somewhere. The server

(13:37):
farm provides the number crunching, and it means that the
end users machine doesn't have to be that advanced in
order to take advantage of some heavy duty computing power.
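
As a rough sketch of that division of labor: the lightweight device just packages up a request, ships it to a server somewhere, and reads back the answer. The endpoint URL and request format below are made up for illustration; a real cloud service defines its own API.

```python
import json
import urllib.request

def compute_remotely(values: list[float]) -> float:
    """Offload the heavy number crunching to a (hypothetical) cloud endpoint.

    The end-user device only builds the request and reads the result;
    the actual processing happens on the server farm."""
    body = json.dumps({"operation": "sum_of_squares", "values": values}).encode()
    request = urllib.request.Request(
        "https://example.com/api/compute",  # placeholder URL, not a real service
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["result"]
```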
This is what enables devices like Chromebooks from Google.
These are very lightweight computers, both in terms of physical weight,
and their native processing power. That's because Chromebooks rely

(14:00):
heavily on cloud based services, so a lot of the
actual processing is happening on machines that could be hundreds
of miles away. The Chromebook itself is kind of
a conduit for computational services. It's doing some work, but
not most of it. This is the same strategy that
powers things like streaming game services like Google Stadia or

(14:24):
Microsoft's Xbox Game Pass. The games run on specialized hardware
that can crank up the settings on demanding games and
then deliver all of that over a streaming internet connection.
So as long as you have a good connection to
the Internet, you can enjoy a gaming experience that would
otherwise require you to have a souped up gaming rig
or console. But while this removes some of the burden

(14:48):
from the end user, who might no longer need to
go out and buy the best computer to run the
latest software, because that gets pretty darn expensive, particularly if
you're interested in gaming. A gaming rig can run thousands
of dollars if you want something that's state of the art,
but it does bump up against that fundamental speed limit

(15:08):
that we mentioned at the beginning of this podcast. When
we come back, we'll talk about how edge computing fits
into this strategy. But first let's take a quick break.
I've got a couple of other things that I need
to say about network computing, and then I promised we're

(15:30):
going to get to edge computing. One thing is that
the Internet has made it possible to have the actual
Internet of things. Now, back when I first heard this term,
it didn't seem to be much more than just a buzzword.
It was a concept that sounded intriguing but hadn't really
begun to manifest in a way that was visible to

(15:52):
the general public. But starting around the early twenty tens,
that began to change. And today there are more than
thirty billion IoT devices connecting to the Internet, with experts
estimating that the total number will be somewhere in the
neighborhood of thirty five billion by the end of this year,
and it's just going to get bigger from there. And

(16:12):
these devices are all across the spectrum when it comes
to purpose and intended user base and what the outcome
would be. You've got the consumer facing stuff that a
lot of us have encountered, you know, everything from smart
thermostats to home security systems to personal medical devices, to
athletic trackers, all that kind of stuff. Then you've got

(16:35):
more specialized variations like lab technology for hospitals and research facilities.
You've got municipal infrastructure components that are turning traffic lights
into smart intersections and that kind of thing. The variety
and scope of the Internet of Things is unbelievable, and many,
if not most, of these devices are pretty light lifters

(16:59):
when it comes to processing information. For most of them,
their main job is to gather data in some way,
whether that's through optics or other sensors, and then they
send that data up through the Internet, where some other
system somewhere connected to the network will process the information
and then presumably will do something with that info. So

(17:21):
the output might be a readout that's useful to an
end user, like it might be that you look at
your phone and you're able to see input based upon
the Internet of Things that are around you, or it
might send that information off to some other related system
so that a result will show up somewhere else, maybe
in a way that isn't even obvious to us. The

(17:43):
traffic light example could be a version of that, right,
You could have a citywide system of connected smart traffic
lights that could detect changes in traffic patterns and perhaps
respond proactively in an effort to minimize congestion, for example.
This is actually a really really complicated problem. It's not

(18:04):
something that's simple to solve, but it would be impossible
to solve unless you had a sort of Internet of
things infrastructure to collect and process all that data. But
the era of lightweight devices connected to the Internet really
brings into focus the limitations we face if we rely
on centralized data centers. The further out you are from

(18:26):
that data center, the more delay there's going to be
as the devices in your area are trying to communicate
back with that distant center. There's just no getting around that,
because even if we made a perfect conduit between the
data center and the end device, we'd still be limited
by the fact that light has a speed limit. And

(18:46):
here's where edge computing finally really comes into play. Now.
I should also mention that edge computing is one of
those things that has a few different interpretations and definitions.
What it is largely depends upon whom you are talking
to about it, so you might get slightly different definitions,
but there are some general features that I think are

(19:07):
common across all definitions. So if we visualize a typical
cloud computer network, we might think of a centralized data
center that connects out to various end points. Edge computing
is a practice in which a company locates servers at
the edge of networks. In other words, you are creating

(19:27):
servers that can do data processing, and you are putting
them close to where the data is actually being gathered
or generated, and you're putting it physically close to them. So,
for example, I live in the city of Atlanta, Georgia.
Now let's say that Amazon Web Services, which provides hosting
services to thousands of different apps and processes. Let's say

(19:50):
that they were all located in the state of Washington,
that's where Amazon has its prime headquarters. Well, that's more
than two thousand miles away from me, well over three
thousand kilometers. So if I'm running an application that
needs super fast response times in other words, I need

(20:11):
really low latency, then I'm probably going to have a
bad experience because the data has to travel pretty far,
and it's not necessarily going in anything like a straight line,
because that's not how data traveling on the Internet works.
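
A quick back-of-the-envelope calculation shows why that distance matters. The distances below are rough and illustrative, and real traffic is slower still, because fiber isn't a vacuum and every hop through a router adds its own processing delay, so these figures are the absolute best case.

```python
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000  # roughly 300 km per millisecond in a vacuum

def best_case_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time if the signal traveled in a straight line
    at the speed of light, with no routing, no hops, and no processing."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_PER_MS

# Rough, illustrative straight-line distances from Atlanta.
print(best_case_round_trip_ms(3500))  # to a Seattle data center: about 23 ms, minimum
print(best_case_round_trip_ms(20))    # to an edge server across town: about 0.1 ms, minimum
```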
It's likely having to go through multiple hops, so I
mentioned hops earlier. A hop is when a data packet

(20:33):
moves from one network segment to the next network segment,
So you can think of it as like hopping from
one router to the next in order to get to
its final destination. And the more hops that there are
between me and the server that's running the app I'm
actually using, the more latency I'm going to experience because
the data has to travel more hops in order to

(20:55):
get to the server that's doing all the number crunching
then has to do the same thing in order to
return results to me. So I'm going to experience this
as lag or latency. But if Amazon instead set up
different regional server farms around the world and located edge
servers at those spots, the edge servers could take over

(21:17):
the immediate processing requirements. So let's say Amazon set up
that kind of data center in Atlanta, where I live. Now,
instead of my device, whatever it might be, connecting back
to Amazon's home headquarters in Seattle, Washington, it's instead connecting
to this much closer edge server in Atlanta. The edge

(21:38):
server can carry out whatever the immediate function is that
I'm trying to do. So maybe now the data traveling
between me and the edge server is only going through
one or two hops. Not only is it traveling a
shorter physical distance, it is having to make fewer transitions
from one machine like a router on the Internet, to

(21:59):
the next than it would if it were to go
all the way back to Seattle. And the reduction in
hops is critical for reducing latency. Now, this doesn't mean
that edge servers totally replace centralized data centers. Instead, they
work in concert with those centralized data centers. Edge servers
kind of act as a point of processing for quick response,

(22:22):
but you might want to have deeper analytical work being
done by your centralized server. This work isn't necessarily as
time sensitive, and in fact, the results of that work
might not return to the end user like me at all.
It might be things like trend analysis, where you're looking
at millions of different transactions over a course of months

(22:46):
and months and then drawing conclusions from that data. Well,
that's not as time sensitive. That's literally looking at changes
over time. So that can be done in a big
centralized data center. It doesn't need to be offloaded to
the servers that are geographically close to me. Let's take
an actual example. Let's say I'm using an augmented reality

(23:08):
headset that connects wirelessly to hot spots and cellular networks.
So this is an actual headset that I'm wearing. I've
got a battery pack for it, and it's communicating with
the network. I'm walking through Atlanta and I'm looking at
various features, and the augmented headset is displaying information about
the stuff I'm looking at, and I can see the

(23:31):
information within my view. It's overlaid on top of my
view of the world around me. The headset would likely
be fitted with stuff like a GPS chip, accelerometers, magnetometer
for compass directions, camera to identify what it is I'm
looking at, a processor, and so on. The headset needs
to relay data to servers to process this information, to

(23:54):
interpret it, and then to return relevant information to my view.
The reason why you do this is because by offloading
those computational requirements to another machine, you don't have to
make the AR goggles weigh like fifty pounds and
have big cooling systems attached and everything, because they don't

(24:14):
have to do as heavy a lift in the computational department.
But this has to happen really fast, otherwise I'm going
to see information about what I had been looking at
a few moments before while I'm now looking at something else,
which would be really disorienting. I might look at a
historic building and I might wonder, oh, I wonder what

(24:36):
this building used to be, and then I happen to
look away and I'm looking at a tree or something,
and then I see that, apparently that tree is actually
the historic Wren's Nest, which was the home of the
controversial author Joel Chandler Harris. And heck, I might just
think that my headset detected that there's an actual wren's
nest in the tree I'm looking at. It would be confusing.

(24:59):
So for applications like streaming video games to players using a
specialized device, edge computing is absolutely critical. Like AR
and VR, latency will ruin your experience in gaming. There's
nothing like playing a game and feeling that bit of
lag between when you press a button and when something
actually happens on screen. You don't want there to be

(25:21):
a perceptible delay between hitting a jump button and having
your little Italian plumber dude actually jump. You want that
to feel seamless, and humans are pretty sensitive to delay.
You would want there to be less than one hundred
milliseconds or one hundred thousandths of a second, but even
that is a little long. The general consensus is that

(25:43):
the goal post is to have delays of less than
twenty milliseconds, and that is fast enough so that our
mushy brains don't really process any kind of delay. If
you've ever used a VR application that has even a
tiny bit of latency in it, you've probably felt a
sort of unpleasant swimming sensation that can frequently lead to

(26:04):
a type of motion sickness. It's because you're sensing that
delay between when you turn your head and when your
actual point of view within the virtual environment adjusts, and
your brain says, hey, something is like really wrong here,
and then it sends a message to your stomach to
go hog wild because somehow that's going to fix things

(26:26):
all right. My understanding of biology is admittedly a little
limited here, but the important bit is that latency is
bad and we must do our best to eliminate it.
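
If you wanted a crude feel for that budget yourself, one rough approach is to time how long it takes just to open a connection to whatever server you care about and compare it against the roughly twenty millisecond threshold. Real measurements would use proper tooling; this is only a sketch, and example.com is a stand-in for the server you would actually test.

```python
import socket
import time

def rough_round_trip_ms(host: str, port: int = 443) -> float:
    """Time a bare TCP connection setup as a very rough stand-in for round-trip latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

latency = rough_round_trip_ms("example.com")  # swap in the server you actually care about
BUDGET_MS = 20  # the rough point below which most people stop perceiving the delay
print(f"{latency:.1f} ms -- {'under budget' if latency < BUDGET_MS else 'you will feel this'}")
```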
AR, VR, and game streaming are all really obvious
examples of technologies that rely on low latency response times
while also typically requiring a high data throughput communication channel.

(26:49):
So when you hear people talking about applications that require
stuff like 5G connectivity and edge computing, AR, VR,
video games, those are all typically part of the conversation. And
before I go further, I shall also clarify that I
specifically mean high frequency 5G connectivity. If you've listened
to my episodes about 5G, you know that there

(27:11):
are a few different flavors of that technology, all of
which relate to the band of radio frequencies that are
being used. The higher end frequencies within 5G can
carry an incredible amount of information all at once. These
are bands of frequencies that provide data speeds that rival
that of dedicated fiber optic lines. But these frequencies also

(27:34):
don't travel very far and they can't penetrate solid walls
very well, so you quickly lose out on that high
volume throughput if you move away from the transmission antenna
or something comes between you and it. It's why a
really robust high speed 5G network would need a
lot of towers to make it work. Anyway, the

(27:55):
5G would be the communication channel, while edge computing would
provide the actual processing capabilities to return relevant information quickly.
You can imagine how edge computing would be important for
all sorts of applications, not just VR, AR, and
video games. The Internet of Things depends pretty heavily on
edge computing, as the end devices, like I said, tend

(28:17):
to be pretty simple. A lot of IoT devices boil
down to a sensor that detects some sort of dynamic element,
you know, a thermometer for example, detecting changes in temperature.
Then it has a means of sending the information off
to somewhere else, and it's that somewhere else that ends
up making meaning of the data that the sensors are
actually gathering. So the end device is at least in

(28:40):
a way in this particular instance, kind of stupid. It's
just giving constant updates and a processing center is actually
making use of that information. However, there are also some
IoT devices that actually do have some compute capacity built
into them. There are machines that have processing units kind
of like CPUs on a computer, and they also have

(29:01):
memory and the ability to do at least some data
processing right at the point of data generation or data collection.
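
A tiny sketch of that split might look like the following. The sensor read and the upstream call are stand-ins for real hardware and a real network service, but the overall shape (process locally, only send along the summary) is the edge computing part.

```python
import random
import statistics
import time

def read_temperature_sensor() -> float:
    """Stand-in for reading an actual hardware sensor."""
    return 20.0 + random.uniform(-0.5, 0.5)

def send_upstream(summary: dict) -> None:
    """Stand-in for shipping data to an edge server or a central data center."""
    print("sending upstream:", summary)

def run_edge_device(samples_per_batch: int = 60) -> None:
    """Do the simple processing right on the device: boil a batch of raw readings
    down to a short summary and send only that, instead of every data point."""
    readings: list[float] = []
    while True:  # a real device would run this loop for as long as it has power
        readings.append(read_temperature_sensor())
        if len(readings) >= samples_per_batch:
            send_upstream({
                "mean_c": round(statistics.mean(readings), 2),
                "max_c": round(max(readings), 2),
                "min_c": round(min(readings), 2),
            })
            readings.clear()
        time.sleep(1)  # one reading per second
```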
So if you were to go out and purchase a
brand new car, chances are that car would have somewhere
in the neighborhood of fifty CPUs built into it in
order to control various functions, all of these operating independently

(29:23):
of each other. But it also means that, as elements
of the network, these can effectively become edge devices.
So you've got your edge servers and you've got your
edge devices, which if they weren't connected to any other network,
you would just call them computers. When we come back,
we'll talk a bit more about edge computing and some

(29:46):
of the pluses and some of the challenges associated with it,
and what we might expect to see develop in the
near future. But first let's take another quick break. One
thing I haven't talked about so far in this episode

(30:07):
is containers, not physical containers that you would find in
the real world, but rather the concept of containers within
the context of software development and deployment. Now, remember how
I talked about packet switching and how information travels
across the Internet and how data packets play a part? Well,

(30:28):
in a kind of similar way, containers are standard units
of software. So we're not just talking about data, we're
actually talking about applications here, and a container is a
way to put together everything that a specific piece of
software needs in order for it to operate. That includes
the system tools that it needs, the code of the

(30:50):
app itself, any libraries the code has to draw upon
in the process of executing the program, and so on.
So essentially you can think of containers as all
the stuff this particular app needs in order to do
whatever it is the app does. Now, the reason that
containers are important is that they allow developers to move

(31:11):
software to different operating environments easily. Let's say that you
are developing an app, and you might first develop it
on your own machine, but then you actually need to
test the app out. You've built it up to a
point where you can run some tests, get some people
in there to check it out, go through all the features,
make sure it works. So you want to deploy it

(31:33):
to a test environment, a discrete environment in which the
app can behave as if it were released to the wild.
But here's the important part. You haven't actually released it
out to the wild, so that gives the opportunity to
find any problems with it, vulnerabilities, anything that could be
detrimental once it was released. You can do all that

(31:53):
in a safe space. So you do that, and you
find out what works and what doesn't work. You go back,
you make some changes, you deploy it again to the
test environment, you test it again. Once it gets to
the point that the app seems to be working the
way you wanted, then maybe you move it to a
staging environment, and from there it goes into production and

(32:14):
it becomes an app that end users can actually download
and install on their devices and actually use. Well, containers
make it easier to move this app from one environment
to another, from laptop to test environment and back again,
or test environment to staging environment, staging environment to production,
et cetera. And these environments can all have different elements

(32:38):
from one another, and those differences could mean that code
that works really well in one environment suddenly doesn't work
at all or is doing really weird stuff in a
different environment. But containers contain all the elements of the
app needs to run properly, so that at least in theory,
the code should run the same way regardless of whatever

(33:00):
environment it is in, because all the requirements for the
app are contained along with the code of the app itself.
Now I get that this is a little difficult to
grock for some folks. I mean it's tough for me
and I have covered it multiple times. But beyond containers,
we have another term that frequently pops up when we
talk about cloud computing and edge computing, and that term

(33:21):
is the dreaded Kubernetes, which refers to an open
source platform for container management. So in other words,
this is kind of one step up from the containers themselves.
This is a product that helps teams operationalize containers at scale.
Because it's one thing to move a container from a

(33:43):
you know, a developer laptop to a testing environment. It's
another thing to deploy software that is going to potentially
millions of users. So from a software perspective, we could
say that Kubernetes and containers are trying to do from
a code approach what edge computing is trying to do
from a hardware approach, But in reality, all the stuff

(34:04):
kind of gets mixed up together. Containers make it easier
from an operational standpoint to deploy apps to the edge,
whether that's to edge devices where you're more likely to
use a simple container platform, or to edge servers where
you might be relying on the more robust Kubernetes platform.
So these are all pieces of a puzzle that makes

(34:26):
edge computing a viable strategy. Now, there's some other considerations
that IT professionals have to make when it comes
to edge computing. Distributing computing to the edge comes with challenges.
For example, when you've got your own on premises cloud
computing data center, you have a lot of control when
it comes to ensuring the safety of your equipment and

(34:48):
the information that it holds. That spans physical safety. That
means you can actually make sure that those facilities are
protected, that unauthorized people cannot easily get physical access to the machines.
It means that you've got a properly cooled facility to deal
with all the heat that those computers are giving off.
That kind of thing. It also means that you have

(35:09):
more control over cyber security, so you can make sure
that protections are in place to keep things running smoothly.
But moving out to the edge creates more opportunities for
bad actors to find a way to attack a system.
So creating an edge computing network that is easy to
administer and orchestrate while also being secure is a pretty

(35:31):
big hurdle. It may mean leaning heavily on other entities
and trusting that they've got their act together, and as
we've seen pretty recently, sometimes it turns out that
a trusted entity has been compromised and then there can
be fallout. I'm specifically thinking about the SolarWinds hack. Now,
in that case, we weren't talking about edge computing, but

(35:53):
I'm using it to illustrate a point. The interconnectedness of
these systems and the fact that you might be talking about
half a dozen companies that own parts of this network
that are all involved in this edge computing enterprise means
that you're asking a lot of organizations to trust one another,
and if one of those organizations gets compromised, there's a

(36:16):
danger that hackers could take advantage of those trusted relationships
to gain access to the others. Now, related to the
issue of security is privacy. When it comes to the
Internet of Things, we're often talking about devices that are
gathering data about us and our activities. Yes, we've also
got devices that are sensing environments and not so much

(36:38):
focused on people, but a lot of devices are either
directly or indirectly keeping track of where people are, who
they are, and what they're doing. Now that data can
be useful for a lot of good things, but it
can obviously also be misused or outright abused. So there
is an onus on companies to make sure they are

(37:00):
good stewards of data and that they protect that information.
And since this information is potentially moving between lots of
different points, from say the endpoint in the environment or
on a user through the edge network and potentially further
up the chain to a cloud computing network, there are

(37:20):
a lot of opportunities for vulnerabilities. Now. While we often
associate vulnerabilities with hackers who are trying to find ways
to exploit systems, a vulnerability could just as easily be
a poorly protected web portal that is publishing what should
otherwise be private data. We've seen this happen with companies
where somewhere, someone along the chain failed to take into

(37:43):
account what was actually going on, and data that should
have remained protected and private was somehow published publicly or
semi publicly. As we look at decentralized computer systems, we
see a lot more points where this kind of thing
could potentially happen, either with the raw data collected in

(38:03):
the wild or then the processed data that comes out
of the edge network. Either way, that's something else that
IT professionals have to focus on. Reducing the number
of times data needs to move from one part of
the system to the other is potentially one solution toward
providing better privacy and security. You're reducing the number of hops.

(38:25):
You're reducing the number of vulnerabilities that could potentially exist.
If all the computing can stay at the edge and
nothing needs to you know, phone home to the centralized
data center, there are fewer opportunities for something to go astray. Similarly,
we will likely see lots of companies specialize in ways
to manage the flow of data between devices on the

(38:46):
edge of a network and big data centers. I think
that's going to be a really big business. It's
like enterprise to enterprise business. But what if you're
not an IT professional? I mean, I'm not. So
what if you're like me, it's not your job to
worry about this kind of stuff. What does all this
actually mean to you? Well, essentially, it means the gradual

(39:08):
introduction of more lightweight technologies that have increasing usefulness in
our lives. Well, maybe usefulness is going a bit far.
We'll be able to do a lot more stuff with
different devices, interacting with our environments and with ourselves. Not
all of it might be useful. I mean I often
talk about augmented reality applications like I did earlier in

(39:32):
this episode, where you know, you use an app on
a smartphone, or maybe you're lucky you've got a pair
of AR goggles and you're able to visualize the
world around you in different ways. You know, I love
this idea of going to the site of an old
castle and looking at the ruins and then seeing a
virtual reconstruction of what the castle looked like when it

(39:52):
was in its heyday. That to me is like the
gold standard of AR applications, which, you know, kind of
tells you that I majored in Medieval English literature when
I was in college. But I also admit that the
reality of AR means I'll probably see a lot
more let's call them frivolous uses of the technology, like

(40:14):
walking through a grocery store and seeing characters in the
front of cereal boxes seemingly come to life, inviting me
to enjoy their sugary goodness. Or I might look at
a movie poster and suddenly a trailer for that film
begins to play inside my vision. A lot of the
services that we use are monetized in various ways, and

(40:35):
I mean, I'm not knocking it. That makes sense. No
one wants to work for free. But that often means
that while certain technologies might have incredible potential, we also
have to wade through a lot of less lofty applications.
But edge computing is going to make that sort of
stuff possible. Without edge computing, we wouldn't have the responsiveness
capable of generating those experiences in a timely fashion.

(41:01):
So with edge computing, we're also going to see a
lot of one of my old favorite words, convergence, the
convergence of multiple disciplines of technology creating new approaches towards
various problems. You've got your communication channels supplied by technologies
like a robust five G rollout. You've got your computer

(41:21):
technologies at the edge of the network. Behind that, you've
got machine learning, artificial intelligence, autonomous management taking over tasks,
optimizing them, always striving towards constant improvement. It's a pretty
cool way to look at the future. Even something as
seemingly dull as supply chain management really could have big

(41:42):
results for consumers down the road. Literally and figuratively, imagine
that the price of some goods starts to come down
because we've actually developed far more efficient approaches to producing
and shipping that stuff, and thus it costs less to
get it to market, and various companies are competing with
one another, so they reduce the price to you. That's

(42:06):
one benefit we could see through this kind of robust
rollout of edge computing. However, we have to remember one thing:
this version of the future is not a guarantee. It's
a possibility. It's also good to think about the larger
effect that we see as a consequence of more computing systems,
more IoT devices, more processing, because this ultimately means we

(42:30):
have to consume more energy. We need more electricity to
fuel all this stuff, which means we've got to produce
more electricity, which frequently means we also are going to
have a big ecological impact as a result of all
this progress, assuming that we're still relying heavily on fossil
fuels for our generation of electricity. Nothing exists in a

(42:55):
vacuum except for all that light zipping around out
in space. This is all interconnected. We have to train
ourselves to think about big picture stuff, to tackle problems
in ways that aren't exacerbating different but very important problems. Now,
I never said I had all the answers. Heck, I
only have a couple of answers. For example, the capital

(43:18):
of Iceland is Reykjavik. Doesn't really apply here, but
that's my point. But yes, that's our overview of edge
computing and the role it plays within networks and our
experiences with technology. It is one of those things that
continues to evolve. And like I said, this one's a
pretty young one. Like I think the earliest mentions I

(43:39):
could find were somewhere around the mid twenty tens, so not that old
in terms of technologies that we have at our disposal.
So I'll probably be doing multiple episodes about this further
into the future as we see it evolve over time
and different implementations take shape. I'm excited to see what

(44:00):
will become a reality based on this technology. Uh, and
of course I am concerned about the impacts that the
technology will have beyond just its direct application. But I
think we can leave off here and then we will
come back with new episodes about other stuff. So if

(44:21):
you have any suggestions about other stuff I can cover,
preferably related to technology. I mean, if you want me
to talk about Reykjavik, I'll do it. It's just,
I don't know how well it fits in with
tech stuff, and my sponsors might get mad. Heck,
I'm game if you are. Let me know what you
would like me to talk about in future episodes. The

(44:41):
best way to do that is to reach out on Twitter.
The handle I use is TechStuff HSW,
and I'll talk to you again really soon. TechStuff
is an iHeartRadio production. For more podcasts
from iHeartRadio, visit the iHeartRadio app,

(45:03):
Apple Podcasts, or wherever you listen to your favorite shows.
