
September 13, 2023 38 mins

A blog site called Simplicable outlines several "fundamental principles" of technology. We take a look at a few of them, talk about what they mean, and how they relate not just to tech but how we incorporate tech into our lives.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how the tech
are you today? I thought I would talk about a
few fundamental principles that underlie technology, at least as identified

(00:26):
by a blog called Simplicable. The name of the author
on all the articles I was looking at was John Spacey,
so it appears to be John Spacey at the helm
of this blog, which I think is based out of Singapore.
So these are concepts that apply to or drive technology
and technological change and innovation. And it goes beyond things

(00:47):
like circuits or electricity or mechanical systems or anything like that.
These are more like ideas and observations that underlie technology,
again as identified by Simplicable. So I don't wish to
suggest that these are universal fundamental principles, but rather that

(01:08):
I came across this list on Simplicable and I thought it
was interesting, so I thought I would talk about some
of them today. So first up, we've got a concept
that ties in with Moore's law to an extent. You
could argue that Moore's law is another fundamental principle. In fact,
it's one of the ones that's listed by Simplicable. I

(01:30):
think that's going a little far to call it a
fundamental principle. It's certainly an observation that people have attempted
to push beyond some pretty hard boundaries. But as a refresher,
just so that you understand what I'm talking about here,
Moore's law stems from an observation that Gordon Moore made

(01:53):
back in the middle of the last century, actually, when
he saw that economic and technological factors were contributing
to a system where there was an economic demand for
progressively more powerful microprocessors and later on computer chips, and
that this demand in turn created an incentive for fabrication

(02:16):
companies to come up with ways to meet that demand. Like,
it wasn't just magical that the fabrication companies were able
to make more powerful processors. They saw the demand there
and then they said, okay, well, how can we make
something that meets that demand. It's not just magically happening
on its own. So every so often we would see
companies find new ways to fit more discrete components onto

(02:40):
a square inch of silicon wafer than they were able
to a couple of years previously. So generally speaking, Moore's
law boiled down to this: every eighteen to twenty-four months,
fabrication companies would find a way to double the number
of transistors that they could fit onto

(03:01):
a square inch of silicon wafer. Now,
over time, this concept morphed into a similar but different one.
We still call it Moore's law, but now we don't
necessarily say that the processors of today have twice as
many transistors on them as the processors from two years ago.
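As a rough illustration of that doubling arithmetic, here's a short Python sketch. The starting count (roughly the Intel 4004's 2,300 transistors) and the two-year doubling period are illustrative assumptions for the sake of the example, not figures from the episode:

```python
def transistors_after(years, start_count=2_300, period_years=2.0):
    """Project a transistor count under a naive Moore's law doubling.

    start_count and period_years are illustrative assumptions:
    roughly the Intel 4004's ~2,300 transistors and a two-year
    doubling period.
    """
    doublings = years / period_years
    return start_count * 2 ** doublings

# Ten doubling periods (20 years) multiply the count by 2**10 = 1024.
print(round(transistors_after(20)))  # 2300 * 1024 = 2355200
```

The point of the sketch is just how quickly repeated doubling compounds: ten periods is already a thousandfold increase.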

(03:22):
For example, now we say every two years or so
the chips that fabrication companies are producing are twice as
powerful as the ones from two years earlier. So, in
other words, if you bought a high end processor in
twenty ten, and then you bought another high end processor
in twenty twelve, the twenty twelve one should be twice

(03:43):
as powerful as the twenty ten one. And then in
twenty fourteen, if you bought one, that chip should be
twice as powerful as the one that you had bought
back in twenty twelve. Now we also tend to get
a little loosey goosey with a whole concept of what
power means in this context. Often this comes down to

(04:04):
processing speed, how fast can the chip process information, how
fast can it complete executions of operations, But powerful can
also include some other concepts like bitwidth, and you can
think of bitwidth as how large of a chunk of
data can this processor handle. So if the bitwidth is greater,

(04:29):
the processor can handle larger chunks of data. To me,
one interesting thing about Moore's law is how we choose
to interpret it so that it will remain relevant over
the years. Because pretty much everyone has acknowledged we're running
up against some obstacles that are just impossible to get
around based on the technology we have developed so far.
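To make the bitwidth idea concrete, here's a tiny Python sketch showing how the largest value a single n-bit chunk can hold grows with the width. The specific widths are just the common 8-, 32-, and 64-bit cases, picked for illustration:

```python
# Largest unsigned value representable in a single chunk of n bits.
def max_unsigned(bits):
    return 2 ** bits - 1

print(max_unsigned(8))   # 255
print(max_unsigned(32))  # 4294967295
print(max_unsigned(64))  # 18446744073709551615
```

So a wider bitwidth doesn't just add a little capacity per chunk; each extra bit doubles the range of values the processor can handle in one go.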

(04:50):
Let me explain that a little bit more. So, as
you reduce the size of these components so that you
can fit more of them onto a silicon wafer, you
start getting down to a size where you're encountering issues
with quantum mechanics, and these quantum mechanics issues are not

(05:10):
in alignment with the way that we want our electronics
to work. Essentially, we get to a point where the
pathways we have created for electrons are so small that
the electrons have the potential to exist in a different
part of the pathway than where we want them to be. So,
if you think of transistor gates as actually being physical gates,

(05:33):
as in, you know, when it's closed, you're not
allowed to go through. Once you get to a certain size,
there's a potential for an electron to be on the
other side of the gate without having to actually physically
pass through the gate. It just means that there's the
possibility that the electron could be on the other side.
And as long as there's a possibility, it means that
sometimes that's what happens. And if you can't control where

(05:56):
the electrons are, then the gates mean nothing, and it
means that you're going to get computational errors. So there
are fundamental physical limitations to how small we can make
things if they're working the way that we have designed
microprocessors for, you know, the better part of a century.
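As a hedged toy model of that tunneling problem: in quantum mechanics, the probability of an electron leaking through a barrier falls off roughly exponentially with the barrier's width, so shrinking a gate even modestly can raise leakage by orders of magnitude. The decay constant in the sketch below is made up for illustration and is not a real device parameter:

```python
import math

def tunneling_probability(width_nm, kappa_per_nm=10.0):
    """Toy WKB-style estimate: transmission falls off as exp(-2*kappa*width).

    kappa_per_nm is an illustrative decay constant chosen for this
    sketch, not a measured value for any real transistor.
    """
    return math.exp(-2 * kappa_per_nm * width_nm)

# Each halving of the barrier width raises leakage by many orders of magnitude.
for w in (2.0, 1.0, 0.5):
    print(f"{w} nm barrier -> leakage probability ~ {tunneling_probability(w):.2e}")
```

The exact numbers don't matter; the exponential sensitivity is the point. That's why, below a certain feature size, electrons start showing up on the wrong side of a closed gate often enough to cause computational errors.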
So that means that we have to come up with

(06:18):
other means, other ways to try and eke out more
performance in these chips. If we want Moore's law to
remain relevant, and that's only if we say, all right,
we want Moore's law to remain relevant, but not the
original Moore's law. We're talking about our reinterpretation of that

(06:38):
observation that was made, you know, in the middle of
the nineteen hundreds. So that is kind of where we
are with Moore's law. It's still sort of relevant, but
mostly because we're willing to bend on how we interpret
it and how we define it. So no one wants

(06:59):
to reach a point where they have to admit that
the rate of improvement is slowing down. No one wants
to get to that point. So we'll just keep on
moving things around, like doing the balls and cups routine,
until we can no longer fool ourselves into thinking that
we're able to keep this rate of change up. And
this is what brings us up to a different fundamental principle,

(07:22):
as defined by Simplicable, called accelerated change or accelerating change,
I should say, and that's just what it sounds like,
because we all realize that over time, stuff changes. Accelerating
change means that the rate of change itself is changing.
It's not just that things are changing day to day.

(07:43):
It's the concept that they're changing more quickly than the
rate of change was before, and that this is driven
by technology and how we use that technology. So with
this kind of concept, we would be able to look
at any ten year span. So let's say we looked
at nineteen fourteen to nineteen twenty three, and then let's

(08:03):
say we compared that with a more recent ten year span,
so let's say twenty fourteen to twenty twenty three. Then
we ask the question: in which of those ten-year spans,
in which of those decades, would we see more change?
And if we want to get specific, where did we
see more technologically driven or oriented change. Since the nineteen

(08:27):
fourteen to nineteen twenty three decade predates the invention of
the transistor, it's pretty easy for us to say, well,
the most recent decade has seen way more innovation driven
by technology. It's not even comparable. And that's true, but
we should also remember once again, these advancements in technology,
they're not happening in a vacuum. It's not like technology,

(08:50):
if left to itself, will evolve and improve over time.
Stuff in the world shapes our approach to technology, and
then our technology shapes the stuff in the world and
our interaction with it. So for nineteen fourteen, you would say, well,
what kind of factors were influencing technological development? Well,
there was a big one. It was the Great War,

(09:13):
which was later called World War One. They didn't call
it that at the beginning because there were still optimists
back then. They weren't expecting there to be a second one.
In fact, they called it the War to End all wars.
It turns out that was wrong. Anyway, this war spurred
a ton of innovation as various countries tried to find

(09:33):
more efficient ways to kill the enemy while sustaining fewer
losses of their own, or at the very least just
killing more of them than they managed to kill of us.
So we got lots of stuff like machine guns, motorized
military vehicles, airplanes were used in warfare. Chemical warfare became
a thing. We also got some other stuff that wasn't

(09:55):
expressly made to kill people, like gas masks, which were
meant to save people, and field radios. Anyway, my point
is that the concept of accelerating change isn't something that
we can just isolate from the rest of the world.
It's also something that can lead people to make predictions
that I think may at best be a long shot.

(10:16):
It's what has fueled a ton of discussion around concepts
like the singularity. This is this idea that we reach
a point where change is happening so quickly and it's
so constant that there is no way that you can
meaningfully talk about the way things are, because by the
time you're done making a sentence, they ain't that way

(10:37):
no more. They've changed already. So this idea, which relates
to other concepts like humans becoming something more than human,
the so called transhuman approach. Transhuman in this case meaning
something beyond humanity. It doesn't have anything to do with
something like the transgender community. That's a different thing,

(11:01):
very important thing, but different. This concept of transhumanism is
about no longer being strictly human the way we would
define it today, and that could include lots of things.
It usually includes some form of augmented intelligence. Either we
figured out a way to boost our own biological intelligence,
or we've incorporated technology into us in some way that

(11:24):
that then boosts our intelligence. Sometimes we also have an
idea of digital immortality thrown in there. Mostly it feels
a lot like stuff that's in the realm of science
fiction rather than stuff that's in reality. But I would
argue that it's based on this perception that change is
happening faster with every passing year. And if that's your

(11:45):
basic argument, well, then it stands to reason that at
some point the rate of change will be such that
there will be no meaningful way to quantify it. But
like Moore's law, I would say this belief is based
off things that may not be universally or permanently true.
I think one of the mistakes that some futurists make

(12:06):
is that they equate all change with what we see
in things like Moore's law, because Moore's law describes exponential
rates of improvement, right, at least with processor complexity,
which then we redefined as processor power or performance. So
that same rate of change would then apply to everything

(12:27):
with the way some futurists frame stuff out. But that's
just not true. We don't see exponential change in everything
that's related to technology. Some things actually change at an
even faster rate; they might start to approach a hyperbolic
rate of change. But some things experience much slower change.
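To sketch the difference between those regimes, here's a small Python comparison of linear, exponential, and hyperbolic growth over the same time span. All of the starting values and rates are illustrative, not drawn from the episode or from Simplicable:

```python
# Illustrative comparison of three growth regimes over the same time steps.
def linear(t):
    return 1 + t            # steady, additive change

def exponential(t):
    return 2 ** t           # Moore's-law-style doubling each step

def hyperbolic(t):
    return 1 / (12 - t)     # blows up as t approaches 12 (a "singularity")

for t in (0, 5, 10):
    print(t, linear(t), exponential(t), round(hyperbolic(t), 3))
```

Exponential growth is fast but always finite; the hyperbolic curve actually diverges at a specific point in time, which is the shape singularity arguments implicitly lean on. Lumping every technology into one of these curves is exactly the mistake described above.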
They don't advance that quickly, like battery technology does not

(12:48):
advance nearly as quickly as microprocessor technology. So it appears
to me that the singularity is going to require a
lot more than just super fast processors, unless we decide
to redefine what the singularity is, and then we could
say we're already in it because we've redefined it, which
we kind of have done with Moore's law. I guess
we could do that, but it's not very satisfying. So

(13:11):
accelerating change is one of those fundamental ideas in tech
that may or may not apply depending on the tech
type we're talking about. I would say that for some technologies,
like autonomous vehicles, for example, we have not seen accelerating change. Instead,
we saw an initial burst of innovation, incredible innovation, and

(13:32):
for a few years that continued and it did look
like it was accelerating change. But now we're seeing engineers
having to hone in on specific limitations and problems and
challenges within autonomous vehicle technology, and these require careful solutions,
and some problems might be accounted for, but lots more

(13:53):
have been discovered or are not accounted for, and we
aren't seeing accelerating change in that field anymore. Instead, we're
seeing iterative changes, which is still good, like we're still
seeing advancements, but it's not going at this, you know,
breakneck speed that accelerating change suggests. Okay, we've got a lot more principles
to cover. I think it's time for us to take

(14:15):
a quick break. All right, we're gonna move on to
a different fundamental principle that Simplicable has identified as being,
you know, fundamental to technology. And so let's talk about complexity.

(14:37):
You can think of complexity falling into one of two categories.
Simplicable calls it essential complexity and then accidental complexity, or
what I would refer to as non essential complexity. Now,
if something has essential complexity, that doesn't necessarily mean there's
no way to simplify the technology. It might be possible

(14:58):
to simplify it. But its essential complexity means that
if we were to try and simplify this technology, to
streamline it or remove features or anything like that, then
in the process, we would also reduce the usability or
value of that technology. So, in other words, we could

(15:18):
make it less complex, but we would also make it
less good, or less desirable or less useful. This can
fall into subjective perceptions. It's not just an objective truth.
So for example, let's say you've got a team and
they're developing a smartphone, and their initial lineup of features

(15:41):
are all the typical ones you would find in a smartphone.
It'll be able to make calls, it'll be able to
send and receive emails and text messages. It'll be able
to take photos, all that kind of stuff. The complexity
necessitates certain design decisions, right, Like, in order to achieve
the things you have listed as what the smartphone

(16:02):
has to do, you've got to make certain design decisions
to support it. This all is just logical, right? And
that could range from everything from the size of the
battery you're gonna need if you want to have a
useful battery life so it lasts at least a day.
It may involve things like screen resolution. You want to
make sure that people can see whatever it is that's

(16:22):
being displayed there, processor power to support all these
different functions. Like, all these things become necessary considerations in
order to provide a good experience. But let's say there's
someone on your team who just doesn't see the value
in having a camera in a smartphone. Maybe this person
never uses the camera on their smartphone at all. Maybe

(16:43):
they just don't take pictures. They don't see it as
being fundamentally important to a smartphone's design. They could argue
that you could drastically simplify the design of the smartphone
if you just ditched the camera, right? That means you
get rid of the lenses, you get rid of all
the sensors, you get rid of all the stuff that

(17:03):
otherwise would have to be in the smartphone for the
camera to work. And then you could either make the
smartphone smaller, you know, create a smaller form factor because
it no longer has to house those components, or dedicate
that space for something else. Or even this could precipitate
into changes for things like the battery life and the
processor because it no longer would need to support the

(17:25):
functions of a camera. Like, these are sort of a
cascading list of decisions that this could affect. However, someone
else might say, well, this smartphone has no camera. It's
not a good smartphone. I would argue it's not even
a smartphone at all because there's no camera in it,
And to them, the reduction in complexity has resulted in

(17:46):
a reduction of usability or value. Again, there's no objective
truth here. It's all dependent upon how you feel about it.
But there is a general understanding that technologies can only
be simplified and made more efficient and less clunky up
to a point, and once you get beyond that point,
you start to lose whatever it was that makes the

(18:06):
technology useful in the first place. Now, if we go
to accidental or non essential complexity, we've got the opposite.
This describes a technology's tendency to have functions or features
or elements to it that make it more complex but
add no extra value to the technology itself, which means

(18:29):
if you get rid of it, not only do you
not eliminate something valuable, you might actually increase the value
of the technology because you've gotten rid of some clutter.
Now this could be software, it could be hardware. Like
it doesn't have to be a physical technology. It could
be something like a bug in software that when you
eliminate it, not only does it make it less complex,

(18:50):
but now the software works more effectively. So you've increased
the value of the software. Even if you were to
just argue that eliminating the bug reduces the size, like
the data size of the software itself, you have increased
its value, right, because size is not an infinite
resource that we have. Like, your machines that run software

(19:12):
have a limitation on how much they can handle. And
if you start to reduce the demands of software by
eliminating bugs, that increases that software's value, maybe not monetarily,
but certainly from a process standpoint. So, with essential complexity,
reducing complexity reduces the tech's value. With accidental complexity, reducing

(19:34):
complexity increases the technology's value. Now there's a related tech
issue I would like to kind of dip my toe
in and mention here. That's called feature creep. Now, this
is when a team is building out a technology and
then they begin to add in features that were not
included in the original plan for the tech. You know, oh,

(19:56):
what if we were to add neon lights, or what
about we put speakers on the outside of the car.
Feature creep happens a lot in tech space. It can
again happen in hardware and in software. It can also
get to a point where it will doom a project
or at the very least delay it for ages and potentially

(20:17):
mean that you end up with something that is less
valuable because of all that feature creep, which ties into
another fundamental principle that Simplicable identifies, where they say worse
is better. By that, they mean not that if a
technology is worse, it's better than a better technology. But
sometimes a technology that has fewer functions but works really

(20:41):
well is going to be viewed as more valuable than
a device that has way more functions but it's harder
to use, Which makes sense, right. If you make something
that is easy to use and it does what it's
supposed to do, then people are going to gravitate toward
that more than they will technologies that might have
a lot more bells and whistles but don't do

(21:02):
anything particularly well. That doesn't typically stand the test of
time well. Feature creep plays into both of these things here,
both the complexity issue and the worse is better issue. Now,
to me, the definitive example of feature creep, the one
I would use if I were doing my TED talk
on what feature creep is and why it's bad, would

(21:24):
be the game Duke Nukem Forever. It was an infamous
game while it was in development, and once it was published,
it was no longer infamous. It just kind of became
a bit of a punching bag, or sometimes just completely dismissed.
So if you're not familiar with the Duke Nukem franchise:
It follows this overly macho male hero who's based a

(21:48):
lot on characters that Arnold Schwarzenegger has played or Bruce Campbell.
In fact, it lifts a lot of lines straight from
the Evil Dead movies, the Bruce Campbell Evil Dead movies, when
they were really campy and stuff. And it's a first
person shooter game that a company called three D Realms
originally announced in nineteen ninety seven. Now, keep in mind,

(22:10):
when studios announce a game, it typically means that they've
already been working on it for a while, so they
announced it in nineteen ninety seven, but the game wouldn't
actually come out until two thousand and eleven. Now, video
games can take a long time in development, but fourteen
years is atypical, although Star Citizen might catch up in

(22:34):
a couple of years if they don't have a full
game released before then. And one of the many problems
that was causing a lot of these delays was feature creep. See,
while the team at three D Realms was working on
the game, other companies were releasing updated game engines that
supported more features, so you could build on the game

(22:57):
engine you were depending upon already and continue building out
your game. But the fear was that when Duke Nukem
would release, it would look dated against games that were
created on the more recent game engines. So you get
the head of the project who suddenly demands that the
team swaps to a different game engine, a more recent one,

(23:21):
and that necessitates starting from scratch for most aspects of
the game, like almost all the assets needed to be
redone in order to work with this new game engine,
and it sets the entire development process back to the beginning.
Then you would have times where the leader of the
project would want capabilities that were starting to show up
in other games to then be incorporated into Duke Nukem Forever,

(23:43):
when that had not been a consideration earlier in development.
So as a result, the game was constantly going through development,
then revisions, and sometimes complete restarts, and by the end
of it all, I think it was pretty safe to
argue that the game was filled with non essential complexity.
And of course, by the time it finally published under

(24:06):
a different company, at that point, because it had changed hands,
it was no longer really a relevant game to the
sensibilities of most gamers. You know, even if the
gameplay had been stellar and free of things like bugs
and other issues. The tone of the game no longer
fit what people wanted anymore because more than ten years

(24:30):
had gone by since the game had been announced, and
people's tastes in gameplay and game tone had changed in
that time. So feature creep really was a huge problem
for that game. Well, here's another concept that Simplicable includes
as a fundamental principle of technology, the creativity of constraints.

(24:54):
This one really speaks to me. Basically, this idea says
it is much easier to be creative and innovative when
you're working within some form of constraint, because constraints drive decisions,
and if you are without constraint, you have no limiting
factors that make it necessary to decide to go one

(25:16):
way versus another. Any decision you make appears to be
as valid and viable as any other decision you could make,
because there's nothing pushing back against you. And that can
mean you just end up spinning your wheels a lot
and you don't really make any progress. Constraints can be
pretty much anything. Budget is probably the big one, right.

(25:39):
Usually you're working within some sort of budget, and that
means at least in theory, you can't make decisions that
would require more money than the budget allows. Obviously, lots
of projects go over budget, but the budget is meant
to serve as that constraint. A deadline is another constraint
you might face. You know, you have to finish your

(26:00):
task by some appointed time. But there can be lots
of other types of constraints, even in tech. Some of
them could be technological constraints, like it's just physically not
possible for you to go beyond a certain level of
performance because of the limitations of technology. There can be
material constraints. Maybe you know you can't go further in

(26:22):
any particular direction because the materials that you can use
have their own physical limitations, and if you were to
try and push beyond that, you would break the device
or whatever it might be. You can have social constraints,
maybe there are things that are not socially acceptable that
you back away from as you're making decisions about, you know,

(26:44):
this technology. There could be legal constraints. Maybe there are
regulations or laws that mean that you can't do certain things,
and that means you have to come up with creative
solutions to get your technology to work properly while still
being within that legal framework. So with constraints, you end
up saying, I need to do X, that's my goal.

(27:05):
But meanwhile, A, B, and C are all in my way,
So how can I achieve my goal? And you start
problem solving, and the problem solving shapes not just how
you get around those challenges. That problem solving actually shapes
the end product itself. The thing you make in part
is a reflection of the constraints that you encountered while

(27:27):
you were making it. But if you don't have constraints,
then you don't have those guidelines, right, You don't have
anything to push against, no hard edges that you're gonna
bump up against and have to work around, and honestly,
that can stifle creativity. That particular concept really rings true
to me because I've encountered it myself when I've played

(27:48):
certain types of video games, right Like, there are big,
open world video games that are exploration based and there's
very little direction, and that can feel too vast. I
feel like I'm not really doing anything. And then there
are smaller, more modest games. They might be much more
directed or have very defined goals that you need to achieve,

(28:10):
and those really resonate with me because I feel like
I'm making progress as I play it. Now, this is
not to say the big open world games with very
little direction are bad. They're not bad, and the people
who love them are not bad people. They're not wrong
for loving them. It's just something that fundamentally doesn't work
with the way my brain works. And so for me,

(28:31):
those constraints really are important because they provide structure, and
with that structure, I can then feel if I'm doing
well or not well. Without structure, I don't know that,
and then I start tumbling into an existential crisis. And y'all,
you've heard enough episodes of this show. You know nobody
wants that. Okay, we're going to come back in just

(28:51):
a moment and talk about one other principle that Simplicable
identifies, keeping in mind they have others beyond the ones I've
just mentioned, and we'll finish off this episode from there.
But first, let's take another quick break to thank our sponsors. Now,

(29:15):
as I mentioned at the top of this episode, I
was just kind of surfing around the web, which dates
me, right, using terminology like that. That's fine. I'm an
old man, I get it, and I found this Simplicable
blog which I had never seen before, and I started
reading this article about the different fundamental principles of technology,
and this one also really stood out to me because

(29:36):
I think it's one that we can easily contextualize right
now based upon things that are playing out at this
very moment. And that is the principle of cultural lag.
And again, these are things that surround technology
and technological development, right? We're not talking about the actual

(29:57):
things that make the technology work. So what is cultural lag? Well,
it's pretty much what it sounds like. It's essentially when
technology outpaces society in some way. Technology or the things
that the technology introduces, like the possibilities the technology creates
are ones that society lacks the facility to handle. So

(30:18):
I would argue, right now, we're really seeing this with
generative AI and then artificial intelligence in general, because keep
in mind, generative AI is one application of artificial intelligence.
Generative AI is a type of AI, but not all
AI is generative AI. Right, all cats are mammals, but

(30:39):
not all mammals are cats. So generative AI has some
potential applications that society is just not prepared to handle.
Everything from copyright infringement, to plagiarism, to the capacity to
generate and disseminate misinformation. All of these are issues with

(31:00):
generative AI that we just don't have the ability to handle,
or even being able to differentiate between something that was
created by generative AI versus something that was created by
a person. We're not really able to handle that either.
And again, artificial intelligence in general also falls into the
category of a technology that we have you know, cultural

(31:22):
lag associated with it. So often I talk about this
within the context of legislation as various politicians and leaders
around the world struggle with the challenges created by technological
innovation and how can they take advantage of that innovation,
how can they try not to stifle innovation, but at

(31:45):
the same time, how can they protect the people and
institutions of a country from harm based upon what this
technological innovation can do. And we're even still seeing it
here in the United States with regard to like base
principles of what the Internet in general and the Web
in particular allow, right, I mean, that's why we get

(32:06):
arguments about stuff like Section two thirty here in the
United States. There's a cultural lag that is significant because,
keep in mind, Section two thirty is
what protects online platforms from being held liable for the
content that users post to those platforms. Right, Like, if
you were solely responsible for the content that goes up

(32:30):
on a website, all the content that goes up on
the website comes from you, and you start posting stuff
that's illegal, well, logic dictates you should be able to
be held accountable for posting illegal material. You posted it,
you created it, you posted it, you're the one responsible.
But if instead you create a website that allows anyone

(32:50):
to post there, and some other person you've never heard of,
you don't know them, you've never met them, they come
to your website and they post something illegal, section two
thirty would protect you from being held accountable for the
thing that this other person did. You provided the space,
but you didn't create the content. And section two thirty
itself has got some limitations. You're supposed to at least

(33:13):
put forth reasonable effort to remove illegal material, or else
you can lose the protection of Section two thirty. Anyway,
all of this was worked out as part of the
Communications Decency Act of nineteen ninety six, the year before
three D Realms announced the development of Duke Nukem Forever.

(33:34):
So it was nineteen ninety six when Section two thirty
was first written into law, and we're still struggling with
it today. You still have people on either side of
the political ideologies who want to either eliminate Section two
thirty or to amend it significantly for very different ideological reasons.

(33:57):
But you know, there's this agreement on both sides
that it's not what they want, and that shows a
cultural lag that's really significant. I mean, we tried to
acknowledge it back in ninety six, and more than
two and a half decades later,
we're still trying to grapple with it. That's a significant

(34:21):
cultural lag. Now, there are several other topics that Simplicable
lists as foundational principles of technology. I'm not sure that
I agree with that classification in every case. I think
they are things that relate to technology and are to
varying degrees important. I don't know if I would call
them fundamental principles. However, I think all of the ideas

(34:45):
are well worth discussing, and I will likely do another
episode on this in the future because I find it
really interesting to think about these concepts and observations and
how they interact with our approach to technology. And
there are tons more that they list, so we'll
get to those in another episode. If you'd like to

(35:08):
read up on them, by the way, and just to
see what Simplicable lists as fundamental principles of technology, as
well as all the other stuff that's on Simplicable, the URL
is just simplicable dot com. That's s I M P
L I C A B L E dot com. Now

(35:28):
full disclosure, I do not have any connection to that site.
I didn't know it existed before today. I don't know
anyone who writes for it. I just stumbled across it
by chance and thought that the pages about, you know,
these foundational principles of technology were really interesting and there's
tons more on the site as well, so if you're
so inclined, you should check it out. I'm very thankful

(35:49):
that I came across them because it gave me a
lot to think about. And as I said, I don't
agree with all the conclusions made by Simplicable, but I
think it ends up being a matter of fine details that
are arguably subjective. So it could just be because my

(36:09):
point of view is slightly different, but it ultimately may
mean that we're both arguing the same thing, we're just
doing it in slightly different terms. So again, no slight to Simplicable.
I think the goal is really an ideal one. They
want to produce informative, straightforward and objective information to help

(36:31):
educate people, which I think is a great thing to do.
Certainly I strive to do some of those things, but
not all of them, at least not all the time.
All Right, Well, that's it for this episode. Like I said,
I'll do another one coming up. Also, I've got a
lot of travel coming up in the near future where
I'll be recording remotely. Got a really exciting opportunity to

(36:54):
record an interview in a studio that's in Las Vegas,
which I will be doing pretty soon, so be on
the lookout or listen out for those. We've got some
more episodes of Smart Talks with IBM that are going
to publish in the near future in this feed, and
also an episode of The Restless Ones, the show that

(37:16):
I host where I talk with various chief officers usually
CIOs or CTOs of companies to kind of get insight
into their leadership process and their approach to technology. We'll
have one of those episodes published in this feed in
the not too distant future as well, says you can
hear my other work besides the stuff that I do

(37:38):
here for tech Stuff, I'm still the same doofus no
matter where you put me, so no fear there,
but I wanted to give you the heads up on that.
And yeah, at least the back
half of this month has turned into something that's far
busier than I had originally anticipated when the month started,

(37:59):
so I wanted to just kind of give a shout
out and make you all aware of what was going on.
All right, that's it. I'm getting out of here. I
hope you are all well, and I'll talk to you
again really soon. Tech Stuff is an iHeartRadio production. For

(38:20):
more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows.