Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Get in touch with technology with TechStuff from howstuffworks.com. Hey there, and welcome to TechStuff.
I am your host, Jonathan Strickland, and today I want
to talk about war. War never changes. But I'm not
talking about Fallout, even though I just recently started to
(00:25):
play Fallout 4 again because I never finished it. No,
I'm gonna talk about a war going on in artificial intelligence.
And it's not Skynet, it's not the Terminator, it's
nothing like that. No, it's really about who gets to
be your personal assistant. That's the best kind of war
because we win no matter who wins, right, we get
(00:45):
the best out of whichever combatant enters the AI personal assistant Thunderdome. I'm throwing out a lot of references to
sci fi here. I'm gonna cut that out. So we
want to talk today, we being me and you (you can talk back, I just won't be able to hear you), about these personal digital assistants, but not the
(01:07):
PDAs of the past. We want to talk about
the Siris and the Cortanas and the Google Assistants
and things of that nature. And I want to specifically
look into how are these going to be incorporated into
our lives in the future, and what are some of
the concerns we have and what differentiates all these products
that have been sort of coming into their own over
(01:28):
the past few years. So to start with, you might say, well,
you know which of these assistants came first, And arguably
you could say Google actually beat everyone to the punch
by a couple of months, because on June fourteenth, two thousand eleven,
Google announced at an inside Google Search event that it
(01:49):
was going to roll out voice search on Google dot Com.
And the project name at Google was uh Majel or Magel,
depending upon how you want to pronounce it, but Majel
would be the way her name was actually pronounced. Named
after Majel Barrett, who was the wife of Gene Roddenberry,
the creator of Star Trek. Majel Barrett actually played the
(02:10):
voice of the computer system, particularly on Star Trek the
next generation. Whenever you heard the computer speak, that was
Majel Barrett's voice. She also played Deanna Troi's mother, Lwaxana Troi. Anyway, they named it after her internally. It actually doesn't have a name name, which kind of
(02:31):
sets it apart from some of the competitors. So the
Voice Command project was a tool from Google Labs, so
their research and development arm, and on March fourteen this particular feature was rolled into the Google Now product, and it was part of the Android four point one release, the Jelly Bean release. Now, at that point,
(02:54):
the speech recognition commands had evolved a little bit. It
had gone beyond some of the initial old stuff where
you could just ask Google to search something for you.
This was also a feature that was worked into Google Glass,
so if you had a pair of Google Glass, you
know that the voice command would always start with the phrase "okay," followed by "Google." I'm not gonna say it together,
(03:17):
just in case some of you are listening to your
devices or listening with a device nearby and it's on
its home screen. I don't want to activate it for
whatever reason, but you can use that phrase that would
end up alerting the virtual assistant that you wanted something,
and then you would speak whatever it was you wanted.
And over time, functionality increased, so it went beyond just
(03:37):
searches and into more interactive features like with an Android
phone you could set an alarm, or you could set
a reminder or review your calendar and more. As time went on, it evolved into something a little bit more robust than that; you can even start to interact with some third party stuff as well.
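To make that wake-phrase-then-command flow a little more concrete, here's a minimal, purely hypothetical sketch in Python. It is not Google's implementation; the wake phrase, handler names, and keyword matching are all invented for illustration, and a real assistant would use trained acoustic models and an ML-based intent classifier rather than simple keyword checks.

```python
from typing import Optional

# Hypothetical sketch of a wake-phrase gate plus naive keyword-based intent
# dispatch. Every name here is invented; real assistants use trained models.

WAKE_PHRASE = "ok assistant"  # stand-in for a real wake phrase


def handle_set_alarm(command: str) -> str:
    return f"Alarm request noted: '{command}'"


def handle_set_reminder(command: str) -> str:
    return f"Reminder request noted: '{command}'"


def handle_search(command: str) -> str:
    return f"Searching the web for: '{command}'"


# Very naive keyword-to-handler routing, standing in for intent classification.
INTENT_HANDLERS = [
    ("alarm", handle_set_alarm),
    ("remind", handle_set_reminder),
]


def process_utterance(transcribed_speech: str) -> Optional[str]:
    """Ignore speech unless it starts with the wake phrase, then route it."""
    text = transcribed_speech.lower().strip()
    if not text.startswith(WAKE_PHRASE):
        return None  # no wake phrase: the device does nothing
    command = text[len(WAKE_PHRASE):].strip()
    for keyword, handler in INTENT_HANDLERS:
        if keyword in command:
            return handler(command)
    return handle_search(command)  # fall back to a plain search


if __name__ == "__main__":
    print(process_utterance("ok assistant set an alarm for 7 am"))
    print(process_utterance("ok assistant remind me to call home"))
    print(process_utterance("what's the weather"))  # ignored: no wake phrase
```

The point is just the shape of the flow: nothing happens until the wake phrase is heard, and everything after it gets routed to some handler.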
(03:58):
And at Google I/O two thousand sixteen, it became
part of Google Assistant. Now Google Assistant is really the
intelligent personal assistant product from Google. The earlier versions you
could think of as sort of a rudimentary form, or perhaps a prototype, or maybe just like these are
(04:20):
features that would eventually be rolled all into one finished product,
being Google Assistant. So by that argument, if you say
Google Assistant, you know, if you mark the Google I/O event as its premiere, then it's not the oldest, but it dates back to June fourteenth, two thousand eleven, when
(04:40):
Google announced this initial voice search ability. So that
same year, in October, on October fourteenth, in fact, Apple
introduced Siri. And I'm sure you all know what Siri is,
but just in case you don't, it's billed as an
intelligent personal assistant and it was introduced as a feature
(05:01):
with the iPhone 4S, and it's been part of
the iPhone iOS ecosystem ever since. And it uses speech
recognition to interpret user requests and responds with what is
hopefully an appropriate action. According to Siri's creators, Apple actually scaled
back what Siri was supposed to be able to do.
They said that they had arranged for Siri to work
(05:23):
with about forty to forty five different apps that Apple had,
and then the company scaled that back significantly. So the Siri creators essentially sold the product to Apple. Then they
went on to create a different intelligent assistant called VIV
V-I-V, and Viv is currently unaffiliated with any
(05:45):
other big names, but it has received funding from some
very wealthy folks in the tech sphere, like Mark Zuckerberg,
for example. And Viv, the creators of Viv say, is what Siri was supposed to be from the get go, and essentially they're saying that Siri was kind of hampered, hamstrung if you will, by Apple. And we'll get into more about why that
(06:06):
may be in a little bit. So Siri actually came second, after Google had announced their voice search. Keeping in mind that Siri was a different presentation, you could argue that Siri was really more of the first assistant, and that the Google approach eventually evolved into an assistant but wasn't really at that same level back then. Moving
(06:31):
forward in spring two thousand and fourteen, that's when Microsoft
got into the game by unveiling Cortana, which is their
intelligent assistant for the Windows Phone platform. And in twenty fifteen,
Microsoft included Cortana with Windows ten. So if you have
a Windows ten machine, Cortana is part of that, and
if you have a microphone you can actually give voice
commands to Cortana. You can also interact via text.
(06:55):
Cortana is named after the AI in the Halo franchise and is voiced by the same actress who provided the voice of Cortana in the games. So you can ask fun things about Master Chief and she always has an interesting answer for those. All of these, by the way,
tend to have some sort of fun element to them,
where the developers clearly thought of ridiculous things you could
(07:18):
ask the digital assistants and built-in responses that were humorous.
For example, the big one that everyone talked about with
Siri was "where can I hide a body?" And Siri would come back with nearby quarries and cave systems and things of that nature. Now, in November two thousand fourteen,
we get our final big name in this battle, Amazon.
(07:43):
That's when Amazon unveiled the Echo, which is that sort
of standalone speaker system that has the intelligent assistant
Alexa incorporated into it. And like the other ones I've
mentioned so far, Alexa can follow your voice commands and
interact with the Internet as well as with other Internet
connected devices. That list of Internet connected devices Alexa can
(08:04):
work with is growing day by day, and Amazon is
actually trying to build out the capabilities further and as
such has hired a team to create a guide on
how to develop for Alexa. I'm going to interview one
of the developers on that team in a later episode.
We actually have that scheduled for later this summer, and
we'll talk more about what it's like to develop for
(08:26):
this platform and the potential of using such a platform
in new and creative ways. So we have four
really big players in the space. We've got Apple and
Google and Microsoft and Amazon already vying to be the
big digital assistant provider. Then we have the other names,
like we've got the team behind Viv and other apps
(08:49):
as well that are in this space that are trying
to kind of become the voice that you interact with
so that it can do all the things you need it to do in as seamless a way as possible.
So one of the things we need to also look
at is how does this differentiate? How do these different players?
How are they different from one another? If they're exactly
(09:12):
the same as each other, then it really doesn't matter
which one you pick, right? I mean, it kind of just depends on which platform you have available. If you have
all iOS devices, then Siri is pretty much gonna be
the one you're gonna depend upon the most most likely
at any rate. So Cortana, Siri, and Google Assistant are
all part of existing platforms like smartphones and computers, So
(09:36):
they are incorporated into things that we already have. You know,
you probably already have a smartphone or a computer or both,
and so it makes sense that you would incorporate your
digital assistant into that. You don't have to buy anything else.
It's right there, and you can incorporate that into other
(09:56):
systems that are connected to a personal network or a
home network. Then you've got Alexa, which debuted on
a standalone device called the Echo, which again is just
this sort of intelligent speaker, a smart speaker with a
built in microphone. Google Assistant is actually following suit with
that, with Google Home, which was announced at Google I/O two thousand sixteen.
(10:20):
And Google Home is also a smart speaker with a
microphone that's gonna be available sometime later this year, and
as of the recording of this podcast, I don't have
a date or a price on that, so it's hard
to say whether or not it will be competitively priced
against the Amazon Echo. It does look like it's going to be a particularly powerful version of this personal assistant. And
(10:42):
there are also rumors emerging that Apple is also working
on Siri hardware, so it would be another standalone speaker microphone system of some sort, and that Apple's Siri platform
would exist on that. Now, as of the recording of
this podcast, we don't have confirmation on that, so there's
no timetable associated with such a thing or a price.
(11:05):
I would expect that any announcement of such a device
would come at one of Apple's big events, So probably,
if I had to guess, I'd say September is when they would announce it. That's typically when they announce all the big iPhone changes. But that's just a guess. They
might hold a single event for this particular thing, or
(11:28):
they might not hold an event at all; they may just release it. That doesn't seem particularly Apple-like, but it's a possibility. So what's the big deal with this technology in the first place? Why should we care? Well,
field of artificial intelligence. So in one way, it's a
really cool glimpse at the state of
(11:51):
the art in AI specifically, and stuff like speech recognition,
which is pretty hard stuff. I mean, we all have
different ways of pronouncing words, and depending upon your region,
you might have an accent that has a different way of pronouncing words. For example, you know, the Brits
(12:12):
say aluminium and we say aluminum here in the United States.
Then even within a single country, you have different ways
of pronouncing things. And when Google first began translating speech
to text in voice messages, I noticed that it was
having a real hard time interpreting the words of some
(12:32):
of my friends and family. Now, keep in mind, I am in the Southeast United States, in Georgia, and we have a lot of people here with Southern accents.
I have a tiny bit of one. My parents have
a slightly stronger Southern accent. Some of my extended relatives
(12:52):
have an even stronger Southern accent, and so when they would
call and leave a voicemail, Google had to guess at
what they were saying, and was not always correct. I
would have to go and listen to the voicemail because
the transcription would be completely indecipherable. Now, over time this
has improved. The speech recognition software has improved where it
(13:16):
can adjust for things like different accents and the different ways that people speak, using a lot of different algorithms based in machine learning to kind of get a grip on what is being said and even anticipate what the next thing will be in any line of thought. Obviously, for someone like me
(13:40):
who stumbles over words occasionally, that's a real challenge, because sometimes I don't even know what's next to come out of my mouth. But that's really where that power comes in.
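As a toy illustration of that "anticipating the next word" idea, here is a tiny bigram model in Python. It's nothing like the large statistical and neural language models real speech recognizers use; it just shows how counting which word tends to follow which lets software guess a likely next word, which helps it choose between similar-sounding options.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word tends to follow which in some sample
# text, then predict the most common follower. Real recognizers use far
# larger statistical or neural language models; this is only illustrative.

training_text = (
    "set an alarm for seven . set a reminder for noon . "
    "set an alarm for six . what is the weather today ."
)

follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1


def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    if word not in follower_counts:
        return "<unknown>"
    return follower_counts[word].most_common(1)[0][0]


if __name__ == "__main__":
    print(predict_next("set"))    # 'an' (seen twice, versus 'a' once)
    print(predict_next("alarm"))  # 'for'
```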
Now, over time, it's not just speech recognition that has improved; we've also had to look at the problem of natural language. Now,
natural language is how you and I could communicate with
(14:00):
one another. Unless it's like a really formal setting, we
usually are pretty casual with our language, and we can
make use of lots of different linguistic flourishes and tools,
things like figures of speech, metaphors, similes, puns, references, and
lots of other stuff that gives meaning to what we say.
But only if the other person also understands what's going on.
(14:23):
They also have to have that benefit. Otherwise it just
becomes a jumble of nonsense. I'm reminded of a Star
Trek the Next Generation episode where characters only spoke in
allegory, and if you didn't have that cultural background, if you didn't understand the references, you didn't understand
(14:43):
what the communication meant. It's a similar problem with machines. They don't necessarily know what we're saying all
the time. A lot of machines are not very good
at doing this. But natural language familiarity has been a
huge challenge in AI, and we're getting better at overcoming that challenge. So at that same I/O event where
(15:05):
Google announced Google Home, they demonstrated that you could start
a conversation with your personal assistant asking something fairly specific,
such as (we're gonna go with a local reference for yours truly) how are the Atlanta Braves doing this season? Then the assistant would actually break your heart by telling you
how poorly the Braves are doing this season and it
(15:27):
is abysmal. And you could follow that up with when
do they play at home next? And the assistant would
understand that when you say they, you mean the Atlanta Braves,
and when you say at home, it would understand you
meant Atlanta, Georgia. So it would be able to figure
out the context of what you said without you having
(15:47):
to restate when do the Atlanta Braves play in Atlanta next?
You could take these little linguistic shortcuts that we would
normally do in natural conversation, but typically machines are not
great at that. They don't have the capacity to understand
how one sentence can follow another. But this is an
(16:10):
example of how that's changed through machine learning. So you've gotten this new approach where you can continue a series of questions that build on previous questions and answers, and the Google Assistant can continue to give you relevant information, which is a pretty powerful statement in AI.
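Here's a small, hypothetical sketch of that context carryover, and again this is not Google's actual implementation. The class name and canned responses are invented; the point is simply that remembering what the previous question established is what lets a follow-up like "when do they play at home next?" make sense.

```python
# Hypothetical sketch of carrying context between conversation turns.
# Illustrates why remembering the previous subject lets a pronoun like
# "they" and a phrase like "at home" resolve to concrete entities.


class ConversationContext:
    def __init__(self):
        self.last_team = None
        self.last_home_city = None

    def ask(self, question: str) -> str:
        q = question.lower()
        if "braves" in q:
            # Remember the entities this turn established.
            self.last_team = "Atlanta Braves"
            self.last_home_city = "Atlanta, GA"
            return f"Here is how the {self.last_team} are doing this season..."
        if "they" in q and self.last_team:
            # Resolve the pronoun and "at home" from stored context.
            place = self.last_home_city if "at home" in q else "their next venue"
            return f"The {self.last_team} next play at {place} on <date>."
        return "I'm not sure what you're referring to."


if __name__ == "__main__":
    convo = ConversationContext()
    print(convo.ask("How are the Atlanta Braves doing this season?"))
    print(convo.ask("When do they play at home next?"))
```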
(16:31):
Also, you might have heard that funny story that Google fed romance
novels to its AI to make it better at understanding
natural language, And to be fair, that's just part of
the story. Google actually fed lots of different types of
unpublished literature to its AI, all with the goal of
teaching the AI that there are many different ways to
say the same thing. So here's an example. I
(16:53):
could say it's raining pretty hard today, or it's really
coming down out there, or it's raining cats and dogs,
or it's pouring outside, and all of those mean the
same thing. But they're all different ways of saying that
it's raining really hard. And there are a lot of
other ways I could, you know,
(17:13):
express the same thought using different words. And that's
a challenge for machines because we as humans understand that
you can say all these different things and that all
means the same thing. But machines have to be taught that.
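A quick toy example of why machines have to be taught this: the "it's raining hard" sentences above share almost no words, so any system judging meaning by surface word overlap scores them as unrelated. The short Python snippet below just measures that overlap; the low numbers it prints are the gap that paraphrase-heavy training data helps a model bridge. It is purely illustrative, not anything Google used.

```python
# Toy illustration: these sentences mean roughly the same thing but share
# almost no words, so judging meaning by surface overlap alone fails.


def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity of the word sets of two sentences."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)


paraphrases = [
    "it's raining pretty hard today",
    "it's really coming down out there",
    "it's raining cats and dogs",
    "it's pouring outside",
]

if __name__ == "__main__":
    base = paraphrases[0]
    for other in paraphrases[1:]:
        print(f"{other!r}: overlap = {word_overlap(base, other):.2f}")
```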
So romance novels, as it turns out, are a good
way to teach an AI how to interpret different
(17:35):
things because romance novels are incredibly formulaic. If you were
to break down a romance novel and you outlined it scene by scene so that you understood where the beats in the story were and who the characters were and their relationships to one another, you would see that
a lot of romance novels follow the exact same structure,
(17:57):
the exact same plot structure, but because they're written by
different people, because the character names and places are often
changed from book to book. I mean, obviously you wouldn't
want to write the same novel forty times. It means
that you have a lot of different ways to express
the same ideas. So if you feed a whole bunch
(18:18):
of formulaic novels into an AI to teach it that humans have lots of different ways to express the same thoughts, that's a pretty powerful tool. And again, it wasn't
the only type of story that was being fed to
Google's AI. It's just the one that caught a lot
of people's attention because the headlines write themselves at
(18:39):
that point. So one thing that is really, you know,
funny about that is a lot of people made jokes
about Google AI suggesting different ways to rip a bodice
or to make a bosom heave from the whole romance
novel thing. But as it turns out, there was some real thought given to using this approach. Now,
one of the ways that these assistants work so well
(19:02):
is to tap into information about you and to store
all of that off of the hardware so that it
can anticipate what you want and what you need and
how to fulfill that. So, for example, if I'm using
Amazon and I'm using the Echo and I'm using
Alexa to purchase certain things off the Amazon Store, this
(19:25):
ends up tapping into that algorithm that tells Amazon what
I've bought and what I have browsed and the
sort of stuff I'm interested in, so it can suggest
new things that I might be interested in but didn't
know about. All of that is a very powerful tool.
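As a rough, hypothetical illustration of that kind of suggestion engine (not Amazon's actual algorithm, which is far more sophisticated), here's a tiny "people who bought X also bought Y" sketch in Python. The item names and purchase histories are made up.

```python
from collections import Counter

# Hypothetical sketch of co-purchase-based suggestions: look at shoppers who
# overlap with me, count what else they bought, and surface the most common
# items I don't already own. Purely illustrative data and logic.

purchase_histories = [
    {"smart speaker", "smart bulb", "hdmi cable"},
    {"smart speaker", "smart plug"},
    {"smart bulb", "smart plug", "motion sensor"},
]


def recommend(my_items: set, top_n: int = 2) -> list:
    """Suggest items frequently co-purchased with things I already own."""
    scores = Counter()
    for history in purchase_histories:
        if my_items & history:                  # this shopper overlaps with me
            for item in history - my_items:     # count what else they bought
                scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]


if __name__ == "__main__":
    print(recommend({"smart speaker"}))  # likely ['smart bulb', 'hdmi cable']
```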
One of the exceptions here is Apple's Siri. So Apple
(19:48):
pretty much locks everything down into the hardware as opposed
to sharing it with third parties or putting it in the cloud. That's because Apple's revenue source is selling that hardware and related services, like support plans, like a product support or protection plan for your hardware. That's how Apple
(20:09):
makes its money. It's making it through selling this hardware
that it is producing, as opposed to something like Google, which, until Google Home comes out, is selling an idea to you and then selling you to advertisers. So that benefits Apple in some ways, because it means that you can trust Siri a little more than
(20:32):
you could some of the other assistants, because it's
mostly contained to your device. On the flip side, it
makes the actual service a little less useful because it
cannot tap into the massive resources of the Internet the
way some of these other assistants can, because again it's
all pretty much contained to your device. Now, it can access,
(20:53):
it can pull stuff from the Internet for you, but
it's not as interactive as some of these other assistants are.
So then, with the possibility of advertising, or things like Google's or, rather, Amazon's integrated shopping services, you
start to see some real potential for revenue generation on
(21:15):
the back end. But it also brings up some questions
about privacy and security. Now. To look into that matter further,
I spoke with an expert on the subject, the founder
of a company called BigID, Dimitri Sirota, and
here's what he had to say. Well, I think that
clearly there's a certain degree of inevitability around this. I
(21:37):
think we've moved from an age of having these technologies
and you can almost think of this as kind of
Web 1.0 in terms of being responsive to
the user and personalization really being about kind of targeting you.
I think we're now shifting to an era of anticipation.
You know, the technologies are becoming smarter and they know
(21:58):
more about you because they touch you on so many levels,
whether you're on the web, whether you're on your mobile,
whether you're at the office, whether you're in the car,
whether you're at home, as is the case with
Google Home and Amazon Echo and similar technologies, that they're
no longer just about kind of responding to a particular action.
(22:19):
They're now trying to anticipate what you'd want. And to
some degree, you know, they're becoming more like your mother
or your parents, where they know so much about you
that they anticipate your needs, and there's a good
and a bad to that, right, So just the actions
that we would take in our homes can start to
set up these expectations, for lack of a better word,
(22:42):
that our technology will have about us. For example, the
easiest way I think to illustrate this to today's audience
is to talk about something like the Nest thermostat, where you have set it a certain way, and it starts to learn what your preferences are over time, and then it begins to automatically adjust, without you ever having to touch it, to the point where it's even seeing quote
(23:04):
unquote seeing when you are home versus not home. This
is the sort of stuff that, when incorporated into a
device like Google Home, can become very powerful. But also,
like you said, it has this other side to it, this side that, if we don't pay attention to it, could become potentially harmful to us, or at
(23:25):
the very least, inconvenient to us. So, for example, with us setting up Google
Home so that we would be able to control lighting
and security systems and thermostats, not only would we
have it set up so that it's to our preferences,
but it actually has learned when we're at home versus
(23:47):
when we're not at home, and what that information means
could be potentially very harmful to us. So in your mind,
where does accountability lie? Is this something that we ourselves are at least partly accountable for, that kind of information? Or the companies that create this technology, are they accountable? It's such a cloudy area. Where do
(24:11):
you see that? So it's a mix now clearly, and
I think I want to kind of emphasize this. You
mentioned earlier how this could become inconvenient. The reality is
is that we as the consumer want this because we
want it because it is convenient. We want technologies that
are passive. We don't necessarily want to click buttons. We
(24:32):
want technologies that are intelligent enough to be able to
help us make decisions. Right. I mentioned earlier about anticipation; the challenge is with convenience, and convenience typically goes against the
grain of security and privacy to some degree. If we
really want a mother, kind of anticipating our needs,
(24:52):
what we want for lunch, where we want to go
for travel, you get the negative of having your parent with you after you've kind of left for university or left for the office, which is that you
don't want them intruding too many places or too much
about you. You want to keep certain parts of your
life separate. And the reality is that there's a
trade off around here. So you know, I do think
(25:15):
that there's a consumer drive towards this. It's just that
we are not necessarily always prepared because there's a bit
of a lag or a delay before the consequences of
having this convenience are fully made aware to us.
So in terms of the responsibility of who cares about this,
we as consumers obviously care about it. You know. The
(25:36):
companies like Google and Amazon obviously you know, they would
argue that by personalizing service to you, they are giving
you this convenience. But the reality is it's really up
to their best efforts or what they think is the
right combination of privacy and security for now. And the
reason for that is the regulators take time to catch up.
(25:57):
They don't necessarily know the latest, they didn't attend Google I/O, and they don't necessarily know how to react or respond.
So there's always gonna be this lag between what
the consumers want, what the companies are able to deliver
in response to that need, and then what the regulators
are able to introduce in terms of a balance in
(26:18):
terms of rules and regulations, and in this particular case, around privacy and security. And I would argue
that the companies have it in their best interests to
handle this as carefully as possible for multiple reasons. One,
like you've just pointed out. If they do not, then
that means that you're going to get that sort of tick-tock effect, the tick being that they take
(26:40):
a certain approach, the tock being that regulations follow, because if there's any mishandling, especially of a
chaotic scale, then there's going to be a harsh response
further down the line, and it doesn't behoove the companies
to invite that in. Also, obviously, if they do
(27:01):
not prove to be responsible with that data, that reflects
poorly on them from a consumer standpoint as well, they'll
lose customers. So it's not as if there's no incentive
on the company's part to be careful, but at the
same time they want to be able to leverage that
data, to make as good use of it
as possible. I've often said on this show that if
(27:23):
you look around and you realize that the service you
are using doesn't cost you anything, that essentially that means
that you yourself are the product and that what you
are doing is generating value for another entity out there,
for example like Google, where you're using Google Search and
that in turn is generating value for Google. You yourself are probably the product being sold to other companies. So
(27:45):
it's one of those things where it's the balance between
the desire to provide this service and to make, you know, revenue off of something beyond just
selling a device like the Google Home device, and making
certain that you don't alienate your consumer base or invite
particularly restrictive regulations. To that end, of course, in
(28:08):
the United States, it's one story. In other parts of
the world, there are different views of privacy and security,
some of which go well beyond what is typically seen
here in the US. Do you think the devices, do you think Google Home and things like Amazon's Echo, do
you think those are going to have different levels of
acceptance in different parts of the world? And where do
(28:29):
you think there might be a case where this is probably going to be a big success in one place (we've heard that the Amazon Echo has been a pretty big success so far) versus a market where it may not be? Yeah, well,
you've seen even things like credit card adoption differ from
country to country, just because there are different kind
(28:51):
of cultural mores around credit, around potential privacy implications, in terms of, you know, a transaction and kind of its origins and so forth.
And so you've seen this in Europe in particular, right,
So not all countries in Europe are equally predisposed to
using credit cards as we are in the US. So yeah,
(29:12):
I think there definitely will be different cultural adoption.
But you know, at the end of the day, like
you mentioned rightly, a lot of these companies, it's in their interest to do a good job, because we as consumers only tend to shop from people we trust.
The challenge, of course, is that we will sometimes wonder,
(29:32):
just like you are right here in terms of this interview,
you know, what are the implications. You know, you could
very quickly go from a situation that appears like having your mother around you to having a situation where you have Big Brother around you, in the Orwellian sense, where something is so aware of every facet of your life, but maybe
(29:53):
they just know a little bit too much. And
so you know, we're kind of entering that phase. Right.
We've historically had, you know, places that we weren't necessarily connected to, right, and our homes, with the exception obviously of our PCs and our phones, have not been connected. They've been, to some degree, set apart: we sit
(30:13):
down for dinner, we're not connected to the net.
And I think what this revelation is making people aware
of is that, kind of in the future, there'll be very few places left that are not networked,
where our activities are not kind of transponding or transmitting
or telegraphing our activities. And you know,
(30:36):
it will take time for people to adjust, and as
I mentioned earlier, it won't just be
about consumers and kind of buyers, but also you know,
the governments will have a say, and as you pointed out,
you know, in certain places the governments have already had
a say around privacy, like in Europe with the introduction of the General Data Protection Regulation, to better protect consumers.
(31:00):
I think we're all going to become a lot more
sensitive to the privacy implications of always being online. And
I think that we're seeing that as well, just you know,
in other areas of technology. Just recently, there were these reports coming out about the FBI's database of biometric
(31:21):
data and the concerns people have about that and even
interesting questions like, do I own my own face? Shouldn't I have access to data about me?
And this again, you know, we're in a world where
our technology is pervasive, and in many ways that is amazing.
(31:42):
It is giving us an almost seamless experience of having
our desires catered to before we can even give thought
to them. That is the big promise of the Internet
of things, and I love that idea. It is something
that really appeals to me. On the flip side, you
start to realize that your regular actions are creating data,
(32:02):
and that data does in fact have value, different value
to different entities out there, and so having these sort
of technologies, you are inviting them in. Once you have reconciled this idea and you realize that this is going on, then you can start to make those strategies: what is the best way of handling that, both on the end user side and on the back end side,
(32:25):
so that it is a responsible approach. That's really what
your company is looking into, right? The idea of security and helping companies protect customer data. Yeah,
so that's actually very kind of similar to this. I think, you know, one of the things
(32:46):
you were kind of touching upon is this kind
of expectation of organizations to do a better job of
safeguarding your information, essentially being responsible custodians of your data.
The challenge for most companies, you know, maybe with less sophistication than a Google, but maybe even Google,
is that they collect so much information about you, and
(33:09):
they collect it in so many different places and so many applications.
It doesn't necessarily mean that all that information is tied together,
but you are leaving digital footprints across organizations and so
these companies are essentially becoming large data collection points,
and it's hard for you as a consumer to know
(33:30):
exactly what digital footprints you've left. You want to know
what assets you've left with them. And believe it or not, you know, if you think about accounting and how companies are expected to have responsible tools in place
to track how much revenue comes in, how that money
is getting dispersed, who it's paying. So that's all about
(33:53):
accounting and financial responsibility. On the digital side, there's very
little of that today, and that's kind of the origin of BigID. You could think of BigID as a tool set to help big companies understand where their customer information is, what's at risk or potentially
at risk, either in terms of breach or in terms
(34:14):
of misuse, and then how to better understand how
that information is getting used in the organization, either to
help ensure that it's compliant with regulations or secondly that
it's compliant with their own kind of privacy rules, their
own consent agreements that they've created between themselves and their consumers.
(34:34):
And so I think that this idea of a ledger
or accounting software for privacy information doesn't as yet exist,
and I think increasingly, just given the number of digital
touchpoints that we have with the companies we interact with,
that it's certainly going to be a future requirement.
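To make that "ledger for personal data" idea a bit more concrete, here is a purely hypothetical sketch in Python of what one entry in such an inventory might record. This is not BigID's product or API; the field names are invented, and the point is only that each piece of customer data gets tied to where it lives, why it's held, and the consent behind it.

```python
from dataclasses import dataclass

# Hypothetical sketch of a personal-data "ledger" entry: a record of where a
# piece of customer information lives, why it is held, and the consent behind
# it. Not any vendor's real schema; all field names are illustrative.


@dataclass
class PersonalDataRecord:
    data_subject: str       # whose data this is
    category: str           # e.g. "email address", "voice recordings"
    system: str             # which internal system stores it
    purpose: str            # why it was collected
    consent_reference: str  # link back to the consent the customer gave


def records_without_consent(ledger):
    """Flag entries that cannot be tied back to a consent agreement."""
    return [r for r in ledger if not r.consent_reference]


if __name__ == "__main__":
    ledger = [
        PersonalDataRecord("customer-123", "email address", "crm",
                           "order notifications", "consent-2016-04"),
        PersonalDataRecord("customer-123", "voice recordings", "assistant-logs",
                           "service improvement", ""),
    ]
    for record in records_without_consent(ledger):
        print("Needs review:", record.category, "in", record.system)
```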
I think that's really interesting, and I'm thankful that there
(34:55):
are organizations like yours that are looking into this to
try and create those best practices. This is because, as we've seen recently, scholarship has shown it takes very
few data points to be able to link some information
to a specific person, and I think a lot of
companies out there may not even be aware of the
(35:16):
implications of some of the data they're collecting, not through
any sort of maliciousness. It simply is that, as you
point out, there are so many of these little digital touch
points that you cannot necessarily anticipate what the consequences are
from the very beginning. And it's amazing
to me to think that this is going on everywhere,
(35:38):
and it's a snowball that's already
going down the hill. It's just going to keep on going.
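To put a rough number on that earlier point about how few data points it takes to link information to a specific person, here's a toy back-of-the-envelope calculation in Python. The population figures are illustrative assumptions, not census data or results from any particular study.

```python
# Toy arithmetic: why a handful of data points can narrow down to one person.
# The numbers below are illustrative assumptions, not real statistics.

zip_code_population = 25_000        # people sharing one ZIP code (assumed)
birthdates_in_80_years = 365 * 80   # possible exact birth dates (assumed span)
genders = 2

# If birth dates and gender were spread evenly, the expected number of people
# in that ZIP code matching one exact birth date and gender is tiny:
expected_matches = zip_code_population / (birthdates_in_80_years * genders)

if __name__ == "__main__":
    print(f"Expected people matching ZIP + birth date + gender: "
          f"{expected_matches:.2f}")
    # Roughly 0.43 -- that combination usually points at a single individual.
```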
It's very reassuring to hear that there are people actively
thinking about these issues and trying to
find the best ways of handling that kind of information
so that we can
(35:59):
avoid as many chaotic moments of absolute failure as possible. Well, again,
I think a lot of people assume that big companies
are actively pursuing the collection and selling of all of
the data, and that's not the case. Across the board,
(36:20):
there are companies that are collecting a great deal of
data in the pursuit of whatever business they do, but
it's not necessarily with an intent
to do anything, you know, commercial with that information.
But knowing this makes it easier for those companies to
be more responsible, and also to maybe even
(36:41):
get to a point where they change up their practices
so that they're only collecting the points of data that
are relevant to their business. Yeah, well, look, certainly that's
the intent of BigID, to help companies be more
responsible around their digital assets, the customer assets, which you
could argue are probably the most important assets. You know.
Sometimes you've heard people talk about employees, and you know
(37:03):
your your most important assets are your employees, and they
walk out the door every night. Well, your customers are pretty valuable too, because if they stop
patronizing you, your business suffers, and their loyalty increasingly is
very fickle. So if they don't have confidence that you are protecting their personal information, their kind of digital
(37:26):
footprints, they'll go somewhere else. They'll go to somebody that
does take better care of that, which again is why
it kind of makes sense to have technology that gives organizations better tooling to track, manage, and protect those
digital personal assets, the digital information that represents kind of
(37:47):
who you are, where you live, where you've been, what
you like when you're going on vacation, et cetera. Now,
I've got a question for you personally, which is that,
are you at a point where you would adopt a
technology such as Amazon Echo or Google Home or would
you personally wait a little longer or you know, where
(38:09):
do you stand on that? Because, I can tell you, I guess it's only fair that I answer my own question: being aware of these issues and being cognizant of them, I'm still
leaning towards getting one, knowing what I know, and taking
the risk in order to have the benefit. My wife
feels very differently about it, so that's why I do
(38:30):
not have one. But I'm curious, as you, as an expert on this subject matter, how you feel about that. Yeah, so I think there's two things
that come into play. Obviously, I'm a fifteen-year veteran
in the security industry with a company focused on enterprise
privacy management now, so I understand some of the consequences
and repercussions. But I'm also at heart a person that
(38:53):
likes technology. I was a reader of Isaac Asimov as a kid, Heinlein, all the kind of great science fiction writers, and I realized the future is coming towards us, and we could either try and hide or duck, or
we could try and embrace it and understand the consequences.
So for me personally, I look at this and
try to understand how this technology will impact our lives
(39:15):
going forward. So I will be an embracer of the
technology because I think, as I said at the very beginning,
there's a lot of good, there's a lot of convenience
that comes with it, but it's also important for me
to understand some of the consequences by going through it firsthand,
because at the end of the day, if I'm, you know, part of a team building technology to better help
(39:36):
protect customer information, then I need to understand the
implications of these new home automation, car automation technologies. Excellent,
Dimitri Sirota, thank you so much, founder and CEO of BigID. You really helped me, and I hope
my listeners understand a bit more of the implications of this.
(39:57):
I realized that this sort of technology that has this
incredible connection to our personal lives, really a level of
intimacy that most technology does not have, carries with it
some things that can be a little worrisome. But I
agree with you. I think if we enter into it
with open eyes, and we are aware of the challenges
(40:18):
(we're not denying that challenges exist, but we are aware of them), that allows us to actually overcome those challenges
and reap the benefits of this really powerful tool. Thank
you so much for coming on the show and talking
with us. My pleasure. Take care. I think it's really important
to remember that Mr Sirota actually said we should embrace technology,
(40:39):
but do so in a way where we're aware of
the consequences and we are doing our best to mitigate
any negative fallout from this technology moving forward. We shouldn't
deny it, we shouldn't try to stop it, but we
should definitely be responsible with the way we develop it
and the way that we use it. Potentially, really, it
(41:00):
has the capacity to make our lives easier. I mean,
imagine being able to handle everything by just shifting
it over to your personal assistant who lives everywhere. You
can access that personal assistant wherever you might be through
whatever computer or smartphone or standalone device you happen to
(41:21):
have at your disposal at that place, and access all
of those features, everything from entertainment to handling travel and stuff that you want taken care of, that you
don't necessarily want to attend to yourself, so you can
save that time to do something else. That's a really
cool idea. And I love the promise of digital assistants,
(41:45):
the idea that we will slowly get towards this future
where the technology around us anticipates what we need before
we can give voice to it. I love that thought
and the idea that my life just becomes sort
of magical as a result, because the technology is shifting
things to my whim before I can even voice what
(42:06):
that whim is, before I might even be aware there's
a whim. I could be whimless. I'm done saying whim.
In future episodes, I'm going to take a closer look
at what makes these technologies tick and the potential they
have to change the way we interact with the world
around us. In the meantime, if you have suggestions for
future episodes of tech Stuff, send me an email. The
(42:28):
address is techstuff@howstuffworks.com,
or drop me a line on Twitter or Facebook. The
handle for both of those is TechStuffHSW, and I'll talk to you again really soon. For
more on this and thousands of other topics, visit
(42:49):
howstuffworks.com.