
May 12, 2023 45 mins

From Amazon to Google to Apple, companies are creating digital assistants to make our lives easier. What's the technology behind them and are they safe to use?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? Here we are on a Friday again. That
means it's time for a classic episode. This episode originally

(00:27):
published on June fifteenth, twenty sixteen. It is called AI
Assistants and You. This is clearly an episode that could
use an update now. Twenty sixteen, man. Boy, we've got a lot more to say about AI assistants these days, but I'll leave that for the outro. Let's take a listen.

Speaker 2 (00:48):
So we want to talk today. We being me and you.
You can talk back.

Speaker 1 (00:54):
I just won't be able to hear you. We want to talk about these personal digital assistants, but not the PDAs of the past. We want to talk about the Siris and the Cortanas and the Google Assistants and things of that nature. And
I want to specifically look into how are these going
to be incorporated into our lives in the future, and
what are some of the concerns we have and what

(01:15):
differentiates all these products that have been sort of coming
into their own.

Speaker 2 (01:20):
Over the past few years.

Speaker 1 (01:22):
So to start with, you might say, well, you know, which of these assistants came first? And arguably you could say Google actually beat everyone to the punch by a couple of months, because on June fourteenth, twenty eleven, Google announced at an Inside Google Search event that it was
going to roll out voice search on Google dot com.

Speaker 2 (01:45):
And the project name at Google was.

Speaker 1 (01:49):
Majel or Magel, depending upon how you want to pronounce it,
but Majel would be the way her name was actually pronounced.
Named after Majel Barrett, who was the wife of Gene Roddenberry,
the creator of Star Trek. Majel Barrett actually played the voice of the computer system, particularly on Star Trek: The Next Generation. Whenever you heard the computer speak, that was Majel Barrett's voice. She also played Deanna Troi's mother, Lwaxana Troi. Anyway,

(02:16):
they named it after her. Internally, it actually doesn't have
a name name, which kind of sets it apart from
some of the competitors. So the Voice Command project was
a tool from Google Labs, so their research and development
arm and on March twenty fourth, twenty fourteen, this particular
feature was rolled into the Google Now product and it

(02:41):
was part of the Android four point one release.

Speaker 2 (02:43):
That was the Jelly Bean release.

Speaker 1 (02:46):
Now, at that point, the speech recognition commands had evolved
a little bit. It had gone beyond some of the
initial stuff where you could just ask Google to search
something for you. This was also a feature that was
worked into Google Glass, so if you had a pair
of Google Glass, you know that the voice command would
always start with the phrase okay, followed by Google. I'm

(03:08):
not going to say it together, just in case some
of you are listening to your devices or listening with
a device nearby and it's on its home screen, I
don't want to activate it for whatever reason, but you
could use that phrase that would end up alerting the
virtual assistant that you wanted something, and then you would
speak whatever it was you wanted. And over time functionality increased,

(03:29):
so it went beyond just searches and into more interactive
features like with an Android phone, you could set an alarm,
or you could set a reminder or review your calendar
and more as time went on. At this point it
has evolved into something a little bit more robust than that.
Even you can start to interact with some third party

(03:49):
stuff as well, and at Google I/O twenty sixteen it
became part of Google Assistant. Now Google Assistant is really
the intelligent personal assistant product from Google.
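To make the wake-phrase pattern described above a bit more concrete, here is a purely illustrative Python sketch. It is not Google's implementation; the phrase, function, and variable names are invented for the example, and it assumes the audio has already been turned into text by some separate speech-to-text step.

```python
# Purely illustrative sketch of the wake-phrase pattern described
# above; it is not Google's implementation. It assumes speech has
# already been transcribed to text, and the wake phrase here is
# just a placeholder string.

WAKE_PHRASE = "ok google"

def extract_command(transcript):
    """Return the command following the wake phrase, or None."""
    text = transcript.lower().strip()
    if text.startswith(WAKE_PHRASE):
        command = text[len(WAKE_PHRASE):].strip(" ,")
        return command or None
    return None  # speech not addressed to the assistant is ignored

if __name__ == "__main__":
    for utterance in ["OK Google, set an alarm for 7 am",
                      "what's the weather like"]:
        print(repr(utterance), "->", extract_command(utterance))
```

The point of the sketch is simply that everything before the trigger phrase is ignored, and only what follows it is treated as a request.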

Speaker 2 (04:04):
The earlier versions.

Speaker 1 (04:05):
You could think of as sort of a rudimentary form
or perhaps a prototype, or maybe just like these are
features that would eventually be rolled all into one finished product,
being Google Assistant. So by that argument, if you say
Google Assistant, you know, if you mark the Google I/O
twenty sixteen event as its premiere, then it's not the oldest,

(04:28):
but it dates back to June fourteenth, twenty eleven, when
Google announced this initial voice search ability. So that
same year, in October, on October fourteenth, in fact, Apple
introduced Siri.

Speaker 2 (04:46):
And I'm sure you all know what.

Speaker 1 (04:47):
Siri is, but just in case you don't, it's billed
as an intelligent personal assistant and it was introduced as
a feature with the iPhone four S and it's been
part of the iPhone iOS ecosystem ever since. It uses speech recognition to interpret user requests and responds with what is hopefully an appropriate action. According to Siri's creators, Apple actually

(05:08):
scaled back what Siri was supposed to be able to do.
They said that they had arranged for Siri to work
with about forty to forty five different apps that Apple had,
and then the company scaled that back significantly, so the
Siri creators essentially sold the product to Apple. Then they

(05:29):
went on to create a different intelligent assistant called Viv, and Viv is currently unaffiliated with any other big names, but it has received funding from some very wealthy folks in the tech sphere, like Mark Zuckerberg, for example. And the creators of Viv say that it's what Siri was supposed to be from the get go,

(05:50):
and essentially they're saying that Siri was kind of hampered, hamstrung,
if you will, by Apple, and we'll get into more
about why that may be in a little bit.

Speaker 2 (06:01):
So Siri actually came.

Speaker 1 (06:02):
Second after Google had announced their voice search, keeping in
mind that Siri was a different presentation, So you could
argue that Siri was really more of the first assistant,
and that the Google approach eventually evolved into an assistant,
but wasn't really at that same level back in twenty eleven.

(06:23):
Moving forward, in spring twenty fourteen, that's when Microsoft got into the game by unveiling Cortana, which is their intelligent assistant for the Windows Phone platform, and in twenty fifteen, Microsoft included Cortana with Windows ten. So if you have a Windows ten machine, Cortana

Speaker 2 (06:39):
Is part of that.

Speaker 1 (06:40):
And if you have a microphone you can actually give voice commands to Cortana. You can also interact via text. Cortana is named after the AI in the Halo franchise and is voiced by the same actress who provided the voice of Cortana in the games. So you can ask fun things about Master Chief and she always has an interesting answer for you. All of these, by the way,

(07:03):
tend to have some sort of fun element to them,
where the developers clearly thought of ridiculous things you could
ask the digital assistants and built in responses that were humorous.

Speaker 2 (07:16):
For example, the.

Speaker 1 (07:17):
Big one that everyone talked about with Siri was where
can I hide a body? And Siri would come back
with nearby quarries and cave.

Speaker 2 (07:25):
Systems and things of that nature.

Speaker 1 (07:28):
Now, in November twenty fourteen, we get our final big
name in this battle, Amazon. That's when Amazon unveiled the Echo,
which is that sort of standalone speaker system that has
the intelligent assistant Alexa incorporated into it, and like the
other ones I've mentioned so far, Alexa can follow your

(07:48):
voice commands and interact with the Internet as well as
with other Internet connected devices. That list of Internet connected
devices Alexa can work with is growing day by day,
and Amazon's actually trying to build out the capabilities further
and as such has hired a team to create a
guide on how to develop for Alexa. I'm going to
interview one of the developers on that team in a

(08:10):
later episode. We actually have that scheduled for later this summer,
and we'll talk more about what it's like to develop
for this platform and the potential of using such a
platform in new and creative ways. So we have four
really big players in the space. We've got Apple and
Google and Microsoft and Amazon already vying to be the

(08:34):
big digital assistant provider. Then we have the other names,
like we've got the team behind Viv and other apps
as well that are in this space that are trying
to kind of become the voice that you interact with
so that it can do all the things you need it to do in as seamless a way as possible.

(08:55):
So one of the things we need to also look
at is how these products differentiate. How are these different players different from one another? If they're exactly the same as each other, then it really doesn't matter which one you pick, right? I mean, it kind of depends just on which platform

Speaker 2 (09:10):
You have available.

Speaker 1 (09:11):
If you have all iOS devices, then Siri is pretty
much going to be the one you're going to depend
upon the most, most likely at any rate. So Cortana, Siri,
and Google Assistant are all part of existing platforms like
smartphones and computers, So they are incorporated into things that

(09:32):
we already have. You probably already have a smartphone
or a computer or both, and so it makes sense
that you would incorporate your digital assistant into that. You
don't have to buy anything else, it's right there, and
you can incorporate that into other systems that are connected
to a personal network or a home network. Then you've

(09:54):
got Alexa, which debuted on a standalone device called the Echo,
which again is just this sort of intelligent speaker, a
smart speaker with a built in microphone. Google Assistant is
actually following suit with that with Google Home that was
announced at Google I/O twenty sixteen, and Google Home is
also a smart speaker with a microphone that's going to

(10:16):
be available sometime later in twenty sixteen, and as of
the recording of this podcast, I don't have a date
or a price on that, so it's hard to say
whether or not it will be competitively priced against the
Amazon Echo.

Speaker 2 (10:29):
It does look like it's going.

Speaker 1 (10:30):
To be a particularly powerful version of this personal assistant. And there are also rumors emerging that Apple is also working on Siri hardware, so it'd be another standalone speaker microphone system of some sort, and that Apple's Siri platform
would exist on that. Now, as of the recording of

(10:51):
this podcast, we don't have confirmation on that, so there's
no timetable associated with such a thing or a price.
I would expect that any announcement of such a device would come at one of Apple's big events. So probably, if I had to guess, I'd say September twenty sixteen
is when they would announce it. That's typically when they

(11:12):
announced all the big iPhone changes. But that's just a guess.
They might hold a single event for this particular thing,
or they might not hold an event at all. They
may just release it. That doesn't seem particularly Apple-like, but it's a possibility. So what's the big deal with this technology in the first place? Why should we care? Well,

(11:32):
For one thing, it represents huge leaps forward in the
field of artificial intelligence. So in one way, it's a
really cool glimpse at the state of the art in AI,
specifically in stuff like speech recognition, which is pretty hard stuff.
I mean, we all have different ways of pronouncing words,

(11:54):
and depending upon your region, you might have an accent
that has a different way of pronouncing words. For example,
you know the Brits say aluminium and we say aluminum
here in the United States. Then even within a single country,
you have different ways of pronouncing things. And when Google
first began translating speech to text in voice messages, I

(12:19):
noticed that it was having a real hard time interpreting
the words of some of my friends and family. Now
keep in mind, I am in the Southeastern United States, in Georgia, and we have a lot of people here

Speaker 2 (12:35):
With Southern accents. I have a tiny bit of one.

Speaker 1 (12:39):
My parents have a slightly stronger Southern accent. Some of
my extended relatives have an even stronger Southern accent, and so
when they would call and leave a voicemail, Google had
to guess at what they were saying, and was not
always correct. I would have to go and listen to
the voicemail because the transcript would be completely indecipherable. Now,

(13:03):
over time this has improved. The speech recognition software has improved,
where it can adjust for things like different accents and
the different ways that people speak, using a lot of
different algorithms that have been based in machine learning to
kind of get a grip on what is being said

(13:24):
and even anticipate what the next thing will be in
any line of thought. Obviously, for someone like me who
stumbles over words occasionally, that's a real challenge because sometimes
I don't even know what's next to come out of
my mouth. But that's really where that power comes in. Now,
over time, not just speech recognition has improved, we've also

(13:46):
had to look at the problem of natural language. Now,
natural language is how you and I communicate with one another.
Unless it's like a really formal setting, we usually are
pretty casual with our language, and we can make use
of lots of different linguistic flourishes and tools, things like
figures of speech, metaphors, similes, puns, references, and lots of

(14:08):
other stuff that gives meaning to what we say. But
only if the other person also understands what's going on.
They also have to have that benefit, otherwise it just
becomes a jumble of nonsense. I'm reminded of a Star Trek: The Next Generation episode where characters only spoke
in allegory, and if you didn't have that cultural background,

(14:32):
if you didn't understand the references, you didn't understand what the communication meant. Similar problem with machines.
They don't necessarily know what we're saying all the time.
A lot of machines are not very good at doing this.
But natural language familiarity has been a huge challenge in
AI, and we're getting better at overcoming that challenge.

(14:55):
So at that same I/O event where Google announced Google Home, they demonstrated that you could start a conversation with your
personal assistant asking something fairly specific, such as, we're going
to go with a local reference for yours truly, how
are the Atlanta Braves doing this season? Then the assistant
would actually break your heart by telling you how poorly

(15:17):
the Braves are doing this season and it is abysmal.
And you could follow that up with when do they
play at home next? And the assistant would understand that
when you say they, you mean the Atlanta Braves, and
when you say at home, it would understand you meant Atlanta, Georgia,
so it would be able to figure out the context

(15:38):
of what you said without you having to restate, when do the Atlanta Braves play in Atlanta next? You could take these little linguistic shortcuts that we would normally use
in natural conversation. But typically machines are not great at that.
They don't have the capacity to understand how one sentence

(15:59):
can follow another. But this is an example of how
that's changed through machine learning. We'll be right back to
talk more about AI assistants and you after this quick break.
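As a rough illustration of the context carryover described in the Braves example above, here is a toy Python sketch. It is not how Google Assistant actually works; real assistants rely on learned language models, and the entity list, class, and method names here are invented purely for illustration. It just resolves the pronoun "they" to the most recently mentioned known entity.

```python
# Toy sketch of multi-turn context carryover, in the spirit of the
# Braves example above. Real assistants use learned models; this
# just resolves a bare "they" to the last known entity mentioned.

KNOWN_ENTITIES = ["the Atlanta Braves"]  # invented for this example

class ConversationContext:
    def __init__(self):
        self.last_entity = None

    def resolve(self, question):
        lowered = question.lower()
        # Remember any known entity mentioned in this turn.
        for entity in KNOWN_ENTITIES:
            if entity.lower() in lowered:
                self.last_entity = entity
        # Substitute the remembered entity for a bare "they".
        if self.last_entity and " they " in f" {lowered} ":
            question = question.replace("they", self.last_entity)
        return question

if __name__ == "__main__":
    ctx = ConversationContext()
    print(ctx.resolve("How are the Atlanta Braves doing this season?"))
    print(ctx.resolve("When do they play at home next?"))
```

Running the sketch, the second question comes out as "When do the Atlanta Braves play at home next?", which is the kind of substitution the episode is describing, just in miniature.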

(16:19):
So you've got this new approach where you can continue
a series of questions that build on previous questions and answers,
and the Google Assistant can continue to give you relevant information,
which is a pretty powerful statement in AI. Also, you
might have heard that funny story that Google fed romance

(16:41):
novels to its AI to make it better at understanding
natural language, And to be fair, that's just part of
the story. Google actually fed lots of different types of
unpublished literature to its AI, all with the goal of
teaching the AI that there are many different ways to
say the same thing. So here's an example. I could
say it's raining pretty hard today, or it's really coming

(17:04):
down out there, or it's raining cats and dogs, or
it's pouring outside, and all of those mean the same thing.
But they're all different ways of saying that it's raining
really hard. And there are a lot of other ways
I could say the same you know, to express the
same thought using different words.

Speaker 2 (17:23):
And that's a challenge for.

Speaker 1 (17:24):
Machines because we as humans understand that you can say
all these different things and they all mean the same thing.

Speaker 2 (17:33):
But machines have to be taught that.
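Here is a toy Python sketch of that idea: several surface forms mapping to one canonical meaning. Real systems learn these mappings from large text corpora rather than hard-coding them; the phrases and labels below are made up just to echo the raining example above.

```python
# Toy illustration of "many ways to say the same thing": several
# surface forms map to one canonical meaning. Real systems learn
# such mappings from large corpora; these entries are hard-coded
# just to make the point.

PARAPHRASES = {
    "it's raining pretty hard today": "heavy_rain",
    "it's really coming down out there": "heavy_rain",
    "it's raining cats and dogs": "heavy_rain",
    "it's pouring outside": "heavy_rain",
}

def interpret(utterance):
    """Map an utterance to a meaning label, or 'unknown'."""
    return PARAPHRASES.get(utterance.lower().strip(), "unknown")

if __name__ == "__main__":
    print(interpret("It's pouring outside"))         # heavy_rain
    print(interpret("Lovely weather we're having"))  # unknown
```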

Speaker 1 (17:36):
So romance novels, as it turns out, are a good
way to teach an AI how to interpret different things
because romance novels are incredibly formulaic. If you were to
break down a romance novel and you outlined it scene by scene so that you understood where the beats in the story were, and who the characters were and their

(17:58):
relationships to one another, you would see that a lot
of romance novels follow the exact same structure, exact same
plot structure, but because they're written by different people, because
the character names and places are often changed from book
to book, I mean, obviously you wouldn't want to write
the same novel forty times. It means that you have

(18:18):
a lot of different ways to express the same ideas.
So if you feed a whole bunch of formulaic novels into an AI to teach it that humans have lots of
different ways to express the same thoughts, that's a pretty
powerful tool. And again, it wasn't the only type of
story that was being fed to Google's AI. It's just

(18:41):
the one that caught a lot of people's attention because
it the headlines right themselves at that point. So one
thing that is really, you know, funny about that is
a lot of people made jokes about Google AI suggesting
different ways to rip a bodice or to make a
bosom heave from the whole romance novel thing. But as
it turns out, there was some real thought given to using

(19:03):
this approach. Now, one of the ways that these assistants
work so well is to tap into information about you
and to store all of that off of the hardware
so that it can anticipate what you want and what
you need and how to fulfill that. So, for example,
if I'm using Amazon and I'm using the Echo and

(19:27):
I'm using Alexa to purchase certain things off the Amazon Store, this ends up tapping into that algorithm that tells Amazon
what I've bought and what I have browsed, and the
sort of stuff I'm interested in, so it can suggest
new things that I might be interested in but didn't
know about.

Speaker 2 (19:47):
All of that is a very powerful tool.

Speaker 1 (19:49):
One of the exceptions here is Apple's Siri. So Apple pretty much locks everything down into the hardware as opposed to sharing it with third parties or putting it in the cloud.
the cloud. That's because Apple's revenue source is selling that
hardware and related services like support plans, like product support

(20:13):
or protection plans for your hardware. That's how Apple makes
its money. It's making it through selling this hardware that
it is producing, as opposed to something like Google, which
until Google Home comes out, it's selling an idea to
you and then selling you to advertisers. So that benefits

(20:35):
Apple in some ways because it means that you can
trust Siri a little more than you could some of
the other assistants because it's mostly contained to your device.
On the flip side, it makes the actual service a
little less useful because it cannot tap into the massive
resources of the Internet the way some of these other
assistants can, because again it's all pretty much contained to

(20:58):
your device. Now, it can pull stuff from the Internet for you, but it's not as interactive as

Speaker 2 (21:04):
Some of these other assistants are.

Speaker 1 (21:07):
So with the possibility of advertising for things like Google, or Amazon's integrated shopping services, you start to
see some real potential for revenue generation on the back end.
But it also brings up some questions about privacy and security.

Speaker 3 (21:28):
Now.

Speaker 1 (21:28):
To look into that matter further, I spoke with an
expert on the subject, the founder of a company called BigID, Dimitri Sirota, and here's what he

Speaker 2 (21:37):
Had to say.

Speaker 3 (21:39):
Well, I think that clearly there's a certain degree of
inevitability around this. I think we've moved from an age
of having these technologies and you can almost think of
this as kind of Web one point oh in terms
of being responsive to the user and personalization really being
about kind of targeting you. I think we're now shifting

(21:59):
to an era of anticipation. You know, the technologies are
becoming smarter and they know more about you because they
touch you on so many levels, whether you're on the web,
whether you're on your mobile, whether you're at the office,
whether you're in the car, whether you're at home. As
is the case with Google Home and Amazon Echo and
similar technologies, that they're no longer just about kind of

(22:24):
responding to a particular action. They're now trying to anticipate
what you'd want. And in some degree, you know, they're
becoming more like your mother or your parent, where they
know so much about you that they anticipate your needs.
And there's a good and the bad to.

Speaker 1 (22:39):
That, right, So just the actions that we would take
in our homes can start to set up these expectations,
for lack of a better word, that our technology will
have about us. For example, the easiest way I think
to illustrate this to today's audience is to talk about
something like the Nest thermostat, where you have set it

(23:01):
a certain way, and it starts to learn what your
preferences are over time, and then it begins to automatically
adjust without you ever having to touch it, to the
point where it's even, quote unquote, seeing when you
are home versus not home. This is the sort of
stuff that when incorporated into a device like Google Home,
can become very powerful. But also, like you said, has

(23:22):
this other side to it, this side that if we
don't pay attention to it, it could become potentially harmful
to us, or at the very least inconvenient to us. So, for example, with us setting up
Google Home so that we would be able to control

(23:43):
lighting and security systems and thermostats, not only would we
have it set up so that it's to our preferences,
but it actually has learned when we're at home versus
when we're not at home, and what that information means
could be potentially very harmful to us. So in your mind,
where does accountability lie? Is this something that we ourselves

(24:07):
are at least partly accountable for that kind of information?
Are the companies that create this technology? Are they accountable?
It's such a cloudy area. Where do you see that?

Speaker 3 (24:19):
So it's a mix now clearly, and I think I
want to kind of emphasize this. You mentioned earlier how
this could become inconvenient. The reality is that we as the consumer want this because it is convenient. We want technologies that are passive. We
don't necessarily want to click buttons. We want technologies that

(24:40):
are intelligent enough to be able to help us make
decisions, right? I mentioned earlier about anticipation. The challenge with convenience is that convenience typically goes against the grain of security and privacy to some degree. If we really want a
mother kind of anticipating our needs, what we want for lunch,

(25:01):
where we want to go for travel. The
negative of having your parent with you after you've kind
of left for university or left for the office, is
that you don't want them intruding in too many places
or knowing too much about you. You want to keep
certain parts of your life separate. And the reality is that there's a trade-off around here. So you know,

(25:21):
I do think that there's a consumer drive towards this.
It's just that we are not necessarily always prepared because
there's a bit of a lag or a delay before
the consequences of having this convenience are fully made aware to us. So in terms of the responsibility of
who cares about this, we as consumers obviously care about it.

(25:42):
You know, the companies like Google and Amazon, obviously, you know,
they would argue that by personalizing service to you, they
are giving you this convenience. But the reality is it's
really up to their best efforts or what they think
is the right combination of privacy and security for now.
The reason for that is the regulators take time to

(26:03):
catch up, they don't necessarily know the latest, they didn't
attend Google I/O, and they don't necessarily know how to
react or respond. So there's always going to be this
lag between what the consumers want, what the companies are
able to deliver in response to that need, and then
what the regulators are able to introduce in terms of

(26:23):
a balance in terms of rules and regulations, and in
this particular case, around privacy and security.

Speaker 1 (26:30):
And I would argue that the companies have it in
their best interest to handle this as carefully as possible
for multiple reasons. One, like you've just pointed out, if they do not, then that means that you're going to get that sort of tick-tock effect, the tick being that they take a certain approach, the tock being that regulations follow, because if there's any mishandling, especially on

(26:55):
a chaotic scale, then there's going to be a harsh response down the line, and it doesn't behoove the companies to invite that. Also, obviously, if they do
not prove to be responsible with that data, that reflects
poorly on them from a consumer standpoint as well, they'll
lose customers. So it's not as if there's no incentive

(27:18):
on the company's part to be careful, but at the
same time they want to be able to leverage that
data to make as good use of it as possible.
We're going to wrap up our discussion about AI assistants and you, at least the twenty sixteen version, after we
come back from these messages. I've often said on this

(27:46):
show that if you look around and you realize that
the service you are using doesn't cost you anything, then
essentially that means that you yourself are the product and
that what you are doing is generating value for another
entity out there, for example like Google, where you're using Google Search and that in turn is generating value for Google.
You yourself are the product being sold to other companies.

(28:10):
So it's one of those things where it's the balance
between the desire to provide this service and to make
revenue off of something beyond just selling a device like the Google Home device, and making certain that
you don't alienate your consumer base or invite particularly restrictive

(28:30):
regulations to that end. Of course, in the United States,
it's one story. In other parts of the world, there
are different views of privacy and security, some of which
go well beyond what is typically seen here in the US.
Do you think Google Home
and things like Amazon's Echo, do you think those are
going to have different levels of acceptance in different parts.

Speaker 2 (28:53):
Of the world?

Speaker 1 (28:54):
And where do you think might be a case where this is probably going to be a big success in one place? We've heard that the Amazon Echo has been a pretty big success so far, versus a market where it may not be.

Speaker 3 (29:08):
Yeah, well, you've seen even things like credit card adoption
differ from country to country just because there are different
kind of cultural mores around credit, around potential privacy implications in terms of knowing a transaction and its origins and so forth. And so

(29:29):
you've seen this in Europe in particular, right, So not
all countries in Europe are equally predisposed to using credit
cards as we are in the US. So yeah, I
think there definitely will be different cultural adoption. But you know,
at the end of the day, like you mentioned rightly,
a lot of these companies it's in their interest to
do a good job, because we as consumers only tend

(29:51):
to shop from people we trust. The challenge, of course,
is that we will sometimes wonder, just like you are
right here in terms of this interview, you know, what
are the implications. You know, you could very quickly go
from a situation that appears like having your mother around you, to a situation where you have Big Brother around you in the nineteen eighty four

(30:12):
kind of sense, in the Orwellian sense, where something is
so aware of every part of your life that maybe
they just know a little bit too much. And so,
you know, we're kind of entering that phase. Right. We've
historically had a few places that we weren't necessarily connected to, right,
and our home, with the exception obviously of our PCs

(30:32):
and our phones, have not been connected. To some degree, we sit down for dinner,
we're not connected to the net. And I think what
this revelation is making people aware of is that kind
of in the future, there'll be very very few places
left that are not networked, where our activities are not

(30:54):
kind of transponding or transmitting or telegraphing kind of our activities,
and you know, it will take time for people to adjust,
and as I mentioned earlier, it won't just
be about consumers and kind of buyers, but also you know,
the governments will have a say, and as you pointed out,
you know, in certain places the governments have already had

(31:15):
a say around privacy, like in Europe with the introduction
of the General Data Protection Regulation to better protect consumers.
And I think we're all going to become a lot
more sensitive to the privacy implications of always being online.

Speaker 1 (31:32):
And I think that we're seeing that as well, just
you know, in other areas of technology. Just recently there
were these reports coming out about the FBI's database of
biometric data and the concerns people have about that, and
even interesting questions.

Speaker 2 (31:51):
Like, do I own my own face? Should I?

Speaker 1 (31:54):
Shouldn't I have access to data about me? And again, you know, we're in a world where our
technology is pervasive, and in many ways that is amazing.

Speaker 2 (32:07):
It is giving us.

Speaker 1 (32:08):
An almost seamless experience of having our desires catered to
before we can even give thought to them. That is
the big promise of the Internet of things and I
love that idea. It is something that really appeals to me.
On the flip side, you start to realize that your
regular actions are creating data, and that data does in
fact have value, different value to different entities out there,

(32:33):
and so having these sorts of technologies and fitting them in.

Speaker 2 (32:37):
Once you have reconciled.

Speaker 1 (32:38):
this idea and you realize that this is going on, then you can start to make those strategies: what is the best way of handling that, both on the
end user side and on the back end side, so
that it is a responsible approach. That's really what your
company is looking into, right? The idea of security

(33:00):
and helping companies protect customer data.

Speaker 3 (33:07):
Yeah, so that's actually very similar to this. So I think, you know, one of
the things you were kind of touching upon is this
kind of expectation of organizations to do a better job
of safeguarding your information, essentially being responsible custodians of your data.
The challenge for most companies, you know, maybe with less

(33:28):
sophistication than at Google, but maybe even Google, is that
they collect so much information about you, and they collect
it in so many different places and so many applications.
It doesn't necessarily mean that all that information is tied together,
but you are leaving digital footprints across organizations, and so
these companies are essentially becoming large data collection points, and

(33:53):
it's hard for you as a consumer to know exactly
what digital footprints you've left. You want to know what
assets you've left with them, and believe it or not,
you know if you think about accounting and how companies
are expected to have responsible tools in place to track
how much revenue comes in, how that money is

(34:14):
getting disbursed, who it's paying. So that's all about accounting
and financial responsibility. On the digital side, there's very little
of that today, and that's kind of the origin of
BigID. You could think of BigID as a
tool set to help big companies understand where their customer
information is, what's at risk or potentially at risk, either

(34:37):
in terms of breach or in terms of misuse, and
then how to better understand how that information is getting
used in the organization, either to help ensure that it's
compliant with regulations, or secondly that it's compliant with their own kind of privacy rules, their own consent agreements that
they've created between themselves and their consumers. I think that

(35:00):
this idea of a ledger or accounting software for privacy
information doesn't as yet exist, and I think increasingly, just
given the number of digital touch points that we have
with the companies we interact with, that it's going to
be certainly a future requirement.
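To make the "where does customer data live" idea concrete, here is a deliberately simplified, hypothetical Python sketch. It is not BigID's actual product or API; the patterns, names, and sample record are invented for illustration. It just scans a text record for things that look like personal information.

```python
# Highly simplified, hypothetical sketch of personal-data discovery:
# scan text records for patterns that look like personal information.
# This is not how BigID's actual product works; it only illustrates
# the general idea of inventorying where customer data shows up.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def find_pii(record):
    """Return any personal-data patterns found in a text record."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(record)
        if matches:
            hits[name] = matches
    return hits

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com or call 404-555-0123."
    print(find_pii(sample))
```

A real data-inventory tool would of course cover far more data types and data stores, but the basic loop, scan records and catalog where sensitive fields appear, is the idea being described.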

Speaker 1 (35:17):
I think that's really interesting, and I'm thankful that there
are organizations like yours that are looking into this to
try and create those best practices, because as we've seen recently,
the scholarship has shown, it takes very few data points
to be able to link some information to a specific person,
and I think a lot of companies out there may

(35:39):
not even be aware of the implications of some of
the data they're collecting, not through any sort of maliciousness.
It's simply, as you point out, that there are so many
of these little digital touch points that you cannot necessarily
anticipate what the consequences are from the very beginning. And
it's amazing to me to think that this is going on.

Speaker 2 (36:02):
Everywhere, and.

Speaker 1 (36:05):
It's a snowball that's already going down the hill.

Speaker 2 (36:08):
It's just going to keep on going.

Speaker 1 (36:10):
It's very reassuring to hear that there are people actively thinking about these issues, and trying to find the best ways of handling that kind of information so that we can avoid as many chaotic moments of absolute failure as possible. Again, I

(36:30):
think a lot of people assume that big companies are
actively pursuing the collection and selling of all of the data,
and that's not the case.

Speaker 2 (36:44):
Across the board. There are companies that are.

Speaker 1 (36:46):
Collecting a great deal of data in the pursuit of
whatever business they do, but it's not necessarily with an intent to do anything, you know,
commercial with that information. But knowing this makes it easier
for those companies to be more responsible and also to
maybe even get to a point where they change up

(37:07):
their practices so that they're only collecting the points of
data that are relevant to their business.

Speaker 3 (37:13):
Yeah. Well, look, certainly the intent of BigID is to help companies be more responsible around their digital assets, their customer assets, which you could argue are probably their most important assets. You know, sometimes you hear people talk about employees, and you know, your most important assets are your employees, and they walk out the door every night. Well,

(37:33):
your customers are pretty valuable too, because if they stop patronizing you, your business suffers, and their loyalty increasingly is very fickle. So if they don't have confidence that you are protecting their personal information, their kind of digital footprints, they'll go somewhere else. They'll go to somebody

(37:54):
that does take better care of that, which again is
why it kind of makes sense to have technology that
gives organizations better tooling to track, manage, protect those digital
personal assets, that digital information that represents kind of who
you are, where you live, where you've been, what you

(38:15):
like when you're going on vacation, et cetera.

Speaker 1 (38:18):
Now, I've got a question for you personally, which is, are you at a point where you would adopt
a technology such as Amazon Echo or Google Home or
would you personally wait a little longer or you know,
where do you stand on that? Because I can tell
you being aware of these issues, I guess it's only

(38:39):
fair that I answer my own question. Being aware of these issues and being cognizant of them, I'm still leaning
toward getting one, knowing what I know, and taking the
risk in order to have the benefit. My wife feels
very differently about it. So that's why I do not
have one. But I'm curious, as you, as an

(38:59):
expert on this subject matter, how you feel about that.

Speaker 3 (39:04):
Yeah, so I think there's two things that come into play. Obviously,
I'm a fifteen year veteran of the security industry with
a company focused on enterprise privacy management now, so I
understand some of the consequences and repercussions. But I'm also
at heart a person that likes technology. I was a
reader of Isaac Asimov as a kid, Heinlein, all the kind of great science fiction writers. And I realize the

(39:25):
future is coming towards us and we could either try
and hide or duck, or we could try and embrace
it and understand the consequences. So for me personally, I
look at this and try to understand how this technology
will impact our lives going forward. So I will be
an embracer of the technology because I think, as I
said at the very beginning, there's a lot of good,

(39:47):
there's a lot of convenience that comes with it, but
it's also important for me to understand some of the
consequences by going through it firsthand, because at the end
of the day, you know, I'm part of a team building technology to better help protect customer information, so it's important that I understand the implications of these new home

(40:08):
automation technologies.

Speaker 1 (40:10):
Excellent. Dimitri Sirota, thank you so much, founder and CEO of BigID. You really helped me, and I hope my listeners, understand a bit more of the implications of this.
I realize that this sort of technology that has this
incredible connection to our personal lives, really a level of
intimacy that most technology does not have, carries with it

(40:34):
some things that can be a little worrisome. But I
agree with you. I think if we enter into it
with open eyes and we are aware of the challenges,
we're not denying that challenges exist, but we are aware of them, that allows us to actually overcome those challenges
and reap the benefits of this really powerful tool. Thank
you so much for coming on the show and talking

(40:55):
with us.

Speaker 3 (40:56):
My pleasure. Take care, bye-bye.

Speaker 1 (40:58):
I think it's really important to remember that Mister Sirota actually said we should embrace technology, but do so

Speaker 2 (41:05):
In a way where we're aware of the.

Speaker 1 (41:07):
Consequences and we are doing our best to mitigate any
negative fallout from this technology.

Speaker 2 (41:14):
Moving forward.

Speaker 1 (41:16):
We shouldn't deny it, we shouldn't try to stop it,
but we should definitely be responsible with the way we
develop it and the way that we use it. Potentially,
it has the capacity to make our lives easier. I mean,
imagine being able to handle everything by just shifting it
over to your personal assistant who lives everywhere. You can

(41:39):
access that personal assistant wherever you might be through whatever
computer or smartphone or standalone device you happen to have
at your disposal at that place, and access all of those features, everything from entertainment to handling travel and stuff that you want taken care of but don't necessarily

(42:00):
want to attend to yourself, so you can save that
time to do something else. That's a really cool idea,
and I love the promise of digital assistants, the idea
that we will slowly get toward this future where the
technology around us anticipates what we need before we can
give voice to it. I love that thought and the

(42:21):
idea that my life just becomes sort of magical as
a result, because the technology is shifting things to my
whim before I can even voice what that whim is,
before I might even be aware there's a whim. I
could be whimless.

Speaker 2 (42:36):
I'm done saying whim.

Speaker 1 (42:38):
Well, that was the twenty sixteen version of AI Assistants and You. Obviously there's a lot more to say now. I mean, some AI assistants have been abandoned, like Cortana, no longer really a thing. Also, Amazon has been cutting way back on its division for its personal assistant that

(43:00):
I will not name at this point, but yeah, there
have been companies that have been taking massive cuts in
those departments, at least as I'm recording these intros and outros,
which by the way, was way back in January of
twenty twenty three. This should be publishing many months after that.
But I'm currently living in a time where those divisions

(43:21):
are getting massive cuts because it turns out that these
assistants have not been particularly valuable as far as revenue generation,
and if you can't generate revenue from a product, eventually
you start to see cutbacks for those products because it
doesn't make sense to keep supporting them if they're just
draining resources and not contributing to the overall health of

(43:44):
the company. So it's been one of those things where
companies have found it difficult to leverage these AI assistants
in a way to generate revenue. And yeah, it may
be that AI assistants are one of those things
that ultimately kind of fade away, unless that changes. Maybe

(44:06):
by the time you're listening to this, that has changed and I'll need to do an update on this episode. Anyway, if you have suggestions for topics I should cover in future episodes of TechStuff, please reach out to me and let me know. One way to do that is to download the iHeartRadio app. It's free to download, free to use. You can navigate over to TechStuff. Just put TechStuff in the search field. It'll pop up. You go

(44:28):
into the podcast page. There's a little microphone icon. If
you click on that, you can leave a voice message
up to thirty seconds in length. If you would prefer
not to do a voice message, you can reach out
via Twitter. The handle for the show is TechStuff HSW, and I'll talk to you again really soon. Tech

(44:52):
Stuff is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.
