Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's How Stuff Works. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with How Stuff Works and iHeartRadio, and I love all things tech. And Oz and Kara from Sleepwalkers have come back to join me yet again. Spoiler alert: we
(00:25):
actually haven't gone anywhere. We just sat here the whole time, and we're recording two episodes back to back. But if you haven't heard our previous discussion, which is kind of a high-level discussion about AI, the potential dangers, and sort of the various messaging we've received about AI, the warning signs and the promises, you should listen to that episode first. If you haven't subscribed
(00:46):
to Sleepwalkers, you should absolutely do that too, because the show is amazing. And today we're gonna talk a little bit more about how different parts of the world are treating AI, whether it's from a government perspective or a business perspective. The technological development of AI, where is that actually happening the fastest? And the answer to
(01:10):
that might surprise you if you haven't been paying close attention to news around the world, and we're going to dive into all of that. So, without any further ado, welcome back to the show, Oz and Kara. Thank you so much, Jonathan. Hi Jonathan, again. Hi again. Yeah. And while we were between shows, we were just, you know,
(01:33):
talking about Lady Gaga and Shakespeare, as you are wont to do. This is kind of what we technology podcasters tend to really, you know, revel in when we're not on mic, or at least not recording. But I wanted to talk first about where we see really aggressive, you know, forward movement
(02:00):
on technology and AI. Like, where are we seeing the most development in AI? Because a lot of people think of Silicon Valley as sort of the place, like it's the breeding ground for all technologies. But as it turns out, that's really a very narrow view, and it's ignoring a giant superpower that is pouring
(02:24):
a lot of resources into AI development. Right, China. Yeah, sorry. China is leading the charge on AI. And one of the guests we had on the show on Sleepwalkers is a guy called Kai-Fu Lee, who was part of the team at Apple that developed the technology
(02:47):
behind Siri in the nineties, went on to run Google China, and is now one of the biggest technology investors in China through a fund called Sinovation Ventures, which invests in all sorts of different technology ventures in China, including several unicorns, billion-dollar companies, one of which is called Megvii, which does facial recognition technology. Kai-Fu
(03:08):
Lee recently wrote a book called AI Superpowers: China, Silicon Valley, and the New World Order. New World Order is quite a resonant phrase, shall we say. But the thesis of the book is simply that China is doing AI a lot better, a lot more aggressively, and with a lot more promise than we are in the US. And
(03:29):
that's for two reasons. Number one, China, unlike the US, is a centralized country, a centralized government, who have said absolutely, with no hesitation, AI is our biggest priority. In twenty seventeen, the Communist Party released what they called the New Generation Artificial Intelligence Development Plan, and the first paragraph read, "AI
(03:53):
has become a new focus of international competition. AI is a strategic technology that will lead the future. The world's major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security." So China have been on this for two years, I mean for longer than two years, but they're taking it immensely
(04:14):
seriously, and we're not. We did have our own presidential executive order in February of this year, but it didn't come with any funding. Kai-Fu's second point, which I'm sure we'll get onto, is that China also has a much richer data set, which is the power behind the throne of AI. Yeah, something that I think a lot of Americans in particular aren't aware of is that
(04:38):
when you look over at China and China's efforts to essentially own AI, I mean, they're saying, we are going to be established as the primary source of AI development by twenty thirty, and they're well on their way to doing that already. You
(04:58):
look at the BAT, that's the three big companies that collectively are valued at more than a trillion dollars, Baidu, Alibaba, and Tencent, and those are already enormous corporations that have deep ties to the state government of China and that are working very
(05:19):
hard in AI. But beyond that, even though, you know, you might think, well, those are maybe three big companies, but how is that that big of a, you know, a force on its own? They also invest heavily in lots of startup companies, including unicorns. In fact, out of the six hundred billion
(05:42):
dollars' worth of unicorns out of China from a couple of years ago, they made up fifty percent of the investment into those companies. Then beyond that, they're investing in companies outside of China and in other parts of the world, including the United States. Tencent, for example, has been investing heavily in video games on the West Coast,
(06:06):
along with lots of other companies out there. So not only are you talking about a country that has an enormous population and therefore an enormous source of information on its own, it has spread out around the globe, so it's gathering information from everywhere. So it's really
(06:29):
this incredibly pervasive system to gather the fuel that is going to power artificial intelligence. And meanwhile, you also have very smart people running very sophisticated laboratories working on the next generation of algorithms and applications of AI. So you've got like the perfect storm over there. Yeah, absolutely.
(06:52):
And I also think, I mean, just in terms of what you're talking about, you know, Tencent being involved in Fortnite, if I'm correct. Yes, yeah, I was just doing the floss right over here, actually. That's right. And also the ownership of a little-known dating app called Grindr, which the US is actually trying to
(07:14):
get back just because of the national security threat that is involved in the Chinese owning so much user data from United States citizens, including US military personnel. I think what's alarming about this is
(07:37):
that we're sort of entering into this new territory of, you know, war and competition, with war being less about, you know, guns and bombs, and, these are not my words, I've read this, more about, you know, bits and bytes, so to speak.
(08:00):
I think you read that from Secretary of State Mike Pompeo. That's right, I'm quoting Mike Pompeo, happily, I think, which actually he said in regards to Huawei, which is obviously in the news quite a bit recently. You know, I think there's the technology race, which is, you know, sort of who's going
(08:22):
to advance quickly and in the best way. But it's also about sort of who owns data and who has access to what data. I think the Chinese government does not really have a problem spying on its citizens. I don't even think we'd call it spying, necessarily.
(08:42):
I think it's sweeping, is what they tend to do, and with that data they can make some pretty, you know, chilling accusations about people who they think are dissenting against the government. You know, in the United States, we are very free with the
(09:03):
data we give away, because I don't think enough people think about how much data they're giving away in any given day. But the US government, hopefully it will remain so, is not as, what's the word, pervasive and aggressive about collecting said data and using it to, you know,
(09:24):
imprison its citizens. So I don't know, I think China can kind of get away with more, and maybe that's why the Chinese government is allowing Chinese businesses to get away with more than the US government. I'll say it's worth pointing out that in the US, you know, we have this phenomenon of surveillance capitalism. So there's a bunch of big firms who take all the data we give them,
(09:47):
use it to model us, make predictions about us, and sell us more stuff.
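To make that "model us and sell us more stuff" loop concrete, here is a minimal, purely illustrative sketch of the propensity scoring behind targeted advertising. Every feature name, weight, and user below is invented for the example; this shows the shape of the idea, not any real platform's system.

```python
import math

# Hypothetical behavioral signals an ad platform might log per user.
# All feature names and weights here are invented for illustration.
WEIGHTS = {
    "viewed_product_page": 1.4,     # strong purchase-intent signal
    "abandoned_cart": 1.8,          # even stronger
    "searched_brand": 0.9,
    "days_since_last_visit": -0.1,  # staler interest lowers the score
}
BIAS = -2.0

def purchase_propensity(features):
    """Toy logistic model: map logged behavior to a 0-1 'likely to buy' score."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

users = {
    "user_a": {"viewed_product_page": 3, "abandoned_cart": 1},
    "user_b": {"searched_brand": 1, "days_since_last_visit": 30},
}

# "Sell us more stuff": target whoever the model scores highest.
for user in sorted(users, key=lambda u: purchase_propensity(users[u]), reverse=True):
    print(user, round(purchase_propensity(users[user]), 3))
```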
In China, it's surveillance statecraft, and the data is not siloed between an Amazon, a Google, a Facebook, and others. It tends to be in the hands of the big technology companies in China, who, given that they're not fully state-owned but are much more
(10:09):
hand in pocket with the government than our technology companies are, share their collected data much more widely, which allows them to make better predictions about what might happen and be determinative about the outcomes of their citizens. I do think it's worth pointing out that a lot of this conversation about the difference between AI in China and the United States that we have on this side of the Pacific
(10:31):
is filtered through our liberal, individualist worldview, which states that, you know, free will, the ability to control one's own outcomes, the recognition of oneself as an individual are the utmost goods, and that worldview simply isn't shared throughout most of China. Now you
(10:53):
can say that will change, you know, as society opens up. You can say there's an inevitable progress, if you want to call it progress, towards our worldview. But the fact is that isn't the worldview in China. And many Chinese citizens have been used to living in a one-party state since Mao Zedong, whenever it was in the nineteen forties, and so there's
(11:15):
a sense of being accustomed to the belief that harmony and the furthering of the state's goals are a rising tide which raises all boats. And so, you know, I think we have this desire to see, and of course it has happened in China, there's been Tiananmen Square, you know,
(11:36):
there have been organic protest movements. But we do have the desire to impose our absolute belief in the importance of the individual and free will on the rest of the world. And, you know, I'm not sure, if you asked the average Chinese citizen, you know, do you resent being surveilled when the highest number of people have been lifted out of poverty in the
(11:58):
fastest time of any country in history, I think the answer might be, you know, perhaps we'd prefer more freedoms, but this isn't the worst thing in the world. Also, China is a largely ethnically homogeneous society. So for the average Han Chinese, the trade-off for this lifting out of poverty and the national pride, which
(12:19):
is absolutely on the rise in China, giving a much stronger sense of national identity, I would argue, than we have here, the trade-off may be worth it. What's very chilling, beyond chilling, is the treatment of the non-Han Chinese in China, the Tibetans, and specifically the Uyghurs, who are really the people who experience the hard end of this surveillance state in China. Hey, guys,
(12:43):
it's Jonathan from the future. I'm just popping in to interrupt here because, as it turned out, we got so into this conversation I totally forgot to put in a break. So let's take a quick break, even while Jonathan in the past and Oz and Kara continue their conversation. We'll get back to that in just a second. Hey, guys,
(13:10):
Jonathan from the future again. We're going to get right back into the episode. Oz was just talking about China and its use of technology and the creation of a surveillance state, and we're gonna pick up with my response. Yeah, and as we mentioned in the last episode, you know, we were talking about bias and how that can
(13:30):
be unintentionally inserted into a system and how that can cause harm. But you could also intentionally create a biased system specifically in order to keep tabs on particular populations that are, you know, minorities, and obviously that could lead to truly horrific, inhumane practices, leading
(13:56):
all the way up to even genocide. Yes, I mean, right now what we're seeing is sort of imprisonment in re-education camps. But I think it's important to note, and this is again, you know, piggybacking off of what Oz was saying, the Chinese Communist Party has often used surveillance as a means for control. The
(14:18):
difference is that artificial intelligence enables a kind of surveillance that we haven't seen before, which is, you know, in China, there is basically a very large operation that they call IJOP, which is the Integrated Joint Operations Platform. And what that does is, it's a database that is sweeping
(14:41):
information from, you know, basically every source imaginable: WiFi, visitor management systems, so, you know, what in America we'd call walking into a building and registering your name to visit an office, WeChat conversations, when you leave the country, when you come back into the country,
(15:03):
all of these things are being swept into a larger, I don't know what the word is for it, it's not a server, I don't know how to say it. It's a system. It's a system that is then making decisions and predictions about who is basically doing right and doing wrong.
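As a purely hypothetical sketch, and emphatically not the real IJOP, whose internals are not public, this is roughly what "sweeping many feeds into one system that scores people" means architecturally. Every feed name, rule, and threshold below is invented for illustration.

```python
from collections import defaultdict

# Invented event feeds standing in for the kinds of sources described:
# WiFi sniffing, building visitor logs, chat keyword hits, border crossings.
events = [
    {"person": "p1", "source": "wifi",    "detail": "flagged_hotspot"},
    {"person": "p1", "source": "border",  "detail": "exit_and_reentry"},
    {"person": "p1", "source": "chat",    "detail": "keyword_match"},
    {"person": "p2", "source": "visitor", "detail": "office_checkin"},
]

# Invented scoring rules. In a real system the criteria are opaque,
# which is exactly the auditability problem raised later in this episode.
RULE_SCORES = {
    "flagged_hotspot": 2,
    "exit_and_reentry": 3,
    "keyword_match": 4,
    "office_checkin": 0,
}

def aggregate_and_score(events):
    """Fuse per-person events from every feed, then total a 'risk' score."""
    per_person = defaultdict(int)
    for event in events:
        per_person[event["person"]] += RULE_SCORES.get(event["detail"], 0)
    return per_person

FLAG_THRESHOLD = 5  # arbitrary cutoff, for illustration only
for person, score in aggregate_and_score(events).items():
    if score >= FLAG_THRESHOLD:
        print(f"{person}: flagged for review (score={score})")
```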
(15:24):
And we just simply don't have anything like that in the United States right now. I mean, certainly there are companies that can use our information in the United States to make predictions about us, to sell us things, to basically set our insurance premiums. But as far as just an integrated system that is making decisions
(15:45):
about its people and then also using it to imprison its people, it's just unprecedented. And, you know, Human Rights Watch is basically calling it a humanitarian crisis, which I think it is, when you think about, you know, your own country basically spying on your conversations, really, I mean, obviously without your consent.
(16:07):
People are stopped all the time in China and their phones are taken and read through, like, that's a sort of normal day, and then that information is used to put you in a re-education camp. I mean, I think if most Americans knew that, which I don't think they do, they would be alarmed. But let's be clear, you know, we're very far
(16:30):
from having re-education camps in the US, but we do, and it's not on a state level or a national level, make decisions about people's outcomes with stuff like credit scores. I mean, the credit scores. You know, in China it's explicitly ethnic, right? Well, again, in China they wouldn't say that, but effectively
(16:50):
it's explicitly ethnic against the Uyghurs. In the US, we have credit scores, and guess what, most people who grow up with a certain amount of privilege know what their credit score is, understand the principle of the credit score, get a credit card as early as they can, start building their credit, making monthly payoffs, and then when it comes time to get a mortgage and buy a house and move to a nice neighborhood, guess what, all the
(17:11):
pieces are in place. But for many people who don't grow up with that privilege, the credit score comes as a complete surprise at a certain point in life, that there's even a notion of having bad credit. All of a sudden, you know, you've got bad credit without ever having realized that you had this credit score you were supposed to be working on. And guess what, you can't move out of your neighborhood, you can't get what you want, you can't buy your children the things
(17:32):
they need. So we do have this here. It's not state policy, but we've effectively outsourced this predictive technology about what people are going to do in the future to private corporations, who use it to profit from us rather than to control us. But we shouldn't beat China too hard with this stick when we have certain analogous practices here in the US. Sure,
(17:54):
I mean, I'm sure there are plenty of people who do argue that, given the practices of certain actual state-level organizations in the United States, we should be concerned about what sort of systems they might be using. I'm thinking specifically of the NSA, because it wasn't that long ago when we
(18:15):
were having enormous headlines about the NSA and its practices of trying to have, you know, essentially listening points just outside of major communications channels, whether it was Internet service providers or the telecommunications industry in general. And you start thinking, well, if you start applying these
(18:36):
kinds of AI surveillance programs, and the NSA is all about trying to detect communications, a net that everyone gets lumped into and not just bad actors, then you start getting those concerns. And, you know, we saw plenty of that even without the AI element when the NSA stories were breaking a couple
(18:58):
of years ago, even just to the point where we were seeing people in the NSA behaving poorly, like using the information gathered to track down old relationships, you know, like old boyfriends and girlfriends. Not an ethical use of your power if you're looking in
(19:18):
on communications. So we know that it doesn't have to be an official state-line policy for this to either be misused in an unauthorized way or put to use in a way that maybe isn't immoral, but you could at least argue it's amoral, with a lot of the business practices, because morality
(19:42):
is not a consideration when you're looking at how do we make more revenue, how do we make a greater profit. And, you know, you're essentially checking off boxes saying, right, here's how we can make this more efficient, have a lower cost to us, a greater payoff in the long run. And so we start
(20:02):
to see how, exactly as you were saying, Oz, while it is easy to point to another country and say these policies are clearly harmful to people and are therefore bad, we also need to make sure we're reflecting on the environment that we ourselves are in when we, I'm sorry, go ahead. Oh no, I
(20:24):
was just gonna make one comment, you know. For example, in the state of Arizona, if you applied for a driver's license, your photo was then put into a database that was being used for facial recognition, right, and that was without drivers' consent. Basically, law enforcement came back and
(20:45):
was like, whoa, you know, we think people know that this is going on. That was their best answer. It wasn't like, oh, there was, you know, fine print that people didn't read. It was basically like, well, we thought people knew. And so, you know, we can't exonerate, yeah, sort of our own home turf, because, I don't even know if you would call this misuse, it just seems like exploitation for
(21:10):
gain, for basically the gains of, I think, police departments that are seeing this technology, recognizing how powerful it is, but also realizing that it's something that needs to be regulated and not knowing how, or not really caring. And I think, Jonathan, you used the great example of the NSA, which I think ties very
(21:31):
neatly to what we're talking about. Edward Snowden had a very haunting phrase, which was "turnkey tyranny," to describe, basically, once you build infrastructure for something, anything can happen. And the technological infrastructure we have here in the US for surveillance and social control is, I mean, we have fewer cameras, you know, there are more distinctions between companies,
(21:52):
but effectively the infrastructure exists to do what's happening in China here. And that's very frightening, because, you know, as we know from Henry Ford's wonderful pressure to build roads in the US: guess what, you build the roads, people are going to drive cars and not take the train. So once this infrastructure exists, and you add to that, you know, a leader who doesn't respect norms, or wartime,
(22:16):
all of a sudden, the barriers that we think are so solid to protect that infrastructure from being weaponized against us start to erode very, very quickly. And I think that's the moment we're in right now. And that's another reason why we wanted to call this podcast Sleepwalkers: because if we don't insist on those norms and legal protections, bit by bit, the infrastructure will have its own logic, and
(22:37):
one emergency after another, remember the Patriot Act, will allow this technology to be used against us in ways that we currently find sickening and horrifying and terrifying in China. They could easily come home. Yeah. Yeah, it's a sobering point. And on that point, I think we're going to take a quick break so I can suck my thumb in
(22:57):
the corner. Okay, alright. Pruny thumb aside, now we've talked about
some of the differences between, say, China and the United States. One of the things I thought was interesting is the idea that in China you have sort of
(23:21):
a very concrete strategy in place, right, a top-down strategy, and pretty much all the companies that are working on this strategy are in alignment with it to some degree or another. Some are very much in step with the state government. Others are, to a lesser degree, perhaps,
(23:44):
but still, you know, following along with the strategy. Meanwhile, in the United States, it's much more of this competitive, this classic capitalist idea of competition in the space, where you have all these different pockets that are all trying to own AI themselves, competing against each other. So we're getting lots of interesting innovation, but not nearly at
(24:06):
the same speed or scale as we're seeing in China. Is that more or less a correct assumption, or am I way off base here? No, I think that's completely fair. You know, this year, President Trump did announce an executive order on AI in February, and so that was seen as a kind
(24:28):
of belated response to what China are doing. And that executive order that the President issued had four major components. Number one, to set AI as a national priority of the United States. Number two, interestingly, to get better at data sharing between the government and private enterprise. Number three, to set ethical guidelines on how AI is used
(24:50):
in terms of surveillance and the military. And number four, to make sure that the United States is doing the best it can to educate the next generation of engineers and AI scientists. Now, those are all good things, apart from maybe number two, the data sharing. But guess how much funding that executive order came with. I'm going to guess that was a big old goose egg. Zero dollars,
(25:13):
zero dollars. Whereas Shenzhen, which is a city in China, is spending fifteen billion dollars this year. That's just one of the many, many cities and provinces in China spending not even federal money but regional money on AI. So in that context you can say, oh, you know, we're getting up to speed, we're responding, but, you know, you've got to put your
(25:34):
money where your mouth is, and we're not doing that. Yeah. And on a related note, you know, when we talk about things like the regulations, the laws in place, like how do we then create policies that ensure that we're using AI responsibly and productively and
(25:55):
not in ways that are harmful or destructive, at least to our own citizens, and hopefully not to anyone at all. We tend to see that lag behind, just like we do with technology in general. We tend to see technology innovations far outdistancing our ability to incorporate that into our legal, you know, kind
(26:18):
of massive infrastructure. Understandably so; obviously that system is going to move much more slowly than technological innovation, but it does create these pain points, whether it's, you know, like autonomous cars. You know, you have different states in the United States that will allow for some degree of autonomous car testing. Meanwhile, you've got companies like Tesla that
(26:41):
are rolling out vehicles that have an Autopilot feature, which, to the company's credit, they say is not meant to be taken as an autonomous system. But tell that to all the people going down the highway who are leaning back in their cars with their hands off the wheel. You could argue that that falls to the responsibility of the individuals. But if enough individuals are doing it, you've
(27:03):
got to start asking, what's the value of actually having the system in place? We're seeing that as being kind of a disparity as well, right? We're seeing this gap between what we're capable of and what we should be doing, or at least what our legal system says we should be doing. And meanwhile,
(27:24):
in contrast to that, and, Kara, you mentioned the EU a couple of times in our last podcast, over in the European Union, you have committees dedicated to thinking about these sorts of things and starting to propose potential strategies, or even presenting different options for strategies, for
(27:46):
dealing with AI. Even beyond these cases that I'm talking about, like, they're going to the point where, I remember reporting on this a couple of years ago for Forward Thinking, a series I used to do, where a committee in the EU was proposing the idea of granting personhood to artificially
(28:06):
intelligent systems. And on the face of it, that sounds absurd to a lot of people, the idea of granting a non-human the concept of personhood, despite the fact that in the United States we have corporations, which are exactly that. But yeah, like, hey, hey, wait, corporations can be people too, and can give money, as
(28:30):
long as they can give money to politicians. Absolutely. So why can't robots? But the point that the EU committee was making was not that robots have feelings and we should really be considerate of them, but rather that there needs to be some way to start to establish concepts like accountability. In the case where some form of AI construct or
(28:53):
robot causes harm in the form of damages or injury, how do you determine who's at fault? We touched on this a little bit in the last episode. So to me, it's fascinating that those are discussions that are popping up, like, very serious discussions, in the EU. And, Oz, I think when we met briefly in New
(29:15):
York a couple of weeks ago, we talked about the fact that this is an area of the world not known for making incredible advances in AI technology. It's not like you look at Europe and say this is where the hotbed for AI development is. But it is a region that's dedicating a lot of consideration to the
(29:39):
implications of AI in day-to-day lives. Yeah, absolutely. I mean, the EU has been out in front on thinking about technology and AI. Obviously, GDPR, a big act on regulating data and informing people about how their data is being used, was passed in the EU, which triggered a whole series of conversations in the US
(30:02):
about data regulation; they're starting now, and we may see them come into law in the next two or three or four years. So the EU, good old Europe, despite its stifling effects on innovation, I think, and on technological innovation, is an innovator in terms of regulation, which probably sounds like a contradiction in terms. And this
(30:22):
is an area that Kara's looked into really closely and has some, I think, very interesting insights about. Yeah. Yeah, well, I mean, as I was saying to you earlier, actually, on the former, I guess it was the episode before this. Um, yeah, you know, I think it's interesting. The EU's approach, up until very recently, you know,
(30:45):
was to basically collect a group of fifty-two experts and, you know, create these seven core requirements for artificial intelligence. Now, I mean, I can name them, you know. One is human agency and oversight. The other is technical robustness. The third is privacy and data governance. The fourth is transparency. The fifth is that AI systems should be sustainable,
(31:07):
whatever that means. AI systems should be auditable, as we were talking about with the black box problem. You know, AI should also be available to all. This is what we talk about in terms of bias, gender bias, racial bias, all of those things. But I have a
(31:28):
bit of an issue with it, just in that it doesn't seem to have much action involved in it. There aren't many action items. I think it is important for governments to set standards, you know, as a sort of first step, and I think it also primes, you know, sort of
(31:49):
average citizens to be aware of misuse, right? Because when you set up requirements, it means that these are things that can be misused, and be misused easily. And I will say, you know, whenever I go home, every time you go onto a website on your phone, you get notifications saying, are you willing to let this website put cookies? Are you willing to share your data? Are you willing
(32:11):
to accept targeted ads? And the net effect, socially, of that reminder that your data is valuable and you have a choice, every single day, fifty times, that inevitably causes a shift in consciousness and a shift in citizenship. So I do think it's easy to say this regulation is toothless, but I also think it can, you know, make people think, and it's a huge culture shift. I mean, when you just think,
(32:31):
I mean, think about it. Think about cigarettes, right? I mean, you just don't smoke inside, right? Anyway, legally you can't. But I'm saying, you know, just the shift in public perception about smoking, the shift in public perspective about sugar, for example, at least in the United States. You know, those were all things that at
(32:53):
one point were not really discussed or talked about, and, you know, everyone kind of smoked cigarettes, and that's what people did, and, you know, then they got lung cancer. But, you know, I think as long as there is a public discourse about privacy, people will begin to care more and more about privacy. So I think, you know, the EU presenting these seven requirements is
(33:15):
a step in the right direction, absolutely, and, if anything, will just make people think about these seven tenets when they're going about their sort of everyday lives. And I think it will make companies, you know, focus on building these requirements into whatever they are developing. You know,
(33:35):
I think it's important for companies that are developing new technologies to think about bias, especially when the people who are developing the technologies might be quite homogeneous. So I think it's worth saying that, you know, we tend to see the new world order in terms of America versus China. You know, not to wave the flag for Europe, which, unfortunately, my country may no longer be
(33:59):
part of. But it's a huge group of nations with tremendous purchasing power, and it's a genuinely significant market for US technology companies. And so, you know, the effects of this may seem far off and slightly irrelevant, but, you know, these fines that the EU is already slapping Facebook and Google with, they're not necessarily material to the
(34:20):
business yet. But, you know, Europe as a voice for regulation is an important one, because, actually, these new technologies, AI technologies, they tend to affect the poorest in society and the most vulnerable in society in the most negative ways. So algorithmic discrimination, the replacement of, you know,
(34:41):
low-education, shall we say, jobs, like driving or packaging. You know, these are things that are being experienced at the hard edge by people who don't have much of a voice in the political system. And so the fact that the EU is taking up the mantle, even though it comes with hypocrisy, like using facial recognition at the ports of entry, and even though, you know, the
(35:03):
level of the fines that can be applied to companies like Google and Facebook can never hurt their bottom line, I would say, is an important and valuable thing that is happening. And so, you know, I think the EU can offer some point of reference for how we may think about regulating AI and technology in the US in the future. And I think that, you know, you've heard
(35:25):
futurists say we need to have some very serious conversations about AI and the ethics of AI and how we can make certain we're being responsible custodians of AI. To me, that was one of those things where it was like, we need to talk about talking about this. Like, that was the conversation for a very
(35:46):
long time: we need to talk about talking about it. It's like having a meeting to talk about when you're gonna have your meeting. Have you ever worked in an office before? Oh, yeah. No, I worked in a college administrative office before, so I've had meetings about how we can have fewer meetings, and this is not the way. Yeah. So, having the EU actually
(36:09):
take this step, even if you were to argue, like, this is a very early step and maybe there's not a lot of teeth to it yet, it is a step, as opposed to what I've seen elsewhere, where it's been talking about taking a step but not even doing that much. So I'm encouraged by it, because it actually moves the conversation forward. Instead of us saying we need
(36:30):
to have this conversation, the conversation has started. I don't think it's over yet. I think that this is a good way to actually force more parties to get involved and think about it, and maybe even start proactively thinking, how can I make certain that the thing we are building is actually being built in a responsible way,
(36:52):
one where we can mitigate as many unintended consequences as possible, knowing that that is impossible to do entirely, but really trying for it. So to me, this is one of those conversations I could have all day long. But I know you guys need to get going, because there's going to be someone else who's going to have to use the studio you guys
(37:12):
are in, so I'm not going to have us go all day long on this. Plus, we're recording this on a holiday weekend, and I know everybody wants to get home, so we're gonna wrap it up. But I want to thank you guys so much for agreeing to come onto my show. It's a fascinating conversation, and you guys obviously have some great perspectives on this.
(37:34):
And again, to my listeners out there, if you haven't subscribed to Sleepwalkers, go check it out. It's a really well-done show. I've been very impressed, as someone who has a solo show. Most of the time I listen to it, I just think, wow, that's so awesome,
(37:54):
that's so great. I wish I were on that show every now and then. So hint, hint, if you ever need me. You just, we'd love it. We would absolutely love it. Let's talk about that. We had a great time today. Thank you. Thank you guys. And you guys, if you want to get in touch with me, drop me a line, say, hey, you need to talk about this other topic,
(38:16):
or, those guys were great, have them back on the show as soon as possible. Send me an email. It's techstuff at howstuffworks dot com. Pop on over to the website, that's techstuffpodcast dot com. You'll find the archive of all of my episodes. If you are really bored, there are more than a thousand of them, so have at it. And then you can pop on
(38:36):
over to the merchandise store, and you can finally get yourself that TechStuff mug that you've been wanting all this time, and I'll talk to you again really soon. TechStuff is a production of iHeartRadio's How Stuff Works. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you
(38:58):
listen to your favorite shows.