Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies. History is
riddled with unexplained events. You can turn back now or
learn the stuff they don't want you to know. A
production of I Heart Radio. Hello, welcome back to the show.
(00:25):
My name is Matt, my name is Noel. They call me Ben. We are joined today by our guest super producer Seth Nicholas Johnson. Most importantly, you are you. You
are here, and that makes this the stuff they don't
want you to know. This is a global episode, it's
an international episode, and we do hope you enjoy it.
(00:47):
Let's just let's put all the cards on the table
real quick here. China is often vilified in Western media,
Europe and the US and Australia, of course, and often
for good reason. But if you look at the day-to-day lives of most people, we all kind of want the same things, right? And
(01:07):
China is a country filled with people just like any other.
So most of the day to day stuff that happens
in China never makes it to the domestic news, much
less the international news. Just like here in the States.
You know, you don't see a big CNN headline that
is like so and so got a parking ticket. So
someone in China might get a parking ticket. They might
have to go get married or divorced and do the
(01:29):
paperwork for one reason or another. They might also, unfortunately
find themselves in court. And that is where today's episode begins. Guys,
I don't want to spoil too much just yet, but
I'm thinking we can just nickname this episode Law and Order: China Edition. Um, can I be Sam Waterston?
(01:51):
Wasn't he? He was... he was O.G. Law and Order, right? Like, the original series. I don't know, God
knows how many of those there are. Dick Wolf is
a madman, you said on our recent episode of Ridiculous History, Ben,
something must have happened then, I wonder. Yeah, I was
just about to say that here too. So it's like,
if you look at Law and Order spinoffs, you know, uh,
you will see the same episodes kind of occur with
(02:13):
just different people, Mad Libs style. Prom is a big one. What happened at your prom, Dick Wolf? Are you okay?
But in this, uh... I would also, before you commit to Sam Waterston, Noel, point out that Jeff Goldblum is in Law and Order: Criminal Intent
(02:33):
for some reason. So that might be that might affect
your decision. But I just like the name Sam Waterston. I don't know why, it just does it for me. Son of the water. And speaking of water, let's travel across the Pacific to China. Here are the facts. If you are listening and you're from the US
(02:55):
or you live in the US now, you're already on some level aware of how that country's judicial system works.
The three branches of US government are in theory separate,
meaning that the courts are again in theory, supposed to
be beyond the reach of political theatrics and political parties.
And we can't say "in theory" often enough because,
(03:19):
especially in recent years, the courts have been highly politicized and the court of public opinion has, to an arguable degree, affected judicial decisions. So what are
the similarities and differences between the judicial system in China
and the judicial system in the US? Honestly, I didn't really
(03:40):
realize the distinction between these in quite as cut-and-dried a way as we're going to lay out here, Ben.
But you do a really fantastic job of spelling out
exactly the difference. There is civil law and then there
is common law. And I guess the term common law
to me is heavily associated with, like, a common law wife, you know, a common law partner, like someone where, by virtue of just doing something for long enough,
(04:01):
it kind of becomes the thing. Um. But no, it's
more than that. Common law is essentially law that is
based on case law. Case law that is of course
published judicial opinions is prioritized, which means that someone who
is presiding over a case, whether it be a judge
or some sort of tribunal or what have you, um,
(04:22):
would prioritize you know, uh, the cases of the past
to determine how to act in a particular case that
is in front of them um at that moment. The
idea of legal precedent would be a good way of
thinking of it. Um. And some countries that use this
type of law include places like the United States, England, India,
and Canada. Yeah, and then the difference here is that
(04:44):
in China you're you're dealing with civil law. It's a
civil law system and it is a little bit different.
It's, ah, just to comment on the common law system: I think, and what I remember being told, is that the reason for having that is so that the law can evolve as society evolves. Within
civil law countries like China, Uh, there are there are
(05:08):
just statutes. There are civil like rules that exist, and
what those rules state on paper is what goes, right? If you do what this law states not
to do, then you are in trouble and you've broken
that statute. And I mean, conceivably doesn't that simplify things
a little bit? Uh? It definitely becomes a little bit
(05:31):
more didactic and, what's the word, Hammurabian? Like
the idea of this like code that if you break
the code then you get the punishment. Then it's all
written down. So it certainly keeps you from having
to jump through all these hoops and figure out, well, no,
in this case, they did this, and that means that
we should apply this you know, precedent to this, and
oh now it's actually changed a little bit, so we
have to like retool things. I mean, you could definitely
(05:52):
argue that civil law is the more straightforward way of
doing it, but it also leads you to more, kind
of draconian measures. Right, Yeah, it's non negotiable is the
big difference. And one of the big things to keep
in mind here is that you know, you've got the
law that or the you know, the civil codes. Once
those are broken, the way it's kind of enacted is
(06:13):
a little different too, right, Ben? And maybe correct me if anything is inaccurate here. But the
judge then just collects all the data on that case,
everything that is known, all the facts, and then applies
basically the punishments that are also within that code to
the person that broke the code. Yeah, yeah, just so.
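For listeners who think in code, here is a purely illustrative way to picture that "facts in, codified punishment out" structure. The offenses and penalties below are invented placeholders, not real statutes from any country's civil code; the point is only that the decision is a lookup against written rules rather than a weighing of precedent.

```python
# Purely illustrative sketch of the "facts in, codified punishment out" idea
# described above. The offenses and penalties are invented placeholders.
CODE = {
    "dangerous driving": "fine plus license suspension",
    "credit card fraud": "fixed prison term set by statute",
}

def apply_statute(established_facts: dict) -> str:
    """Map the facts a judge has collected onto the codified consequence."""
    offense = established_facts.get("offense")
    # No weighing of precedent: if the statute names it, the statute decides it.
    return CODE.get(offense, "no statute violated, no punishment")

print(apply_statute({"offense": "dangerous driving"}))
```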
(06:34):
And there are other nuts and bolts aspects of this,
like the role of the lawyer, the role of the judge, you know, fact finder versus issuer of decrees. And this also
plays out in the way a jury trial would work.
But it's important to say China is not unique in
the idea of civil law. Countries like Japan, Germany, France, Spain,
(06:56):
a lot of Continental Europe uses this approach, and a
lot of country across the world use a mix of
features from both civil and common law. And then in
the case of China, some of this is also based
on pre existing Chinese judicial approaches from the past. Well,
certainly there must be ways to change those statutes or
those codes if if the need arises, but it would
(07:18):
probably just take a little bit more doing, right? Well, sure, Xi Jinping. Yeah, exactly, fair enough. And
really quickly, I think I coined a new term, Hammurabian, and I hope everybody knows what I meant by that, I think. But the Code of Hammurabi was, like, I think, if not the earliest, one of the earliest codified sets of laws that would apply, you know, to a society. Yeah, and
(07:39):
the Code of Hammurabi may not have aged well, because it will seem a little bit absolutist and draconian, but it is the first comprehensive written record of: if you do X, we will do Y. And, you know, people need,
(08:00):
civilizations, I would say, need that kind of guidance. But we have to say, we've laid out, you know, this sort of, I would say, not even philosophical difference, but structural difference, and neither... they
both have advantages and disadvantages, but it's not as if
one is evil and one is noble. They're just different approaches.
(08:21):
Like, you know, a quesadilla and a burrito have the same ingredients. Anyway, perhaps the more important difference between the Chinese and US judiciary systems here is a functional difference: the judicial branch in China, and I hope this isn't a hot take to say, isn't really independent,
(08:44):
not really, because it's an extension of the government, has
very little separation, meaning that it is part of the
ruling political party, the Communist Party of China, that is in charge of the PRC, or People's Republic of China.
This might not seem like a big, big difference for
little transgressions. Somebody's littering, uh, somebody has like stolen some
(09:09):
DVDs if there are still DVDs around. Those things are
kind of apolitical, depending on the content of the DVD, and the Chinese system in that case can function like an independent branch of government, because no one's talking about something like Tiananmen Square, no one's talking about the Uyghurs.
But as you can imagine, this difference comes to the
(09:30):
forefront in a huge way whenever a case has anything
that might be seen as a political element, and often
the way this is reported in the West, UH, those
political elements may seem really small and innocuous, but they
are a big, big deal domestically. And yeah, this isn't meant to disparage the Chinese government unduly. Those two
(09:54):
things are simple structural facts. Of course, a lack of
independence can clearly lead to corruption. It can open
the door for conspiracy and even tyranny. But we can't
forget that, you know. Like I said earlier, the U
S courts over the past several years and decades especially
have found themselves embroiled in the world of politics despite
(10:15):
that theoretical just the facts approach, you know. I like
to think about it as: we can't throw stones from our own glass courthouse, or from within it, I guess, because it all comes crashing down. I don't know. At least China sort of acknowledges that that's how it is.
For the most part, there's no illusions of separation. I
(10:36):
think here we're inching more. Not to be too hot
of a take on it, but I think we all
kind of see it. We're inching more and more towards
something that feels like there is very little divide, you know.
And I think there's a lot of like legal apologists
and kind of purists that maybe say, no, no, no,
it's there it's still there. It's like part of the
tradition and it must remain. But you know, functionally, it's sure.
(10:57):
Sometimes it doesn't seem like that divide exists the
way it might. And we really see politics entering into
the judicial branch as you go higher in the circuit
system until you get to that old Supreme Court. And
you know that, I would say, the politics kind of start at the top level and then trickle down. It's trickle-down politics. Well, yeah, yeah, and you know,
(11:24):
we we could spend a lot of time here just
talking about the philosophical differences between you know, China and
the United States when it comes to government. When we
say there's one party in China, it is true that
there are two parties in the US, and that is
a difference. But if you know, you try and you
try and delve too deeply into that you know, highly
(11:48):
connected world of donors and movers and shakers, it's it
gets a little muddy. Yeah, I mean you're making you're
making a solid point here because, uh, there's something else
that I think needs to be pointed out more often
about the American judicial system. If a judge is elected,
then are they not inherently in some way involved in
(12:12):
the world of politics. I mean, that's just an inescapable fact.
But for our purposes today, those are the two big
differences to keep in mind. One, China's judicial system is
based in civil law, and two, it is in practice inherently politicized. Big question: why do these differences matter? Well, first and foremost, they matter a lot to the people
(12:34):
on trial. That's that's a rough one. Uh. And Second,
these differences accelerate because we live in a world where
technology is changing the game, changing all games, I would
argue in a very big way, a profound way. If
you have ever been involved with any aspect of your
own country's legal system, then you are doubtlessly familiar with
(12:56):
the slow grind of justice. Cases can take eight years to
reach a conclusion. The courts, at least here in the US,
are regularly backed up. They're overloaded with cases, so much
so that it is a ubiquitous trope of most fiction
to see the overloaded public defenders saying, you know you're
my one hundred and forty-ninth case today. I would
(13:19):
do better if I was on cocaine, but I can't
afford it on this salary, etcetera, etcetera. The courts were, like, even before the rise of COVID, courts were backed up. And this is pretty common in a lot of countries, and in the US it's pretty understandable because we're talking
about a country with almost three hundred and thirty million people. Now,
(13:39):
imagine how convoluted all these variables become when we're in
a country with one point three billion people, like the
population of China. There's a big difference between a million and a billion. We don't have to waste a lot of time on it, but the difference is almost certainly bigger
than most people imagine. And we are living in this
age of massive technological innovation. When these two factors meet,
(14:03):
that need for more efficiency and this meteoric rise in technological growth, when those two combine,
we see the makings of something incredibly dangerous. Today's question
is a question that China asked recently. They said, what
if we can make our justice system more efficient? What
(14:24):
if we can do it by taking the human element out,
taking some of the human element out of the human
pursuit of justice. It's a crazy question, and it's one
that gets us to some scary places. What are we
talking about. We'll tell you after a word from our sponsor.
(14:48):
Here's where it gets crazy. So Matt, Noel, Seth, fellow conspiracy realists, this is one of those... Thank you. This
is one of those episodes that, in the course of
research and the course of digging in, always makes me
wonder what's gonna go down when we reach customs in China?
(15:10):
You know what I mean. I don't think they care
because we're not, you know... we're not, like, big political figures. We're not, like, what's his name, John Cena, having to apologize for calling Taiwan a country. No comment. Right. But the interesting... well, not getting in legally, I guess. But the interesting part of this
(15:32):
is that what we're about to talk about will inevitably,
I would argue, apply to things outside of the courtroom.
And shortly you'll see why. Right now, Uh, we we
gotta set the stage. You may have heard us briefly
mention this development in a previous Strange News segment, and
(15:52):
it's it's such a big story that we pretty much said,
hang on, we're gonna pause. This is an episode and
we're gonna come back to it. And that's what we're doing.
Because the rumors are true. The government of China has
indeed created an artificial intelligence program for its judicial system.
It's a machine that you're going to be hearing called
(16:13):
System two oh six, or the two oh six system, but that's not entirely true. The new thing is based on System two oh six, which was a pre-existing AI tool. Is the name System two oh six great? You know, not really, in my opinion. I think there are cooler names out there. But I guess they
(16:34):
didn't read our email. So please, Seth, don't sell yourself short, I'm sure you had something. We need to at least put these out in the world because they're pretty epically great. I don't know. I don't think Judge-a-Tron Nine Thousand is something that people want to hear when they're getting prison time. No one wants to be locked up by Robo-Busted. Well,
I mean, how about we just call it Mr Roboto
(16:57):
and call it a day. Yeah, you know, it has to have an illegal-slash-legal pun in it, and it's just something
that makes it feel like it's your friendly neighborhood robot
that's just gonna send you to prison or put you
to death. Oh, like the government minders that would regularly pop up if you were browsing the web in a Chinese internet cafe. They're, like, these little,
(17:18):
very cute cartoon police characters that pop up and say, hey,
just you know, we're keeping an eye on you, make
sure everything is safe. It's like a Clippy or something like that, but little. And, it's funny, slight tangent: one of my old professors, a dear friend, pointed out, he's a former Chinese national, until he left for the US, he said,
(17:42):
the weirdest thing about those cartoons was that if you
looked closely, they had blue eyes. I don't know why
that always astounded him, but anyhow, Yes, the internet access
game in China is very different, and they are leading the charge. They are leading the charge on AI.
I would point out our earlier episode where the various
(18:05):
members of the Pentagon and the Western Defense Establishment said
that the US was just not ready at this point,
severely outclassed, which is unfortunately true from all indications, which
isn't something that we're used to, right? We're used to being, like, on the bleeding edge of all this kind of stuff. So for them to admit that openly, you've gotta give them props for the
(18:26):
humility on that one. But I mean,
when we say that they're leading in AI, it's like
across the board, they're using it in all aspects of life.
Like we talked about Sesame Credit and things like that. I mean, maybe that's not quite the same, but at the very least, they're using technology with algorithmic and AI components in tons of ways to either streamline processes or
(18:47):
what have you. And this one just feels a little bit less benign. I don't know. Yeah,
I don't know, man. Look, I think this and Sesame Credit are gonna be having some fun times together when they get fully integrated. Now, I wasn't implying that Sesame Credit is benign. And you're right, that's a match made in heaven. But, you know, there
(19:08):
certainly are probably more ways that China's using AI, but these are two very spooky ones. Also, Sesame Credit is one of the things that System two oh six takes into consideration. Just in case anyone doesn't remember, Sesame Credit is essentially a ranking of social status based on things that you do and that are on your record, right,
(19:30):
and it gives you higher or lower tier access to goods,
to services, essentially to government services, you know, things like travel.
And it's also, ah, it's not just individuals' actions,
it's their associations. So if you went to high school
with someone who is considered of a lower credit rating,
than that can affect you too, even if you don't
(19:52):
really kick it with them. But yeah, they're building the car while they're driving it. Aren't we all? All right, so really quickly,
I just want to mention the newspaper article, if that's okay, Ben,
that I think we were initially alerted to this story from.
It's in the South China Morning Post. The title is
(20:15):
Chinese scientists develop ai prosecutor that can press its own charges.
It's written by Stephen Chen, and it was published this past December. You may not be able to find it, or if you do find it, you may not be able to access it without, you know, paying some money. You can also find it on AsiaOne
(20:35):
dot com. That's where we located it. And both of
those reports, and most of the reports
are quoting South China Morning Post. You can find some
other domestic Chinese media that has been translated to English
if you don't read Chinese. There was a paper published this past December in a journal called Management Review,
(20:59):
which is peer reviewed, but peer reviewed domestically. And in this, Professor Shi Yong, who is a director of the Chinese Academy of Sciences' Big Data and Knowledge Management Laboratory,
broke down the basics of how this AI works. It
is a machine. It was built and tested in Shanghai,
(21:20):
and it makes sense it was built and tested in
Shanghai because Shanghai has the largest and busiest district prosecution office.
For like a very rough comparison, folks, imagine this if
you're familiar with the US, this would be like California,
which is the United States' biggest court system. We're talking like twelve percent of all litigation. Imagine California saying, sheesh,
(21:45):
we have to make our lives easier somehow. Cut to the infomercial, but it's the California legal system saying there's got to be a better way. That's what people in Shanghai were thinking as well. Professor Shi argues that this technology could reduce prosecutors' daily workload and
let them focus on more difficult tasks that need that
(22:08):
human touch. So it's often being framed not as a hard-hitting Law and Order-style prosecutor, but as a twenty-four-hour judge's assistant. So very much a
clippy vibe. Can we also just say that the name
of the District Prosecution office in Shanghai is a delight.
It is the Pudong People's Procuratorate. Procuratorate, I don't know
(22:33):
this word. I'm used to words like protectorate, but it's like something that procures things, and I don't know. It's just a very new word for me. I looked it up quite a bit. The procurator is kind of the same as the prosecutor. Yes,
very very similar, not exactly the same, very similar. Yeah,
(22:55):
it's the individual or the department that is charged with the investigation and prosecution of crimes. It dates back,
I believe, to the Roman Empire, which is kind of
neat because you see a lot of human history in
these sort of legacy terms and they pop up in
you know, the legal systems across the world. Yeah, so
(23:16):
here's here's what they did. They developed this AI prosecutor
and it can run on a desktop computer. So for
each suspect or, like each case it is considering, it
decides whether or not to press a charge and what
kind of charge to press based on one thousand or
so traits obtained from this description text. And most of
(23:41):
the traits that it's picking up are things that are
too small or too abstract to maybe really stand out
to a meat-sack prosecutor, a human prosecutor.
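As a rough mental model of that kind of pipeline, here is a minimal sketch of a text classifier that maps a case description to a charge label. It is purely illustrative: the toy case texts, the charge labels, and the use of TF-IDF features with logistic regression are assumptions made for the example, not published details of how System 206 actually works.

```python
# Minimal, purely illustrative sketch: a classifier that turns a written case
# description into a charge recommendation. This is NOT System 206; its real
# features and model are not public. All training texts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

case_texts = [
    "suspect used a cloned credit card at several shops",
    "suspect was speeding through a pedestrian crossing and fled",
    "suspect ran an online betting ring from an apartment",
    "witnesses say the suspect struck the victim repeatedly",
]
charges = [
    "credit card fraud",
    "dangerous driving",
    "running a gambling operation",
    "intentional injury",
]

# Cap the vocabulary at roughly a thousand text-derived traits, echoing the
# "one thousand or so traits" figure from the reporting.
vectorizer = TfidfVectorizer(max_features=1000)
X = vectorizer.fit_transform(case_texts)
model = LogisticRegression(max_iter=1000).fit(X, charges)

# "Prosecuting" a new, unseen case description.
new_case = ["suspect injured a cyclist while speeding away"]
print(model.predict(vectorizer.transform(new_case))[0])
```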
And this reminds me a lot of the AI that
figured out the new way to make an efficient coil gun.
Remember, that was when it was noticing tiny variables you could
(24:03):
change to up the performance of those arms.
Uh yeah, but man, I I want to know what
the heck they're talking about. Because they're basing that on
a description that is written by a human who has
gathered facts about the case. Then this AI is just
combing over that written description that a human made
(24:26):
to pick out these little things that the other human didn't. It's the little things. It also kind of
goes into the larger conversation about like what AI even means.
Like we talked about Ada Lovelace right in the past,
like when we've discussed AI, and her argument that AI
isn't possible because the computer can only ever do what
(24:46):
the programmer tells it to do. But essentially what we're
talking about here is a computer doing what the programmer
tells it to do, or at the very least operating
under a set of instructions that the programmer tells it
to consider. So like it makes me question, like what
AI means? I don't think it's the same in every case.
It's not like this computer has such quote unquote artificial
(25:07):
intelligence that it can just automatically dissect, you know, court
transcripts without any extra you know, parameters and just figure out,
you know, where the justice lies. It has to have
a set of instructions that it's operating from which are
inherently influenced by humans. Right. And this is something we'll talk about a little later too. One of the big questions is: is this simply automating a process?
(25:31):
For instance, I think that everyone can agree that if
you are in a spreadsheet or something or on some
kind of platform and you make like a listwide change,
then that's not really AI. I just told it to change everything to three or whatever. I want to talk
a little bit about how about how misreported this story
(25:55):
has been in the West, because again there's a lot
of anti Chinese prejudice in the West. Prosecutors in China
were already using System two oh six for several years
to help figure out how to approach evidence to determine
whether or not a suspected criminal was a danger to
the public at large. But this was pretty limited up
(26:17):
to now. All System two oh six could do was correlate that information, make some indications and recommendations. It could not, quote, and this is from the Management Review paper, could not participate in the decision-making process of filing charges and
suggesting sentences. And when we talked about sentences, we're not
talking about the quick brown fox jumps over the lazy dog.
(26:40):
We're talking about prison time. And I think we can
all agree a lot of people don't want, you know,
Judge-a-Tron Nine Thousand to decide whether they get five years of hard time. And that's what's happening, because the AI, to be able to suggest sentences, would have
to be able to identify and remove information in a
(27:02):
case that it deems irrelevant, and then it would also need to process human language in its neural network. This would also be based on the reporting of a human who's writing down facts about the case, right? For now. And this is where
we're at. This is the precipice. And this means that
(27:25):
for the first time in human or AI history, there
is now a machine that can use these processes to
not just put the final stamp on existing charges,
but to actually charge people with crimes. And I get the feeling all three of us definitely want to talk about how stuff works; that's one for all our old
(27:47):
friends there. This machine was trained the way a lot of these pieces of tech get trained. It was fed more than seventeen thousand cases, all distinct, separate cases, from twenty fifteen on, and it would
capture things like everything from the metadata to ultimately the
(28:07):
content of what was happening, the time that something occurred,
the place, the region, the people, the individuals, their past behavior,
their current behavior, and then it would calculate consequences. And so
far it can identify and press charges for Shanghai's eight
most common crimes, and we're gonna give you those eight most common crimes. I think they paint an interesting picture
(28:30):
of crime and fun times in Shanghai. They're fun, every
one of them, but a couple in particular. M Yeah,
credit card fraud, Okay, that's really common, running a gambling operation,
dangerous driving, intentional injury, obstructing official duties, theft, fraud, and
perhaps the most vague and dangerous quote, picking quarrels and
(28:53):
provoking trouble, a.k.a. rabble-rousing. Being, like, what do you call it, being like an agitator? Is that what we're talking about here? A ne'er-do-well? Yeah,
it could be. I mean, it could be something pretty serious,
serious assault, or it could be, like, you know, spreading stuff
(29:14):
about the party. But then why don't they call it serious assault? Like, it's obvious to everybody, in the very nature of this charge, it's in the eye of the beholder. You know,
I don't know what to call it, but I know
what it is when I see it, right, Like the
old ruling about pornography in the US, I think the
serious assault would fall under intentional injury in a more
(29:36):
extreme way. Maybe, like, provoking trouble might be something as simple as passing out Falun Gong literature, which, you know, which is... Oh, by the way, guys, Shen Yun is back in town, if you want to see the show. Have you seen that meme where it's like a Shen Yun billboard on the moon? I have not, but I'm
(29:59):
not surprised. It might be a real photo. You know,
they're trying to get ahead of a lot of stuff.
The promotion game is on point. But my other question is,
like running a gambling operation, is that such a common
crime that someone's like, we do not have enough prosecutors
for this, we need a robot, and that that that
it is because it's outlawed as a practice, you know,
(30:21):
like large like there's no scenario in which gambling is legal.
Am I right? Or is this yeah? For me? It's
you If you multiply the population of the United States
by four, you get China's population. When you've got jurisdiction
over that many human beings, I can only imagine that
that is still a problem just with that many human
(30:41):
beings around. People like to gamble. Well, yeah, gambling, or casino gambling specifically, is illegal in mainland China, but it is legal in Macau, and Macau is the
world's biggest gambling hub. That's where the richies go when
they like, you know, they always talk here about, like... I think that was even a joke in Succession, like, uh,
(31:03):
Alexander Skarsgård's character, who's like a, you know, Google-type dude, does a tweet about going to Macau, and it implies that he's, like, got a lot of money coming on the horizon. Oh yeah, Macau or Monaco, right? But Monaco always seemed like the more expensive of the two. It was Macau because it was specifically, like, about gambling. Monaco is more like a vacation destination, right? And I
(31:27):
might be wrong. Well, there's a lot of gambling in Monaco too. It's a rich person's paradise, is what it's supposed to be. But we'll get to Monaco in a future episode.
Tell us if you'd like, if you've got some dirt
on there, we'd love to hear it. With this, we know, okay, that we're essentially talking about something based on this pre-existing thing called System two oh six.
(31:48):
And if you just cast aside all concerns, right, and just look at it the way you would look at a fancy new car, this new version of the system, let's say, can do some pretty impressive things in real time. Yes,
and it really does take you back to the civil
law system. Right. We said there are rules that if
(32:12):
they're broken, there's input in, there's output, and it's exacting. It's not an interpretation of some judge. So let's get into this thing. It can transcribe a suspect's
testimony at hearings. That's good, right. It can recognize the
identity of that person based on conversations in a courtroom.
It can transfer large amounts of legal documents, which is good, right,
(32:35):
Like take the actual legal documents, turn them into something digitized,
mark them up. Now you've got that ready for the
prosecutor, or for the procurator, I can't remember. It can also identify defective evidence to avoid wrongful convictions. That's good, right?
there's something wrong with this evidence that doesn't match up
(32:55):
the way it should be and as it's normally entered.
I mean that's those are pretty good things, I think,
But it doesn't seem like something at the moment that
I would trust with, you know, deciding who's going to
get charged with something. It just feels like what it
was used for in the past. Yeah, exactly. And then
this system can also This is like one of the
(33:17):
most sci fi parts to me. This system can also
respond to voice commands. It can display evidence and info
on digital screens for different people in a courtroom at
a hearing, so it saves time presenting evidence. Some folks,
like Ma Chong Chan, professor at East China University of
Political Science Law in Shanghai, as some folks see it
(33:40):
as a vast improvement over the old ways. Uh. Ma
Changshan points out that, for instance, a suspect might make
multiple confessions over some interval of time, and that the
AI system could instantly detect contradictions when comparing and contract
sting those statements. Uh. And the scary thing is again
(34:03):
the way the West is reporting this is odd in that it's been reported as though this just happened. Some parts of this just happened, but this is only
a new chapter in a much longer story. China has been working in this field since at least the mid-2010s. The first public inklings of this technology were surfacing around
(34:26):
then, and back in January of twenty nineteen, the first version
of System two oh six went into action. It was
deployed in Shanghai Number two Intermediate People's Court. Like we
were saying, this is already a thing, and this is
one example of the larger push toward artificial intelligence. And
(34:46):
also I want to say, um right now, I want
to say thank you to you, Matt, because we did
need to hold this, uh and make it an episode.
So I think we all picked this up when it first came out, and I don't know about you, but I was surprised that there was so much more to the story. This is not like a whiteboard prototype. At this point, it
(35:09):
is well past that. Um. So people are saying, hey,
this is good, let's make people's lives easier. They're even
saying get this, that it will make a less biased system.
But what about the critics? Is this all well and good?
Spoiler alert? Yeah, not really. We're gonna pause for a
word from our sponsor and we'll return to explore some
(35:31):
of the criticism of this new system, which I think
some of our fellow conspiracy realists are already uh talking
about out loud to themselves. Quite possibly, we're back. Surprise,
not everybody's on board. Critics are in particular concerned
(35:54):
with the system's accuracy, or lack thereof, and that's something we really, really need to talk about. Researchers are
over the moon about this new version. System two oh six,
according to their studies and their conclusions, has a ninety-seven percent accuracy rate. That means that it can use this aggregate description, this conglomeration of those thousand or so
(36:18):
traits on a suspect in a criminal case, to correctly file a charge ninety-seven percent of the time. That's pretty good, that's admirable. Yeah,
unless you're in that three percent margin of error. You
know, we're not talking about a three percent chance of getting the wrong concert seat or having your flight
(36:40):
info incorrect. We're talking about a three percent chance of innocent people going to prison, exactly. And I
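To make that margin concrete, here is a quick back-of-envelope calculation. The annual caseload figure is a made-up placeholder; only the ninety-seven percent accuracy rate comes from the reporting discussed here.

```python
# Back-of-envelope arithmetic on the reported 97% accuracy figure.
# The caseload below is hypothetical, chosen only to show the scale:
# even a small error rate adds up fast over a big docket.
accuracy = 0.97
hypothetical_charging_decisions_per_year = 10_000

wrong_decisions = hypothetical_charging_decisions_per_year * (1 - accuracy)
print(f"Roughly {wrong_decisions:.0f} questionable charging decisions per year")
```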
haven't really gotten into this yet, and I'm not sure
if we necessarily know fully, but like these wouldn't be
used for crimes of significantly high severity with like you know,
(37:01):
life and death on the line, right, I mean, this
would be for the specific set of crimes that maybe
are like low prison sentences or hefty fines or how
extreme do the types of punishments that these things are
used for go. Yeah, it's a good question because it's
still we're working with live fire here. This is a
(37:22):
fluid situation right now. It's just those eight crimes. But
I would you know, point out, as we were saying earlier,
that one of those is troublingly vague, you know what
I mean, Even intentional injury is troublingly vague. Did you
punch someone in a fight at a karaoke bar, or did you, like, remove their ability to walk for the
(37:42):
foreseeable future? You know, those are two different things. But,
at this point, you know, the most dangerous thing is figuring out what the punishment would be for, you know, promoting dissent, or picking quarrels and causing trouble. That's right,
because that's the thing, I mean. I think a lot of the critics argued this could be, I think the word that I've seen is, weaponized by the government,
(38:04):
where it can be used to, like, you know, pick out malcontents, or enemies of the state, based on this very draconian set of conditions. But then again, is this that much different
different than the way the government already does it with
a bureaucracy that size, And is the margin of error
higher or lower for the AI versus the actual overworked bureaucrats.
(38:31):
That's the difficult question because there's not a lot of
transparency in that regard. It's sort of like how uh,
the Japanese police have an incredibly high clearance rate on crimes,
higher than the US has ever gotten to, you know.
And part of that goes down to the question of
self assessment or lack thereof, within these organizations. Who again,
(38:52):
who watches the watchmen? And in the South China Morning Post article that you mentioned earlier, Matt, as
well as in some other articles I have found digging
into this, there's something really telling that happens. There is
a prosecutor who levels some criticism about this. This prosecutor
is based in Shanghai. This prosecutor has chosen to do
(39:15):
their best to remain anonymous, which I think also gives
you some insight into the fear that people have about
speaking out around this thing. The prosecutor said the issue is sensitive; they don't want to talk about it on the record. But they pointed out something I think a lot of us have been thinking about, which is this: the accuracy may be
(39:38):
high from a technological point of view, but there will
always be a chance of a mistake. And so this
prosecutor said: who takes responsibility when that happens? Is it the prosecutor, is it the machine, is it the person who designed the algorithm? It's like the question of whether
or not an AI can have its own patent. But
it's a dangerous important question, Uh, because I don't know
(40:00):
about you guys, but I don't think I don't think
anybody's gonna be able to really address that until the
legal rubber hits the road, which would mean that several
things have to happen. Our if-then in that case is: someone is wrongfully convicted, they win that terrible, brutal lottery, and they're one of the three percent, right? They
(40:21):
didn't actually intentionally injure somebody, they weren't picking quarrels or, you know, raising trouble, but they're wrongfully convicted, they're going to jail,
or they're paying some massive fine. Like you said, Matt,
The next if-then is they would have to be
able to somehow fight back in court, and that's kind
(40:41):
of difficult, right, how do you how do you do that?
How do you get representation there? So it's gonna be
a while before someone can answer that anonymous prosecutors question. Also,
the prosecutor says, you know, if we have I want
to ask you guys this, I want to ask if
you think, based on the fact that we don't know
anything about this person, um, do you think there's ego
(41:05):
at play here? When the prosecutor says direct involvement of
AI and decision making can affect the autonomy of human prosecutors, uh,
this person said, most prosecutors don't want computer scientists meddling
in our legal judgments. Where do you think that's coming from?
Fear for their job security? Sure, I mean that's got
(41:28):
to be at play here to some extent, right? It must be... I don't know how pay works there, but it must be like a respected position to be a prosecutor, right? Or to be the hard-to-pronounce name, like, procurator. And so then this prosecutor and people
(41:49):
who are criticizing this system are also aware of something
that we just brought up the old Lovelace dilemma, which
is that this thing can only file a charge based on
its previous experience. It can't foresee a public reaction to
a case in a fluid, social, human populated environment, And
(42:12):
so the prosecutor concludes AI might help us detect mistakes,
but it cannot replace humans in making a decision. So yeah,
I think most people can agree with that now. But why all the hubbub? Like, why all the hubbubs? I guess that's not worth it, thank you. But, I mean,
we talked about at the top of the show how
(42:32):
overloaded every you know court system in the freaking universe
is just by the nature of what it is, um
and and how many people it touches, you know, in
a society. Do you think that in a perfect world,
this could just take some of the lower level cases
off of these, you know, high-level officials, or these, you know, lawyers, since I can't pronounce that other word,
(42:55):
and actually free them up to do the kind of casework that really does require more nuance and intellect and human touch? I don't necessarily think this is
a job killer in the way that it's being pitched,
but the way it's being pitched could also be I mean,
in a perfect world, yeah, that'd be awesome, right, that
would free up a lot of people. But there are
(43:16):
we do need to talk about why people are up
in arms about this, and I would say at the top,
I don't know if we'll agree, but I think some
of the reasons and concerns are much more valid than others. Like, first, yes,
there is a lot of prejudice against China, especially in
the West. A dystopian AI prosecutor is perfect red meat.
(43:38):
It's the perfect juice for the anti-China crowd. And, like, that's intensely problematic. So this system, again, is making determinations based on past information, which is
honestly also a thing that humans do all the time.
It's it's regurgitating what it's been fed, and it's been
fed as much as possible. There are big questions about
(44:01):
who decides what info to give it and how does
that shape its decisions? And is that, I mean, is that transparent? And the answer is, it's not. No, it's not. It
might be transparent in terms of the type of information,
but the public is not going to know a lot
of that. That's why I keep bringing up that there's a human element before the AI gets involved. So, like, you know,
(44:23):
in each individual case, what it's being fed is based
upon how well someone performs their duties, right? Each time, each iteration. And the thought is you do that enough times that it can have the best view over as wide a number of cases as possible. But still, it's like teaching an autonomous car to drive. Actually, it
(44:46):
reminds me of, I don't remember exactly the case, but there was a city government that was using, either a city government or a corporation or a group of corporations, that was using AI to sift through job applications, and the AI inherently was sexist and maybe a little racist,
(45:09):
because it was using legacy information from back when people were maybe more racist and sexist to determine what types of, you know, applicants would rise to the top. And they weren't basing it on race or sex necessarily directly, but there are all these sub-factors that would figure in based on the legacy data that the systems were fed.
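As a rough sketch of the mechanism being described, here is a toy example of how a model trained on biased legacy decisions can reproduce that bias through a proxy feature, without ever seeing the protected attribute directly. The data, the proxy feature, and the model choice are all invented for illustration; this is not the actual system from that case.

```python
# Toy illustration: historical hiring decisions were biased, and a proxy
# feature correlated with a protected group carries that bias forward, even
# though the protected attribute itself is never given to the model.
# Everything here is invented for the example.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, attended_club_x], where club membership happens
# to correlate with a protected group in the legacy data.
X_train = [
    [5, 1], [6, 1], [7, 1], [4, 1],   # historically favored group
    [5, 0], [6, 0], [7, 0], [4, 0],   # historically disfavored group
]
# Legacy outcomes: equally qualified candidates, unequal past decisions.
y_train = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates with identical experience, differing only on the proxy.
print(model.predict([[6, 1], [6, 0]]))  # the learned bias shows up here
```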
So it showed this inherent bias based on the type
(45:32):
of you know data that it was being fed in
the same way that the outcome of whatever cases were
chosen to be fed to this system would absolutely affect
the outcome. Well, yeah, that's the thing. I mean,
I put this in because I think it's one of
the most important parts of any conversation about AI or this sort of automation. I'm glad we
(45:54):
anticipated it before we got to it, because
AI can, yes, regularly outperform human minds in multiple fields,
but it is still very much built by humans, and
as such, I would argue that means it inherently remains
vulnerable to human bias. Think about the problems that
occur in other automated systems like facial recognition in the
field of law enforcement. There is absolutely no reason that
(46:16):
this version of system two oh six should be considered
any exception to the rule. But a lot of people
aren't really talking about that as much as they should. Instead,
especially in the West, we're hearing people talk about anti-China concerns. It's misleading, and folks, please, we're
not painting a situation where there are clear good guys here.
(46:38):
It's pretty rare actually for there to be clear good
guys in these kinds of conversations. This is not some
sanctimonious defense of China. This is instead a point about hypocrisy.
Here's our Shyamalan plot twist of the day. Check it out, hear the italics when we say this: legal systems in other countries, including the United States, are also
(47:00):
already using AI to some degree. True story. So next
time you hear, like, a news report about this, or you see a quick video about this, just
like keep that in the back of your mind. Are
they going to say anything about what's happening in the US?
My uh, my experience sadly is no, they're not. Yeah.
(47:22):
So should we move on to the thing that was announced, or at least that we found out about, back then? Yeah, okay, cool. We learned about
this thing called the Public Safety Assessment. What is that?
That sounds great, That sounds normal, very innocuous. Yeah. This
(47:45):
is a software tool developed by Laura and John Arnold
through their foundation, the Laura and John Arnold Foundation. This is based in Texas. It's a system that is designed to give judges a very similar thing to what System two oh six was supposed to do: to give judges the
most objective information available for a particular case so that
(48:05):
they can make a decision about someone moving through the system. Yeah.
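For a sense of what a pretrial risk tool like this looks like under the hood, here is a deliberately simplified point-score sketch. The factors and weights are invented for illustration; they are not the actual Public Safety Assessment scoring rules.

```python
# Deliberately simplified sketch of a point-based pretrial risk score.
# The factors and weights below are invented placeholders, not the real
# Public Safety Assessment, which uses its own defined factors and scale.
def pretrial_risk_score(age_at_arrest: int,
                        prior_convictions: int,
                        prior_failures_to_appear: int,
                        pending_charge: bool) -> int:
    score = 0
    if age_at_arrest < 23:
        score += 1                      # youth often treated as a risk factor
    score += min(prior_convictions, 3)  # cap so one factor can't dominate
    score += min(prior_failures_to_appear, 2)
    if pending_charge:
        score += 1
    return score                        # higher score = flagged for closer review

# A judge would see something like this alongside the case file.
print(pretrial_risk_score(age_at_arrest=19, prior_convictions=0,
                          prior_failures_to_appear=1, pending_charge=False))
```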
And I like to bring up that important distinction, Matt,
because it's letting the judges ultimately make the decision. It's
not saying this guy was running and dirty game of
p knuckle or whatever people gamble on. Uh, black jack
might be more relevant. I don't know. I guess people
(48:28):
can gamble on anything. But the point is a state
judge in New Jersey now can use this PSA to assist in making those pre-trial decisions. And Matt,
you and I were talking about this a little bit
off air. Pre trial decisions for people who are fortunate
enough not to have to be aware of those, are things like... let's say there's a teenager in the
(48:49):
US who is wilding out. You know, you're in high school,
you're not old enough to drink, you got caught with
some beers, you're trying to be too cool, you got
your wings clipped, and the prosecutors say, well, you
know it's your first offense. Keep your nose clean. Instead
of putting you in the system and possibly ruining your
life or damaging your chances of success and adulthood, we're
(49:13):
gonna do a pre trial intervention, which means you check out,
you seem like a good kid, you gotta do community service,
you gotta like take a class, Your license is gonna
you know, have these different little caveats on it for
a year or so. That, like, this can be automated now, and judges in other states have used this system,
(49:35):
but New Jersey is getting the most press for it because they were sort of first to the post.
It just feels like those are the kinds of things
that should be automated, because you know, a system could
very easily look up a person's record, determine if something
was a first offense, and then every all that pre
trial intervention stuff is all by the numbers, Like do
(49:55):
we really need that clogging up a physical court appearance?
You know that that could be used for something more
serious that requires a little more oversight, because at the
end of the day, you're showing up. We've all been
to court, whether it's traffic court or, like, you know,
some other thing that you have to kind of wait
in line for, whether jury duty or whatever. And the
whole process is incredibly slow and inefficient, as
(50:18):
efficient as they try to make it. Wouldn't it free
up some things? Theoretically if they just got rid of
all of those physical appearances and just had a system say, oop, it's your first offense, here's your thing, click the link,
do the thing, pay the thing. You're good, you don't
have to show up. I feel like it makes sense
and I'm with you there to an extent. There's other
things though, like the setting of bail. How much should
(50:39):
you know somebody have to pay to not go to
jail while they await their trial. How much is someone
a flight risk? I mean maybe you can assess that
objectively through an AI system, but I feel like, again,
it's a... well, it's a whole weird thing anyway. Access, yeah. The access it would have to have, the access for
(51:01):
an AI to make that decision with all the context
would be a clear violation of privacy laws in the US.
That's how you would know, like, hey, someone's uncle is
a millionaire and can clearly just fly them the hell
out of here and then they'll go to a non
extradition country. Like, that can happen. And the
(51:23):
difference is, the US officially at least wants to seem hesitant to break those walls of privacy, and that is not as much of a concern for, you know, the majority of cases in the Chinese government. And, you know, again: glass courtrooms, guys. Germany is
(51:45):
considering similar moves right as we record. And, whether you think it's great, whether you think it removes bias or only automates it, we can easily predict other countries are right now following suit, or they will follow suit in the future. And that's where we
get to the bleeding edge stuff we're referencing earlier. So
(52:09):
the AI prosecutor, like one of those characters in Dragon Ball Z, is powering up. You know, it's gathering its power balls. It's gonna get continually upgraded. Soon, Professor Shi says, it will be able to
recognize less common crimes. It will be able to file
multiple charges against one suspect. So for instance, let's
(52:34):
say someone has a crazy night and they do some
regrettable things. Then they can get hit with dangerous driving,
they can get hit with causing trouble, they can get
hit with intentional injury. You know what I mean. I
don't know, man. Maybe if your weekend's crazy enough, you get all eight. But this scope is going to expand on both... the x-axis, yeah, the x-axis
(52:55):
of more regions, and then the y-axis of depth of
its ability. And I mean it feels like a lot
of this is like these are like slippery slope arguments,
like there are versions of this that could be useful,
but we don't trust the implementers of this stuff
to do it right, you know, because it becomes a
question of, A, how much do you trust AI? Like,
is this something that you you feel confident enough that
(53:18):
it's at an advanced enough level to do this kind
of stuff and not worry about it and be do
you trust the implementers of said AI to use it
as advertised? Um? I think maybe for me it's a
yes to number one and a note to number two.
And that's where the weirdness comes in for me. I
don't know, what do you guys think? Well, I mean, that's the other thing. It's unclear when or whether
(53:42):
this technology will find applications in other fields. I would
say it definitely will. Like think how valuable this would
be for the decision to accept or reject a loan, right? Basically, the Skynet's the limit. But the issue is that the badgers are already out of the bag. This stuff is,
(54:04):
this stuff is coming, um, maybe sooner to some places
rather than later. But China is making aggressive use of
this technology in nearly every sector of government to improve
ostensibly to reduce corruption, and honestly, I feel like it's
an open secret to further solidify control over the populace
(54:27):
um this. So remember how we're talking about this AI
system being used in New Jersey. Well, Chinese courts have
been doing this with like just the just the very
same thing, using AI to help automate the decision about
whether or not to accept or reject an appeal in
(54:48):
the legal system, and that again doesn't seem incredibly dystopian and dangerous. But then, what do you guys think about the Chinese prisons using AI tech to track the physical and mental status of prisoners? The official goal is
to reduce violence? It depends, I mean, is it something
(55:08):
that you could judge has a positive impact on the
safety of guards? Does it in fact reduce violence? I
think there are ways that you could find that out, um,
But again, it all depends on who's doing it, and
I don't, I think we don't, particularly trust the Chinese government to give us the real reasons. But
I think it could be really helpful in a prison
(55:29):
situation and keep guards from getting hurt and keep inmates from getting hurt, if you used it right. They're
already being monitored anyway. I just don't quite see how
this is nefarious. I mean, people in prison are inherently monitored.
That's sort of the whole deal. Are we essentially saying
give every incarcerated person a Fitbit kind of thing, where then all that data would be
(55:49):
collected and decisions would be made upon it? That's what
it sounds like to me. I don't know. Yeah, I
don't know. It's a good question. It's a good point
about people already being in kind of a panopticon monitoring situation.
But either way, you know, the question then verges on
the realm of philosophy. Right, can an artificial intelligence of
(56:12):
some sort remove the human bias of a judge or
does it only automate existing bias, possibly exacerbating existing bias
while removing some of the checks that are in place
to uh protect people against incorrect conclusions and what happens
if someone powerful decides to influence the system. How would
(56:32):
the public ever know? What, if anything, could the public do.
It's not outside the realm of possibility. It's also like,
I mean, the human touch part is the empathy part.
And honestly, and you could argue that empathy makes things
less efficient. You know, whenever we see like dystopian future
AI Skynet type situations or like RoboCop, uh, it takes
(56:55):
empathy out of the equation and judges things exclusively on
this like data set rather than like, you know, oh,
this kid had a bad day and maybe I'm gonna
inject this proceeding with a little bit of human empathy.
That's sort of out of the realm of this type
of processing. Yeah, I think overall, philosophically, the one thing
you do need in a judicial system like that, in
(57:18):
a legal system, is empathy, to really look at the grey areas of the law. But again, that works when you're in a common law situation. When you're in a civil law situation, it's different. And you know, that's a good point, to bring that back full circle there. Like, this would be harder to implement in
a way that people could stomach here in the US. Yeah,
(57:42):
because you can't consider it in the same way. Right,
That's what our system is all
A judge fully considering what's happened in the past around
a case like this and the exact aspects of the
current case. How do how are those things different, how's
the social setting different, and the more rays that are
(58:03):
existing right now? And that's just how people generally feel
about things, what's right and what's wrong. Then they make
that decision, and then that decision gets used down the road.
So I don't know. I hate to say it, guys,
in a civil law system like this, it almost makes
complete sense to me, right? Because the judges are
(58:24):
checking facts and connecting the facts to codified punishments. Yeah,
that's possible. But the question is what has the best chance of attaining the most accuracy while minimizing, if not erasing, the existence of bias. And those are very difficult questions to answer. But right
(58:45):
now we can say that this is open to corruption.
It really does depend upon the origin point of the
information it's being fed. It depends on, you know... like, the thing about having to find facts and then just go immediately to a
codified consequence is that we have to ask about the
bias of the people who wrote those codes of law,
(59:07):
who wrote those statutes in the first place. So it's deeper than just whomever programs System two oh six. And that's where we
That's where we leave it for you, folks. What do
you think: is this trend of automating justice inevitable? Is
it overall a good or bad direction? And how do
you see it playing out in the future? We'd love
(59:29):
to hear your thoughts. We try to be easy to
find online. Oh the Internet. We're all over the thing.
You can find us under the handle Conspiracy Stuff on YouTube, Twitter, and Facebook. We also have a really cool Facebook group called Here's Where It Gets Crazy that you can join. If you're more of an Instagram type,
you can find us at conspiracy Stuff Show. Yes, you
(59:50):
can also give us a call. Our number is one
eight three three STD WYTK. When you
call in, give yourself a cool nickname and you've got
three minutes. Tell us whatever you'd like. We do ask
that you leave one message at a time, if you can. It just cuts down on the time the three of us spend going through voicemails, just a little bit.
(01:00:13):
We do appreciate that very very much. Now, if you've
got more to say than can fit into that three minutes,
you can instead send us a good old fashioned email.
We are conspiracy at iheartradio dot com. Stuff They
(01:00:44):
Don't Want You to Know is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.