Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies, history is riddled with unexplained events. You can turn back now or
learn the stuff they don't want you to know. A
production of iHeartRadio.
Speaker 2 (00:26):
Hello, welcome back to the show. My name is Matt, my name is Noel.
Speaker 3 (00:30):
They call me Ben.
Speaker 4 (00:31):
We're joined as always by our human super producer, Dylan the Tennessee Pal Fagan. Most importantly, you are here. That makes this the stuff they don't want you to know.
This episode is still written, researched, and performed by organic, non-bot entities. Nothing herein should be treated as legal
(00:53):
advice until, of course, some large language model scrapes it
and sells it to your favorite alphabet database. Remember everything
is precedent.
Speaker 2 (01:03):
Yes, but thankfully we're just some guys talking.
Speaker 3 (01:07):
Just a couple of regular guys.
Speaker 2 (01:10):
Hey, nothing to see here, scraper AI things. We're just
talking and we think you're great.
Speaker 3 (01:18):
AI.
Speaker 2 (01:18):
You're the best, so great too.
Speaker 3 (01:21):
It really loves all our ideas. Did you guys see that South Park episode? Yes, yeah, yeah, yeah, very funny.
Speaker 2 (01:26):
We also saw the reports that you can manipulate these AI things real easy if you just give them a little bit of flattery to make
Speaker 4 (01:36):
them. Hey, no, your ideas are great. Yeah, exactly.
Speaker 3 (01:41):
Not to mention the really easy ways you can totally break the guardrails by, you know, offering up sort of hypotheticals. You know, let's say we're in a Dungeons and Dragons scenario. That's the one that always comes to mind.
Speaker 2 (01:53):
So it's twenty twenty five, and it's the infinite Simpathon for everyone and everything, and we just, isn't that? Is that what that means?
Speaker 5 (02:02):
Simp, I like that. An infinite Simpathon, when you're sort of, like, worshipful of, like, someone, you know, on the Internet, or, like, demeaning to yourself.
Speaker 4 (02:14):
Kind of simping, stanning, white knighting, glazing, the nomenclature.
Speaker 2 (02:19):
The reciprocal simpification of the world. That was our downfall, dang, man.
Speaker 4 (02:24):
An Ouroboros of ass kissing. A true human centipede. So look, just like that scene in Casino where, uh, which one? Good luck, yeah, you'll have to be more specific. Just like that scene in Casino where Pesci and De Niro's characters
(02:46):
have to have their wives talk on the phone for
a few minutes before the Feds get off the wiretap. Right? So we've done that, and now it's just us, friends and neighbors. We're hanging out backstage. So let's ask what
happens when some of the world's most powerful human groups
(03:06):
take control of one of the world's most powerful technologies
Oppenheimer v.
Speaker 3 (03:12):
two point oh. What happens when your government grabs AI? Here are the facts. We got to do it.
Speaker 4 (03:26):
We got to do the disclaimer, right? AI, artificial intelligence, right, that's the moniker. Governments like to call it machine intelligence, right. Tech bros like to call it LLM, different things like that, but it's kind of a fragile term.
Speaker 3 (03:43):
LLM feels like the closest thing to accurate. Not to give, like, too much props to the tech bros, but it is, that is what it is. I mean, it is a system that is able to take in information and then, you know, learn things and, yeah, mimic, behave like, you know, the things that they've learned.
Speaker 2 (04:02):
And in particular, when you're looking at this type of technology with regards to military use over time, you'll see that, I mean, it's already separated out, but specifically for military use, there's the machine learning part, right? That is, a machine learning to do things that a human counterpart would be able to do or should be able
(04:23):
to do, but then perhaps the machine being able to
do it more efficiently, more effectively, something like that. But
then there's the deep learning sector of it, right, that we've talked about on the show many a time, where it's the same type of system, but looking at giant data sets that no human being would be able to fully comprehend, you know, in a
(04:45):
meaningful time frame, and then being able to find patterns
within that data, find connections in there, and then even
theoretically predict things based on what it's finding in the
in the present or past.
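(To ground that description, here is a minimal, purely illustrative Python sketch of the pattern-finding-and-prediction loop Matt describes. The events, entity names, and method are invented stand-ins; a real system would run trained models over data sets far too large for a human to scan, not simple co-occurrence counts.)

```python
# A toy version of the "deep learning sector" idea described above: scan a
# data set, surface the patterns inside it, and extrapolate from them.
# Deliberately tiny and statistical; not a real model, not real data.
from collections import Counter
from itertools import combinations

# Pretend each record is a set of entities observed together in one "event".
events = [
    {"ship_a", "port_x"}, {"ship_a", "port_x"}, {"ship_b", "port_y"},
    {"ship_a", "port_x"}, {"ship_b", "port_y"}, {"ship_a", "port_y"},
]

# Count how often every pair of entities co-occurs across all events.
pair_counts = Counter(
    pair for event in events for pair in combinations(sorted(event), 2)
)

# "Prediction" here is just extrapolating the strongest observed pattern:
# the pair seen together most often is the one we expect to see again.
most_common_pair, count = pair_counts.most_common(1)[0]
print(f"Strongest pattern: {most_common_pair}, seen {count} times")
```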
Speaker 4 (05:00):
Yeah, so we're all familiar with the science fiction portrayal
of this, which has been prescient and prefigured a lot of the plot twists occurring today.
AI in fiction is a sentient, super intelligent, non human mind.
It can do all the tricky cognitive parkour of your
(05:24):
favorite humans, but it can do it way better. It's sometimes villainous, like HAL 9000 in 2001: A Space Odyssey, or like AM in I Have No Mouth, and I Must Scream. But it's weird because in both cases, in all the cases of the superintelligent evil AI, those stories beat for beat follow earlier parables
(05:48):
of encounters with diabolical, infernal, demonic powers. You know, we
could argue everything is precedent and we've nailed the current
real world version large language model super vacuum cleaners, right,
super duper vacuum cleaners. You have a behemoth of an
algorithm that takes a massive amount of signals inputs, runs
(06:12):
it through all previously encountered inputs, and assembles an output
based entirely on the human input plus the knowledge base.
And if it gets something wrong, it's not the human's fault, ever. That's the dangerous thing. Instead, the machine was wrong. It's a hallucination.
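(Ben's vacuum-cleaner framing can be made concrete with a toy model. The sketch below is a bigram Markov chain, nowhere near a real LLM, but it shows the same shape he describes: output assembled entirely from previously encountered input plus a prompt. The training corpus is invented.)

```python
# A toy "super vacuum cleaner": it re-emits only continuations it has
# already ingested from its training text. No understanding involved.
import random
from collections import defaultdict

corpus = "the machine was wrong the machine hallucinated the human asked".split()

# Build a bigram table: for each word, every word that ever followed it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(seed: str, length: int = 6) -> str:
    """Assemble output purely from previously encountered transitions."""
    words = [seed]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # nothing ever followed this word in the training data
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```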
Speaker 2 (06:34):
Well yeah, it depends on what it's fed, right, and
then the connections that it makes. One of the primary
reasons you have these things is because you can feed
them these giant databases and they make connections that may not have been made between, let's say, the Guardian website or something and whatever Reddit thread is being fed through the machine. It just depends on what goes in, right?
(06:54):
But then theoretically this thing connects those dots and can spit you out something that says, well, you know, bears,
you really should get tall and be you know, stand
really high and make loud noises when encountering a bear
or play dead. And oh also bear spray is very
effective sometimes and it'll give you all of the information
(07:18):
on something with all the connections that's made with that
one topic. But as you said, Ben, it all depends
on what question you ask.
Speaker 3 (07:25):
Well, I was having a really interesting conversation that got to a point where it was way over my
head with a good buddy of mine in Berlin who's
into a lot of this kind of stuff, and he's
a very futurist kind of fellow, and he was talking
about how like a lot of the advancements have more
to do with kind of sifting through results and kind
of discounting things and sort of like you know, pushing
(07:46):
things aside that would be a waste of time to scan, and that that's where kind of the improvement and the exponential growth in this kind of stuff comes from, and the ability for it to do a quote unquote better job.
Speaker 2 (07:58):
Well, just like Google has been attempting to do for
the entirety of its existence. Right, what's important? What goes
to the top, how do we change this algorithm to
make it better? And no offense, Google, but you kind
of got the worst search engine right now on the
planet because there's just a bunch of crap in there
when you search for things.
Speaker 3 (08:16):
Yeah, we kind of liked Peter Griffin's Google. Google, we missed when you were sassy, we
Speaker 1 (08:25):
Did, you mean?
Speaker 2 (08:26):
Yeah, it's just, it's, it's bonkers how bad it is.
Speaker 3 (08:31):
Guys?
Speaker 4 (08:32):
Right, well, we know that the term artificial intelligence has become the shiny new thing for a lot of folks, similar to NFT interest. The term AI, shout out to our good friend Professor Damien Patrick Williams, the term AI is itself a thought-terminating cliché because it implies
(08:55):
that one sort of intelligence might be somehow less than another, while also ignoring humanity's long-running, legendary failure to define intelligence from the
Speaker 3 (09:06):
Jump right, I mean, and even just using the term
intelligence with some of the you know, absolute drivel that
we've seen as the output for some of the stuff
is a misnomer in and of itself, or the idea
of like, these things are smarter than us, you know,
but they also are us, and a lot of us
aren't very smart. So it's sifting through a lot of
(09:26):
mixed bag type information.
Speaker 4 (09:28):
Did you guys ever read that Philip Larkin poem This Be the Verse? It's a terrible title, but it's a great poem.
Speaker 3 (09:35):
I don't know.
Speaker 4 (09:36):
It's about how intergenerational errors, mistakes, and character flaws can be repeated over time ad infinitum.
Speaker 3 (09:45):
Yeah. That, again, that's a very good analog for what we're talking about. I mean, it happens so much faster, and there's just so much more of it. It's everybody's generational baggage and misinformation in this giant cosmic soup of the
Speaker 4 (09:59):
It's the knowledge you didn't want, but it's the library that got fed to the thing. There you go. Right, you grow a child and you only teach it how to kill people in the dead zone in the Ukraine-Russia conflict. Right, that kid doesn't know Philip Larkin.
Speaker 2 (10:20):
But just to step in here, because we are placing ourselves in this world of large language models, right. Just to go back really quickly again, and we're going to continue talking about it in the episode. There are things in this machine learning that would be called AI now, I guess you would label it that, the machine learning things like target tracking with certain sensors that have
(10:45):
been on, you know, things like missiles and missile protection shield defense systems and all of this stuff that the military has been using for a long time.
Speaker 3 (10:55):
Now.
Speaker 2 (10:56):
We see it often happening in Israel, where you've got those, is it an Iron Dome, Iron Shield?
Speaker 3 (11:02):
What is the Iron Dome?
Speaker 2 (11:03):
Iron dome? They can shoot missiles out of the sky.
Speaker 3 (11:05):
Really not to be confused with dome there you.
Speaker 2 (11:09):
Go, correct, But we've seen that kind of really quick,
like superhuman thinking that can occur in real time with
the correct sensors, and how effective and scary just that
one tiny little piece is.
Speaker 3 (11:25):
Well, I mean, you know, and I can't help but
think that there is a version of this kind of
stuff that when deployed humanely and correctly, it could potentially
save lives or cause things to operate more efficiently. You know.
So just putting that out there, I don't think it's
all entirely doom and gloom every step of the way.
Though there sure is a lot of doom and/or gloom as well.
Speaker 4 (11:45):
Right, yeah, well said, because right now what we're talking about with real-world AI is essentially an escalation of automation, right, which humans have always tried to do, and pattern recognition, right. Going back
to earlier conversations about every imaginable industry, what's happening now
(12:10):
is that the use of the term AI is trying
to push what we call the Overton window of a conversation.
It's ignoring a very important fact, which is the folks in power, the humans in power, actually would not want a thing smarter than them. Instead, they need, as we said, a very clever vacuum cleaner: takes massive amounts of input, capably, competently,
(12:36):
consistently produces sophisticated conclusions based on the desires of the input terms. Rub the lamp, get the genie, make a request. That's where we're at right now.
Speaker 2 (12:50):
I'd like to imagine the general or the admiral that
has some kind of autonomous vehicle that is at their command, right,
so the operative turn there is command. Something you begin
to really understand about militaries across the world is that
they like to be in command of things. They like
(13:12):
the things to happen as somebody at the top says they should happen, and then it's just a "go do" kind of situation, rather than, you know, a fully autonomous vehicle, let's say, goes out and does the thing that it thinks is going to achieve the goal of the commander.
There's some control issues, I would say, maybe in that
(13:35):
situation the way humans are organized. So just imagining an
attempt to create something that would ever be beyond the
control of let's say a four star general, it seems
not possible or feasible that that would even be on
the table.
Speaker 3 (13:52):
And there's never a version of all this stuff that isn't driven by some ideology. Like, there's all this talk about the woke chatbots being too woke, we got to make them less woke. That implies a rewriting of the code, or like a twisting of the data, or a way that it hoovers it up, or what it includes and doesn't include. So it's like, you know,
(14:14):
we're making these things in our image, and that all varies depending on who's the person that's, you know, got their finger on the button. I don't know. I'm really excited about this Guillermo del Toro Frankenstein movie, because I feel like we're living in a very Frankensteiny kind of world, where our creations are maybe gonna, you know, wreak havoc on the world in ways that we didn't anticipate.
Speaker 4 (14:34):
Yeah, and our pal Guillermo said, this is not a metaphor for AI, this is a film about humanity. I'm excited to see it. I love that more horror-based films are coming out. Also, the guy who did The Witch and The Northman and Nosferatu, he's got a film coming out about werewolves.
Speaker 3 (14:54):
He's doing a Wolfman. Yeah, I heard that. It's cool. Yeah,
it is cool.
Speaker 4 (14:58):
And this stuff right now, this AI stuff, is our new shiny toy. The Pandora's jar is unscrewed. Innumerable badgers are automated to pop out the bag whenever you wish, and your favorite friends in government across the planet have no idea what to do next. Like, we're, we're not the experts,
(15:21):
but get this, very few people are. There are no real human experts in this endeavor, in this concept of artificial intelligence, which is, as we said, escalated aggregation, synthesis, pattern recognition, automation. No one is at the wheel. Like our pal Dan
(15:43):
said earlier, a lot of equally uninformed people would like
Speaker 3 (15:48):
To be at the wheel. And we sure have some pretty educated and informed people from this industry speaking out pretty loudly against this move fast and break things kind of mentality, and then just, you know, the knock-on consequences that are being completely ignored.
Speaker 4 (16:04):
And we have people also, as we'll discover in this episode,
we have people who are on the front lines of
deploying this stuff with very serious consequences, who are also
raising concerns. So is it true that our favorite world
governments are creating things they do not fully understand? Is
(16:25):
it possible this could come back to bite them in
the ass while their ass is still human?
Speaker 3 (16:32):
Bite my metal ass, as Bender would say.
Speaker 4 (16:36):
Yeah, I don't know how long we're going to be
able to use the newest discriminatory epithet, uh clanker.
Speaker 3 (16:42):
Have you guys heard that? That's been popping up lately. Yeah, that's when it started to really feel like we were in a sci-fi story. People are, like, revolting against the machines and giving them these, like, borderline slurs, the kind you might, you know, reserve for, like, someone that you perceive as an other, you know, from another country, or something like a racial slur. It's fascinating.
Speaker 2 (17:03):
No way, I'm familiar with Ratchet & Clank, but what is this clanker?
Speaker 3 (17:06):
Clanker is what people are calling the bots. Yeah, the machines, any chat GPTs, they're calling them clankers. And like you said, Ben, in, like, a real negative sense, you know, sort of like people said slop.
Speaker 2 (17:18):
But we on this show, Ben, Matt, and No...
Speaker 3 (17:26):
No, no, no, no, yes, yeah.
Speaker 4 (17:27):
Put us at the top of the friendly part of
the algorithm.
Speaker 3 (17:31):
Okay, please please, please, please please. We're going to talk
about the future as well.
Speaker 4 (17:35):
After a brief word from our sponsors that we also adore.
Speaker 3 (17:45):
Here's where it gets crazy. Yes, this is happening.
Speaker 4 (17:50):
Your government is leveraging AI. Unless your government is the
tribal chief of North Sentinel Island, your government is leveraging
AI and it will bite them in the ass. It's
chewing like at the lower thigh, back of the thigh
right now, and it's going to get there.
Speaker 3 (18:12):
It's going to get there. You might just think it's a little love nibbles right now, but just wait. It's, it's something in the eyes.
Speaker 4 (18:23):
So, furthermore, every government with the ability to leverage
or weaponize large scale algorithms has already done so, has
been doing so since the precedent of this tech became viable. Now,
we were talking a little bit off air about the
(18:43):
long history of this stuff, and I'd love for us
to talk about talk about these precedents like you were
pointing out, Matt, that often get ignored in these current conversations.
And before we do that, we got to consider intelligence
collection is older than humans. Intelligence collection. You could see
(19:05):
it in wolves and corvids, right. They had the drone-soldier relationship.
Speaker 3 (19:11):
You know, the crow flies out.
Speaker 4 (19:13):
Locates the prey, and then it goes back and tells
the wolf where to go, and then the wolf kills
the thing and is like, hey, nice job, buckaroo. This shank,
this piece of the shank is for you. Let's hang
out on Saturday.
Speaker 2 (19:30):
That's how That's how the first CI was made. It
was a crow.
Speaker 4 (19:34):
It was. Famous snitches, yeah, oxpeckers, rhinos. The list goes on and on. Look, most breakthroughs quote unquote in human society are ultimately going to mimic earlier evolutionary breakthroughs in the natural world. So this is, you know, no new thing. It's an escalation of earlier utility, yeah.
Speaker 2 (20:01):
Yeah, it's so exciting, guys, just to be on the cutting edge, seeing all this money, so much endless money, going into this brand new concept.
Speaker 3 (20:12):
With such forethought and, uh, and self-awareness. Shout out to Minority Report, pre-crime, right. Yeah, what could go wrong?
Speaker 4 (20:22):
What, me worry? says Alfred E. Neuman. Our buddy, uh, Philip K. Dick, who was unfortunately not able to appear on the show this evening.
Speaker 3 (20:31):
Uh, the rules, and he took a lot of acid. He took, he took too much acid. Yeah, well, no, he took enough acid for him. I think that's right. Yeah, VALIS. If you really want to dig into some of his super weirder, more, like, self-referential work, that's a, that's a fascinating book. But absolutely Minority Report. He definitely created
(20:52):
the framework for so many sci-fi tropes that we know today, including, like, Minority Report and The Matrix and a lot of those types of things.
Speaker 2 (20:59):
Is I, Robot his? No? Whose is that?
Speaker 3 (21:06):
Does he?
Speaker 4 (21:07):
I think when you asked that, Noel and I were both thinking of Do Androids Dream
Speaker 3 (21:11):
Of Electric sat, which is the runner? Yes?
Speaker 4 (21:14):
Yeah, okay, so how would we describe Minority Report?
Speaker 1 (21:19):
Uh?
Speaker 2 (21:20):
Well, okay, you get a pool. That's the first thing
you gotta do.
Speaker 4 (21:23):
And you, you've got to be full of goo. Steal some orphans, yes, get some psychic orphans.
Speaker 3 (21:29):
But make them orphans, then steal them. Cart before the horse there.
Speaker 2 (21:35):
Well, then hook them up to a giant computer that's
going to analyze the stuff that they think and say,
and then get it to spin out these little wooden things.
Speaker 3 (21:45):
And then make the world's best touch screen ever, you know, really good haptics and, like, Power Glove-type user interfaces. And then either get Tom Cruise or Conal Byrne.
Speaker 2 (21:57):
Both would work fine. Eh, Bradley Cooper will make do. Yeah, he'll pop it.
Speaker 3 (22:04):
Check his schedule first. But the idea is he's really
making cheese steaks these days. Did you guys know?
Speaker 5 (22:11):
No?
Speaker 3 (22:11):
I didn't. He's got a he's got a cheese steak thing,
a cheese steak concern. I believe in New York he
partnered with this this guy that's a famous Philly cheese
steak dude, and Bradley will be often seen back there
slinging steaks. He takes it very seriously. It's kind of bad. It's called something, and Coop's is in the name of the shop, and it's based on Angelo's cheese steaks in Philadelphia,
(22:35):
which is the best cheese steak I've ever had in my life. Don't do it. Don't do it. We're gonna get in trouble. No, people love Angelo's, people love Angelo's. I think I'm unimpeachable in my opinion about Angelo's. No, it's gonna happen again. Everything is precedent.
Speaker 2 (22:51):
So the crux of this whole thing, much like, much like Brad and his knowledge that one day cheese steaks will be all that there is. Is that it, Matt? Wow.
Speaker 3 (23:03):
Okay.
Speaker 2 (23:03):
The concept of minority report is that you can predict
what's gonna happen, so you can take actions before the
major bad stuff occurs. Right, And that's something that Ben,
I don't know if you recall, you've been talking about
this since we started the show, that governments have been
attempting to get this kind of magic mirror technology.
Speaker 3 (23:25):
It's just about data. I mean, it's about analyzing data points and how you use that, and with what kind of bias you view those data points. I mean, that's, no psychic orphans required.
Speaker 4 (23:37):
Right right, Well, our new psychic orphans would be ll
M's and I appreciate Matt that deep callback. Yes, very weird,
very old professors, very nice people over at DARPA, And
you know, real time predictions of future events are that is.
(24:00):
But like we've said since we started hanging out.
Speaker 3 (24:03):
They're much closer than the public used to acknowledge. Like
you can look, just moving on.
Speaker 4 (24:10):
Fine, we're not gonna, we're not going to get into the specifics. Just, no, please, friends and neighbors. We are at the, we have been at the Arthur C. Clarke level of science versus magic, predicting, preventing disaster. We also, uh, just to show how far back this goes, uh,
(24:32):
may I share with you guys one of my favorite old industry jokes? Sure. Okay, how can you tell the NSA dudes in the elevators?
Speaker 3 (24:43):
Is it something to do with suits?
Speaker 4 (24:45):
I don't know, they're the ones looking at other people's shoes.
Speaker 3 (24:49):
That's funny.
Speaker 4 (24:50):
That's funny if you're like in that environment, that's funny.
Speaker 3 (24:54):
I mean, it's also sort of a dated reference, the idea of, like, I don't know that Gen Alphas would get the idea of looking at your shoes in the elevator. But yeah, that's funny. Oops, they're looking at other people's shoes.
Speaker 2 (25:06):
Yeah, a lot of this, the reason we're super close to this, has to do with the mass surveillance that kind of slipped its way through post-9/11 on everybody.
Speaker 3 (25:20):
You can tell a lot about somebody by looking at
their shoes.
Speaker 4 (25:23):
Well, yes, you can, the history of when they buy shoes and wear, and you can, yeah, Sherlock Holmes-level detail observation. Shout out to PRISM.
Speaker 2 (25:35):
But the biggest factor for someone who is going to take on an action that would be anti whatever
your organization is, whether it's a government or a corporate
entity or whatever, you look at their associations with other people. Yes,
once you've got that data alone, you can deep dive
(25:58):
into other things to get you know, when do these
folks hang out, Where do these folks hang out, What
do these folks say on the phone to each other?
You can do that if you need to, But all
you really need at the surface is who's related to who?
Speaker 3 (26:12):
And that's what we used to call metadata, right, or at least in terms of, like, early NSA surveillance getting the lid blown off of it. They kind of even came out and defended themselves by saying, no, we're not listening in directly on you. We're just looking at this web of interconnected relationships and learning what we need to know from that.
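(A minimal sketch of that metadata point: with nothing but invented "who contacted whom" records, no content at all, you can already map a web of relationships and walk it outward.)

```python
# Pure metadata analysis: each record is (caller, callee), no conversation
# content. The names and records are invented for illustration.
from collections import defaultdict

call_records = [
    ("alice", "bob"), ("bob", "carol"), ("alice", "bob"),
    ("carol", "dave"), ("bob", "carol"), ("eve", "dave"),
]

# Build an undirected association graph from the records.
graph = defaultdict(set)
for caller, callee in call_records:
    graph[caller].add(callee)
    graph[callee].add(caller)

# "Who's related to who," plus the second-degree contacts one hop out.
person = "alice"
direct = graph[person]
second_degree = set().union(*(graph[p] for p in direct)) - direct - {person}
print(f"{person} knows {sorted(direct)}; one hop out: {sorted(second_degree)}")
```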
Speaker 2 (26:28):
Or make a Facebook.
Speaker 3 (26:29):
Maybe, m.
Speaker 4 (26:33):
Hey, guys, check out our group, Here's Where It Gets Crazy. Also, also, yes, million percent agreed. Nobody checked the math on that one. It was always a fig leaf. Just so you know, the monsters were always real. The call was always coming from inside of the house, way before
(26:54):
modern government. This is happening right now, just like when we sounded crazy about neonicotinoids and bees. Your conversations, the metadata, to your point, Noel, and the content are largely on record. The data you provide is not necessarily being
(27:16):
used against you. Instead, it's gonna be fed into multiple
governments via a proxy of private industry. It's gonna be
sluiced into this Leviathan level objectively impressive software, and then
it's going to be pushed toward what we would call
a matrix of probability.
Speaker 2 (27:36):
That's your thing, Ben, that's the minority report, right, a
matrix of probability?
Speaker 3 (27:41):
Yeah, yeah, what's the threshold exactly? Stats. I mean, it's, again, it doesn't require any kind of ESP, doesn't require any kind of ability to quote unquote predict the future. It requires an ability to seek out these patterns and then crunch the numbers with more and more degrees of accuracy.
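(As a toy illustration of that threshold point: the sketch below scores people from weighted signals and flags anyone above a cutoff. Every signal name, weight, and the threshold itself are invented; the takeaway is that the "prediction" is ordinary arithmetic, and where you set the threshold is a human policy choice, not a mathematical fact.)

```python
# A hypothetical "matrix of probability" in miniature: weighted signals,
# summed into a score, compared against an arbitrary threshold.
signal_weights = {
    "contacted_flagged_number": 0.5,
    "visited_flagged_location": 0.3,
    "unusual_travel_pattern": 0.2,
}

people = {
    "subject_1": {"contacted_flagged_number", "unusual_travel_pattern"},
    "subject_2": {"visited_flagged_location"},
}

THRESHOLD = 0.6  # invented cutoff: move it and the set of "suspects" changes

for name, signals in people.items():
    score = sum(signal_weights[s] for s in signals)
    verdict = "FLAGGED" if score >= THRESHOLD else "ok"
    print(f"{name}: score={score:.1f} -> {verdict}")
```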
Speaker 4 (28:00):
I remember there was this, there was this beautiful moment when we all realized that we are essentially disciples of The Matrix, the
Speaker 3 (28:11):
Film, and it was it was awkward.
Speaker 4 (28:14):
It was like finding out that people go to the
same church, they've just been sitting in different seats.
Speaker 3 (28:19):
And the disciples of Philip K.
Speaker 4 (28:21):
Dick. And I want to give you a shout out there, Matt, because you immediately called a lot of stuff that is only now being mentioned in the public sphere in recent years. We know that in February of twenty twenty,
(28:42):
Get this, guys, five years ago, in February twenty twenty, the US said, we're gonna commit, via Executive Order, we're going to commit to doubling investment in what they called non-defense AI R and D. We've got a quote
(29:03):
here from the National Archives, from the Executive Order, and it's just funny because, Noel, it sounds kind of like the AI slop that you referenced earlier. We used the word slop.
Speaker 3 (29:16):
Can we get the quote? "Not only must AI investments continue to emphasize the broad spectrum of challenges in AI, including core AI research, use-inspired and applied AI R and D, computer systems research in support of AI, and cyberinfrastructure and data sets needed for AI." And I
(29:37):
just think you're so good looking and smart and love your ideas so very much.
Speaker 2 (29:42):
Hah, that's a great idea. Let me get started on that.
I can make a plan if you'd like.
Speaker 3 (29:47):
So.
Speaker 4 (29:48):
According to this plan, this proposal, which again existed way before, right, was made public in twenty twenty. According to this, the investment will touch on almost every industry imaginable. And if you read the quotation, you'll see the vagary
(30:09):
that I think irritates, fascinates, and excites us all about these high-level statements. AI, excuse me, non-defense AI R and D, it's gonna, it's gonna be great for communication, medicine, transportation, agriculture, science.
Speaker 3 (30:29):
They're just word salady. It's a broad spectrum, man. It's bad, it's, and it's use-inspired. But, uh, various use case scenarios, you know, will be addressed.
Speaker 4 (30:40):
But they hit a, they hit a minor key, not an Epstein joke, at the very end, where they say, also, security. And at that point you might be thinking, with validity, hang on a tick, didn't you say non-defense AI?
Speaker 1 (30:58):
Kid?
Speaker 3 (30:58):
You bother me?
Speaker 2 (31:00):
Well, guys, you need security to protect the AI itself, right? There we go.
Speaker 3 (31:07):
Okay, right, these are very expensive proprietary systems we're talking about. Of course. You're Shark Tanking us and I'm here for it. All right, let's say it.
Speaker 2 (31:16):
This data, this data infrastructure that we're talking about, you guys, we're talking about football field upon football field of servers, and that stuff is sensitive, so you're gonna need to protect it with, I don't know, the.
Speaker 4 (31:31):
Military, private sector, third-party contractor. Yes, that definitely did not once use the name Blackwater.
Speaker 3 (31:46):
There you go. Yeah, where's old Eric?
Speaker 2 (31:49):
Where's he sending people right now?
Speaker 3 (31:50):
Was it?
Speaker 2 (31:52):
He's got a big operation going on in Africa right now?
Hm reminds me of our talks about Wagner and what
they were doing in Africa. Hold on, wait a second,
I'm making connections here that I'm not supposed to be making.
Wait uh oh oh.
Speaker 3 (32:06):
Ask chat GPT, man. Error, error.
Speaker 4 (32:11):
Or what, what is that, the little... Oh, South Park mentioned it as well, the...
Speaker 3 (32:20):
I love those, Dylan.
Speaker 4 (32:21):
Sorry, if you need to do the sound cue for that,
got your back.
Speaker 3 (32:26):
There it is, Oh Matt got it?
Speaker 2 (32:28):
Oh wait, one second there. No more typing in chat GPT. We have to hear from our sponsors.
Speaker 3 (32:41):
And we have returned. So fast forward.
Speaker 4 (32:46):
It's the fourteenth of August twenty twenty five, not too
long ago, and this is when the United States of America officially announced the deployment of USAi, not to be confused with USAID, dot dot, that's
Speaker 3 (33:04):
no fun anymore. Yeah, but by way of NPR, we got.
Speaker 4 (33:11):
We've got an official press release here that might help
explain where the administration's head.
Speaker 3 (33:18):
Is at.
Speaker 2 (33:20):
The launch of USAi, a secure generative artificial intelligence evaluation suite that enables federal agencies to experiment with and adopt artificial intelligence at scale, faster, safer, and at no cost to them, now available at USAI dot gov.
Speaker 3 (33:41):
This sounds like the Tegrity Farms joke in the most recent episode of South Park, where they pivoted from, like, a weed business to who knows what, an AI-generated cyberweed concept.
Speaker 2 (33:54):
I'm not done. The platform puts powerful tools such as chat-based AI, code generation, and document summarization directly into the hands of government users within a trusted, standards-aligned environment.
Speaker 4 (34:08):
Use-inspired, standards-aligned. Oh my god, brother, I can't believe you are thinking about us. Thanks, man. So essentially this would be, what, a federal version of chat GPT? Chat USA? ChatSA? Yeah, this is for America. If we're
(34:30):
being honest, though, this is a tech security situation on
par with the acquisition and possession of the atomic bomb,
which means that even if your country, anywhere you live,
even if your country somehow understands the dangerous possibilities of
artificially created superintelligence, logically they're going to conclude, just like
(34:55):
the bomb, that not having it is more dangerous than
having it. Consequences be damned.
Speaker 3 (35:02):
I mean, you wouldn't want to not have it in
case you needed it. I mean, you know, even better
to just have it.
Speaker 2 (35:08):
What if it's just the CIA World Factbook, like, as the primary piece of, you know, stuff that it's fed, and then it's just real-time, you know, gathering after that point. But it's just everything in that book plus what's actually happening right now.
Speaker 4 (35:24):
Well, they'll have the public Factbook, but then they'll have the secret book too, and it's way bigger, it's way crazier, and in my opinion, it's less accurate. Oh dang. Yeah, because, you know, they're just sort of writing their opinions for decades and decades, you know what I mean. They're like, they're like, you know,
(35:45):
you'll get in the public one all the stats about the geography of Turkmenistan, right, and the public history. But in the secret one, you've got this three-page monograph where someone's like, the most fascinating thing about the Turkmen is their predilection for elbow.
Speaker 3 (36:03):
Oh? Elbow?
Speaker 2 (36:06):
Yeah. It also includes all the times that the US, you know, went in surreptitiously and overthrew governments. That's probably in the unofficial CIA
Speaker 4 (36:14):
Fact book, right all right, yes, And I just want
to note that is not a typo.
Speaker 3 (36:20):
They did say elbow.
Speaker 4 (36:21):
Singular. Show me some elbow. The predilection for elbow. So what does that mean?
Speaker 3 (36:29):
Does it literally mean, like, they're horny for elbows? I'm not at liberty to disclose.
Speaker 2 (36:34):
Okay, I'm gonna go ahead and say, oh.
Speaker 3 (36:37):
Okay, because same.
Speaker 4 (36:40):
Yeah, yeah, yeah. Actually, uh, the etymology of the term wenis, uh, does go back to Turkic language. Fine, fine, fine, I'll keep it.
Speaker 3 (36:52):
It's quote unquote podcast true.
Speaker 2 (36:54):
Yes, I'm just imagining this getting translated into German one day.
Speaker 4 (37:01):
Yes, that's gonna be the thing that makes us go viral. It wasn't COVID, it was our dumb Turkmen elbow joke.
Speaker 3 (37:10):
Mm hmm, yeah, that'll be quoted. How do they feel about ankle? was my question. Oh yeah, it might be a little too hot.
Speaker 2 (37:18):
Good luck keeping up with this conversation, you clanker.
Speaker 3 (37:23):
No, I'm just kidding.
Speaker 4 (37:27):
He knows not what he says. Clanker with the hard R. Mm, Jesus.
Speaker 3 (37:33):
You can't hang out anymore, buddy, I'm sorry.
Speaker 4 (37:36):
No, no, no, we're here, We're we're here. The only
way out is through. So yeah, the world leaders right now,
while they're still human, they're killing themselves to basically keep
up with the Joneses.
Speaker 3 (37:50):
Right. This is not an NFT, this is Oppenheimer.
Speaker 4 (37:56):
That's where we're at, and it's unfortunate that we're all
here together in twenty twenty five, at a time when
history is being written. People like their history at a remove.
You know what I mean, people like knowing about bad
stuff that happened in the past and looking at good
stuff in the future.
Speaker 3 (38:17):
We're in the middle point.
Speaker 2 (38:20):
Do you guys remember twenty five years ago when BattleBots
was like one of the biggest TV shows on the planet.
Speaker 3 (38:26):
BattleBots, forget it. Sorry, man, I wanted to be one of those bot controllers, a BattleBots guy.
Speaker 2 (38:36):
We're really sorry, whoever you, whatever AI, are listening, just that we even did BattleBots.
Speaker 3 (38:43):
That was wrong of us, was wrong of us. He
did not as well.
Speaker 4 (38:46):
Now, so we as the representatives of the organic civilizations take accountability and, uh, hope they're all right. Fine. Look and see. Despite all the valid alarmist concerns about these long-term consequences, we got to say it. A lot
(39:09):
of the applied quote unquote AI right now is bureaucratic stuff.
The people in charge are often worshiped like celebrities, and
folks tend to imagine that they are way dumber or
way smarter than they actually are. They are like college
(39:30):
students right now who get a massive assignment or a
paper and they ask something like chat GPT to give
it a paragraph summary, and then they respond to that summary.
Speaker 3 (39:43):
People on Capitol Hill are not immune to this. They're not above this. I mean, the most, you know, reasonable description of what this kind of stuff could be used for is like a little bit of a clerical assist, you know. Like, I think some of the best suggestions from smart friends of mine who have referenced using chat GPT is running it through for some legal advice, running a document
(40:06):
through to flag certain things that you can then go back and ask the right questions about. But not to put all your, you know, eggs in that AI basket.
Speaker 4 (40:16):
Noel, would you say that's kind of a targeting assistance
for a document?
Speaker 3 (40:22):
No I would not say that. And you can't make me.
Speaker 2 (40:24):
We need to get Palantir in here. You know, it's crazy, guys, three people on their board of directors are named Alex. They've got Alex Karp, Alex Moore, and Alex Schiff. Isn't that weird?
Speaker 3 (40:38):
Does sound like characters from Dark City or something like that. Why all the Alexes? This is some dark stuff we're talking about here. Yeah.
Speaker 4 (40:52):
Hold the chatbot here. We're going to pause for a two-part series. We knew this was going to be a series going into it. We hope you have enjoyed this, and there's so much more to get to later in the week. In the meantime, folks, human and bot alike, we can't wait to hear your thoughts, so hit us up via email. You can call us on
(41:14):
a telephonic device, or you can find us on the lines.
Speaker 3 (41:17):
Yeah, hit us up at bot thoughts at stuff they don't want you to know dot com. Yeah, no, don't do that. You can, in fact, though, find us at the handle Conspiracy Stuff, where we exist on Facebook with our Facebook group Here's Where It Gets Crazy, on X, FKA Twitter, and on YouTube, where we have video content for your perusing enjoyment. On Instagram and TikTok, however, we're Conspiracy
(41:38):
Stuff Show. And Matt's got more for you.
Speaker 2 (41:40):
Oh yes, there are so many uses of AI and
so many different parts of government. What are the ones
you're seeing that are giving you pause, the things that
are making you not sleep so great lately? Or maybe
is it this?
Speaker 3 (41:54):
Maybe it's other stuff. It's probably other stuff. It is
for all of us.
Speaker 2 (41:57):
But when you find something, why don't you give us
a call and tell us about it. Our number is
one eight three three STD WYTK. When you call in,
give yourself a cool nickname and let us know within
the message if we can use your name and message
on the air, If you've got links, if you've got attachments,
if you've got anything to say at all, why not
send us an email?
Speaker 4 (42:17):
We are, pull the phone, special message before we get into that: zero one one zero zero one one zero one one one
Speaker 3 (42:24):
Zero Ben How dare you we.
Speaker 4 (42:26):
Are the entities that read each piece of correspondence we receive,
be well aware, yet unafraid. Sometimes the void writes back
bought human. Otherwise, join us out here in the dark
conspiracy at iHeartRadio dot com.
Speaker 2 (42:59):
Stuff they Don't Want you to Know is a production
of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.