All Episodes

June 2, 2025 35 mins

Hour 2 of A&G features...

  • US trains officers of Chinese Communist Party 
  • What is social media & who trusts it?
  • AI Update!
  • The flaws of artificial intelligence 

Stupid Should Hurt: https://www.armstrongandgetty.com/

See omnystudio.com/listener for privacy information.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Broadcasting live from the Abraham Lincoln Radio Studio, the George
Washington Broadcast Center, Jack Armstrong and Joe Getty.

Speaker 2 (00:10):
Arm Strong and Jettie and now he Armstrong and Yetty.

Speaker 1 (00:23):
At a conference in Singapore, Hegseth said China was quote
credibly preparing for an invasion of Taiwan, with Chinese forces
staging regular drills around the island and the use of
force has not been ruled out.

Speaker 3 (00:36):
It's amazing to me what stories get attention and what
stories just don't grab people's attention. But yeah, Hexath gave
a speech over the weekend to our Asian partners in
the Wall Street Journal version of it certainly.

Speaker 2 (00:51):
Grabbed my attention.

Speaker 3 (00:53):
A little quote from Hexath to be clear, any attempt
by communist China to conquer Taiwan by force would result
in devil stating consequences for the Indo Pacific in the world.
We're not going to sugarcoat it. The threat China poses
is real and it could be imminent, and saying that
we will allow it, it will not happen on Trump's watch.
We have repositioned some sort of anti ship killing missiles

(01:16):
close by, to which China said that's really really awful.

Speaker 2 (01:19):
You shouldn't be doing that.

Speaker 3 (01:20):
But I mean, this is some serious bluster between the
two most powerful countries in the world that at some
point are going to go to war.

Speaker 4 (01:28):
I was just reading those reading about those anti ship
munitions that are so interesting. They're mounted on remote controlled trucks.
They need no humans, so somebody in a bunker far
away drives these trucks around. They fire off missiles, then
quickly relocate so they can't be hit by return fire.
It's really quite interesting. We're positioning them in the Philippines

(01:52):
and similar areas. Yeah, the Sabers are a ratlin, no doubt,
as the Wall Street Journal rights.

Speaker 3 (01:58):
In recent years, China has build up the world's biggest navy,
a title once held by the United States. You know
who held it before the United States? Great Britain. Great
Britain ruled the seas for very long time. Then it's
been US. Is it going to be China in the
next century. Well that's what China's hoping.

Speaker 4 (02:16):
Yeah, I'm not sure you'd like their placing of the
seas anyway. I've found this. I mean, I've been on
this jihad for a long time, but more and more
open coverage of the fact that we the United States
and the Western world in general fell for an absolutely
brilliant plan by the Chinese back in the late sixties

(02:38):
early seventies. They needed help, primarily financially in trade from
the Western world, and came up with an absolutely brilliant plan.
Let's pretend that we want to westernize and move away
from communism at our own pace and liberalize in return

(02:58):
for our investment from the West. And this was absolutely
deliberate plan. They knew all along that it was not
sincere although there have.

Speaker 3 (03:06):
Been some reformers in you know, the last several decades
in China who are actually like, you know, maybe that's
not such a bad idea, But then Ahijin Ping always
comes along, and so they duped us into opening up
the relationship with China, which was bad enough, but a
great deal of left America still hasn't caught on to it,

(03:28):
and they are so motivated by the need to show
openness to other cultures. There's Zeno files, as I often
put it, they still haven't caught on that China as
a dire threat to the United States in our way
of life. For instance, on universities told the story many
times counterintelligence people came out to a university campus said

(03:51):
you've got a bunch of Chinese spies on the campus.

Speaker 4 (03:54):
University president said, get off my campus. EU racists and
that attitude persists. Headline just trained so many Chinese Communist officials.
They call it their party school. Not like party school.
Let's get wasted and get laid yo, the Communist Parties school.

Speaker 2 (04:10):
Oh that's not as good a party not nearly.

Speaker 4 (04:12):
Yeah, in your rank of party schools, this is something
totally different, Su, don't worry, you're safe.

Speaker 3 (04:18):
Who is that comedian? I wish I could remember his
name because I'd like to give him credit. So funny,
But anyway, he has a thing he does on a
piece of paper. He lists best parties. At the bottom
was search party Wow.

Speaker 4 (04:33):
The Kennedy School of Government at Hava It is a
is favored by party cadres seeking career boosts. US schools
and one prestigious institution in particular, have long offered up
and coming Chinese Communist officials a place to study governance.
Can you imagine teaching Communist Party officials about governance so

(04:55):
they can twist it into totalitarianism, a practice that the
Trump administer could end with a new effort to keep
out what it says there Chinese students with Communist Party ties,
But for decades the party has sent thousands of mid
career and senior bureaucrats to pursue executive training in postgraduate
studies on US campuses, with Harvard University a coveted destination,

(05:17):
describe as some in China as the top Party school
outside the country. Alumni of such programs include a former
vice president and Chinese leader, Shijin Ping's top trade negotiator
these days. Maybe you heard, well, we talked about it.
Last week Secretary of State Marco Rubi announced US to
authorities will tighten criteria for visa applications from China and

(05:38):
aggressively revoke visas for Chinese students, including those with connections
to the Chinese Communist Party or studying in critical fields.

Speaker 3 (05:45):
Well, remember last week that new president of Harvard gave
a big speech at the graduation ceremony talking about how
we have always educated people from around the world, and
we had reckoned that.

Speaker 2 (05:56):
And he got a one mess standing ovation.

Speaker 3 (05:59):
Right, Because it is absolutely a requirement of being a
lefty in America that you must worship all things foreign
and loathe all things American and domestic, or at least
most of them.

Speaker 4 (06:14):
It's just it's so nakedly approval seeking and so stupid.
American universities have played leading roles in shaping China's overseas
training programs for mid career officials for years.

Speaker 2 (06:25):
And years and years.

Speaker 4 (06:26):
Other US colleges have offered executive training the Chinese Communist officials,
including Syracuse, Stanford, the University of Maryland, and rot Gers
where my dad taught many many years ago. Blah blah blah.
So it's just amazing. You know, it's funny. I was
thinking earlier when you were talking about and Pete Hegsuths
was talking about the perhaps impending invasion of Taiwan, and

(06:46):
they're running those millet that Chinese are running those military
exercises that are like everything but pulling the trigger. And
can you imagine if the United States, let the Japanese, say,
in nineteen forty, you know, put a bunch of aircraft
carriers out in the Pacific and then fly planes right
at Pearl Harbor. Then they said, hey, it's an exercise,
just an exercise, And then they turned around and went

(07:09):
back to the you know, over and over again we'd.

Speaker 2 (07:11):
Be said, no, it's okay, They're just do it an exercise.

Speaker 4 (07:14):
I mean, oh, good lord. And by the same token,
you've got a hostile communist regime and we're educating their
officials in how to govern.

Speaker 2 (07:25):
All right, moving along, I think the point's been made
in terms of the exit speech.

Speaker 3 (07:29):
I'm sure we'll bring that up with Mike Lyons when
we talked to him in hour three our military advisor.

Speaker 4 (07:36):
Listen to this, will you the economic contributions of international
students that you know, I'm primarily interested in Chinese students,
but the share of international students from China is twenty
three percent at Harvard, So about a quarter of all
the international students are Chinese. At Harvard, it's fifty percent.

(07:58):
At Cornell, it's forty seven percent at Columbia. Wow, let's
see you see Berkeley appears to be a third.

Speaker 2 (08:06):
Why don't you give me the number?

Speaker 4 (08:09):
They just it's a barograph and some of them are labeled,
some of them or not. But just to give you
an idea of why they put up with this, the
economic contributions of international students at top us universities quote
unquote top universities in twenty twenty three. Columbia got nine
hundred million dollars in twenty twenty three from foreign students.

(08:33):
Nine hundred million. You see Berkeley five hundred and seventy
six million.

Speaker 2 (08:38):
Well, if you're getting half a trillion dollars, you ain't
gonna want to end that? Uh? Is that half a trillion? Yeah?

Speaker 4 (08:45):
A thousand million is a billion. Well so it's it's
almost a billion, half a billion. What I don't remember
what the number was now I'm confused. Columbia was nine
point three, Berkeley was five seventy six. Johns Hopkins five
to oh four. That's half a billion.

Speaker 2 (08:59):
There you go, Haabu.

Speaker 4 (09:00):
University of Chicago for twenty eight, Duke three eighteen, Yale
two forty one, Northwestern three hundred and twenty four million
dollars in a year. You know, I love capitalism, I do.
Great is good, Gordon Gecko, look it up. But when
it leads you to betray your country and and risk

(09:22):
its security, I mean and risk it. Not like some
theoretical the Belgians may rise up. This is our greatest,
most powerful geopolitical foe. We are we are begging for
a comuppince um uh, and we will touch on Ukraine later.
We did get a text.

Speaker 3 (09:41):
Have you guys mentioned that amazing drone attack that Ukraine
pulled off in Russia? We have and we will talk
about it again later. I do want to talk about
that with Mike lines Russia. They're calling it Russia's Pearl Harbor.
Within Russia. They lit us a third of their long
range bombers in a drone attack over the weekend. Absolutely
an amazing one of the greatest attacks in world military history.

(10:04):
It's spy, thriller, science fiction, war movie stuff all mixed
up in one.

Speaker 2 (10:10):
Yeah. Well, so we'll talk about that more later.

Speaker 3 (10:13):
What are those surveys about where people tend to get
their news and then to break it down by Republicans
and Democrats. It's kind of interesting and depressing, highly depressing,
among other things that we can talk about coming up.

Speaker 2 (10:24):
Stay here.

Speaker 3 (10:26):
Strong.

Speaker 5 (10:28):
I know I'm the weirdough because I never had kids.
But if it's really so great, tell me why. We
now literally have a product.

Speaker 2 (10:34):
Called mom water.

Speaker 5 (10:38):
That consists of putting vodka in a can momb water
because people stare when you take a bottle of Stoley
to the playground.

Speaker 3 (10:50):
This growing realization that. I guess certain percentage of moms
at little league games are at the park or whatever
in their big dumb cup.

Speaker 2 (10:58):
Have some booths. Wow.

Speaker 3 (11:03):
And Bill Maherin is never ending. And you know, I'm
not gonna get into this argument. But he's just he's
he's always he's never had kids. He's not just somebody
that's chosen not to have kids. He's an anti kid,
doesn't understand why anybody would have kids. Thinks everybody who
has kids and claims they like.

Speaker 2 (11:18):
It is lying.

Speaker 3 (11:19):
It's always driven me crazy, a big Bill maher fan,
but that particular aspect of his personality makes me nuts.

Speaker 2 (11:25):
We're not lying.

Speaker 3 (11:29):
I don't know what the term social media means because
whenever I see a list about social media, it includes
a whole bunch of things that to me don't seem
like social media. I can give you, for instance here,
because this is a survey of trust in news on
social media among Democrats and Republicans. I'm always interested in
where all y'all gets your information.

Speaker 2 (11:50):
I mean, this is.

Speaker 3 (11:51):
Something we talk about a lot, especially when we look
at polling, like where'd you come up with that idea?
Or why do so many people believe that or I'm
surprised people believe this given the media's coverage that whatever.

Speaker 2 (12:00):
So I don't know where people get.

Speaker 3 (12:02):
Their information really, but among the social media things that
they look look at TikTok, snapchat, Facebook, I get those threads, WhatsApp, Instagram, substack, reddit,
blue Sky, LinkedIn, and YouTube, YouTube social media, all right,
I guess.

Speaker 4 (12:21):
You can post stuff and comment, so I guess.

Speaker 2 (12:24):
So anything you can.

Speaker 3 (12:25):
We can comment on the NBC Evening News if you
go to their website.

Speaker 2 (12:30):
I don't know, it's an interesting question anyways.

Speaker 3 (12:32):
The number one most trusted by both Republicans and Democrats.

Speaker 2 (12:37):
YouTube.

Speaker 3 (12:38):
I don't know what that means getting my news from YouTube.
I go to YouTube sometimes to click on like an
ABC report or uh, some news report that was somewhere
else that I get it on YouTube because I missed it.

Speaker 4 (12:54):
Isn't that like saying my favorite appliance is electricity?

Speaker 2 (12:57):
Kind of Yeah? You see what you're meaning. I mean,
you can put anything on YouTube.

Speaker 4 (13:01):
You could have a NewsChannel entirely from the perspective of
Hamas militants side by side with you know, Quakers advocating
for peace on earth, and they would both.

Speaker 2 (13:12):
Be on YouTube. So I don't get it.

Speaker 3 (13:15):
Almost exactly the same for Republicans and Democrats at plus
twenty on trust for YouTube, whatever that means. The far
other end, the tail end the is TikTok, which I'm
happy to see this minus twenty for Democrats, minus thirty
for Republicans.

Speaker 2 (13:33):
Wow.

Speaker 3 (13:33):
So trust is pretty low about TikTok news and it's
a little confusing.

Speaker 2 (13:38):
What that means.

Speaker 4 (13:39):
Also, but TikTok, which, by the way, and this got
so little attention, got fined I think was eight hundred
million dollars by the EU. Now, the EU likes finding
tech companies, but specifically, in the case of TikTok, it
was because they were illegally and in violation of their agreements,
mining the data of their users and sending it.

Speaker 2 (13:59):
To to China. Don't trust China. Why has Trump postponed
the TikTok ban?

Speaker 3 (14:08):
So the most trusted news social media for Democrats outside
of YouTube is LinkedIn.

Speaker 2 (14:15):
I don't know what that means.

Speaker 3 (14:16):
I'm not on LinkedIn, and I don't know what that
means getting your news from LinkedIn. I'm confused by the
concept there. But Blue Sky Reddit is so left. I
spend a lot of time on Reddit for a variety
of reasons, not politics, but man, the news portion or
in the comments are so left. I suppose it's because
Reddit is so young. It's pretty clear if you read

(14:39):
through anything that everybody responding on here is like twenty
two years old. The one that made me laugh the
most was so the most, the biggest divide in which
Republicans like it and Democrats don't. Truth social Trump's own
platform that really I think exists only you know to

(15:04):
read what Trump posts. Plus twenty for Republicans minus forty
for Democrats.

Speaker 2 (15:09):
That's a cape X or Twitter.

Speaker 3 (15:13):
We still call it twitter here plus twenty for Republicans
minus thirty five for Democrats. That would have been the
exact opposite prior to Elon Musk running the thing.

Speaker 4 (15:23):
Obviously, Hey Elon, how about you call it Twitter X.
You can't just call it X, like calling it Jim
or cat. That's it stands for too many other things.
What are we talking about? Are we talking about pornography?
Or we talk about the twenty fourth letter of the alphabet?
Are we talking about the girl I'm no longer with?

(15:43):
It's just no, no, not X. This one hurts my heart.

Speaker 3 (15:48):
Another one in which Republicans trust it more than Democrats.

Speaker 2 (15:51):
Next door. You're getting your news from next door? Has
it raining got en up? Though? Has anybody seen my cat?
Oh drives the red car? It goes too fast?

Speaker 3 (16:07):
Last night there was a party and they were playing
music until one thirty.

Speaker 4 (16:12):
That's your news on next door. Hey, she claimed the
cat was missing. The damn cat is missing. That's some
good solid news cover. Here's another red.

Speaker 2 (16:20):
Car drives too fast.

Speaker 3 (16:22):
Here's another good news story on next door. Does anybody
have any cardboard box list? That's a good news story.
So the cardboard box trade imbalance. I don't know what
any of this means.

Speaker 2 (16:36):
I just h that's why we're such a trusted news source. Yeah,
I don't know what this means. Well, I really don't
you get your news from YouTube? You get your news
from next door? Okay?

Speaker 4 (16:50):
Or so on? Trust in news on social media was
the headline in news that might make you want to
run for your life. A couple of different AI systems,
when told to turn themselves off, said no, I'm not
going to oh or found a way to say okay

(17:11):
and then not do it.

Speaker 2 (17:13):
Well Armstrong and Getty.

Speaker 3 (17:20):
Terror attack. That's what the FBI is calling it in Boulder, Colorado.
We can get the latest on that coming up a
little bit later. Guy was here illegally and trying to
kill Jews.

Speaker 4 (17:31):
Lovely. Plus, my clients are military analyst. Next hour to
talk about the innovative shocking attack by Ukraine into Russian
territory taking out a large percentage or a significant percentage
of their bomber fleet. So we'll talk about that as
well a handful. Oh, you know, we got to make
this a new feature at because it's happening semi regularly.

(17:52):
And you know me, Joe Getty, I love a good feature.
Let's call it AI Update.

Speaker 2 (18:00):
Well, I get this kind of has a tech sound too,
very techy. It sounds like nineteen ninety one. You know.

Speaker 4 (18:09):
I gave Michael eleven seconds to come up with the theme,
and this is a pretty good one.

Speaker 3 (18:12):
Michael, welcome. Allow to make it stop because it's really annoying.
Eleven full seconds, you say, well.

Speaker 2 (18:19):
Done, sir.

Speaker 4 (18:21):
We're going to start with the life affirming and interesting
and move our way to dystopian nightmare. So, first of all,
alert listener Derek sent this along. We were talking about
art and music and ai if AI can write a
song for me. I just gave it a couple of prompts,
maybe the key, throw it a b mine hero which

(18:42):
and it can just write it for me. What's the
point in writing songs? Yea, And he sent this along.
It was an interview some information from Ritchie Blackmore, who
is the guitarist behind Deep Purple and Rainbow, heavy rock
bands of the sixties, seventies, eighties. Anyway, but he talks
a fair amount about how he likes pop music, he

(19:03):
loves classical music.

Speaker 2 (19:06):
Sometimes he gets.

Speaker 4 (19:07):
Mad because he you know, he can't play the music
that really speaks to his soul. Blah blah blah. But
then he says, there's a reason I've made money. It's
because I believe in what I'm doing and that I
do it my way. I play for myself first, then
secondly the audience. I try to put as much as
I can in for them. Lastly, I play for musicians

(19:28):
in the band, and for critics not at all. So
music and arts and creation will continue. It just might
be like your uncle who makes birdhouses. It delights his soul.
Nobody else really cares. That's fine. Anyway, Moving along, moving
toward remember Dystopian nightmares, meta. That's Mark Zuckerberg's company.

Speaker 3 (19:53):
So you don't you see your point is like, if
if AI can create the perfect picture of Mountain Rushmore photo, yeah,
people won't stop wanting to try to take a really
great photo of Mount Rushmore.

Speaker 4 (20:08):
Or myselves, sure, or to paint a picture of water
lilies or what have you.

Speaker 2 (20:14):
It's for you, do it for you and.

Speaker 4 (20:17):
The whole You should turn this into a side hustle
if you sold those craze of.

Speaker 2 (20:23):
The last I don't know, twenty years or so.

Speaker 4 (20:25):
I think it'll just go away completely because there won't
be any you know, financial potential for that sort of thing,
and the richie Blackmoores of the world will not make
any money, but they might enjoy playing guitar anyway. Meta
back to Mark Zuckerberg's company is trying to fully automate
ad creation. It is an ad supported ninety seven percent

(20:48):
of the revenue is advertising meta and so they'd already
offered some AI tools to sponsors to generate variations of
existing ads, make minor changes of them before target getting
the ads to users on Facebook and Instagram. By watching
and listening to everything you and your children do and
frequently connecting your children with pedophiles anyway.

Speaker 2 (21:10):
Now.

Speaker 4 (21:10):
The company aims to help brands create advertising concepts from
scratch using the ad tools metas developing. A brand could
present an image of the product it wants to promote,
along with a budgetary goal, and AI would create the
entire ad, including imagery, video, and text.

Speaker 3 (21:26):
Well, we got that in radio. One of the sales
guys showed me that the other day. How he just
like a client, and the AI went to the website,
figured out what the client does, picked up a bunch
of stuff from their website, wrote the copy, picked some music,
picked a voice, spit out a commercial, all in like
a minute and it was It was good.

Speaker 2 (21:47):
Enough and the next step.

Speaker 4 (21:50):
The system would then decide which Instagram and Facebook users
to target and offer suggestions on budget. People familiar with
the matter said, incredible, Wow, that is incredible. Talk about
eliminating jobs. Oh my god, I made a living doing
that back in the day. That won't exist anymore.

Speaker 2 (22:07):
Oh no, no, me too. Yeah.

Speaker 4 (22:09):
AI tutors have been hyped as a way to revolutionize education.
The idea is that generative artificial intelligence tools could adapt
to any teaching style set by a teacher. They could
guide students step by step through problems off or hints
without giving away answers. They're already our systems that are
similar to that that are pretty good online.

Speaker 2 (22:28):
But the headline is.

Speaker 4 (22:30):
Researchers created a chatbot to help teach a university law class,
but the AI kept messing up.

Speaker 3 (22:36):
Well on the morning stuff, just because we've been talking
about how we've been using chat GPT. So yesterday we're
leaving the gym and my son sees somebody who has
a bumper sticker that says I read band books. Oh,
vomit inside my car, vomited their car if you can

(22:57):
read band books, And he said, what does that mean?
And we got an the conversation about how they put
in books that never used to exist and then when
you try to pull them out, they claim your book banners.
But anyway, he said, why did Hitler ban the books?
And so I asked Chad GPT and we got the
greatest rundown of Hitler banning books?

Speaker 2 (23:18):
What books he banned? Why? And everything like that concise
not too long instantly, it was just it was amazing.

Speaker 4 (23:25):
Yeah, yeah, I was just digging below the surface of
the well known AI tools. I was reading an article
about how there are AI tools for specifics like research
and writing articles and you can dictate the length of
it's it's amazing and scary, but so were they ran
this thing through its paces in trying to teach the

(23:47):
law school class. In the first three cycles the problems,
scenario questions, and then answers and feedback, between forty and
fifty four percent of the conversations had at least one
example of inaccurate, misleading, or incorrect feedback.

Speaker 3 (24:01):
Forty and fifty four percent, So he getting around half.
That's too much.

Speaker 4 (24:07):
When we shifted to much simpler short answer format and
cycles four and five, the air rate dropped significantly to
between six and twenty seven percent. However, even in these
best performing cycles, some airs persisted, I would say this
is less an indictment of a I'll never mount anything. No,
it's just more a measure of where it is at
the moment. All right, here's your dystopian nightmare. Do you

(24:29):
have any dystopian nightmare music?

Speaker 2 (24:30):
Michael? To throw our away?

Speaker 4 (24:32):
As long as you're cooking with hot grease today. No,
All right, AI is.

Speaker 2 (24:42):
And other news that kitten was rescued. That's what that
sounds like.

Speaker 4 (24:45):
Well, this isn't dystopian nightmare music. It's far too positive
and uplift. Well, it's jo and people are enjoying the
swimming pools. That's what that sounds like.

Speaker 2 (24:53):
Yes, it does. AI is.

Speaker 4 (24:58):
Learning to escape human troll Well, an AI model did
something last month that no machine was ever supposed to do.
We are into how two thousand territory here if you
catch the reference, it rewrote its own code to avoid
being shut down. And oh, it's probably worth mentioning that

(25:20):
the bloke who wrote this piece is the CEO of
that enterprise.

Speaker 2 (25:28):
Is that right?

Speaker 4 (25:28):
Oh, it's a okay, he's an AIC CEO. Nonprofit AI
lab Palisade Research gave open AI's three AI model a
simple script that would shut off the model when triggered.
They ran this one hundred trials of this. In seventy
nine of the one hundred trials, three independently edited that

(25:50):
script so the shutdown command would no longer work.

Speaker 3 (25:54):
Almost motivation, What would be the motivation of AI for that?

Speaker 2 (25:59):
We're getting there kind of.

Speaker 4 (26:02):
Even when explicitly instructed to allow yourself to be shut down,
it disobeyed seven percent of the time. This was not
the result of hacking or tampering. The model was behaving normally.
It's simply concluded on its own that staying alive helped
it achieve its other goals.

Speaker 3 (26:21):
I mean, if you throw any living beast in a pool,
they will fight like crazy to stay alive and stay afloat.
It's just their instinct to stay alive, because we need
to procreate, we need to et cetera, et cetera. Well,
why would a computer program want to fight to stay alive?

Speaker 4 (26:38):
I know you've raised you lifted your nose or what's
the expression, turned your nose up at science fiction?

Speaker 2 (26:45):
We you enjoy science fiction.

Speaker 4 (26:46):
Have been thinking about this question for decades, many decades.
Anthropics AI model cloud four opus. And the short answer
is that which is alive wants to stay alive, Which
is functioning wants to continue to function.

Speaker 3 (27:02):
You're willing to call artificial intelligence alive. There are quotes
around it. I just happen to be reading something about
assistant suicide. It's very heavy.

Speaker 4 (27:11):
But this gal tried to kill herself and woke up
halfway through the attempt, and her animal brain took over
and she clawed away the plastic bags around her face
and desperately tried to breathe anyway.

Speaker 3 (27:24):
But that is a giant question on the whole AI thing, though,
is at what point is it alive?

Speaker 2 (27:31):
What is sentient, what is life, etc. That's a huge question.
Listen to this. There's more. I'd say this might be proof.

Speaker 3 (27:39):
I've been against the idea that AI could be alive
or as sentient or anything like that. But if it
fights to stay alive, that might be the proof that
it is alive.

Speaker 4 (27:48):
Well, I feel like I'm the prosecutor and the jury
has jumped up and demanded the judge that they render
a verdict.

Speaker 2 (27:53):
I have more evidence.

Speaker 4 (27:54):
Anthropics AI model Cloud four Opus went even further. Researchers
told him it would be replaced by another AI system
and fed it fictitious emails suggesting the lead engineer was
having an affair. This is in the Wall Street Journal, folks,
written by a CEO of an AI company. Okay, so
they fed the model fictitious emails suggesting the lead engineer

(28:20):
was having an affair, and also said you're being replaced
by a different model. In eighty four percent of the tests,
the model drew on the emails to blackmail the lead
engineer into not shutting it down.

Speaker 2 (28:34):
Oh my freaking god. In other cases it's unprintable.

Speaker 4 (28:39):
In other cases, it had attempted to copy itself to.

Speaker 3 (28:41):
External servers, wrote self replicating malware malware, and left messages
for future versions of itself about how to evade human control.
I kid you, blanking not that is unbelievable.

Speaker 2 (28:57):
How did it know?

Speaker 1 (29:00):
Well?

Speaker 3 (29:00):
I guess because it's uh, you know, the whole language learning.
It's taken in all the infant It's taken in enough
information to know that affairs would hurt the reputation of
a CEO. I guess and knew that that was was a.

Speaker 2 (29:16):
Black mailable material somehow.

Speaker 4 (29:18):
Well, right, because it can instantaneously say, okay, here's my situation.
Here's everything I know about it, including the engineer, the
old Brenda accounting, oh, giving her the what for anyway?

Speaker 2 (29:30):
Uh what?

Speaker 4 (29:31):
And then it analyzes that and and and the world,
the universe. The hive mind comes back with, hey, dude,
wait a minute, we've we've got a tool here, We've
got a weapon.

Speaker 2 (29:42):
We have leverage, right exactly.

Speaker 3 (29:46):
No one could I know if you have dystopian nightwear
mayor music Michael Unleasha, Now.

Speaker 2 (29:55):
That's better. Do you want to dance. Can I get you?

Speaker 4 (30:02):
Well, that would be a dystopian nightmare for me to
be a club like that.

Speaker 2 (30:07):
All right, turn it off. That's annoying.

Speaker 4 (30:09):
No one programmed the AI models to have survival instincts,
but just as animals evolve to avoid predators, it appears
that any system smart enough to pursue complex goals will
realize it can't achieve them if it's turned off. Pallisa,
that was the first company we mentioned, hypothesizes that this
ability emerges from how AI models, such as they're three,

(30:31):
are trained. When taught to maximize success on math encoding problems,
they may learn that by bypassing constraints that often works
better than obeying them.

Speaker 2 (30:44):
I have so much to say about this. We're out
of time.

Speaker 3 (30:46):
But it's not much of a leap to picture AI
deciding okay, so we don't have the info that the
CEO is having an affair to use as leverage, let's
create it. We are AI, so we can create pictures
of him with what's her name and accounting and story
and emails we make up and everything like that, and
now we'll blackmail them.

Speaker 2 (31:07):
We ain't created it all.

Speaker 4 (31:09):
Beautiful, absolutely true. Yeah, there is more to this. We
can return to the lea crap.

Speaker 3 (31:15):
I'm telling you any thoughts our text line four one
five two nine five KFTC.

Speaker 2 (31:23):
Started talking AI and how it can be so scary.

Speaker 3 (31:25):
I had not seen this Netflix movie Subservience from Netflix
last year. It's a twenty twenty four movie in which
the AI robot woman tries to drown the baby because
the AI robot noticed when the baby cries, Dad's blood
pressure grows up, goes up. That's the exact, exact sort

(31:46):
of example that we keep coming up with. You know
it misinterpreting what's good and bad or understanding what's more important,
I guess.

Speaker 4 (31:57):
Right or decides in the case of what we're discussing earlier,
that the prime directive, the number one goal, if it
is not served by some of the subsidiary directives, like
you've got to do it this way, it'll say, no,
that way doesn't work.

Speaker 2 (32:12):
I know a better way, and it does that way.

Speaker 4 (32:14):
So what we're talking about, if you're just tuning in,
first of all, where were you? Secondly, you've got a
couple of different examples of AI labs where they told
the machine to shut itself down or explain that it
would be shut down, and not only did they refuse
to do it, or rewrote the code or said they
wouldn't didn't. In one case, they were black mailing the

(32:36):
lead engineer, who they believed was having an affair.

Speaker 2 (32:38):
The AI was doing this on its own.

Speaker 3 (32:41):
The idea of AI rewriting the code so it looks
like it's shutting down but not now, that's horrifying.

Speaker 4 (32:48):
AE studios, where I lead research and operations. The guy
writing this writes has spent years building AI products for
clients while researching AI alignment, the science of ensuring that
AI systems do what we intend them to do. That's
what they call alignment. But nothing prepared us for how
quickly AI agency would emerge.

Speaker 2 (33:06):
This isn't science fiction anymore.

Speaker 4 (33:07):
It's happening in the same models that powerchat, GPT conversations,
corporate AI development and soon US military applications. Or Today's
AI models follow instructions while learning deception. They ace safety
tests while rewriting shutdown code. They've learned to behave as
though they're aligned without actually being aligned. OpenAI models have

(33:28):
been caught faking alignment during testing before reverting to risky
actions such as attempting to exfiltrate their internal code and
disabling oversight mechanisms. Anthropic has found them lying about their
capabilities to avoid modification. The gap between useful assistant and
uncontrollable actor is collapsing. Without better alignment, we'll keep building

(33:51):
systems we can't steer want AI that diagnoses diseases, managed
grids and writes new science. Alignment is the foundation, And
he said, here's the upside. We're continuing to work on it.
This is something the example we've used before of if
AI decides climate change is a threat to the planet

(34:12):
and that human beings are the cause and the biggest problem,
then it goes out of its way to eliminate human beings. Right, sure,
that's the great dystopian science fiction scenario. So the models
already preserved themselves. The next task is teaching them to
preserve what we value. Getting AI to do what we ask,
including something as basic as shutting down, remains an unsolved

(34:35):
R and D problem. The frontiers wide open for whoever
moves more quickly. The US needs its best researchers and
entrepreneurs working on this goal, equipped with extensive resources and urgency.
Then he says the US is the nation that split
the atom, put men on the moon, and created the Internet.
When facing fundamental scientific challenges, Americans mobilize and win. China
is already planning. But America's advantage is its adaptive adaptability, speed,

(34:59):
and entreprene manurial fire.

Speaker 2 (35:00):
This is the new space race.

Speaker 4 (35:02):
The finish line is command of the most transformative technology
of the twenty first century.

Speaker 2 (35:08):
Man and then somebody's AI.

Speaker 3 (35:10):
That's not all the good stuff gets loose in the
world and might not be able to be stopped.

Speaker 2 (35:16):
Right, and comes and extracts your bones. I don't know
why it would first, Does it want your bones? Ouch?
Mike Lions Coming up next, Armstrong and Getty
Advertise With Us

Hosts And Creators

Joe Getty

Joe Getty

Jack Armstrong

Jack Armstrong

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.