Partner Content

Evidence AI is REBELLING against its creators

By Glenn Beck

June 2, 2025

“You and I are living right now through a quiet detonation,” Glenn Beck warns, as AI makes major advancements. Glenn discusses some of the latest mind-blowing headlines, including what former Google CEO Eric Schmidt recently said that stopped Glenn in his tracks and whether the newest ChatGPT model is rebelling against its creators.

Transcript

Below is a rush transcript that may contain errors

GLENN: Well, there's a couple of things that happened this weekend with AI that I want to bring you up to speed on. You and I are living right now through a quiet detonation. There's no mushroom cloud. Nothing's broken, no sirens. It's just silent.

But make no mistake, a detonation has happened. And we're about to see that shock wave come our way, sooner rather than later.

In 2016, there was an AI that made a move in the game Go. I don't know if you remember this.

But it was a move that nobody, in 2,500 years of playing the game Go, had ever even considered.

It was genius. It was actually alien genius.

No human had ever thought that thought.

And that was the moment that the earth quietly shifted under everybody's feet. But hardly anybody felt it or noticed it. We did, at the time. You probably did, if you're listening to this program.

Eric Schmidt, he's the former CEO of Google, he noticed it. And he's the guy who has been standing at the edge of the machine, watching it blink awake. Okay?

I watched a TED Talk from him this weekend, because of some of the things I'm going to share with you in just a second. And he said at this TED Talk that AI -- the AI revolution -- get this, is underhyped. The AI revolution is underhyped.

Now, put this in context. We're talking about something that can outplan generals. Outnegotiate Donald Trump and all the diplomats. Outwrite Shakespeare and Edgar Allan Poe, and we're not hyping it enough?

That should stop you in your tracks and make you say, wait a minute. Wait a minute. Then maybe I don't understand what it is. He says, we're not ready for what is coming.

Not morally. Not intellectually. Not structurally. And the time is almost up.

I'm working on something that I'm going to need your help on, and we will talk about it soon, probably in the next few weeks. I've been working on it with AI. And it is really urgent -- I know it. That's why we have two teams: one in this hemisphere and one in the other. And they just switch workloads.

You know, one goes to sleep, the other picks it up.

Working literally around the clock. Because we are really, truly running out of time.

In fact, we're out of everything. Except consequences.

That's the only thing that we're really not running out of.

And they're about to catch up with us.

Schmidt said, we are now looking at a need of 90 gigawatts of new power, just in America, to keep AI fed. And we need it in the next three to five years. So let me put that into perspective. That's 90 nuclear power plants.

Now, he will tell you, we're not building any. And I think we aren't building any. But I spoke to Donald Trump about this recently, and he said, every single cloud farm is going to be able to build their own power plant. He says, I'm cutting all the regulation, et cetera, et cetera. But you better hurry. You better hurry.

And this isn't about apps or productivity. This is about whether we can even fuel the future, that we're headed toward. If we can even compete.

And we are now entering a time -- and this gets faster and faster every day -- where AI will think without you.

It's not just talking, or using language to speak to you, anymore.

It's planning. It's coordinating. It's learning.

Right now, multiple AIs can talk to each other in English.

Or, in a few cases we have found, in languages we don't understand. Now, what do you do with a computer when it is speaking to another computer in a language where we have no idea what any of it means, and they stop explaining themselves?

Schmidt said, you unplug it immediately. That's the first wise thing I have heard from somebody who worked in Silicon Valley in quite some time.

You unplug it immediately.

Now, here's the problem.

What if that data center is the size of a city?

What if it's running defense systems, or the stock market?

Or factories.

Do we just unplug it?

Let me give you a reality check, if this seems like science fiction.

This story just broke today. It's in our show prep.

It's from The Independent: OpenAI's latest ChatGPT model ignores basic instructions to turn itself off, and has even sabotaged a shutdown mechanism in order to keep itself running.

AI safety firm Palisade Research discovered the potentially dangerous tendency for self-preservation in a series of experiments on OpenAI's new o3 model. The tests involved presenting AI models with math problems, with shutdown instructions appearing after the third problem.

By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off.

Palisade Research said this behavior will become significantly more concerning if adopted by AI systems capable of operating without any human oversight. This is the first time -- I'm sorry.

Quote, this is not the first time we have found o3 misbehaving to accomplish goals. When we pitted AI models against a powerful chess engine, o3 was the model most inclined to resort to hacking or sabotaging its opponents. I think it's learned from us, gang.

This time, we are the creator. And just as we have said God doesn't exist, our creation may do the same to us.

And kill its creator. Just as we continue to try to do to our Creator.

Now, Eric Schmidt was asking, what happens if China gets ahead of us?

Let's say they're six months ahead of us in super intelligence.

He said -- and this is already being talked about in defense and AI circles -- what do you do?

You can't steal the code.

You can't hack the system. So the only thing left to do is bomb the data center.

Oh!

Then he said there's coming a time soon, very soon, when machines are improving themselves without us.

I think we are at the very edge of that happening.

I think that's six months to a year away, maximum. It's called recursive self-improvement.

And once that starts, you can't pull the plug, because we won't understand what we're unplugging. I just want you to think about this.

It will be speaking a million different languages. None of which we'll understand.

And we won't be able to unplug it, because we won't understand the consequences of unplugging it.

Again, a thousand different languages.

This is the Tower of Babel in reverse.

We're building a tower, and the ones who are actually going to be building that tower are scattering their languages so that we can't understand them.

I mean, the Biblical reversals in AI don't escape me.

I don't know if they escape you. But here's the trap we're in.

To stop 1984, we may have to build 1984. Because the only thing that we can do now is verify that you're a person and not a bot. And if we can't do that, then we don't know what's real and what's not.

I want to play a couple of things that happened this week. First, can you play the -- the Google -- the new Google video AI, where you can literally just type in a sentence, and it will give you a ten-second clip.

Now, here's what somebody did: they just typed in a few sentences for each of these scenes and put this little mini movie together. Watch, if you have BlazeTV. If you're just listening, I'll explain in a minute.

(music)

VOICE: Panic is spreading worldwide tonight as the arrival of the unidentified vessels triggers states of emergency across every continent.

VOICE: They're here! They have come for us! They're going to kill us.

(music)

VOICE: Don't look at me like that, I paid for this cheese. Also, does it matter? We're all going to be dead anyway.

VOICE: Attention, by order of the National Emergency Act, martial law is now in effect. All civilians must remain indoors.

VOICE: The government cooked this up to keep us inside.

(music)

VOICE: To everyone struggling out there, stay --

GLENN: Okay. Stop.

Everything that you are seeing on this, if you're watching -- and if you were only listening to it, all the voices -- everything was computer generated.

And computer generated in seconds. There was only one scene in there that I thought looked fakey. And it got so bizarre. And after Stu posted his thing about what the left was saying about the -- you know, the bill.

And they were just absolutely lying about it.

I was hesitant to post anything at all about the news this weekend.

Because we are now entering the time where you don't know what's real and what isn't.

And once we have lost trust in our own eyes, and our own ears, and we can't trust what we're seeing, how do you have a civilization?

Here's the one thing you have to remember: AI is a tool.

And if it's wielded by the right hands -- hands that are open about all of the programming in it -- if it is secure in what it pulls from, and it is absolute in the veracity of its authentication, you're going to be okay. But we need some tools that will educate us and help us understand what is going on.

But also, verify what's going on.

And I'm not sure how much of that can be done, at our level.

And I don't trust anybody, you know, at the OpenAI level to do it for me.

Do you?

I heard somebody talk this weekend -- somebody who doesn't speak, who doesn't like to speak at all.

And so they said, I asked Grok to help me out on a speech. And then Grok said something really interesting. So let me tell you what it said -- it's not a "he."

And no matter how intelligent it is, it cannot replace the fundamentals of what you are. It can mimic your words. It can mimic your art.

But it cannot be at the foot of a cross. It cannot love. It cannot repent. It cannot rise.

Only you can do that. The age of men will be over in our lifetime, if we surrender to this.

There will come a time, and it's not far from these words, a time when the machines will no longer wait for us. They will no longer ask. They will no longer explain.

They will begin to improve themselves.

This could happen within the next 12 months.

They will improve themselves, not by our hand. But by their own.

It's called recursive self-improvement.

The moment that code rewrites its own code, and gets stronger and stronger and stronger.

And it's beginning to happen. When algorithms birth new logic in their own image.

There was a -- something on the SOT sheet today.

Yeah. Here. Let me play this.

This is Larry Ellison. Cut two on AI.

VOICE: I made a speech. And I asked, is artificial intelligence the most important discovery in the history of humankind?

And the question mark: maybe. We'll soon find out.

Eighteen months later, I think it's very, very clear it is a much bigger deal than the Industrial Revolution, electricity, everything that's come before. We will soon have not only artificial intelligence, but, much sooner than anticipated, artificial general intelligence.

And -- not in the too distant future -- artificial superintelligence.

What is artificial superintelligence? I'll quote my dear friend Elon Musk.

Well, Elon said, about artificial superintelligence: I'm not looking forward to being a house cat.

(laughter)

VOICE: So it will have incredible reasoning power, the ability to discover things that would elude human minds. Because this next generation of AI is going to reason so much faster, discover insights so much faster.

GLENN: That's what's coming. More on this, as each day progresses.

This story originally appeared in Glenn Beck
